SHOW LOAD
Description
Displays the information of all load jobs or of specified load jobs in a database. This statement can only display load jobs that are created by using Broker Load, INSERT, or Spark Load. You can also view the information of load jobs via the curl command. From v3.1 onwards, we recommend that you use the SELECT statement to query the results of Broker Load or INSERT jobs from the loads table in the information_schema database. For more information, see Load data from HDFS, Load data from cloud storage, Load data using INSERT, and Bulk load using Apache Spark™.
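For reference, a minimal sketch of such a query against the information_schema.loads view follows. The column names used here (LABEL, STATE, CREATE_TIME) and the label value are assumptions chosen for illustration; verify the actual schema on your cluster before relying on them.

-- Hedged sketch: query load job results from the information_schema.loads view.
-- Column names and the label value are assumptions; verify them with DESC information_schema.loads.
SELECT * FROM information_schema.loads
WHERE LABEL = 'duplicate_table_with_null' AND STATE = 'FINISHED'
ORDER BY CREATE_TIME DESC
LIMIT 10;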
In addition to the preceding loading methods, StarRocks supports using Stream Load and Routine Load to load data. Stream Load is a synchronous operation that directly returns the information of the Stream Load job. Routine Load is an asynchronous operation, and you can use the SHOW ROUTINE LOAD statement to display the information of Routine Load jobs.
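As a quick illustration of the Routine Load case (example_db.example_job is a hypothetical job name used only for this sketch):

-- Display all Routine Load jobs in the current database.
SHOW ROUTINE LOAD;
-- Display a specific Routine Load job; example_db.example_job is a hypothetical name.
SHOW ROUTINE LOAD FOR example_db.example_job;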
Syntax
SHOW LOAD [ FROM db_name ]
[
WHERE [ LABEL { = "label_name" | LIKE "label_matcher" } ]
[ [AND] STATE = { "PENDING" | "ETL" | "LOADING" | "FINISHED" | "CANCELLED" } ]
]
[ ORDER BY field_name [ ASC | DESC ] ]
[ LIMIT { [offset, ] limit | limit OFFSET offset } ]
Note
You can add the \G option to the statement (such as SHOW LOAD WHERE LABEL = "label1"\G;) to display the output vertically rather than in the usual horizontal table format. For more information, see Example 1.
Parameters
Parameter | Required | Description |
---|---|---|
db_name | No | The database name. If this parameter is not specified, your current database is used by default. |
LABEL = "label_name" | No | The label of the load job. If this parameter is specified, the information of the load job with the specified label is returned. |
LABEL LIKE "label_matcher" | No | If this parameter is specified, the information of load jobs whose labels contain label_matcher is returned. |
AND | No | The keyword that connects the LABEL filter condition and the STATE filter condition when both of them are specified. |
STATE | No | The state of the load job. The valid states vary based on the loading method. If the STATE parameter is not specified, the information of load jobs in all states is returned by default. If the STATE parameter is specified, only the information of load jobs in the given state is returned. For example, STATE = "PENDING" returns the information of load jobs in the PENDING state. |
ORDER BY field_name [ASC \| DESC] | No | If this parameter is specified, the output is sorted in ascending or descending order based on a field. The following fields are supported: JobId, Label, State, Progress, Type, EtlInfo, TaskInfo, ErrorMsg, CreateTime, EtlStartTime, EtlFinishTime, LoadStartTime, LoadFinishTime, URL, and JobDetails. The output is sorted in ascending order based on JobId by default. |
LIMIT limit | No | The maximum number of load jobs to display. If this parameter is not specified, the information of all load jobs that match the filter conditions is displayed. If this parameter is specified, for example, LIMIT 10, only the information of 10 load jobs that match the filter conditions is returned. |
OFFSET offset | No | The number of load jobs to skip before returning the rest. For example, OFFSET 5 skips the first five load jobs and returns the rest. The value of the offset parameter defaults to 0. |
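The following hedged sketch shows how these clauses combine; the database name, label pattern, state, and paging values are placeholders chosen only for illustration:

-- Hypothetical example: filter by label and state, sort the output, and page through it.
SHOW LOAD FROM example_db
WHERE LABEL LIKE "orders" AND STATE = "FINISHED"
ORDER BY CreateTime DESC
LIMIT 5 OFFSET 0;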
Output
+-------+-------+-------+----------+------+----------+---------+----------+----------+------------+--------------+---------------+---------------+----------------+-----+------------+
| JobId | Label | State | Progress | Type | Priority | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+-------+-------+----------+------+----------+---------+----------+----------+------------+--------------+---------------+---------------+----------------+-----+------------+
The fields in the output vary based on the loading method, as described in the following table.

Field | Broker Load | Spark Load | INSERT |
---|---|---|---|
JobId | The unique ID assigned by StarRocks to identify the load job in your StarRocks cluster. | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
Label | The label of the load job. The label of a load job is unique within a database but can be duplicated across different databases. | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
State | The state of the load job. For the valid states, see the description of the STATE parameter above. | The state of the load job. For the valid states, see the description of the STATE parameter above. | The state of the load job. For the valid states, see the description of the STATE parameter above. |
Progress | The stage of the load job. A Broker Load job only has the LOAD stage, whose progress ranges from 0% to 100%. When the load job enters the LOAD stage, LOADING is returned for the State field. A Broker Load job does not have the ETL stage; the ETL progress is valid only for a Spark Load job. | The stage of the load job. A Spark Load job has two stages, ETL and LOAD, each of which ranges from 0% to 100%. When the load job is in the ETL stage, ETL is returned for the State field. When the load job moves to the LOAD stage, LOADING is returned for the State field. | The stage of the load job. An INSERT job only has the LOAD stage, whose progress ranges from 0% to 100%. When the load job enters the LOAD stage, LOADING is returned for the State field. An INSERT job does not have the ETL stage; the ETL progress is valid only for a Spark Load job. |
Type | The method of the load job. The value of this field is BROKER. | The method of the load job. The value of this field is SPARK. | The method of the load job. The value of this field is INSERT. |
Priority | The priority of the load job. Valid values: LOWEST, LOW, NORMAL, HIGH, and HIGHEST. | - | - |
EtlInfo | The metrics related to ETL, such as unselected.rows, dpp.abnorm.ALL, and dpp.norm.ALL. The ratio that is checked against the max-filter-ratio parameter is computed as dpp.abnorm.ALL/(unselected.rows + dpp.abnorm.ALL + dpp.norm.ALL). | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The metrics related to ETL. An INSERT job does not have the ETL stage. Therefore, NULL is returned. |
TaskInfo | The parameters that are specified when you create the load job, such as resource, timeout(s), and max_filter_ratio. | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
ErrorMsg | The error message returned when the load job fails. When the state of the load job is PENDING, LOADING, or FINISHED, NULL is returned for the ErrorMsg field. When the state of the load job is CANCELLED, the value returned for the ErrorMsg field consists of two parts: type and msg. | The error message returned when the load job fails. When the state of the load job is PENDING, LOADING, or FINISHED, NULL is returned for the ErrorMsg field. When the state of the load job is CANCELLED, the value returned for the ErrorMsg field consists of two parts: type and msg. | The error message returned when the load job fails. When the state of the load job is FINISHED, NULL is returned for the ErrorMsg field. When the state of the load job is CANCELLED, the value returned for the ErrorMsg field consists of two parts: type and msg. |
CreateTime | The time at which the load job was created. | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
EtlStartTime | A Broker Load job does not have the ETL stage. Therefore, the value of this field is the same as the value of the LoadStartTime field. | The time at which the ETL stage starts. | An INSERT job does not have the ETL stage. Therefore, the value of this field is the same as the value of the LoadStartTime field. |
EtlFinishTime | A Broker Load job does not have the ETL stage. Therefore, the value of this field is the same as the value of the LoadStartTime field. | The time at which the ETL stage finishes. | An INSERT job does not have the ETL stage. Therefore, the value of this field is the same as the value of the LoadStartTime field. |
LoadStartTime | The time at which the LOAD stage starts. | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
LoadFinishTime | The time at which the load job finishes. | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
URL | The URL that is used to access the unqualified data detected in the load job. You can use the curl or wget command to access the URL and obtain the unqualified data. If no unqualified data is detected, NULL is returned. | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
JobDetails | Other information related to the load job, such as the number and size of the files to load, the number of tasks, the number of scanned rows, and the backends involved. | The field has the same meaning in a Spark Load job as it does in a Broker Load job. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
Usage notes
- The information returned by the SHOW LOAD statement is valid for 3 days from the LoadFinishTime of a load job. After 3 days, the information cannot be displayed. You can use the label_keep_max_second parameter to modify the default validity period, as shown in the example after these notes:
  ADMIN SET FRONTEND CONFIG ("label_keep_max_second" = "value");
- If the value of the LoadStartTime field is N/A for a long time, load jobs are piling up heavily. We recommend that you reduce the frequency at which load jobs are created.
- Total time period consumed by a load job = LoadFinishTime - CreateTime.
- Total time period consumed by a load job in the LOAD stage = LoadFinishTime - LoadStartTime.
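For example, you might check the current retention period before changing it. This is a sketch; it assumes the FE configuration item keeps its default name label_keep_max_second and uses 604800 seconds (7 days) purely as an illustrative value.

-- Check the current value of the FE configuration item.
ADMIN SHOW FRONTEND CONFIG LIKE "label_keep_max_second";
-- Extend the retention period of load job information to 7 days (604800 seconds).
ADMIN SET FRONTEND CONFIG ("label_keep_max_second" = "604800");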
Examples
Example 1: Vertically display all load jobs in your current database.
SHOW LOAD\G;
*************************** 1. row ***************************
JobId: 976331
Label: duplicate_table_with_null
State: FINISHED
Progress: ETL:100%; LOAD:100%
Type: BROKER
Priority: NORMAL
EtlInfo: unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546
TaskInfo: resource:N/A; timeout(s):300; max_filter_ratio:0.0
ErrorMsg: NULL
CreateTime: 2022-10-17 19:35:00
EtlStartTime: 2022-10-17 19:35:04
EtlFinishTime: 2022-10-17 19:35:04
LoadStartTime: 2022-10-17 19:35:04
LoadFinishTime: 2022-10-17 19:35:06
URL: NULL
JobDetails: {"Unfinished backends":{"b90a703c-6e5a-4fcb-a8e1-94eca5be0b8f":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"b90a703c-6e5a-4fcb-a8e1-94eca5be0b8f":[10004]},"FileNumber":1,"FileSize":548622}
Example 2: Display two load jobs whose labels contain the string null in your current database.
SHOW LOAD
WHERE LABEL LIKE "null"
LIMIT 2;
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| JobId | Label | State | Progress | Type | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10082 | duplicate_table_with_null | FINISHED | ETL:100%; LOAD:100% | BROKER | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:N/A; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:53:27 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:31 | NULL | {"Unfinished backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[10002]},"FileNumber":1,"FileSize":548622} |
| 10103 | unique_table_with_null | FINISHED | ETL:100%; LOAD:100% | SPARK | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:test_spark_resource_07af473a_1230_11ed_b483_00163e0e550b; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:56:06 | 2022-08-02 14:56:19 | 2022-08-02 14:56:41 | 2022-08-02 14:56:41 | 2022-08-02 14:56:44 | http://emr-header-1.cluster-49091:20888/proxy/application_1655710334658_26391/ | {"Unfinished backends":{"00000000-0000-0000-0000-000000000000":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"00000000-0000-0000-0000-000000000000":[-1]},"FileNumber":1,"FileSize":8790855} |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Example 3: Display the load jobs whose labels contain the string table in example_db. In addition, the load jobs returned are displayed in descending order of the LoadStartTime field.
SHOW LOAD FROM example_db
WHERE LABEL LIKE "table"
ORDER BY LoadStartTime DESC;
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| JobId | Label | State | Progress | Type | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10103 | unique_table_with_null | FINISHED | ETL:100%; LOAD:100% | SPARK | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:test_spark_resource_07af473a_1230_11ed_b483_00163e0e550b; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:56:06 | 2022-08-02 14:56:19 | 2022-08-02 14:56:41 | 2022-08-02 14:56:41 | 2022-08-02 14:56:44 | http://emr-header-1.cluster-49091:20888/proxy/application_1655710334658_26391/ | {"Unfinished backends":{"00000000-0000-0000-0000-000000000000":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"00000000-0000-0000-0000-000000000000":[-1]},"FileNumber":1,"FileSize":8790855} |
| 10082 | duplicate_table_with_null | FINISHED | ETL:100%; LOAD:100% | BROKER | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:N/A; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:53:27 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:31 | NULL | {"Unfinished backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[10002]},"FileNumber":1,"FileSize":548622} |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Example 4: Display the load job whose label is duplicate_table_with_null and state is FINISHED in example_db.
SHOW LOAD FROM example_db
WHERE LABEL = "duplicate_table_with_null" AND STATE = "FINISHED";
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+----------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| JobId | Label | State | Progress | Type | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+----------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10082 | duplicate_table_with_null | FINISHED | ETL:100%; LOAD:100% | BROKER | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:N/A; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:53:27 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:31 | NULL | {"Unfinished backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[10002]},"FileNumber":1,"FileSize":548622} |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+----------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Example 5: Skip the first load job and display the next two load jobs. In addition, these two load jobs are sorted in ascending order of the CreateTime field.
SHOW LOAD FROM example_db
ORDER BY CreateTime ASC
LIMIT 2 OFFSET 1;
Or
SHOW LOAD FROM example_db
ORDER BY CreateTime ASC
LIMIT 1,2;
The output of the preceding statements is as follows.
+-------+---------------------------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| JobId | Label | State | Progress | Type | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+---------------------------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10103 | unique_table_with_null | FINISHED | ETL:100%; LOAD:100% | SPARK | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:test_spark_resource_07af473a_1230_11ed_b483_00163e0e550b; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:56:06 | 2022-08-02 14:56:19 | 2022-08-02 14:56:41 | 2022-08-02 14:56:41 | 2022-08-02 14:56:44 | http://emr-header-1.cluster-49091:20888/proxy/application_1655710334658_26391/ | {"Unfinished backends":{"00000000-0000-0000-0000-000000000000":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"00000000-0000-0000-0000-000000000000":[-1]},"FileNumber":1,"FileSize":8790855} |
| 10120 | insert_3a57b595-1230-11ed-b075-00163e14c85e | FINISHED | ETL:100%; LOAD:100% | INSERT | NULL | resource:N/A; timeout(s):3600; max_filter_ratio:0.0 | NULL | 2022-08-02 14:56:26 | 2022-08-02 14:56:26 | 2022-08-02 14:56:26 | 2022-08-02 14:56:26 | 2022-08-02 14:56:26 | | {"Unfinished backends":{},"ScannedRows":0,"TaskNumber":0,"All backends":{},"FileNumber":0,"FileSize":0} |
+-------+---------------------------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+