EXPORT
description
This statement is used to export the data in a specified table to a specified location.
This function is implemented by the broker process, so a broker matching the target storage system must be deployed. Deployed brokers can be viewed with SHOW BROKER.
This is an asynchronous operation: the statement returns as soon as the task is submitted successfully. After the task is submitted, you can use the SHOW EXPORT command to check its progress.
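For example, a check like the following lists the export jobs of a database and their states (the database name example_db is illustrative; see SHOW EXPORT for the full syntax, including WHERE and LIMIT filters):
SHOW EXPORT FROM example_db;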
Syntax:
EXPORT TABLE table_name
[PARTITION (p1[,p2])]
TO export_path
[opt_properties]
broker;
table_name
The name of the table to be exported. Currently, only tables whose engine is OLAP or mysql can be exported.
partition
Optionally, you can export only the specified partitions of the table.
export_path
The export path. Currently, you cannot export to a local path; the data must be exported to remote storage through a broker.
If you export to a directory, the path must end with a slash. Otherwise, the part after the last slash is used as the prefix of the exported file names. For example, "hdfs://host:port/dir/" exports files into dir, while "hdfs://host:port/dir/data_" produces files whose names start with data_.
opt_properties
Used to specify additional parameters for the export job.
Syntax:
[PROPERTIES ("key"="value", ...)]
The following parameters can be specified:
- column_separator: the column separator in the exported file. Defaults to \t.
- line_delimiter: the line separator in the exported file. Defaults to \n.
- exec_mem_limit: the upper limit of memory usage for the export job on a single BE node, in bytes. Defaults to 2 GB.
- timeout: the timeout of the export job, in seconds. Defaults to 1 day.
- include_query_id: whether the exported file names contain the query id. Defaults to true.
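As a sketch, a PROPERTIES clause that combines several of these parameters might look like the following (all values are illustrative):
PROPERTIES
(
    "column_separator" = ",",
    "line_delimiter" = "\n",
    "exec_mem_limit" = "2147483648",
    "timeout" = "3600",
    "include_query_id" = "false"
)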
broker
It is used to specify the broker to be used for the export.
Syntax:
WITH BROKER broker_name ("key"="value"[,...])
Here you need to specify the broker name and the properties required by that broker.
Brokers for different storage systems take different parameters. For the specific parameters, refer to the required broker properties in:
help broker load
example
Export all data from the testTbl table to HDFS
EXPORT TABLE testTbl TO "hdfs://hdfs_host:port/a/b/c/" WITH BROKER "broker_name" ("username"="xxx", "password"="yyy");
Export partitions p1 and p2 from the testTbl table to HDFS
EXPORT TABLE testTbl PARTITION (p1,p2) TO "hdfs://hdfs_host:port/a/b/c/" WITH BROKER "broker_name" ("username"="xxx", "password"="yyy");
Export all data in the testTbl table to HDFS, using "," as the column separator
EXPORT TABLE testTbl TO "hdfs://hdfs_host:port/a/b/c/" PROPERTIES ("column_separator"=",") WITH BROKER "broker_name" ("username"="xxx", "password"="yyy");
Export all data in the testTbl table to HDFS, using the Hive custom separator "\x01" as the column separator
EXPORT TABLE testTbl TO "hdfs://hdfs_host:port/a/b/c/" PROPERTIES ("column_separator"="\\x01") WITH BROKER "broker_name";
Export all data in the testTbl table to HDFS, specifying testTbl_ as the prefix of the exported file names
EXPORT TABLE testTbl TO "hdfs://hdfs_host:port/a/b/c/testTbl_" WITH BROKER "broker_name";
Export all data in the testTbl table to OSS
EXPORT TABLE testTbl TO "oss://oss-package/export/" WITH BROKER "broker_name" ( "fs.oss.accessKeyId" = "xxx", "fs.oss.accessKeySecret" = "yyy", "fs.oss.endpoint" = "oss-cn-zhangjiakou-internal.aliyuncs.com" );
Export all data in the testTbl table to COS
EXPORT TABLE testTbl TO "cosn://cos-package/export/" WITH BROKER "broker_name" ( "fs.cosn.userinfo.secretId" = "xxx", "fs.cosn.userinfo.secretKey" = "yyy", "fs.cosn.bucket.endpoint_suffix" = "cos.ap-beijing.myqcloud.com" );
Export all data in the testTbl table to S3
EXPORT TABLE testTbl TO "s3a://s3-package/export/" WITH BROKER "broker_name" ( "fs.s3a.access.key" = "xxx", "fs.s3a.secret.key" = "yyy", "fs.s3a.endpoint" = "s3-ap-northeast-1.amazonaws.com" );
keyword
EXPORT