Paimon catalog
StarRocks supports Paimon catalogs from v3.1 onwards.
A Paimon catalog is a kind of external catalog that enables you to query data from Apache Paimon without ingestion.
Also, you can directly transform and load data from Paimon by using INSERT INTO based on Paimon catalogs.
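For a quick preview, the following hedged sketch assumes a Paimon catalog named paimon_catalog_fs (created later in this topic) that contains a hypothetical database paimon_db and table paimon_tbl, plus a StarRocks native table default_catalog.olap_db.olap_tbl with a matching schema:
-- Query Paimon data in place through the external catalog.
SELECT * FROM paimon_catalog_fs.paimon_db.paimon_tbl LIMIT 10;
-- Transform and load Paimon data into a StarRocks native table.
INSERT INTO default_catalog.olap_db.olap_tbl
SELECT * FROM paimon_catalog_fs.paimon_db.paimon_tbl;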
To ensure successful SQL workloads on your Paimon cluster, your StarRocks cluster needs to integrate with two important components:
- A distributed file system (HDFS) or object storage, such as AWS S3, Microsoft Azure Storage, Google GCS, or another S3-compatible storage system (for example, MinIO)
- A metastore, such as your file system or a Hive metastore
Usage notes
You can only use Paimon catalogs to query data. You cannot use Paimon catalogs to drop, delete, or insert data into your Paimon cluster.
Integration preparations
Before you create a Paimon catalog, make sure your StarRocks cluster can integrate with the storage system and metastore of your Paimon cluster.
AWS IAM
If your Paimon cluster uses AWS S3 as storage, choose your suitable authentication method and make the required preparations to ensure that your StarRocks cluster can access the related AWS cloud resources.
The following authentication methods are recommended:
- Instance profile (recommended)
- Assumed role
- IAM user
Of the above-mentioned three authentication methods, instance profile is the most widely used.
For more information, see Preparation for authentication in AWS IAM.
HDFS
If you choose HDFS as storage, configure your StarRocks cluster as follows:
- (Optional) Set the username that is used to access your HDFS cluster and Hive metastore. By default, StarRocks uses the username of the FE and BE processes to access your HDFS cluster and Hive metastore. You can also set the username by adding
  export HADOOP_USER_NAME="<user_name>"
  at the beginning of the fe/conf/hadoop_env.sh file of each FE and at the beginning of the be/conf/hadoop_env.sh file of each BE. After you set the username in these files, restart each FE and each BE to make the parameter settings take effect. You can set only one username for each StarRocks cluster.
- When you query Paimon data, the FEs and BEs of your StarRocks cluster use the HDFS client to access your HDFS cluster. In most cases, you do not need to configure your StarRocks cluster for this purpose, and StarRocks starts the HDFS client using the default configurations. You need to configure your StarRocks cluster only in the following situations:
  - High availability (HA) is enabled for your HDFS cluster: Add the hdfs-site.xml file of your HDFS cluster to the $FE_HOME/conf path of each FE and to the $BE_HOME/conf path of each BE.
  - View File System (ViewFs) is enabled for your HDFS cluster: Add the core-site.xml file of your HDFS cluster to the $FE_HOME/conf path of each FE and to the $BE_HOME/conf path of each BE.
NOTE
If an error indicating an unknown host is returned when you send a query, you must add the mapping between the host names and IP addresses of your HDFS cluster nodes to the /etc/hosts path.
Kerberos authentication
If Kerberos authentication is enabled for your HDFS cluster or Hive metastore, configure your StarRocks cluster as follows:
- Run the kinit -kt keytab_path principal command on each FE and each BE to obtain a Ticket Granting Ticket (TGT) from the Key Distribution Center (KDC). To run this command, you must have the permissions to access your HDFS cluster and Hive metastore. Note that the ticket obtained with this command is time-sensitive, so you need to use cron to run this command periodically.
- Add JAVA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf" to the $FE_HOME/conf/fe.conf file of each FE and to the $BE_HOME/conf/be.conf file of each BE. In this example, /etc/krb5.conf is the path where the krb5.conf file is saved. You can modify the path based on your needs.
Create a Paimon catalog
Syntax
CREATE EXTERNAL CATALOG <catalog_name>
[COMMENT <comment>]
PROPERTIES
(
"type" = "paimon",
CatalogParams,
StorageCredentialParams
)
Parameters
catalog_name
The name of the Paimon catalog. The naming conventions are as follows:
- The name can contain letters, digits (0-9), and underscores (_). It must start with a letter.
- The name is case-sensitive and cannot exceed 1023 characters in length.
comment
The description of the Paimon catalog. This parameter is optional.
type
The type of your data source. Set the value to paimon.
CatalogParams
A set of parameters about how StarRocks accesses the metadata of your Paimon cluster.
The following table describes the parameters you need to configure in CatalogParams.
Parameter | Required | Description |
---|---|---|
paimon.catalog.type | Yes | The type of metastore that you use for your Paimon cluster. Set this parameter to filesystem or hive . |
paimon.catalog.warehouse | Yes | The warehouse storage path of your Paimon data. |
hive.metastore.uris | No | The URI of your Hive metastore. Format: thrift://<metastore_IP_address>:<metastore_port> . If high availability (HA) is enabled for your Hive metastore, you can specify multiple metastore URIs and separate them with commas (, ), for example, "thrift://<metastore_IP_address_1>:<metastore_port_1>,thrift://<metastore_IP_address_2>:<metastore_port_2>,thrift://<metastore_IP_address_3>:<metastore_port_3>" . |
NOTE
If you use Hive metastore, you must add the mapping between the host names and IP addresses of your Hive metastore nodes to the /etc/hosts path before you query Paimon data. Otherwise, StarRocks may fail to access your Hive metastore when you start a query.
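In contrast to the filesystem metastore examples later in this topic, the following is a minimal sketch of a Paimon catalog that uses a Hive metastore. The catalog name, warehouse path, and metastore address are hypothetical placeholders; because the warehouse is on HDFS here, no StorageCredentialParams are needed:
CREATE EXTERNAL CATALOG paimon_catalog_hms
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "hive",
"paimon.catalog.warehouse" = "hdfs://<name_node_host>:<name_node_port>/path/to/paimon/warehouse",
"hive.metastore.uris" = "thrift://<metastore_IP_address>:<metastore_port>"
);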
StorageCredentialParams
A set of parameters about how StarRocks integrates with your storage system. This parameter set is optional.
If you use HDFS as storage, you do not need to configure StorageCredentialParams.
If you use AWS S3, another S3-compatible storage system, Microsoft Azure Storage, or Google GCS as storage, you must configure StorageCredentialParams.
AWS S3
If you choose AWS S3 as storage for your Paimon cluster, take one of the following actions:
- To choose the instance profile-based authentication method, configure StorageCredentialParams as follows:

  "aws.s3.use_instance_profile" = "true",
  "aws.s3.region" = "<aws_s3_region>"

- To choose the assumed role-based authentication method, configure StorageCredentialParams as follows:

  "aws.s3.use_instance_profile" = "true",
  "aws.s3.iam_role_arn" = "<iam_role_arn>",
  "aws.s3.region" = "<aws_s3_region>"

- To choose the IAM user-based authentication method, configure StorageCredentialParams as follows:

  "aws.s3.use_instance_profile" = "false",
  "aws.s3.access_key" = "<iam_user_access_key>",
  "aws.s3.secret_key" = "<iam_user_secret_key>",
  "aws.s3.region" = "<aws_s3_region>"
The following table describes the parameters you need to configure in StorageCredentialParams.
Parameter | Required | Description |
---|---|---|
aws.s3.use_instance_profile | Yes | Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: true and false . Default value: false . |
aws.s3.iam_role_arn | No | The ARN of the IAM role that has privileges on your AWS S3 bucket. If you use the assumed role-based authentication method to access AWS S3, you must specify this parameter. |
aws.s3.region | Yes | The region in which your AWS S3 bucket resides. Example: us-west-1 . |
aws.s3.access_key | No | The access key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter. |
aws.s3.secret_key | No | The secret key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter. |
For information about how to choose an authentication method for accessing AWS S3 and how to configure an access control policy in AWS IAM Console, see Authentication parameters for accessing AWS S3.
S3-compatible storage system
If you choose an S3-compatible storage system, such as MinIO, as storage for your Paimon cluster, configure StorageCredentialParams as follows to ensure a successful integration:
"aws.s3.enable_ssl" = "false",
"aws.s3.enable_path_style_access" = "true",
"aws.s3.endpoint" = "<s3_endpoint>",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
Parameter | Required | Description |
---|---|---|
aws.s3.enable_ssl | Yes | Specifies whether to enable SSL connection. Valid values: true and false . Default value: true . |
aws.s3.enable_path_style_access | Yes | Specifies whether to enable path-style access. Valid values: true and false . Default value: false . For MinIO, you must set the value to true .Path-style URLs use the following format: https://s3.<region_code>.amazonaws.com/<bucket_name>/<key_name> . For example, if you create a bucket named DOC-EXAMPLE-BUCKET1 in the US West (Oregon) Region, and you want to access the alice.jpg object in that bucket, you can use the following path-style URL: https://s3.us-west-2.amazonaws.com/DOC-EXAMPLE-BUCKET1/alice.jpg . |
aws.s3.endpoint | Yes | The endpoint that is used to connect to your S3-compatible storage system instead of AWS S3. |
aws.s3.access_key | Yes | The access key of your IAM user. |
aws.s3.secret_key | Yes | The secret key of your IAM user. |
Microsoft Azure Storage
Azure Blob Storage
If you choose Blob Storage as storage for your Paimon cluster, take one of the following actions:
- To choose the Shared Key authentication method, configure StorageCredentialParams as follows:

  "azure.blob.storage_account" = "<blob_storage_account_name>",
  "azure.blob.shared_key" = "<blob_storage_account_shared_key>"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Required | Description |
  ---|---|---|
  azure.blob.storage_account | Yes | The username of your Blob Storage account. |
  azure.blob.shared_key | Yes | The shared key of your Blob Storage account. |

- To choose the SAS Token authentication method, configure StorageCredentialParams as follows:

  "azure.blob.account_name" = "<blob_storage_account_name>",
  "azure.blob.container_name" = "<blob_container_name>",
  "azure.blob.sas_token" = "<blob_storage_account_SAS_token>"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Required | Description |
  ---|---|---|
  azure.blob.account_name | Yes | The username of your Blob Storage account. |
  azure.blob.container_name | Yes | The name of the blob container that stores your data. |
  azure.blob.sas_token | Yes | The SAS token that is used to access your Blob Storage account. |
Azure Data Lake Storage Gen1
If you choose Data Lake Storage Gen1 as storage for your Paimon cluster, take one of the following actions:
- To choose the Managed Service Identity authentication method, configure StorageCredentialParams as follows:

  "azure.adls1.use_managed_service_identity" = "true"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Required | Description |
  ---|---|---|
  azure.adls1.use_managed_service_identity | Yes | Specifies whether to enable the Managed Service Identity authentication method. Set the value to true. |

- To choose the Service Principal authentication method, configure StorageCredentialParams as follows:

  "azure.adls1.oauth2_client_id" = "<application_client_id>",
  "azure.adls1.oauth2_credential" = "<application_client_credential>",
  "azure.adls1.oauth2_endpoint" = "<OAuth_2.0_authorization_endpoint_v2>"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Required | Description |
  ---|---|---|
  azure.adls1.oauth2_client_id | Yes | The client (application) ID of the service principal. |
  azure.adls1.oauth2_credential | Yes | The value of the new client (application) secret created. |
  azure.adls1.oauth2_endpoint | Yes | The OAuth 2.0 token endpoint (v1) of the service principal or application. |
Azure Data Lake Storage Gen2
If you choose Data Lake Storage Gen2 as storage for your Paimon cluster, take one of the following actions:
- To choose the Managed Identity authentication method, configure StorageCredentialParams as follows:

  "azure.adls2.oauth2_use_managed_identity" = "true",
  "azure.adls2.oauth2_tenant_id" = "<service_principal_tenant_id>",
  "azure.adls2.oauth2_client_id" = "<service_client_id>"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Required | Description |
  ---|---|---|
  azure.adls2.oauth2_use_managed_identity | Yes | Specifies whether to enable the Managed Identity authentication method. Set the value to true. |
  azure.adls2.oauth2_tenant_id | Yes | The ID of the tenant whose data you want to access. |
  azure.adls2.oauth2_client_id | Yes | The client (application) ID of the managed identity. |

- To choose the Shared Key authentication method, configure StorageCredentialParams as follows:

  "azure.adls2.storage_account" = "<storage_account_name>",
  "azure.adls2.shared_key" = "<shared_key>"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Required | Description |
  ---|---|---|
  azure.adls2.storage_account | Yes | The username of your Data Lake Storage Gen2 storage account. |
  azure.adls2.shared_key | Yes | The shared key of your Data Lake Storage Gen2 storage account. |

- To choose the Service Principal authentication method, configure StorageCredentialParams as follows:

  "azure.adls2.oauth2_client_id" = "<service_client_id>",
  "azure.adls2.oauth2_client_secret" = "<service_principal_client_secret>",
  "azure.adls2.oauth2_client_endpoint" = "<service_principal_client_endpoint>"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Required | Description |
  ---|---|---|
  azure.adls2.oauth2_client_id | Yes | The client (application) ID of the service principal. |
  azure.adls2.oauth2_client_secret | Yes | The value of the new client (application) secret created. |
  azure.adls2.oauth2_client_endpoint | Yes | The OAuth 2.0 token endpoint (v1) of the service principal or application. |
Google GCS
If you choose Google GCS as storage for your Paimon cluster, take one of the following actions:
- To choose the VM-based authentication method, configure StorageCredentialParams as follows:

  "gcp.gcs.use_compute_engine_service_account" = "true"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Default value | Value example | Description |
  ---|---|---|---|
  gcp.gcs.use_compute_engine_service_account | FALSE | TRUE | Specifies whether to directly use the service account that is bound to your Compute Engine. |

- To choose the service account-based authentication method, configure StorageCredentialParams as follows:

  "gcp.gcs.service_account_email" = "<google_service_account_email>",
  "gcp.gcs.service_account_private_key_id" = "<google_service_private_key_id>",
  "gcp.gcs.service_account_private_key" = "<google_service_private_key>"

  The following table describes the parameters you need to configure in StorageCredentialParams.

  Parameter | Default value | Value example | Description |
  ---|---|---|---|
  gcp.gcs.service_account_email | "" | "user@hello.iam.gserviceaccount.com" | The email address in the JSON file generated at the creation of the service account. |
  gcp.gcs.service_account_private_key_id | "" | "61d257bd8479547cb3e04f0b9b6b9ca07af3b7ea" | The private key ID in the JSON file generated at the creation of the service account. |
  gcp.gcs.service_account_private_key | "" | "-----BEGIN PRIVATE KEY----xxxx-----END PRIVATE KEY-----\n" | The private key in the JSON file generated at the creation of the service account. |

- To choose the impersonation-based authentication method, configure StorageCredentialParams as follows:

  - Make a VM instance impersonate a service account:

    "gcp.gcs.use_compute_engine_service_account" = "true",
    "gcp.gcs.impersonation_service_account" = "<assumed_google_service_account_email>"

    The following table describes the parameters you need to configure in StorageCredentialParams.

    Parameter | Default value | Value example | Description |
    ---|---|---|---|
    gcp.gcs.use_compute_engine_service_account | FALSE | TRUE | Specifies whether to directly use the service account that is bound to your Compute Engine. |
    gcp.gcs.impersonation_service_account | "" | "hello" | The service account that you want to impersonate. |

  - Make a service account (temporarily named as the meta service account) impersonate another service account (temporarily named as the data service account):

    "gcp.gcs.service_account_email" = "<meta_google_service_account_email>",
    "gcp.gcs.service_account_private_key_id" = "<meta_google_service_account_private_key_id>",
    "gcp.gcs.service_account_private_key" = "<meta_google_service_account_private_key>",
    "gcp.gcs.impersonation_service_account" = "<data_google_service_account_email>"

    The following table describes the parameters you need to configure in StorageCredentialParams.

    Parameter | Default value | Value example | Description |
    ---|---|---|---|
    gcp.gcs.service_account_email | "" | "user@hello.iam.gserviceaccount.com" | The email address in the JSON file generated at the creation of the meta service account. |
    gcp.gcs.service_account_private_key_id | "" | "61d257bd8479547cb3e04f0b9b6b9ca07af3b7ea" | The private key ID in the JSON file generated at the creation of the meta service account. |
    gcp.gcs.service_account_private_key | "" | "-----BEGIN PRIVATE KEY----xxxx-----END PRIVATE KEY-----\n" | The private key in the JSON file generated at the creation of the meta service account. |
    gcp.gcs.impersonation_service_account | "" | "hello" | The data service account that you want to impersonate. |
Examples
The following examples create a Paimon catalog named paimon_catalog_fs whose metastore type paimon.catalog.type is set to filesystem to query data from your Paimon cluster.
AWS S3
If you choose the instance profile-based authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<s3_paimon_warehouse_path>",
"aws.s3.use_instance_profile" = "true",
"aws.s3.region" = "us-west-2"
);
If you choose the assumed role-based authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<s3_paimon_warehouse_path>",
"aws.s3.use_instance_profile" = "true",
"aws.s3.iam_role_arn" = "arn:aws:iam::081976408565:role/test_s3_role",
"aws.s3.region" = "us-west-2"
);
If you choose the IAM user-based authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<s3_paimon_warehouse_path>",
"aws.s3.use_instance_profile" = "false",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>",
"aws.s3.region" = "us-west-2"
);
S3-compatible storage system
Use MinIO as an example. Run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<paimon_warehouse_path>",
"aws.s3.enable_ssl" = "true",
"aws.s3.enable_path_style_access" = "true",
"aws.s3.endpoint" = "<s3_endpoint>",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>"
);
Microsoft Azure Storage
Azure Blob Storage
If you choose the Shared Key authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<blob_paimon_warehouse_path>",
"azure.blob.storage_account" = "<blob_storage_account_name>",
"azure.blob.shared_key" = "<blob_storage_account_shared_key>"
);
If you choose the SAS Token authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<blob_paimon_warehouse_path>",
"azure.blob.account_name" = "<blob_storage_account_name>",
"azure.blob.container_name" = "<blob_container_name>",
"azure.blob.sas_token" = "<blob_storage_account_SAS_token>"
);
Azure Data Lake Storage Gen1
If you choose the Managed Service Identity authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<adls1_paimon_warehouse_path>",
"azure.adls1.use_managed_service_identity" = "true"
);
If you choose the Service Principal authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<adls1_paimon_warehouse_path>",
"azure.adls1.oauth2_client_id" = "<application_client_id>",
"azure.adls1.oauth2_credential" = "<application_client_credential>",
"azure.adls1.oauth2_endpoint" = "<OAuth_2.0_authorization_endpoint_v2>"
);
Azure Data Lake Storage Gen2
If you choose the Managed Identity authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<adls2_paimon_warehouse_path>",
"azure.adls2.oauth2_use_managed_identity" = "true",
"azure.adls2.oauth2_tenant_id" = "<service_principal_tenant_id>",
"azure.adls2.oauth2_client_id" = "<service_client_id>"
);
If you choose the Shared Key authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<adls2_paimon_warehouse_path>",
"azure.adls2.storage_account" = "<storage_account_name>",
"azure.adls2.shared_key" = "<shared_key>"
);
If you choose the Service Principal authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<adls2_paimon_warehouse_path>",
"azure.adls2.oauth2_client_id" = "<service_client_id>",
"azure.adls2.oauth2_client_secret" = "<service_principal_client_secret>",
"azure.adls2.oauth2_client_endpoint" = "<service_principal_client_endpoint>"
);
Google GCS
If you choose the VM-based authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<gcs_paimon_warehouse_path>",
"gcp.gcs.use_compute_engine_service_account" = "true"
);
If you choose the service account-based authentication method, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<gcs_paimon_warehouse_path>",
"gcp.gcs.service_account_email" = "<google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<google_service_private_key_id>",
"gcp.gcs.service_account_private_key" = "<google_service_private_key>"
);
If you choose the impersonation-based authentication method:
If you make a VM instance impersonate a service account, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<gcs_paimon_warehouse_path>",
"gcp.gcs.use_compute_engine_service_account" = "true",
"gcp.gcs.impersonation_service_account" = "<assumed_google_service_account_email>"
);
If you make a service account impersonate another service account, run a command like below:
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
"type" = "paimon",
"paimon.catalog.type" = "filesystem",
"paimon.catalog.warehouse" = "<gcs_paimon_warehouse_path>",
"gcp.gcs.service_account_email" = "<meta_google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<meta_google_service_account_private_key_id>",
"gcp.gcs.service_account_private_key" = "<meta_google_service_account_private_key>",
"gcp.gcs.impersonation_service_account" = "<data_google_service_account_email>"
);
View Paimon catalogs
You can use SHOW CATALOGS to query all catalogs in the current StarRocks cluster:
SHOW CATALOGS;
You can also use SHOW CREATE CATALOG to query the creation statement of an external catalog. The following example queries the creation statement of a Paimon catalog named paimon_catalog_fs:
SHOW CREATE CATALOG paimon_catalog_fs;
Drop a Paimon catalog
You can use DROP CATALOG to drop an external catalog.
The following example drops a Paimon catalog named paimon_catalog_fs:
DROP CATALOG paimon_catalog_fs;
View the schema of a Paimon table
You can use one of the following syntaxes to view the schema of a Paimon table:
View schema
DESC[RIBE] <catalog_name>.<database_name>.<table_name>;
View schema and location from the CREATE statement
SHOW CREATE TABLE <catalog_name>.<database_name>.<table_name>;
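For example, with the hypothetical names paimon_catalog_fs, paimon_db, and paimon_tbl, the two statements look like this:
DESC paimon_catalog_fs.paimon_db.paimon_tbl;
SHOW CREATE TABLE paimon_catalog_fs.paimon_db.paimon_tbl;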
Query a Paimon table
Use SHOW DATABASES to view the databases in your Paimon cluster:
SHOW DATABASES FROM <catalog_name>;
Use SET CATALOG to switch to the destination catalog in the current session:
SET CATALOG <catalog_name>;
Then, use USE to specify the active database in the current session:
USE <db_name>;
Or, you can use USE to directly specify the active database in the destination catalog:
USE <catalog_name>.<db_name>;
Use SELECT to query the destination table in the specified database:
SELECT count(*) FROM <table_name> LIMIT 10;
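Putting these steps together, a minimal end-to-end sketch looks like the following; the catalog, database, and table names (paimon_catalog_fs, paimon_db, paimon_tbl) are hypothetical placeholders:
SHOW DATABASES FROM paimon_catalog_fs;
SET CATALOG paimon_catalog_fs;
USE paimon_db;
SELECT count(*) FROM paimon_tbl;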
Load data from Paimon
Suppose you have an OLAP table named olap_tbl. You can transform and load data from Paimon as follows:
INSERT INTO default_catalog.olap_db.olap_tbl SELECT * FROM paimon_table;
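To load only a transformed subset of the Paimon data, you can push filters and aggregations into the SELECT clause. The following sketch assumes hypothetical columns dt and sales in paimon_catalog_fs.paimon_db.paimon_tbl, and a StarRocks table olap_db.olap_tbl whose schema matches the two output columns:
INSERT INTO default_catalog.olap_db.olap_tbl
SELECT dt, sum(sales) AS total_sales
FROM paimon_catalog_fs.paimon_db.paimon_tbl
WHERE dt >= '2024-01-01'
GROUP BY dt;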