Iceberg catalog
An Iceberg catalog is a kind of external catalog that enables you to query data from Apache Iceberg without ingestion.
You can also use INSERT INTO together with Iceberg catalogs to directly transform and load data from Iceberg. StarRocks supports Iceberg catalogs from v2.4 onwards.
To ensure successful SQL workloads on your Iceberg cluster, your StarRocks cluster needs to integrate with two important components:
- Object storage or a distributed file system, such as AWS S3, another S3-compatible storage system, Microsoft Azure Storage, Google GCS, or HDFS
- A metastore, such as Hive metastore or AWS Glue
NOTE
If you choose AWS S3 as storage, you can use Hive metastore (HMS) or AWS Glue as the metastore. If you choose any other storage system, you can only use HMS as the metastore.
Usage notes
The file formats of Iceberg that StarRocks supports are Parquet and ORC:
- Parquet files support the following compression formats: SNAPPY, LZ4, ZSTD, GZIP, and NO_COMPRESSION.
- ORC files support the following compression formats: ZLIB, SNAPPY, LZO, LZ4, ZSTD, and NO_COMPRESSION.
Iceberg catalogs do not support Iceberg v2 tables.
Integration preparations
Before you create an Iceberg catalog, make sure your StarRocks cluster can integrate with the storage system and metastore of your Iceberg cluster.
AWS IAM
If your Iceberg cluster uses AWS S3 as storage or AWS Glue as metastore, choose your suitable authentication method and make the required preparations to ensure that your StarRocks cluster can access the related AWS cloud resources.
The following authentication methods are recommended:
- Instance profile
- Assumed role
- IAM user
Of the above-mentioned three authentication methods, instance profile is the most widely used.
For more information, see Preparation for authentication in AWS IAM.
HDFS
If you choose HDFS as storage, configure your StarRocks cluster as follows:
(Optional) Set the username that is used to access your HDFS cluster and Hive metastore. By default, StarRocks uses the username of the FE and BE processes to access your HDFS cluster and Hive metastore. You can also set the username by using the HADOOP_USERNAME parameter in the fe/conf/hadoop_env.sh file of each FE and the be/conf/hadoop_env.sh file of each BE. After you set the username in these files, restart each FE and each BE to make the parameter settings take effect. You can set only one username for each StarRocks cluster.
When you query Iceberg data, the FEs and BEs of your StarRocks cluster use the HDFS client to access your HDFS cluster. In most cases, you do not need to configure your StarRocks cluster to achieve that purpose, and StarRocks starts the HDFS client using the default configurations. You need to configure your StarRocks cluster only in the following situations:
- High availability (HA) is enabled for your HDFS cluster: Add the hdfs-site.xml file of your HDFS cluster to the $FE_HOME/conf path of each FE and to the $BE_HOME/conf path of each BE.
- View File System (ViewFs) is enabled for your HDFS cluster: Add the core-site.xml file of your HDFS cluster to the $FE_HOME/conf path of each FE and to the $BE_HOME/conf path of each BE.
NOTE
If an error indicating an unknown host is returned when you send a query, you must add the mapping between the host names and IP addresses of your HDFS cluster nodes to the /etc/hosts file.
Kerberos authentication
If Kerberos authentication is enabled for your HDFS cluster or Hive metastore, configure your StarRocks cluster as follows:
- Run the kinit -kt keytab_path principal command on each FE and each BE to obtain a Ticket Granting Ticket (TGT) from the Key Distribution Center (KDC). To run this command, you must have the permissions to access your HDFS cluster and Hive metastore. Note that accessing the KDC with this command is time-sensitive, so you need to use cron to run the command periodically.
- Add JAVA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf" to the $FE_HOME/conf/fe.conf file of each FE and to the $BE_HOME/conf/be.conf file of each BE. In this example, /etc/krb5.conf is the save path of the krb5.conf file. You can modify the path based on your needs.
Create an Iceberg catalog
Syntax
CREATE EXTERNAL CATALOG <catalog_name>
[COMMENT <comment>]
PROPERTIES
(
"type" = "iceberg",
MetastoreParams,
StorageCredentialParams
)
For more information, see CREATE EXTERNAL CATALOG.
Parameters
catalog_name
The name of the Iceberg catalog. The naming conventions are as follows:
- The name can contain letters, digits 0 through 9, and underscores (_) and must start with a letter.
- The name cannot exceed 64 characters in length.
comment
The description of the Iceberg catalog. This parameter is optional.
type
The type of your data source. Set the value to iceberg.
MetastoreParams
A set of parameters about how StarRocks integrates with the metastore of your data source.
Hive metastore
If you choose Hive metastore as the metastore of your data source, configure MetastoreParams as follows:
"hive.metastore.uris" = "<hive_metastore_uri>"
NOTE
Before querying Iceberg data, you must add the mapping between the host names and IP addresses of your Hive metastore nodes to the /etc/hosts file. Otherwise, StarRocks may fail to access your Hive metastore when you start a query.
The following table describes the parameter you need to configure in MetastoreParams.
Parameter | Required | Description |
---|---|---|
hive.metastore.uris | Yes | The URI of your Hive metastore. Format: thrift://<metastore_IP_address>:<metastore_port> .If high availability (HA) is enabled for your Hive metastore, you can specify multiple metastore URIs and separate them with commas ( , ), for example, "thrift://<metastore_IP_address_1>:<metastore_port_1>,thrift://<metastore_IP_address_2>:<metastore_port_2>,thrift://<metastore_IP_address_3>:<metastore_port_3>" . |
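For example, a minimal sketch of an Iceberg catalog that uses Hive metastore, assuming HDFS as storage (so no StorageCredentialParams are needed) and a placeholder metastore address:
-- <metastore_IP_address> is a placeholder; replace it with the address of your Hive metastore.
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://<metastore_IP_address>:9083"
);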
AWS Glue
If you choose AWS Glue as the metastore of your data source, which is supported only when you choose AWS S3 as storage, take one of the following actions:
To choose the instance profile-based authentication method, configure MetastoreParams as follows:

"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "true",
"aws.glue.region" = "<aws_glue_region>"

To choose the assumed role-based authentication method, configure MetastoreParams as follows:

"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "true",
"aws.glue.iam_role_arn" = "<iam_role_arn>",
"aws.glue.region" = "<aws_glue_region>"

To choose the IAM user-based authentication method, configure MetastoreParams as follows:

"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "false",
"aws.glue.access_key" = "<iam_user_access_key>",
"aws.glue.secret_key" = "<iam_user_secret_key>",
"aws.glue.region" = "<aws_glue_region>"

The following table describes the parameters you need to configure in MetastoreParams.
Parameter | Required | Description |
---|---|---|
hive.metastore.type | Yes | The type of metastore that you use for your Iceberg cluster. Set the value to glue . |
aws.glue.use_instance_profile | Yes | Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: true and false . Default value: false . |
aws.glue.iam_role_arn | No | The ARN of the IAM role that has privileges on your AWS Glue Data Catalog. If you use the assumed role-based authentication method to access AWS Glue, you must specify this parameter. |
aws.glue.region | Yes | The region in which your AWS Glue Data Catalog resides. Example: us-west-1 . |
aws.glue.access_key | No | The access key of your AWS IAM user. If you use the IAM user-based authentication method to access AWS Glue, you must specify this parameter. |
aws.glue.secret_key | No | The secret key of your AWS IAM user. If you use the IAM user-based authentication method to access AWS Glue, you must specify this parameter. |
For information about how to choose an authentication method for accessing AWS Glue and how to configure an access control policy in the AWS IAM Console, see Authentication parameters for accessing AWS Glue.
Custom metadata service
If you use a custom metadata service for your Iceberg cluster, you need to create a custom catalog class (the class name of the custom catalog cannot be the same as the name of any class that already exists in StarRocks) and implement the related interface in StarRocks so that StarRocks can access the custom metadata service. The custom catalog class needs to inherit the abstract class BaseMetastoreCatalog. After the custom catalog is created, package the catalog and its related files, place them under the fe/lib path of each FE, and then restart each FE.
After you complete the preceding operations, you can create an Iceberg catalog and configure its properties.
Property | Required | Description |
---|---|---|
type | Yes | The type of the data source. Set the value to iceberg . |
iceberg.catalog.type | Yes | The type of the catalog configured in your Iceberg cluster. Set the value to CUSTOM. |
iceberg.catalog-impl | Yes | The fully qualified class name of the custom catalog. FEs search for the catalog based on this name. If the custom catalog contains custom configuration items, you must add them to the PROPERTIES parameter as key-value pairs when you create an Iceberg catalog. |
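As an illustration only, a catalog creation statement for a custom metadata service might look like the following sketch; the catalog name and the fully qualified class name are hypothetical placeholders:
CREATE EXTERNAL CATALOG iceberg_catalog_custom
PROPERTIES
(
-- The class name below is a hypothetical placeholder for your own custom catalog implementation.
"type" = "iceberg",
"iceberg.catalog.type" = "CUSTOM",
"iceberg.catalog-impl" = "com.example.MyCustomIcebergCatalog"
);
Depending on your storage system, you may also need to add StorageCredentialParams to the PROPERTIES clause.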
StorageCredentialParams
A set of parameters about how StarRocks integrates with your storage system. This parameter set is optional.
If you use HDFS as storage, you do not need to configure StorageCredentialParams.
If you use AWS S3, another S3-compatible storage system, Microsoft Azure Storage, or Google GCS as storage, you must configure StorageCredentialParams.
AWS S3
If you choose AWS S3 as storage for your Iceberg cluster, take one of the following actions:
To choose the instance profile-based authentication method, configure StorageCredentialParams as follows:

"aws.s3.use_instance_profile" = "true",
"aws.s3.region" = "<aws_s3_region>"

To choose the assumed role-based authentication method, configure StorageCredentialParams as follows:

"aws.s3.use_instance_profile" = "true",
"aws.s3.iam_role_arn" = "<iam_role_arn>",
"aws.s3.region" = "<aws_s3_region>"

To choose the IAM user-based authentication method, configure StorageCredentialParams as follows:

"aws.s3.use_instance_profile" = "false",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>",
"aws.s3.region" = "<aws_s3_region>"
The following table describes the parameters you need to configure in StorageCredentialParams.
Parameter | Required | Description |
---|---|---|
aws.s3.use_instance_profile | Yes | Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: true and false . Default value: false . |
aws.s3.iam_role_arn | No | The ARN of the IAM role that has privileges on your AWS S3 bucket. If you use the assumed role-based authentication method to access AWS S3, you must specify this parameter. |
aws.s3.region | Yes | The region in which your AWS S3 bucket resides. Example: us-west-1 . |
aws.s3.access_key | No | The access key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter. |
aws.s3.secret_key | No | The secret key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter. |
For information about how to choose an authentication method for accessing AWS S3 and how to configure an access control policy in AWS IAM Console, see Authentication parameters for accessing AWS S3.
S3-compatible storage system
If you choose an S3-compatible storage system, such as MinIO, as storage for your Iceberg cluster, configure StorageCredentialParams as follows to ensure a successful integration:
"aws.s3.enable_ssl" = "{true | false}",
"aws.s3.enable_path_style_access" = "{true | false}",
"aws.s3.endpoint" = "<s3_endpoint>",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
Parameter | Required | Description |
---|---|---|
aws.s3.enable_ssl | Yes | Specifies whether to enable SSL connection. Valid values: true and false . Default value: true . |
aws.s3.enable_path_style_access | Yes | Specifies whether to enable path-style access. Valid values: true and false . Default value: false .Path-style URLs use the following format: https://s3.<region_code>.amazonaws.com/<bucket_name>/<key_name> . For example, if you create a bucket named DOC-EXAMPLE-BUCKET1 in the US West (Oregon) Region, and you want to access the alice.jpg object in that bucket, you can use the following path-style URL: https://s3.us-west-2.amazonaws.com/DOC-EXAMPLE-BUCKET1/alice.jpg . |
aws.s3.endpoint | Yes | The endpoint that is used to connect to your S3-compatible storage system instead of AWS S3. |
aws.s3.access_key | Yes | The access key of your IAM user. |
aws.s3.secret_key | Yes | The secret key of your IAM user. |
Microsoft Azure Storage
Azure Blob Storage
If you choose Blob Storage as storage for your Iceberg cluster, take one of the following actions:
To choose the Shared Key authentication method, configure StorageCredentialParams as follows:

"azure.blob.storage_account" = "<blob_storage_account_name>",
"azure.blob.shared_key" = "<blob_storage_account_shared_key>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Required | Description |
---|---|---|
azure.blob.storage_account | Yes | The username of your Blob Storage account. |
azure.blob.shared_key | Yes | The shared key of your Blob Storage account. |

To choose the SAS Token authentication method, configure StorageCredentialParams as follows:

"azure.blob.account_name" = "<blob_storage_account_name>",
"azure.blob.container_name" = "<blob_container_name>",
"azure.blob.sas_token" = "<blob_storage_account_SAS_token>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Required | Description |
---|---|---|
azure.blob.account_name | Yes | The username of your Blob Storage account. |
azure.blob.container_name | Yes | The name of the blob container that stores your data. |
azure.blob.sas_token | Yes | The SAS token that is used to access your Blob Storage account. |
Azure Data Lake Storage Gen1
If you choose Data Lake Storage Gen1 as storage for your Iceberg cluster, take one of the following actions:
To choose the Managed Service Identity authentication method, configure StorageCredentialParams as follows:

"azure.adls1.use_managed_service_identity" = "true"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Required | Description |
---|---|---|
azure.adls1.use_managed_service_identity | Yes | Specifies whether to enable the Managed Service Identity authentication method. Set the value to true. |

To choose the Service Principal authentication method, configure StorageCredentialParams as follows:

"azure.adls1.oauth2_client_id" = "<application_client_id>",
"azure.adls1.oauth2_credential" = "<application_client_credential>",
"azure.adls1.oauth2_endpoint" = "<OAuth_2.0_authorization_endpoint_v2>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Required | Description |
---|---|---|
azure.adls1.oauth2_client_id | Yes | The client (application) ID of the service principal. |
azure.adls1.oauth2_credential | Yes | The value of the new client (application) secret created. |
azure.adls1.oauth2_endpoint | Yes | The OAuth 2.0 token endpoint (v1) of the service principal or application. |
Azure Data Lake Storage Gen2
If you choose Data Lake Storage Gen2 as storage for your Iceberg cluster, take one of the following actions:
To choose the Managed Identity authentication method, configure StorageCredentialParams as follows:

"azure.adls2.oauth2_use_managed_identity" = "true",
"azure.adls2.oauth2_tenant_id" = "<service_principal_tenant_id>",
"azure.adls2.oauth2_client_id" = "<service_client_id>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Required | Description |
---|---|---|
azure.adls2.oauth2_use_managed_identity | Yes | Specifies whether to enable the Managed Identity authentication method. Set the value to true. |
azure.adls2.oauth2_tenant_id | Yes | The ID of the tenant whose data you want to access. |
azure.adls2.oauth2_client_id | Yes | The client (application) ID of the managed identity. |

To choose the Shared Key authentication method, configure StorageCredentialParams as follows:

"azure.adls2.storage_account" = "<storage_account_name>",
"azure.adls2.shared_key" = "<shared_key>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Required | Description |
---|---|---|
azure.adls2.storage_account | Yes | The username of your Data Lake Storage Gen2 storage account. |
azure.adls2.shared_key | Yes | The shared key of your Data Lake Storage Gen2 storage account. |

To choose the Service Principal authentication method, configure StorageCredentialParams as follows:

"azure.adls2.oauth2_client_id" = "<service_client_id>",
"azure.adls2.oauth2_client_secret" = "<service_principal_client_secret>",
"azure.adls2.oauth2_client_endpoint" = "<service_principal_client_endpoint>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Required | Description |
---|---|---|
azure.adls2.oauth2_client_id | Yes | The client (application) ID of the service principal. |
azure.adls2.oauth2_client_secret | Yes | The value of the new client (application) secret created. |
azure.adls2.oauth2_client_endpoint | Yes | The OAuth 2.0 token endpoint (v1) of the service principal or application. |
Google GCS
If you choose Google GCS as storage for your Iceberg cluster, take one of the following actions:
To choose the VM-based authentication method, configure StorageCredentialParams as follows:

"gcp.gcs.use_compute_engine_service_account" = "true"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Default value | Value example | Description |
---|---|---|---|
gcp.gcs.use_compute_engine_service_account | false | true | Specifies whether to directly use the service account that is bound to your Compute Engine. |

To choose the service account-based authentication method, configure StorageCredentialParams as follows:

"gcp.gcs.service_account_email" = "<google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<google_service_private_key_id>",
"gcp.gcs.service_account_private_key" = "<google_service_private_key>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Default value | Value example | Description |
---|---|---|---|
gcp.gcs.service_account_email | "" | "user@hello.iam.gserviceaccount.com" | The email address in the JSON file generated at the creation of the service account. |
gcp.gcs.service_account_private_key_id | "" | "61d257bd8479547cb3e04f0b9b6b9ca07af3b7ea" | The private key ID in the JSON file generated at the creation of the service account. |
gcp.gcs.service_account_private_key | "" | "-----BEGIN PRIVATE KEY----xxxx-----END PRIVATE KEY-----\n" | The private key in the JSON file generated at the creation of the service account. |

To choose the impersonation-based authentication method, configure StorageCredentialParams as follows:

Make a VM instance impersonate a service account:

"gcp.gcs.use_compute_engine_service_account" = "true",
"gcp.gcs.impersonation_service_account" = "<assumed_google_service_account_email>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Default value | Value example | Description |
---|---|---|---|
gcp.gcs.use_compute_engine_service_account | false | true | Specifies whether to directly use the service account that is bound to your Compute Engine. |
gcp.gcs.impersonation_service_account | "" | "hello" | The service account that you want to impersonate. |

Make a service account (temporarily named as the meta service account) impersonate another service account (temporarily named as the data service account):

"gcp.gcs.service_account_email" = "<meta_google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<meta_google_service_account_private_key_id>",
"gcp.gcs.service_account_private_key" = "<meta_google_service_account_private_key>",
"gcp.gcs.impersonation_service_account" = "<data_google_service_account_email>"

The following table describes the parameters you need to configure in StorageCredentialParams.

Parameter | Default value | Value example | Description |
---|---|---|---|
gcp.gcs.service_account_email | "" | "user@hello.iam.gserviceaccount.com" | The email address in the JSON file generated at the creation of the meta service account. |
gcp.gcs.service_account_private_key_id | "" | "61d257bd8479547cb3e04f0b9b6b9ca07af3b7ea" | The private key ID in the JSON file generated at the creation of the meta service account. |
gcp.gcs.service_account_private_key | "" | "-----BEGIN PRIVATE KEY----xxxx-----END PRIVATE KEY-----\n" | The private key in the JSON file generated at the creation of the meta service account. |
gcp.gcs.impersonation_service_account | "" | "hello" | The data service account that you want to impersonate. |
Examples
The following examples create an Iceberg catalog named iceberg_catalog_hms or iceberg_catalog_glue, depending on the type of metastore you use, to query data from your Iceberg cluster.
AWS S3
If you choose instance profile-based credential
If you use Hive metastore in your Iceberg cluster, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"aws.s3.use_instance_profile" = "true",
"aws.s3.region" = "us-west-2",
"hive.metastore.uris" = "thrift://xx.xx.xx:9083"
);
If you use AWS Glue in your Amazon EMR Iceberg cluster, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_glue
PROPERTIES
(
"type" = "iceberg",
"aws.s3.use_instance_profile" = "true",
"aws.s3.region" = "us-west-2",
"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "true",
"aws.glue.region" = "us-west-2"
);
If you choose assumed role-based credential
If you use Hive metastore in your Iceberg cluster, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"aws.s3.use_instance_profile" = "true",
"aws.s3.iam_role_arn" = "arn:aws:iam::081976408565:role/test_s3_role",
"aws.s3.region" = "us-west-2",
"hive.metastore.uris" = "thrift://xx.xx.xx:9083"
);
If you use AWS Glue in your Amazon EMR Iceberg cluster, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_glue
PROPERTIES
(
"type" = "iceberg",
"aws.s3.use_instance_profile" = "true",
"aws.s3.iam_role_arn" = "arn:aws:iam::081976408565:role/test_s3_role",
"aws.s3.region" = "us-west-2",
"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "true",
"aws.glue.iam_role_arn" = "arn:aws:iam::081976408565:role/test_glue_role",
"aws.glue.region" = "us-west-2"
);
If you choose IAM user-based credential
If you use Hive metastore in your Iceberg cluster, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"aws.s3.use_instance_profile" = "false",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>",
"aws.s3.region" = "us-west-2",
"hive.metastore.uris" = "thrift://xx.xx.xx:9083"
);
If you use AWS Glue in your Amazon EMR Iceberg cluster, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_glue
PROPERTIES
(
"type" = "iceberg",
"aws.s3.use_instance_profile" = "false",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>",
"aws.s3.region" = "us-west-2",
"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "false",
"aws.glue.access_key" = "<iam_user_access_key>",
"aws.glue.secret_key" = "<iam_user_secret_key>",
"aws.glue.region" = "us-west-2"
);
S3-compatible storage system
Use MinIO as an example. Run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"aws.s3.enable_ssl" = "true",
"aws.s3.enable_path_style_access" = "true",
"aws.s3.endpoint" = "<s3_endpoint>",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>"
);
Microsoft Azure Storage
Azure Blob Storage
If you choose the Shared Key authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"azure.blob.storage_account" = "<blob_storage_account_name>",
"azure.blob.shared_key" = "<blob_storage_account_shared_key>"
);
If you choose the SAS Token authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"azure.blob.account_name" = "<blob_storage_account_name>",
"azure.blob.container_name" = "<blob_container_name>",
"azure.blob.sas_token" = "<blob_storage_account_SAS_token>"
);
Azure Data Lake Storage Gen1
If you choose the Managed Service Identity authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"azure.adls1.use_managed_service_identity" = "true"
);
If you choose the Service Principal authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"azure.adls1.oauth2_client_id" = "<application_client_id>",
"azure.adls1.oauth2_credential" = "<application_client_credential>",
"azure.adls1.oauth2_endpoint" = "<OAuth_2.0_authorization_endpoint_v2>"
);
Azure Data Lake Storage Gen2
If you choose the Managed Identity authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"azure.adls2.oauth2_use_managed_identity" = "true",
"azure.adls2.oauth2_tenant_id" = "<service_principal_tenant_id>",
"azure.adls2.oauth2_client_id" = "<service_client_id>"
);
If you choose the Shared Key authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"azure.adls2.storage_account" = "<storage_account_name>",
"azure.adls2.shared_key" = "<shared_key>"
);
If you choose the Service Principal authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"azure.adls2.oauth2_client_id" = "<service_client_id>",
"azure.adls2.oauth2_client_secret" = "<service_principal_client_secret>",
"azure.adls2.oauth2_client_endpoint" = "<service_principal_client_endpoint>"
);
Google GCS
If you choose the VM-based authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"gcp.gcs.use_compute_engine_service_account" = "true"
);
If you choose the service account-based authentication method, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"gcp.gcs.service_account_email" = "<google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<google_service_private_key_id>",
"gcp.gcs.service_account_private_key" = "<google_service_private_key>"
);
If you choose the impersonation-based authentication method:
If you make a VM instance impersonate a service account, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"gcp.gcs.use_compute_engine_service_account" = "true",
"gcp.gcs.impersonation_service_account" = "<assumed_google_service_account_email>"
);
If you make a service account impersonate another service account, run a command like below:
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"hive.metastore.uris" = "thrift://34.132.15.127:9083",
"gcp.gcs.service_account_email" = "<meta_google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<meta_google_service_account_private_key_id>",
"gcp.gcs.service_account_private_key" = "<meta_google_service_account_private_key>",
"gcp.gcs.impersonation_service_account" = "<data_google_service_account_email>"
);
View Iceberg catalogs
You can use SHOW CATALOGS to query all catalogs in the current StarRocks cluster:
SHOW CATALOGS;
You can also use SHOW CREATE CATALOG to query the creation statement of an external catalog. The following example queries the creation statement of an Iceberg catalog named iceberg_catalog_glue:
SHOW CREATE CATALOG iceberg_catalog_glue;
Drop an Iceberg catalog
You can use DROP CATALOG to drop an external catalog.
The following example drops an Iceberg catalog named iceberg_catalog_glue:
DROP CATALOG iceberg_catalog_glue;
View the schema of an Iceberg table
You can use one of the following syntaxes to view the schema of an Iceberg table:
View schema
DESC[RIBE] <catalog_name>.<database_name>.<table_name>
View schema and location from the CREATE statement
SHOW CREATE TABLE <catalog_name>.<database_name>.<table_name>
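For example, assuming an Iceberg catalog named iceberg_catalog_glue that contains a database iceberg_db and a table customer_orders (all names are hypothetical placeholders), you could run:
-- The catalog, database, and table names below are hypothetical placeholders.
DESC iceberg_catalog_glue.iceberg_db.customer_orders;
SHOW CREATE TABLE iceberg_catalog_glue.iceberg_db.customer_orders;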
Query an Iceberg table
Use SHOW DATABASES to view the databases in your Iceberg cluster:
SHOW DATABASES FROM <catalog_name>
Use SET CATALOG to switch to the destination catalog in the current session:
SET CATALOG <catalog_name>;
Then, use USE to specify the active database in the current session:
USE <db_name>;
Or, you can use USE to directly specify the active database in the destination catalog:
USE <catalog_name>.<db_name>;
Use SELECT to query the destination table in the specified database:
SELECT count(*) FROM <table_name> LIMIT 10
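Putting these steps together, a session against a hypothetical Iceberg catalog named iceberg_catalog_hms with a database sales_db and a table orders (all names are placeholders) might look like this:
-- All object names below are hypothetical placeholders.
SHOW DATABASES FROM iceberg_catalog_hms;
SET CATALOG iceberg_catalog_hms;
USE sales_db;
SELECT count(*) FROM orders;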
Load data from Iceberg
Suppose you have an OLAP table named olap_tbl. You can transform and load data from Iceberg as follows:
INSERT INTO default_catalog.olap_db.olap_tbl SELECT * FROM iceberg_table
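If the current session has not switched to the Iceberg catalog, you can also fully qualify the source table. A sketch, assuming a hypothetical Iceberg catalog, database, and table:
-- The Iceberg catalog, database, and table names below are hypothetical placeholders.
INSERT INTO default_catalog.olap_db.olap_tbl
SELECT * FROM iceberg_catalog_hms.iceberg_db.iceberg_table;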
Configure metadata caching
The metadata files of your Iceberg cluster may be stored in remote storage such as AWS S3 or HDFS. By default, StarRocks caches Iceberg metadata in memory. To accelerate queries, StarRocks adopts a two-level metadata caching mechanism, with which it can cache metadata both in memory and on disk. For each initial query, StarRocks caches its computation results. If a subsequent query that is semantically equivalent to a previous query is issued, StarRocks first attempts to retrieve the requested metadata from its caches, and it retrieves the metadata from the remote storage only when the metadata cannot be hit in its caches.
StarRocks uses the Least Recently Used (LRU) algorithm to cache and evict data. The basic rules are as follows:
- StarRocks first attempts to retrieve the requested metadata from memory. If the metadata cannot be hit in memory, StarRocks attempts to retrieve the metadata from disk. The metadata that StarRocks has retrieved from disk is loaded into memory. If the metadata cannot be hit on disk either, StarRocks retrieves the metadata from the remote storage and caches the retrieved metadata in memory.
- StarRocks writes the metadata evicted from memory to disk, but it directly discards the metadata evicted from disk.
The following table describes the FE configuration items that you can use to configure your Iceberg metadata caching mechanism.
Configuration item | Unit | Default value | Description |
---|---|---|---|
enable_iceberg_metadata_disk_cache | N/A | false | Specifies whether to enable the disk cache. |
iceberg_metadata_cache_disk_path | N/A | StarRocksFE.STARROCKS_HOME_DIR + "/caches/iceberg" | The save path of cached metadata files on disk. |
iceberg_metadata_disk_cache_capacity | Bytes | 2147483648 , equivalent to 2 GB | The maximum size of cached metadata allowed on disk. |
iceberg_metadata_memory_cache_capacity | Bytes | 536870912 , equivalent to 512 MB | The maximum size of cached metadata allowed in memory. |
iceberg_metadata_memory_cache_expiration_seconds | Seconds | 86500 | The amount of time after which a cache entry in memory expires counting from its last access. |
iceberg_metadata_disk_cache_expiration_seconds | Seconds | 604800 , equivalent to one week | The amount of time after which a cache entry on disk expires counting from its last access. |
iceberg_metadata_cache_max_entry_size | Bytes | 8388608 , equivalent to 8 MB | The maximum size of a file that can be cached. Files whose size exceeds the value of this parameter cannot be cached. If a query requests these files, StarRocks retrieves them from the remote storage. |
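If a configuration item can be modified at runtime, you can try setting it with ADMIN SET FRONTEND CONFIG; otherwise, add it to the fe.conf file of each FE and restart the FEs. A minimal sketch that enables the disk cache, under the assumption that this item is mutable at runtime:
-- Assumes enable_iceberg_metadata_disk_cache is mutable at runtime; otherwise set it in fe.conf and restart each FE.
ADMIN SET FRONTEND CONFIG ("enable_iceberg_metadata_disk_cache" = "true");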