
Deploy StarRocks with Operator

This topic introduces how to use the StarRocks Operator to automate the deployment and management of a StarRocks cluster on a Kubernetes cluster.

How it works

(Architecture diagram) The StarRocks Operator watches StarRocksCluster custom resources and creates and manages the Kubernetes objects (such as StatefulSets and Services) that run the FE, BE, and CN components, reconciling them so that the actual state of the cluster matches the declared specification.

Before you begin

Create Kubernetes cluster

You can use a cloud-managed Kubernetes service, such as Amazon Elastic Kubernetes Service (EKS) or Google Kubernetes Engine (GKE), or a self-managed Kubernetes cluster.

Deploy StarRocks Operator

  1. Add the custom resource StarRocksCluster.

    kubectl apply -f https://raw.githubusercontent.com/StarRocks/starrocks-kubernetes-operator/main/deploy/starrocks.com_starrocksclusters.yaml
  2. Deploy the StarRocks Operator. You can choose to deploy the StarRocks Operator by using a default configuration file or a custom configuration file.

    1. Deploy the StarRocks Operator by using a default configuration file.

      kubectl apply -f https://raw.githubusercontent.com/StarRocks/starrocks-kubernetes-operator/main/deploy/operator.yaml

      The StarRocks Operator is deployed to the namespace starrocks and manages all StarRocks clusters under all namespaces.

    2. Deploy the StarRocks Operator by using a custom configuration file.

      • Download the configuration file operator.yaml, which is used to deploy the StarRocks Operator.

        curl -O https://raw.githubusercontent.com/StarRocks/starrocks-kubernetes-operator/main/deploy/operator.yaml
      • Modify the configuration file operator.yaml to suit your needs.

      • Deploy the StarRocks Operator.

        kubectl apply -f operator.yaml
  3. Check the running status of the StarRocks Operator. If the pod is in the Running state and all containers inside the pod are READY, the StarRocks Operator is running as expected.

    $ kubectl -n starrocks get pods
    NAME                                  READY   STATUS    RESTARTS   AGE
    starrocks-controller-65bb8679-jkbtg   1/1     Running   0          5m6s

NOTE

If you deploy the StarRocks Operator in a custom namespace, replace starrocks with the name of your namespace.
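If the operator pod is not ready, you can confirm that the custom resource definition was registered and inspect the operator logs. The following is a minimal sketch; it assumes the operator Deployment is named starrocks-controller, as the pod name above suggests.

# Confirm that the StarRocksCluster CRD is registered.
kubectl get crd | grep starrocks

# Inspect the operator logs (assumes the Deployment name starrocks-controller).
kubectl -n starrocks logs deploy/starrocks-controller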

Deploy StarRocks Cluster

You can directly use the sample configuration files provided by StarRocks to deploy a StarRocks cluster (an object instantiated by using the custom resource StarRocksCluster). For example, you can use starrocks-fe-and-be.yaml to deploy a StarRocks cluster that contains three FE nodes and three BE nodes.

kubectl apply -f https://raw.githubusercontent.com/StarRocks/starrocks-kubernetes-operator/main/examples/starrocks/starrocks-fe-and-be.yaml

The following describes a few important fields in the starrocks-fe-and-be.yaml file:

  • kind: the resource type of the object. The value must be StarRocksCluster.
  • metadata: the metadata of the object, in which the following sub-fields are nested:
    • name: the name of the object. Each object name uniquely identifies an object of the same resource type.
    • namespace: the namespace to which the object belongs.
  • spec: the expected state of the object, which nests the component specifications starRocksFeSpec, starRocksBeSpec, and starRocksCnSpec.

You can also deploy the StarRocks cluster by using a modified configuration file. For supported fields and detailed descriptions, see api.md.
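For reference, a minimal StarRocksCluster manifest might look like the sketch below. It only illustrates the fields described above; the apiVersion and image tags are assumptions, so take the exact values from the downloaded sample file or api.md for your operator version.

apiVersion: starrocks.com/v1 # Assumption: copy the apiVersion from the sample file for your operator release.
kind: StarRocksCluster
metadata:
  name: starrockscluster-sample
  namespace: starrocks
spec:
  starRocksFeSpec:
    image: starrocks/fe-ubuntu:2.5.0-fix-uid # Example FE image; use the version you intend to run.
    replicas: 3
    requests:
      cpu: 4
      memory: 16Gi
  starRocksBeSpec:
    image: starrocks/be-ubuntu:2.5.0-fix-uid # Example BE image; use the version you intend to run.
    replicas: 3
    requests:
      cpu: 4
      memory: 16Gi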

Deploying the StarRocks cluster takes a while. During this period, you can use the command kubectl -n starrocks get pods to check the startup status of the StarRocks cluster. If all the pods are in the Running state and all containers inside the pods are READY, the StarRocks cluster is running as expected.

NOTE

If you customize the namespace in which the StarRocks cluster is located, you need to replace starrocks with the name of your customized namespace.

$ kubectl -n starrocks get pods
NAME                                  READY   STATUS    RESTARTS   AGE
starrocks-controller-65bb8679-jkbtg   1/1     Running   0          22h
starrockscluster-sample-be-0          1/1     Running   0          23h
starrockscluster-sample-be-1          1/1     Running   0          23h
starrockscluster-sample-be-2          1/1     Running   0          22h
starrockscluster-sample-fe-0          1/1     Running   0          21h
starrockscluster-sample-fe-1          1/1     Running   0          21h
starrockscluster-sample-fe-2          1/1     Running   0          22h

NOTE

If some pods cannot start after a long period of time, you can use kubectl logs -n starrocks <pod_name> to view the log information or use kubectl -n starrocks describe pod <pod_name> to view the event information to locate the problem.
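You can also inspect the StarRocksCluster object itself. The exact status columns depend on the operator version, so treat the output as indicative only.

kubectl -n starrocks get starrockscluster starrockscluster-sample
kubectl -n starrocks describe starrockscluster starrockscluster-sample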

Manage StarRocks Cluster

Access StarRocks Cluster

The components of the StarRocks cluster can be accessed through their associated Services, such as the FE Service. For detailed descriptions of Services and their access addresses, see api.md and Services.

NOTE

  • Only the FE Service is deployed by default. If you need to deploy the BE Service and CN Service, you need to configure starRocksBeSpec and starRocksCnSpec in the StarRocks cluster configuration file.
  • The name of a Service is <cluster name>-<component name>-service by default, for example, starrockscluster-sample-fe-service. You can also specify the Service name in the spec of each component.

Access StarRocks Cluster from within Kubernetes Cluster

From within the Kubernetes cluster, the StarRocks cluster can be accessed through the FE Service's ClusterIP.

  1. Obtain the internal virtual IP address CLUSTER-IP and port PORT(S) of the FE Service.

    $ kubectl -n starrocks get svc 
    NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
    be-domain-search                     ClusterIP   None             <none>        9050/TCP                              23m
    fe-domain-search                     ClusterIP   None             <none>        9030/TCP                              25m
    starrockscluster-sample-fe-service   ClusterIP   10.100.162.xxx   <none>        8030/TCP,9020/TCP,9030/TCP,9010/TCP   25m
  2. Access the StarRocks cluster by using the MySQL client from within the Kubernetes cluster.

    mysql -h 10.100.162.xxx -P 9030 -uroot
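If your client pod resolves cluster DNS, you can also connect by using the FE Service's DNS name instead of the ClusterIP. The following sketch assumes the default Service name and the starrocks namespace:

mysql -h starrockscluster-sample-fe-service.starrocks.svc.cluster.local -P 9030 -uroot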

Access StarRocks Cluster from outside Kubernetes Cluster

From outside the Kubernetes cluster, you can access the StarRocks cluster through the FE Service's LoadBalancer or NodePort. This topic uses LoadBalancer as an example:

  1. Run the command kubectl -n starrocks edit src starrockscluster-sample to update the StarRocks cluster configuration file, and change the Service type of starRocksFeSpec to LoadBalancer.

    starRocksFeSpec:
        image: starrocks/alpine-fe:2.4.1
        replicas: 3
        requests:
          cpu: 4
          memory: 16Gi
        service:
          type: LoadBalancer # The Service type is specified as LoadBalancer.
  2. Obtain the IP address EXTERNAL-IP and port PORT(S) that the FE Service exposes to the outside.

    $ kubectl -n starrocks get svc
    NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                                                       AGE
    be-domain-search                     ClusterIP      None             <none>                                                                   9050/TCP                                                      127m
    fe-domain-search                     ClusterIP      None             <none>                                                                   9030/TCP                                                      129m
    starrockscluster-sample-fe-service   LoadBalancer   10.100.162.xxx   a7509284bf3784983a596c6eec7fc212-618xxxxxx.us-west-2.elb.amazonaws.com   8030:30629/TCP,9020:32544/TCP,9030:32244/TCP,9010:32024/TCP   129m
  3. From your host machine, access the StarRocks cluster by using the MySQL client.

    mysql -h a7509284bf3784983a596c6eec7fc212-618xxxxxx.us-west-2.elb.amazonaws.com -P9030 -uroot
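Once connected, you can run a quick sanity check to confirm that the FE and BE nodes have registered, for example:

mysql -h a7509284bf3784983a596c6eec7fc212-618xxxxxx.us-west-2.elb.amazonaws.com -P9030 -uroot -e 'SHOW FRONTENDS; SHOW BACKENDS;'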

Upgrade StarRocks Cluster

Upgrade BE nodes

Run the following command to specify a new BE image, such as starrocks/be-ubuntu:2.5.0-fix-uid:

kubectl -n starrocks patch starrockscluster starrockscluster-sample --type='merge' -p '{"spec":{"starRocksBeSpec":{"image":"starrocks/be-ubuntu:2.5.0-fix-uid"}}}'

Upgrade FE nodes

Run the following command to specify a new FE image, such as starrocks/fe-ubuntu:2.5.0-fix-uid:

kubectl -n starrocks patch starrockscluster starrockscluster-sample --type='merge' -p '{"spec":{"starRocksFeSpec":{"image":"starrocks/fe-ubuntu:2.5.0-fix-uid"}}}'

The upgrade process lasts for a while. You can run the command kubectl -n starrocks get pods to view the upgrade progress.
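To follow the rolling restart of each component in more detail, you can watch the pods or check the rollout of the underlying StatefulSets. This sketch assumes the StatefulSets are named after the cluster and component, as the pod names above suggest:

kubectl -n starrocks get pods -w
kubectl -n starrocks rollout status statefulset/starrockscluster-sample-be
kubectl -n starrocks rollout status statefulset/starrockscluster-sample-fe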

Scale StarRocks cluster

This topic takes scaling out the BE and FE clusters as examples.

Scale out BE cluster

Run the following command to scale out the BE cluster to 9 nodes:

kubectl -n starrocks patch starrockscluster starrockscluster-sample --type='merge' -p '{"spec":{"starRocksBeSpec":{"replicas":9}}}'

Scale out FE cluster

Run the following command to scale out the FE cluster to 4 nodes:

kubectl -n starrocks patch starrockscluster starrockscluster-sample --type='merge' -p '{"spec":{"starRocksFeSpec":{"replicas":4}}}'

The scaling process lasts for a while. You can use the command kubectl -n starrocks get pods to view the scaling progress.
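You can also confirm the desired replica counts recorded in the StarRocksCluster object, for example:

kubectl -n starrocks get starrockscluster starrockscluster-sample -o jsonpath='{.spec.starRocksFeSpec.replicas}{"\n"}{.spec.starRocksBeSpec.replicas}{"\n"}'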

Automatic scaling for CN cluster

Run the command kubectl -n starrocks edit src starrockscluster-sample to configure the automatic scaling policy for the CN cluster. You can specify the resource metrics for CNs (such as the average CPU utilization and average memory usage), the elastic scaling thresholds, and the upper and lower elastic scaling limits. The upper and lower elastic scaling limits specify the maximum and minimum numbers of CNs allowed during elastic scaling.

NOTE

If the automatic scaling policy for the CN cluster is configured, delete the replicas field from the starRocksCnSpec in the StarRocks cluster configuration file.

Kubernetes also supports using behavior to customize scaling behaviors according to business scenarios, helping you achieve rapid or slow scaling or disable scaling. For more information about automatic scaling policies, see Horizontal Pod Autoscaling.

The following is a template provided by StarRocks to help you configure automatic scaling policies:

  starRocksCnSpec:
    image: starrocks/centos-cn:2.4.1
    requests:
      cpu: 4
      memory: 4Gi
    autoScalingPolicy: # Automatic scaling policy of the CN cluster.
      maxReplicas: 10 # The maximum number of CNs is set to 10.
      minReplicas: 1 # The minimum number of CNs is set to 1.
      hpaPolicy:
        metrics: # Resource metrics.
          - type: Resource
            resource:
              name: memory # The average memory usage of CNs is specified as a resource metric.
              target:
                averageUtilization: 30
                # The elastic scaling threshold is 30%.
                # When the average memory utilization of CNs exceeds 30%, the number of CNs increases for scale-out.
                # When the average memory utilization of CNs is below 30%, the number of CNs decreases for scale-in.
                type: Utilization
          - type: Resource
            resource:
              name: cpu # The average CPU utilization of CNs is specified as a resource metric.
              target:
                averageUtilization: 60
                # The elastic scaling threshold is 60%.
                # When the average CPU utilization of CNs exceeds 60%, the number of CNs increases for scale-out.
                # When the average CPU utilization of CNs is below 60%, the number of CNs decreases for scale-in.
                type: Utilization
        behavior: # The scaling behavior can be customized according to business scenarios, helping you achieve rapid or slow scaling or disable scaling.
          scaleUp: # Assumed placement: in the HPA v2 API, scaling policies are nested under scaleUp or scaleDown.
            policies:
              - type: Pods
                value: 1
                periodSeconds: 10
          scaleDown:
            selectPolicy: Disabled

The following describes a few important fields:

  • The upper and lower elastic scaling limits:

    maxReplicas: 10 # The maximum number of CNs is set to 10.
    minReplicas: 1 # The minimum number of CNs is set to 1.

  • The elastic scaling threshold:

    # For example, the average CPU utilization of CNs is specified as a resource metric.
    # The elastic scaling threshold is 60%.
    # When the average CPU utilization of CNs exceeds 60%, the number of CNs increases for scale-out.
    # When the average CPU utilization of CNs is below 60%, the number of CNs decreases for scale-in.
    - type: Resource
      resource:
        name: cpu
        target:
          averageUtilization: 60
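If the operator implements this policy with a standard Kubernetes HorizontalPodAutoscaler (an assumption; the resource name may vary by operator version), you can observe the live scaling decisions with:

kubectl -n starrocks get hpa
kubectl -n starrocks describe hpa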