Maintain Different TiDB Clusters Separately Using Multiple Sets of TiDB Operator

You can use one set of TiDB Operator to manage multiple TiDB clusters. However, if you have either of the following needs, you can deploy multiple sets of TiDB Operator, each managing different TiDB clusters:

  • You need to perform a canary upgrade on TiDB Operator so that the potential issues of the new version do not affect your application.
  • Multiple TiDB clusters exist in your organization, and each cluster belongs to a different team. Each team needs to manage its own clusters.

This document describes how to deploy multiple sets of TiDB Operator to manage different TiDB clusters.

When you use TiDB Operator, tidb-scheduler is not mandatory. Refer to tidb-scheduler and default-scheduler to confirm whether you need to deploy tidb-scheduler.

Deploy multiple sets of TiDB Operator

  1. Deploy the first set of TiDB Operator.

    Refer to Deploy TiDB Operator - Customize TiDB Operator to deploy the first set of TiDB Operator. Add the following configuration to the values.yaml file:

    controllerManager:
      selector:
      - user=dev
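
    For example, the first TiDB Operator can be deployed with commands like the following (the tidb-admin namespace, the release name, and the chart version placeholder are illustrative; adjust them to your environment):

    # Create a dedicated namespace for the first TiDB Operator, then
    # install the chart with the values.yaml that contains the selector
    kubectl create namespace tidb-admin
    helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=${chart_version} -f values.yaml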
  2. Deploy the first TiDB cluster.

    1. Refer to Configure the TiDB Cluster - Configure TiDB deployment to configure the TidbCluster CR, and configure labels to match the selector set in the previous step. For example:

      apiVersion: pingcap.com/v1alpha1
      kind: TidbCluster
      metadata:
        name: basic1
        labels:
          user: dev
      spec:
        ...

      If labels is not set when you deploy the TiDB cluster, you can configure labels by running the following command:

      kubectl -n ${namespace} label tidbcluster ${cluster_name} user=dev
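
      To verify that the label is set as expected, you can list the cluster together with its labels:

      # Show the TidbCluster CR and its labels
      kubectl -n ${namespace} get tidbcluster ${cluster_name} --show-labels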
    2. Refer to Deploy TiDB in General Kubernetes to deploy the TiDB cluster. Confirm that each component in the cluster is started normally.
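
      For example, a quick way to check the component Pods is the following command (this assumes the cluster Pods carry the app.kubernetes.io/instance label that TiDB Operator sets to the cluster name):

      # List all Pods that belong to the first TiDB cluster
      kubectl -n ${namespace} get pods -l app.kubernetes.io/instance=${cluster_name}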

  3. Deploy the second set of TiDB Operator.

    Refer to Deploy TiDB Operator to deploy the second set of TiDB Operator. Add the following configuration to the values.yaml file, and deploy the second TiDB Operator (without tidb-scheduler) in a different namespace (such as tidb-admin-qa) with a different Helm release name (such as helm install tidb-operator-qa ...):

    controllerManager:
      selector:
      - user=qa
    appendReleaseSuffix: true
    scheduler:
      # If you do not need tidb-scheduler, set this value to false.
      create: false
    advancedStatefulset:
      create: false
    admissionWebhook:
      create: false
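
    For example, the second TiDB Operator can be deployed with commands like the following (again, the chart version placeholder is illustrative):

    # Create a separate namespace and use a different Helm release name
    kubectl create namespace tidb-admin-qa
    helm install tidb-operator-qa pingcap/tidb-operator --namespace=tidb-admin-qa --version=${chart_version} -f values.yaml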
  4. Deploy the second TiDB cluster.

    1. Refer to Configure the TiDB Cluster to configure the TidbCluster CR, and configure labels to match the selector set in the previous step. For example:

      apiVersion: pingcap.com/v1alpha1
      kind: TidbCluster
      metadata:
        name: basic2
        labels:
          user: qa
      spec:
        ...

      If labels is not set when you deploy the TiDB cluster, you can configure labels by running the following command:

      kubectl -n ${namespace} label tidbcluster ${cluster_name} user=qa
    2. Refer to Deploy TiDB in General Kubernetes to deploy the TiDB cluster. Confirm that each component in the cluster is started normally.
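
      You can also check the overall cluster status reported in the TidbCluster CR (the exact output columns depend on your TiDB Operator version):

      # List TidbCluster CRs in the namespace and their reported status
      kubectl -n ${namespace} get tidbcluster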

  5. View the logs of the two sets of TiDB Operator, and confirm that each TiDB Operator manages the TiDB cluster that matches the corresponding selectors.

    For example:

    View the log of tidb-controller-manager of the first TiDB Operator:

    kubectl -n tidb-admin logs tidb-controller-manager-55b887bdc9-lzdwv
    Output
    ...
    I0113 02:50:13.195779 1 main.go:69] FLAG: --selector="user=dev"
    ...
    I0113 02:50:32.409378 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully
    I0113 02:50:32.773635 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully
    I0113 02:51:00.294241 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully

    View the log of tidb-controller-manager of the second TiDB Operator:

    kubectl -n tidb-admin-qa logs tidb-controller-manager-qa-5dfcd7f9-vll4c
    Output
    ...
    I0113 02:50:13.195779 1 main.go:69] FLAG: --selector="user=qa"
    ...
    I0113 03:38:43.859387 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully
    I0113 03:38:45.060028 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully
    I0113 03:38:46.261045 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully

    By comparing the logs of the two sets of TiDB Operator, you can confirm that the first TiDB Operator only manages the tidb-cluster-1/basic1 cluster, and the second TiDB Operator only manages the tidb-cluster-2/basic2 cluster.
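
    The Pod names in the commands above are examples. To find the actual tidb-controller-manager Pod name in each namespace, you can run commands like the following (this assumes the Pods carry the chart's app.kubernetes.io/component=controller-manager label):

    # Look up the controller-manager Pod of each TiDB Operator deployment
    kubectl -n tidb-admin get pods -l app.kubernetes.io/component=controller-manager
    kubectl -n tidb-admin-qa get pods -l app.kubernetes.io/component=controller-manager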

If you need more sets of TiDB Operator, repeat step 3, step 4, and step 5 for each additional set.

In the values.yaml file in the tidb-operator chart, the following parameters are related to the deployment of multiple sets of TiDB Operator:

  • appendReleaseSuffix

    If this parameter is set to true, when you deploy TiDB Operator, the Helm chart automatically adds a suffix (-{{ .Release.Name }}) to the names of the resources related to tidb-controller-manager and tidb-scheduler.

    For example, if you execute helm install canary pingcap/tidb-operator ..., the name of the tidb-controller-manager deployment is tidb-controller-manager-canary.

    If you need to deploy multiple sets of TiDB Operator, set this parameter to true.

    Default value: false.

  • controllerManager.create

    Controls whether to create tidb-controller-manager.

    Default value: true.

  • controllerManager.selector

    Sets the --selector parameter for tidb-controller-manager. The parameter filters the CRs managed by tidb-controller-manager according to the CR labels. If multiple selectors are specified, they are combined with a logical AND.

    Default value: [] (tidb-controller-manager controls all CRs).

    Example:

    selector:
    - canary-release=v1
    - k1==v1
    - k2!=v2
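
    With the example above, tidb-controller-manager only manages CRs that carry both the canary-release=v1 and k1=v1 labels and do not carry k2=v2. For instance, you can make an existing TidbCluster match the first two selectors as follows:

    # Add the labels required by the example selector to a TidbCluster CR
    kubectl -n ${namespace} label tidbcluster ${cluster_name} canary-release=v1 k1=v1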
  • scheduler.create

    Controls whether to create tidb-scheduler.

    Default value: true.
