# Maintain Different TiDB Clusters Separately Using Multiple Sets of TiDB Operator
You can use one set of TiDB Operator to manage multiple TiDB clusters. If you have either of the following needs, you can deploy multiple sets of TiDB Operator to manage different TiDB clusters:
- You need to perform a canary upgrade on TiDB Operator so that the potential issues of the new version do not affect your application.
- Multiple TiDB clusters exist in your organization, and each cluster belongs to a different team. Each team needs to manage its own cluster.
This document describes how to deploy multiple sets of TiDB Operator to manage different TiDB clusters.
When you use TiDB Operator, `tidb-scheduler` is not mandatory. Refer to tidb-scheduler and default-scheduler to confirm whether you need to deploy `tidb-scheduler`.
## Deploy multiple sets of TiDB Operator
1. Deploy the first set of TiDB Operator.

    Refer to Deploy TiDB Operator - Customize TiDB Operator to deploy the first set of TiDB Operator. Add the following configuration in the `values.yaml` file:

    ```yaml
    controllerManager:
      selector:
      - user=dev
    ```
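    The exact install command depends on your environment. The following is a minimal sketch, assuming the PingCAP chart repository has already been added (so that the `pingcap/tidb-operator` chart is available) and that the first operator runs in the `tidb-admin` namespace used later in this document; the release name `tidb-operator` and `${chart_version}` are illustrative placeholders:

    ```shell
    # Install the first TiDB Operator with the selector configured above.
    helm install tidb-operator pingcap/tidb-operator \
        --namespace tidb-admin --create-namespace \
        --version ${chart_version} \
        -f values.yaml
    ```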
2. Deploy the first TiDB cluster.

    Refer to Configure the TiDB Cluster - Configure TiDB deployment to configure the TidbCluster CR, and configure `labels` to match the `selector` set in the last step. For example:

    ```yaml
    apiVersion: pingcap.com/v1alpha1
    kind: TidbCluster
    metadata:
      name: basic1
      labels:
        user: dev
    spec:
      ...
    ```

    If `labels` is not set when you deploy the TiDB cluster, you can configure `labels` by running the following command:

    ```shell
    kubectl -n ${namespace} label tidbcluster ${cluster_name} user=dev
    ```

    Refer to Deploy TiDB on General Kubernetes to deploy the TiDB cluster. Confirm that each component in the cluster is started normally.
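    One possible way to confirm this, assuming the cluster is deployed in `${namespace}` with the name `${cluster_name}`:

    ```shell
    # Check that the TidbCluster CR exists and that its component Pods are Running.
    kubectl -n ${namespace} get tidbcluster ${cluster_name}
    kubectl -n ${namespace} get pods
    ```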
3. Deploy the second set of TiDB Operator.

    Refer to Deploy TiDB Operator to deploy the second set of TiDB Operator without `tidb-scheduler`. Add the following configuration in the `values.yaml` file, and deploy the second TiDB Operator (without `tidb-scheduler`) in a different namespace (such as `tidb-admin-qa`) with a different Helm release name (such as `helm install tidb-operator-qa ...`):

    ```yaml
    controllerManager:
      selector:
      - user=qa
    appendReleaseSuffix: true
    scheduler:
      # If you do not need tidb-scheduler, set this value to false.
      create: false
    advancedStatefulset:
      create: false
    admissionWebhook:
      create: false
    ```
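    A minimal sketch of the corresponding install command, assuming the configuration above is saved as `values-qa.yaml` (a hypothetical file name) and `${chart_version}` matches the chart version you use:

    ```shell
    # Install the second TiDB Operator in its own namespace with its own release name.
    helm install tidb-operator-qa pingcap/tidb-operator \
        --namespace tidb-admin-qa --create-namespace \
        --version ${chart_version} \
        -f values-qa.yaml
    ```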
4. Deploy the second TiDB cluster.

    Refer to Configure the TiDB Cluster to configure the TidbCluster CR, and configure `labels` to match the `selector` set in the last step. For example:

    ```yaml
    apiVersion: pingcap.com/v1alpha1
    kind: TidbCluster
    metadata:
      name: basic2
      labels:
        user: qa
    spec:
      ...
    ```

    If `labels` is not set when you deploy the TiDB cluster, you can configure `labels` by running the following command:

    ```shell
    kubectl -n ${namespace} label tidbcluster ${cluster_name} user=qa
    ```

    Refer to Deploy TiDB on General Kubernetes to deploy the TiDB cluster. Confirm that each component in the cluster is started normally.
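    If you want to double-check the label before looking at the Operator logs in the next step, one option is:

    ```shell
    # Show the labels on the second cluster's TidbCluster CR; they should include user=qa.
    kubectl -n ${namespace} get tidbcluster ${cluster_name} --show-labels
    ```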
5. View the logs of the two sets of TiDB Operator, and confirm that each TiDB Operator manages the TiDB cluster that matches the corresponding selector.

    For example:

    View the log of `tidb-controller-manager` of the first TiDB Operator:

    ```shell
    kubectl -n tidb-admin logs tidb-controller-manager-55b887bdc9-lzdwv
    ```

    Output:

    ```
    ...
    I0113 02:50:13.195779 1 main.go:69] FLAG: --selector="user=dev"
    ...
    I0113 02:50:32.409378 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully
    I0113 02:50:32.773635 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully
    I0113 02:51:00.294241 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully
    ```

    View the log of `tidb-controller-manager` of the second TiDB Operator:

    ```shell
    kubectl -n tidb-admin-qa logs tidb-controller-manager-qa-5dfcd7f9-vll4c
    ```

    Output:

    ```
    ...
    I0113 02:50:13.195779 1 main.go:69] FLAG: --selector="user=qa"
    ...
    I0113 03:38:43.859387 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully
    I0113 03:38:45.060028 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully
    I0113 03:38:46.261045 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully
    ```

    By comparing the logs of the two sets of TiDB Operator, you can confirm that the first TiDB Operator only manages the `tidb-cluster-1/basic1` cluster, and the second TiDB Operator only manages the `tidb-cluster-2/basic2` cluster.
If you want to deploy a third or more sets of TiDB Operator, repeat step 3, step 4, and step 5.
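For example, a third set of TiDB Operator for a hypothetical `staging` team could follow the same pattern. The `user=staging` label, release name, and namespace below are illustrative assumptions rather than values used elsewhere in this document:

```yaml
# Hypothetical values.yaml for a third TiDB Operator, deployed with its own
# release name and namespace, for example:
#   helm install tidb-operator-staging pingcap/tidb-operator -n tidb-admin-staging -f values-staging.yaml
controllerManager:
  selector:
  - user=staging
appendReleaseSuffix: true
scheduler:
  # Set to false if you do not need tidb-scheduler.
  create: false
advancedStatefulset:
  create: false
admissionWebhook:
  create: false
```

The TidbCluster CRs managed by this third Operator would then need the matching `user: staging` label.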
## Related parameters
In the `values.yaml` file in the `tidb-operator` chart, the following parameters are related to the deployment of multiple sets of TiDB Operator:
### `appendReleaseSuffix`

If this parameter is set to `true`, when you deploy TiDB Operator, the Helm chart automatically adds a suffix (`-{{ .Release.Name }}`) to the names of resources related to `tidb-controller-manager` and `tidb-scheduler`.

For example, if you execute `helm install canary pingcap/tidb-operator ...`, the name of the `tidb-controller-manager` deployment is `tidb-controller-manager-canary`.

If you need to deploy multiple sets of TiDB Operator, set this parameter to `true`.

Default value: `false`.
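As a quick check after such an install (the `${namespace}` placeholder stands for whichever namespace you installed the `canary` release into):

```shell
# With appendReleaseSuffix: true, the controller manager Deployment of the
# "canary" release is expected to show up as tidb-controller-manager-canary.
kubectl -n ${namespace} get deployments
```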
### `controllerManager.create`

Controls whether to create `tidb-controller-manager`.

Default value: `true`.
### `controllerManager.selector`

Sets the `-selector` parameter for `tidb-controller-manager`. The parameter is used to filter the CRs controlled by `tidb-controller-manager` according to the CR labels. If multiple selectors exist, the selectors are in an `and` relationship.

Default value: `[]` (`tidb-controller-manager` controls all CRs).

Example:

```yaml
selector:
- canary-release=v1
- k1==v1
- k2!=v2
```
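Because these selectors use Kubernetes label-selector syntax applied to the CR labels, you can roughly preview which TidbCluster CRs a given selector would match by using `kubectl`. A minimal sketch, reusing the `user=dev` label from the steps above:

```shell
# List the TidbCluster CRs in all namespaces whose labels match user=dev,
# which is what a tidb-controller-manager started with -selector=user=dev manages.
# Multiple selectors combine with a comma (logical and), for example: -l user=dev,canary-release=v1
kubectl get tidbcluster --all-namespaces -l user=dev
```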
### `scheduler.create`

Controls whether to create `tidb-scheduler`.

Default value: `true`.