# Perform a Canary Upgrade on TiDB Operator
This document describes how to perform a canary upgrade on TiDB Operator. Using canary upgrades, you can prevent a regular TiDB Operator upgrade from having an unexpected impact on all the TiDB clusters in Kubernetes. After you confirm the impact of the TiDB Operator upgrade, or confirm that the upgraded TiDB Operator works stably, you can upgrade TiDB Operator normally.
> **Note:**
>
> - You can perform a canary upgrade only on `tidb-controller-manager` and `tidb-scheduler`. The AdvancedStatefulSet controller and `tidb-admission-webhook` do not support the canary upgrade.
> - Canary upgrade is supported since v1.1.10. The version of your current TiDB Operator should be >= v1.1.10.
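To confirm that your current TiDB Operator meets the version requirement, you can check the chart version of its Helm release, assuming it was installed with Helm in the `tidb-admin` namespace as in the examples below:

```shell
helm list -n tidb-admin
```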
## Related parameters
To support the canary upgrade, some parameters are added to the `values.yaml` file in the `tidb-operator` chart. See Related parameters for details.
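For orientation, the two values used throughout this document are `controllerManager.selector`, which restricts an operator deployment to the TiDB clusters whose labels match the given expressions, and `appendReleaseSuffix`, which appends the Helm release name to the resources of the deployment so that two sets of TiDB Operator can coexist. A minimal sketch:

```yaml
controllerManager:
  selector:                 # this operator manages only TidbClusters matching these label expressions
  - version=canary
appendReleaseSuffix: true   # suffix deployed resources with the release name to avoid collisions
```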
## Canary upgrade process
1. Configure the selector for the current TiDB Operator:

    Refer to Upgrade TiDB Operator. Add the following configuration in the `values.yaml` file, and upgrade TiDB Operator:

    ```yaml
    controllerManager:
      selector:
      - version!=canary
    ```

    If you have already performed the step above, skip to Step 2.
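    A minimal sketch of the upgrade command, assuming the current TiDB Operator is the Helm release `tidb-operator` in the `tidb-admin` namespace and `${operator_version}` is your current chart version:

    ```shell
    helm upgrade tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=${operator_version} -f values.yaml
    ```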
2. Deploy the canary TiDB Operator:

    Refer to Deploy TiDB Operator. Add the following configuration in the `values.yaml` file, and deploy the canary TiDB Operator in a different namespace (such as `tidb-admin-canary`) with a different Helm release name (such as `helm install tidb-operator-canary ...`):

    ```yaml
    controllerManager:
      selector:
      - version=canary
    appendReleaseSuffix: true
    #scheduler:
    #  create: false
    advancedStatefulset:
      create: false
    admissionWebhook:
      create: false
    ```
    > **Note:**
    >
    > - It is recommended to deploy the new TiDB Operator in a separate namespace.
    > - Set `appendReleaseSuffix` to `true`.
    > - If you do not need to perform a canary upgrade on `tidb-scheduler`, configure `scheduler.create: false`.
    > - If you configure `scheduler.create: true`, a scheduler named `{{ .scheduler.schedulerName }}-{{ .Release.Name }}` is created. To use this scheduler, configure `spec.schedulerName` in the `TidbCluster` CR to the name of this scheduler.
    > - You need to set `advancedStatefulset.create: false` and `admissionWebhook.create: false`, because the AdvancedStatefulSet controller and `tidb-admission-webhook` do not support the canary upgrade.
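    A deployment following the `helm install tidb-operator-canary ...` pattern above might look like this sketch, where `values-canary.yaml` is a hypothetical file holding the configuration shown above and `${operator_version}` is the version being tested:

    ```shell
    helm install tidb-operator-canary pingcap/tidb-operator --namespace=tidb-admin-canary --version=${operator_version} -f values-canary.yaml
    ```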
3. To test the canary upgrade of `tidb-controller-manager`, set labels for a TiDB cluster by running the following command:

    ```shell
    kubectl -n ${namespace} label tc ${cluster_name} version=canary
    ```
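    To confirm that the label has been applied, you can list the cluster with its labels (`tc` is the short name for the `TidbCluster` resource):

    ```shell
    kubectl -n ${namespace} get tc ${cluster_name} --show-labels
    ```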
    Check the logs of the two deployed `tidb-controller-manager`s, and you can see that this TiDB cluster is now managed by the canary TiDB Operator:

    1. View the log of `tidb-controller-manager` of the current TiDB Operator:

        ```shell
        kubectl -n tidb-admin logs tidb-controller-manager-55b887bdc9-lzdwv
        ```

        ```
        I0305 07:52:04.558973 1 tidb_cluster_controller.go:148] TidbCluster has been deleted tidb-cluster-1/basic1
        ```

    2. View the log of `tidb-controller-manager` of the canary TiDB Operator:

        ```shell
        kubectl -n tidb-admin-canary logs tidb-controller-manager-canary-6dcb9bdd95-qf4qr
        ```

        ```
        I0113 03:38:43.859387 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully
        ```
4. To test the canary upgrade of `tidb-scheduler`, modify `spec.schedulerName` of some TiDB cluster to `tidb-scheduler-canary` by running the following command:

    ```shell
    kubectl -n ${namespace} edit tc ${cluster_name}
    ```
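    In the editor, the change amounts to pointing the cluster at the canary scheduler, for example:

    ```yaml
    spec:
      schedulerName: tidb-scheduler-canary
    ```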
    After the modification, all components in the cluster undergo a rolling update.
    Check the logs of `tidb-scheduler` of the canary TiDB Operator, and you can see that this TiDB cluster is now using the canary `tidb-scheduler`:

    ```shell
    kubectl -n tidb-admin-canary logs tidb-scheduler-canary-7f7b6c7c6-j5p2j -c tidb-scheduler
    ```
5. After the tests, you can revert the changes in Step 3 and Step 4 so that the TiDB cluster is again managed by the current TiDB Operator.

    Remove the `version=canary` label:

    ```shell
    kubectl -n ${namespace} label tc ${cluster_name} version-
    ```

    Revert `spec.schedulerName`:

    ```shell
    kubectl -n ${namespace} edit tc ${cluster_name}
    ```
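    When editing the `TidbCluster` CR, set `spec.schedulerName` back to the value it had before the test; assuming the cluster previously used the default scheduler name, that is:

    ```yaml
    spec:
      schedulerName: tidb-scheduler
    ```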
6. Delete the canary TiDB Operator:

    ```shell
    helm -n tidb-admin-canary uninstall ${release_name}
    ```
7. Refer to Upgrade TiDB Operator and upgrade the current TiDB Operator normally.