Replace Nodes for a TiDB Cluster on Local Disks
This document describes a method for replacing and upgrading nodes without downtime for a TiDB cluster that uses local storage.
If you only need to maintain a specific node in the TiDB cluster, refer to Maintain Kubernetes Nodes that Hold the TiDB Cluster.
Prerequisites
- A TiDB cluster is deployed. If not, refer to Deploy TiDB in General Kubernetes to deploy one.
- The new nodes are ready and have joined the Kubernetes cluster.
Step 1: Clone the configuration of the original TiDB cluster
1. Export a copy of the cluster configuration file, tidb-cluster-clone.yaml, by running the following command:

    ```shell
    kubectl get tidbcluster ${origin_cluster_name} -n ${namespace} -oyaml > tidb-cluster-clone.yaml
    ```

    - ${origin_cluster_name} is the name of the original cluster.
    - ${namespace} is the namespace of the original cluster.

2. Modify tidb-cluster-clone.yaml and allow the new clone cluster to join the original TiDB cluster:

    ```yaml
    kind: TidbCluster
    metadata:
      name: ${clone_cluster_name}
    spec:
      cluster:
        name: ${origin_cluster_name}
    ...
    ```

    - ${clone_cluster_name} is the name of the clone cluster.
    - ${origin_cluster_name} is the name of the original cluster.
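Because kubectl get ... -oyaml exports server-populated fields (such as status, metadata.resourceVersion, metadata.uid, and metadata.creationTimestamp) that are not valid when creating a new object, delete them while editing the file. The following check is a suggested addition, not part of the original procedure:

```shell
# Validate the edited clone manifest against the API server without creating any objects.
# If server-populated fields copied from the export are still present, remove them first.
kubectl apply --dry-run=server -f tidb-cluster-clone.yaml
```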
Step 2: Sign certificates for the clone cluster
If the original cluster enables TLS, you need to sign certificates for the clone cluster. If not, you can skip this step and move to Step 3.
Use cfssl
If you use cfssl to sign certificates, you must use the same certificate authority (CA) as the original cluster. To complete the signing process, follow steps 5 through 7 in Using cfssl.
Use cert-manager
If you use cert-manager, you must sign certificates using the same Issuer as the original cluster. To complete the signing process, follow instructions in step 3 in Using cert-manager.
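For reference, the following is a heavily trimmed sketch of what such a Certificate might look like for the clone cluster's PD component, assuming cert-manager v1 and an Issuer named ${origin_cluster_name}-tidb-issuer; the exact secret name, DNS names, and usages must follow step 3 in Using cert-manager:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  # Secret name follows the convention expected by TiDB Operator for the PD component.
  name: ${clone_cluster_name}-pd-cluster-secret
  namespace: ${namespace}
spec:
  secretName: ${clone_cluster_name}-pd-cluster-secret
  usages:
    - server auth
    - client auth
  dnsNames:
    # DNS names cover the clone cluster's PD Services and Pods (illustrative subset).
    - "${clone_cluster_name}-pd"
    - "${clone_cluster_name}-pd.${namespace}"
    - "*.${clone_cluster_name}-pd-peer.${namespace}.svc"
  ipAddresses:
    - 127.0.0.1
    - ::1
  issuerRef:
    # Reuse the Issuer of the original cluster so both clusters trust the same CA.
    name: ${origin_cluster_name}-tidb-issuer
    kind: Issuer
    group: cert-manager.io
```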
Step 3: Mark the nodes to be replaced as non-schedulable
By marking the nodes to be replaced as non-schedulable, you can ensure that no new Pod is scheduled to them. Run the kubectl cordon command:

```shell
kubectl cordon ${replace_nodename1} ${replace_nodename2} ...
```
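To confirm that the cordon took effect, you can check the node status (a suggested check, not part of the original procedure):

```shell
# Cordoned nodes report "Ready,SchedulingDisabled" in the STATUS column.
kubectl get node ${replace_nodename1} ${replace_nodename2}
```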
Step 4: Create the clone cluster
Create the clone cluster by running the following command:

```shell
kubectl apply -f tidb-cluster-clone.yaml
```

Confirm that the new TiDB cluster, which consists of the clone cluster and the original cluster, is running normally:

- Obtain the count and state of stores in the new cluster:

    ```shell
    # store count
    pd-ctl -u http://<address>:<port> store | jq '.count'
    # store state
    pd-ctl -u http://<address>:<port> store | jq '.stores | .[] | .store.state_name'
    ```

- Access the TiDB cluster via the MySQL client.
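If you cannot reach a PD endpoint or the TiDB Service directly from your workstation, port-forwarding is one way to run these checks. The Service names below assume the defaults created by TiDB Operator (${clone_cluster_name}-pd and ${clone_cluster_name}-tidb):

```shell
# Forward the PD client port locally, then query store status through it.
kubectl port-forward -n ${namespace} svc/${clone_cluster_name}-pd 2379:2379 &
pd-ctl -u http://127.0.0.1:2379 store | jq '.count'

# Forward the TiDB MySQL port locally, then connect with the MySQL client
# (adjust the user and password as needed).
kubectl port-forward -n ${namespace} svc/${clone_cluster_name}-tidb 4000:4000 &
mysql -h 127.0.0.1 -P 4000 -u root -p
```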
Step 5: Scale in all TiDB nodes of the original cluster
Scale in all TiDB nodes of the original cluster to 0. For details, refer to Horizontal scaling.
If you access the original TiDB cluster via a load balancer or database middleware, modify that configuration to route application traffic to the clone cluster before scaling in the original cluster. Otherwise, your applications might be affected.
Step 6: Scale in all TiKV nodes of the original cluster
Scale in all TiKV nodes of the original cluster to 0. For details, refer to Horizontal scaling.
Step 7: Scale in all PD nodes of the original cluster
Scale in all PD nodes of the original cluster to 0. For details, refer to Horizontal scaling.
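Steps 5 through 7 all come down to setting the corresponding replicas field of the original TidbCluster to 0. The following minimal sketch patches the object directly; Horizontal scaling describes the full procedure and its caveats, and editing the object with kubectl edit works equally well. Wait for each scale-in to complete before starting the next:

```shell
# Step 5: scale in all TiDB nodes of the original cluster.
kubectl patch tc ${origin_cluster_name} -n ${namespace} --type merge -p '{"spec":{"tidb":{"replicas":0}}}'
# Step 6: scale in all TiKV nodes (wait until the stores are fully scaled in).
kubectl patch tc ${origin_cluster_name} -n ${namespace} --type merge -p '{"spec":{"tikv":{"replicas":0}}}'
# Step 7: scale in all PD nodes.
kubectl patch tc ${origin_cluster_name} -n ${namespace} --type merge -p '{"spec":{"pd":{"replicas":0}}}'
```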
Step 8: Delete the spec.cluster field in the clone cluster
Delete the spec.cluster field from the clone cluster by running the following command:

```shell
kubectl patch -n ${namespace} tc ${clone_cluster_name} --type=json -p '[{"op":"remove", "path":"/spec/cluster"}]'
```

- ${namespace} is the namespace of the clone cluster (unchanged).
- ${clone_cluster_name} is the name of the clone cluster.
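To confirm that the field has been removed, you can read it back; empty output means the clone cluster no longer references the original cluster (a suggested check, not part of the original procedure):

```shell
# Prints nothing once spec.cluster has been removed.
kubectl get tc ${clone_cluster_name} -n ${namespace} -o jsonpath='{.spec.cluster}'
```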
Step 9: Delete the cluster, data, and nodes of the original cluster
1. Delete the TidbCluster object of the original cluster:

    ```shell
    kubectl delete -n ${namespace} tc ${origin_cluster_name}
    ```

    - ${namespace} is the namespace of the original cluster (unchanged).
    - ${origin_cluster_name} is the name of the original cluster.

2. Delete the data of the original cluster. For details, refer to Delete PV and data (see also the sketch after this list).

3. Delete the nodes to be replaced from the Kubernetes cluster:

    ```shell
    kubectl delete node ${replace_nodename1} ${replace_nodename2} ...
    ```
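For sub-step 2, Delete PV and data describes the full procedure. The following is a minimal sketch, assuming the standard app.kubernetes.io/instance label that TiDB Operator applies to PVCs:

```shell
# List, then delete, the PVCs that belonged to the original cluster.
kubectl get pvc -n ${namespace} -l app.kubernetes.io/instance=${origin_cluster_name}
kubectl delete pvc -n ${namespace} -l app.kubernetes.io/instance=${origin_cluster_name}

# The corresponding local PVs become Released; delete them once you have confirmed
# that the data on the replaced nodes is no longer needed.
kubectl get pv | grep Released
```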