This document describes how to deploy a TiDB cluster on general Kubernetes.
Use the following commands to get the `values.yaml` configuration file of the `tidb-cluster` chart to be deployed:

```shell
mkdir -p /home/tidb/<release-name> && \
helm inspect values pingcap/tidb-cluster --version=<chart-version> > /home/tidb/<release-name>/values-<release-name>.yaml
```
- You can replace `/home/tidb` with any directory you like.
- `release-name` is the prefix of resources used by TiDB in Kubernetes (such as Pod, Service, etc.). You can give it a name that is easy to memorize, but this name must be globally unique. You can view the existing `release-name`s in the cluster by running the `helm ls -q` command.
- `chart-version` is the version released by the `tidb-cluster` chart. You can view the currently supported versions by running the `helm search -l tidb-cluster` command.
- In the rest of this document, `values.yaml` refers to `/home/tidb/<release-name>/values-<release-name>.yaml`.
The TiDB cluster uses `local-storage` by default.
- For the production environment, local storage is recommended. The actual local storage in Kubernetes clusters might be sorted by disk types, such as `nvme-disks` and `sas-disks`.
- For the demonstration environment or functional verification, you can use network storage, such as `ebs` and `nfs`.
Different components of a TiDB cluster have different disk requirements. Before deploying a TiDB cluster, select the appropriate storage class for each component according to the storage classes supported by the current Kubernetes cluster and your usage scenario. You can set the storage class by modifying `storageClassName` of each component in `values.yaml`. For the storage classes supported by the Kubernetes cluster, check with your system administrator.
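For example, here is a sketch of the relevant excerpt of `values.yaml`. The `local-storage` class name and the exact key layout are assumptions; check the comments in the values file you generated with `helm inspect values` for your chart version:

```yaml
# values.yaml (excerpt) -- a sketch, not a complete file.
pd:
  # Storage class used by PD data volumes; the class must exist in the cluster.
  storageClassName: local-storage
tikv:
  # TiKV benefits most from fast local disks in production.
  storageClassName: local-storage
```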
If you set a storage class that does not exist in the Kubernetes cluster, the creation of the TiDB cluster goes to the Pending state. In this situation, you must destroy the TiDB cluster in Kubernetes.
The deployed cluster topology by default has 3 PD Pods, 3 TiKV Pods, 2 TiDB Pods, and 1 Monitor Pod. In this deployment topology, the scheduler extender of TiDB Operator requires at least 3 nodes in the Kubernetes cluster to provide high availability. If the number of Kubernetes cluster nodes is less than 3, 1 PD Pod goes to the Pending state, and neither TiKV Pods nor TiDB Pods are created.
When the number of nodes in the Kubernetes cluster is less than 3, to start the TiDB cluster, you can reduce both the number of PD Pods and the number of TiKV Pods in the default deployment to `1`, or modify `schedulerName` in `values.yaml` to `default-scheduler`, a built-in scheduler in Kubernetes.

`default-scheduler` is only applicable to the demonstration environment. After `schedulerName` is modified to `default-scheduler`, the scheduling of TiDB clusters neither guarantees high availability of data nor supports features such as TiDB stable scheduling.
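For a demonstration setup on fewer than 3 nodes, the changes described above might look like the following sketch in `values.yaml` (the key names are assumptions to verify against your chart version):

```yaml
# values.yaml (excerpt) -- demonstration settings only, not for production.
pd:
  # A single PD member provides no redundancy for cluster metadata.
  replicas: 1
tikv:
  # A single TiKV store provides no redundancy for data.
  replicas: 1
# Use the built-in Kubernetes scheduler instead of the tidb-scheduler
# extender; this gives up high-availability scheduling guarantees.
schedulerName: default-scheduler
```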
For more configuration parameters, see TiDB cluster configurations in Kubernetes.
After you deploy and configure TiDB Operator, deploy the TiDB cluster using the following command:

```shell
helm install pingcap/tidb-cluster --name=<release-name> --namespace=<namespace> --version=<chart-version> -f /home/tidb/<release-name>/values-<release-name>.yaml
```
`namespace` is a virtual cluster backed by the same physical cluster. You can give it a name that is easy to memorize, such as the same name as `release-name`.
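If the target namespace does not exist yet, Helm typically creates it for you, but you can also create it explicitly beforehand. A namespace is an ordinary Kubernetes object; a minimal manifest looks like this sketch (the name is a placeholder to match the `--namespace` flag):

```yaml
# namespace.yaml -- apply with: kubectl apply -f namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
```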
You can view the Pod status using the following command:

```shell
kubectl get po -n <namespace> -l app.kubernetes.io/instance=<release-name>
```
You can use TiDB Operator to deploy and manage multiple sets of TiDB clusters in a single Kubernetes cluster by repeating the above command and replacing `release-name` with a different name. Different clusters can be in the same or different `namespace`s, depending on your actual needs.