TiDB Scheduler is a TiDB implementation of Kubernetes scheduler extender. TiDB Scheduler is used to add new scheduling rules to Kubernetes. This document introduces these new scheduling rules and how TiDB Scheduler works.
A `kube-scheduler` is deployed by default in the Kubernetes cluster for Pod scheduling. The default scheduler name is `default-scheduler`. In early Kubernetes versions (earlier than v1.16), the `default-scheduler` was not flexible enough to support even scheduling of Pods. Therefore, to support even scheduling for the TiDB cluster Pods, TiDB Operator uses TiDB Scheduler (`tidb-scheduler`) to extend the scheduling rules of the `default-scheduler`.
Starting from Kubernetes v1.16, the `default-scheduler` has introduced the `EvenPodsSpread` feature, which controls how Pods are spread across the failure domains of your Kubernetes cluster. The feature entered the beta phase in v1.18 and became generally available in v1.19.

Therefore, if the Kubernetes cluster meets one of the following conditions, you can use the `default-scheduler` directly instead of `tidb-scheduler`. In that case, you need to configure `topologySpreadConstraints` to make Pods evenly spread across different topologies.
- The Kubernetes version is v1.18.x and the `EvenPodsSpread` feature gate is enabled.
- The Kubernetes version is v1.19 or later.
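As a sketch, a constraint that spreads TiKV Pods evenly across zones could look like the following in a standard Kubernetes Pod spec. The `maxSkew` value and the label selector below are illustrative assumptions, not values mandated by TiDB Operator:

```yaml
topologySpreadConstraints:
- maxSkew: 1                                   # allowed imbalance between zones
  topologyKey: topology.kubernetes.io/zone     # spread across this node label
  whenUnsatisfiable: DoNotSchedule             # treat the constraint as hard
  labelSelector:
    matchLabels:
      app.kubernetes.io/component: tikv        # assumed component label
```

See the Kubernetes documentation on Pod topology spread constraints for the full field reference.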
When you change an existing TiDB cluster from `tidb-scheduler` to `default-scheduler`, a rolling update of the TiDB cluster is triggered.
A TiDB cluster includes three key components: PD, TiKV, and TiDB. Each consists of multiple nodes: PD is a Raft cluster, and TiKV is a multi-Raft group cluster. The PD and TiKV components are stateful. If the `EvenPodsSpread` feature gate is not enabled in the Kubernetes cluster, the default scheduling rules of the Kubernetes scheduler cannot meet the high availability scheduling requirements of the TiDB cluster, so the Kubernetes scheduling rules need to be extended.
Currently, Pods can be scheduled according to specific dimensions by modifying `metadata.annotations` in the TidbCluster CR, for example:

```yaml
metadata:
  annotations:
    pingcap.com/ha-topology-key: kubernetes.io/hostname
```
The configuration above indicates scheduling by the node dimension (the default). If you want to schedule Pods by other dimensions, such as `pingcap.com/ha-topology-key: zone`, which means scheduling by zone, each node must also be labeled accordingly:

```shell
kubectl label nodes node1 zone=zone1
```
Different nodes may have different labels or the same label. If a node is not labeled, the scheduler does not schedule any Pod to that node.
TiDB Scheduler implements the following customized scheduling rules. The examples below are based on node scheduling; scheduling rules based on other dimensions work the same way.
Scheduling rule 1: Make sure that the number of PD instances scheduled on each node is less than `Replicas / 2`. For example:

| PD cluster size (Replicas) | Maximum number of PD instances that can be scheduled on each node |
| --- | --- |
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
| 4 | 1 |
| 5 | 2 |
| ... | ... |
Scheduling rule 2: If the number of Kubernetes nodes is less than three (in this case, TiKV cannot achieve high availability), scheduling is not limited; otherwise, the number of TiKV instances that can be scheduled on each node is no more than `ceil(Replicas / 3)`. For example:

| TiKV cluster size (Replicas) | Maximum number of TiKV instances that can be scheduled on each node | Best scheduling distribution |
| --- | --- | --- |
| 3 | 1 | 1/1/1 |
| 4 | 2 | 1/1/2 |
| 5 | 2 | 1/2/2 |
| 6 | 2 | 2/2/2 |
| 7 | 3 | 2/2/3 |
| 8 | 3 | 2/3/3 |
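The two limits above can be reproduced with simple integer arithmetic. The following shell sketch (the `replicas` variable is illustrative) computes both per-node limits for clusters of at least 3 replicas:

```shell
replicas=5

# Rule 1: each node holds fewer than Replicas / 2 PD instances,
# which for Replicas >= 3 means at most floor((Replicas - 1) / 2).
max_pd_per_node=$(( (replicas - 1) / 2 ))
echo "max PD per node: ${max_pd_per_node}"

# Rule 2: each node holds at most ceil(Replicas / 3) TiKV instances;
# integer ceil(a / b) can be written as (a + b - 1) / b.
max_tikv_per_node=$(( (replicas + 2) / 3 ))
echo "max TiKV per node: ${max_tikv_per_node}"
```

For `replicas=5`, both limits evaluate to 2, matching the tables above.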
TiDB Scheduler adds these customized scheduling rules by implementing a Kubernetes scheduler extender.
The TiDB Scheduler component is deployed as one or more Pods, but only one Pod is working at the same time. Each Pod has two containers inside: one container is the native `kube-scheduler`, and the other is `tidb-scheduler`, implemented as a Kubernetes scheduler extender.
If you configure the cluster to use `tidb-scheduler` in the TidbCluster CR, the `.spec.schedulerName` attribute of the PD, TiDB, and TiKV Pods created by TiDB Operator is set to `tidb-scheduler`, which means that TiDB Scheduler is used for the scheduling.
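For illustration, a minimal TidbCluster fragment that selects `tidb-scheduler` might look like this (`basic` is a placeholder cluster name; other required fields are omitted):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  schedulerName: tidb-scheduler
```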
The scheduling process of a Pod is as follows:

1. `kube-scheduler` pulls all Pods whose `.spec.schedulerName` is `tidb-scheduler`. Each Pod is first filtered using the default Kubernetes scheduling rules.
2. `kube-scheduler` sends a request to the `tidb-scheduler` service. Then `tidb-scheduler` filters the candidate nodes through the customized scheduling rules (as mentioned above) and returns the schedulable nodes to `kube-scheduler`.
3. `kube-scheduler` determines the node to which the Pod is scheduled.
If a Pod cannot be scheduled, see the troubleshooting document to diagnose and solve the issue.
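Before consulting the troubleshooting document, you can often see why a Pod stays pending by inspecting its scheduling events. The namespace and Pod name below are placeholders:

```shell
kubectl describe pod -n tidb-cluster basic-pd-0
# Look for FailedScheduling messages in the Events section.
```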