# Deploy a TiDB Cluster on Kubernetes
This document describes how to deploy a TiDB cluster on Kubernetes.
## Prerequisites
- TiDB Operator is deployed.
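To confirm that TiDB Operator is running before you continue, list its pods. The `tidb-admin` namespace below is an assumption based on a common installation namespace; adjust it to the namespace where you deployed TiDB Operator:

```shell
kubectl get pods -n tidb-admin
```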
## Configure the TiDB cluster
A TiDB cluster consists of the following components. Each component is managed by a corresponding Custom Resource Definition (CRD):
| Component | CRD |
|---|---|
| PD | PDGroup |
| TiKV | TiKVGroup |
| TiDB | TiDBGroup |
| TiProxy (optional) | TiProxyGroup |
| TiFlash (optional) | TiFlashGroup |
| TiCDC (optional) | TiCDCGroup |
In the following steps, you will define a TiDB cluster using the `Cluster` CRD. Then, in each component CRD, specify the `cluster.name` field to associate the component with the cluster:

```yaml
spec:
  cluster:
    name: <cluster>
```
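For example, a `TiKVGroup` that joins a cluster named `basic` (the cluster deployed later in this document) sets the field as follows:

```yaml
apiVersion: core.pingcap.com/v1alpha1
kind: TiKVGroup
metadata:
  name: tikv
  namespace: db
spec:
  cluster:
    name: basic
```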
Before deploying the cluster, prepare a YAML file for each component. The following are example configuration files:

- PD: `pd.yaml`
- TiKV: `tikv.yaml`
- TiDB: `tidb.yaml`
- TiFlash: `tiflash.yaml`
- TiProxy: `tiproxy.yaml`
- TiCDC: `ticdc.yaml`
### Configure component version
Use the `version` field to specify the component version:

```yaml
spec:
  template:
    spec:
      version: v8.5.2
```
To use a custom image, set the `image` field. If the `image` field does not include a tag, the value of the `version` field is used as the image tag:

```yaml
spec:
  template:
    spec:
      version: v8.5.2
      image: gcr.io/xxx/tidb
```
If the image tag does not follow semantic versioning, keep a valid semantic version in the `version` field and specify the full image, including its tag, in the `image` field:

```yaml
spec:
  template:
    spec:
      version: v8.5.2
      image: gcr.io/xxx/tidb:dev
```
### Configure resources
Use the `spec.template.spec.resources` field to define the CPU and memory resources for a component:

```yaml
spec:
  template:
    spec:
      resources:
        cpu: "4"
        memory: 8Gi
```
### Configure component parameters
Use the `spec.template.spec.config` field to define the component's `config.toml` settings:

```yaml
spec:
  template:
    spec:
      config: |
        [log]
        level = "warn"
```
### Configure volumes
Use the `spec.template.spec.volumes` field to define mounted volumes for a component:

```yaml
spec:
  template:
    spec:
      volumes:
      - name: test
        mounts:
        - mountPath: "/test"
        storage: 100Gi
```
Some components support a `type` field that specifies a volume's purpose. The related fields in `config.toml` are updated automatically. For example:

```yaml
apiVersion: core.pingcap.com/v1alpha1
kind: TiKVGroup
...
spec:
  template:
    spec:
      volumes:
      - name: data
        mounts:
        # data is for TiKV's data dir
        - type: data
        storage: 100Gi
```
You can also specify a StorageClass and a VolumeAttributesClass. For details, see Volume Configuration.
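For example, a volume that requests a specific StorageClass might look like the following sketch. The `storageClassName` field name and the `ebs-gp3` class are assumptions for illustration; confirm the exact schema in Volume Configuration:

```yaml
spec:
  template:
    spec:
      volumes:
      - name: data
        mounts:
        - type: data
        storage: 100Gi
        # Assumed field name; see Volume Configuration for the authoritative schema
        storageClassName: ebs-gp3
```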
### Configure scheduling policies
Use the `spec.schedulePolicies` field to distribute components evenly across nodes:

```yaml
spec:
  schedulePolicies:
  - type: EvenlySpread
    evenlySpread:
      topologies:
      - topology:
          topology.kubernetes.io/zone: us-west-2a
      - topology:
          topology.kubernetes.io/zone: us-west-2b
      - topology:
          topology.kubernetes.io/zone: us-west-2c
```
To assign weights to topologies, use the `weight` field:

```yaml
spec:
  schedulePolicies:
  - type: EvenlySpread
    evenlySpread:
      topologies:
      - weight: 2
        topology:
          topology.kubernetes.io/zone: us-west-2a
      - topology:
          topology.kubernetes.io/zone: us-west-2b
```
You can also configure other scheduling options, such as node selectors, tolerations, and affinity, using the Overlay feature, as shown in the sketch below.
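The following sketch shows how a toleration might be added through an overlay. The `overlay.pod.spec` layout here is an assumption about how the Overlay feature exposes standard Pod fields; verify it against the CRD reference before use:

```yaml
spec:
  template:
    spec:
      # Assumed overlay layout; verify against the CRD reference
      overlay:
        pod:
          spec:
            tolerations:
            - key: dedicated
              operator: Equal
              value: tidb
              effect: NoSchedule
```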
## Deploy the TiDB cluster
After preparing the YAML files for each component, deploy the TiDB cluster by following these steps:
1. Create a namespace:

    ```shell
    kubectl create namespace db
    ```

2. Deploy the TiDB cluster:

    Option 1: Deploy each component individually. The following example shows how to deploy a TiDB cluster with PD, TiKV, and TiDB.

    The following is an example configuration for the `Cluster` CRD:

    ```yaml
    apiVersion: core.pingcap.com/v1alpha1
    kind: Cluster
    metadata:
      name: basic
      namespace: db
    spec: {}
    ```

    Create the `Cluster` CRD:

    ```shell
    kubectl apply -f cluster.yaml --server-side
    ```

    The following is an example configuration for the PD component:

    ```yaml
    apiVersion: core.pingcap.com/v1alpha1
    kind: PDGroup
    metadata:
      name: pd
      namespace: db
    spec:
      cluster:
        name: basic
      replicas: 3
      template:
        metadata:
          annotations:
            author: pingcap
        spec:
          version: v8.5.2
          volumes:
          - name: data
            mounts:
            - type: data
            storage: 20Gi
    ```

    Create the PD component:

    ```shell
    kubectl apply -f pd.yaml --server-side
    ```

    The following is an example configuration for the TiKV component:

    ```yaml
    apiVersion: core.pingcap.com/v1alpha1
    kind: TiKVGroup
    metadata:
      name: tikv
      namespace: db
    spec:
      cluster:
        name: basic
      replicas: 3
      template:
        metadata:
          annotations:
            author: pingcap
        spec:
          version: v8.5.2
          volumes:
          - name: data
            mounts:
            - type: data
            storage: 100Gi
    ```

    Create the TiKV component:

    ```shell
    kubectl apply -f tikv.yaml --server-side
    ```

    The following is an example configuration for the TiDB component:

    ```yaml
    apiVersion: core.pingcap.com/v1alpha1
    kind: TiDBGroup
    metadata:
      name: tidb
      namespace: db
    spec:
      cluster:
        name: basic
      replicas: 2
      template:
        metadata:
          annotations:
            author: pingcap
        spec:
          version: v8.5.2
    ```

    Create the TiDB component:

    ```shell
    kubectl apply -f tidb.yaml --server-side
    ```

    Option 2: Deploy all components at once. Save all component YAML files in a local directory, and then run the following command:

    ```shell
    kubectl apply -f ./<directory> --server-side
    ```

3. Check the status of the TiDB cluster:

    ```shell
    kubectl get cluster -n db
    kubectl get group -n db
    ```
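After all components are running, you can connect to the cluster to verify that it accepts SQL connections. TiDB serves the MySQL protocol on port 4000; the service name `tidb-tidb` below is an assumption based on this example's group name, so list the services first and substitute the actual name:

```shell
# List the services created by TiDB Operator
kubectl get svc -n db

# Forward the TiDB MySQL port to your local machine
# (replace tidb-tidb with the actual service name)
kubectl port-forward -n db svc/tidb-tidb 4000:4000

# In another terminal, connect with a MySQL client
mysql -h 127.0.0.1 -P 4000 -u root
```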