# Deploy TiDB on General Kubernetes
This document describes how to deploy a TiDB cluster on general Kubernetes.
## Prerequisites
- Meet the prerequisites.
- Complete deploying TiDB Operator.
- Configure the TiDB cluster.
## Deploy the TiDB cluster
1. Create the `Namespace`:

    ```shell
    kubectl create namespace ${namespace}
    ```

2. Deploy the TiDB cluster:
    ```shell
    kubectl apply -f ${cluster_name} -n ${namespace}
    ```

    Here, `${cluster_name}` is the `TidbCluster` manifest file prepared when you configured the TiDB cluster; a minimal sketch of such a manifest follows this procedure.

    If the server does not have an external network, you need to download the Docker images used by the TiDB cluster on a machine with Internet access, upload them to the server, and then run `docker load` to install them on the server. A scripted alternative to the per-image commands below is also sketched after this procedure.

    To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v8.1.0):
    ```shell
    pingcap/pd:v8.1.0
    pingcap/tikv:v8.1.0
    pingcap/tidb:v8.1.0
    pingcap/tidb-binlog:v8.1.0
    pingcap/ticdc:v8.1.0
    pingcap/tiflash:v8.1.0
    pingcap/tiproxy:latest
    pingcap/tidb-monitor-reloader:v1.0.1
    pingcap/tidb-monitor-initializer:v8.1.0
    grafana/grafana:7.5.11
    prom/prometheus:v2.18.1
    busybox:1.26.2
    ```

    Next, download all these images with the following command:
    ```shell
    docker pull pingcap/pd:v8.1.0
    docker pull pingcap/tikv:v8.1.0
    docker pull pingcap/tidb:v8.1.0
    docker pull pingcap/tidb-binlog:v8.1.0
    docker pull pingcap/ticdc:v8.1.0
    docker pull pingcap/tiflash:v8.1.0
    docker pull pingcap/tiproxy:latest
    docker pull pingcap/tidb-monitor-reloader:v1.0.1
    docker pull pingcap/tidb-monitor-initializer:v8.1.0
    docker pull grafana/grafana:7.5.11
    docker pull prom/prometheus:v2.18.1
    docker pull busybox:1.26.2

    docker save -o pd-v8.1.0.tar pingcap/pd:v8.1.0
    docker save -o tikv-v8.1.0.tar pingcap/tikv:v8.1.0
    docker save -o tidb-v8.1.0.tar pingcap/tidb:v8.1.0
    docker save -o tidb-binlog-v8.1.0.tar pingcap/tidb-binlog:v8.1.0
    docker save -o ticdc-v8.1.0.tar pingcap/ticdc:v8.1.0
    docker save -o tiproxy-latest.tar pingcap/tiproxy:latest
    docker save -o tiflash-v8.1.0.tar pingcap/tiflash:v8.1.0
    docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
    docker save -o tidb-monitor-initializer-v8.1.0.tar pingcap/tidb-monitor-initializer:v8.1.0
    docker save -o grafana-7.5.11.tar grafana/grafana:7.5.11
    docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
    docker save -o busybox-1.26.2.tar busybox:1.26.2
    ```

    Next, upload these Docker images to the server, and run `docker load` to install them on the server:

    ```shell
    docker load -i pd-v8.1.0.tar
    docker load -i tikv-v8.1.0.tar
    docker load -i tidb-v8.1.0.tar
    docker load -i tidb-binlog-v8.1.0.tar
    docker load -i ticdc-v8.1.0.tar
    docker load -i tiproxy-latest.tar
    docker load -i tiflash-v8.1.0.tar
    docker load -i tidb-monitor-reloader-v1.0.1.tar
    docker load -i tidb-monitor-initializer-v8.1.0.tar
    docker load -i grafana-7.5.11.tar
    docker load -i prometheus-v2.18.1.tar
    docker load -i busybox-1.26.2.tar
    ```

3. View the Pod status:
    ```shell
    kubectl get po -n ${namespace} -l app.kubernetes.io/instance=${cluster_name}
    ```
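As noted in step 2, the file passed to `kubectl apply` is a `TidbCluster` manifest. The following is a minimal sketch only: the replica counts, storage sizes, and field selection follow TiDB Operator's basic examples and are illustrative, not a recommendation; your actual manifest comes from the configuration step in the prerequisites.

```shell
# A minimal, illustrative TidbCluster manifest (not a sizing recommendation).
cat <<EOF > ${cluster_name}
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: ${cluster_name}
spec:
  version: v8.1.0
  timezone: UTC
  pvReclaimPolicy: Retain
  pd:
    baseImage: pingcap/pd
    replicas: 3
    requests:
      storage: "10Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    replicas: 3
    requests:
      storage: "100Gi"
    config: {}
  tidb:
    baseImage: pingcap/tidb
    replicas: 2
    service:
      type: ClusterIP
    config: {}
EOF
```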
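For the offline installation described in step 2, the per-image `docker pull`, `docker save`, and `docker load` commands can also be scripted. A sketch, assuming the image list from this document; the filename derivation simply mirrors the tarball names used above:

```shell
# Pull and save every image in one loop (run on the machine with Internet access).
images=(
  pingcap/pd:v8.1.0
  pingcap/tikv:v8.1.0
  pingcap/tidb:v8.1.0
  pingcap/tidb-binlog:v8.1.0
  pingcap/ticdc:v8.1.0
  pingcap/tiflash:v8.1.0
  pingcap/tiproxy:latest
  pingcap/tidb-monitor-reloader:v1.0.1
  pingcap/tidb-monitor-initializer:v8.1.0
  grafana/grafana:7.5.11
  prom/prometheus:v2.18.1
  busybox:1.26.2
)
for image in "${images[@]}"; do
  docker pull "${image}"
  # Derive the tarball name, e.g. pingcap/pd:v8.1.0 -> pd-v8.1.0.tar
  docker save -o "$(echo "${image##*/}" | tr ':' '-').tar" "${image}"
done

# On the target server, load every uploaded tarball:
for tarball in *.tar; do
  docker load -i "${tarball}"
done
```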
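After step 3, instead of polling `kubectl get po` by hand, you can optionally block until the cluster's Pods report `Ready`. A sketch using standard `kubectl` flags; the label selector matches the one used above, and the 10-minute timeout is an arbitrary choice:

```shell
kubectl wait pod \
  --namespace ${namespace} \
  -l app.kubernetes.io/instance=${cluster_name} \
  --for=condition=Ready \
  --timeout=10m
```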
You can use TiDB Operator to deploy and manage multiple TiDB clusters in a single Kubernetes cluster by repeating the above procedure and replacing `cluster_name` with a different name. Different clusters can be in the same or different `namespace`, depending on your actual needs.
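For example, the following hypothetical commands (the cluster and namespace names are illustrative) deploy two independent clusters managed by the same TiDB Operator, each in its own namespace:

```shell
kubectl create namespace team-a
kubectl apply -f cluster-a -n team-a

kubectl create namespace team-b
kubectl apply -f cluster-b -n team-b
```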
## Initialize the TiDB cluster
If you want to initialize your cluster after deployment, refer to Initialize a TiDB Cluster on Kubernetes.
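For orientation, initialization is driven by a `TidbInitializer` resource. A minimal sketch following TiDB Operator's basic examples; the `tnir/mysqlclient` image and the `initSql` statement are illustrative choices, not requirements:

```shell
cat <<EOF | kubectl apply -n ${namespace} -f -
apiVersion: pingcap.com/v1alpha1
kind: TidbInitializer
metadata:
  name: ${cluster_name}-init
spec:
  image: tnir/mysqlclient
  cluster:
    namespace: ${namespace}
    name: ${cluster_name}
  initSql: |-
    CREATE DATABASE app;
EOF
```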
## Configure TiDB monitoring
For more information, see Deploy monitoring and alerts for a TiDB cluster.
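Monitoring is deployed through a `TidbMonitor` resource. A minimal sketch, with versions matching the images listed earlier in this document; treat it as illustrative rather than a definitive configuration:

```shell
cat <<EOF | kubectl apply -n ${namespace} -f -
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: ${cluster_name}-monitor
spec:
  clusters:
    - name: ${cluster_name}
  prometheus:
    baseImage: prom/prometheus
    version: v2.18.1
  grafana:
    baseImage: grafana/grafana
    version: 7.5.11
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v8.1.0
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1
  imagePullPolicy: IfNotPresent
EOF
```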
## Collect logs
System and application logs can be useful for troubleshooting issues and automating operations. By default, TiDB components output logs to the container's `stdout` and `stderr`, and log rotation is performed automatically by the container runtime. When a Pod restarts, its container logs are lost. To prevent log loss, it is recommended to collect logs of TiDB and its related components.
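Until such collection is in place, you can still inspect a component's recent output with standard `kubectl logs` flags. In the sketch below, `${pod_name}` and the container name `tidb` are placeholders for the Pod and component you are inspecting:

```shell
# Tail the last 100 lines of the tidb container in a Pod:
kubectl logs -n ${namespace} ${pod_name} -c tidb --tail=100

# Logs of the previous container instance, if the container restarted:
kubectl logs -n ${namespace} ${pod_name} -c tidb --previous
```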