Deploy TiDB in General Kubernetes
This document describes how to deploy a TiDB cluster in general Kubernetes.
Prerequisites
- Meet the prerequisites.
- Complete deploying TiDB Operator.
- Complete configuring the TiDB cluster.
Deploy the TiDB cluster
Create a Namespace:

kubectl create namespace ${namespace}

Deploy the TiDB cluster:
kubectl apply -f ${cluster_name} -n ${namespace}
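For example, a minimal run with hypothetical values filled in (a namespace named tidb-cluster and a manifest file basic-cluster.yaml produced in the configuration step; substitute your own names):

# Hypothetical namespace and manifest names; replace with your own.
kubectl create namespace tidb-cluster
kubectl apply -f basic-cluster.yaml -n tidb-cluster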
If the server does not have access to an external network, you need to download the Docker images used by the TiDB cluster on a machine with Internet access, upload them to the server, and then use docker load to install them on the server.

To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v5.3.0):
pingcap/pd:v5.3.0
pingcap/tikv:v5.3.0
pingcap/tidb:v5.3.0
pingcap/tidb-binlog:v5.3.0
pingcap/ticdc:v5.3.0
pingcap/tiflash:v5.3.0
pingcap/tidb-monitor-reloader:v1.0.1
pingcap/tidb-monitor-initializer:v5.3.0
grafana/grafana:6.0.1
prom/prometheus:v2.18.1
busybox:1.26.2

Next, download all these images with the following commands:
docker pull pingcap/pd:v5.3.0
docker pull pingcap/tikv:v5.3.0
docker pull pingcap/tidb:v5.3.0
docker pull pingcap/tidb-binlog:v5.3.0
docker pull pingcap/ticdc:v5.3.0
docker pull pingcap/tiflash:v5.3.0
docker pull pingcap/tidb-monitor-reloader:v1.0.1
docker pull pingcap/tidb-monitor-initializer:v5.3.0
docker pull grafana/grafana:6.0.1
docker pull prom/prometheus:v2.18.1
docker pull busybox:1.26.2

docker save -o pd-v5.3.0.tar pingcap/pd:v5.3.0
docker save -o tikv-v5.3.0.tar pingcap/tikv:v5.3.0
docker save -o tidb-v5.3.0.tar pingcap/tidb:v5.3.0
docker save -o tidb-binlog-v5.3.0.tar pingcap/tidb-binlog:v5.3.0
docker save -o ticdc-v5.3.0.tar pingcap/ticdc:v5.3.0
docker save -o tiflash-v5.3.0.tar pingcap/tiflash:v5.3.0
docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
docker save -o tidb-monitor-initializer-v5.3.0.tar pingcap/tidb-monitor-initializer:v5.3.0
docker save -o grafana-6.0.1.tar grafana/grafana:6.0.1
docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
docker save -o busybox-1.26.2.tar busybox:1.26.2
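If you prefer not to repeat the command for each image, the same pull-and-save sequence can be written as a loop. The following is a minimal sketch; the image list is the one above, and the .tar file names are derived to match the per-image commands:

# Pull each image, then save it to a .tar archive named after the image and tag,
# e.g. pingcap/pd:v5.3.0 -> pd-v5.3.0.tar
for image in pingcap/pd:v5.3.0 pingcap/tikv:v5.3.0 pingcap/tidb:v5.3.0 \
    pingcap/tidb-binlog:v5.3.0 pingcap/ticdc:v5.3.0 pingcap/tiflash:v5.3.0 \
    pingcap/tidb-monitor-reloader:v1.0.1 pingcap/tidb-monitor-initializer:v5.3.0 \
    grafana/grafana:6.0.1 prom/prometheus:v2.18.1 busybox:1.26.2; do
  docker pull "${image}"
  docker save -o "$(basename "${image}" | tr ':' '-').tar" "${image}"
done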
Next, upload these Docker images to the server, and execute docker load to install these Docker images on the server:

docker load -i pd-v5.3.0.tar
docker load -i tikv-v5.3.0.tar
docker load -i tidb-v5.3.0.tar
docker load -i tidb-binlog-v5.3.0.tar
docker load -i ticdc-v5.3.0.tar
docker load -i tiflash-v5.3.0.tar
docker load -i tidb-monitor-reloader-v1.0.1.tar
docker load -i tidb-monitor-initializer-v5.3.0.tar
docker load -i grafana-6.0.1.tar
docker load -i prometheus-v2.18.1.tar
docker load -i busybox-1.26.2.tar
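Equivalently, assuming all the .tar archives are uploaded into a single directory on the server, a short loop sketch:

# Load every saved image archive in the current directory.
for tar in *.tar; do
  docker load -i "${tar}"
done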
View the Pod status:

kubectl get po -n ${namespace} -l app.kubernetes.io/instance=${cluster_name}
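To follow the rollout until all components are ready, you can watch the Pods, or query the TidbCluster object itself (the tidbcluster resource type is available once the TiDB Operator CRDs are installed, which is a prerequisite of this document):

# Stream Pod status updates as the cluster comes up.
kubectl get po -n ${namespace} -l app.kubernetes.io/instance=${cluster_name} -w
# Check the overall status of the TidbCluster object.
kubectl get tidbcluster ${cluster_name} -n ${namespace}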
You can use TiDB Operator to deploy and manage multiple TiDB clusters in a single Kubernetes cluster by repeating the above procedure and replacing ${cluster_name} with a different name. Different clusters can be deployed in the same namespace or in different namespaces, depending on your needs.
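For example, a sketch of deploying a second, independent cluster into its own namespace (the names tidb-cluster-2 and cluster2.yaml are hypothetical):

# A second TidbCluster manifest applied in a separate namespace.
kubectl create namespace tidb-cluster-2
kubectl apply -f cluster2.yaml -n tidb-cluster-2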
Initialize the TiDB cluster
If you want to initialize your cluster after deployment, refer to Initialize a TiDB Cluster in Kubernetes.
Configure TiDB monitoring
For more information, see Deploy monitoring and alerts for a TiDB cluster.
Collect logs
System and application logs can be useful for troubleshooting issues and automating operations. By default, TiDB components output logs to the container's stdout and stderr, and log rotation is performed automatically by the container runtime environment. When a Pod restarts, its container logs are lost. To prevent log loss, it is recommended to set up log collection; refer to Collect logs of TiDB and its related components.
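Until log collection is set up, you can still inspect a container's stdout and stderr with kubectl logs; a sketch, assuming the default StatefulSet naming where a TiDB Pod is named ${cluster_name}-tidb-0 and its main container is tidb:

# Stream current logs from the tidb container of one Pod.
kubectl logs -n ${namespace} ${cluster_name}-tidb-0 -c tidb -f
# Show logs from the previous container instance after a restart.
kubectl logs -n ${namespace} ${cluster_name}-tidb-0 -c tidb --previous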