
Deploy TiDB on Google Cloud

This tutorial is designed to be run directly in Google Cloud Shell.

It takes you through the following steps:

  • Launch a new 3-node Kubernetes cluster (optional)
  • Install the Helm package manager for Kubernetes
  • Deploy the TiDB Operator
  • Deploy your first TiDB cluster
  • Connect to the TiDB cluster
  • Scale out the TiDB cluster
  • Shut down the Kubernetes cluster (optional)

Select a project

This tutorial launches a 3-node Kubernetes cluster of n1-standard-1 machines. Pricing information is available on the Google Cloud Compute Engine pricing page.

Please select a project before proceeding.

Enable API access

This tutorial requires use of the Compute and Container APIs. Please enable them before proceeding.

Configure gcloud defaults

This step defaults gcloud to your preferred project and zone, which simplifies the commands used for the rest of this tutorial:

gcloud config set project {{project-id}}
gcloud config set compute/zone us-west1-a
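
As an optional check, you can print the active gcloud configuration to confirm the project and zone defaults:

gcloud config list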

Launch a 3-node Kubernetes cluster

It's now time to launch the cluster! The following command creates a 3-node Kubernetes cluster of n1-standard-1 machines.

It takes a few minutes to complete:

gcloud container clusters create tidb

Once the cluster has launched, set it to be the default:

gcloud config set container/cluster tidb

The last step is to verify that kubectl can connect to the cluster and that all three nodes are running:

kubectl get nodes

If you see Ready for all nodes, congratulations! You've set up your first Kubernetes cluster.
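
As an additional optional check, kubectl cluster-info shows the endpoint that kubectl is connected to:

kubectl cluster-info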

Install Helm

Helm is the package manager for Kubernetes, and is what allows us to install all of the distributed components of TiDB in a single step. Helm requires both a server-side and a client-side component to be installed.

Install helm:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash

Copy helm to your $HOME directory so that it persists after the Cloud Shell reaches its idle timeout:

mkdir -p ~/bin && \
cp /usr/local/bin/helm ~/bin && \
echo 'PATH="$PATH:$HOME/bin"' >> ~/.bashrc

Tiller, Helm's server-side component, also needs RBAC permissions to work properly. Grant them and initialize Tiller with:

kubectl apply -f ./manifests/tiller-rbac.yaml && \
helm init --service-account tiller --upgrade

It takes a minute or so for Tiller to initialize. You can watch for its pod with:

watch "kubectl get pods --namespace kube-system | grep tiller"

When you see Running, it's time to hit Ctrl+C and proceed to the next step!
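
Optionally, you can confirm that the Helm client can reach Tiller; helm version reports both the client and server versions:

helm version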

Add Helm repo

The PingCAP Helm repository (https://charts.pingcap.org/) houses the charts managed by PingCAP, such as tidb-operator, tidb-cluster, and tidb-backup. Add and verify the repository with the following commands:

helm repo add pingcap https://charts.pingcap.org/ && \
helm repo list

Then you can check the available charts:

helm repo update
helm search tidb-cluster -l
helm search tidb-operator -l
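
If you want to see which settings the tidb-cluster chart exposes (replica counts, storage classes, and so on), you can print its default values as an optional step:

helm inspect values pingcap/tidb-cluster --version=<chartVersion>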

Deploy TiDB Operator

Note that <chartVersion> is used in the rest of the document to represent the chart version, e.g. v1.0.0.

The first TiDB component we are going to install is the TiDB Operator, using a Helm Chart. TiDB Operator is the management system that works with Kubernetes to bootstrap your TiDB cluster and keep it running. This step assumes you are in the tidb-operator working directory:

kubectl apply -f ./manifests/crd.yaml && \
kubectl apply -f ./manifests/gke/persistent-disk.yaml && \
helm install pingcap/tidb-operator -n tidb-admin --namespace=tidb-admin --version=<chartVersion>

We can watch the operator come up with:

watch kubectl get pods --namespace tidb-admin -o wide

When you see both tidb-scheduler and tidb-controller-manager are Running, press Ctrl+C and proceed to launch a TiDB cluster!
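
As an optional check, you can also list the Deployments that the operator chart created in the tidb-admin namespace:

kubectl get deployments -n tidb-admin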

Deploy your first TiDB cluster

Now with a single command we can bring up a full TiDB cluster:

helm install pingcap/tidb-cluster -n demo --namespace=tidb --set pd.storageClassName=pd-ssd,tikv.storageClassName=pd-ssd --version=<chartVersion>

It takes a few minutes to launch. You can monitor the progress with:

watch kubectl get pods --namespace tidb -o wide

The TiDB cluster includes 2 TiDB pods, 3 TiKV pods, and 3 PD pods. When all pods are Running, it's time to press Ctrl+C and proceed!
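
TiDB Operator also represents the cluster as a TidbCluster custom resource (defined by the crd.yaml applied earlier). As an optional check, you can query it with:

kubectl get tidbcluster -n tidb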

Connect to the TiDB cluster

There can be a small delay between the pod being up and running and the service being available. You can watch the list of available services with:

watch "kubectl get svc -n tidb"

When you see demo-tidb appear, you can press Ctrl+C. The service is ready to connect to!

To connect to TiDB within the Kubernetes cluster, you can establish a tunnel between the TiDB service and your Cloud Shell. This is recommended only for debugging purposes, because the tunnel will not automatically be transferred if your Cloud Shell restarts. To establish a tunnel:

kubectl -n tidb port-forward svc/demo-tidb 4000:4000 &>/tmp/port-forward.log &
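
The tunnel runs in the background. If the connection fails later (for example, after Cloud Shell restarts), the log file used above can help with troubleshooting:

cat /tmp/port-forward.log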

Then, from your Cloud Shell, install the MySQL client and connect through the tunnel:

sudo apt-get install -y mysql-client && \
mysql -h 127.0.0.1 -u root -P 4000

Try out a MySQL command inside your MySQL terminal:

select tidb_version();

If you did not specify a password when installing the Helm chart, set one now:

SET PASSWORD FOR 'root'@'%' = '<change-to-your-password>';
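
As an optional sanity check, you can create a small table and query it. The database and table names below are only examples:

CREATE DATABASE IF NOT EXISTS hello; -- example database name
CREATE TABLE hello.greeting (id INT PRIMARY KEY, message VARCHAR(64)); -- example table
INSERT INTO hello.greeting VALUES (1, 'Hello, TiDB!');
SELECT * FROM hello.greeting;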

Congratulations, you are now up and running with a distributed TiDB database compatible with MySQL!

Scale out the TiDB cluster

With a single command we can easily scale out the TiDB cluster. To scale out TiKV:

helm upgrade demo pingcap/tidb-cluster --set pd.storageClassName=pd-ssd,tikv.storageClassName=pd-ssd,tikv.replicas=5 --version=<chartVersion>

This increases the number of TiKV pods from the default 3 to 5. You can verify the change with:

kubectl get po -n tidb
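
The stateless TiDB (SQL) layer can be scaled the same way. The command below is a sketch that assumes the chart exposes a tidb.replicas value (you can confirm this in the chart's default values); all previously set values are repeated so the upgrade does not reset them:

helm upgrade demo pingcap/tidb-cluster --set pd.storageClassName=pd-ssd,tikv.storageClassName=pd-ssd,tikv.replicas=5,tidb.replicas=3 --version=<chartVersion>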

Access the Grafana dashboard

To access the Grafana dashboards, you can create a tunnel between the Grafana service and your shell. To do so, use the following command:

kubectl -n tidb port-forward svc/demo-grafana 3000:3000 &>/dev/null &

In Cloud Shell, click on the Web Preview button and enter 3000 for the port. This opens a new browser tab pointing to the Grafana dashboards. Alternatively, open the following URL in a new tab or window: https://ssh.cloud.google.com/devshell/proxy?port=3000.

If not using Cloud Shell, point a browser to localhost:3000.

Destroy the TiDB cluster

When the TiDB cluster is not needed, you can delete it with the following command:

helm delete demo --purge

The above command only deletes the running pods; the data remains on the persistent volumes. If you do not need the data anymore, run the following commands to clean up the data and the dynamically created persistent disks:

kubectl delete pvc -n tidb -l app.kubernetes.io/instance=demo,app.kubernetes.io/managed-by=tidb-operator && \
kubectl get pv -l app.kubernetes.io/namespace=tidb,app.kubernetes.io/managed-by=tidb-operator,app.kubernetes.io/instance=demo -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
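
Once the clean-up finishes, the PVC list for the tidb namespace should come back empty. As an optional check:

kubectl get pvc -n tidb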

Shut down the Kubernetes cluster

Once you have finished experimenting, you can delete the Kubernetes cluster with:

gcloud container clusters delete tidb