# Back Up Data to GCS Using Dumpling
This document describes how to back up the data of the TiDB cluster on Kubernetes to Google Cloud Storage (GCS). "Backup" in this document refers to full backup (ad-hoc full backup and scheduled full backup).
The backup method described in this document is implemented using CustomResourceDefinition (CRD) in TiDB Operator v1.1 or later versions. Dumpling is used to get the logical backup of the TiDB cluster, and then this backup data is uploaded to the remote GCS storage.
Dumpling is a data export tool that exports data stored in TiDB/MySQL as SQL or CSV files and can be used to make a logical full backup or export.
## Usage scenarios
You can use the backup method described in this document if you want to make an ad-hoc full backup or scheduled full backup of the TiDB cluster data to GCS with the following needs:
- To export SQL or CSV files
- To limit the memory usage of a single SQL statement
- To export the historical data snapshot of TiDB
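For example, the first scenario (exporting CSV instead of SQL files) maps to a Dumpling option that you can pass through the `Backup` CR introduced later in this document. The fragment below is a sketch using Dumpling's `--filetype` option:

```yaml
# Fragment of a Backup CR spec (see the full backup-gcs.yaml example below).
spec:
  dumpling:
    options:
    - --threads=16
    - --rows=10000
    - --filetype=csv   # write CSV files instead of the default SQL files
```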
## Prerequisites
Before you use Dumpling to back up the TiDB cluster data to GCS, make sure that you have the following privileges:
- The `SELECT` and `UPDATE` privileges of the `mysql.tidb` table: Before and after the backup, the `Backup` CR needs a database account with these privileges to adjust the GC time.
- SELECT
- RELOAD
- LOCK TABLES
- REPLICATION CLIENT
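The privileges above can be granted as follows. This is only a sketch; the `backup_user` account name and the `'%'` host pattern are hypothetical, and you should adjust them to your environment:

```sql
-- Hypothetical backup account; replace the name, host, and password.
CREATE USER 'backup_user'@'%' IDENTIFIED BY '${password}';
-- Table-level privileges used to adjust the GC time before and after the backup.
GRANT SELECT, UPDATE ON mysql.tidb TO 'backup_user'@'%';
-- Global privileges required by Dumpling for a consistent export.
GRANT SELECT, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup_user'@'%';
```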
## Ad-hoc full backup to GCS
Ad-hoc full backup describes a backup operation by creating a `Backup` custom resource (CR) object. TiDB Operator performs the specific backup operation based on this `Backup` object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle this error manually.

To better explain how to perform the backup operation, this document shows an example in which the data of the `demo1` TiDB cluster in the `test1` Kubernetes namespace is backed up to GCS.
### Step 1: Prepare for ad-hoc full backup
1. Download `backup-rbac.yaml`, and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:

    ```shell
    kubectl apply -f backup-rbac.yaml -n test1
    ```

2. Grant permissions to the remote storage. Refer to GCS account permissions.

3. Create the `backup-demo1-tidb-secret` secret, which stores the root account and password needed to access the TiDB cluster:

    ```shell
    kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=${password} --namespace=test1
    ```
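The `gcs-secret` referenced by `spec.gcs.secretName` in the next step is created when you grant permissions to the remote storage; the exact procedure is in the GCS account permissions document. As a rough sketch, it is a Kubernetes Secret that holds a GCS service account key under a `credentials` key:

```yaml
# Sketch of the Secret holding a GCS service account key; the key name
# "credentials" is assumed here to match what the Backup CR expects.
apiVersion: v1
kind: Secret
metadata:
  name: gcs-secret
  namespace: test1
type: Opaque
stringData:
  credentials: |
    # paste the contents of the service account JSON key file here
```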
### Step 2: Perform ad-hoc backup
Create the `Backup` CR, and back up data to GCS:

```shell
kubectl apply -f backup-gcs.yaml
```

The content of `backup-gcs.yaml` is as follows:

```yaml
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-gcs
  namespace: test1
spec:
  from:
    host: ${tidb_host}
    port: ${tidb_port}
    user: ${tidb_user}
    secretName: backup-demo1-tidb-secret
  gcs:
    secretName: gcs-secret
    projectId: ${project_id}
    bucket: ${bucket}
    # prefix: ${prefix}
    # location: us-east1
    # storageClass: STANDARD_IA
    # objectAcl: private
    # bucketAcl: private
  # dumpling:
  #  options:
  #  - --threads=16
  #  - --rows=10000
  #  tableFilter:
  #  - "test.*"
  storageClassName: local-storage
  storageSize: 10Gi
```

The example above backs up all data in the TiDB cluster to GCS. Some parameters in `spec.gcs` can be ignored, such as `location`, `objectAcl`, `bucketAcl`, and `storageClass`. For more information about GCS configuration, refer to GCS fields.

`spec.dumpling` refers to Dumpling-related configuration. You can specify Dumpling's operation parameters in the `options` field. See the Dumpling option list for more information. These configuration items of Dumpling can be ignored by default. When these items are not specified, the default values of the `options` field are as follows:

```yaml
options:
- --threads=16
- --rows=10000
```

For more information about the `Backup` CR fields, refer to Backup CR fields.

After creating the `Backup` CR, use the following command to check the backup status:

```shell
kubectl get bk -n test1 -owide
```
## Scheduled full backup to GCS
You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid excessive backup items. A scheduled full backup is described by a custom BackupSchedule CR object. A full backup is triggered at each backup time point. Its underlying implementation is the ad-hoc full backup.
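The `schedule` field of the `BackupSchedule` CR uses standard cron syntax (minute, hour, day of month, month, day of week). A few illustrative values:

```yaml
schedule: "*/2 * * * *"   # every 2 minutes, as in the example below
# schedule: "0 2 * * *"   # every day at 02:00
# schedule: "0 2 * * 6"   # every Saturday at 02:00
```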
### Step 1: Prepare for scheduled backup
The preparation for the scheduled backup is the same as the preparation for the ad-hoc full backup. See Step 1: Prepare for ad-hoc full backup.
### Step 2: Perform scheduled backup
Create the `BackupSchedule` CR, and back up cluster data as described below:

```shell
kubectl apply -f backup-schedule-gcs.yaml
```

The content of `backup-schedule-gcs.yaml` is as follows:

```yaml
---
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo1-backup-schedule-gcs
  namespace: test1
spec:
  #maxBackups: 5
  #pause: true
  maxReservedTime: "3h"
  schedule: "*/2 * * * *"
  backupTemplate:
    from:
      host: ${tidb_host}
      port: ${tidb_port}
      user: ${tidb_user}
      secretName: backup-demo1-tidb-secret
    gcs:
      secretName: gcs-secret
      projectId: ${project_id}
      bucket: ${bucket}
      # prefix: ${prefix}
      # location: us-east1
      # storageClass: STANDARD_IA
      # objectAcl: private
      # bucketAcl: private
    # dumpling:
    #  options:
    #  - --threads=16
    #  - --rows=10000
    #  tableFilter:
    #  - "test.*"
    # storageClassName: local-storage
    storageSize: 10Gi
```

After creating the scheduled full backup, use the following command to check the backup status:

```shell
kubectl get bks -n test1 -owide
```

Use the following command to check all the backup items:

```shell
kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-gcs -n test1
```
From the example above, you can see that the `backupSchedule` configuration consists of two parts. One is the unique configuration of `backupSchedule`, and the other is `backupTemplate`.

`backupTemplate` specifies the configuration related to the cluster and remote storage, which is the same as the `spec` configuration of the `Backup` CR. For the unique configuration of `backupSchedule`, refer to BackupSchedule CR fields.
## Delete the backup CR
After the backup, you might need to delete the backup CR. For details, refer to Delete the Backup CR.
## Troubleshooting
If you encounter any problem during the backup process, refer to Common Deployment Failures.