
Back up Data to GCS Using BR

This document describes how to back up the data of a TiDB cluster on Kubernetes to Google Cloud Storage (GCS). There are two backup types:

  • Snapshot backup. With snapshot backup, you can restore a TiDB cluster to the time point of the snapshot backup using full restoration.
  • Log backup. With snapshot backup and log backup, you can restore a TiDB cluster to any point in time. This is also known as Point-in-Time Recovery (PITR).

The backup method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator. For the underlying implementation, BR is used to get the backup data of the TiDB cluster, and then send the data to GCS. BR stands for Backup & Restore, which is a command-line tool for distributed backup and restore of TiDB cluster data.

Usage scenarios

If you have the following backup needs, you can use BR's snapshot backup method to make an ad-hoc backup or scheduled snapshot backup of the TiDB cluster data to GCS.

  • To back up a large volume of data (more than 1 TB) at a fast speed
  • To get a direct backup of data as SST files (key-value pairs)

If you have the following backup needs, you can use BR log backup to make an ad-hoc backup of the TiDB cluster data to GCS (you can combine log backup and snapshot backup to restore data more efficiently):

  • To restore data of any point in time to a new cluster
  • To achieve a recovery point objective (RPO) within several minutes

For other backup needs, refer to Backup and Restore Overview to choose an appropriate backup method.

Ad-hoc backup

Ad-hoc backup includes snapshot backup and log backup. For log backup, you can start or stop a log backup task and clean log backup data.

To perform an ad-hoc backup, you need to create a Backup Custom Resource (CR) object to describe the backup details. Then, TiDB Operator performs the specific backup operation based on this Backup object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle this error manually.
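
For example, when a backup fails, you can inspect the Backup CR's conditions and events and the backup jobs that TiDB Operator created. A minimal sketch, using the same placeholder style as the rest of this document:

# Replace the placeholders with the actual Backup CR name and namespace.
kubectl describe backup ${backup-name} -n ${namespace}
kubectl get jobs -n ${namespace}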

This document provides an example of how to back up the data of the demo1 TiDB cluster in the test1 Kubernetes namespace to GCS. The following are the detailed steps.

Prerequisites: Prepare for an ad-hoc backup

  1. Create a namespace for managing backup. The following example creates a backup-test namespace:

    kubectl create namespace backup-test
  2. Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the backup-test namespace:

    kubectl apply -f backup-rbac.yaml -n backup-test
  3. Grant permissions to the remote storage for the backup-test namespace you created.

    Refer to GCS account permissions. One common approach is sketched after this list.

  4. For a TiDB version earlier than v4.0.8, you also need to complete the following preparation steps. For TiDB v4.0.8 or a later version, skip these preparation steps.

    1. Make sure that the account used to access the TiDB cluster (configured via spec.from and the secret created in the next step) has the SELECT and UPDATE privileges on the mysql.tidb table, so that the Backup CR can adjust the GC time before and after the backup.

    2. Create the backup-demo1-tidb-secret secret to store the root account and password to access the TiDB cluster:

      kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=<password> --namespace=test1
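
A minimal sketch of granting GCS access (step 3 above) by storing a Google Cloud service account key as a Kubernetes secret. The secret name gcs-secret matches the one referenced by the example CRs in this document; the key file path ./google-credentials.json is an assumption for illustration. Refer to GCS account permissions for the authoritative procedure.

# Assumption: the service account key file is ./google-credentials.json; adjust the path and namespace as needed.
kubectl create secret generic gcs-secret --from-file=credentials=./google-credentials.json -n backup-test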

Perform an ad-hoc backup

  1. Create the Backup CR to back up cluster data to GCS as described below:

    kubectl apply -f full-backup-gcs.yaml

    The content of full-backup-gcs.yaml is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-full-backup-gcs
      namespace: backup-test
    spec:
      # backupType: full
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      from:
        host: ${tidb-host}
        port: ${tidb-port}
        user: ${tidb-user}
        secretName: backup-demo1-tidb-secret
      br:
        cluster: demo1
        clusterNamespace: test1
        # logLevel: info
        # statusAddr: ${status-addr}
        # concurrency: 4
        # rateLimit: 0
        # checksum: true
        # sendCredToTikv: true
        # options:
        # - --lastbackupts=420134118382108673
      gcs:
        projectId: ${project_id}
        secretName: gcs-secret
        bucket: my-bucket
        prefix: my-full-backup-folder
        # location: us-east1
        # storageClass: STANDARD_IA
        # objectAcl: private

    When configuring the full-backup-gcs.yaml, note the following:

    • Since TiDB Operator v1.1.6, if you want to back up data incrementally, you only need to specify the last backup timestamp --lastbackupts in spec.br.options. For the limitations of incremental backup, refer to Use BR to Back up and Restore Data.
    • Some parameters in spec.br are optional, such as logLevel and statusAddr. For more information about BR configuration, refer to BR fields.
    • Some parameters in spec.gcs are optional, such as location, objectAcl, and storageClass. For more information about GCS configuration, refer to GCS fields.
    • For TiDB v4.0.8 or a later version, BR can automatically adjust tikv_gc_life_time. You do not need to configure the spec.tikvGCLifeTime and spec.from fields in the Backup CR.
    • For more information about the Backup CR fields, refer to Backup CR fields.
  2. After you create the Backup CR, TiDB Operator starts the backup automatically. You can view the backup status by running the following command:

    kubectl get bk -n backup-test -o wide

View the snapshot backup status

After you create the Backup CR, TiDB Operator starts the backup automatically. You can view the backup status by running the following command:

kubectl get backup -n backup-test -o wide

From the output, you can find the following information for the Backup CR named demo1-full-backup-gcs. The COMMITTS field indicates the time point of the snapshot backup:

NAME                    TYPE   MODE       STATUS     BACKUPPATH                               COMMITTS              ...
demo1-full-backup-gcs   full   snapshot   Complete   gcs://my-bucket/my-full-backup-folder/   436979621972148225   ...
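
If you later run an incremental backup (see the --lastbackupts note above) or restore to this snapshot, you may want to read the commit timestamp programmatically rather than from the table output. A minimal sketch, assuming the value is exposed as the status.commitTs field of the Backup CR:

# Assumption: the snapshot timestamp is stored in .status.commitTs of the Backup CR.
kubectl get backup demo1-full-backup-gcs -n backup-test -o jsonpath='{.status.commitTs}'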

Log backup

You can use a Backup CR to describe the start and stop of a log backup task and to manage the log backup data. Log backup grants permissions to the remote storage in the same way as snapshot backup. In this section, the example shows how to create a Backup CR named demo1-log-backup-gcs. See the following detailed steps.

Start log backup

  1. In the backup-test namespace, create a Backup CR named demo1-log-backup-gcs.

    kubectl apply -f log-backup-gcs.yaml

    The content of log-backup-gcs.yaml is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-log-backup-gcs
      namespace: backup-test
    spec:
      backupMode: log
      br:
        cluster: demo1
        clusterNamespace: test1
        sendCredToTikv: true
      gcs:
        projectId: ${project_id}
        secretName: gcs-secret
        bucket: my-bucket
        prefix: my-log-backup-folder
  2. Wait for the start operation to complete:

    kubectl get jobs -n backup-test
    NAME                                    COMPLETIONS   ...
    backup-demo1-log-backup-gcs-log-start   1/1           ...
  3. View the newly created Backup CR:

    kubectl get backup -n backup-test
    NAME                   MODE   STATUS    ....
    demo1-log-backup-gcs   log    Running   ....

View the log backup status

You can view the log backup status by checking the information of the Backup CR:

kubectl describe backup -n backup-test

From the output, you can find the following information for the Backup CR named demo1-log-backup-gcs. The Log Checkpoint Ts field indicates the latest point in time that can be recovered:

Status:
  Backup Path:  gcs://my-bucket/my-log-backup-folder/
  Commit Ts:    436568622965194754
  Conditions:
    Last Transition Time:  2022-10-10T04:45:20Z
    Status:                True
    Type:                  Scheduled
    Last Transition Time:  2022-10-10T04:45:31Z
    Status:                True
    Type:                  Prepare
    Last Transition Time:  2022-10-10T04:45:31Z
    Status:                True
    Type:                  Running
  Log Checkpoint Ts:       436569119308644661
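
To read the recoverable point in time in a script, you can query the status field directly instead of parsing the kubectl describe output. A minimal sketch, assuming the value is exposed as status.logCheckpointTs on the Backup CR:

# Assumption: the checkpoint is stored in .status.logCheckpointTs of the Backup CR.
kubectl get backup demo1-log-backup-gcs -n backup-test -o jsonpath='{.status.logCheckpointTs}'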

Stop log backup

Because you already created a Backup CR named demo1-log-backup-gcs when you started log backup, you can stop the log backup by modifying the same Backup CR. The priority of all operations is: stop log backup > delete log backup data > start log backup.

kubectl edit backup demo1-log-backup-gcs -n backup-test

In the spec of the CR, add logStop: true as the last line. Then save and quit the editor. The modified content is as follows:

---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-log-backup-gcs
  namespace: backup-test
spec:
  backupMode: log
  br:
    cluster: demo1
    clusterNamespace: test1
    sendCredToTikv: true
  gcs:
    projectId: ${project_id}
    secretName: gcs-secret
    bucket: my-bucket
    prefix: my-log-backup-folder
  logStop: true
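
If you prefer not to open an editor, the same change can be made non-interactively. A minimal sketch using kubectl patch:

# Set spec.logStop to true on the existing Backup CR without opening an editor.
kubectl patch backup demo1-log-backup-gcs -n backup-test --type merge -p '{"spec":{"logStop":true}}'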

You can see that the STATUS of the Backup CR named demo1-log-backup-gcs changes from Running to Stopped:

kubectl get backup -n backup-test
NAME                   MODE   STATUS    ....
demo1-log-backup-gcs   log    Stopped   ....

Clean log backup data

  1. Because you already created a Backup CR named demo1-log-backup-gcs when you started log backup, you can clean the log backup data by modifying the same Backup CR. The priority of all operations is: stop log backup > delete log backup data > start log backup. The following example shows how to clean log backup data generated before 2022-10-10T15:21:00+08:00. A non-interactive alternative using kubectl patch is sketched after this procedure.

    kubectl edit backup demo1-log-backup-gcs -n backup-test

    In the spec of the CR, add logTruncateUntil: "2022-10-10T15:21:00+08:00" as the last line. Then save and quit the editor. The modified content is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-log-backup-gcs
      namespace: backup-test
    spec:
      backupMode: log
      br:
        cluster: demo1
        clusterNamespace: test1
        sendCredToTikv: true
      gcs:
        projectId: ${project_id}
        secretName: gcs-secret
        bucket: my-bucket
        prefix: my-log-backup-folder
      logTruncateUntil: "2022-10-10T15:21:00+08:00"
  2. Wait for the clean operation to complete:

    kubectl get jobs -n backup-test
    NAME                                       COMPLETIONS   ...
    ...
    backup-demo1-log-backup-gcs-log-truncate   1/1           ...
  3. View the Backup CR information:

    kubectl describe backup -n backup-test
    ...
    Log Success Truncate Until:  2022-10-10T15:21:00+08:00
    ...

    You can also view the information by running the following command:

    kubectl get backup -n backup-test -o wide
    NAME                   MODE   STATUS    ...   LOGTRUNCATEUNTIL
    demo1-log-backup-gcs   log    Stopped   ...   2022-10-10T15:21:00+08:00
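
As with stopping log backup, you can set spec.logTruncateUntil non-interactively instead of editing the CR. A minimal sketch using kubectl patch with the same timestamp as above:

# Truncate log backup data generated before the specified time without opening an editor.
kubectl patch backup demo1-log-backup-gcs -n backup-test --type merge -p '{"spec":{"logTruncateUntil":"2022-10-10T15:21:00+08:00"}}'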

Backup CR examples

Back up data of all clusters
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-gcs
  namespace: backup-test
spec:
  # backupType: full
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb-host}
  #   port: ${tidb-port}
  #   user: ${tidb-user}
  #   secretName: backup-demo1-tidb-secret
  br:
    cluster: demo1
    clusterNamespace: test1
  gcs:
    projectId: ${project_id}
    secretName: gcs-secret
    bucket: ${bucket}
    prefix: ${prefix}
    # location: us-east1
    # storageClass: STANDARD_IA
    # objectAcl: private
Back up data of a single database

The following example backs up data of the db1 database.

---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-gcs
  namespace: backup-test
spec:
  # backupType: full
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb-host}
  #   port: ${tidb-port}
  #   user: ${tidb-user}
  #   secretName: backup-demo1-tidb-secret
  tableFilter:
  - "db1.*"
  br:
    cluster: demo1
    clusterNamespace: test1
  gcs:
    projectId: ${project_id}
    secretName: gcs-secret
    bucket: ${bucket}
    prefix: ${prefix}
    # location: us-east1
    # storageClass: STANDARD_IA
    # objectAcl: private
Back up data of a single table

The following example backs up data of the db1.table1 table.

---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-gcs
  namespace: backup-test
spec:
  # backupType: full
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb-host}
  #   port: ${tidb-port}
  #   user: ${tidb-user}
  #   secretName: backup-demo1-tidb-secret
  tableFilter:
  - "db1.table1"
  br:
    cluster: demo1
    clusterNamespace: test1
  gcs:
    projectId: ${project_id}
    secretName: gcs-secret
    bucket: ${bucket}
    prefix: ${prefix}
    # location: us-east1
    # storageClass: STANDARD_IA
    # objectAcl: private
Back up data of multiple tables using the table filter

The following example backs up data of the db1.table1 table and db1.table2 table.

---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-gcs
  namespace: backup-test
spec:
  # backupType: full
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb-host}
  #   port: ${tidb-port}
  #   user: ${tidb-user}
  #   secretName: backup-demo1-tidb-secret
  tableFilter:
  - "db1.table1"
  - "db1.table2"
  br:
    cluster: demo1
    clusterNamespace: test1
  gcs:
    projectId: ${project_id}
    secretName: gcs-secret
    bucket: ${bucket}
    prefix: ${prefix}
    # location: us-east1
    # storageClass: STANDARD_IA
    # objectAcl: private

Scheduled snapshot backup

You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid excessive backup items. A scheduled snapshot backup is described by a custom BackupSchedule CR object. A snapshot backup is triggered at each backup time point. Its underlying implementation is the ad-hoc snapshot backup.

Prerequisites: Prepare for a scheduled snapshot backup

The steps to prepare for a scheduled snapshot backup are the same as those of Prepare for an ad-hoc backup.

Perform a scheduled snapshot backup

  1. Create a BackupSchedule CR to back up cluster data as described below:

    kubectl apply -f backup-schedule-gcs.yaml

    The content of backup-schedule-gcs.yaml is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: demo1-backup-schedule-gcs
      namespace: backup-test
    spec:
      #maxBackups: 5
      #pause: true
      maxReservedTime: "3h"
      schedule: "*/2 * * * *"
      backupTemplate:
        # Clean outdated backup data based on maxBackups or maxReservedTime. If not configured, the default policy is Retain
        # cleanPolicy: Delete
        # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
        from:
          host: ${tidb_host}
          port: ${tidb_port}
          user: ${tidb_user}
          secretName: backup-demo1-tidb-secret
        br:
          cluster: demo1
          clusterNamespace: test1
          # logLevel: info
          # statusAddr: ${status-addr}
          # concurrency: 4
          # rateLimit: 0
          # checksum: true
          # sendCredToTikv: true
        gcs:
          secretName: gcs-secret
          projectId: ${project_id}
          bucket: ${bucket}
          prefix: ${prefix}
          # location: us-east1
          # storageClass: STANDARD_IA
          # objectAcl: private

    As shown in backup-schedule-gcs.yaml above, the backupSchedule configuration consists of two parts: the unique configuration of backupSchedule, and backupTemplate.

    • For the unique configuration of backupSchedule, refer to BackupSchedule CR fields.
    • backupTemplate specifies the configuration related to the cluster and remote storage, which is the same as the spec configuration of the Backup CR.
  2. After creating the scheduled snapshot backup, use the following command to check the backup status:

    kubectl get bks -n backup-test -owide

    Use the following command to check all the backup items:

    kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-gcs -n backup-test
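
If you need to suspend scheduled backups temporarily, the pause field shown (commented out) in backup-schedule-gcs.yaml can be toggled on the live CR. A minimal sketch using kubectl patch:

# Pause the scheduled backup; set pause back to false to resume it.
kubectl patch bks demo1-backup-schedule-gcs -n backup-test --type merge -p '{"spec":{"pause":true}}'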

Integrated management of scheduled snapshot backup and log backup

You can use the BackupSchedule CR to integrate the management of scheduled snapshot backup and log backup for TiDB clusters. By setting the backup retention time, you can regularly recycle the scheduled snapshot backup and log backup, and ensure that you can perform PITR recovery through the scheduled snapshot backup and log backup within the retention period.

The following example creates a BackupSchedule CR named integrated-backup-schedule-gcs. For more information about the authorization method, refer to GCS account permissions.

Prerequisites: Prepare for a scheduled snapshot backup environment

The steps to prepare for a scheduled snapshot backup are the same as those of Prepare for an ad-hoc backup.

Create BackupSchedule

  1. Create a BackupSchedule CR named integrated-backup-schedule-gcs in the backup-test namespace.

    kubectl apply -f integrated-backup-schedule-gcs.yaml

    The content of integrated-backup-schedule-gcs.yaml is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: integrated-backup-schedule-gcs
      namespace: backup-test
    spec:
      maxReservedTime: "3h"
      schedule: "* */2 * * *"
      backupTemplate:
        backupType: full
        cleanPolicy: Delete
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        gcs:
          projectId: ${project_id}
          secretName: gcs-secret
          bucket: my-bucket
          prefix: schedule-backup-folder-snapshot
      logBackupTemplate:
        backupMode: log
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        gcs:
          projectId: ${project_id}
          secretName: gcs-secret
          bucket: my-bucket
          prefix: schedule-backup-folder-log

    In the above example of integrated-backup-schedule-gcs.yaml, the backupSchedule configuration consists of three parts: the unique configuration of backupSchedule, the configuration of the snapshot backup backupTemplate, and the configuration of the log backup logBackupTemplate.

    For the field description of backupSchedule, refer to BackupSchedule CR fields.

  2. After creating the BackupSchedule CR, use the following command to check the backup status:

    kubectl get bks -n backup-test -o wide

    A log backup task is created together with the BackupSchedule CR. You can check the log backup name through the status.logBackup field of the BackupSchedule CR (a sketch for reading this field directly follows this procedure).

    kubectl describe bks integrated-backup-schedule-gcs -n backup-test
  3. To perform data restoration for a cluster, you need to specify the backup path. You can use the following command to check all the backup items under the scheduled snapshot backup.

    kubectl get bk -l tidb.pingcap.com/backup-schedule=integrated-backup-schedule-gcs -n backup-test

    The MODE field in the output indicates the backup mode. snapshot indicates the scheduled snapshot backup, and log indicates the log backup.

    NAME                                                  MODE       STATUS     ....
    integrated-backup-schedule-gcs-2023-03-08t02-50-00    snapshot   Complete   ....
    log-integrated-backup-schedule-gcs                    log        Running    ....
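
For reference, the log backup name mentioned in step 2 can also be read directly from the status.logBackup field of the BackupSchedule CR. A minimal sketch:

# Assumption: the log backup name is exposed as .status.logBackup on the BackupSchedule CR.
kubectl get bks integrated-backup-schedule-gcs -n backup-test -o jsonpath='{.status.logBackup}'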

Delete the Backup CR

If you no longer need the backup CR, refer to Delete the Backup CR.

Troubleshooting

If you encounter any problem during the backup process, refer to Common Deployment Failures.
