Back up Data to Azure Blob Storage Using BR
This document describes how to back up the data of a TiDB cluster on Kubernetes to Azure Blob Storage. There are two backup types:
- Snapshot backup. With snapshot backup, you can restore a TiDB cluster to the time point of the snapshot backup using full restoration.
- Log backup. With snapshot backup and log backup, you can restore a TiDB cluster to any point in time. This is also known as Point-in-Time Recovery (PITR).
The backup method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator. In the underlying implementation, BR gets the backup data of the TiDB cluster and then sends the data to Azure Blob Storage. BR stands for Backup & Restore, a command-line tool for distributed backup and recovery of TiDB cluster data.
Usage scenarios
If you have the following backup needs, you can use BR to make an ad-hoc backup or scheduled snapshot backup of the TiDB cluster data to Azure Blob Storage.
- To back up a large volume of data (more than 1 TB) at a fast speed.
- To get a direct backup of data as SST files (key-value pairs).
If you have the following backup needs, you can use BR log backup to make an ad-hoc backup of the TiDB cluster data to Azure Blob Storage (you can combine log backup and snapshot backup to restore data more efficiently):
- To restore data from any point in time to a new cluster.
- To keep the recovery point objective (RPO) within several minutes.
For other backup needs, refer to Backup and Restore Overview to choose an appropriate backup method.
Ad-hoc backup
Ad-hoc backup includes snapshot backup and log backup. For log backup, you can start or stop a log backup task and clean log backup data.
To get an ad-hoc backup, you need to create a Backup Custom Resource (CR) object to describe the backup details. Then, TiDB Operator performs the specific backup operation based on this Backup object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle this error manually.
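If a backup fails, a quick first check is to inspect the conditions of the Backup CR and recent events in the namespace. The following is a sketch using standard kubectl commands; demo1-full-backup-azblob is the example CR name used later in this document:

# Show the Backup CR status, conditions, and related events.
kubectl describe backup demo1-full-backup-azblob -n backup-test
# List the most recent events in the backup namespace.
kubectl get events -n backup-test --sort-by=.lastTimestamp | tail -n 20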
This document provides an example of how to back up the data of the demo1 TiDB cluster in the test1 Kubernetes namespace to Azure Blob Storage. The following are the detailed steps.
Prerequisites: Prepare an ad-hoc backup environment
1. Create a namespace for managing backup. The following example creates a backup-test namespace:

   kubectl create namespace backup-test

2. Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the backup-test namespace:

   kubectl apply -f backup-rbac.yaml -n backup-test

3. Grant permissions to the remote storage for the backup-test namespace. You can grant permissions to Azure Blob Storage by two methods. For details, refer to Azure account permissions. After you grant the permissions, the backup-test namespace has a secret object named azblob-secret or azblob-secret-ad (see the example after these steps).

4. For a TiDB version earlier than v4.0.8, you also need to complete the following preparation steps. For TiDB v4.0.8 or a later version, skip these preparation steps.

   1. Make sure that you have the SELECT and UPDATE privileges on the mysql.tidb table of the backup database so that the Backup CR can adjust the GC time before and after the backup.

   2. Create backup-demo1-tidb-secret to store the account and password to access the TiDB cluster:

      kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=${password} --namespace=test1
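For reference, the following is a minimal sketch of creating the access-key secret mentioned in step 3. It assumes the key names AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY described in Azure account permissions; verify them against your TiDB Operator version before use:

# Create the access-key secret in the backup-test namespace (key names per Azure account permissions).
kubectl create secret generic azblob-secret \
  --from-literal=AZURE_STORAGE_ACCOUNT=${azure_storage_account} \
  --from-literal=AZURE_STORAGE_KEY=${azure_storage_key} \
  --namespace=backup-test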
Snapshot backup
To perform a snapshot backup, take the following steps:
Create the Backup CR named demo1-full-backup-azblob in the backup-test namespace:
kubectl apply -f full-backup-azblob.yaml
The content of full-backup-azblob.yaml is as follows:
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-full-backup-azblob
  namespace: backup-test
spec:
  backupType: full
  br:
    cluster: demo1
    clusterNamespace: test1
    # logLevel: info
    # statusAddr: ${status_addr}
    # concurrency: 4
    # rateLimit: 0
    # timeAgo: ${time}
    # checksum: true
    # sendCredToTikv: true
    # options:
    # - --lastbackupts=420134118382108673
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
    # host: ${tidb_host}
    # port: ${tidb_port}
    # user: ${tidb_user}
    # secretName: backup-demo1-tidb-secret
  azblob:
    secretName: azblob-secret
    container: my-container
    prefix: my-full-backup-folder
    #accessTier: Hot
When you configure full-backup-azblob.yaml, note the following:
- Since TiDB Operator v1.1.6, if you want to back up data incrementally, you only need to specify the last backup timestamp --lastbackupts in spec.br.options (see the sketch after this list). For the limitations of incremental backup, refer to Use BR to Back up and Restore Data.
- For more information about Azure Blob Storage configuration, refer to Azure Blob Storage fields.
- Some parameters in spec.br are optional, such as logLevel and statusAddr. For more information about BR configuration, refer to BR fields.
- spec.azblob.secretName: fill in the name of the secret object, such as azblob-secret.
- For v4.0.8 or a later version, BR can automatically adjust tikv_gc_life_time. You do not need to configure the spec.tikvGCLifeTime and spec.from fields in the Backup CR.
- For more information about the Backup CR fields, refer to Backup CR fields.
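The following is a minimal sketch of the incremental case, assuming a previous snapshot backup exists and its COMMITTS value is passed as the last backup timestamp. Only the relevant part of the Backup spec is shown, and the timestamp is the placeholder value from the example above:

spec:
  backupType: full
  br:
    cluster: demo1
    clusterNamespace: test1
    options:
    # Replace with the COMMITTS of the previous snapshot backup.
    - --lastbackupts=420134118382108673
  azblob:
    secretName: azblob-secret
    container: my-container
    prefix: my-full-backup-folder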
View the snapshot backup status
After you create the Backup CR, TiDB Operator starts the backup automatically. You can view the backup status by running the following command:
kubectl get backup -n backup-test -o wide
From the output, you can find the following information for the Backup CR named demo1-full-backup-azblob. The COMMITTS field indicates the time point of the snapshot backup:
NAME                       TYPE   MODE       STATUS     BACKUPPATH                                    COMMITTS             ...
demo1-full-backup-azblob   full   snapshot   Complete   azure://my-container/my-full-backup-folder/   436979621972148225   ...
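If you want to inspect the uploaded backup files directly on Azure, the following is a hedged example using the Azure CLI, assuming az is installed and authenticated against the storage account:

# List the SST files and metadata under the backup prefix (storage account name is an assumption).
az storage blob list \
  --account-name ${azure_storage_account} \
  --container-name my-container \
  --prefix my-full-backup-folder \
  --output table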
Log backup
You can use a Backup CR to describe the start and stop of a log backup task and manage the log backup data. In this section, the example shows how to create a Backup CR named demo1-log-backup-azblob. See the following detailed steps.
Start log backup
1. In the backup-test namespace, create a Backup CR named demo1-log-backup-azblob:

   kubectl apply -f log-backup-azblob.yaml

   The content of log-backup-azblob.yaml is as follows:

   ---
   apiVersion: pingcap.com/v1alpha1
   kind: Backup
   metadata:
     name: demo1-log-backup-azblob
     namespace: backup-test
   spec:
     backupMode: log
     br:
       cluster: demo1
       clusterNamespace: test1
       sendCredToTikv: true
     azblob:
       secretName: azblob-secret
       container: my-container
       prefix: my-log-backup-folder
       #accessTier: Hot

2. Wait for the start operation to complete:

   kubectl get jobs -n backup-test

   NAME                                       COMPLETIONS   ...
   backup-demo1-log-backup-azblob-log-start   1/1           ...

3. View the newly created Backup CR:

   kubectl get backup -n backup-test

   NAME                      MODE   STATUS    ....
   demo1-log-backup-azblob   log    Running   ....
View the log backup status
You can view the log backup status by checking the information of the Backup CR:
kubectl describe backup -n backup-test
From the output, you can find the following information for the Backup CR named demo1-log-backup-azblob. The Log Checkpoint Ts field indicates the latest point in time that can be recovered:
Status:
  Backup Path:  azure://my-container/my-log-backup-folder/
  Commit Ts:    436568622965194754
  Conditions:
    Last Transition Time:  2022-10-10T04:45:20Z
    Status:                True
    Type:                  Scheduled
    Last Transition Time:  2022-10-10T04:45:31Z
    Status:                True
    Type:                  Prepare
    Last Transition Time:  2022-10-10T04:45:31Z
    Status:                True
    Type:                  Running
  Log Checkpoint Ts:       436569119308644661
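To extract only the checkpoint from this output, a simple sketch is to filter the describe output (the field name follows the output shown above):

# Print the latest recoverable point in time for the log backup task.
kubectl describe backup demo1-log-backup-azblob -n backup-test | grep "Log Checkpoint Ts"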
Stop log backup
Because you already created a Backup CR named demo1-log-backup-azblob when you started log backup, you can stop the log backup by modifying the same Backup CR. The priority of all operations is: stop log backup > delete log backup data > start log backup.
kubectl edit backup demo1-log-backup-azblob -n backup-test
In the last line of the CR, append spec.logStop: true. Then save and quit the editor. The modified content is as follows:
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-log-backup-azblob
  namespace: backup-test
spec:
  backupMode: log
  br:
    cluster: demo1
    clusterNamespace: test1
    sendCredToTikv: true
  azblob:
    secretName: azblob-secret
    container: my-container
    prefix: my-log-backup-folder
    #accessTier: Hot
  logStop: true
You can see the STATUS of the Backup CR named demo1-log-backup-azblob change from Running to Stopped:
kubectl get backup -n backup-test
NAME                       MODE   STATUS    ....
demo1-log-backup-azblob    log    Stopped   ....
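If you prefer not to open an editor, a hedged alternative is to set the same spec.logStop field with kubectl patch. This is a sketch; kubectl edit as shown above is equally valid:

# Stop the log backup task by setting spec.logStop to true on the existing Backup CR.
kubectl patch backup demo1-log-backup-azblob -n backup-test \
  --type merge -p '{"spec":{"logStop":true}}'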
Clean log backup data
Because you already created a Backup CR named demo1-log-backup-azblob when you started log backup, you can clean the log backup data by modifying the same Backup CR. The priority of all operations is: stop log backup > delete log backup data > start log backup. The following example shows how to clean log backup data generated before 2022-10-10T15:21:00+08:00.

1. Modify the Backup CR:

   kubectl edit backup demo1-log-backup-azblob -n backup-test

   In the last line of the CR, append spec.logTruncateUntil: "2022-10-10T15:21:00+08:00". Then save and quit the editor. The modified content is as follows:

   ---
   apiVersion: pingcap.com/v1alpha1
   kind: Backup
   metadata:
     name: demo1-log-backup-azblob
     namespace: backup-test
   spec:
     backupMode: log
     br:
       cluster: demo1
       clusterNamespace: test1
       sendCredToTikv: true
     azblob:
       secretName: azblob-secret
       container: my-container
       prefix: my-log-backup-folder
       #accessTier: Hot
     logTruncateUntil: "2022-10-10T15:21:00+08:00"

2. Wait for the clean operation to complete:

   kubectl get jobs -n backup-test

   NAME                                          COMPLETIONS   ...
   backup-demo1-log-backup-azblob-log-truncate   1/1           ...

3. View the Backup CR information:

   kubectl describe backup -n backup-test

   ...
   Log Success Truncate Until:  2022-10-10T15:21:00+08:00
   ...

   You can also view the information by running the following command:

   kubectl get backup -n backup-test -o wide

   NAME                      MODE   STATUS     ...   LOGTRUNCATEUNTIL
   demo1-log-backup-azblob   log    Complete   ...   2022-10-10T15:21:00+08:00
Backup CR examples
Back up data of all clusters
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-azblob
  namespace: backup-test
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
    # host: ${tidb_host}
    # port: ${tidb_port}
    # user: ${tidb_user}
    # secretName: backup-demo1-tidb-secret
  azblob:
    secretName: azblob-secret-ad
    container: my-container
    prefix: my-folder
Back up data of a single database
The following example backs up data of the db1 database.
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-azblob
  namespace: backup-test
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.*"
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
    # host: ${tidb_host}
    # port: ${tidb_port}
    # user: ${tidb_user}
    # secretName: backup-demo1-tidb-secret
  azblob:
    secretName: azblob-secret-ad
    container: my-container
    prefix: my-folder
Back up data of a single table
The following example backs up data of the db1.table1 table.
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-azblob
  namespace: backup-test
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.table1"
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
    # host: ${tidb_host}
    # port: ${tidb_port}
    # user: ${tidb_user}
    # secretName: backup-demo1-tidb-secret
  azblob:
    secretName: azblob-secret-ad
    container: my-container
    prefix: my-folder
Back up data of multiple tables using the table filter
The following example backs up data of the db1.table1 table and db1.table2 table.
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-azblob
  namespace: backup-test
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.table1"
  - "db1.table2"
  # ...
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
    # host: ${tidb_host}
    # port: ${tidb_port}
    # user: ${tidb_user}
    # secretName: backup-demo1-tidb-secret
  azblob:
    secretName: azblob-secret-ad
    container: my-container
    prefix: my-folder
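Table filter rules also support exclusion with a leading "!". As a hedged sketch assuming the standard TiDB table filter syntax, the following filter backs up all tables in db1 except a hypothetical db1.table3:

  tableFilter:
  - "db1.*"
  # Hypothetical exclusion rule: skip db1.table3 while keeping the rest of db1.
  - "!db1.table3"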
Scheduled snapshot backup
You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid excessive backup items. A scheduled snapshot backup is described by a custom BackupSchedule CR object. A snapshot backup is triggered at each backup time point. Its underlying implementation is the ad-hoc snapshot backup.
Prerequisites: Prepare a scheduled backup environment
Refer to Prepare an ad-hoc backup environment.
Perform a scheduled snapshot backup
Depending on which method you choose to grant permissions to the remote storage, perform a scheduled snapshot backup by doing one of the following:
Method 1: If you grant permissions by access key, create the BackupSchedule CR, and back up cluster data as described below:

kubectl apply -f backup-scheduler-azblob.yaml

The content of backup-scheduler-azblob.yaml is as follows:

---
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo1-backup-schedule-azblob
  namespace: backup-test
spec:
  #maxBackups: 5
  #pause: true
  maxReservedTime: "3h"
  schedule: "*/2 * * * *"
  backupTemplate:
    backupType: full
    br:
      cluster: demo1
      clusterNamespace: test1
      # logLevel: info
      # statusAddr: ${status_addr}
      # concurrency: 4
      # rateLimit: 0
      # timeAgo: ${time}
      # checksum: true
      # sendCredToTikv: true
    # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
    # from:
      # host: ${tidb_host}
      # port: ${tidb_port}
      # user: ${tidb_user}
      # secretName: backup-demo1-tidb-secret
    azblob:
      secretName: azblob-secret
      container: my-container
      prefix: my-folder

Method 2: If you grant permissions by Azure AD, create the BackupSchedule CR, and back up cluster data as described below:

kubectl apply -f backup-scheduler-azblob.yaml

The content of backup-scheduler-azblob.yaml is as follows:

---
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo1-backup-schedule-azblob
  namespace: backup-test
spec:
  #maxBackups: 5
  #pause: true
  maxReservedTime: "3h"
  schedule: "*/2 * * * *"
  backupTemplate:
    backupType: full
    br:
      cluster: demo1
      sendCredToTikv: false
      clusterNamespace: test1
      # logLevel: info
      # statusAddr: ${status_addr}
      # concurrency: 4
      # rateLimit: 0
      # timeAgo: ${time}
      # checksum: true
    # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
    # from:
      # host: ${tidb_host}
      # port: ${tidb_port}
      # user: ${tidb_user}
      # secretName: backup-demo1-tidb-secret
    azblob:
      secretName: azblob-secret-ad
      container: my-container
      prefix: my-folder
From the preceding content in backup-scheduler-azblob.yaml, you can see that the backupSchedule configuration consists of two parts. One is the unique configuration of backupSchedule, and the other is backupTemplate.
- For the unique configuration of backupSchedule, refer to BackupSchedule CR fields.
- backupTemplate specifies the configuration related to the cluster and remote storage, which is the same as the spec configuration of the Backup CR.
After creating the scheduled snapshot backup, you can run the following command to check the backup status:
kubectl get bks -n backup-test -o wide
You can run the following command to check all the backup items:
kubectl get backup -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-azblob -n backup-test
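To temporarily stop the scheduled backup without deleting the CR, a hedged sketch is to set the spec.pause field shown (commented out) in the YAML above:

# Pause the backup schedule; set pause back to false to resume it.
kubectl patch bks demo1-backup-schedule-azblob -n backup-test \
  --type merge -p '{"spec":{"pause":true}}'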
Integrated management of scheduled snapshot backup and log backup
You can use the BackupSchedule CR to integrate the management of scheduled snapshot backup and log backup for TiDB clusters. By setting the backup retention time, you can regularly recycle the scheduled snapshot backup and log backup, and ensure that you can perform PITR recovery through the scheduled snapshot backup and log backup within the retention period.
The following example creates a BackupSchedule CR named integrated-backup-schedule-azblob. For more information about the authorization method, refer to Azure account permissions.
Prerequisites: Prepare a scheduled snapshot backup environment
The steps to prepare for a scheduled snapshot backup are the same as those in Prepare an ad-hoc backup environment.
Create BackupSchedule
1. Create a BackupSchedule CR named integrated-backup-schedule-azblob in the backup-test namespace:

   kubectl apply -f integrated-backup-scheduler-azblob.yaml

   The content of integrated-backup-scheduler-azblob.yaml is as follows:

   ---
   apiVersion: pingcap.com/v1alpha1
   kind: BackupSchedule
   metadata:
     name: integrated-backup-schedule-azblob
     namespace: backup-test
   spec:
     maxReservedTime: "3h"
     schedule: "* */2 * * *"
     backupTemplate:
       backupType: full
       cleanPolicy: Delete
       br:
         cluster: demo1
         clusterNamespace: test1
         sendCredToTikv: true
       azblob:
         secretName: azblob-secret
         container: my-container
         prefix: schedule-backup-folder-snapshot
         #accessTier: Hot
     logBackupTemplate:
       backupMode: log
       br:
         cluster: demo1
         clusterNamespace: test1
         sendCredToTikv: true
       azblob:
         secretName: azblob-secret
         container: my-container
         prefix: schedule-backup-folder-log
         #accessTier: Hot

   In the above example of integrated-backup-scheduler-azblob.yaml, the backupSchedule configuration consists of three parts: the unique configuration of backupSchedule, the configuration of the snapshot backup backupTemplate, and the configuration of the log backup logBackupTemplate.

   For the field description of backupSchedule, refer to BackupSchedule CR fields.

2. After creating backupSchedule, use the following command to check the backup status:

   kubectl get bks -n backup-test -o wide

   A log backup task is created together with backupSchedule. You can check the log backup name through the status.logBackup field of the backupSchedule CR (see the jsonpath sketch after these steps):

   kubectl describe bks integrated-backup-schedule-azblob -n backup-test

3. To perform data restoration for a cluster, you need to specify the backup path. You can use the following command to check all the backup items under the scheduled snapshot backup:

   kubectl get bk -l tidb.pingcap.com/backup-schedule=integrated-backup-schedule-azblob -n backup-test

   The MODE field in the output indicates the backup mode: snapshot indicates the scheduled snapshot backup, and log indicates the log backup.

   NAME                                                    MODE       STATUS     ....
   integrated-backup-schedule-azblob-2023-03-08t02-48-00   snapshot   Complete   ....
   log-integrated-backup-schedule-azblob                   log        Running    ....
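The following is a minimal sketch of reading the status.logBackup field directly, using a standard kubectl jsonpath query (the field path follows the description in step 2 above):

# Print the name of the log backup task created by the BackupSchedule CR.
kubectl get bks integrated-backup-schedule-azblob -n backup-test \
  -o jsonpath='{.status.logBackup}'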
Delete the backup CR
If you no longer need the backup CR, you can delete it by referring to Delete the Backup CR.
Troubleshooting
If you encounter any problem during the backup process, refer to Common Deployment Failures.