Back Up Data to S3-Compatible Storage Using BR
This document describes how to back up the data of a TiDB cluster on AWS Kubernetes to S3-compatible storage. There are two backup types:
- Snapshot backup. With snapshot backup, you can restore a TiDB cluster to the time point of the snapshot backup using full restoration.
- Log backup. With snapshot backup and log backup, you can restore a TiDB cluster to any point in time. This is also known as Point-in-Time Recovery (PITR).
The backup method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator. In the underlying implementation, BR gets the backup data of the TiDB cluster and then sends the data to the remote storage. BR stands for Backup & Restore, a command-line tool for distributed backup and recovery of TiDB cluster data.
Usage scenarios
If you have the following backup needs, you can use BR's snapshot backup method to make an ad-hoc backup of the TiDB cluster data to S3-compatible storage:
- To back up a large volume of data (more than 1 TiB) at a fast speed
- To get a direct backup of data as SST files (key-value pairs)
If you have the following backup needs, you can use BR log backup to make an ad-hoc backup of the TiDB cluster data to S3-compatible storage (you can combine log backup and snapshot backup to restore data more efficiently):
- To restore data of any point in time to a new cluster
- To achieve a recovery point objective (RPO) within several minutes
For other backup needs, refer to Backup and Restore Overview to choose an appropriate backup method.
Ad-hoc backup
Ad-hoc backup includes snapshot backup and log backup. For log backup, you can start or stop a log backup task and clean log backup data.
To get an ad-hoc backup, you need to create a Backup Custom Resource (CR) object to describe the backup details. Then, TiDB Operator performs the specific backup operation based on this Backup object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle this error manually.
This document provides an example of how to back up the data of the demo1 TiDB cluster in the test1 Kubernetes namespace to the AWS storage. The following are the detailed steps.
Prerequisites: Prepare for an ad-hoc backup
Save the following content as the backup-rbac.yaml file to create the required role-based access control (RBAC) resources:

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tidb-backup-manager
  labels:
    app.kubernetes.io/component: tidb-backup-manager
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["*"]
- apiGroups: ["br.pingcap.com"]
  resources: ["*"]
  verbs: ["get", "watch", "list", "update"]
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: tidb-backup-manager
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tidb-backup-manager
  labels:
    app.kubernetes.io/component: tidb-backup-manager
subjects:
- kind: ServiceAccount
  name: tidb-backup-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tidb-backup-manager

Execute the following command to create the RBAC resources in the test1 namespace:

kubectl apply -f backup-rbac.yaml -n test1

Grant permissions to the remote storage for the test1 namespace:
- If you are using Amazon S3 to back up your cluster, you can grant permissions in three methods. For more information, refer to AWS account permissions.
- If you are using other S3-compatible storage (such as Ceph and MinIO) to back up your cluster, you can grant permissions by using AccessKey and SecretKey.
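If you grant permissions by using AccessKey and SecretKey, the Backup CR examples in this document reference a Kubernetes Secret named s3-secret. A minimal sketch of creating it is shown below; it assumes the access_key and secret_key key names and uses placeholder values that you need to replace with your own credentials:

kubectl create secret generic s3-secret --from-literal=access_key=${access_key} --from-literal=secret_key=${secret_key} -n test1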
Snapshot backup
Depending on which method you choose to grant permissions to the remote storage when preparing for the ad-hoc backup, export your data to the S3-compatible storage by doing one of the following:
Method 1: If you grant permissions by importing AccessKey and SecretKey, create the Backup CR to back up cluster data as follows:

kubectl apply -f full-backup-s3.yaml

The content of full-backup-s3.yaml is as follows:

---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-full-backup-s3
  namespace: test1
spec:
  backupType: full
  br:
    cluster: demo1
    # logLevel: info
    # statusAddr: ${status_addr}
    # concurrency: 4
    # rateLimit: 0
    # timeAgo: ${time}
    # checksum: true
    # sendCredToTikv: true
    # options:
    # - --lastbackupts=420134118382108673
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-full-backup-folder

Method 2: If you grant permissions by associating IAM with Pod, create the Backup CR to back up cluster data as follows:

kubectl apply -f full-backup-s3.yaml

The content of full-backup-s3.yaml is as follows:

---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-full-backup-s3
  namespace: test1
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
spec:
  backupType: full
  br:
    cluster: demo1
    sendCredToTikv: false
    # logLevel: info
    # statusAddr: ${status_addr}
    # concurrency: 4
    # rateLimit: 0
    # timeAgo: ${time}
    # checksum: true
    # options:
    # - --lastbackupts=420134118382108673
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-full-backup-folder

Method 3: If you grant permissions by associating IAM with ServiceAccount, create the Backup CR to back up cluster data as follows:

kubectl apply -f full-backup-s3.yaml

The content of full-backup-s3.yaml is as follows:

---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-full-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  br:
    cluster: demo1
    sendCredToTikv: false
    # logLevel: info
    # statusAddr: ${status_addr}
    # concurrency: 4
    # rateLimit: 0
    # timeAgo: ${time}
    # checksum: true
    # options:
    # - --lastbackupts=420134118382108673
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-full-backup-folder
When configuring full-backup-s3.yaml, note the following:
- Since TiDB Operator v1.1.6, if you want to back up data incrementally, you only need to specify the last backup timestamp --lastbackupts in spec.br.options (see the sketch after this list). For the limitations of incremental backup, refer to Use BR to Back up and Restore Data.
- You can ignore the acl, endpoint, and storageClass configuration items of Amazon S3. For more information about S3-compatible storage configuration, refer to S3 storage fields.
- Some parameters in .spec.br are optional, such as logLevel and statusAddr. For more information about BR configuration, refer to BR fields.
- For v4.0.8 or a later version, BR can automatically adjust tikv_gc_life_time. You do not need to configure the spec.tikvGCLifeTime and spec.from fields in the Backup CR.
- For more information about the Backup CR fields, refer to Backup CR fields.
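For example, an incremental snapshot backup differs from the full backup examples above only in the spec.br.options field. The following fragment is a sketch: the CR name and storage prefix are hypothetical, and the --lastbackupts value is the illustrative timestamp from the comments above.

---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-incremental-backup-s3   # hypothetical name for illustration
  namespace: test1
spec:
  backupType: full
  br:
    cluster: demo1
    options:
    - --lastbackupts=420134118382108673   # commit timestamp of the previous backup (illustrative value)
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-incremental-backup-folder   # hypothetical prefix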
View the snapshot backup status
After you create the Backup CR, TiDB Operator starts the backup automatically. You can view the backup status by running the following command:
kubectl get backup -n test1 -o wide
From the output, you can find the following information for the Backup CR named demo1-full-backup-s3. The COMMITTS field indicates the time point of the snapshot backup:
NAME                   TYPE   MODE       STATUS     BACKUPPATH                              COMMITTS             ...
demo1-full-backup-s3   full   snapshot   Complete   s3://my-bucket/my-full-backup-folder/   436979621972148225   ...
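If you need the snapshot timestamp in a script (for example, to record the restore point), you can read it from the Backup CR status. The following sketch assumes that the COMMITTS column is backed by the .status.commitTs field:

kubectl get backup demo1-full-backup-s3 -n test1 -o jsonpath='{.status.commitTs}'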
Log backup
You can use a Backup CR to start and stop a log backup task and to manage the log backup data. Log backup grants permissions to the remote storage in the same way as snapshot backup. The examples in this section operate on a Backup CR named demo1-log-backup-s3 and assume that permissions to the remote storage are granted using AccessKey and SecretKey. See the following detailed steps.
Description of the logSubcommand field
In the Backup Custom Resource (CR), you can use the logSubcommand field to control the state of a log backup task. The logSubcommand field supports the following commands:
- log-start: initiates a new log backup task or resumes a paused task. Use this command to start the log backup process or resume a task from a paused state.
- log-pause: temporarily pauses the currently running log backup task. After pausing, you can use the log-start command to resume the task.
- log-stop: permanently stops the log backup task. After executing this command, the Backup CR enters a stopped state and cannot be restarted.
These commands provide fine-grained control over the lifecycle of log backup tasks, enabling you to start, pause, resume, and stop tasks effectively to manage log data retention in Kubernetes environments.
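The following sections change logSubcommand with kubectl edit. If you prefer a non-interactive command, a kubectl patch sketch such as the following achieves the same result (shown here for log-pause; substitute the subcommand you need):

kubectl patch backup demo1-log-backup-s3 -n test1 --type merge -p '{"spec":{"logSubcommand":"log-pause"}}'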
Start log backup
In the test1 namespace, create a Backup CR named demo1-log-backup-s3:

kubectl apply -f log-backup-s3.yaml

The content of log-backup-s3.yaml is as follows:

---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-log-backup-s3
  namespace: test1
spec:
  backupMode: log
  br:
    cluster: demo1
    sendCredToTikv: true
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-log-backup-folder

Wait for the start operation to complete:

kubectl get jobs -n test1

NAME                                   COMPLETIONS   ...
backup-demo1-log-backup-s3-log-start   1/1           ...

View the newly created Backup CR:

kubectl get backup -n test1

NAME                  MODE   STATUS    ....
demo1-log-backup-s3   log    Running   ....
View the log backup status
You can view the log backup status by checking the information of the Backup CR:
kubectl describe backup -n test1
From the output, you can find the following information for the Backup CR named demo1-log-backup-s3. The Log Checkpoint Ts field indicates the latest point in time that can be recovered:
Status:
  Backup Path: s3://my-bucket/my-log-backup-folder/
  Commit Ts:   436568622965194754
  Conditions:
    Last Transition Time:  2022-10-10T04:45:20Z
    Status:                True
    Type:                  Scheduled
    Last Transition Time:  2022-10-10T04:45:31Z
    Status:                True
    Type:                  Prepare
    Last Transition Time:  2022-10-10T04:45:31Z
    Status:                True
    Type:                  Running
  Log Checkpoint Ts:       436569119308644661
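To read only the recoverable point in time, for example in a monitoring script, you can query the CR status directly. The following sketch assumes that the Log Checkpoint Ts value shown above is exposed as the .status.logCheckpointTs field:

kubectl get backup demo1-log-backup-s3 -n test1 -o jsonpath='{.status.logCheckpointTs}'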
Pause log backup
You can pause a log backup task by setting the logSubcommand field of the Backup Custom Resource (CR) to log-pause. The following example shows how to pause the demo1-log-backup-s3 CR created in Start log backup.
kubectl edit backup demo1-log-backup-s3 -n test1
To pause the log backup task, change the value of logSubcommand from log-start to log-pause, then save and exit the editor. The modified content is as follows:
---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-log-backup-s3
  namespace: test1
spec:
  backupMode: log
  logSubcommand: log-pause
  br:
    cluster: demo1
    sendCredToTikv: true
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-log-backup-folder
You can verify that the STATUS of the demo1-log-backup-s3 Backup CR changes from Running to Pause:
kubectl get backup -n test1
NAME MODE STATUS ....
demo1-log-backup-s3 log Pause ....
Resume log backup
If a log backup task is paused, you can resume it by setting the logSubcommand field to log-start. The following example shows how to resume the demo1-log-backup-s3 CR that was paused in Pause log backup.
kubectl edit backup demo1-log-backup-s3 -n test1
To resume the log backup task, change the value of logSubcommand from log-pause to log-start, then save and exit the editor. The modified content is as follows:
---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-log-backup-s3
  namespace: test1
spec:
  backupMode: log
  logSubcommand: log-start
  br:
    cluster: demo1
    sendCredToTikv: true
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-log-backup-folder
You can verify that the STATUS of the demo1-log-backup-s3 Backup CR changes from Pause to Running:
kubectl get backup -n test1
NAME MODE STATUS ....
demo1-log-backup-s3 log Running ....
Stop log backup
You can stop a log backup task by setting the logSubcommand field of the Backup Custom Resource (CR) to log-stop. The following example shows how to stop the demo1-log-backup-s3 CR created in Start log backup.
kubectl edit backup demo1-log-backup-s3 -n test1
Change the value of logSubcommand to log-stop, then save and exit the editor. The modified content is as follows:
---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-log-backup-s3
  namespace: test1
spec:
  backupMode: log
  logSubcommand: log-stop
  br:
    cluster: demo1
    sendCredToTikv: true
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-log-backup-folder
You can verify that the STATUS of the Backup CR named demo1-log-backup-s3 changes from Running to Stopped:
kubectl get backup -n test1
NAME MODE STATUS ....
demo1-log-backup-s3 log Stopped ....
Clean log backup data
Because you already created a Backup CR named demo1-log-backup-s3 when you started log backup, you can clean the log backup data by modifying the same Backup CR. The following example shows how to clean log backup data generated before 2022-10-10T15:21:00+08:00.

kubectl edit backup demo1-log-backup-s3 -n test1

In the last line of the CR, append spec.logTruncateUntil: "2022-10-10T15:21:00+08:00". Then save and exit the editor. The modified content is as follows:

---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-log-backup-s3
  namespace: test1
spec:
  backupMode: log
  logSubcommand: log-start/log-pause/log-stop
  br:
    cluster: demo1
    sendCredToTikv: true
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-log-backup-folder
  logTruncateUntil: "2022-10-10T15:21:00+08:00"

Wait for the clean operation to complete:

kubectl get jobs -n test1

NAME                                      COMPLETIONS   ...
...
backup-demo1-log-backup-s3-log-truncate   1/1           ...

View the Backup CR information:

kubectl describe backup -n test1

...
Log Success Truncate Until:  2022-10-10T15:21:00+08:00
...

You can also view the information by running the following command:

kubectl get backup -n test1 -o wide

NAME                  MODE   STATUS    ...   LOGTRUNCATEUNTIL
demo1-log-backup-s3   log    Stopped   ...   2022-10-10T15:21:00+08:00
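As with the logSubcommand field, you can set logTruncateUntil without an interactive editor. The following kubectl patch sketch uses the same illustrative timestamp:

kubectl patch backup demo1-log-backup-s3 -n test1 --type merge -p '{"spec":{"logTruncateUntil":"2022-10-10T15:21:00+08:00"}}'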
Backup CR examples
Back up data of all clusters
---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  br:
    cluster: demo1
    sendCredToTikv: false
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
Back up data of a single database
The following example backs up data of the db1 database.
---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.*"
  br:
    cluster: demo1
    sendCredToTikv: false
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
Back up data of a single table
The following example backs up data of the db1.table1 table.
---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.table1"
  br:
    cluster: demo1
    sendCredToTikv: false
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
Back up data of multiple tables using the table filter
The following example backs up data of the db1.table1 table and db1.table2 table.
---
apiVersion: br.pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.table1"
  - "db1.table2"
  # ...
  br:
    cluster: demo1
    sendCredToTikv: false
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
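To use any of the preceding examples, save the content to a file (the file name below is arbitrary) and apply it:

kubectl apply -f backup-s3.yaml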
Delete the backup CR
If you no longer need the backup CR, refer to Delete the Backup CR.
Troubleshooting
If you encounter any problem during the backup process, refer to Common Deployment Failures.
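Before consulting that document, it is often helpful to inspect the events recorded on the Backup CR and the logs of the backup Job. The following sketch assumes the Job name follows the backup-<Backup CR name> pattern seen in the log backup examples above:

kubectl describe backup demo1-full-backup-s3 -n test1
kubectl logs -n test1 job/backup-demo1-full-backup-s3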