Restore Data from S3-Compatible Storage Using BR
This document describes how to restore backup data stored in S3-compatible storage to a TiDB cluster on Kubernetes, using either of the following two restoration methods:
- Full restoration. This method uses the data of a snapshot backup to restore a TiDB cluster to the time point of that snapshot backup.
- Point-in-time recovery (PITR). This method uses the data of both a snapshot backup and a log backup to restore a TiDB cluster to any point in time.
The restore method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator. The underlying implementation uses BR (Backup & Restore), a command-line tool for distributed backup and restore of TiDB cluster data.
PITR allows you to restore a new TiDB cluster to any point in time of the backup cluster. To use PITR, you need the backup data of snapshot backup and log backup. During the restoration, the snapshot backup data is first restored to the TiDB cluster, and then the log backup data between the snapshot backup time point and the specified point in time is restored to the TiDB cluster.
Full restoration
This section provides an example of how to restore backup data from the `spec.s3.prefix` folder of the `spec.s3.bucket` bucket on Amazon S3 to the `demo2` TiDB cluster in the `test2` namespace. The following are the detailed steps.
Prerequisites: Complete the snapshot backup
In this example, the `my-full-backup-folder` folder in the `my-bucket` bucket of Amazon S3 stores the snapshot backup data. For the steps to perform a snapshot backup, refer to Back up Data to S3 Using BR.
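Before starting the restore, you can optionally confirm that the snapshot backup data exists at the expected path. The following is a minimal sketch using the AWS CLI, assuming the CLI is already configured with credentials that can read `my-bucket`:

```shell
# List the objects under the snapshot backup folder to confirm the backup data is present.
aws s3 ls s3://my-bucket/my-full-backup-folder/
```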
Step 1: Prepare the restore environment
Before restoring backup data in S3-compatible storage to TiDB using BR, take the following steps to prepare the restore environment:
1. Create a namespace for managing restoration. The following example creates a `restore-test` namespace:

    ```shell
    kubectl create namespace restore-test
    ```

2. Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace:

    ```shell
    kubectl apply -f backup-rbac.yaml -n restore-test
    ```

3. Grant permissions to the remote storage for the `restore-test` namespace.

    - If the data to be restored is in Amazon S3, you can grant permissions using one of three methods. For more information, see AWS account permissions.
    - If the data to be restored is in other S3-compatible storage (such as Ceph or MinIO), you can grant permissions by using AccessKey and SecretKey (see the sample Secret command after these steps).

4. For TiDB versions earlier than v4.0.8, you also need to complete the following preparation steps. For TiDB v4.0.8 or later versions, skip these steps.

    1. Make sure that you have the `SELECT` and `UPDATE` privileges on the `mysql.tidb` table of the target database so that the `Restore` CR can adjust the GC time before and after the restore.

    2. Create the `restore-demo2-tidb-secret` secret to store the account and password used to access the TiDB cluster:

        ```shell
        kubectl create secret generic restore-demo2-tidb-secret --from-literal=password=${password} --namespace=test2
        ```
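If you grant permissions by using AccessKey and SecretKey, the `Restore` CR references them through a Kubernetes Secret (`spec.s3.secretName`, for example `s3-secret` in Method 1 below). The following is a minimal sketch; the key names `access_key` and `secret_key` follow the convention used in TiDB Operator documentation, so verify them against the TiDB Operator version you run:

```shell
# Create the Secret referenced by spec.s3.secretName in the restore-test namespace.
# Replace ${access_key} and ${secret_key} with the credentials of your S3-compatible storage.
kubectl create secret generic s3-secret \
  --from-literal=access_key=${access_key} \
  --from-literal=secret_key=${secret_key} \
  --namespace=restore-test
```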
Step 2: Restore the backup data to a TiDB cluster
Depending on which method you choose to grant permissions to the remote storage when preparing the restore environment, you can restore the data by doing one of the following:
- Method 1: If you grant permissions by importing AccessKey and SecretKey, create the `Restore` CR to restore cluster data as described below:

    ```shell
    kubectl apply -f restore-full-s3.yaml
    ```

    The content of `restore-full-s3.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-s3
      namespace: restore-test
    spec:
      br:
        cluster: demo2
        clusterNamespace: test2
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
        # sendCredToTikv: true
      s3:
        provider: aws
        secretName: s3-secret
        region: us-west-1
        bucket: my-bucket
        prefix: my-full-backup-folder
    ```

- Method 2: If you grant permissions by associating IAM with Pod, create the `Restore` CR to restore cluster data as described below:

    ```shell
    kubectl apply -f restore-full-s3.yaml
    ```

    The content of `restore-full-s3.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-s3
      namespace: restore-test
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
    spec:
      br:
        cluster: demo2
        sendCredToTikv: false
        clusterNamespace: test2
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-full-backup-folder
    ```

- Method 3: If you grant permissions by associating IAM with ServiceAccount, create the `Restore` CR to restore cluster data as described below:

    ```shell
    kubectl apply -f restore-full-s3.yaml
    ```

    The content of `restore-full-s3.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-s3
      namespace: restore-test
    spec:
      serviceAccount: tidb-backup-manager
      # prune: afterFailed
      br:
        cluster: demo2
        sendCredToTikv: false
        clusterNamespace: test2
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-full-backup-folder
    ```
When configuring `restore-full-s3.yaml`, note the following:
- For more information about S3-compatible storage configuration, refer to S3 storage fields.
- Some parameters in `.spec.br` are optional, such as `logLevel`, `statusAddr`, `concurrency`, `rateLimit`, `checksum`, `timeAgo`, and `sendCredToTikv`. For more information about BR configuration, refer to BR fields.
- For v4.0.8 or a later version, BR can automatically adjust `tikv_gc_life_time`. You do not need to configure `spec.to` fields in the `Restore` CR.
- For more information about the `Restore` CR fields, refer to Restore CR fields.
- For TiDB v9.0.0 and later versions, the `Restore` CR supports a new field `.spec.prune`, which can be set to `afterFailed` to clean up residual metadata tables after a failed restore. Enabling this field changes the behavior and status of the `Restore` CR when it enters the `Failed` state. This feature is not supported in versions earlier than v9.0.0. For more details about the `.spec.prune` field, see Prune field.
After creating the `Restore` CR, execute the following command to check the restore status:

```shell
kubectl get restore -n restore-test -o wide
```

```
NAME               STATUS     ...
demo2-restore-s3   Complete   ...
```
If you set `.spec.prune` to `afterFailed`, you might see the following restore status:

```shell
kubectl get restore -n restore-test -o wide
```

```
NAME               STATUS          ...
demo2-restore-s3   PruneComplete   ...
```
Point-in-time recovery
This section provides an example of how to perform point-in-time recovery (PITR) for the `demo3` cluster in the `test3` namespace. PITR takes two steps:
- Restore the cluster to the time point of the snapshot backup using the snapshot backup data in the `spec.pitrFullBackupStorageProvider.s3.prefix` folder of the `spec.pitrFullBackupStorageProvider.s3.bucket` bucket.
- Restore the cluster to any point in time using the log backup data in the `spec.s3.prefix` folder of the `spec.s3.bucket` bucket.
The detailed steps are as follows.
Prerequisites: Complete data backup
In this example, the `my-bucket` bucket of Amazon S3 stores the following two types of backup data:
- The snapshot backup data generated during the log backup, stored in the `my-full-backup-folder-pitr` folder.
- The log backup data, stored in the `my-log-backup-folder-pitr` folder.
For detailed steps of how to perform data backup, refer to Back up Data to S3 Using BR.
Step 1: Prepare the restoration environment
Before restoring backup data in S3-compatible storage to TiDB using BR, take the following steps to prepare the restoration environment:
1. Create a namespace for managing restoration. The following example creates a `restore-test` namespace:

    ```shell
    kubectl create namespace restore-test
    ```

2. Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace:

    ```shell
    kubectl apply -f backup-rbac.yaml -n restore-test
    ```

3. Grant permissions to the remote storage for the `restore-test` namespace.

    - If the data to be restored is in Amazon S3, you can grant permissions using one of three methods. For more information, see AWS account permissions.
    - If the data to be restored is in other S3-compatible storage (such as Ceph or MinIO), you can grant permissions by using AccessKey and SecretKey.
Step 2: Restore the backup data to a TiDB cluster
The example in this section restores the backup data to a specified point in time in the cluster. The specified restoration time point must be between the time point of the snapshot backup and the Log Checkpoint Ts of the log backup.
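Before choosing the restoration time point (`pitrRestoredTs`), you can check the current Log Checkpoint Ts of your log backup to confirm that the target time falls inside the restorable range. The backup name `demo1-log-backup-s3` and the `backup-test` namespace below are hypothetical placeholders for the log backup `Backup` CR created in your backup cluster; the checkpoint is reported in the CR status, so verify the exact field name against your TiDB Operator version:

```shell
# Inspect the log backup Backup CR; its status reports the log checkpoint,
# which is the upper bound of the restorable time range.
kubectl describe backup demo1-log-backup-s3 -n backup-test
```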
For PITR, you grant permissions to the remote storage in the same way as for full restoration. The example in this section grants permissions by using AccessKey and SecretKey.
The detailed steps are as follows:
1. Create a `Restore` CR named `demo3-restore-s3` in the `restore-test` namespace and specify the restoration time point as `2022-10-10T17:21:00+08:00`:

    ```shell
    kubectl apply -f restore-point-s3.yaml
    ```

    The content of `restore-point-s3.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo3-restore-s3
      namespace: restore-test
    spec:
      restoreMode: pitr
      # prune: afterFailed
      br:
        cluster: demo3
        clusterNamespace: test3
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-log-backup-folder-pitr
      pitrRestoredTs: "2022-10-10T17:21:00+08:00"
      pitrFullBackupStorageProvider:
        s3:
          provider: aws
          region: us-west-1
          bucket: my-bucket
          prefix: my-full-backup-folder-pitr
    ```

    When you configure `restore-point-s3.yaml`, note the following:

    - `spec.restoreMode`: when you perform PITR, set this field to `pitr`. The default value of this field is `snapshot`, which means restoring from snapshot backup data only.

2. Wait for the restoration operation to complete:

    ```shell
    kubectl get jobs -n restore-test
    ```

    ```
    NAME                       COMPLETIONS   ...
    restore-demo3-restore-s3   1/1           ...
    ```

    You can also check the restoration status by using the following command:

    ```shell
    kubectl get restore -n restore-test -o wide
    ```

    ```
    NAME               STATUS     ...
    demo3-restore-s3   Complete   ...
    ```

    If you set `.spec.prune` to `afterFailed`, you might see the following restore status:

    ```shell
    kubectl get restore -n restore-test -o wide
    ```

    ```
    NAME               STATUS          ...
    demo3-restore-s3   PruneComplete   ...
    ```
Troubleshooting
If you encounter any problem during the restore process, refer to Common Deployment Failures.
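As a starting point for diagnosis, you can also check recent events in the restore namespace and the logs of the restore job. These are generic `kubectl` commands; the job name `restore-demo3-restore-s3` matches the output shown in the PITR example above:

```shell
# List recent events in the namespace where the Restore CR runs.
kubectl get events -n restore-test --sort-by=.lastTimestamp

# View the logs of the job created for the Restore CR.
kubectl logs job/restore-demo3-restore-s3 -n restore-test
```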