Restore Data from GCS Using BR
This document describes how to restore backup data stored in Google Cloud Storage (GCS) to a TiDB cluster on Kubernetes. It covers two restoration methods:
- Full restoration. This method uses the data of a snapshot backup to restore a TiDB cluster to the time point of that snapshot backup.
- Point-in-time recovery (PITR). This method uses the data of both a snapshot backup and a log backup to restore a TiDB cluster to any point in time.
The restore method described in this document is implemented based on the Custom Resource Definition (CRD) in TiDB Operator. For the underlying implementation, BR is used to restore the data. BR (Backup & Restore) is a command-line tool for distributed backup and recovery of TiDB cluster data.
PITR allows you to restore a new TiDB cluster to any point in time of the backup cluster. To use PITR, you need the backup data of snapshot backup and log backup. During the restoration, the snapshot backup data is first restored to the TiDB cluster, and then the log backup data between the snapshot backup time point and the specified point in time is restored to the TiDB cluster.
Full restoration
This section provides an example of how to restore the backup data from the `spec.gcs.prefix` folder of the `spec.gcs.bucket` bucket on GCS to the `demo2` TiDB cluster in the `test1` namespace. The following are the detailed steps.
Prerequisites: Complete the snapshot backup
In this example, the `my-full-backup-folder` folder in the `my-bucket` bucket of GCS stores the snapshot backup data. For the steps of performing a snapshot backup, refer to Back up Data to GCS Using BR.
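Before starting the restore, you can optionally confirm that the snapshot backup data is present in the expected location. The following is a minimal sketch using the `gsutil` CLI, assuming it is installed locally and authorized to read `my-bucket`:

```shell
# List the objects under the snapshot backup folder.
# A BR snapshot backup folder typically contains a backupmeta file
# and the backup SST files.
gsutil ls gs://my-bucket/my-full-backup-folder/
```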
Step 1: Prepare the restore environment
Before restoring backup data on GCS to TiDB using BR, take the following steps to prepare the restore environment:
1. Save the following content as the `backup-rbac.yaml` file to create the required role-based access control (RBAC) resources:

    ```yaml
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: tidb-backup-manager
      labels:
        app.kubernetes.io/component: tidb-backup-manager
    rules:
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["*"]
    - apiGroups: ["br.pingcap.com"]
      resources: ["backups", "restores"]
      verbs: ["get", "watch", "list", "update"]
    ---
    kind: ServiceAccount
    apiVersion: v1
    metadata:
      name: tidb-backup-manager
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: tidb-backup-manager
      labels:
        app.kubernetes.io/component: tidb-backup-manager
    subjects:
    - kind: ServiceAccount
      name: tidb-backup-manager
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: tidb-backup-manager
    ```

2. Execute the following command to create the RBAC resources in the `test1` namespace:

    ```shell
    kubectl apply -f backup-rbac.yaml -n test1
    ```

3. Grant permissions to the remote storage for the `test1` namespace. Refer to GCS account permissions; a minimal example of creating the storage Secret is sketched after this list.
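For step 3, the typical approach is to store a GCS service account key in a Kubernetes Secret that the `Restore` CR references through `spec.gcs.secretName`. The following is a minimal sketch, assuming the service account key has been downloaded locally as `google-credentials.json` (a hypothetical file name) and has read access to the bucket:

```shell
# Create the Secret referenced by secretName: gcs-secret in the Restore CR.
# google-credentials.json is assumed to be a locally downloaded GCS service
# account key file; "credentials" is the key name commonly used in
# TiDB Operator examples.
kubectl create secret generic gcs-secret --from-file=credentials=./google-credentials.json -n test1
```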
Step 2: Restore the backup data to a TiDB cluster
1. Create the `Restore` Custom Resource (CR) to restore the specified data to your cluster:

    ```shell
    kubectl apply -f restore-full-gcs.yaml
    ```

    The content of the `restore-full-gcs.yaml` file is as follows:

    ```yaml
    ---
    apiVersion: br.pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-gcs
      namespace: test1
    spec:
      # backupType: full
      br:
        cluster: demo2
        # logLevel: info
        # statusAddr: ${status-addr}
        # concurrency: 4
        # rateLimit: 0
        # checksum: true
        # sendCredToTikv: true
      gcs:
        projectId: ${project_id}
        secretName: gcs-secret
        bucket: my-bucket
        prefix: my-full-backup-folder
        # location: us-east1
        # storageClass: STANDARD_IA
        # objectAcl: private
    ```

    When configuring `restore-full-gcs.yaml`, note the following:

    - For more information about GCS configuration, refer to GCS fields.
    - Some parameters in `.spec.br` are optional, such as `logLevel`, `statusAddr`, `concurrency`, `rateLimit`, `checksum`, `timeAgo`, and `sendCredToTikv`. For more information about BR configuration, refer to BR fields.
    - For v4.0.8 or a later version, BR can automatically adjust `tikv_gc_life_time`. You do not need to configure the `spec.to` fields in the `Restore` CR.
    - For more information about the `Restore` CR fields, refer to Restore CR fields.
2. After creating the `Restore` CR, execute the following command to check the restore status:

    ```shell
    kubectl get restore -n test1 -o wide
    ```

    ```
    NAME                STATUS     ...
    demo2-restore-gcs   Complete   ...
    ```
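If the restore does not reach the `Complete` status, you can inspect the `Restore` CR in more detail. The following command is a starting point; it prints the CR's status fields and related events:

```shell
# Show the detailed status and events of the Restore CR.
kubectl describe restore demo2-restore-gcs -n test1
```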
Point-in-time recovery
This section provides an example of how to perform point-in-time recovery (PITR) in the `demo3` cluster in the `test1` namespace. PITR takes two steps:

- Restore the cluster to the time point of the snapshot backup using the snapshot backup data in the `spec.pitrFullBackupStorageProvider.gcs.prefix` folder of the `spec.pitrFullBackupStorageProvider.gcs.bucket` bucket.
- Restore the cluster to any point in time using the log backup data in the `spec.gcs.prefix` folder of the `spec.gcs.bucket` bucket.
The detailed steps are as follows.
Prerequisites: Complete data backup
In this example, the `my-bucket` bucket of GCS stores the following two types of backup data:

- The snapshot backup data generated during the log backup, stored in the `my-full-backup-folder-pitr` folder.
- The log backup data, stored in the `my-log-backup-folder-pitr` folder.

For detailed steps of how to perform data backup, refer to Back up Data to GCS Using BR.
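Because the PITR target time must not be later than the checkpoint that the log backup has reached (see the `Log Checkpoint Ts` constraint in Step 2 below), you may want to check that checkpoint before choosing a restoration time point. The following is a sketch, assuming the log backup `Backup` CR still exists in the `test1` namespace and is named `demo3-backup-log` (a hypothetical name); the exact status field that records the checkpoint (for example, `status.logCheckpointTs`) can vary by TiDB Operator version:

```shell
# List the Backup CRs, then inspect the log backup CR's status for the
# checkpoint it has reached. The CR name below is hypothetical.
kubectl get backup -n test1
kubectl get backup demo3-backup-log -n test1 -o yaml
```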
Step 1: Prepare the restoration environment
Refer to Step 1: Prepare the restore environment.
Step 2: Restore the backup data to a TiDB cluster
The example in this section restores the backup data to the cluster at a specified point in time. The specified restoration time point must be between the time point of the snapshot backup and the `Log Checkpoint Ts` of the log backup. The detailed steps are as follows:
1. Create a `Restore` CR named `demo3-restore-gcs` in the `test1` namespace, and specify the restoration time point as `2022-10-10T17:21:00+08:00`:

    ```shell
    kubectl apply -f restore-point-gcs.yaml
    ```

    The content of `restore-point-gcs.yaml` is as follows:

    ```yaml
    ---
    apiVersion: br.pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo3-restore-gcs
      namespace: test1
    spec:
      restoreMode: pitr
      br:
        cluster: demo3
      gcs:
        projectId: ${project_id}
        secretName: gcs-secret
        bucket: my-bucket
        prefix: my-log-backup-folder-pitr
      pitrRestoredTs: "2022-10-10T17:21:00+08:00"
      pitrFullBackupStorageProvider:
        gcs:
          projectId: ${project_id}
          secretName: gcs-secret
          bucket: my-bucket
          prefix: my-full-backup-folder-pitr
    ```

    When you configure `restore-point-gcs.yaml`, note the following:

    - `spec.restoreMode`: When you perform PITR, set this field to `pitr`. The default value of this field is `snapshot`, which means restoring the snapshot backup data.
2. Wait for the restoration operation to complete:

    ```shell
    kubectl get jobs -n test1
    ```

    ```
    NAME                        COMPLETIONS   ...
    restore-demo3-restore-gcs   1/1           ...
    ```

    You can also check the restoration status by using the following command:

    ```shell
    kubectl get restore -n test1 -o wide
    ```

    ```
    NAME                STATUS     ...
    demo3-restore-gcs   Complete   ...
    ```
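After the restoration completes, you can run a quick query against the restored cluster to confirm that the expected data is present. The following is a minimal sketch, assuming a local MySQL client, that the TiDB Service of the `demo3` cluster follows the usual TiDB Operator naming convention (`demo3-tidb`), and that the `root` user has no password (adjust the options as needed):

```shell
# Forward the TiDB SQL port (4000) to a local port, then run a query.
kubectl port-forward -n test1 svc/demo3-tidb 14000:4000 &
mysql -h 127.0.0.1 -P 14000 -u root -e "SHOW DATABASES;"
```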
Troubleshooting
If you encounter any problem during the restore process, refer to Common Deployment Failures.
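As a first step, it is often useful to check the logs of the restore job Pod. The job name follows the `restore-<Restore CR name>` pattern, as shown in the PITR example above:

```shell
# Print the logs of the Pod created by the restore job
# (here, the job from the PITR example).
kubectl logs -n test1 job/restore-demo3-restore-gcs
```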