Deploy TiCDC in Kubernetes
TiCDC is a tool for replicating the incremental data of TiDB. This document describes how to deploy TiCDC in Kubernetes using TiDB Operator.
You can deploy TiCDC when deploying a new TiDB cluster, or add the TiCDC component to an existing TiDB cluster.
Prerequisites
TiDB Operator is deployed.
Fresh TiCDC deployment
To deploy TiCDC when deploying the TiDB cluster, refer to Deploy TiDB in General Kubernetes.
Add TiCDC to an existing TiDB cluster
Edit the TidbCluster Custom Resource:
kubectl edit tc ${cluster_name} -n ${namespace}
Add the TiCDC configuration as follows:
spec:
  ticdc:
    baseImage: pingcap/ticdc
    replicas: 3
Mount persistent volumes (PVs) for TiCDC.
TiCDC supports mounting multiple PVs. It is recommended that you plan the number of PVs required when deploying TiCDC for the first time. For more information, refer to Multiple disks mounting.
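As a sketch of what mounting multiple PVs can look like, the following extends the TidbCluster spec with TiDB Operator's storageVolumes field. The volume names, sizes, and mount paths here are illustrative placeholders, not prescribed values:

```yaml
spec:
  ticdc:
    baseImage: pingcap/ticdc
    replicas: 3
    # Illustrative example: each entry below provisions a separate PV for
    # every TiCDC Pod. Adjust names, sizes, and paths to your environment.
    storageVolumes:
      - name: sort-dir
        storageSize: 10Gi
        mountPath: /var/lib/sort-dir
      - name: ticdc-log
        storageSize: 2Gi
        mountPath: /var/lib/ticdc-log
```

Each entry in storageVolumes results in an additional PersistentVolumeClaim per TiCDC Pod, mounted at the given path.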
After the deployment, enter a TiCDC Pod by running kubectl exec:
kubectl exec -it ${pod_name} -n ${namespace} -- sh
Manage the cluster and data replication tasks by using cdc cli. For example:
./cdc cli capture list --pd=http://${cluster_name}-pd:2379
[
  {
    "id": "3ed24f6c-22cf-446f-9fe0-bf4a66d00f5b",
    "is-owner": false,
    "address": "${cluster_name}-ticdc-2.${cluster_name}-ticdc-peer.${namespace}.svc:8301"
  },
  {
    "id": "60e98ed7-cd49-45f4-b5ae-d3b85ba3cd96",
    "is-owner": false,
    "address": "${cluster_name}-ticdc-0.${cluster_name}-ticdc-peer.${namespace}.svc:8301"
  },
  {
    "id": "dc3592c0-dace-42a0-8afc-fb8506e8271c",
    "is-owner": true,
    "address": "${cluster_name}-ticdc-1.${cluster_name}-ticdc-peer.${namespace}.svc:8301"
  }
]
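After confirming the captures, a typical next step is to create a replication task (changefeed) from inside the Pod. The following is a minimal sketch assuming a MySQL-compatible downstream reachable from the cluster; ${downstream_host} and the credentials are placeholders for your environment:

```shell
# Create a changefeed that replicates to a MySQL-compatible sink.
# ${downstream_host}, user, and password are placeholders.
./cdc cli changefeed create \
    --pd=http://${cluster_name}-pd:2379 \
    --sink-uri="mysql://root:password@${downstream_host}:3306/"

# List existing changefeeds to verify that the task was created.
./cdc cli changefeed list --pd=http://${cluster_name}-pd:2379
```

These commands require a running TiDB cluster with PD reachable at the given address; they are shown here only to illustrate the cdc cli workflow.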
Starting from v4.0.3, TiCDC supports TLS. TiDB Operator supports enabling TLS for TiCDC since v1.1.3.
If TLS is enabled when you create the TiDB cluster, add the TLS certificate-related parameters when you use cdc cli:
./cdc cli capture list --pd=https://${cluster_name}-pd:2379 --ca=/var/lib/cluster-client-tls/ca.crt --cert=/var/lib/cluster-client-tls/tls.crt --key=/var/lib/cluster-client-tls/tls.key
If the server does not have access to an external network, refer to deploy TiDB cluster to download the required Docker images on a machine with external network access and upload them to the server.
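For example, transferring the TiCDC image to an offline server can be sketched as follows; the version tag and archive name are placeholders you should replace with your target version:

```shell
# On a machine with external network access: pull and export the image.
# Replace v5.0.0 with the TiCDC version matching your cluster.
docker pull pingcap/ticdc:v5.0.0
docker save -o ticdc.tar pingcap/ticdc:v5.0.0

# Copy ticdc.tar to the offline server, then load it there.
docker load -i ticdc.tar
```

After loading the image, make sure it is available to every Kubernetes node that may schedule a TiCDC Pod, or push it to an internal registry reachable from the cluster.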