# Enable TLS between TiDB Components
This document describes how to enable Transport Layer Security (TLS) between components of the TiDB cluster on Kubernetes, which is supported since TiDB Operator v1.1.
To enable TLS between TiDB components, perform the following steps:

1. Generate certificates for each component of the TiDB cluster to be created:

    - A set of server-side certificates for the PD/TiKV/TiDB/Pump/Drainer/TiFlash/TiKV Importer/TiDB Lightning components, saved as the Kubernetes Secret objects: `${cluster_name}-${component_name}-cluster-secret`.
    - A set of shared client-side certificates for the various clients of each component, saved as the Kubernetes Secret object: `${cluster_name}-cluster-client-secret`.

2. Deploy the cluster, and set `.spec.tlsCluster.enabled` to `true`.
3. Configure `pd-ctl` and `tikv-ctl` to connect to the cluster.
Certificates can be issued using multiple methods. This document describes two of them; you can choose either method to issue certificates for the TiDB cluster.

If you need to renew an existing TLS certificate, refer to Renew and Replace the TLS Certificate.
## Generate certificates for components of the TiDB cluster
This section describes how to issue certificates using two methods: `cfssl` and `cert-manager`.
### Using cfssl
1. Download `cfssl` and initialize the certificate issuer:

    ```shell
    mkdir -p ~/bin
    curl -s -L -o ~/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    curl -s -L -o ~/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    chmod +x ~/bin/{cfssl,cfssljson}
    export PATH=$PATH:~/bin

    mkdir -p cfssl
    cd cfssl
    ```

2. Generate the `ca-config.json` configuration file:

    ```shell
    cat << EOF > ca-config.json
    {
        "signing": {
            "default": {
                "expiry": "8760h"
            },
            "profiles": {
                "internal": {
                    "expiry": "8760h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth",
                        "client auth"
                    ]
                },
                "client": {
                    "expiry": "8760h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "client auth"
                    ]
                }
            }
        }
    }
    EOF
    ```

3. Generate the `ca-csr.json` configuration file:

    ```shell
    cat << EOF > ca-csr.json
    {
        "CN": "TiDB",
        "CA": {
            "expiry": "87600h"
        },
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "US",
                "L": "CA",
                "O": "PingCAP",
                "ST": "Beijing",
                "OU": "TiDB"
            }
        ]
    }
    EOF
    ```

4. Generate the CA with the configured options:

    ```shell
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    ```
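    Optionally, you can inspect the generated CA certificate to confirm its expiry and subject (a quick sanity check; `ca.pem` is produced by the previous command):

    ```shell
    cfssl certinfo -cert ca.pem
    ```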
5. Generate the server-side certificates:

    In this step, a set of server-side certificates is created for each component of the TiDB cluster.
    - PD

        First, generate the default `pd-server.json` file:

        ```shell
        cfssl print-defaults csr > pd-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "${cluster_name}-pd",
              "${cluster_name}-pd.${namespace}",
              "${cluster_name}-pd.${namespace}.svc",
              "${cluster_name}-pd-peer",
              "${cluster_name}-pd-peer.${namespace}",
              "${cluster_name}-pd-peer.${namespace}.svc",
              "*.${cluster_name}-pd-peer",
              "*.${cluster_name}-pd-peer.${namespace}",
              "*.${cluster_name}-pd-peer.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

        Finally, generate the PD server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal pd-server.json | cfssljson -bare pd-server
        ```
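        To confirm that the `hosts` entries made it into the certificate's Subject Alternative Names, you can optionally inspect the issued certificate (this assumes OpenSSL 1.1.1 or later for the `-ext` flag; the same check works for the other components' certificates below):

        ```shell
        openssl x509 -in pd-server.pem -noout -ext subjectAltName
        ```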
    - TiKV

        First, generate the default `tikv-server.json` file:

        ```shell
        cfssl print-defaults csr > tikv-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "${cluster_name}-tikv",
              "${cluster_name}-tikv.${namespace}",
              "${cluster_name}-tikv.${namespace}.svc",
              "${cluster_name}-tikv-peer",
              "${cluster_name}-tikv-peer.${namespace}",
              "${cluster_name}-tikv-peer.${namespace}.svc",
              "*.${cluster_name}-tikv-peer",
              "*.${cluster_name}-tikv-peer.${namespace}",
              "*.${cluster_name}-tikv-peer.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

        Finally, generate the TiKV server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tikv-server.json | cfssljson -bare tikv-server
        ```
    - TiDB

        First, create the default `tidb-server.json` file:

        ```shell
        cfssl print-defaults csr > tidb-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "${cluster_name}-tidb",
              "${cluster_name}-tidb.${namespace}",
              "${cluster_name}-tidb.${namespace}.svc",
              "${cluster_name}-tidb-peer",
              "${cluster_name}-tidb-peer.${namespace}",
              "${cluster_name}-tidb-peer.${namespace}.svc",
              "*.${cluster_name}-tidb-peer",
              "*.${cluster_name}-tidb-peer.${namespace}",
              "*.${cluster_name}-tidb-peer.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

        Finally, generate the TiDB server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tidb-server.json | cfssljson -bare tidb-server
        ```
    - Pump

        First, create the default `pump-server.json` file:

        ```shell
        cfssl print-defaults csr > pump-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "*.${cluster_name}-pump",
              "*.${cluster_name}-pump.${namespace}",
              "*.${cluster_name}-pump.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

        Finally, generate the Pump server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal pump-server.json | cfssljson -bare pump-server
        ```
    - Drainer

        First, generate the default `drainer-server.json` file:

        ```shell
        cfssl print-defaults csr > drainer-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "<for the hosts list, see the following instructions>"
            ],
        ...
        ```

        Drainer is deployed using Helm. The `hosts` field varies with different configurations of the `values.yaml` file.

        If you have set the `drainerName` attribute when deploying Drainer, as follows:

        ```yaml
        ...
        # Changes the name of the StatefulSet and Pod.
        # The default value is clusterName-ReleaseName-drainer.
        # Do not change the name of a Drainer that is already running; changing it is not supported.
        drainerName: my-drainer
        ...
        ```

        then you can set the `hosts` attribute as follows:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "*.${drainer_name}",
              "*.${drainer_name}.${namespace}",
              "*.${drainer_name}.${namespace}.svc"
            ],
        ...
        ```

        If you have not set the `drainerName` attribute when deploying Drainer, configure the `hosts` attribute as follows:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "*.${cluster_name}-${release_name}-drainer",
              "*.${cluster_name}-${release_name}-drainer.${namespace}",
              "*.${cluster_name}-${release_name}-drainer.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, `${namespace}` is the namespace in which the TiDB cluster is deployed, `${release_name}` is the release name you set when `helm install` is executed, and `${drainer_name}` is the `drainerName` in the `values.yaml` file. You can also add your customized `hosts`.

        Finally, generate the Drainer server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal drainer-server.json | cfssljson -bare drainer-server
        ```
    - TiCDC

        First, generate the default `ticdc-server.json` file:

        ```shell
        cfssl print-defaults csr > ticdc-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "${cluster_name}-ticdc",
              "${cluster_name}-ticdc.${namespace}",
              "${cluster_name}-ticdc.${namespace}.svc",
              "${cluster_name}-ticdc-peer",
              "${cluster_name}-ticdc-peer.${namespace}",
              "${cluster_name}-ticdc-peer.${namespace}.svc",
              "*.${cluster_name}-ticdc-peer",
              "*.${cluster_name}-ticdc-peer.${namespace}",
              "*.${cluster_name}-ticdc-peer.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

        Finally, generate the TiCDC server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal ticdc-server.json | cfssljson -bare ticdc-server
        ```
    - TiFlash

        First, generate the default `tiflash-server.json` file:

        ```shell
        cfssl print-defaults csr > tiflash-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "${cluster_name}-tiflash",
              "${cluster_name}-tiflash.${namespace}",
              "${cluster_name}-tiflash.${namespace}.svc",
              "${cluster_name}-tiflash-peer",
              "${cluster_name}-tiflash-peer.${namespace}",
              "${cluster_name}-tiflash-peer.${namespace}.svc",
              "*.${cluster_name}-tiflash-peer",
              "*.${cluster_name}-tiflash-peer.${namespace}",
              "*.${cluster_name}-tiflash-peer.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

        Finally, generate the TiFlash server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tiflash-server.json | cfssljson -bare tiflash-server
        ```
    - TiKV Importer

        If you need to restore data using TiDB Lightning, you need to generate a server-side certificate for the TiKV Importer component.

        First, generate the default `importer-server.json` file:

        ```shell
        cfssl print-defaults csr > importer-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "${cluster_name}-importer",
              "${cluster_name}-importer.${namespace}",
              "${cluster_name}-importer.${namespace}.svc",
              "*.${cluster_name}-importer",
              "*.${cluster_name}-importer.${namespace}",
              "*.${cluster_name}-importer.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

        Finally, generate the TiKV Importer server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal importer-server.json | cfssljson -bare importer-server
        ```
    - TiDB Lightning

        If you need to restore data using TiDB Lightning, you need to generate a server-side certificate for the TiDB Lightning component.

        First, generate the default `lightning-server.json` file:

        ```shell
        cfssl print-defaults csr > lightning-server.json
        ```

        Then, edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
            "CN": "TiDB",
            "hosts": [
              "127.0.0.1",
              "::1",
              "${cluster_name}-lightning",
              "${cluster_name}-lightning.${namespace}",
              "${cluster_name}-lightning.${namespace}.svc"
            ],
        ...
        ```

        `${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

        Finally, generate the TiDB Lightning server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal lightning-server.json | cfssljson -bare lightning-server
        ```
6. Generate the client-side certificate:

    First, create the default `client.json` file:

    ```shell
    cfssl print-defaults csr > client.json
    ```

    Then, edit this file to change the `CN` and `hosts` attributes. You can leave the `hosts` empty:

    ```json
    ...
        "CN": "TiDB",
        "hosts": [],
    ...
    ```

    Finally, generate the client-side certificate:

    ```shell
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
    ```
7. Create the Kubernetes Secret objects:

    If you have generated a set of server-side certificates for each component and a set of shared client-side certificates as described in the above steps, create the Secret objects for the TiDB cluster by executing the following commands.

    The PD cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-pd-cluster-secret --namespace=${namespace} --from-file=tls.crt=pd-server.pem --from-file=tls.key=pd-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The TiKV cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-tikv-cluster-secret --namespace=${namespace} --from-file=tls.crt=tikv-server.pem --from-file=tls.key=tikv-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The TiDB cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-tidb-cluster-secret --namespace=${namespace} --from-file=tls.crt=tidb-server.pem --from-file=tls.key=tidb-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The Pump cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-pump-cluster-secret --namespace=${namespace} --from-file=tls.crt=pump-server.pem --from-file=tls.key=pump-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The Drainer cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-drainer-cluster-secret --namespace=${namespace} --from-file=tls.crt=drainer-server.pem --from-file=tls.key=drainer-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The TiCDC cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-ticdc-cluster-secret --namespace=${namespace} --from-file=tls.crt=ticdc-server.pem --from-file=tls.key=ticdc-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The TiFlash cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-tiflash-cluster-secret --namespace=${namespace} --from-file=tls.crt=tiflash-server.pem --from-file=tls.key=tiflash-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The TiKV Importer cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-importer-cluster-secret --namespace=${namespace} --from-file=tls.crt=importer-server.pem --from-file=tls.key=importer-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The TiDB Lightning cluster certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-lightning-cluster-secret --namespace=${namespace} --from-file=tls.crt=lightning-server.pem --from-file=tls.key=lightning-server-key.pem --from-file=ca.crt=ca.pem
    ```

    The client certificate Secret:

    ```shell
    kubectl create secret generic ${cluster_name}-cluster-client-secret --namespace=${namespace} --from-file=tls.crt=client.pem --from-file=tls.key=client-key.pem --from-file=ca.crt=ca.pem
    ```
    These commands create two kinds of Secret objects:

    - one Secret object for each PD/TiKV/TiDB/Pump/Drainer server-side certificate, which is loaded when the corresponding server starts;
    - one shared Secret object for their clients to connect with.
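    Optionally, you can verify that all the Secret objects have been created before deploying the cluster (this simply filters the Secret list by the cluster name):

    ```shell
    kubectl get secret --namespace=${namespace} | grep ${cluster_name}
    ```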
### Using cert-manager
1. Install `cert-manager`.

    Refer to cert-manager installation on Kubernetes for details.
2. Create an Issuer to issue certificates to the TiDB cluster.

    To configure `cert-manager`, create the Issuer resources.

    First, create a directory to save the files that `cert-manager` needs to create certificates:

    ```shell
    mkdir -p cert-manager
    cd cert-manager
    ```

    Then, create a `tidb-cluster-issuer.yaml` file with the following content:

    ```yaml
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: ${cluster_name}-selfsigned-ca-issuer
      namespace: ${namespace}
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: ${cluster_name}-ca
      namespace: ${namespace}
    spec:
      secretName: ${cluster_name}-ca-secret
      commonName: "TiDB"
      isCA: true
      duration: 87600h # 10yrs
      renewBefore: 720h # 30d
      issuerRef:
        name: ${cluster_name}-selfsigned-ca-issuer
        kind: Issuer
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: ${cluster_name}-tidb-issuer
      namespace: ${namespace}
    spec:
      ca:
        secretName: ${cluster_name}-ca-secret
    ```

    `${cluster_name}` is the name of the cluster. The above YAML file creates three objects:

    - An Issuer object of the SelfSigned type, used to generate the CA certificate needed by the Issuer of the CA type;
    - A Certificate object, whose `isCA` attribute is set to `true`;
    - An Issuer object of the CA type, used to issue TLS certificates between TiDB components.

    Finally, execute the following command to create the Issuer:

    ```shell
    kubectl apply -f tidb-cluster-issuer.yaml
    ```
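    Optionally, you can wait for the CA Certificate to become `Ready` before moving on (`Issuer` and `Certificate` are resource kinds installed with cert-manager):

    ```shell
    kubectl get issuer,certificate --namespace=${namespace}
    ```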
3. Generate the server-side certificates.

    In `cert-manager`, the Certificate resource represents the certificate interface. This certificate is issued and updated by the Issuer created in Step 2.

    According to Enable TLS Authentication, each component needs a server-side certificate, and all components need a shared client-side certificate for their clients.
    - PD

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-pd-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-pd-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "${cluster_name}-pd"
          - "${cluster_name}-pd.${namespace}"
          - "${cluster_name}-pd.${namespace}.svc"
          - "${cluster_name}-pd-peer"
          - "${cluster_name}-pd-peer.${namespace}"
          - "${cluster_name}-pd-peer.${namespace}.svc"
          - "*.${cluster_name}-pd-peer"
          - "*.${cluster_name}-pd-peer.${namespace}"
          - "*.${cluster_name}-pd-peer.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        `${cluster_name}` is the name of the cluster. Configure the items as follows:

        - Set `spec.secretName` to `${cluster_name}-pd-cluster-secret`.
        - Add `server auth` and `client auth` in `usages`.
        - Add the following DNS names in `dnsNames`. You can also add other DNS names according to your needs:
            - `${cluster_name}-pd`
            - `${cluster_name}-pd.${namespace}`
            - `${cluster_name}-pd.${namespace}.svc`
            - `${cluster_name}-pd-peer`
            - `${cluster_name}-pd-peer.${namespace}`
            - `${cluster_name}-pd-peer.${namespace}.svc`
            - `*.${cluster_name}-pd-peer`
            - `*.${cluster_name}-pd-peer.${namespace}`
            - `*.${cluster_name}-pd-peer.${namespace}.svc`
        - Add the following two IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs:
            - `127.0.0.1`
            - `::1`
        - Add the Issuer created above in `issuerRef`.
        - For other attributes, refer to cert-manager API.

        After the object is created, `cert-manager` generates a `${cluster_name}-pd-cluster-secret` Secret object to be used by the PD component of the TiDB cluster.
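        You can optionally confirm that the Secret has been generated; the same check applies to the Secrets generated for the other components below:

        ```shell
        kubectl get secret ${cluster_name}-pd-cluster-secret --namespace=${namespace}
        ```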
TiKV
apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ${cluster_name}-tikv-cluster-secret namespace: ${namespace} spec: secretName: ${cluster_name}-tikv-cluster-secret duration: 8760h # 365d renewBefore: 360h # 15d subject: organizations: - PingCAP commonName: "TiDB" usages: - server auth - client auth dnsNames: - "${cluster_name}-tikv" - "${cluster_name}-tikv.${namespace}" - "${cluster_name}-tikv.${namespace}.svc" - "${cluster_name}-tikv-peer" - "${cluster_name}-tikv-peer.${namespace}" - "${cluster_name}-tikv-peer.${namespace}.svc" - "*.${cluster_name}-tikv-peer" - "*.${cluster_name}-tikv-peer.${namespace}" - "*.${cluster_name}-tikv-peer.${namespace}.svc" ipAddresses: - 127.0.0.1 - ::1 issuerRef: name: ${cluster_name}-tidb-issuer kind: Issuer group: cert-manager.io${cluster_name}
is the name of the cluster. Configure the items as follows:Set
spec.secretName
to${cluster_name}-tikv-cluster-secret
.Add
server auth
andclient auth
inusages
.Add the following DNSs in
dnsNames
. You can also add other DNSs according to your needs:${cluster_name}-tikv
${cluster_name}-tikv.${namespace}
${cluster_name}-tikv.${namespace}.svc
${cluster_name}-tikv-peer
${cluster_name}-tikv-peer.${namespace}
${cluster_name}-tikv-peer.${namespace}.svc
*.${cluster_name}-tikv-peer
*.${cluster_name}-tikv-peer.${namespace}
*.${cluster_name}-tikv-peer.${namespace}.svc
Add the following 2 IPs in
ipAddresses
. You can also add other IPs according to your needs:127.0.0.1
::1
Add the Issuer created above in
issuerRef
.For other attributes, refer to cert-manager API.
After the object is created,
cert-manager
generates a${cluster_name}-tikv-cluster-secret
Secret object to be used by the TiKV component of the TiDB server.
    - TiDB

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-tidb-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-tidb-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "${cluster_name}-tidb"
          - "${cluster_name}-tidb.${namespace}"
          - "${cluster_name}-tidb.${namespace}.svc"
          - "${cluster_name}-tidb-peer"
          - "${cluster_name}-tidb-peer.${namespace}"
          - "${cluster_name}-tidb-peer.${namespace}.svc"
          - "*.${cluster_name}-tidb-peer"
          - "*.${cluster_name}-tidb-peer.${namespace}"
          - "*.${cluster_name}-tidb-peer.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        `${cluster_name}` is the name of the cluster. Configure the items as follows:

        - Set `spec.secretName` to `${cluster_name}-tidb-cluster-secret`.
        - Add `server auth` and `client auth` in `usages`.
        - Add the following DNS names in `dnsNames`. You can also add other DNS names according to your needs:
            - `${cluster_name}-tidb`
            - `${cluster_name}-tidb.${namespace}`
            - `${cluster_name}-tidb.${namespace}.svc`
            - `${cluster_name}-tidb-peer`
            - `${cluster_name}-tidb-peer.${namespace}`
            - `${cluster_name}-tidb-peer.${namespace}.svc`
            - `*.${cluster_name}-tidb-peer`
            - `*.${cluster_name}-tidb-peer.${namespace}`
            - `*.${cluster_name}-tidb-peer.${namespace}.svc`
        - Add the following two IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs:
            - `127.0.0.1`
            - `::1`
        - Add the Issuer created above in `issuerRef`.
        - For other attributes, refer to cert-manager API.

        After the object is created, `cert-manager` generates a `${cluster_name}-tidb-cluster-secret` Secret object to be used by the TiDB component of the TiDB cluster.
    - Pump

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-pump-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-pump-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "*.${cluster_name}-pump"
          - "*.${cluster_name}-pump.${namespace}"
          - "*.${cluster_name}-pump.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        `${cluster_name}` is the name of the cluster. Configure the items as follows:

        - Set `spec.secretName` to `${cluster_name}-pump-cluster-secret`.
        - Add `server auth` and `client auth` in `usages`.
        - Add the following DNS names in `dnsNames`. You can also add other DNS names according to your needs:
            - `*.${cluster_name}-pump`
            - `*.${cluster_name}-pump.${namespace}`
            - `*.${cluster_name}-pump.${namespace}.svc`
        - Add the following two IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs:
            - `127.0.0.1`
            - `::1`
        - Add the Issuer created above in `issuerRef`.
        - For other attributes, refer to cert-manager API.

        After the object is created, `cert-manager` generates a `${cluster_name}-pump-cluster-secret` Secret object to be used by the Pump component of the TiDB cluster.
    - Drainer

        Drainer is deployed using Helm. The `dnsNames` field varies with different configurations of the `values.yaml` file.

        If you have set the `drainerName` attribute when deploying Drainer, as follows:

        ```yaml
        ...
        # Changes the name of the StatefulSet and Pod.
        # The default value is clusterName-ReleaseName-drainer.
        # Do not change the name of a Drainer that is already running; changing it is not supported.
        drainerName: my-drainer
        ...
        ```

        then you need to configure the certificate as follows:

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-drainer-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-drainer-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "*.${drainer_name}"
          - "*.${drainer_name}.${namespace}"
          - "*.${drainer_name}.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        If you have not set the `drainerName` attribute when deploying Drainer, configure the `dnsNames` attribute as follows:

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-drainer-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-drainer-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "*.${cluster_name}-${release_name}-drainer"
          - "*.${cluster_name}-${release_name}-drainer.${namespace}"
          - "*.${cluster_name}-${release_name}-drainer.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        `${cluster_name}` is the name of the cluster, `${namespace}` is the namespace in which the TiDB cluster is deployed, `${release_name}` is the release name you set when `helm install` is executed, and `${drainer_name}` is the `drainerName` in the `values.yaml` file. You can also add your customized `dnsNames`.

        Configure the other items as follows:

        - Set `spec.secretName` to `${cluster_name}-drainer-cluster-secret`.
        - Add `server auth` and `client auth` in `usages`.
        - See the above descriptions for `dnsNames`.
        - Add the following two IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs:
            - `127.0.0.1`
            - `::1`
        - Add the Issuer created above in `issuerRef`.
        - For other attributes, refer to cert-manager API.

        After the object is created, `cert-manager` generates a `${cluster_name}-drainer-cluster-secret` Secret object to be used by the Drainer component of the TiDB cluster.
    - TiCDC

        Starting from v4.0.3, TiCDC supports TLS. TiDB Operator supports enabling TLS for TiCDC since v1.1.3.

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-ticdc-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-ticdc-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "${cluster_name}-ticdc"
          - "${cluster_name}-ticdc.${namespace}"
          - "${cluster_name}-ticdc.${namespace}.svc"
          - "${cluster_name}-ticdc-peer"
          - "${cluster_name}-ticdc-peer.${namespace}"
          - "${cluster_name}-ticdc-peer.${namespace}.svc"
          - "*.${cluster_name}-ticdc-peer"
          - "*.${cluster_name}-ticdc-peer.${namespace}"
          - "*.${cluster_name}-ticdc-peer.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        In the file, `${cluster_name}` is the name of the cluster. Configure the items as follows:

        - Set `spec.secretName` to `${cluster_name}-ticdc-cluster-secret`.
        - Add `server auth` and `client auth` in `usages`.
        - Add the following DNS names in `dnsNames`. You can also add other DNS names according to your needs:
            - `${cluster_name}-ticdc`
            - `${cluster_name}-ticdc.${namespace}`
            - `${cluster_name}-ticdc.${namespace}.svc`
            - `${cluster_name}-ticdc-peer`
            - `${cluster_name}-ticdc-peer.${namespace}`
            - `${cluster_name}-ticdc-peer.${namespace}.svc`
            - `*.${cluster_name}-ticdc-peer`
            - `*.${cluster_name}-ticdc-peer.${namespace}`
            - `*.${cluster_name}-ticdc-peer.${namespace}.svc`
        - Add the following two IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs:
            - `127.0.0.1`
            - `::1`
        - Add the Issuer created above in `issuerRef`.
        - For other attributes, refer to cert-manager API.

        After the object is created, `cert-manager` generates a `${cluster_name}-ticdc-cluster-secret` Secret object to be used by the TiCDC component of the TiDB cluster.
    - TiFlash

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-tiflash-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-tiflash-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "${cluster_name}-tiflash"
          - "${cluster_name}-tiflash.${namespace}"
          - "${cluster_name}-tiflash.${namespace}.svc"
          - "${cluster_name}-tiflash-peer"
          - "${cluster_name}-tiflash-peer.${namespace}"
          - "${cluster_name}-tiflash-peer.${namespace}.svc"
          - "*.${cluster_name}-tiflash-peer"
          - "*.${cluster_name}-tiflash-peer.${namespace}"
          - "*.${cluster_name}-tiflash-peer.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        In the file, `${cluster_name}` is the name of the cluster. Configure the items as follows:

        - Set `spec.secretName` to `${cluster_name}-tiflash-cluster-secret`.
        - Add `server auth` and `client auth` in `usages`.
        - Add the following DNS names in `dnsNames`. You can also add other DNS names according to your needs:
            - `${cluster_name}-tiflash`
            - `${cluster_name}-tiflash.${namespace}`
            - `${cluster_name}-tiflash.${namespace}.svc`
            - `${cluster_name}-tiflash-peer`
            - `${cluster_name}-tiflash-peer.${namespace}`
            - `${cluster_name}-tiflash-peer.${namespace}.svc`
            - `*.${cluster_name}-tiflash-peer`
            - `*.${cluster_name}-tiflash-peer.${namespace}`
            - `*.${cluster_name}-tiflash-peer.${namespace}.svc`
        - Add the following two IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs:
            - `127.0.0.1`
            - `::1`
        - Add the Issuer created above in `issuerRef`.
        - For other attributes, refer to cert-manager API.

        After the object is created, `cert-manager` generates a `${cluster_name}-tiflash-cluster-secret` Secret object to be used by the TiFlash component of the TiDB cluster.
    - TiKV Importer

        If you need to restore data using TiDB Lightning, you need to generate a server-side certificate for the TiKV Importer component.

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-importer-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-importer-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "${cluster_name}-importer"
          - "${cluster_name}-importer.${namespace}"
          - "${cluster_name}-importer.${namespace}.svc"
          - "*.${cluster_name}-importer"
          - "*.${cluster_name}-importer.${namespace}"
          - "*.${cluster_name}-importer.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        In the file, `${cluster_name}` is the name of the cluster. Configure the items as follows:

        - Set `spec.secretName` to `${cluster_name}-importer-cluster-secret`.
        - Add `server auth` and `client auth` in `usages`.
        - Add the following DNS names in `dnsNames`. You can also add other DNS names according to your needs:
            - `${cluster_name}-importer`
            - `${cluster_name}-importer.${namespace}`
            - `${cluster_name}-importer.${namespace}.svc`
        - Add the following two IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs:
            - `127.0.0.1`
            - `::1`
        - Add the Issuer created above in `issuerRef`.
        - For other attributes, refer to cert-manager API.

        After the object is created, `cert-manager` generates a `${cluster_name}-importer-cluster-secret` Secret object to be used by the TiKV Importer component of the TiDB cluster.
    - TiDB Lightning

        If you need to restore data using TiDB Lightning, you need to generate a server-side certificate for the TiDB Lightning component.

        ```yaml
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ${cluster_name}-lightning-cluster-secret
          namespace: ${namespace}
        spec:
          secretName: ${cluster_name}-lightning-cluster-secret
          duration: 8760h # 365d
          renewBefore: 360h # 15d
          subject:
            organizations:
            - PingCAP
          commonName: "TiDB"
          usages:
            - server auth
            - client auth
          dnsNames:
          - "${cluster_name}-lightning"
          - "${cluster_name}-lightning.${namespace}"
          - "${cluster_name}-lightning.${namespace}.svc"
          ipAddresses:
          - 127.0.0.1
          - ::1
          issuerRef:
            name: ${cluster_name}-tidb-issuer
            kind: Issuer
            group: cert-manager.io
        ```

        In the file, `${cluster_name}` is the name of the cluster. Configure the items as follows:

        - Set `spec.secretName` to `${cluster_name}-lightning-cluster-secret`.
        - Add `server auth` and `client auth` in `usages`.
        - Add the following DNS names in `dnsNames`. You can also add other DNS names according to your needs:
            - `${cluster_name}-lightning`
            - `${cluster_name}-lightning.${namespace}`
            - `${cluster_name}-lightning.${namespace}.svc`
        - Add the following two IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs:
            - `127.0.0.1`
            - `::1`
        - Add the Issuer created above in `issuerRef`.
        - For other attributes, refer to cert-manager API.

        After the object is created, `cert-manager` generates a `${cluster_name}-lightning-cluster-secret` Secret object to be used by the TiDB Lightning component of the TiDB cluster.
4. Generate the client-side certificate for components of the TiDB cluster.

    ```yaml
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: ${cluster_name}-cluster-client-secret
      namespace: ${namespace}
    spec:
      secretName: ${cluster_name}-cluster-client-secret
      duration: 8760h # 365d
      renewBefore: 360h # 15d
      subject:
        organizations:
        - PingCAP
      commonName: "TiDB"
      usages:
        - client auth
      issuerRef:
        name: ${cluster_name}-tidb-issuer
        kind: Issuer
        group: cert-manager.io
    ```

    `${cluster_name}` is the name of the cluster. Configure the items as follows:

    - Set `spec.secretName` to `${cluster_name}-cluster-client-secret`.
    - Add `client auth` in `usages`.
    - You can leave `dnsNames` and `ipAddresses` empty.
    - Add the Issuer created above in `issuerRef`.
    - For other attributes, refer to cert-manager API.

    After the object is created, `cert-manager` generates a `${cluster_name}-cluster-client-secret` Secret object to be used by the clients of the TiDB components.
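    Before deploying the cluster, you can optionally list all the Certificate objects and confirm that each one is marked `READY`:

    ```shell
    kubectl get certificate --namespace=${namespace}
    ```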
## Deploy the TiDB cluster
When you deploy a TiDB cluster, you can enable TLS between TiDB components, and set the `cert-allowed-cn` configuration item (for TiDB, the configuration item is `cluster-verify-cn`) to verify the CN (Common Name) of each component's certificate.
In this step, you need to perform the following operations:

- Create a TiDB cluster
- Enable TLS between the TiDB components, and enable CN verification
- Deploy a monitoring system
- Deploy the Pump component, and enable CN verification
1. Create a TiDB cluster:

    Create the `tidb-cluster.yaml` file:

    ```yaml
    apiVersion: pingcap.com/v1alpha1
    kind: TidbCluster
    metadata:
      name: ${cluster_name}
      namespace: ${namespace}
    spec:
      tlsCluster:
        enabled: true
      version: v6.1.0
      timezone: UTC
      pvReclaimPolicy: Retain
      pd:
        baseImage: pingcap/pd
        maxFailoverCount: 0
        replicas: 1
        requests:
          storage: "10Gi"
        config:
          security:
            cert-allowed-cn:
              - TiDB
      tikv:
        baseImage: pingcap/tikv
        maxFailoverCount: 0
        replicas: 1
        requests:
          storage: "100Gi"
        config:
          security:
            cert-allowed-cn:
              - TiDB
      tidb:
        baseImage: pingcap/tidb
        maxFailoverCount: 0
        replicas: 1
        service:
          type: ClusterIP
        config:
          security:
            cluster-verify-cn:
              - TiDB
      pump:
        baseImage: pingcap/tidb-binlog
        replicas: 1
        requests:
          storage: "100Gi"
        config:
          security:
            cert-allowed-cn:
              - TiDB
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: TidbMonitor
    metadata:
      name: ${cluster_name}
      namespace: ${namespace}
    spec:
      clusters:
      - name: ${cluster_name}
      prometheus:
        baseImage: prom/prometheus
        version: v2.27.1
      grafana:
        baseImage: grafana/grafana
        version: 7.5.11
      initializer:
        baseImage: pingcap/tidb-monitor-initializer
        version: v6.1.0
      reloader:
        baseImage: pingcap/tidb-monitor-reloader
        version: v1.0.1
      prometheusReloader:
        baseImage: quay.io/prometheus-operator/prometheus-config-reloader
        version: v0.49.0
      imagePullPolicy: IfNotPresent
    ```

    Execute `kubectl apply -f tidb-cluster.yaml` to create the TiDB cluster.

    This operation also deploys a monitoring system and the Pump component.
2. Create a Drainer component, and enable TLS and CN verification:

    Method 1: Set `drainerName` when you create Drainer.

    Edit the `values.yaml` file, set `drainerName`, and enable the TLS feature:

    ```yaml
    ...
    drainerName: ${drainer_name}
    tlsCluster:
      enabled: true
      certAllowedCN:
        - TiDB
    ...
    ```

    Deploy the Drainer cluster:

    ```shell
    helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${helm_version} -f values.yaml
    ```

    Method 2: Do not set `drainerName` when you create Drainer.

    Edit the `values.yaml` file, and enable the TLS feature:

    ```yaml
    ...
    tlsCluster:
      enabled: true
      certAllowedCN:
        - TiDB
    ...
    ```

    Deploy the Drainer cluster:

    ```shell
    helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${helm_version} -f values.yaml
    ```
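    To confirm that the release deployed successfully, you can optionally check its status with Helm:

    ```shell
    helm status ${release_name} --namespace=${namespace}
    ```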
3. Create the Backup/Restore resource objects:

    Create the `backup.yaml` file:

    ```yaml
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: ${cluster_name}-backup
      namespace: ${namespace}
    spec:
      backupType: full
      br:
        cluster: ${cluster_name}
        clusterNamespace: ${namespace}
        sendCredToTikv: true
      from:
        host: ${host}
        secretName: ${tidb_secret}
        port: 4000
        user: root
      s3:
        provider: aws
        region: ${my_region}
        secretName: ${s3_secret}
        bucket: ${my_bucket}
        prefix: ${my_folder}
    ```

    Deploy the Backup:

    ```shell
    kubectl apply -f backup.yaml
    ```

    Create the `restore.yaml` file:

    ```yaml
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: ${cluster_name}-restore
      namespace: ${namespace}
    spec:
      backupType: full
      br:
        cluster: ${cluster_name}
        clusterNamespace: ${namespace}
        sendCredToTikv: true
      to:
        host: ${host}
        secretName: ${tidb_secret}
        port: 4000
        user: root
      s3:
        provider: aws
        region: ${my_region}
        secretName: ${s3_secret}
        bucket: ${my_bucket}
        prefix: ${my_folder}
    ```

    Deploy the Restore:

    ```shell
    kubectl apply -f restore.yaml
    ```
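    You can optionally track the progress of these jobs through their custom resources (`Backup` and `Restore` are CRDs installed by TiDB Operator):

    ```shell
    kubectl get backup,restore --namespace=${namespace}
    ```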
## Configure `pd-ctl`, `tikv-ctl` and connect to the cluster
1. Mount the certificates.

    Configure `spec.pd.mountClusterClientSecret: true` and `spec.tikv.mountClusterClientSecret: true` with the following command:

    ```shell
    kubectl patch tc ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"pd":{"mountClusterClientSecret": true},"tikv":{"mountClusterClientSecret": true}}}'
    ```
2. Use `pd-ctl` to connect to the PD cluster.

    Get into the PD Pod:

    ```shell
    kubectl exec -it ${cluster_name}-pd-0 -n ${namespace} -- sh
    ```

    Use `pd-ctl`:

    ```shell
    cd /var/lib/cluster-client-tls
    /pd-ctl --cacert=ca.crt --cert=tls.crt --key=tls.key -u https://127.0.0.1:2379 member
    ```
3. Use `tikv-ctl` to connect to the TiKV cluster.

    Get into the TiKV Pod:

    ```shell
    kubectl exec -it ${cluster_name}-tikv-0 -n ${namespace} -- sh
    ```

    Use `tikv-ctl`:

    ```shell
    cd /var/lib/cluster-client-tls
    /tikv-ctl --ca-path=ca.crt --cert-path=tls.crt --key-path=tls.key --host 127.0.0.1:20160 cluster
    ```
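As an additional end-to-end check, you can call the PD HTTP API with the mounted certificates from inside the PD Pod. This is a sketch: it assumes `curl` is available in the image, which may not be the case for minimal images.

```shell
# Run inside the PD Pod, where the client certificates are mounted.
cd /var/lib/cluster-client-tls
curl --cacert ca.crt --cert tls.crt --key tls.key https://127.0.0.1:2379/pd/api/v1/members
```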