
Configuration of tidb-cluster Chart

This document describes the configuration of the tidb-cluster chart.

Parameters

Each entry below lists the parameter name, its description, and its default value.

rbac.create
Whether to enable RBAC for Kubernetes. Default value: true

clusterName
TiDB cluster name. This variable is not set by default, in which case tidb-cluster directly uses ReleaseName during installation instead of the TiDB cluster name. Default value: nil

extraLabels
Adds extra labels to the TidbCluster object (CRD). Refer to labels. Default value: {}

schedulerName
Scheduler used by the TiDB cluster. Default value: tidb-scheduler

timezone
Default time zone of the TiDB cluster. Default value: UTC

pvReclaimPolicy
Reclaim policy for PVs (Persistent Volumes) used by the TiDB cluster. Default value: Retain

services[0].name
Name of the service exposed by the TiDB cluster. Default value: nil

services[0].type
Type of the service exposed by the TiDB cluster (selected from ClusterIP, NodePort, LoadBalancer). Default value: nil
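
For instance, a minimal services setting in values.yaml might look like the following sketch; the name pd and type ClusterIP are illustrative values, not requirements:

  services:
    # Expose one service for the cluster; name and type are examples.
    - name: pd
      type: ClusterIP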

discovery.image
Image of the PD service discovery component in the TiDB cluster. This component provides service discovery for each PD instance to coordinate the startup sequence when the PD cluster is started for the first time. Default value: pingcap/tidb-operator:v1.0.0-beta.3

discovery.imagePullPolicy
Image pull policy of the PD service discovery component. Default value: IfNotPresent

discovery.resources.limits.cpu
CPU resource limit for the PD service discovery component. Default value: 250m

discovery.resources.limits.memory
Memory resource limit for the PD service discovery component. Default value: 150Mi

discovery.resources.requests.cpu
CPU resource request for the PD service discovery component. Default value: 80m

discovery.resources.requests.memory
Memory resource request for the PD service discovery component. Default value: 50Mi

enableConfigMapRollout
Whether to enable automatic rolling updates for the TiDB cluster. If enabled, the TiDB cluster automatically updates the corresponding components when its ConfigMap changes. This configuration is only supported in TiDB Operator v1.0 and later versions. Default value: false

pd.config
Configuration of PD in configuration file format. To view the default PD configuration file, refer to pd/conf/config.toml and select the tag of the corresponding PD version. To view the descriptions of parameters, refer to PD configuration description and select the corresponding document version. You only need to modify the configuration according to the format in the configuration file.
If the TiDB Operator version <= v1.0.0-beta.3, the default value is nil.
If the TiDB Operator version > v1.0.0-beta.3, the default value is:
  [log]
  level = "info"
  [replication]
  location-labels = ["region", "zone", "rack", "host"]
For example:
  config: |
    [log]
    level = "info"
    [replication]
    location-labels = ["region", "zone", "rack", "host"]

pd.replicas
Number of Pods in PD. Default value: 3

pd.image
PD image. Default value: pingcap/pd:v3.0.0-rc.1

pd.imagePullPolicy
Image pull policy of the PD image. Default value: IfNotPresent

pd.logLevel
PD log level.
If the TiDB Operator version > v1.0.0-beta.3, configure this in pd.config:
  [log]
  level = "info"
Default value: info

pd.storageClassName
storageClass used by PD. storageClassName refers to a type of storage provided by the Kubernetes cluster. Different classes may be mapped to service quality levels, backup policies, or arbitrary policies determined by the cluster administrator. For details, refer to storage-classes. Default value: local-storage

pd.maxStoreDownTime
pd.maxStoreDownTime is the time a store node can be disconnected before the node is marked as down. When the status becomes down, the store node starts migrating its data to other store nodes.
If the TiDB Operator version > v1.0.0-beta.3, configure this in pd.config:
  [schedule]
  max-store-down-time = "30m"
Default value: 30m

pd.maxReplicas
pd.maxReplicas is the number of replicas of each Region in the TiDB cluster.
If the TiDB Operator version > v1.0.0-beta.3, configure this in pd.config:
  [replication]
  max-replicas = 3
Default value: 3

pd.resources.limits.cpu
CPU resource limit for each PD Pod. Default value: nil

pd.resources.limits.memory
Memory resource limit for each PD Pod. Default value: nil

pd.resources.limits.storage
Storage capacity limit for each PD Pod. Default value: nil

pd.resources.requests.cpu
CPU resource request for each PD Pod. Default value: nil

pd.resources.requests.memory
Memory resource request for each PD Pod. Default value: nil

pd.resources.requests.storage
Storage capacity request for each PD Pod. Default value: 1Gi
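
As a sketch, the PD replica and resource settings combine in values.yaml like this; the sizes shown are illustrative, not recommendations:

  pd:
    replicas: 3
    resources:
      requests:
        cpu: 1000m
        memory: 2Gi
        storage: 1Gi
      limits:
        cpu: 2000m
        memory: 4Gi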

pd.affinity
pd.affinity defines PD scheduling rules and preferences. For details, refer to affinity-and-anti-affinity. Default value: {}

pd.nodeSelector
pd.nodeSelector makes sure that PD Pods are dispatched only to nodes that have this key-value pair as a label. For details, refer to nodeselector. Default value: {}

pd.tolerations
pd.tolerations applies to PD Pods, allowing PD Pods to be dispatched to nodes with specified taints. For details, refer to taint-and-toleration. Default value: {}

pd.annotations
Adds specific annotations to PD Pods. Default value: {}
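
These scheduling fields are plain Kubernetes constructs and can be combined, for example, to pin PD onto dedicated nodes. In the following sketch the dedicated=pd label and the matching taint are hypothetical:

  pd:
    nodeSelector:
      # Hypothetical label: schedule PD only onto nodes labeled for PD.
      dedicated: pd
    tolerations:
      # Tolerate a matching (hypothetical) taint on those nodes.
      - key: dedicated
        operator: Equal
        value: pd
        effect: NoSchedule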

tikv.config
Configuration of TiKV in configuration file format. To view the default TiKV configuration file, refer to tikv/etc/config-template.toml and select the tag of the corresponding TiKV version. To view the descriptions of parameters, refer to TiKV configuration description and select the corresponding document version. You only need to modify the configuration according to the format in the configuration file.
The following two configuration items need to be configured explicitly:
  [storage.block-cache]
    shared = true
    capacity = "1GB"
Recommended: set capacity to 50% of tikv.resources.limits.memory
  [readpool.coprocessor]
    high-concurrency = 8
    normal-concurrency = 8
    low-concurrency = 8
Recommended: set these to 80% of tikv.resources.limits.cpu
If the TiDB Operator version <= v1.0.0-beta.3, the default value is nil.
If the TiDB Operator version > v1.0.0-beta.3, the default value is:
  log-level = "info"
For example:
  config: |
    log-level = "info"
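
To make the two explicit items concrete, the following is a hedged sketch for a TiKV Pod limited to 8 CPU cores and 16Gi of memory; the numbers are derived from the 50% and 80% recommendations above and are illustrative only:

  tikv:
    resources:
      limits:
        cpu: 8000m
        memory: 16Gi
    config: |
      [storage.block-cache]
      shared = true
      # roughly 50% of the 16Gi memory limit
      capacity = "8GB"
      [readpool.coprocessor]
      # roughly 80% of the 8-core CPU limit
      high-concurrency = 6
      normal-concurrency = 6
      low-concurrency = 6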

tikv.replicas
Number of Pods in TiKV. Default value: 3

tikv.image
Image of TiKV. Default value: pingcap/tikv:v3.0.0-rc.1

tikv.imagePullPolicy
Image pull policy of the TiKV image. Default value: IfNotPresent

tikv.logLevel
TiKV log level.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tikv.config:
  log-level = "info"
Default value: info

tikv.storageClassName
storageClass used by TiKV. storageClassName refers to a type of storage provided by the Kubernetes cluster. Different classes may be mapped to service quality levels, backup policies, or arbitrary policies determined by the cluster administrator. For details, refer to storage-classes. Default value: local-storage

tikv.syncLog
syncLog indicates whether to enable the raft log replication feature. If enabled, the feature ensures that data is not lost on power failure.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tikv.config:
  [raftstore]
  sync-log = true
Default value: true

tikv.grpcConcurrency
Size of the gRPC server thread pool.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tikv.config:
  [server]
  grpc-concurrency = 4
Default value: 4

tikv.resources.limits.cpu
CPU resource limit for each TiKV Pod. Default value: nil

tikv.resources.limits.memory
Memory resource limit for each TiKV Pod. Default value: nil

tikv.resources.limits.storage
Storage capacity limit for each TiKV Pod. Default value: nil

tikv.resources.requests.cpu
CPU resource request for each TiKV Pod. Default value: nil

tikv.resources.requests.memory
Memory resource request for each TiKV Pod. Default value: nil

tikv.resources.requests.storage
Storage capacity request for each TiKV Pod. Default value: 10Gi

tikv.affinity
tikv.affinity defines TiKV's scheduling rules and preferences. For details, refer to affinity-and-anti-affinity. Default value: {}

tikv.nodeSelector
tikv.nodeSelector makes sure that TiKV Pods are dispatched only to nodes that have this key-value pair as a label. For details, refer to nodeselector. Default value: {}

tikv.tolerations
tikv.tolerations applies to TiKV Pods, allowing TiKV Pods to be dispatched to nodes with specified taints. For details, refer to taint-and-toleration. Default value: {}

tikv.annotations
Adds specific annotations for TiKV Pods. Default value: {}

tikv.defaultcfBlockCacheSize
Specifies the block cache size. The block cache is used to cache uncompressed blocks. A large block cache setting can speed up reads. It is generally recommended to set the block cache size to 30%-50% of tikv.resources.limits.memory.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tikv.config:
  [rocksdb.defaultcf]
  block-cache-size = "1GB"
Since TiKV v3.0.0, it is no longer necessary to configure [rocksdb.defaultcf].block-cache-size and [rocksdb.writecf].block-cache-size. Instead, you can configure [storage.block-cache].capacity.
Default value: 1GB

tikv.writecfBlockCacheSize
Specifies the block cache size of writecf. It is generally recommended to be 10%-30% of tikv.resources.limits.memory.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tikv.config:
  [rocksdb.writecf]
  block-cache-size = "256MB"
Since TiKV v3.0.0, it is no longer necessary to configure [rocksdb.defaultcf].block-cache-size and [rocksdb.writecf].block-cache-size. Instead, you can configure [storage.block-cache].capacity.
Default value: 256MB

tikv.readpoolStorageConcurrency
Size of the TiKV storage thread pool for high/normal/low priority operations.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tikv.config:
  [readpool.storage]
  high-concurrency = 4
  normal-concurrency = 4
  low-concurrency = 4
Default value: 4

tikv.readpoolCoprocessorConcurrency
Size of the TiKV coprocessor thread pool. Usually, if tikv.resources.limits.cpu > 8, set tikv.readpoolCoprocessorConcurrency to tikv.resources.limits.cpu * 0.8.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tikv.config:
  [readpool.coprocessor]
  high-concurrency = 8
  normal-concurrency = 8
  low-concurrency = 8
Default value: 8

tikv.storageSchedulerWorkerPoolSize
Size of the worker pool of the TiKV scheduler, which should be increased for write-heavy workloads but kept smaller than the total number of CPU cores.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tikv.config:
  [storage]
  scheduler-worker-pool-size = 4
Default value: 4

tidb.config
Configuration of TiDB in configuration file format. To view the default TiDB configuration file, refer to configuration file and select the tag of the corresponding TiDB version. To view the descriptions of parameters, refer to TiDB configuration description and select the corresponding document version. You only need to modify the configuration according to the format in the configuration file.
The following configuration item needs to be configured explicitly:
  [performance]
    max-procs = 0
Recommended: set max-procs to the number of cores that corresponds to tidb.resources.limits.cpu
If the TiDB Operator version <= v1.0.0-beta.3, the default value is nil.
If the TiDB Operator version > v1.0.0-beta.3, the default value is:
  [log]
  level = "info"
For example:
  config: |
    [log]
    level = "info"

tidb.replicas
Number of Pods in TiDB. Default value: 2

tidb.image
Image of TiDB. Default value: pingcap/tidb:v3.0.0-rc.1

tidb.imagePullPolicy
Image pull policy of the TiDB image. Default value: IfNotPresent

tidb.logLevel
TiDB log level.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  [log]
  level = "info"
Default value: info

tidb.resources.limits.cpu
CPU resource limit for each TiDB Pod. Default value: nil

tidb.resources.limits.memory
Memory resource limit for each TiDB Pod. Default value: nil

tidb.resources.requests.cpu
CPU resource request for each TiDB Pod. Default value: nil

tidb.resources.requests.memory
Memory resource request for each TiDB Pod. Default value: nil

tidb.passwordSecretName
Name of the Secret that stores the TiDB username and password. This Secret can be created with the following command: kubectl create secret generic tidb-secret --from-literal=root=${password} --namespace=${namespace}. If the parameter is not set, the TiDB root password is empty. Default value: nil

tidb.initSql
Initialization script that is executed after the TiDB cluster is started successfully. Default value: nil
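
For example, tidb.initSql can seed a schema right after startup; in this sketch the database name app is hypothetical:

  tidb:
    initSql: |-
      CREATE DATABASE IF NOT EXISTS app;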

tidb.affinity
tidb.affinity defines the scheduling rules and preferences of TiDB. For details, refer to affinity-and-anti-affinity. Default value: {}

tidb.nodeSelector
tidb.nodeSelector makes sure that TiDB Pods are dispatched only to nodes that have this key-value pair as a label. For details, refer to nodeselector. Default value: {}

tidb.tolerations
tidb.tolerations applies to TiDB Pods, allowing TiDB Pods to be dispatched to nodes with specified taints. For details, refer to taint-and-toleration. Default value: {}

tidb.annotations
Adds specific annotations for TiDB Pods. Default value: {}

tidb.maxFailoverCount
Maximum number of failovers in TiDB. If it is set to 3, at most 3 TiDB instances can fail over at the same time. Default value: 3

tidb.service.type
Type of the service exposed by TiDB. Default value: NodePort

tidb.service.externalTrafficPolicy
Indicates whether this service routes external traffic to a node-local or cluster-wide endpoint. Two options are available: Cluster (default) and Local. Cluster hides the client's source IP; in this case, the traffic might need to be redirected to another node, but the overall load distribution is good. Local retains the client's source IP and avoids the traffic redirection of LoadBalancer and NodePort services, but carries a potential risk of unevenly distributed traffic. For details, refer to External Load Balancer. Default value: nil

tidb.service.loadBalancerIP
Specifies the TiDB load balancing IP. Some cloud service providers allow you to specify the loadBalancer IP. In these cases, a load balancer is created using the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, the loadBalancer is set up with an ephemeral IP address. If loadBalancerIP is specified but the cloud provider does not support the feature, the loadBalancerIP field you set is ignored. Default value: nil

tidb.service.mysqlNodePort
MySQL NodePort port exposed by the TiDB service. Default value: /

tidb.service.exposeStatus
Whether the TiDB service exposes the status port. Default value: true

tidb.service.statusNodePort
Specifies the NodePort through which the status port of the TiDB service is exposed. Default value: /
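
Putting the service fields together, the following is a sketch of a NodePort exposure in values.yaml; both port numbers are hypothetical and must fall within the cluster's NodePort range:

  tidb:
    service:
      type: NodePort
      exposeStatus: true
      # Hypothetical ports; adjust to your cluster's NodePort range.
      mysqlNodePort: 30020
      statusNodePort: 30040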

tidb.separateSlowLog
Whether to run a standalone container in sidecar mode to output TiDB's SlowLog.
If the TiDB Operator version <= v1.0.0-beta.3, the default value is false.
If the TiDB Operator version > v1.0.0-beta.3, the default value is true.

tidb.slowLogTailer.image
slowLogTailer image of TiDB. slowLogTailer is a sidecar container used to output TiDB's SlowLog. This configuration only takes effect when tidb.separateSlowLog = true. Default value: busybox:1.26.2

tidb.slowLogTailer.resources.limits.cpu
CPU resource limit for the slowLogTailer of each TiDB Pod. Default value: 100m

tidb.slowLogTailer.resources.limits.memory
Memory resource limit for the slowLogTailer of each TiDB Pod. Default value: 50Mi

tidb.slowLogTailer.resources.requests.cpu
CPU resource request for the slowLogTailer of each TiDB Pod. Default value: 20m

tidb.slowLogTailer.resources.requests.memory
Memory resource request for the slowLogTailer of each TiDB Pod. Default value: 5Mi

tidb.plugin.enable
Whether to enable the TiDB plugin feature. Default value: false

tidb.plugin.directory
Specifies the directory of the TiDB plugin. Default value: /plugins

tidb.plugin.list
Specifies the list of plugins loaded by TiDB. Plugin IDs follow the naming rule [plugin name]-[version], for example: 'conn_limit-1'. Default value: []
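
Combining the three plugin fields, a minimal sketch that loads the conn_limit-1 example plugin named above:

  tidb:
    plugin:
      enable: true
      directory: /plugins
      list: ["conn_limit-1"]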

tidb.preparedPlanCacheEnabled
Whether to enable the prepared plan cache of TiDB.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  [prepared-plan-cache]
  enabled = false
Default value: false

tidb.preparedPlanCacheCapacity
Capacity of the prepared plan cache of TiDB.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  [prepared-plan-cache]
  capacity = 100
Default value: 100

tidb.txnLocalLatchesEnabled
Whether to enable the transaction memory lock. It is recommended to enable the transaction memory lock when there are many local transaction conflicts.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  [txn-local-latches]
  enabled = false
Default value: false

tidb.txnLocalLatchesCapacity
Capacity of the transaction memory lock. The number of slots corresponding to the hash is automatically adjusted up to an exponential multiple of 2. Every slot occupies 32 bytes of memory. When the range of data to be written is wide (such as when importing data), setting a small capacity causes performance degradation.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  [txn-local-latches]
  capacity = 10240000
Default value: 10240000

tidb.tokenLimit
Limit on the number of sessions that TiDB executes concurrently.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  token-limit = 1000
Default value: 1000

tidb.memQuotaQuery
Memory quota for a TiDB query, in bytes; 32GB by default.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  mem-quota-query = 34359738368
Default value: 34359738368

tidb.checkMb4ValueInUtf8
Determines whether to check mb4 characters when the current character set is utf8.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  check-mb4-value-in-utf8 = true
Default value: true

tidb.treatOldVersionUtf8AsUtf8mb4
Used for upgrade compatibility. If this parameter is set to true, the utf8 character set in tables/columns from older versions is treated as the utf8mb4 character set.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  treat-old-version-utf8-as-utf8mb4 = true
Default value: true

tidb.lease
Term of the TiDB schema lease. It is very dangerous to change this parameter, so do not change it unless you know the possible consequences.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  lease = "45s"
Default value: 45s

tidb.maxProcs
Maximum number of CPU cores that are available; 0 represents the total number of CPUs on the machine or in the Pod.
If the TiDB Operator version > v1.0.0-beta.3, configure this in tidb.config:
  [performance]
  max-procs = 0
Default value: 0