Topology Configuration File for TiDB Deployment Using TiUP
To deploy or scale a TiDB cluster using TiUP, you need to provide a topology file that describes the cluster topology.
Similarly, to modify the cluster topology, you need to modify the topology file. The difference is that, after the cluster is deployed, you can modify only some of the fields in the topology file. This document introduces each section of the topology file and each field in each section.
When you deploy a TiDB cluster using TiUP, TiUP also deploys monitoring servers, such as Prometheus, Grafana, and Alertmanager. If you later scale out this cluster, TiUP also adds the new nodes into the monitoring scope. To customize the configurations of these monitoring servers, follow the instructions in Customize Configurations of Monitoring Servers.
File structure
A topology configuration file for TiDB deployment using TiUP might contain the following sections:
- global: The cluster's global configuration. Some of the configuration items use the default values and you can configure them separately in each instance.
- monitored: Configuration for monitoring services, namely, the blackbox_exporter and the node_exporter. On each machine, a node_exporter and a blackbox_exporter are deployed.
- server_configs: Components' global configuration. You can configure each component separately. If an instance has a configuration item with the same name, the instance's configuration item will take effect.
- component_versions: Component version. You can configure it when a component does not use the cluster version. This section is introduced in tiup-cluster v1.14.0.
- pd_servers: The configuration of the PD instance. This configuration specifies the machines to which the PD component is deployed.
- tidb_servers: The configuration of the TiDB instance. This configuration specifies the machines to which the TiDB component is deployed.
- tikv_servers: The configuration of the TiKV instance. This configuration specifies the machines to which the TiKV component is deployed.
- tiflash_servers: The configuration of the TiFlash instance. This configuration specifies the machines to which the TiFlash component is deployed.
- tiproxy_servers: The configuration of the TiProxy instance. This configuration specifies the machines to which the TiProxy component is deployed.
- pump_servers: The configuration of the Pump instance. This configuration specifies the machines to which the Pump component is deployed.
- drainer_servers: The configuration of the Drainer instance. This configuration specifies the machines to which the Drainer component is deployed.
- kvcdc_servers: The configuration of the TiKV-CDC instance. This configuration specifies the machines to which the TiKV-CDC component is deployed.
- cdc_servers: The configuration of the TiCDC instance. This configuration specifies the machines to which the TiCDC component is deployed.
- tispark_masters: The configuration of the TiSpark master instance. This configuration specifies the machines to which the TiSpark master component is deployed. Only one node of TiSpark master can be deployed.
- tispark_workers: The configuration of the TiSpark worker instance. This configuration specifies the machines to which the TiSpark worker component is deployed.
- monitoring_servers: Specifies the machines to which Prometheus and NGMonitoring are deployed. TiUP supports deploying multiple Prometheus instances but only the first instance is used.
- grafana_servers: The configuration of the Grafana instance. This configuration specifies the machines to which Grafana is deployed.
- alertmanager_servers: The configuration of the Alertmanager instance. This configuration specifies the machines to which Alertmanager is deployed.
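For reference, a minimal topology file that combines several of these sections might look like the following sketch. The IP addresses and directories are placeholders; replace them with your own values.
global:
  user: "tidb"
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
pd_servers:
  - host: 10.0.1.11
tidb_servers:
  - host: 10.0.1.12
tikv_servers:
  - host: 10.0.1.13
monitoring_servers:
  - host: 10.0.1.14
grafana_servers:
  - host: 10.0.1.14
alertmanager_servers:
  - host: 10.0.1.14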
global
The global section corresponds to the cluster's global configuration and has the following fields:
- user: The user used to start the deployed cluster. The default value is "tidb". If the user specified in the <user> field does not exist on the target machine, this user is automatically created.
- group: The user group to which a user belongs. It is specified when the user is created. The value defaults to that of the <user> field. If the specified group does not exist, it is automatically created.
- systemd_mode: Specifies the systemd mode used on the target machine during cluster deployment. The default value is "system". If set to "user", sudo permissions are not required on the target machine, meaning that the TiUP no-sudo mode is used.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. The default value is 22.
- enable_tls: Specifies whether to enable TLS for the cluster. After TLS is enabled, the generated TLS certificate must be used for connections between components or between the client and the component. The default value is false.
- listen_host: Specifies the default listening IP address. If it is empty, each instance automatically sets it to :: or 0.0.0.0 based on whether its host field contains ":". This field is introduced in tiup-cluster v1.14.0.
- deploy_dir: The deployment directory of each component. The default value is "deployed". Its application rules are as follows:
  - If the absolute path of deploy_dir is configured at the instance level, the actual deployment directory is the deploy_dir configured for the instance.
  - For each instance, if you do not configure deploy_dir, its default value is the relative path <component-name>-<component-port>.
  - If global.deploy_dir is an absolute path, the component is deployed to the <global.deploy_dir>/<instance.deploy_dir> directory.
  - If global.deploy_dir is a relative path, the component is deployed to the /home/<global.user>/<global.deploy_dir>/<instance.deploy_dir> directory.
 
- data_dir: The data directory. Default value: "data". Its application rules are as follows:
  - If the absolute path of data_dir is configured at the instance level, the actual data directory is the data_dir configured for the instance.
  - For each instance, if you do not configure data_dir, its default value is <global.data_dir>.
  - If data_dir is a relative path, the component data is placed in <deploy_dir>/<data_dir>. For the calculation rules of <deploy_dir>, see the application rules of the deploy_dir field.
 
- log_dir: The log directory. Default value: "log". Its application rules are as follows:
  - If the absolute path of log_dir is configured at the instance level, the actual log directory is the log_dir configured for the instance.
  - For each instance, if you do not configure log_dir, its default value is <global.log_dir>.
  - If log_dir is a relative path, the component log is placed in <deploy_dir>/<log_dir>. For the calculation rules of <deploy_dir>, see the application rules of the deploy_dir field.
 
- os: The operating system of the target machine. The field controls which operating system to adapt to for the components pushed to the target machine. The default value is "linux".
- arch: The CPU architecture of the target machine. The field controls which platform to adapt to for the binary packages pushed to the target machine. The supported values are "amd64" and "arm64". The default value is "amd64".
- resource_control: Runtime resource control. All configurations in this field are written into the service file of systemd. There is no limit by default. The resources that can be controlled are as follows:
  - memory_limit: Limits the maximum runtime memory. For example, "2G" means that a maximum of 2 GB of memory can be used.
  - cpu_quota: Limits the maximum CPU usage at runtime. For example, "200%".
  - io_read_bandwidth_max: Limits the maximum I/O bandwidth for disk reads. For example, "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M".
  - io_write_bandwidth_max: Limits the maximum I/O bandwidth for disk writes. For example, "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M".
  - limit_core: Controls the size of core dumps.
 
A global configuration example is as follows:
global:
  user: "tidb"
  resource_control:
    memory_limit: "2G"
In the above configuration, the tidb user is used to start the cluster. At the same time, each component is restricted to a maximum of 2 GB of memory when it is running.
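To illustrate the directory rules above, consider the following sketch, in which the paths and IP address are placeholders:
global:
  user: "tidb"
  deploy_dir: "/tidb-deploy"
tikv_servers:
  - host: 10.0.1.1
    port: 20160
Because global.deploy_dir is an absolute path and the TiKV instance does not configure its own deploy_dir, the instance-level default is the relative path tikv-20160, so the instance is deployed to /tidb-deploy/tikv-20160.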
monitored
monitored is used to configure the monitoring service on the target machine: node_exporter and blackbox_exporter. The following fields are included:
- node_exporter_port: The service port of node_exporter. The default value is 9100.
- blackbox_exporter_port: The service port of blackbox_exporter. The default value is 9115.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
A monitored configuration example is as follows:
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
The above configuration specifies that node_exporter uses the 9100 port and blackbox_exporter uses the 9115 port.
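If the default ports conflict with other services on the machine, you can change them and also customize the directories. The following is a sketch with placeholder values:
monitored:
  node_exporter_port: 9101
  blackbox_exporter_port: 9116
  deploy_dir: "/tidb-deploy/monitored-9101"
  data_dir: "/tidb-data/monitored-9101"
  log_dir: "/tidb-deploy/monitored-9101/log"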
server_configs
server_configs is used to configure services and to generate configuration files for each component. Similar to the global section, the configurations in this section can be overridden by configurations with the same names at the instance level. server_configs mainly includes the following fields:
- tidb: TiDB service-related configuration. For the complete configuration, see TiDB configuration file.
- tikv: TiKV service-related configuration. For the complete configuration, see TiKV configuration file.
- pd: PD service-related configuration. For the complete configuration, see PD configuration file.
- tiflash: TiFlash service-related configuration. For the complete configuration, see TiFlash configuration file.
- tiflash_learner: Each TiFlash node has a special built-in TiKV. This configuration item is used to configure this special TiKV. It is generally not recommended to modify the content under this configuration item.
- tiproxy: TiProxy service-related configuration. For the complete configuration, see TiProxy configuration file.
- pump: Pump service-related configuration. For the complete configuration, see TiDB Binlog configuration file.
- drainer: Drainer service-related configuration. For the complete configuration, see TiDB Binlog configuration file.
- cdc: TiCDC service-related configuration. For the complete configuration, see Deploy TiCDC.
A server_configs configuration example is as follows:
server_configs:
  tidb:
    lease: "45s"
    split-table: true
    token-limit: 1000
    instance.tidb_enable_ddl: true
  tikv:
    log-level: "info"
    readpool.unified.min-thread-count: 1
The above configuration specifies the global configuration of TiDB and TiKV.
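To illustrate how instance-level configurations override this section, the following sketch (values are examples only) sets a global TiKV log level and overrides it for a single instance; the instance on 10.0.1.1 runs with log-level "debug" while any other TiKV instance uses "info":
server_configs:
  tikv:
    log-level: "info"
tikv_servers:
  - host: 10.0.1.1
    config:
      log-level: "debug"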
component_versions
component_versions is used to specify the version number of a certain component. 
- When component_versions is not configured, each component either uses the same version number as the TiDB cluster (such as PD and TiKV) or uses the latest version (such as Alertmanager).
- When component_versions is configured, the corresponding component will use the specified version, and this version will be used in subsequent cluster scaling and upgrade operations.
Make sure you only configure it when you need to use a specific version of a component.
component_versions contains the following fields:
- tikv: The version of the TiKV component
- tiflash: The version of the TiFlash component
- pd: The version of the PD component
- tidb_dashboard: The version of the standalone TiDB Dashboard component
- pump: The version of the Pump component
- drainer: The version of the Drainer component
- cdc: The version of the CDC component
- kvcdc: The version of the TiKV-CDC component
- tiproxy: The version of the TiProxy component
- prometheus: The version of the Prometheus component
- grafana: The version of the Grafana component
- alertmanager: The version of the Alertmanager component
The following is an example configuration for component_versions:
component_versions:
  kvcdc: "v1.1.1"
The preceding configuration specifies the version number of TiKV-CDC to be v1.1.1.
pd_servers
pd_servers specifies the machines to which PD services are deployed. It also specifies the service configuration on each machine. pd_servers is an array, and each element of the array contains the following fields:
- host: Specifies the machine to which the PD services are deployed. The field value is an IP address and is mandatory.
- listen_host: When the machine has multiple IP addresses, listen_host specifies the listening IP address of the service. The default value is 0.0.0.0.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- name: Specifies the name of the PD instance. Different instances must have unique names; otherwise, instances cannot be deployed.
- client_port: Specifies the port that PD uses to connect to the client. The default value is 2379.
- peer_port: Specifies the port for communication between PDs. The default value is 2380.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config: The configuration rule of this field is the same as the pd configuration rule in server_configs. If this field is configured, the field content is merged with the pd content in server_configs (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in host.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- listen_host
- name
- client_port
- peer_port
- deploy_dir
- data_dir
- log_dir
- arch
- os
A pd_servers configuration example is as follows:
pd_servers:
  - host: 10.0.1.11
    config:
      schedule.max-merge-region-size: 20
      schedule.max-merge-region-keys: 200000
  - host: 10.0.1.12
The above configuration specifies that PD will be deployed on 10.0.1.11 and 10.0.1.12, and makes specific configurations for the PD instance on 10.0.1.11.
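If you deploy multiple PD instances on the same machine, you must give each instance a unique name and non-conflicting ports. The following sketch uses placeholder names and ports:
pd_servers:
  - host: 10.0.1.11
    name: pd-1
    client_port: 2379
    peer_port: 2380
  - host: 10.0.1.11
    name: pd-2
    client_port: 2381
    peer_port: 2382
Because the default instance directories contain the port number (for example, pd-2379 and pd-2381), the two instances do not conflict as long as you do not point them at the same absolute deploy_dir or data_dir.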
tidb_servers
tidb_servers specifies the machines to which TiDB services are deployed. It also specifies the service configuration on each machine. tidb_servers is an array, and each element of the array contains the following fields:
- host: Specifies the machine to which the TiDB services are deployed. The field value is an IP address and is mandatory.
- listen_host: When the machine has multiple IP addresses, listen_host specifies the listening IP address of the service. The default value is 0.0.0.0.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of TiDB services, which is used to provide connections to the MySQL client. The default value is 4000.
- status_port: The listening port of the TiDB status service, which is used to view the status of the TiDB services from the outside via HTTP requests. The default value is 10080.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config: The configuration rule of this field is the same as the tidb configuration rule in server_configs. If this field is configured, the field content is merged with the tidb content in server_configs (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in host.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- listen_host
- port
- status_port
- deploy_dir
- log_dir
- arch
- os
A tidb_servers configuration example is as follows:
tidb_servers:
  - host: 10.0.1.14
    config:
      log.level: warn
      log.slow-query-file: tidb-slow-overwrited.log
  - host: 10.0.1.15
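The above configuration specifies that TiDB will be deployed on 10.0.1.14 and 10.0.1.15, and sets the log level and slow query log file only for the instance on 10.0.1.14.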
tikv_servers
tikv_servers specifies the machines to which TiKV services are deployed. It also specifies the service configuration on each machine. tikv_servers is an array, and each element of the array contains the following fields:
- host: Specifies the machine to which the TiKV services are deployed. The field value is an IP address and is mandatory.
- listen_host: When the machine has multiple IP addresses, listen_host specifies the listening IP address of the service. The default value is 0.0.0.0.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of the TiKV services. The default value is 20160.
- status_port: The listening port of the TiKV status service. The default value is 20180.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config: The configuration rule of this field is the same as the tikv configuration rule in server_configs. If this field is configured, the field content is merged with the tikv content in server_configs (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in host.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- listen_host
- port
- status_port
- deploy_dir
- data_dir
- log_dir
- arch
- os
A tikv_servers configuration example is as follows:
tikv_servers:
  - host: 10.0.1.14
    config:
      server.labels: { zone: "zone1", host: "host1" }
  - host: 10.0.1.15
    config:
      server.labels: { zone: "zone1", host: "host2" }
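The above configuration specifies that TiKV will be deployed on 10.0.1.14 and 10.0.1.15, and attaches different server.labels to the two instances so that PD can distinguish their locations when scheduling replicas.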
tiflash_servers
tiflash_servers specifies the machines to which TiFlash services are deployed. It also specifies the service configuration on each machine. This section is an array, and each element of the array contains the following fields:
- host: Specifies the machine to which the TiFlash services are deployed. The field value is an IP address and is mandatory.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- tcp_port: The port of the TiFlash TCP service for internal testing purposes. The default value is 9000. Starting from TiUP v1.12.5, this configuration item does not take effect on clusters that are v7.1.0 or later.
- flash_service_port: The port via which TiFlash provides services. TiDB reads data from TiFlash via this port. The default value is 3930.
- metrics_port: TiFlash's status port, which is used to output metric data. The default value is 8234.
- flash_proxy_port: The port of the built-in TiKV. The default value is 20170.
- flash_proxy_status_port: The status port of the built-in TiKV. The default value is 20292.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global. TiFlash supports multiple data_dir directories separated by commas.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- tmp_path: The storage path of TiFlash temporary files. The default value is [path or the first directory of storage.latest.dir] + "/tmp".
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config: The configuration rule of this field is the same as the tiflash configuration rule in server_configs. If this field is configured, the field content is merged with the tiflash content in server_configs (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in host.
- learner_config: Each TiFlash node has a special built-in TiKV. This configuration item is used to configure this special TiKV. It is generally not recommended to modify the content under this configuration item.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
After the deployment, among the fields above, you can only add directories to data_dir; you cannot modify the following fields:
- host
- tcp_port
- flash_service_port
- flash_proxy_port
- flash_proxy_status_port
- metrics_port
- deploy_dir
- log_dir
- tmp_path
- arch
- os
A tiflash_servers configuration example is as follows:
tiflash_servers:
  - host: 10.0.1.21
  - host: 10.0.1.22
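Because TiFlash supports multiple comma-separated data directories (see the data_dir field above), you can spread TiFlash data across several disks. The following sketch uses placeholder paths:
tiflash_servers:
  - host: 10.0.1.21
    data_dir: "/ssd0/tiflash,/ssd1/tiflash"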
tiproxy_servers
tiproxy_servers specifies the machines to which the TiProxy services are deployed and the service configuration on each machine. tiproxy_servers is an array, and each element of the array contains the following fields:
- host: Specifies the IP address of the machine to which the TiProxy services are deployed. This field is mandatory.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of the TiProxy SQL services. The default value is 6000.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated based on the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated based on the data_dir directory configured in global.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is of string type. The value is the ID of the NUMA node, such as "0,1".
- config: The configuration rule of this field is the same as the tiproxy configuration rule in server_configs. If this field is configured, the field content is merged with the tiproxy content in server_configs. If these two fields overlap, the content of this field takes effect. Subsequently, a configuration file is generated and sent to the machine specified in host.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- port
- deploy_dir
- data_dir
- arch
- os
A tiproxy_servers configuration example is as follows:
tiproxy_servers:
  - host: 10.0.1.21
  - host: 10.0.1.22
pump_servers
pump_servers specifies the machines to which the Pump services of TiDB Binlog are deployed. It also specifies the service configuration on each machine. pump_servers is an array, and each element of the array contains the following fields:
- host: Specifies the machine to which the Pump services are deployed. The field value is an IP address and is mandatory.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of the Pump services. The default value is 8250.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config: The configuration rule of this field is the same as the pump configuration rule in server_configs. If this field is configured, the field content is merged with the pump content in server_configs (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in host.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- port
- deploy_dir
- data_dir
- log_dir
- arch
- os
A pump_servers configuration example is as follows:
pump_servers:
  - host: 10.0.1.21
    config:
      gc: 7
  - host: 10.0.1.22
drainer_servers
drainer_servers specifies the machines to which the Drainer services of TiDB Binlog are deployed. It also specifies the service configuration on each machine. drainer_servers is an array. Each array element contains the following fields:
- host: Specifies the machine to which the Drainer services are deployed. The field value is an IP address and is mandatory.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of Drainer services. The default value is 8249.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- commit_ts (deprecated): When Drainer starts, it reads the checkpoint. If Drainer gets no checkpoint, it uses this field as the replication time point for the initial startup. This field defaults to -1 (Drainer always gets the latest timestamp from the PD as the commit_ts).
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config: The configuration rule of this field is the same as the drainer configuration rule in server_configs. If this field is configured, the field content is merged with the drainer content in server_configs (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in host.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- port
- deploy_dir
- data_dir
- log_dir
- arch
- os
The commit_ts field is deprecated since TiUP v1.9.2 and is not recorded in the starting script of Drainer. If you still need to use this field, refer to the following example to configure the initial-commit-ts field in config.
A drainer_servers configuration example is as follows:
drainer_servers:
  - host: 10.0.1.21
    config:
      initial-commit-ts: -1
      syncer.db-type: "mysql"
      syncer.to.host: "127.0.0.1"
      syncer.to.user: "root"
      syncer.to.password: ""
      syncer.to.port: 3306
      syncer.ignore-table:
        - db-name: test
          tbl-name: log
        - db-name: test
          tbl-name: audit
kvcdc_servers
kvcdc_servers specifies the machines to which the TiKV-CDC services are deployed. It also specifies the service configuration on each machine. kvcdc_servers is an array. Each array element contains the following fields:
- host: Specifies the machine to which the TiKV-CDC services are deployed. The field value is an IP address and is mandatory.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of the TiKV-CDC services. The default value is 8600.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data-dir: Specifies the directory that TiKV-CDC uses to store temporary files primarily for sorting (optional). The free disk space for this directory is recommended to be greater than or equal to 500 GiB.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- gc-ttl: The TTL (Time to Live, in seconds) of the service-level GC safepoint in PD set by TiKV-CDC (optional). It is the duration that replication tasks can be suspended, defaulting to 86400, which is 24 hours. Note that suspending replication tasks affects the progress of the TiKV garbage collection safepoint. The longer the gc-ttl, the longer changefeeds can be suspended, but at the same time more obsolete data is kept and occupies more space, and vice versa.
- tz: The time zone that the TiKV-CDC services use. TiKV-CDC uses this time zone when internally converting time data types such as timestamp and when replicating data to the downstream. The default value is the local time zone where the process runs.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config: The address of the configuration file that TiKV-CDC uses (optional).
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- port
- deploy_dir
- data_dir
- log_dir
- arch
- os
A kvcdc_servers configuration example is as follows:
kvcdc_servers:
  - host: 10.0.1.21
  - host: 10.0.1.22
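To customize the sort directory and extend the time that changefeeds can be suspended, you can configure data-dir and gc-ttl per instance. The following sketch uses placeholder values (172800 seconds is 48 hours):
kvcdc_servers:
  - host: 10.0.1.21
    port: 8600
    data-dir: "/tikv-cdc-data"
    gc-ttl: 172800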
cdc_servers
cdc_servers specifies the machines to which the TiCDC services are deployed. It also specifies the service configuration on each machine. cdc_servers is an array. Each array element contains the following fields:
- host: Specifies the machine to which the TiCDC services are deployed. The field value is an IP address and is mandatory.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of the TiCDC services. The default value is 8300.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- gc-ttl: The Time To Live (TTL) duration of the service-level GC safepoint set by TiCDC in PD, in seconds. The default value is 86400, which is 24 hours.
- tz: The time zone that the TiCDC services use. TiCDC uses this time zone when internally converting time data types such as timestamp and when replicating data to the downstream. The default value is the local time zone where the process runs.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config: The field content is merged with the cdc content in server_configs (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in host.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
- ticdc_cluster_id: Specifies the TiCDC cluster ID corresponding to the service. If this field is not specified, the service joins the default TiCDC cluster. This field only takes effect in TiDB v6.3.0 or later versions.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- port
- deploy_dir
- data_dir
- log_dir
- arch
- os
- ticdc_cluster_id
A cdc_servers configuration example is as follows:
cdc_servers:
  - host: 10.0.1.20
    gc-ttl: 86400
    data_dir: "/cdc-data"
  - host: 10.0.1.21
    gc-ttl: 86400
    data_dir: "/cdc-data"
tispark_masters
tispark_masters specifies the machines to which the master node of TiSpark is deployed. It also specifies the service configuration on each machine. tispark_masters is an array. Each array element contains the following fields:
- host: Specifies the machine to which the TiSpark master is deployed. The field value is an IP address and is mandatory.
- listen_host: When the machine has multiple IP addresses, listen_host specifies the listening IP address of the service. The default value is 0.0.0.0.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: Spark's listening port, used for communication between the nodes. The default value is 7077.
- web_port: Spark's web port, which provides web services and the task status. The default value is 8080.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- java_home: Specifies the path of the JRE environment to be used. This parameter corresponds to the JAVA_HOME system environment variable.
- spark_config: Configures the TiSpark services. Then, a configuration file is generated and sent to the machine specified in host.
- spark_env: Configures the environment variables when Spark starts.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- listen_host
- port
- web_port
- deploy_dir
- arch
- os
A tispark_masters configuration example is as follows:
tispark_masters:
  - host: 10.0.1.21
    spark_config:
      spark.driver.memory: "2g"
      spark.eventLog.enabled: "False"
      spark.tispark.grpc.framesize: 2147483647
      spark.tispark.grpc.timeout_in_sec: 100
      spark.tispark.meta.reload_period_in_sec: 60
      spark.tispark.request.command.priority: "Low"
      spark.tispark.table.scan_concurrency: 256
    spark_env:
      SPARK_EXECUTOR_CORES: 5
      SPARK_EXECUTOR_MEMORY: "10g"
      SPARK_WORKER_CORES: 5
      SPARK_WORKER_MEMORY: "10g"
  - host: 10.0.1.22
tispark_workers
tispark_workers specifies the machines to which the worker nodes of TiSpark are deployed. It also specifies the service configuration on each machine. tispark_workers is an array. Each array element contains the following fields:
- host: Specifies the machine to which the TiSpark workers are deployed. The field value is an IP address and is mandatory.
- listen_host: When the machine has multiple IP addresses, listen_host specifies the listening IP address of the service. The default value is 0.0.0.0.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: Spark's listening port, used for communication between the nodes. The default value is 7077.
- web_port: Spark's web port, which provides web services and the task status. The default value is 8080.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- java_home: Specifies the path in which the JRE environment to be used is located. This parameter corresponds to the JAVA_HOME system environment variable.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- listen_host
- port
- web_port
- deploy_dir
- arch
- os
A tispark_workers configuration example is as follows:
tispark_workers:
  - host: 10.0.1.22
  - host: 10.0.1.23
monitoring_servers
monitoring_servers specifies the machines to which the Prometheus services are deployed. It also specifies the service configuration on each machine. monitoring_servers is an array. Each array element contains the following fields:
- host: Specifies the machine to which the monitoring services are deployed. The field value is an IP address and is mandatory.
- ng_port: Specifies the port that NgMonitoring listens to. Introduced in TiUP v1.7.0, this field supports Continuous Profiling and Top SQL. The default value is 12020.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of the Prometheus services. The default value is 9090.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- storage_retention: The retention time of the Prometheus monitoring data. The default value is "30d".
- rule_dir: Specifies a local directory that should contain complete *.rules.yml files. These files are transferred to the target machine during the initialization phase of the cluster configuration as the rules for Prometheus.
- remote_config: Supports writing Prometheus data to the remote, or reading data from the remote. This field has two configurations:
  - remote_write: See the Prometheus document <remote_write>.
  - remote_read: See the Prometheus document <remote_read>.
 
- external_alertmanagers: If the external_alertmanagers field is configured, Prometheus sends alerts to the Alertmanager instances outside the cluster. This field is an array, each element of which is an external Alertmanager and consists of the host and web_port fields.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
- additional_args: Introduced in TiUP v1.15.0, this field configures additional parameters for running Prometheus. This field is an array, and each element of the array is a Prometheus running parameter. For example, to enable the Prometheus hot reload feature, you can set this field to --web.enable-lifecycle.
- additional_scrape_conf: Customized Prometheus scrape configuration. When you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the additional_scrape_conf field to the corresponding parameters of the Prometheus configuration file. For more information, see Customize Prometheus scrape configuration.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- port
- deploy_dir
- data_dir
- log_dir
- arch
- os
A monitoring_servers configuration example is as follows:
monitoring_servers:
  - host: 10.0.1.11
    rule_dir: /local/rule/dir
    additional_args:
    - --web.enable-lifecycle
    remote_config:
      remote_write:
      - queue_config:
          batch_send_deadline: 5m
          capacity: 100000
          max_samples_per_send: 10000
          max_shards: 300
        url: http://127.0.0.1:8003/write
      remote_read:
      - url: http://127.0.0.1:8003/read
    external_alertmanagers:
    - host: 10.1.1.1
      web_port: 9093
    - host: 10.1.1.2
      web_port: 9094
grafana_servers
grafana_servers specifies the machines to which the Grafana services are deployed. It also specifies the service configuration on each machine. grafana_servers is an array. Each array element contains the following fields:
- host: Specifies the machine to which the Grafana services are deployed. The field value is an IP address and is mandatory.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- port: The listening port of the Grafana services. The default value is 3000.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- username: The user name on the Grafana login interface.
- password: The password corresponding to Grafana.
- dashboard_dir: Specifies a local directory that should contain complete dashboard (*.json) files. These files are transferred to the target machine during the initialization phase of the cluster configuration as the dashboards for Grafana.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
- config: This field is used to add custom configurations to Grafana. When you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the config field to the Grafana configuration file grafana.ini. For more information, see Customize other Grafana configurations.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- port
- deploy_dir
- arch
- os
A grafana_servers configuration example is as follows:
grafana_servers:
  - host: 10.0.1.11
    dashboard_dir: /local/dashboard/dir
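To pass custom options through to grafana.ini via the config field described above, you can use a sketch like the following. The login credentials and the log.file.level key are example values only:
grafana_servers:
  - host: 10.0.1.11
    username: admin
    password: admin
    config:
      log.file.level: warning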
alertmanager_servers
alertmanager_servers specifies the machines to which the Alertmanager services are deployed. It also specifies the service configuration on each machine. alertmanager_servers is an array. Each array element contains the following fields:
- host: Specifies the machine to which the Alertmanager services are deployed. The field value is an IP address and is mandatory.
- ssh_port: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the ssh_port of the global section is used.
- web_port: Specifies the port that Alertmanager uses to provide web services. The default value is 9093.
- cluster_port: Specifies the communication port between one Alertmanager and other Alertmanagers. The default value is 9094.
- deploy_dir: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the deploy_dir directory configured in global.
- data_dir: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the data_dir directory configured in global.
- log_dir: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the log_dir directory configured in global.
- numa_node: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as "0,1".
- config_file: Specifies a local file that is transferred to the target machine during the initialization phase of the cluster configuration as the configuration of Alertmanager.
- os: The operating system of the machine specified in host. If this field is not specified, the default value is the os value in global.
- arch: The architecture of the machine specified in host. If this field is not specified, the default value is the arch value in global.
- resource_control: Resource control for the service. If this field is configured, the field content is merged with the resource_control content in global (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in host. The configuration rules of resource_control are the same as the resource_control content in global.
- listen_host: Specifies the listening address so that Alertmanager can be accessed through a proxy. It is recommended to configure it as 0.0.0.0. For more information, see Customize Alertmanager configurations.
Among the above fields, you cannot modify the following configured fields after the deployment:
- host
- web_port
- cluster_port
- deploy_dir
- data_dir
- log_dir
- arch
- os
An alertmanager_servers configuration example is as follows:
alertmanager_servers:
  - host: 10.0.1.11
    config_file: /local/config/file
  - host: 10.0.1.12
    config_file: /local/config/file