Topology Configuration File for TiDB Deployment Using TiUP
To deploy or scale TiDB using TiUP, you need to provide a topology file (sample) to describe the cluster topology.
Similarly, to modify the cluster topology, you need to modify the topology file. The difference is that, after the cluster is deployed, you can only modify a part of the fields in the topology file. This document introduces each section of the topology file and each field in each section.
When you deploy a TiDB cluster using TiUP, TiUP also deploys monitoring servers, such as Prometheus, Grafana, and Alertmanager. In the meantime, if you scale out this cluster, TiUP also adds the new nodes into monitoring scope. To customize the configurations of the preceding monitoring servers, you can follow the instructions in Customize Configurations of Monitoring Servers.
File structure
A topology configuration file for TiDB deployment using TiUP might contain the following sections:
- global: The cluster's global configuration. Some of the configuration items use the default values and you can configure them separately in each instance.
- `monitored`: Configuration for monitoring services, namely the `blackbox_exporter` and the `node_exporter`. A `node_exporter` and a `blackbox_exporter` are deployed on each machine.
- `server_configs`: Components' global configuration. You can configure each component separately. If an instance has a configuration item with the same name, the instance's configuration item takes effect.
- component_versions: Component version. You can configure it when a component does not use the cluster version. This section is introduced in tiup-cluster v1.14.0.
- pd_servers: The configuration of the PD instance. This configuration specifies the machines to which the PD component is deployed.
- tidb_servers: The configuration of the TiDB instance. This configuration specifies the machines to which the TiDB component is deployed.
- tikv_servers: The configuration of the TiKV instance. This configuration specifies the machines to which the TiKV component is deployed.
- tiflash_servers: The configuration of the TiFlash instance. This configuration specifies the machines to which the TiFlash component is deployed.
- tiproxy_servers: The configuration of the TiProxy instance. This configuration specifies the machines to which the TiProxy component is deployed.
- kvcdc_servers: The configuration of the TiKV-CDC instance. This configuration specifies the machines to which the TiKV-CDC component is deployed.
- cdc_servers: The configuration of the TiCDC instance. This configuration specifies the machines to which the TiCDC component is deployed.
- tispark_masters: The configuration of the TiSpark master instance. This configuration specifies the machines to which the TiSpark master component is deployed. Only one node of TiSpark master can be deployed.
- tispark_workers: The configuration of the TiSpark worker instance. This configuration specifies the machines to which the TiSpark worker component is deployed.
- `tso_servers`: The configuration of the TSO instance. This configuration specifies the machines to which the `tso` microservice is deployed (requires configuring `pd_mode: "ms"` in `global` to enable PD microservices).
- `scheduling_servers`: The configuration of the Scheduling instance. This configuration specifies the machines to which the `scheduling` microservice is deployed (requires configuring `pd_mode: "ms"` in `global` to enable PD microservices).
- `monitoring_servers`: Specifies the machines to which Prometheus and NGMonitoring are deployed. TiUP supports deploying multiple Prometheus instances, but only the first instance is used.
- grafana_servers: The configuration of the Grafana instance. This configuration specifies the machines to which Grafana is deployed.
- alertmanager_servers: The configuration of the Alertmanager instance. This configuration specifies the machines to which Alertmanager is deployed.
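For orientation, the following minimal sketch combines the most commonly used of these sections. The hosts and directories are illustrative placeholders, not recommended values:

```yaml
# A minimal topology sketch (illustrative hosts and paths).
global:
  user: "tidb"
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 10.0.1.11

tidb_servers:
  - host: 10.0.1.14

tikv_servers:
  - host: 10.0.1.14

monitoring_servers:
  - host: 10.0.1.11
```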
global
The global section corresponds to the cluster's global configuration and has the following fields:
- `user`: The user used to start the deployed cluster. The default value is `"tidb"`. If the user specified in the `<user>` field does not exist on the target machine, this user is automatically created.
- `group`: The user group to which a user belongs. It is specified when the user is created. The value defaults to that of the `<user>` field. If the specified group does not exist, it is automatically created.
- `systemd_mode`: Specifies the `systemd` mode used on the target machine during cluster deployment. The default value is `system`. If set to `user`, sudo permissions are not required on the target machine, meaning TiUP no-sudo mode is used.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. The default value is `22`.
- `enable_tls`: Specifies whether to enable TLS for the cluster. After TLS is enabled, the generated TLS certificate must be used for connections between components or between the client and the component. The default value is `false`.
- `listen_host`: Specifies the default listening IP address. If it is empty, each instance automatically sets it to `::` or `0.0.0.0` based on whether its `host` field contains `:`. This field is introduced in tiup-cluster v1.14.0.
- `deploy_dir`: The deployment directory of each component. The default value is `"deployed"`. Its application rules are as follows:
    - If the absolute path of `deploy_dir` is configured at the instance level, the actual deployment directory is the `deploy_dir` configured for the instance.
    - For each instance, if you do not configure `deploy_dir`, its default value is the relative path `<component-name>-<component-port>`.
    - If `global.deploy_dir` is an absolute path, the component is deployed to the `<global.deploy_dir>/<instance.deploy_dir>` directory.
    - If `global.deploy_dir` is a relative path, the component is deployed to the `/home/<global.user>/<global.deploy_dir>/<instance.deploy_dir>` directory.
- `data_dir`: The data directory. Default value: `"data"`. Its application rules are as follows:
    - If the absolute path of `data_dir` is configured at the instance level, the actual data directory is the `data_dir` configured for the instance.
    - For each instance, if you do not configure `data_dir`, its default value is `<global.data_dir>`.
    - If `data_dir` is a relative path, the component data is placed in `<deploy_dir>/<data_dir>`. For the calculation rules of `<deploy_dir>`, see the application rules of the `deploy_dir` field.
- `log_dir`: The log directory. Default value: `"log"`. Its application rules are as follows:
    - If the absolute path of `log_dir` is configured at the instance level, the actual log directory is the `log_dir` configured for the instance.
    - For each instance, if you do not configure `log_dir`, its default value is `<global.log_dir>`.
    - If `log_dir` is a relative path, the component log is placed in `<deploy_dir>/<log_dir>`. For the calculation rules of `<deploy_dir>`, see the application rules of the `deploy_dir` field.
- `os`: The operating system of the target machine. The field controls which operating system to adapt to for the components pushed to the target machine. The default value is `"linux"`.
- `arch`: The CPU architecture of the target machine. The field controls which platform to adapt to for the binary packages pushed to the target machine. The supported values are `"amd64"` and `"arm64"`. The default value is `"amd64"`.
- `pd_mode`: The PD working mode. The field controls whether to enable PD microservices. The supported value is `"ms"`. Specifying this field means enabling PD microservices.
- `resource_control`: Runtime resource control. All configurations in this field are written into the service file of systemd. There is no limit by default. The resources that can be controlled are as follows:
    - `memory_limit`: Limits the maximum runtime memory. For example, `"2G"` means that a maximum of 2 GB of memory can be used.
    - `cpu_quota`: Limits the maximum CPU usage at runtime. For example, `"200%"`.
    - `io_read_bandwidth_max`: Limits the maximum I/O bandwidth for disk reads. For example, `"/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"`.
    - `io_write_bandwidth_max`: Limits the maximum I/O bandwidth for disk writes. For example, `"/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"`.
    - `limit_core`: Controls the size of core dump.
A global configuration example is as follows:
```yaml
global:
  user: "tidb"
  resource_control:
    memory_limit: "2G"
```
In the above configuration, the tidb user is used to start the cluster. At the same time, each component is restricted to a maximum of 2 GB of memory when it is running.
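To illustrate the `deploy_dir` and `data_dir` rules above, the following sketch (with hypothetical paths) shows how the directories resolve for a TiKV instance that configures neither field at the instance level:

```yaml
# global.deploy_dir is absolute, so the instance-level default deploy_dir
# (<component-name>-<component-port>, that is, tikv-20160) is placed under it:
#   deployment directory: /tidb-deploy/tikv-20160
# data_dir is relative, so data is placed under the deployment directory:
#   data directory: /tidb-deploy/tikv-20160/data
global:
  user: "tidb"
  deploy_dir: "/tidb-deploy"
  data_dir: "data"

tikv_servers:
  - host: 10.0.1.14
```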
monitored
monitored is used to configure the monitoring service on the target machine: node_exporter and blackbox_exporter. The following fields are included:
- `node_exporter_port`: The service port of `node_exporter`. The default value is `9100`.
- `blackbox_exporter_port`: The service port of `blackbox_exporter`. The default value is `9115`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
A monitored configuration example is as follows:
```yaml
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
```
The above configuration specifies that `node_exporter` uses port `9100` and `blackbox_exporter` uses port `9115`.
server_configs
server_configs is used to configure services and to generate configuration files for each component. Similar to the global section, the configuration of this section can be overwritten by the configurations with the same names in an instance. server_configs mainly includes the following fields:
- `tidb`: TiDB service-related configuration. For the complete configuration, see TiDB configuration file.
- `tikv`: TiKV service-related configuration. For the complete configuration, see TiKV configuration file.
- `pd`: PD service-related configuration. For the complete configuration, see PD configuration file.
- `tiflash`: TiFlash service-related configuration. For the complete configuration, see TiFlash configuration file.
- `tiflash_learner`: Each TiFlash node has a special built-in TiKV. This configuration item is used to configure this special TiKV. It is generally not recommended to modify the content under this configuration item.
- `tiproxy`: TiProxy service-related configuration. For the complete configuration, see TiProxy configuration file.
- `cdc`: TiCDC service-related configuration. For the complete configuration, see Deploy TiCDC.
- `tso`: `tso` microservice-related configuration. For the complete configuration, see TSO configuration file.
- `scheduling`: `scheduling` microservice-related configuration. For the complete configuration, see Scheduling configuration file.
A server_configs configuration example is as follows:
```yaml
server_configs:
  tidb:
    lease: "45s"
    split-table: true
    token-limit: 1000
    instance.tidb_enable_ddl: true
  tikv:
    log-level: "info"
    readpool.unified.min-thread-count: 1
```
The above configuration specifies the global configuration of TiDB and TiKV.
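Because instance-level `config` entries override the same entries in `server_configs`, a sketch like the following (illustrative values) makes only the TiKV instance on 10.0.1.14 log at the `warn` level:

```yaml
server_configs:
  tikv:
    log-level: "info"

tikv_servers:
  - host: 10.0.1.14
    config:
      log-level: "warn"   # overrides server_configs.tikv for this instance only
  - host: 10.0.1.15       # uses the global "info" level
```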
component_versions
component_versions is used to specify the version number of a certain component.
- When `component_versions` is not configured, each component either uses the same version number as the TiDB cluster (such as PD and TiKV), or uses the latest version (such as Alertmanager).
- When `component_versions` is configured, the corresponding component uses the specified version, and this version is used in subsequent cluster scaling and upgrade operations.
Make sure you only configure it when you need to use a specific version of a component.
component_versions contains the following fields:
- `tikv`: The version of the TiKV component
- `tiflash`: The version of the TiFlash component
- `pd`: The version of the PD component
- `tidb_dashboard`: The version of the standalone TiDB Dashboard component
- `cdc`: The version of the CDC component
- `kvcdc`: The version of the TiKV-CDC component
- `tiproxy`: The version of the TiProxy component
- `prometheus`: The version of the Prometheus component
- `grafana`: The version of the Grafana component
- `alertmanager`: The version of the Alertmanager component
- `tso`: The version of the TSO component
- `scheduling`: The version of the Scheduling component
The following is an example configuration for component_versions:
```yaml
component_versions:
  kvcdc: "v1.1.1"
```
The preceding configuration specifies the version number of TiKV-CDC to be v1.1.1.
pd_servers
pd_servers specifies the machines to which PD services are deployed. It also specifies the service configuration on each machine. pd_servers is an array, and each element of the array contains the following fields:
- `host`: Specifies the machine to which the PD services are deployed. The field value is an IP address and is mandatory.
- `listen_host`: When the machine has multiple IP addresses, `listen_host` specifies the listening IP address of the service. The default value is `0.0.0.0`.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `name`: Specifies the name of the PD instance. Different instances must have unique names; otherwise, instances cannot be deployed.
- `client_port`: Specifies the port that PD uses to connect to the client. The default value is `2379`.
- `peer_port`: Specifies the port for communication between PDs. The default value is `2380`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as `"0,1"`.
- `config`: The configuration rule of this field is the same as the `pd` configuration rule in `server_configs`. If this field is configured, the field content is merged with the `pd` content in `server_configs` (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in `host`.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `listen_host`
- `name`
- `client_port`
- `peer_port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
A pd_servers configuration example is as follows:
```yaml
pd_servers:
  - host: 10.0.1.11
    config:
      schedule.max-merge-region-size: 20
      schedule.max-merge-region-keys: 200000
  - host: 10.0.1.12
```
The above configuration specifies that PD will be deployed on 10.0.1.11 and 10.0.1.12, and makes specific configurations for the PD of 10.0.1.11.
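Since PD instance names must be unique, deploying two PD instances on one machine requires, at a minimum, distinct `name` values and non-conflicting ports, as in this illustrative sketch:

```yaml
pd_servers:
  - host: 10.0.1.11
    name: pd-1
    client_port: 2379
    peer_port: 2380
  - host: 10.0.1.11
    name: pd-2          # names must be unique across instances
    client_port: 2381   # ports must not conflict on the same machine
    peer_port: 2382
```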
tidb_servers
tidb_servers specifies the machines to which TiDB services are deployed. It also specifies the service configuration on each machine. tidb_servers is an array, and each element of the array contains the following fields:
- `host`: Specifies the machine to which the TiDB services are deployed. The field value is an IP address and is mandatory.
- `listen_host`: When the machine has multiple IP addresses, `listen_host` specifies the listening IP address of the service. The default value is `0.0.0.0`.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: The listening port of TiDB services, which is used to provide connection to the MySQL client. The default value is `4000`.
- `status_port`: The listening port of the TiDB status service, which is used to view the status of the TiDB services from the external via HTTP requests. The default value is `10080`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as `"0,1"`.
- `config`: The configuration rule of this field is the same as the `tidb` configuration rule in `server_configs`. If this field is configured, the field content is merged with the `tidb` content in `server_configs` (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in `host`.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `listen_host`
- `port`
- `status_port`
- `deploy_dir`
- `log_dir`
- `arch`
- `os`
A tidb_servers configuration example is as follows:
```yaml
tidb_servers:
  - host: 10.0.1.14
    config:
      log.level: warn
      log.slow-query-file: tidb-slow-overwrited.log
  - host: 10.0.1.15
```
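Similarly, running two TiDB instances on the same machine requires distinct `port` and `status_port` values; a sketch with illustrative values:

```yaml
tidb_servers:
  - host: 10.0.1.14
    port: 4000
    status_port: 10080
  - host: 10.0.1.14
    port: 4001          # the second instance must not reuse port 4000
    status_port: 10081  # nor the default status port 10080
```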
tikv_servers
tikv_servers specifies the machines to which TiKV services are deployed. It also specifies the service configuration on each machine. tikv_servers is an array, and each element of the array contains the following fields:
- `host`: Specifies the machine to which the TiKV services are deployed. The field value is an IP address and is mandatory.
- `listen_host`: When the machine has multiple IP addresses, `listen_host` specifies the listening IP address of the service. The default value is `0.0.0.0`.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: The listening port of the TiKV services. The default value is `20160`.
- `status_port`: The listening port of the TiKV status service. The default value is `20180`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as `"0,1"`.
- `config`: The configuration rule of this field is the same as the `tikv` configuration rule in `server_configs`. If this field is configured, the field content is merged with the `tikv` content in `server_configs` (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in `host`.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `listen_host`
- `port`
- `status_port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
A tikv_servers configuration example is as follows:
```yaml
tikv_servers:
  - host: 10.0.1.14
    config:
      server.labels: { zone: "zone1", host: "host1" }
  - host: 10.0.1.15
    config:
      server.labels: { zone: "zone1", host: "host2" }
```
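The `numa_node` and `data_dir` fields described above can also be combined per instance. In the following sketch (hypothetical paths), the instance is bound to NUMA node 0 and uses an absolute data directory, which overrides `global.data_dir`:

```yaml
tikv_servers:
  - host: 10.0.1.14
    numa_node: "0"              # requires numactl on the target machine
    data_dir: "/ssd0/tikv-data" # absolute path, so used as-is
```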
tiflash_servers
tiflash_servers specifies the machines to which TiFlash services are deployed. It also specifies the service configuration on each machine. This section is an array, and each element of the array contains the following fields:
- `host`: Specifies the machine to which the TiFlash services are deployed. The field value is an IP address and is mandatory.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `tcp_port`: The port of the TiFlash TCP service for internal testing purposes. The default value is `9000`. Starting from TiUP v1.12.5, this configuration item does not take effect on clusters that are v7.1.0 or later.
- `flash_service_port`: The port via which TiFlash provides services. TiDB reads data from TiFlash via this port. The default value is `3930`.
- `metrics_port`: TiFlash's status port, which is used to output metric data. The default value is `8234`.
- `flash_proxy_port`: The port of the built-in TiKV. The default value is `20170`.
- `flash_proxy_status_port`: The status port of the built-in TiKV. The default value is `20292`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`. TiFlash supports multiple `data_dir` directories separated by commas.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
- `tmp_path`: The storage path of TiFlash temporary files. The default value is [`path` or the first directory of `storage.latest.dir`] + `"/tmp"`.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as `"0,1"`.
- `config`: The configuration rule of this field is the same as the `tiflash` configuration rule in `server_configs`. If this field is configured, the field content is merged with the `tiflash` content in `server_configs` (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in `host`.
- `learner_config`: Each TiFlash node has a special built-in TiKV. This configuration item is used to configure this special TiKV. It is generally not recommended to modify the content under this configuration item.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
After the deployment, among the fields above, you can only add directories to `data_dir`; the following fields cannot be modified:
- `host`
- `tcp_port`
- `flash_service_port`
- `flash_proxy_port`
- `flash_proxy_status_port`
- `metrics_port`
- `deploy_dir`
- `log_dir`
- `tmp_path`
- `arch`
- `os`
A tiflash_servers configuration example is as follows:
```yaml
tiflash_servers:
  - host: 10.0.1.21
  - host: 10.0.1.22
```
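As noted above, TiFlash accepts multiple comma-separated data directories, for example (hypothetical paths):

```yaml
tiflash_servers:
  - host: 10.0.1.21
    data_dir: "/ssd0/tiflash,/ssd1/tiflash"  # multiple directories, comma-separated
```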
tiproxy_servers
tiproxy_servers specifies the machines to which the TiProxy services are deployed and the service configuration on each machine. tiproxy_servers is an array, and each element of the array contains the following fields:
- `host`: Specifies the IP address of the machine to which the TiProxy services are deployed. This field is mandatory.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: The listening port of the TiProxy SQL services. The default value is `6000`.
- `status_port`: The listening port of the TiProxy status service. It is used to view the status of the TiProxy services from the external via HTTP requests. The default value is `3080`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated based on the `deploy_dir` directory configured in `global`.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is of string type. The value is the ID of the NUMA node, such as `"0,1"`.
- `config`: The configuration rule of this field is the same as the `tiproxy` configuration rule in `server_configs`. If this field is configured, the field content is merged with the `tiproxy` content in `server_configs`. If these two fields overlap, the content of this field takes effect. Subsequently, a configuration file is generated and sent to the machine specified in `host`.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
Among the above fields, you cannot modify the following configured fields after the deployment:
- `host`
- `port`
- `deploy_dir`
- `arch`
- `os`
A tiproxy_servers configuration example is as follows:
```yaml
tiproxy_servers:
  - host: 10.0.1.21
    port: 6000
    status_port: 3080
    config:
      labels: { zone: "zone1" }
  - host: 10.0.1.22
    port: 6000
    status_port: 3080
    config:
      labels: { zone: "zone2" }
```
For more configuration examples, see TiProxy Deployment Topology.
kvcdc_servers
kvcdc_servers specifies the machines to which the TiKV-CDC services are deployed. It also specifies the service configuration on each machine. kvcdc_servers is an array. Each array element contains the following fields:
- `host`: Specifies the machine to which the TiKV-CDC services are deployed. The field value is an IP address and is mandatory.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: The listening port of the TiKV-CDC services. The default value is `8600`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the directory that TiKV-CDC uses to store temporary files primarily for sorting (optional). The free disk space for this directory is recommended to be greater than or equal to 500 GiB.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
- `gc-ttl`: The TTL (Time to Live, in seconds) of the service-level GC safepoint in PD set by TiKV-CDC (optional). It is the duration that replication tasks can be suspended, defaulting to `86400`, which is 24 hours. Note that suspending replication tasks affects the progress of the TiKV garbage collection safepoint. The longer the `gc-ttl`, the longer changefeeds can be suspended, but more obsolete data is kept and occupies more space, and vice versa.
- `tz`: The time zone that the TiKV-CDC services use. TiKV-CDC uses this time zone when internally converting time data types such as timestamp and when replicating data to the downstream. The default value is the local time zone where the process runs.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as `"0,1"`.
- `config`: The address of the configuration file that TiKV-CDC uses (optional).
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
A kvcdc_servers configuration example is as follows:
```yaml
kvcdc_servers:
  - host: 10.0.1.21
  - host: 10.0.1.22
```
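Building on the fields above, a sketch that sets a dedicated sorting directory and extends `gc-ttl` to 48 hours (illustrative values; `gc-ttl` is in seconds):

```yaml
kvcdc_servers:
  - host: 10.0.1.21
    data_dir: "/kvcdc-data"  # recommended to have at least 500 GiB free
    gc-ttl: 172800           # 48 hours; a longer TTL keeps more obsolete data
```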
cdc_servers
cdc_servers specifies the machines to which the TiCDC services are deployed. It also specifies the service configuration on each machine. cdc_servers is an array. Each array element contains the following fields:
- `host`: Specifies the machine to which the TiCDC services are deployed. The field value is an IP address and is mandatory.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: The listening port of the TiCDC services. The default value is `8300`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
- `gc-ttl`: The Time To Live (TTL) duration of the service-level GC safepoint set by TiCDC in PD, in seconds. The default value is `86400`, which is 24 hours.
- `tz`: The time zone that the TiCDC services use. TiCDC uses this time zone when internally converting time data types such as timestamp and when replicating data to the downstream. The default value is the local time zone where the process runs.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as `"0,1"`.
- `config`: The field content is merged with the `cdc` content in `server_configs` (if the two fields overlap, the content of this field takes effect). Then, a configuration file is generated and sent to the machine specified in `host`.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
- `ticdc_cluster_id`: Specifies the TiCDC cluster ID corresponding to the service. If this field is not specified, the service joins the default TiCDC cluster. This field only takes effect in TiDB v6.3.0 or later versions.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
- `ticdc_cluster_id`
A cdc_servers configuration example is as follows:
```yaml
cdc_servers:
  - host: 10.0.1.20
    gc-ttl: 86400
    data_dir: "/cdc-data"
  - host: 10.0.1.21
    gc-ttl: 86400
    data_dir: "/cdc-data"
```
tispark_masters
tispark_masters specifies the machines to which the master node of TiSpark is deployed. It also specifies the service configuration on each machine. tispark_masters is an array. Each array element contains the following fields:
- `host`: Specifies the machine to which the TiSpark master is deployed. The field value is an IP address and is mandatory.
- `listen_host`: When the machine has multiple IP addresses, `listen_host` specifies the listening IP address of the service. The default value is `0.0.0.0`.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: Spark's listening port, used for communication between nodes. The default value is `7077`.
- `web_port`: Spark's web port, which provides web services and the task status. The default value is `8080`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `java_home`: Specifies the path of the JRE environment to be used. This parameter corresponds to the `JAVA_HOME` system environment variable.
- `spark_config`: Used to configure the TiSpark services. A configuration file is generated and sent to the machine specified in `host`.
- `spark_env`: Configures the environment variables when Spark starts.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `listen_host`
- `port`
- `web_port`
- `deploy_dir`
- `arch`
- `os`
A tispark_masters configuration example is as follows:
```yaml
tispark_masters:
  - host: 10.0.1.21
    spark_config:
      spark.driver.memory: "2g"
      spark.eventLog.enabled: "False"
      spark.tispark.grpc.framesize: 2147483647
      spark.tispark.grpc.timeout_in_sec: 100
      spark.tispark.meta.reload_period_in_sec: 60
      spark.tispark.request.command.priority: "Low"
      spark.tispark.table.scan_concurrency: 256
    spark_env:
      SPARK_EXECUTOR_CORES: 5
      SPARK_EXECUTOR_MEMORY: "10g"
      SPARK_WORKER_CORES: 5
      SPARK_WORKER_MEMORY: "10g"
  - host: 10.0.1.22
```
tispark_workers
tispark_workers specifies the machines to which the worker nodes of TiSpark are deployed. It also specifies the service configuration on each machine. tispark_workers is an array. Each array element contains the following fields:
- `host`: Specifies the machine to which the TiSpark workers are deployed. The field value is an IP address and is mandatory.
- `listen_host`: When the machine has multiple IP addresses, `listen_host` specifies the listening IP address of the service. The default value is `0.0.0.0`.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: Spark's listening port, used for communication between nodes. The default value is `7077`.
- `web_port`: Spark's web port, which provides web services and the task status. The default value is `8080`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `java_home`: Specifies the path of the JRE environment to be used. This parameter corresponds to the `JAVA_HOME` system environment variable.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `listen_host`
- `port`
- `web_port`
- `deploy_dir`
- `arch`
- `os`
A tispark_workers configuration example is as follows:
```yaml
tispark_workers:
  - host: 10.0.1.22
  - host: 10.0.1.23
```
tso_servers
tso_servers specifies the machines to which the tso microservices are deployed. It also specifies the service configuration on each machine. tso_servers is an array, and each element of the array contains the following fields:
- `host`: Specifies the IP address of the machine to which the `tso` microservices are deployed. The field value is mandatory.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: Specifies the listening port of the `tso` microservices. The default value is `3379`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`.
- `config`: The configuration rule of this field is the same as the `tso` configuration rule in `server_configs`. If this field is configured, the field content is merged with the `tso` content in `server_configs` (if the two fields overlap, the content of this field takes effect). Subsequently, a configuration file is generated and sent to the machine specified in `host`.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
Among the preceding fields, you cannot modify these fields after the deployment:
- `host`
- `port`
- `deploy_dir`
- `data_dir`
- `arch`
- `os`
A tso_servers configuration example is as follows:
```yaml
tso_servers:
  - host: 10.0.1.21
  - host: 10.0.1.22
```
scheduling_servers
scheduling_servers specifies the machines to which scheduling microservices are deployed. It also specifies the service configuration on each machine. scheduling_servers is an array, and each element of the array contains the following fields:
- `host`: Specifies the IP address of the machine to which the `scheduling` microservices are deployed. The field is mandatory.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: Specifies the listening port of the `scheduling` microservices. The default value is `3379`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`.
- `config`: The configuration rule of this field is the same as the `scheduling` configuration rule in `server_configs`. If this field is configured, the field content is merged with the `scheduling` content in `server_configs` (if the two fields overlap, the content of this field takes effect). Subsequently, a configuration file is generated and sent to the machine specified in `host`.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
Among the preceding fields, you cannot modify these fields after the deployment:
- `host`
- `port`
- `deploy_dir`
- `data_dir`
- `arch`
- `os`
A scheduling_servers configuration example is as follows:
```yaml
scheduling_servers:
  - host: 10.0.1.21
  - host: 10.0.1.22
```
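As noted in the File structure section, both the `tso` and `scheduling` microservices require `pd_mode: "ms"` in `global`; a combined sketch (illustrative hosts):

```yaml
global:
  user: "tidb"
  pd_mode: "ms"        # enables PD microservices

tso_servers:
  - host: 10.0.1.21

scheduling_servers:
  - host: 10.0.1.22
```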
monitoring_servers
monitoring_servers specifies the machines to which the Prometheus services are deployed. It also specifies the service configuration on each machine. monitoring_servers is an array. Each array element contains the following fields:
- `host`: Specifies the machine to which the monitoring services are deployed. The field value is an IP address and is mandatory.
- `ng_port`: Specifies the port that NGMonitoring listens to. Introduced in TiUP v1.7.0, this field supports Continuous Profiling and Top SQL. The default value is `12020`.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: The listening port of the Prometheus services. The default value is `9090`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as `"0,1"`.
- `storage_retention`: The retention time of the Prometheus monitoring data. The default value is `"30d"`.
- `rule_dir`: Specifies a local directory that should contain complete `*.rules.yml` files. These files are transferred to the target machine during the initialization phase of the cluster configuration as the rules for Prometheus.
- `remote_config`: Supports writing Prometheus data to the remote, or reading data from the remote. This field has two configurations:
    - `remote_write`: See the Prometheus document `<remote_write>`.
    - `remote_read`: See the Prometheus document `<remote_read>`.
- `external_alertmanagers`: If the `external_alertmanagers` field is configured, Prometheus sends alerts to Alertmanager instances outside the cluster. This field is an array, each element of which is an external Alertmanager and consists of the `host` and `web_port` fields.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
- `additional_args`: Introduced in TiUP v1.15.0, this field configures additional parameters for running Prometheus. This field is an array, and each element of the array is a Prometheus running parameter. For example, to enable the Prometheus hot reload feature, you can set this field to `--web.enable-lifecycle`.
- `additional_scrape_conf`: Customized Prometheus scrape configuration. When you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `additional_scrape_conf` field to the corresponding parameters of the Prometheus configuration file. For more information, see Customize Prometheus scrape configuration.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
A monitoring_servers configuration example is as follows:
```yaml
monitoring_servers:
  - host: 10.0.1.11
    rule_dir: /local/rule/dir
    additional_args:
      - --web.enable-lifecycle
    remote_config:
      remote_write:
        - queue_config:
            batch_send_deadline: 5m
            capacity: 100000
            max_samples_per_send: 10000
            max_shards: 300
          url: http://127.0.0.1:8003/write
      remote_read:
        - url: http://127.0.0.1:8003/read
    external_alertmanagers:
      - host: 10.1.1.1
        web_port: 9093
      - host: 10.1.1.2
        web_port: 9094
```
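Beyond the example above, the `storage_retention` and `ng_port` fields can be tuned per the descriptions earlier in this section; an illustrative sketch:

```yaml
monitoring_servers:
  - host: 10.0.1.11
    storage_retention: "60d"  # keep monitoring data for 60 days instead of the default 30d
    ng_port: 12020            # NGMonitoring listening port (default value)
```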
grafana_servers
grafana_servers specifies the machines to which the Grafana services are deployed. It also specifies the service configuration on each machine. grafana_servers is an array. Each array element contains the following fields:
- `host`: Specifies the machine to which the Grafana services are deployed. The field value is an IP address and is mandatory.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `port`: The listening port of the Grafana services. The default value is `3000`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `username`: The user name on the Grafana login interface.
- `password`: The password corresponding to Grafana.
- `dashboard_dir`: Specifies a local directory that should contain complete `dashboard(*.json)` files. These files are transferred to the target machine during the initialization phase of the cluster configuration as the dashboards for Grafana.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
- `config`: This field is used to add custom configurations to Grafana. When you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `config` field to the Grafana configuration file `grafana.ini`. For more information, see Customize other Grafana configurations.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `port`
- `deploy_dir`
- `arch`
- `os`
A grafana_servers configuration example is as follows:
```yaml
grafana_servers:
  - host: 10.0.1.11
    dashboard_dir: /local/dashboard/dir
```
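A sketch that additionally sets the Grafana login credentials and a custom `grafana.ini` entry through the `config` field (illustrative values):

```yaml
grafana_servers:
  - host: 10.0.1.11
    username: admin
    password: admin
    config:
      log.file.level: warning  # written into grafana.ini
```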
alertmanager_servers
alertmanager_servers specifies the machines to which the Alertmanager services are deployed. It also specifies the service configuration on each machine. alertmanager_servers is an array. Each array element contains the following fields:
- `host`: Specifies the machine to which the Alertmanager services are deployed. The field value is an IP address and is mandatory.
- `ssh_port`: Specifies the SSH port to connect to the target machine for operations. If it is not specified, the `ssh_port` of the `global` section is used.
- `web_port`: Specifies the port that Alertmanager uses to provide web services. The default value is `9093`.
- `cluster_port`: Specifies the communication port between one Alertmanager and other Alertmanagers. The default value is `9094`.
- `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`.
- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`.
- `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`.
- `numa_node`: Allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is the string type. The field value is the ID of the NUMA node, such as `"0,1"`.
- `config_file`: Specifies a local file that is transferred to the target machine during the initialization phase of the cluster configuration as the configuration of Alertmanager.
- `os`: The operating system of the machine specified in `host`. If this field is not specified, the default value is the `os` value in `global`.
- `arch`: The architecture of the machine specified in `host`. If this field is not specified, the default value is the `arch` value in `global`.
- `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
- `listen_host`: Specifies the listening address so that Alertmanager can be accessed through a proxy. It is recommended to configure it as `0.0.0.0`. For more information, see Customize Alertmanager configurations.
For the above fields, you cannot modify these configured fields after the deployment:
- `host`
- `web_port`
- `cluster_port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
An alertmanager_servers configuration example is as follows:
```yaml
alertmanager_servers:
  - host: 10.0.1.11
    config_file: /local/config/file
  - host: 10.0.1.12
    config_file: /local/config/file
```
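To make Alertmanager reachable through a proxy, as described for `listen_host` above, a sketch with illustrative values:

```yaml
alertmanager_servers:
  - host: 10.0.1.11
    listen_host: 0.0.0.0  # recommended when accessing Alertmanager via a proxy
    web_port: 9093
```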