Deploy a TiDB Cluster Using TiUP
TiUP is a cluster operation and maintenance tool introduced in TiDB 4.0. TiUP provides TiUP cluster, a cluster management component written in Go. Using TiUP cluster, you can easily perform daily database operations, including deploying, starting, stopping, destroying, scaling, and upgrading a TiDB cluster, as well as managing TiDB cluster parameters.
TiUP supports deploying TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring system. This document introduces how to deploy TiDB clusters of different topologies.
TiDB, TiUP, and TiDB Dashboard share usage details with PingCAP to help PingCAP understand how to improve the product. For details about what is shared and how to disable the sharing, see Telemetry.
Step 1: Prerequisites and precheck
Make sure that you have read the following documents:
Step 2: Install TiUP on the control machine
You can install TiUP on the control machine in either of two ways: online deployment or offline deployment.
Method 1: Deploy TiUP online
Log in to the control machine using a regular user account (this document takes the tidb user as an example). All the following TiUP installation and cluster management operations can be performed by the tidb user.
Install TiUP by executing the following command:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
Set the TiUP environment variables:
Redeclare the global environment variables:
source .bash_profile
Confirm whether TiUP is installed:
which tiup
Install the TiUP cluster component:
tiup cluster
If TiUP is already installed, update the TiUP cluster component to the latest version:
tiup update --self && tiup update cluster
Expected output includes “Update successfully!”.
Verify the current version of your TiUP cluster:
tiup --binary cluster
Method 2: Deploy TiUP offline
Perform the following steps in this section to deploy a TiDB cluster offline using TiUP:
Step 1: Prepare the TiUP offline component package
To prepare the TiUP offline component package, manually pack an offline component package using tiup mirror clone.
Install the TiUP package manager online.
Install the TiUP tool:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
Redeclare the global environment variables:
source .bash_profile
Confirm whether TiUP is installed:
which tiup
Pull the mirror using TiUP.
Pull the needed components on a machine that has access to the Internet:
tiup mirror clone tidb-community-server-${version}-linux-amd64 ${version} --os=linux --arch=amd64
The command above creates a directory named tidb-community-server-${version}-linux-amd64 in the current directory, which contains the component package necessary for starting a cluster.
Pack the component package by using the tar command and send the package to the control machine in the isolated environment:
tar czvf tidb-community-server-${version}-linux-amd64.tar.gz tidb-community-server-${version}-linux-amd64
tidb-community-server-${version}-linux-amd64.tar.gz is an independent offline environment package.
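For example, assuming you are packaging v5.1.4 (the version used as an example later in this document), the two commands above would read:
tiup mirror clone tidb-community-server-v5.1.4-linux-amd64 v5.1.4 --os=linux --arch=amd64
tar czvf tidb-community-server-v5.1.4-linux-amd64.tar.gz tidb-community-server-v5.1.4-linux-amd64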
Step 2: Deploy the offline TiUP component
After sending the package to the control machine of the target cluster, install the TiUP component by running the following commands:
tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz && \
sh tidb-community-server-${version}-linux-amd64/local_install.sh && \
source /home/tidb/.bash_profile
The local_install.sh script automatically executes the tiup mirror set tidb-community-server-${version}-linux-amd64 command to set the current mirror address to tidb-community-server-${version}-linux-amd64.
To switch the mirror to another directory, manually execute the tiup mirror set <mirror-dir> command. To switch the mirror to the online environment, execute the tiup mirror set https://tiup-mirrors.pingcap.com command.
Step 3: Initialize cluster topology file
According to the intended cluster topology, you need to manually create and edit the cluster initialization configuration file.
To create the cluster initialization configuration file, you can create a YAML-formatted configuration file on the control machine using TiUP:
tiup cluster template > topology.yaml
For the hybrid deployment scenarios, you can also execute tiup cluster template --full > topology.yaml to create the recommended topology template. For the geo-distributed deployment scenarios, you can execute tiup cluster template --multi-dc > topology.yaml to create the recommended topology template.
Execute vi topology.yaml to see the configuration file content:
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs: {}
pd_servers:
  - host: 10.0.1.4
  - host: 10.0.1.5
  - host: 10.0.1.6
tidb_servers:
  - host: 10.0.1.7
  - host: 10.0.1.8
  - host: 10.0.1.9
tikv_servers:
  - host: 10.0.1.1
  - host: 10.0.1.2
  - host: 10.0.1.3
monitoring_servers:
  - host: 10.0.1.4
grafana_servers:
  - host: 10.0.1.4
alertmanager_servers:
  - host: 10.0.1.4
The following examples cover the most common scenarios. You need to modify the configuration file (named topology.yaml) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly.
Minimal deployment topology
This is the basic cluster topology, including tidb-server, tikv-server, and pd-server. It is suitable for OLTP applications.
TiFlash deployment topology
This is to deploy TiFlash along with the minimal cluster topology. TiFlash is a columnar storage engine that is gradually becoming a standard part of the cluster topology. It is suitable for real-time HTAP applications.
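As a rough sketch of what this scenario adds on top of the minimal topology above, the topology file gains a tiflash_servers section listing the TiFlash hosts (the address below is only a placeholder, in the style of the other example addresses):
tiflash_servers:
  - host: 10.0.1.10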
TiCDC deployment topology
This is to deploy TiCDC along with the minimal cluster topology. TiCDC is a tool for replicating the incremental data of TiDB, introduced in TiDB 4.0. It supports multiple downstream platforms, such as TiDB, MySQL, and MQ. Compared with TiDB Binlog, TiCDC has lower latency and native high availability. After the deployment, start TiCDC and create the replication task using cdc cli.
TiDB Binlog deployment topology
This is to deploy TiDB Binlog along with the minimal cluster topology. TiDB Binlog is a widely used component for replicating incremental data. It provides near real-time backup and replication.
TiSpark deployment topology
This is to deploy TiSpark along with the minimal cluster topology. TiSpark is a component built for running Apache Spark on top of TiDB/TiKV to answer OLAP queries. Currently, TiUP cluster's support for TiSpark is still experimental.
Hybrid deployment topology
This is to deploy multiple instances on a single machine. You need to add extra configurations for the directory, port, resource ratio, and label.
Geo-distributed deployment topology
This topology takes the typical architecture of three data centers in two cities as an example. It introduces the geo-distributed deployment architecture and the key configuration that requires attention.
- For parameters that should be globally effective, configure them for the corresponding components in the server_configs section of the configuration file (see the example fragment after this list).
- For parameters that should be effective on a specific node, configure them in the config section of that node.
- Use . to indicate the subcategory of the configuration, such as log.slow-threshold. For more formats, see the TiUP configuration template.
- For more parameter descriptions, see TiDB config.toml.example, TiKV config.toml.example, PD config.toml.example, and TiFlash configuration.
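For instance, a hypothetical fragment combining both cases might set a global TiDB parameter in server_configs and a node-level TiKV label in that node's config. The parameter names below are real configuration items, but the values and labels are placeholders to adjust for your environment:
server_configs:
  tidb:
    log.slow-threshold: 300
tikv_servers:
  - host: 10.0.1.1
    config:
      server.labels: { zone: "zone1", host: "tikv1" }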
Step 4: Execute the deployment command
You can use secret keys or interactive passwords for security authentication when you deploy TiDB using TiUP:
- If you use secret keys, specify the path of the keys through -i or --identity_file.
- If you use passwords, add the -p flag to enter the password interaction window.
- If password-free login to the target machine has been configured, no authentication is required.
In general, TiUP creates the user and group specified in the topology.yaml file on the target machine, with the following exceptions:
- The user name configured in topology.yaml already exists on the target machine.
- You have used the --skip-create-user option in the command line to explicitly skip the step of creating the user.
Before you execute the deploy command, use the check and check --apply commands to detect and automatically repair potential risks in the cluster:
tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
tiup cluster check ./topology.yaml --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
Then execute the deploy command to deploy the TiDB cluster:
tiup cluster deploy tidb-test v5.1.4 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
In the above command:
- The name of the deployed TiDB cluster is tidb-test.
- You can see the latest supported versions by running tiup list tidb. This document takes v5.1.4 as an example.
- The initialization configuration file is topology.yaml.
- --user root: Log in to the target machine as the root user to complete the cluster deployment. You can also use other users with ssh and sudo privileges to complete the deployment.
- [-i] and [-p]: optional. If you have configured password-free login to the target machine, these parameters are not required. If not, choose one of the two parameters. [-i] is the private key of the root user (or another user specified by --user) that has access to the target machine. [-p] is used to input the user password interactively.
- If you need to specify the user group name to be created on the target machine, see this example.
At the end of the output log, you will see Deployed cluster `tidb-test` successfully. This indicates that the deployment is successful.
Step 5: Check the clusters managed by TiUP
tiup cluster list
TiUP supports managing multiple TiDB clusters. The command above outputs information of all the clusters currently managed by TiUP, including the name, deployment user, version, and secret key information:
Starting /home/tidb/.tiup/components/cluster/v1.5.0/cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
tidb-test tidb v5.1.4 /home/tidb/.tiup/storage/cluster/clusters/tidb-test /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
Step 6: Check the status of the deployed TiDB cluster
For example, execute the following command to check the status of the tidb-test cluster:
tiup cluster display tidb-test
Expected output includes the instance ID, role, host, listening port, status (because the cluster is not started yet, the status is Down or inactive), and directory information.
Step 7: Start the TiDB cluster
tiup cluster start tidb-test
If the output log includes Started cluster `tidb-test` successfully, the start is successful.
Step 8: Verify the running status of the TiDB cluster
For the specific operations, see Verify Cluster Status.
What's next
If you have deployed TiFlash along with the TiDB cluster, see the following documents:
If you have deployed TiCDC along with the TiDB cluster, see the following documents: