- About TiDB
- Quick Start
- Software and Hardware Requirements
- Environment Configuration Checklist
- Topology Patterns
- Install and Start
- Verify Cluster Status
- Benchmark Methods
- Backup and Restore
- Read Historical Data
- Configure Time Zone
- Daily Checklist
- Maintain TiFlash
- Maintain TiDB Using TiUP
- Maintain TiDB Using Ansible
- Modify Configuration Online
- Monitor and Alert
- TiDB Troubleshooting Map
- Identify Slow Queries
- Analyze Slow Queries
- SQL Diagnostics
- Identify Expensive Queries
- Statement Summary Tables
- Troubleshoot Hotspot Issues
- Troubleshoot Increased Read and Write Latency
- Troubleshoot Cluster Setup
- Troubleshoot High Disk I/O Usage
- Troubleshoot Lock Conflicts
- Troubleshoot TiFlash
- Troubleshoot Write Conflicts in Optimistic Transactions
- Performance Tuning
- System Tuning
- Software Tuning
- SQL Tuning
- Understanding the Query Execution Plan
- SQL Optimization Process
- Logical Optimization
- Physical Optimization
- Prepare Execution Plan Cache
- Control Execution Plans
- Multiple Data Centers in One City Deployment
- Three Data Centers in Two Cities Deployment
- Best Practices
- Use Placement Rules
- Use Load Base Split
- Use Store Limit
- TiDB Ecosystem Tools
- Use Cases
- Backup & Restore (BR)
- TiDB Binlog
- TiDB Lightning
- Cluster Architecture
- Key Monitoring Metrics
- SQL Language Structure and Syntax
- SQL Statements
ADMIN CANCEL DDL
ADMIN CHECKSUM TABLE
ADMIN CHECK [TABLE|INDEX]
ADMIN SHOW DDL [JOBS|QUERIES]
CREATE [GLOBAL|SESSION] BINDING
CREATE TABLE LIKE
DROP [GLOBAL|SESSION] BINDING
SET DEFAULT ROLE
SET [NAMES|CHARACTER SET]
SET [GLOBAL|SESSION] <variable>
SHOW ANALYZE STATUS
SHOW [GLOBAL|SESSION] BINDINGS
SHOW CHARACTER SET
SHOW [FULL] COLUMNS FROM
SHOW CREATE SEQUENCE
SHOW CREATE TABLE
SHOW CREATE USER
SHOW DRAINER STATUS
SHOW [FULL] FIELDS FROM
SHOW INDEX [FROM|IN]
SHOW INDEXES [FROM|IN]
SHOW KEYS [FROM|IN]
SHOW MASTER STATUS
SHOW [FULL] PROCESSLIST
SHOW PUMP STATUS
SHOW TABLE NEXT_ROW_ID
SHOW TABLE REGIONS
SHOW TABLE STATUS
SHOW [FULL] TABLES
SHOW [GLOBAL|SESSION] VARIABLES
- Data Types
- Functions and Operators
- Type Conversion in Expression Evaluation
- Control Flow Functions
- String Functions
- Numeric Functions and Operators
- Date and Time Functions
- Bit Functions and Operators
- Cast Functions and Operators
- Encryption and Compression Functions
- Information Functions
- JSON Functions
- Aggregate (GROUP BY) Functions
- Window Functions
- Miscellaneous Functions
- Precision Math
- List of Expressions for Pushdown
- Generated Columns
- SQL Mode
- Garbage Collection (GC)
- Character Set and Collation
- System Tables
- TiDB Dashboard
- Overview Page
- Cluster Info Page
- Key Visualizer Page
- Metrics Relation Graph
- SQL Statements Analysis
- Slow Queries Page
- Cluster Diagnostics
- Search Logs Page
- Profile Instances Page
- Command Line Flags
- Configuration File Parameters
- System Variables
- Storage Engines
- Error Codes
- Table Filter
- Schedule Replicas by Topology Labels
- Release Notes
- All Releases
TiUP is a cluster operation and maintenance tool introduced in TiDB 4.0. TiUP provides TiUP cluster, a cluster management component written in Golang. By using the TiUP cluster component, you can easily perform daily database operations, including deploying, starting, stopping, destroying, scaling, and upgrading a TiDB cluster, as well as managing TiDB cluster parameters.
TiUP supports deploying TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring system. This document introduces how to deploy TiDB clusters of different topologies.
Make sure that you have read the following documents:
Log in to the control machine using a regular user account (this document takes the tidb user as an example). All the following TiUP installation and cluster management operations can be performed by the tidb user.
Install TiUP by executing the following command:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
Set the TiUP environment variables:
Redeclare the global environment variables:
Confirm whether TiUP is installed:
Install the TiUP cluster component:
If TiUP is already installed, update the TiUP cluster component to the latest version:
tiup update --self && tiup update cluster
Expected output includes
Verify the current version of your TiUP cluster:
tiup --binary cluster
According to the intended cluster topology, you need to manually create and edit the cluster initialization configuration file.
The following examples cover six common scenarios. You need to create a YAML configuration file (named topology.yaml in this example) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly.
The following topology documents provide a cluster configuration template for each of the following common scenarios:
This is the basic cluster topology, including tidb-server, tikv-server, and pd-server. It is suitable for OLTP applications.
This is to deploy TiFlash along with the minimal cluster topology. TiFlash is a columnar storage engine, and is gradually becoming a standard part of the cluster topology. It is suitable for real-time HTAP applications.
This is to deploy TiCDC along with the minimal cluster topology. TiCDC is a tool for replicating the incremental data of TiDB, introduced in TiDB 4.0. It supports multiple downstream platforms, such as TiDB, MySQL, and MQ. Compared with TiDB Binlog, TiCDC has lower latency and native high availability. After the deployment, start TiCDC and create the replication task.
This is to deploy TiDB Binlog along with the minimal cluster topology. TiDB Binlog is the widely used component for replicating incremental data. It provides near real-time backup and replication.
This is to deploy TiSpark along with the minimal cluster topology. TiSpark is a component built for running Apache Spark on top of TiDB/TiKV to answer OLAP queries. Currently, TiUP cluster's support for TiSpark is still experimental.
This is to deploy multiple instances on a single machine. You need to add extra configurations for the directory, port, resource ratio, and label.
This topology takes the typical architecture of three data centers in two cities as an example. It introduces the geo-distributed deployment architecture and the key configuration that requires attention.
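As a concrete sketch of the minimal (basic OLTP) topology described above, the following shell snippet writes a bare-bones topology.yaml. The IP addresses, ports, and directories are placeholders for illustration; adjust them to your own machines.

```shell
# Write a minimal topology.yaml for the basic cluster topology
# (tidb-server, tikv-server, and pd-server only).
# All host addresses below are hypothetical examples.
cat > topology.yaml <<'EOF'
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 10.0.1.1

tidb_servers:
  - host: 10.0.1.4

tikv_servers:
  - host: 10.0.1.2
  - host: 10.0.1.3
EOF
```

For the TiFlash, TiCDC, TiDB Binlog, TiSpark, hybrid, and geo-distributed scenarios, start from the corresponding template instead and add the extra component sections it describes.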
- For parameters that should be globally effective, configure them in the server_configs section of the configuration file for the corresponding components.
- For parameters that should take effect on a specific node, configure them in the config section of that node.
- Use `.` to indicate the subcategory of a configuration item, such as log.slow-threshold. For more formats, see the TiUP configuration template.
- For more parameter descriptions, see TiDB config.toml.example and TiFlash configuration.
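For example, a server_configs section that sets the slow-query threshold globally for all TiDB instances might look like the following sketch (the tikv entry is an illustrative placeholder, not a required setting):

```yaml
server_configs:
  tidb:
    log.slow-threshold: 300              # log.* subcategory, using "." notation
  tikv:
    storage.block-cache.capacity: "16GB" # illustrative TiKV setting
```

Any value set here applies to every instance of that component, unless overridden by a node-level config section.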
You can use secret keys or interactive passwords for security authentication when you deploy TiDB using TiUP:
- If you use secret keys, you can specify the path of the keys through the -i flag.
- If you use passwords, add the -p flag to enter the password interaction window.
- If password-free login to the target machine has been configured, no authentication is required.
In general, TiUP creates the user and group specified in the topology.yaml file on the target machine, with the following exceptions:
- The user name configured in topology.yaml already exists on the target machine.
- You have used the --skip-create-user option in the command line to explicitly skip the step of creating the user.
tiup cluster deploy tidb-test v4.0.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
In the above command:
- The name of the deployed TiDB cluster is tidb-test.
- The version of the TiDB cluster is v4.0.0. You can see other supported versions by running tiup list tidb.
- The initialization configuration file is topology.yaml.
- --user root: Log in to the target machine as the root user to complete the cluster deployment. Alternatively, you can use another user with sudo privileges to complete the deployment.
- [-i] and [-p]: optional. If you have configured password-free login to the target machine, these parameters are not required. If not, choose one of the two. [-i] is the private key of the root user (or the user specified by --user) that has access to the target machine. [-p] is used to input the user password interactively.
- If you need to specify the user group name to be created on the target machine, see this example.
At the end of the output log, you will see
Deployed cluster `tidb-test` successfully. This indicates that the deployment is successful.
tiup cluster list
TiUP supports managing multiple TiDB clusters. The command above outputs information of all the clusters currently managed by TiUP, including the name, deployment user, version, and secret key information:
Starting /home/tidb/.tiup/components/cluster/v1.0.0/cluster list
Name       User  Version  Path                                                 PrivateKey
----       ----  -------  ----                                                 ----------
tidb-test  tidb  v4.0.0   /home/tidb/.tiup/storage/cluster/clusters/tidb-test  /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
For example, execute the following command to check the status of the tidb-test cluster:
tiup cluster display tidb-test
Expected output includes the instance ID, role, host, listening port, status (because the cluster has not been started yet, the status is inactive), and directory information.
tiup cluster start tidb-test
If the output log includes
Started cluster `tidb-test` successfully, the start is successful.
Check the TiDB cluster status using TiUP:
tiup cluster display tidb-test
If the status is Up in the output, the cluster status is normal.
Log in to the database by running the following command:
mysql -u root -h 10.0.1.4 -P 4000
If you have deployed TiFlash along with the TiDB cluster, see the following documents:
If you have deployed TiCDC along with the TiDB cluster, see the following documents:
By default, TiDB, TiUP, and TiDB Dashboard share usage details with PingCAP to help understand how to improve the product. For details about what is shared and how to disable the sharing, see Telemetry.
- Deploy a TiDB Cluster Using TiUP
- Step 1: Prerequisites and precheck
- Step 2: Install TiUP on the control machine
- Step 3: Edit the initialization configuration file
- Step 4: Execute the deployment command
- Step 5: Check the clusters managed by TiUP
- Step 6: Check the status of the deployed TiDB cluster
- Step 7: Start the TiDB cluster
- Step 8: Verify the running status of the TiDB cluster
- What's next