# TiDB Environment and System Configuration Check
This document describes the environment check operations to perform before deploying TiDB. The following steps are ordered by priority.
For production deployments, it is recommended to use NVMe SSDs with the ext4 filesystem to store TiKV data. This configuration is a best practice whose reliability, security, and stability have been proven in a large number of online scenarios.
Log in to the target machines using the `root` user account.

Format your data disks to the ext4 filesystem and add the `nodelalloc` and `noatime` mount options to the filesystem. The `nodelalloc` option is required, or else the TiUP deployment cannot pass the precheck. The `noatime` option is optional.

If your data disks have already been formatted to ext4 with the mount options added, you can unmount them by running the `umount /dev/nvme0n1p1` command, skip directly to the step below that edits the `/etc/fstab` file, and add the options again to the filesystem.
Take the `/dev/nvme0n1` data disk as an example:

View the data disk.

fdisk -l

Disk /dev/nvme0n1: 1000 GB
Create the partition.

parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1

Use the `lsblk` command to view the device number of the partition: for an NVMe disk, the generated device number is usually `nvme0n1p1`; for a regular disk (for example, `/dev/sdb`), the generated device number is usually `sdb1`.
Format the data disk to the ext4 filesystem.

mkfs.ext4 /dev/nvme0n1p1
View the partition UUID of the data disk.

lsblk -f

In this example, the UUID of `nvme0n1p1` is `c51eb23b-195c-4061-92a9-3fad812cc12f`.

NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 ext4 237b634b-a565-477b-8371-6dff0c41f5ab /boot
├─sda2 swap f414c5c0-f823-4bb1-8fdf-e531173a72ed
└─sda3 ext4 547909c1-398d-4696-94c6-03e43e317b60 /
sr0
nvme0n1
└─nvme0n1p1 ext4 c51eb23b-195c-4061-92a9-3fad812cc12f
Edit the `/etc/fstab` file and add the `nodelalloc` and `noatime` mount options:

UUID=c51eb23b-195c-4061-92a9-3fad812cc12f /data1 ext4 defaults,nodelalloc,noatime 0 2
Mount the data disk.

mkdir /data1 && \
mount -a
Check using the following command.
mount -t ext4
/dev/nvme0n1p1 on /data1 type ext4 (rw,noatime,nodelalloc,data=ordered)
If the filesystem is ext4 and `nodelalloc` is included in the mount options, you have successfully mounted the data disk ext4 filesystem with options on the target machines.
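As an extra sanity check, the mount options can also be inspected programmatically. A minimal sketch using `findmnt` from util-linux, assuming the `/data1` mount point from the example above:

# Print the mount options of /data1 and confirm nodelalloc is among them.
findmnt -no OPTIONS /data1 | grep -w nodelalloc \
  && echo "nodelalloc is set" \
  || echo "nodelalloc is missing"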
This section describes how to disable swap.

TiDB requires sufficient memory space for operation. Using swap as a buffer for insufficient memory is not recommended, because it might reduce performance. Therefore, it is recommended to disable the system swap permanently.

Do not disable the system swap by only executing `swapoff -a`; otherwise, this setting becomes invalid after the machine is restarted.
To disable the system swap, execute the following commands:

echo "vm.swappiness = 0">> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p
In TiDB clusters, the access ports between nodes must be open to ensure the transmission of information such as read and write requests and data heartbeats. In common online scenarios, the data interaction between the database and the application service, and between the database nodes, is conducted within a secure network. Therefore, if there are no special security requirements, it is recommended to stop the firewall of the target machine. Otherwise, refer to the port usage and add the needed port information to the allowlist of the firewall service.
The rest of this section describes how to stop the firewall service of a target machine.
Check the firewall status. Take CentOS Linux release 7.7.1908 (Core) as an example.
sudo firewall-cmd --state
sudo systemctl status firewalld.service
Stop the firewall service.
sudo systemctl stop firewalld.service
Disable automatic start of the firewall service.
sudo systemctl disable firewalld.service
Check the firewall status.
sudo systemctl status firewalld.service
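If security requirements prevent stopping the firewall, you can allowlist the ports instead. A sketch using `firewall-cmd`, assuming the default component ports (TiDB 4000/10080, PD 2379/2380, TiKV 20160/20180); adjust the list to your topology:

# open the default TiDB, PD, and TiKV ports (adjust as needed)
for port in 4000 10080 2379 2380 20160 20180; do
  sudo firewall-cmd --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload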
TiDB is a distributed database system that requires clock synchronization between nodes to guarantee linear consistency of transactions in the ACID model.

At present, the common solution to clock synchronization is to use the Network Time Protocol (NTP) services. You can use the `pool.ntp.org` timing service on the Internet, or build your own NTP service in an offline environment.
To check whether the NTP service is installed and whether it synchronizes with the NTP server normally, take the following steps:
Run the following command. If it returns `running`, then the NTP service is running.

sudo systemctl status ntpd.service

ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2017-12-18 13:13:19 CST; 3s ago
Run the `ntpstat` command to check whether the NTP service synchronizes with the NTP server.

For the Ubuntu system, you need to install the `ntpstat` package.

ntpstat

If it returns `synchronised to NTP server` (synchronizing with the NTP server), then the synchronization process is normal.

synchronised to NTP server (220.127.116.11) at stratum 2
time correct to within 91 ms
polling server every 1024 s
The following situation indicates the NTP service is not synchronizing normally:

unsynchronised
The following situation indicates the NTP service is not running normally:
Unable to talk to NTP daemon. Is it running?
To make the NTP service start synchronizing as soon as possible, run the following command. Replace `pool.ntp.org` with your NTP server.

sudo systemctl stop ntpd.service && \
sudo ntpdate pool.ntp.org && \
sudo systemctl start ntpd.service
To install the NTP service manually on the CentOS 7 system, run the following command:

sudo yum install ntp ntpdate && \
sudo systemctl start ntpd.service && \
sudo systemctl enable ntpd.service
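To spot-check synchronization across all target machines at once, a small loop over SSH can help. A sketch, assuming hypothetical target IPs 10.0.1.1 and 10.0.1.2 and that `ntpstat` is installed on each host:

# report NTP sync status for each target machine (IPs are placeholders)
for host in 10.0.1.1 10.0.1.2; do
  echo "== ${host} =="
  ssh "${host}" ntpstat
done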
This section describes how to manually configure SSH mutual trust and sudo without password. It is recommended to use TiUP for deployment, which automatically configures SSH mutual trust and login without password. If you deploy TiDB clusters using TiUP, you can skip this section.
Log in to each target machine using the `root` user account, create the `tidb` user, and set the login password.

useradd tidb && \
passwd tidb
To configure sudo without password, run the `visudo` command, and add `tidb ALL=(ALL) NOPASSWD: ALL` to the end of the file:

tidb ALL=(ALL) NOPASSWD: ALL
Use the `tidb` user to log in to the control machine, and run the following command. Replace `10.0.1.1` with the IP of your target machine, and enter the `tidb` user password of the target machine as prompted. After the command is executed, SSH mutual trust is created. This applies to other machines as well.

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.1.1
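When there are many target machines, the key distribution can be scripted. A sketch, assuming hypothetical target IPs and that an RSA key pair already exists for the `tidb` user (generate one with `ssh-keygen -t rsa` if not):

# copy the public key to every target machine (IPs are placeholders)
for host in 10.0.1.1 10.0.1.2 10.0.1.3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "${host}"
done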
Log in to the control machine using the `tidb` user account, and log in to the IP of the target machine using `ssh`. If you do not need to enter the password and can successfully log in, then the SSH mutual trust is successfully configured.

ssh 10.0.1.1
After you log in to the target machine using the `tidb` user, run the following command. If you do not need to enter the password and can switch to the `root` user, then sudo without password of the `tidb` user is successfully configured.

sudo -su root
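Both checks can also be run non-interactively from the control machine. A sketch, assuming the hypothetical target IP `10.0.1.1`; `sudo -n` fails instead of prompting if a password would be required:

# verify passwordless SSH and passwordless sudo in one shot (IP is a placeholder)
ssh 10.0.1.1 'sudo -n true && echo "ssh + sudo OK"'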
This section describes how to install the NUMA tool. In online environments, because the hardware configuration is usually higher than required, multiple instances of TiDB or TiKV can be deployed on a single machine to better plan the hardware resources. In such scenarios, you can use NUMA tools to prevent the competition for CPU resources that might cause reduced performance.
- Binding cores using NUMA is a method to isolate CPU resources and is suitable for deploying multiple instances on highly configured physical machines.
- After completing deployment using `tiup cluster deploy`, you can use the `exec` command to perform cluster-level management operations.
Log in to the target node to install NUMA. Take CentOS Linux release 7.7.1908 (Core) as an example.
sudo yum -y install numactl
Alternatively, use `tiup cluster` to install in batches.
tiup cluster exec --help
Run shell command on host in the tidb cluster

Usage:
  cluster exec <cluster-name> [flags]

Flags:
      --command string   the command run on cluster host (default "ls")
  -h, --help             help for exec
      --sudo             use root permissions (default false)
To use the sudo privilege to execute the installation command for all the target machines in the `tidb-test` cluster, run the following command:
tiup cluster exec tidb-test --sudo --command "yum -y install numactl"
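Once installed, `numactl` can inspect the machine's topology and bind a process to a node. A minimal sketch; `your_command` is a placeholder for the process to bind:

# show the NUMA topology of the machine
numactl --hardware
# bind CPU and memory allocation to NUMA node 0 (command is a placeholder)
numactl --cpunodebind=0 --membind=0 your_command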