Placement Rules in SQL
Placement Rules in SQL is an experimental feature introduced in v5.3.0. The syntax might change before it becomes generally available, and there might also be bugs. If you understand the risks, you can enable this experimental feature by executing SET GLOBAL tidb_enable_alter_placement = 1;
Placement Rules in SQL is a feature that enables you to specify where data is stored in a TiKV cluster using SQL interfaces. Using this feature, tables and partitions are scheduled to specific regions, data centers, racks, or hosts. This is useful for scenarios including optimizing a high availability strategy with lower cost, ensuring that local replicas of data are available for local stale reads, and adhering to data locality requirements.
The detailed user scenarios are as follows:
- Merge multiple databases of different applications to reduce the cost of database maintenance
- Increase the replica count for important data to improve application availability and data reliability
- Store new data on SSDs and old data on HDDs to lower the cost of data archiving and storage
- Schedule the leaders of hotspot data to high-performance TiKV instances
- Separate cold data to lower-cost storage media to improve cost efficiency
Specify placement options
To use Placement Rules in SQL, you need to specify one or more placement options in a SQL statement. To specify placement options, you can either use direct placement options or a placement policy.
In the following example, tables t1 and t2 have the same rules: t1 has its rules specified via direct placement options, while t2 has its rules assigned via a placement policy.
CREATE TABLE t1 (a INT) PRIMARY_REGION="us-east-1" REGIONS="us-east-1,us-west-1";
CREATE PLACEMENT POLICY eastandwest PRIMARY_REGION="us-east-1" REGIONS="us-east-1,us-west-1";
CREATE TABLE t2 (a INT) PLACEMENT POLICY=eastandwest;
It is recommended to use placement policies for simpler rule management. When you change a placement policy (via ALTER PLACEMENT POLICY), the change automatically propagates to all database objects that use it. If you use direct placement options, you have to alter the rules for each object (for example, tables and partitions).
PLACEMENT POLICY is not associated with any database schema and has global scope. Therefore, assigning a placement policy does not require any additional privileges over the CREATE TABLE privilege.
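To illustrate how policy changes propagate, the following is a minimal sketch that reuses the eastandwest policy from the example above (the new region value is illustrative):
ALTER PLACEMENT POLICY eastandwest PRIMARY_REGION="us-west-1" REGIONS="us-east-1,us-west-1";
After this statement, t2 is rescheduled according to the updated policy; no ALTER TABLE statement is needed.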
Option reference
Placement options depend on labels being correctly specified in the configuration of each TiKV node. For example, the PRIMARY_REGION option depends on the region label in TiKV. To see a summary of all labels available in your TiKV cluster, use the statement SHOW PLACEMENT LABELS:
mysql> show placement labels;
+--------+----------------+
| Key | Values |
+--------+----------------+
| disk | ["ssd"] |
| region | ["us-east-1"] |
| zone | ["us-east-1a"] |
+--------+----------------+
3 rows in set (0.00 sec)
| Option Name | Description |
| --- | --- |
| PRIMARY_REGION | Raft leaders are placed in stores that have the region label that matches the value of this option. |
| REGIONS | Raft followers are placed in stores that have the region label that matches the value of this option. |
| SCHEDULE | The strategy used to schedule the placement of followers. The value options are EVEN (default) or MAJORITY_IN_PRIMARY. |
| FOLLOWERS | The number of followers. For example, FOLLOWERS=2 means that there will be 3 replicas of the data (2 followers and 1 leader). |
In addition to the placement options above, you can also use advanced configurations. For details, see Advanced placement.
| Option Name | Description |
| --- | --- |
| CONSTRAINTS | A list of constraints that apply to all roles. For example, CONSTRAINTS="[+disk=ssd]". |
| FOLLOWER_CONSTRAINTS | A list of constraints that only apply to followers. |
Examples
Increase the number of replicas
The default configuration of max-replicas is 3. To increase this for a specific set of tables, you can use a placement policy as follows:
CREATE PLACEMENT POLICY fivereplicas FOLLOWERS=4;
CREATE TABLE t1 (a INT) PLACEMENT POLICY=fivereplicas;
Note that the replica count configured in PD includes both the leader and the followers; thus 4 followers plus 1 leader equals 5 replicas in total.
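To check how the rules are being applied to a table, you can query its scheduling state. A sketch using the table from the example above (the exact output depends on your cluster):
SHOW PLACEMENT FOR TABLE t1;
The Scheduling_State column reports values such as PENDING, INPROGRESS, or SCHEDULED.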
To expand on this example, you can also use the PRIMARY_REGION and REGIONS placement options to describe the placement of the followers:
CREATE PLACEMENT POLICY eastandwest PRIMARY_REGION="us-east-1" REGIONS="us-east-1,us-east-2,us-west-1" SCHEDULE="MAJORITY_IN_PRIMARY" FOLLOWERS=4;
CREATE TABLE t1 (a INT) PLACEMENT POLICY=eastandwest;
The SCHEDULE option instructs TiDB on how to balance the followers. The default schedule of EVEN ensures a balance of followers across all regions.
To ensure that enough followers are placed in the primary region (us-east-1) so that quorum can be achieved, you can use the MAJORITY_IN_PRIMARY schedule. This schedule helps provide lower-latency transactions at the expense of some availability: if the primary region fails, MAJORITY_IN_PRIMARY cannot provide automatic failover.
Assign placement to a partitioned table
The following example uses list partitioning, which is currently an experimental feature of TiDB. Partitioned tables also require the PRIMARY KEY to include all columns used in the table's partitioning function.
In addition to assigning placement options to tables, you can also assign the options to table partitions. For example:
CREATE PLACEMENT POLICY europe PRIMARY_REGION="eu-central-1" REGIONS="eu-central-1,eu-west-1";
CREATE PLACEMENT POLICY northamerica PRIMARY_REGION="us-east-1" REGIONS="us-east-1";
SET tidb_enable_list_partition = 1;
CREATE TABLE t1 (
country VARCHAR(10) NOT NULL,
userdata VARCHAR(100) NOT NULL
) PARTITION BY LIST COLUMNS (country) (
PARTITION pEurope VALUES IN ('DE', 'FR', 'GB') PLACEMENT POLICY=europe,
PARTITION pNorthAmerica VALUES IN ('US', 'CA', 'MX') PLACEMENT POLICY=northamerica
);
Set the default placement for a schema
You can directly attach default placement options to a database schema. This works similarly to setting the default character set or collation for a schema. Your specified placement options apply when no other options are specified. For example:
CREATE TABLE t1 (a INT); -- Creates a table t1 with no placement options.
ALTER DATABASE test FOLLOWERS=4; -- Changes the default placement option, and does not apply to the existing table t1.
CREATE TABLE t2 (a INT); -- Creates a table t2 with the default placement of FOLLOWERS=4.
CREATE TABLE t3 (a INT) PRIMARY_REGION="us-east-1" REGIONS="us-east-1,us-east-2"; -- Creates a table t3 without the default FOLLOWERS=4 placement, because this statement has specified another placement.
ALTER DATABASE test FOLLOWERS=2; -- Changes the default placement, and does not apply to existing tables.
CREATE TABLE t4 (a INT); -- Creates a table t4 with the default FOLLOWERS=2 option.
Because placement options are only inherited from the database schema when a table is created, it is recommended to set the default placement option using a PLACEMENT POLICY. This ensures that future changes to the policy propagate to existing tables.
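A minimal sketch of the policy-based approach, reusing the eastandwest policy defined earlier (the table name t5 is illustrative):
ALTER DATABASE test PLACEMENT POLICY=eastandwest;
CREATE TABLE t5 (a INT); -- Creates a table t5 that inherits the eastandwest policy.
If eastandwest is later altered, t5 picks up the change, because the table references the policy rather than a fixed set of direct options.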
Advanced placement
The placement options PRIMARY_REGION, REGIONS, and SCHEDULE meet the basic needs of data placement but offer limited flexibility. For more complex scenarios that require more control, you can use the advanced placement options CONSTRAINTS and FOLLOWER_CONSTRAINTS. Note that you cannot specify the PRIMARY_REGION, REGIONS, or SCHEDULE option together with the CONSTRAINTS option; if you specify both at the same time, an error is returned.
For example, to set constraints that data must reside on a TiKV store where the label disk matches a value:
CREATE PLACEMENT POLICY storeonfastssd CONSTRAINTS="[+disk=ssd]";
CREATE PLACEMENT POLICY storeonhdd CONSTRAINTS="[+disk=hdd]";
CREATE PLACEMENT POLICY companystandardpolicy CONSTRAINTS="";
CREATE TABLE t1 (id INT, name VARCHAR(50), purchased DATE)
PLACEMENT POLICY=companystandardpolicy
PARTITION BY RANGE( YEAR(purchased) ) (
PARTITION p0 VALUES LESS THAN (2000) PLACEMENT POLICY=storeonhdd,
PARTITION p1 VALUES LESS THAN (2005),
PARTITION p2 VALUES LESS THAN (2010),
PARTITION p3 VALUES LESS THAN (2015),
PARTITION p4 VALUES LESS THAN MAXVALUE PLACEMENT POLICY=storeonfastssd
);
You can specify constraints either in list format ([+disk=ssd]) or in dictionary format ({+disk=ssd: 1,+disk=hdd: 2}).
In list format, constraints are specified as a list of key-value pairs. The key starts with either a + or a -. +disk=ssd indicates that the label disk must be set to ssd, and -disk=hdd indicates that the label disk must not be hdd.
In dictionary format, constraints also indicate the number of instances that apply to each rule. For example, FOLLOWER_CONSTRAINTS="{+region=us-east-1: 1,+region=us-east-2: 1,+region=us-west-1: 1,+any: 1}"; indicates that 1 follower is in us-east-1, 1 follower is in us-east-2, 1 follower is in us-west-1, and 1 follower can be in any region. For another example, FOLLOWER_CONSTRAINTS='{"+region=us-east-1,+disk=hdd":1,"+region=us-west-1":1}'; indicates that 1 follower is in us-east-1 with an hdd disk, and 1 follower is in us-west-1.
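As a complete statement, a dictionary-format constraint might look like the following sketch (the policy name is illustrative; note the space after each colon, which the YAML-based parser requires, as described in the note below):
CREATE PLACEMENT POLICY multiregionfollowers FOLLOWER_CONSTRAINTS="{+region=us-east-1: 1,+region=us-west-1: 1}";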
Dictionary and list formats are based on the YAML parser, but YAML syntax might be incorrectly parsed in some cases. For example, "{+disk=ssd:1,+disk=hdd:2}" (with no space after the colons) is incorrectly parsed as '{"+disk=ssd:1": null, "+disk=hdd:2": null}'. But "{+disk=ssd: 1,+disk=hdd: 2}" is correctly parsed as '{"+disk=ssd": 1, "+disk=hdd": 2}'.
Known limitations
The following known limitations exist in the experimental release of Placement Rules in SQL:
- Dumpling does not support dumping placement policies. See issue #29371.
- TiDB tools, including Backup & Restore (BR), TiCDC, TiDB Lightning, and TiDB Data Migration (DM), do not yet support placement rules.
- Temporary tables do not support placement options (either via direct placement or placement policies).
- Syntactic sugar rules are permitted for setting PRIMARY_REGION and REGIONS. In the future, we plan to add varieties for PRIMARY_RACK, PRIMARY_ZONE, and PRIMARY_HOST. See issue #18030.
- TiFlash learners are not configurable through Placement Rules syntax.
- Placement rules only ensure that data at rest resides on the correct TiKV store. The rules do not guarantee that data in transit (via either user queries or internal operations) only occurs in a specific region.