Statement Summary Tables
To better handle SQL performance issues, MySQL provides statement summary tables in performance_schema to monitor SQL statements with statistics. Among these tables, events_statements_summary_by_digest is very useful for locating SQL problems thanks to its abundant fields, such as latency, execution times, rows scanned, and full table scans. Therefore, starting from v4.0.0-rc.1, TiDB provides system tables in information_schema (not performance_schema) that are similar to events_statements_summary_by_digest in terms of features.
This document details these tables and introduces how to use them to troubleshoot SQL performance issues.
statements_summary is a system table in information_schema. It groups SQL statements by the SQL digest and the plan digest, and provides statistics for each SQL category.
The "SQL digest" here means the same as it does in slow logs: a unique identifier calculated from the normalized SQL statement. The normalization process ignores constants and blank characters, and is case-insensitive. Therefore, statements with consistent syntax have the same digest. For example:
```sql
SELECT * FROM employee WHERE id IN (1, 2, 3) AND salary BETWEEN 1000 AND 2000;
select * from EMPLOYEE where ID in (4, 5) and SALARY between 3000 and 4000;
```
After normalization, they are both of the following category:
```sql
select * from employee where id in (...) and salary between ? and ?;
```
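The normalization step can be sketched in a few lines. The following is a simplified, regex-based illustration only, not TiDB's actual digest algorithm (which hashes the normalized statement):

```python
import re

def normalize_sql(sql: str) -> str:
    """Simplified sketch of SQL normalization: lowercase the statement,
    replace literals with '?', and collapse IN-lists to '(...)'.
    Illustration only; not TiDB's real digest computation."""
    s = sql.strip().rstrip(";").lower()
    s = re.sub(r"'[^']*'", "?", s)                     # string literals -> ?
    s = re.sub(r"\b\d+\b", "?", s)                     # numeric literals -> ?
    s = re.sub(r"\s+", " ", s)                         # collapse whitespace
    s = re.sub(r"in \((?:\?,? ?)+\)", "in (...)", s)   # collapse IN lists
    return s + ";"

a = normalize_sql("SELECT * FROM employee WHERE id IN (1, 2, 3) AND salary BETWEEN 1000 AND 2000;")
b = normalize_sql("select * from EMPLOYEE where ID in (4, 5) and SALARY between 3000 and 4000;")
print(a)       # select * from employee where id in (...) and salary between ? and ?;
print(a == b)  # True: both statements fall into the same category
```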
The "plan digest" here refers to the unique identifier calculated from the normalized execution plan. The normalization process ignores constants. The same SQL statements might be grouped into different categories because the same statements might have different execution plans; SQL statements of the same category have the same execution plan.
statements_summary stores the aggregated results of SQL monitoring metrics. In general, each monitoring metric includes both a maximum and an average value. For example, the execution latency metric corresponds to two fields: AVG_LATENCY (average latency) and MAX_LATENCY (maximum latency).
To make sure that the monitoring metrics are up to date, data in the statements_summary table is periodically cleared, and only recent aggregated results are retained and displayed. The periodic clearing is controlled by the tidb_stmt_summary_refresh_interval system variable. If you happen to query right after a clearing, the data displayed might be very little.
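As a rough illustration of these refresh windows, the following sketch assumes that summary windows align to multiples of the refresh interval (the sample output's half-hour boundaries suggest this, but the exact alignment is an implementation detail):

```python
from datetime import datetime, timezone

REFRESH_INTERVAL = 1800  # tidb_stmt_summary_refresh_interval, in seconds

def summary_window(ts: datetime, interval: int = REFRESH_INTERVAL):
    """Illustrative only: compute the [begin, end) summary window that a
    timestamp falls into, assuming windows align to multiples of the
    refresh interval."""
    epoch = ts.replace(tzinfo=timezone.utc).timestamp()
    begin = int(epoch // interval) * interval
    return (datetime.fromtimestamp(begin, tz=timezone.utc),
            datetime.fromtimestamp(begin + interval, tz=timezone.utc))

begin, end = summary_window(datetime(2020, 1, 2, 11, 12, 54))
print(begin.strftime("%H:%M:%S"), end.strftime("%H:%M:%S"))  # 11:00:00 11:30:00
```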
The following is a sample output of querying statements_summary:

```
   SUMMARY_BEGIN_TIME: 2020-01-02 11:00:00
     SUMMARY_END_TIME: 2020-01-02 11:30:00
            STMT_TYPE: Select
          SCHEMA_NAME: test
               DIGEST: 0611cc2fe792f8c146cc97d39b31d9562014cf15f8d41f23a4938ca341f54182
          DIGEST_TEXT: select * from employee where id = ?
          TABLE_NAMES: test.employee
          INDEX_NAMES: NULL
          SAMPLE_USER: root
           EXEC_COUNT: 3
          SUM_LATENCY: 1035161
          MAX_LATENCY: 399594
          MIN_LATENCY: 301353
          AVG_LATENCY: 345053
    AVG_PARSE_LATENCY: 57000
    MAX_PARSE_LATENCY: 57000
  AVG_COMPILE_LATENCY: 175458
  MAX_COMPILE_LATENCY: 175458
  ...........
              AVG_MEM: 103
              MAX_MEM: 103
             AVG_DISK: 65535
             MAX_DISK: 65535
    AVG_AFFECTED_ROWS: 0
           FIRST_SEEN: 2020-01-02 11:12:54
            LAST_SEEN: 2020-01-02 11:25:24
    QUERY_SAMPLE_TEXT: select * from employee where id=3100
     PREV_SAMPLE_TEXT:
          PLAN_DIGEST: f415b8d52640b535b9b12a9c148a8630d2c6d59e419aad29397842e32e8e5de3
                 PLAN: Point_Get_1  root  1  table:employee, handle:3100
```
In TiDB, the time unit of fields in statement summary tables is nanosecond (ns), whereas in MySQL the time unit is picosecond (ps).
The table schema of statements_summary_history is identical to that of statements_summary, and it saves the historical data of a time range. By checking historical data, you can troubleshoot anomalies and compare the monitoring metrics of different time ranges. The SUMMARY_BEGIN_TIME and SUMMARY_END_TIME fields represent the start time and the end time of the historical time range.
The tidb_stmt_summary_max_stmt_count variable controls the maximum number of statements that the statements_summary table stores in memory. The statements_summary table uses the LRU algorithm: once the number of SQL statements exceeds the tidb_stmt_summary_max_stmt_count value, the least recently used record is evicted from the table. The number of SQL statements evicted during each period is recorded in the statements_summary_evicted table. The statements_summary_evicted table is updated only when a SQL record is evicted from the statements_summary table, and it records only the period during which the eviction occurred and the number of evicted SQL statements.
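The eviction mechanics can be sketched with a toy LRU cache. This is an illustration of the behavior described above, not TiDB's implementation; the class and field names are invented for the example:

```python
from collections import OrderedDict

class StatementSummary:
    """Toy sketch of LRU-style eviction: once the number of SQL categories
    exceeds max_stmt_count, the least recently used category is evicted,
    and evictions are counted per refresh period (as in
    statements_summary_evicted)."""
    def __init__(self, max_stmt_count: int):
        self.max_stmt_count = max_stmt_count
        self.records = OrderedDict()   # digest -> execution count
        self.evicted_this_period = 0

    def observe(self, digest: str):
        if digest in self.records:
            self.records.move_to_end(digest)      # mark as recently used
            self.records[digest] += 1
        else:
            self.records[digest] = 1
            if len(self.records) > self.max_stmt_count:
                self.records.popitem(last=False)  # evict the LRU category
                self.evicted_this_period += 1

s = StatementSummary(max_stmt_count=2)
for d in ["a", "b", "a", "c"]:   # "b" is least recently used when "c" arrives
    s.observe(d)
print(sorted(s.records))         # ['a', 'c']
print(s.evicted_this_period)     # 1
```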
The statements_summary, statements_summary_history, and statements_summary_evicted tables only show the statement summary of a single TiDB server. To query the data of the entire cluster, you need to query the cluster_statements_summary, cluster_statements_summary_history, or cluster_statements_summary_evicted tables. cluster_statements_summary displays the statements_summary data of each TiDB server. cluster_statements_summary_history displays the statements_summary_history data of each TiDB server. cluster_statements_summary_evicted displays the statements_summary_evicted data of each TiDB server. These tables use the INSTANCE field to represent the address of the TiDB server. The other fields are the same as those in the corresponding statement summary tables.
The following system variables are used to control the statement summary:
tidb_enable_stmt_summary: Determines whether to enable the statement summary feature. 1 means to enable, and 0 means to disable. The feature is enabled by default. The statistics in the system tables are cleared if this feature is disabled, and are re-calculated the next time the feature is enabled. Tests have shown that enabling this feature has little impact on performance.
tidb_stmt_summary_refresh_interval: The interval at which the statements_summary table is refreshed. The time unit is second (s). The default value is 1800.
tidb_stmt_summary_history_size: The size of each SQL statement category stored in the statements_summary_history table, which is also the maximum number of records in the statements_summary_evicted table. The default value is 24.
tidb_stmt_summary_max_stmt_count: Limits the number of SQL statements that can be stored in statement summary tables. The default value is 3000. If the limit is exceeded, the SQL statements that have remained unused for the longest time are cleared. These cleared SQL statements are recorded in the statements_summary_evicted table.
tidb_stmt_summary_max_sql_length: Specifies the longest display length of QUERY_SAMPLE_TEXT. The default value is 4096.
tidb_stmt_summary_internal_query: Determines whether to count the internal SQL statements of TiDB. 1 means to count, and 0 means not to count. The default value is 0.
When a category of SQL statements needs to be removed because the tidb_stmt_summary_max_stmt_count limit is exceeded, TiDB removes the data of that SQL statement category for all time ranges from the statements_summary_history table. Therefore, even if the number of SQL statement categories in a certain time range does not reach the limit, the number of SQL statements stored in the statements_summary_history table might be less than the actual number of SQL statements. If this situation occurs and affects performance, it is recommended that you increase the value of tidb_stmt_summary_max_stmt_count.
An example of the statement summary configuration is shown as follows:
```sql
set global tidb_enable_stmt_summary = true;
set global tidb_stmt_summary_refresh_interval = 1800;
set global tidb_stmt_summary_history_size = 24;
```
After the configuration above takes effect, the statements_summary table is cleared every 30 minutes. The statements_summary_history table stores data generated over the recent 12 hours. The statements_summary_evicted table records the recent 24 periods during which SQL statements were evicted from the statement summary, and is updated every 30 minutes.
The tidb_stmt_summary_history_size, tidb_stmt_summary_max_stmt_count, and tidb_stmt_summary_max_sql_length configuration items affect memory usage. It is recommended that you adjust them based on your needs, the SQL size, the SQL count, and the machine configuration; do not set them to overly large values. You can roughly estimate the memory usage as proportional to the product of these three items.
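For a back-of-the-envelope estimate, you can bound the SQL-text memory by multiplying the three items. The factor for the number of retained text fields below is an assumption for illustration, not an official formula:

```python
def stmt_summary_memory_upper_bound(history_size: int,
                                    max_stmt_count: int,
                                    max_sql_length: int,
                                    text_fields: int = 3) -> int:
    """Rough upper bound (in bytes) on SQL-text memory held by the
    statement summary: up to history_size * max_stmt_count records, each
    retaining up to max_sql_length bytes per retained text field.
    text_fields=3 (e.g. DIGEST_TEXT, QUERY_SAMPLE_TEXT, PREV_SAMPLE_TEXT)
    is an assumption for illustration, not an official formula."""
    return history_size * max_stmt_count * max_sql_length * text_fields

# With the defaults (24, 3000, 4096), the worst case is about 844 MiB.
print(stmt_summary_memory_upper_bound(24, 3000, 4096) / 2**20)  # 843.75
```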
After the system has run for a period of time (depending on the system load), you can check the statements_summary table to see whether SQL eviction has occurred. For example:
```sql
select @@global.tidb_stmt_summary_max_stmt_count;
select count(*) from information_schema.statements_summary;
```
```
+-------------------------------------------+
| @@global.tidb_stmt_summary_max_stmt_count |
+-------------------------------------------+
| 3000                                      |
+-------------------------------------------+
1 row in set (0.001 sec)

+----------+
| count(*) |
+----------+
| 3001     |
+----------+
1 row in set (0.001 sec)
```
You can see that the statements_summary table is full of records. Then check the evicted data in the statements_summary_evicted table:
```sql
select * from information_schema.statements_summary_evicted;
```
```
+---------------------+---------------------+---------------+
| BEGIN_TIME          | END_TIME            | EVICTED_COUNT |
+---------------------+---------------------+---------------+
| 2020-01-02 16:30:00 | 2020-01-02 17:00:00 | 59            |
| 2020-01-02 16:00:00 | 2020-01-02 16:30:00 | 45            |
+---------------------+---------------------+---------------+
2 rows in set (0.001 sec)
```
From the result above, you can see that a maximum of 59 SQL categories were evicted in a single period, which indicates that the proper size of the statement summary is 59 records greater than the current limit (that is, at least 3059 records).
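In other words, using the numbers above:

```python
# Sizing sketch based on the query results above: the current limit is
# 3000 categories and at most 59 categories were evicted in one period,
# so raising tidb_stmt_summary_max_stmt_count by the peak eviction count
# should keep every observed category resident.
current_limit = 3000
peak_evicted_per_period = 59
recommended_limit = current_limit + peak_evicted_per_period
print(recommended_limit)  # 3059
```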
The statement summary tables have the following limitation:
All data of the statement summary tables above will be lost when the TiDB server is restarted. This is because statement summary tables are all memory tables, and the data is cached in memory instead of being persisted on storage.
This section provides two examples to show how to use the statement summary feature to troubleshoot SQL performance issues.
In this example, the client reports slow performance with point queries on the employee table. You can perform a fuzzy search on the SQL texts:
```sql
SELECT avg_latency, exec_count, query_sample_text
    FROM information_schema.statements_summary
    WHERE digest_text LIKE 'select * from employee%';
```
As the result below shows, the avg_latency values (about 1 ms and 0.3 ms) are within the normal range. Therefore, it can be concluded that the server side is not the cause, and you can troubleshoot the client or the network instead.

```
+-------------+------------+------------------------------------------+
| avg_latency | exec_count | query_sample_text                        |
+-------------+------------+------------------------------------------+
| 1042040     | 2          | select * from employee where name='eric' |
| 345053      | 3          | select * from employee where id=3100     |
+-------------+------------+------------------------------------------+
2 rows in set (0.00 sec)
```
If the QPS decreases significantly from 10:00 to 10:30, you can find the three categories of SQL statements with the highest total time consumption from the history table:
```sql
SELECT sum_latency, avg_latency, exec_count, query_sample_text
    FROM information_schema.statements_summary_history
    WHERE summary_begin_time='2020-01-02 10:00:00'
    ORDER BY sum_latency DESC LIMIT 3;
```
The result shows that the following three categories of SQL statements consume the most time in total, so they need to be optimized with high priority.
```
+-------------+-------------+------------+-----------------------------------------------------------------------+
| sum_latency | avg_latency | exec_count | query_sample_text                                                     |
+-------------+-------------+------------+-----------------------------------------------------------------------+
| 7855660     | 1122237     | 7          | select avg(salary) from employee where company_id=2013                |
| 7241960     | 1448392     | 5          | select * from employee join company on employee.company_id=company.id |
| 2084081     | 1042040     | 2          | select * from employee where name='eric'                              |
+-------------+-------------+------------+-----------------------------------------------------------------------+
3 rows in set (0.00 sec)
```
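As a sanity check on these statistics, each row's average latency is consistent with its total latency divided by its execution count (latencies are integer nanoseconds, so the listed averages match truncated division):

```python
# Rows copied from the result above: (sum_latency, avg_latency, exec_count).
rows = [
    (7855660, 1122237, 7),   # select avg(salary) from employee ...
    (7241960, 1448392, 5),   # select * from employee join company ...
    (2084081, 1042040, 2),   # select * from employee where name=?
]
# avg_latency == sum_latency // exec_count for every row.
for sum_latency, avg_latency, exec_count in rows:
    assert avg_latency == sum_latency // exec_count
print("ok")
```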
The following are descriptions of fields in the statement summary tables.

Basic fields:
STMT_TYPE: SQL statement type.
SCHEMA_NAME: The current schema in which SQL statements of this category are executed.
DIGEST: The digest of SQL statements of this category.
DIGEST_TEXT: The normalized SQL statement.
QUERY_SAMPLE_TEXT: The original SQL statements of the SQL category. Only one original statement is taken.
TABLE_NAMES: All tables involved in SQL statements. If there is more than one table, each is separated by a comma.
INDEX_NAMES: All SQL indexes used in SQL statements. If there is more than one index, each is separated by a comma.
SAMPLE_USER: The users who execute SQL statements of this category. Only one user is taken.
PLAN_DIGEST: The digest of the execution plan.
PLAN: The original execution plan. If there are multiple statements, the plan of only one statement is taken.
BINARY_PLAN: The original execution plan encoded in binary format. If there are multiple statements, the plan of only one statement is taken. Execute the SELECT tidb_decode_binary_plan('xxx...') statement to parse the specific execution plan.
PLAN_CACHE_HITS: The total number of times that SQL statements of this category hit the plan cache.
PLAN_IN_CACHE: Indicates whether the previous execution of SQL statements of this category hit the plan cache.
Fields related to execution time:
SUMMARY_BEGIN_TIME: The beginning time of the current summary period.
SUMMARY_END_TIME: The ending time of the current summary period.
FIRST_SEEN: The time when SQL statements of this category are seen for the first time.
LAST_SEEN: The time when SQL statements of this category are seen for the last time.
Fields related to TiDB server:
EXEC_COUNT: Total execution times of SQL statements of this category.
SUM_ERRORS: The sum of errors that occurred during execution.
SUM_WARNINGS: The sum of warnings that occurred during execution.
SUM_LATENCY: The total execution latency of SQL statements of this category.
MAX_LATENCY: The maximum execution latency of SQL statements of this category.
MIN_LATENCY: The minimum execution latency of SQL statements of this category.
AVG_LATENCY: The average execution latency of SQL statements of this category.
AVG_PARSE_LATENCY: The average latency of the parser.
MAX_PARSE_LATENCY: The maximum latency of the parser.
AVG_COMPILE_LATENCY: The average latency of the compiler.
MAX_COMPILE_LATENCY: The maximum latency of the compiler.
AVG_MEM: The average memory (byte) used.
MAX_MEM: The maximum memory (byte) used.
AVG_DISK: The average disk space (byte) used.
MAX_DISK: The maximum disk space (byte) used.
Fields related to TiKV Coprocessor task:
SUM_COP_TASK_NUM: The total number of Coprocessor requests sent.
MAX_COP_PROCESS_TIME: The maximum execution time of Coprocessor tasks.
MAX_COP_PROCESS_ADDRESS: The address of the Coprocessor task with the maximum execution time.
MAX_COP_WAIT_TIME: The maximum waiting time of Coprocessor tasks.
MAX_COP_WAIT_ADDRESS: The address of the Coprocessor task with the maximum waiting time.
AVG_PROCESS_TIME: The average processing time of SQL statements in TiKV.
MAX_PROCESS_TIME: The maximum processing time of SQL statements in TiKV.
AVG_WAIT_TIME: The average waiting time of SQL statements in TiKV.
MAX_WAIT_TIME: The maximum waiting time of SQL statements in TiKV.
AVG_BACKOFF_TIME: The average waiting time before retry when a SQL statement encounters an error that requires a retry.
MAX_BACKOFF_TIME: The maximum waiting time before retry when a SQL statement encounters an error that requires a retry.
AVG_TOTAL_KEYS: The average number of keys that Coprocessor has scanned.
MAX_TOTAL_KEYS: The maximum number of keys that Coprocessor has scanned.
AVG_PROCESSED_KEYS: The average number of keys that Coprocessor has processed. Compared with avg_total_keys, avg_processed_keys does not include the old versions of MVCC. A great difference between avg_total_keys and avg_processed_keys indicates that many old versions exist.
MAX_PROCESSED_KEYS: The maximum number of keys that Coprocessor has processed.
AVG_PREWRITE_TIME: The average time of the prewrite phase.
MAX_PREWRITE_TIME: The longest time of the prewrite phase.
AVG_COMMIT_TIME: The average time of the commit phase.
MAX_COMMIT_TIME: The longest time of the commit phase.
AVG_GET_COMMIT_TS_TIME: The average time of getting commit_ts.
MAX_GET_COMMIT_TS_TIME: The longest time of getting commit_ts.
AVG_COMMIT_BACKOFF_TIME: The average waiting time before retry when a SQL statement encounters an error that requires a retry during the commit phase.
MAX_COMMIT_BACKOFF_TIME: The maximum waiting time before retry when a SQL statement encounters an error that requires a retry during the commit phase.
AVG_RESOLVE_LOCK_TIME: The average time for resolving lock conflicts that occur between transactions.
MAX_RESOLVE_LOCK_TIME: The longest time for resolving lock conflicts that occur between transactions.
AVG_LOCAL_LATCH_WAIT_TIME: The average waiting time of the local transaction.
MAX_LOCAL_LATCH_WAIT_TIME: The maximum waiting time of the local transaction.
AVG_WRITE_KEYS: The average count of written keys.
MAX_WRITE_KEYS: The maximum count of written keys.
AVG_WRITE_SIZE: The average amount of written data (in byte).
MAX_WRITE_SIZE: The maximum amount of written data (in byte).
AVG_PREWRITE_REGIONS: The average number of Regions involved in the prewrite phase.
MAX_PREWRITE_REGIONS: The maximum number of Regions during the prewrite phase.
AVG_TXN_RETRY: The average number of transaction retries.
MAX_TXN_RETRY: The maximum number of transaction retries.
SUM_BACKOFF_TIMES: The sum of retries when SQL statements of this category encounter errors that require a retry.
BACKOFF_TYPES: All types of errors that require retries and the number of retries for each type. The format of the field is type:number; if there is more than one error type, each is separated by a comma.
AVG_AFFECTED_ROWS: The average number of rows affected.
PREV_SAMPLE_TEXT: When the current SQL statement is COMMIT, PREV_SAMPLE_TEXT is the statement previous to COMMIT. In this case, SQL statements are grouped by the digest and prev_sample_text, which means that COMMIT statements with different prev_sample_text values are grouped into different rows. When the current SQL statement is not COMMIT, the PREV_SAMPLE_TEXT field is an empty string.
The fields of the statements_summary_evicted table are described as follows:
BEGIN_TIME: Records the start time of the eviction period.
END_TIME: Records the end time of the eviction period.
EVICTED_COUNT: The number of SQL categories that are evicted during the record period.
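The type:number format of the BACKOFF_TYPES field described above can be split mechanically. A small sketch follows; the sample value txnLock:2,pdRPC:1 is a hypothetical illustration, not taken from this document:

```python
def parse_backoff_types(field: str) -> dict:
    """Parse a BACKOFF_TYPES value of the form 'type:number[,type:number...]'
    into a mapping from error type to retry count. An empty value means no
    backoff occurred."""
    if not field:
        return {}
    result = {}
    for pair in field.split(","):
        error_type, count = pair.split(":")
        result[error_type] = int(count)
    return result

print(parse_backoff_types("txnLock:2,pdRPC:1"))  # {'txnLock': 2, 'pdRPC': 1}
print(parse_backoff_types(""))                   # {}
```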