EXPLAIN Walkthrough

Because SQL is a declarative language, you cannot automatically tell whether a query is executed efficiently. You must first use the EXPLAIN statement to learn the current execution plan.

The following statement from the bikeshare example database counts how many trips were taken on July 1, 2017:

EXPLAIN SELECT count(*) FROM trips WHERE start_date BETWEEN '2017-07-01 00:00:00' AND '2017-07-01 23:59:59';
+------------------------------+----------+-----------+---------------+------------------------------------------------------------------------------------------------------------------------+
| id                           | estRows  | task      | access object | operator info                                                                                                          |
+------------------------------+----------+-----------+---------------+------------------------------------------------------------------------------------------------------------------------+
| StreamAgg_20                 | 1.00     | root      |               | funcs:count(Column#13)->Column#11                                                                                      |
| └─TableReader_21             | 1.00     | root      |               | data:StreamAgg_9                                                                                                       |
|   └─StreamAgg_9              | 1.00     | cop[tikv] |               | funcs:count(1)->Column#13                                                                                              |
|     └─Selection_19           | 250.00   | cop[tikv] |               | ge(bikeshare.trips.start_date, 2017-07-01 00:00:00.000000), le(bikeshare.trips.start_date, 2017-07-01 23:59:59.000000) |
|       └─TableFullScan_18     | 10000.00 | cop[tikv] | table:trips   | keep order:false, stats:pseudo                                                                                         |
+------------------------------+----------+-----------+---------------+------------------------------------------------------------------------------------------------------------------------+
5 rows in set (0.00 sec)

Working back from the child operator └─TableFullScan_18, you can see its execution process as follows, which is currently suboptimal:

  1. The coprocessor (TiKV) reads the entire trips table as a TableFullScan operation. It then passes the rows that it reads to the Selection_19 operator, which is still within TiKV.
  2. The WHERE start_date BETWEEN .. predicate is then filtered in the Selection_19 operator. Approximately 250 rows are estimated to meet this selection. Note that this number is estimated according to the statistics and the operator's logic. The └─TableFullScan_18 operator shows stats:pseudo, which means that the table does not have accurate statistics. After you run ANALYZE TABLE trips to collect statistics, the estimates are expected to be more accurate.
  3. The rows that meet the selection criteria then have a count function applied to them. This is also completed inside the StreamAgg_9 operator, which is still inside TiKV (cop[tikv]). The TiKV coprocessor can execute a number of MySQL built-in functions, count being one of them.
  4. The results from StreamAgg_9 are then sent to the TableReader_21 operator which is now inside the TiDB server (the task of root). The estRows column value for this operator is 1, which means that the operator will receive one row from each of the TiKV Regions to be accessed. For more information about these requests, see EXPLAIN ANALYZE.
  5. The StreamAgg_20 operator then applies a count function to each of the rows from the └─TableReader_21 operator, which, as you can see from SHOW TABLE REGIONS (shown in the example after this list), will be about 56 rows. Because this is the root operator, it then returns results to the client.
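
Step 5 refers to the number of Regions that back the trips table. To confirm the Region count in your own deployment, you can list the Regions directly; the exact number and boundaries depend on your data volume and cluster configuration:

SHOW TABLE trips REGIONS;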

Assess the current performance

EXPLAIN only returns the query execution plan but does not execute the query. To get the actual execution time, you can either execute the query or use EXPLAIN ANALYZE:

EXPLAIN ANALYZE SELECT count(*) FROM trips WHERE start_date BETWEEN '2017-07-01 00:00:00' AND '2017-07-01 23:59:59';
+------------------------------+----------+----------+-----------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------+-----------+------+
| id                           | estRows  | actRows  | task      | access object | execution info                                                                                                                                                                                                                                      | operator info                                                                                                          | memory    | disk |
+------------------------------+----------+----------+-----------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------+-----------+------+
| StreamAgg_20                 | 1.00     | 1        | root      |               | time:1.031417203s, loops:2                                                                                                                                                                                                                          | funcs:count(Column#13)->Column#11                                                                                      | 632 Bytes | N/A  |
| └─TableReader_21             | 1.00     | 56       | root      |               | time:1.031408123s, loops:2, cop_task: {num: 56, max: 782.147269ms, min: 5.759953ms, avg: 252.005927ms, p95: 609.294603ms, max_proc_keys: 910371, p95_proc_keys: 704775, tot_proc: 11.524s, tot_wait: 580ms, rpc_num: 56, rpc_time: 14.111932641s}   | data:StreamAgg_9                                                                                                       | 328 Bytes | N/A  |
|   └─StreamAgg_9              | 1.00     | 56       | cop[tikv] |               | proc max:640ms, min:8ms, p80:276ms, p95:480ms, iters:18695, tasks:56                                                                                                                                                                                | funcs:count(1)->Column#13                                                                                              | N/A       | N/A  |
|     └─Selection_19           | 250.00   | 11409    | cop[tikv] |               | proc max:640ms, min:8ms, p80:276ms, p95:476ms, iters:18695, tasks:56                                                                                                                                                                                | ge(bikeshare.trips.start_date, 2017-07-01 00:00:00.000000), le(bikeshare.trips.start_date, 2017-07-01 23:59:59.000000) | N/A       | N/A  |
|       └─TableFullScan_18     | 10000.00 | 19117643 | cop[tikv] | table:trips   | proc max:612ms, min:8ms, p80:248ms, p95:460ms, iters:18695, tasks:56                                                                                                                                                                                | keep order:false, stats:pseudo                                                                                         | N/A       | N/A  |
+------------------------------+----------+----------+-----------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------+-----------+------+
5 rows in set (1.03 sec)

The example query above takes 1.03 seconds to execute, which is not ideal performance.

From the result of EXPLAIN ANALYZE above, actRows indicates that some of the estimates (estRows) are inaccurate (expecting 10 thousand rows but finding 19 million rows), which is already indicated in the operator info (stats:pseudo) of └─TableFullScan_18. If you run ANALYZE TABLE first and then EXPLAIN ANALYZE again, you can see that the estimates are much closer:

ANALYZE TABLE trips;
EXPLAIN ANALYZE SELECT count(*) FROM trips WHERE start_date BETWEEN '2017-07-01 00:00:00' AND '2017-07-01 23:59:59';
Query OK, 0 rows affected (10.22 sec)

+------------------------------+-------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------+-----------+------+
| id                           | estRows     | actRows  | task      | access object | execution info                                                                                                                                                                                                                                     | operator info                                                                                                          | memory    | disk |
+------------------------------+-------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------+-----------+------+
| StreamAgg_20                 | 1.00        | 1        | root      |               | time:926.393612ms, loops:2                                                                                                                                                                                                                         | funcs:count(Column#13)->Column#11                                                                                      | 632 Bytes | N/A  |
| └─TableReader_21             | 1.00        | 56       | root      |               | time:926.384792ms, loops:2, cop_task: {num: 56, max: 850.94424ms, min: 6.042079ms, avg: 234.987725ms, p95: 495.474806ms, max_proc_keys: 910371, p95_proc_keys: 704775, tot_proc: 10.656s, tot_wait: 904ms, rpc_num: 56, rpc_time: 13.158911952s}   | data:StreamAgg_9                                                                                                       | 328 Bytes | N/A  |
|   └─StreamAgg_9              | 1.00        | 56       | cop[tikv] |               | proc max:592ms, min:4ms, p80:244ms, p95:480ms, iters:18695, tasks:56                                                                                                                                                                               | funcs:count(1)->Column#13                                                                                              | N/A       | N/A  |
|     └─Selection_19           | 432.89      | 11409    | cop[tikv] |               | proc max:592ms, min:4ms, p80:244ms, p95:480ms, iters:18695, tasks:56                                                                                                                                                                               | ge(bikeshare.trips.start_date, 2017-07-01 00:00:00.000000), le(bikeshare.trips.start_date, 2017-07-01 23:59:59.000000) | N/A       | N/A  |
|       └─TableFullScan_18     | 19117643.00 | 19117643 | cop[tikv] | table:trips   | proc max:564ms, min:4ms, p80:228ms, p95:456ms, iters:18695, tasks:56                                                                                                                                                                               | keep order:false                                                                                                       | N/A       | N/A  |
+------------------------------+-------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------+-----------+------+
5 rows in set (0.93 sec)

After ANALYZE TABLE is executed, you can see that the estimated rows for the └─TableFullScan_18 operator are accurate and that the estimate for └─Selection_19 is now also much closer. In the two cases above, although the execution plan (the set of operators TiDB uses to execute this query) has not changed, sub-optimal plans are frequently caused by outdated statistics.
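
If you suspect that statistics are out of date, one way to check when they were last collected and how many rows have been modified since then is the SHOW STATS_META statement (the columns returned can vary slightly between TiDB versions):

SHOW STATS_META WHERE Table_name = 'trips';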

In addition to ANALYZE TABLE, TiDB automatically regenerates statistics as a background operation after the threshold of tidb_auto_analyze_ratio is reached. You can see how close TiDB is to this threshold (how healthy TiDB considers the statistics to be) by executing the SHOW STATS_HEALTHY statement:

SHOW STATS_HEALTHY;
+-----------+------------+----------------+---------+
| Db_name   | Table_name | Partition_name | Healthy |
+-----------+------------+----------------+---------+
| bikeshare | trips      |                |     100 |
+-----------+------------+----------------+---------+
1 row in set (0.00 sec)
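
To see the current value of the threshold itself, you can read the system variable directly; for example:

SHOW VARIABLES LIKE 'tidb_auto_analyze_ratio';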

Identify optimizations

The current execution plan is efficient in the following aspects:

  • Most of the work is handled inside the TiKV coprocessor. Only 56 rows need to be sent across the network back to TiDB for processing. Each of these rows is short and contains only the count that matches the selection.

  • Aggregating the count of rows both in TiDB (StreamAgg_20) and in TiKV (└─StreamAgg_9) uses stream aggregation, which is very efficient in its memory usage.

The biggest issue with the current execution plan is that the predicate start_date BETWEEN '2017-07-01 00:00:00' AND '2017-07-01 23:59:59' is not applied immediately. All rows are read first with a TableFullScan operator, and then the selection is applied afterwards. You can find the cause in the output of SHOW CREATE TABLE trips:

SHOW CREATE TABLE trips\G
*************************** 1. row ***************************
       Table: trips
Create Table: CREATE TABLE `trips` (
  `trip_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `duration` int(11) NOT NULL,
  `start_date` datetime DEFAULT NULL,
  `end_date` datetime DEFAULT NULL,
  `start_station_number` int(11) DEFAULT NULL,
  `start_station` varchar(255) DEFAULT NULL,
  `end_station_number` int(11) DEFAULT NULL,
  `end_station` varchar(255) DEFAULT NULL,
  `bike_number` varchar(255) DEFAULT NULL,
  `member_type` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`trip_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin AUTO_INCREMENT=20477318
1 row in set (0.00 sec)

There is no index on start_date. You would need an index in order to push this predicate into an index reader operator. Add an index as follows:

ALTER TABLE trips ADD INDEX (start_date);
Query OK, 0 rows affected (2 min 10.23 sec)

After adding an index, you can then repeat the query in EXPLAIN. In the following output, you can see that a new execution plan is chosen, and the TableFullScan and Selection operators have been eliminated:

EXPLAIN SELECT count(*) FROM trips WHERE start_date BETWEEN '2017-07-01 00:00:00' AND '2017-07-01 23:59:59';
+-----------------------------+---------+-----------+-------------------------------------------+-------------------------------------------------------------------+
| id                          | estRows | task      | access object                             | operator info                                                     |
+-----------------------------+---------+-----------+-------------------------------------------+-------------------------------------------------------------------+
| StreamAgg_17                | 1.00    | root      |                                           | funcs:count(Column#13)->Column#11                                 |
| └─IndexReader_18            | 1.00    | root      |                                           | index:StreamAgg_9                                                 |
|   └─StreamAgg_9             | 1.00    | cop[tikv] |                                           | funcs:count(1)->Column#13                                         |
|     └─IndexRangeScan_16     | 8471.88 | cop[tikv] | table:trips, index:start_date(start_date) | range:[2017-07-01 00:00:00,2017-07-01 23:59:59], keep order:false |
+-----------------------------+---------+-----------+-------------------------------------------+-------------------------------------------------------------------+
4 rows in set (0.00 sec)

To compare the actual execution time, you can again use EXPLAIN ANALYZE:

EXPLAIN ANALYZE SELECT count(*) FROM trips WHERE start_date BETWEEN '2017-07-01 00:00:00' AND '2017-07-01 23:59:59';
+-----------------------------+---------+---------+-----------+-------------------------------------------+------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------+-----------+------+
| id                          | estRows | actRows | task      | access object                             | execution info                                                                                                   | operator info                                                     | memory    | disk |
+-----------------------------+---------+---------+-----------+-------------------------------------------+------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------+-----------+------+
| StreamAgg_17                | 1.00    | 1       | root      |                                           | time:4.516728ms, loops:2                                                                                         | funcs:count(Column#13)->Column#11                                 | 372 Bytes | N/A  |
| └─IndexReader_18            | 1.00    | 1       | root      |                                           | time:4.514278ms, loops:2, cop_task: {num: 1, max:4.462288ms, proc_keys: 11409, rpc_num: 1, rpc_time: 4.457148ms} | index:StreamAgg_9                                                 | 238 Bytes | N/A  |
|   └─StreamAgg_9             | 1.00    | 1       | cop[tikv] |                                           | time:4ms, loops:12                                                                                               | funcs:count(1)->Column#13                                         | N/A       | N/A  |
|     └─IndexRangeScan_16     | 8471.88 | 11409   | cop[tikv] | table:trips, index:start_date(start_date) | time:4ms, loops:12                                                                                               | range:[2017-07-01 00:00:00,2017-07-01 23:59:59], keep order:false | N/A       | N/A  |
+-----------------------------+---------+---------+-----------+-------------------------------------------+------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------+-----------+------+
4 rows in set (0.00 sec)

From the result above, the query time has dropped from 1.03 seconds to 0.00 seconds.
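
You can also verify the improvement by running the query itself rather than EXPLAIN ANALYZE and checking the elapsed time that the client reports:

SELECT count(*) FROM trips WHERE start_date BETWEEN '2017-07-01 00:00:00' AND '2017-07-01 23:59:59';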

Disable the early execution of subqueries

During query optimization, TiDB pre-executes subqueries that can be directly calculated. For example:

CREATE TABLE t1(a int);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2(a int);
EXPLAIN SELECT * FROM t2 WHERE a = (SELECT a FROM t1);
+--------------------------+----------+-----------+---------------+--------------------------------+
| id                       | estRows  | task      | access object | operator info                  |
+--------------------------+----------+-----------+---------------+--------------------------------+
| TableReader_14           | 10.00    | root      |               | data:Selection_13              |
| └─Selection_13           | 10.00    | cop[tikv] |               | eq(test.t2.a, 1)               |
|   └─TableFullScan_12     | 10000.00 | cop[tikv] | table:t2      | keep order:false, stats:pseudo |
+--------------------------+----------+-----------+---------------+--------------------------------+
3 rows in set (0.00 sec)

In the preceding example, the a = (SELECT a FROM t1) subquery is calculated during optimization and rewritten as t2.a=1. This allows more optimizations, such as constant propagation and folding. However, this behavior affects the execution time of the EXPLAIN statement itself. When the subquery takes a long time to execute, the EXPLAIN statement might not complete, which could affect online troubleshooting.

Starting from v7.3.0, TiDB introduces the tidb_opt_enable_non_eval_scalar_subquery system variable, which controls whether to disable the pre-execution of such subqueries in EXPLAIN. The default value of this variable is OFF, which means that the subquery is pre-calculated. You can set this variable to ON to disable the pre-execution of subqueries:

SET @@tidb_opt_enable_non_eval_scalar_subquery = ON;
EXPLAIN SELECT * FROM t2 WHERE a = (SELECT a FROM t1);
+---------------------------+----------+-----------+---------------+---------------------------------+
| id                        | estRows  | task      | access object | operator info                   |
+---------------------------+----------+-----------+---------------+---------------------------------+
| Selection_13              | 8000.00  | root      |               | eq(test.t2.a, ScalarQueryCol#5) |
| └─TableReader_15          | 10000.00 | root      |               | data:TableFullScan_14           |
|   └─TableFullScan_14      | 10000.00 | cop[tikv] | table:t2      | keep order:false, stats:pseudo  |
| ScalarSubQuery_10         | N/A      | root      |               | Output: ScalarQueryCol#5        |
| └─MaxOneRow_6             | 1.00     | root      |               |                                 |
|   └─TableReader_9         | 1.00     | root      |               | data:TableFullScan_8            |
|     └─TableFullScan_8     | 1.00     | cop[tikv] | table:t1      | keep order:false, stats:pseudo  |
+---------------------------+----------+-----------+---------------+---------------------------------+
7 rows in set (0.00 sec)

As you can see, the scalar subquery is not expanded during execution, which makes it easier to understand the specific execution process of such SQL statements.
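
Because the default value of tidb_opt_enable_non_eval_scalar_subquery is OFF, you can restore the default pre-execution behavior after troubleshooting by setting the variable back:

SET @@tidb_opt_enable_non_eval_scalar_subquery = OFF;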
