TiDB Sysbench Performance Test Report -- v3.0 vs. v2.1

Test purpose

This test aims to compare the performance of TiDB 3.0 and TiDB 2.1 in the OLTP scenario.

Test version, time, and place

TiDB version: v3.0.0 vs. v2.1.13

Time: June 2019

Place: Beijing

Test environment

This test runs on AWS EC2 and uses the CentOS-7.6.1810-Nitro (ami-028946f4cffc8b916) image. The components and types of instances are as follows:

| Component | Instance type |
| :-------- | :------------ |
| PD | r5d.xlarge |
| TiKV | c5d.4xlarge |
| TiDB | c5.4xlarge |

Sysbench version: 1.0.17

Test plan

Use Sysbench to import 16 tables, with 10,000,000 rows in each table. Start three Sysbench instances to apply pressure to the three TiDB instances. The number of concurrent requests increases incrementally, and each concurrency level is tested for 5 minutes.

Prepare data using the following command:

```shell
sysbench oltp_common \
    --threads=16 \
    --rand-type=uniform \
    --db-driver=mysql \
    --mysql-db=sbtest \
    --mysql-host=$tidb_host \
    --mysql-port=$tidb_port \
    --mysql-user=root \
    --mysql-password=password \
    prepare --tables=16 --table-size=10000000
```
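Before starting the test, you can spot-check that the import has finished. The following is a minimal verification sketch, not part of the original test plan; it reuses the host, port, and password placeholders from the command above:

```shell
# Spot-check one of the 16 generated tables; each should contain 10,000,000 rows.
mysql -h $tidb_host -P $tidb_port -u root -ppassword sbtest \
    -e "SELECT COUNT(*) FROM sbtest1;"
```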

Then test TiDB using the following command:

```shell
sysbench $testname \
    --threads=$threads \
    --time=300 \
    --report-interval=15 \
    --rand-type=uniform \
    --rand-seed=$RANDOM \
    --db-driver=mysql \
    --mysql-db=sbtest \
    --mysql-host=$tidb_host \
    --mysql-port=$tidb_port \
    --mysql-user=root \
    --mysql-password=password \
    run --tables=16 --table-size=10000000
```
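The report does not include the exact driver script. The following is a minimal sketch of how the run command above can be repeated at incrementally increasing concurrency levels from one Sysbench instance; the workload names are the standard Sysbench 1.0 Lua scripts, and the assumption that the total thread counts in the result tables (150 to 1500) are split evenly across the three Sysbench instances is ours, not stated in the original report:

```shell
# Hypothetical driver loop: run each workload at increasing concurrency,
# 5 minutes (--time=300) per level. Thread counts are the per-instance share
# (one third) of the 150-1500 totals shown in the result tables.
for testname in oltp_point_select oltp_update_non_index oltp_update_index oltp_read_write; do
    for threads in 50 100 200 300 400 500; do
        sysbench $testname \
            --threads=$threads \
            --time=300 \
            --report-interval=15 \
            --rand-type=uniform \
            --rand-seed=$RANDOM \
            --db-driver=mysql \
            --mysql-db=sbtest \
            --mysql-host=$tidb_host \
            --mysql-port=$tidb_port \
            --mysql-user=root \
            --mysql-password=password \
            run --tables=16 --table-size=10000000
    done
done
```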

TiDB version information

v3.0.0

| Component | GitHash |
| :-------- | :------ |
| TiDB | 8efbe62313e2c1c42fd76d35c6f020087eef22c2 |
| TiKV | a467f410d235fa9c5b3c355e3b620f81d3ac0e0c |
| PD | 70aaa5eee830e21068f1ba2d4c9bae59153e5ca3 |

v2.1.13

| Component | GitHash |
| :-------- | :------ |
| TiDB | 6b5b1a6802f9b8f5a22d8aab24ac80729331e1bc |
| TiKV | b3cf3c8d642534ea6fa93d475a46da285cc6acbf |
| PD | 886362ebfb26ef0834935afc57bcee8a39c88e54 |

TiDB parameter configuration

Enable the prepared plan cache in both TiDB v2.1 and v3.0 (for the Point Select and Read Write tests, the prepared plan cache is not enabled in v2.1 for optimization reasons):

```toml
[prepared-plan-cache]
enabled = true
```
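As an illustration of what this setting affects (not part of the original test), the plan cache applies to statements executed through the prepared-statement interface, which is how Sysbench issues its queries. A minimal sketch using the placeholders and the `sbtest1` table from the commands above:

```shell
# Illustration only: plans for prepared statements such as this one can be
# cached and reused across executions when the prepared plan cache is enabled.
mysql -h $tidb_host -P $tidb_port -u root -ppassword sbtest -e "
    PREPARE stmt FROM 'SELECT c FROM sbtest1 WHERE id = ?';
    SET @id = 100;
    EXECUTE stmt USING @id;
    DEALLOCATE PREPARE stmt;"
```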

Then configure global variables:

```sql
set global tidb_hashagg_final_concurrency=1;
set global tidb_hashagg_partial_concurrency=1;
set global tidb_disable_txn_auto_retry=0;
```
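A quick way to confirm that the global variables have taken effect (a verification sketch, not part of the original test plan; new client connections pick up the global values):

```shell
# Check the variable values from any client session.
mysql -h $tidb_host -P $tidb_port -u root -ppassword \
    -e "SHOW GLOBAL VARIABLES LIKE 'tidb_hashagg%_concurrency';
        SHOW GLOBAL VARIABLES LIKE 'tidb_disable_txn_auto_retry';"
```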

In addition, make the following configuration in v3.0:

```toml
[tikv-client]
max-batch-wait-time = 2000000
```

TiKV parameter configuration

Use the following configuration in both TiKV v2.1 and v3.0:

```toml
log-level = "error"

[readpool.storage]
normal-concurrency = 10

[server]
grpc-concurrency = 6

[rocksdb.defaultcf]
block-cache-size = "14GB"

[rocksdb.writecf]
block-cache-size = "8GB"

[rocksdb.lockcf]
block-cache-size = "1GB"
```

In addition, make the following configuration in v3.0:

```toml
[raftstore]
apply-pool-size = 3
store-pool-size = 3
```

Cluster topology

| Machine IP | Deployment instance |
| :--------- | :------------------ |
| 172.31.8.83 | 3 * Sysbench |
| 172.31.7.80, 172.31.5.163, 172.31.11.123 | PD |
| 172.31.4.172, 172.31.1.155, 172.31.9.210 | TiKV |
| 172.31.7.80, 172.31.5.163, 172.31.11.123 | TiDB |

Test result

Point Select test

v2.1:

| Threads | QPS | 95% latency (ms) |
| :------ | :-------- | :--------------- |
| 150 | 240304.06 | 1.61 |
| 300 | 276635.75 | 2.97 |
| 600 | 307838.06 | 5.18 |
| 900 | 323667.93 | 7.30 |
| 1200 | 330925.73 | 9.39 |
| 1500 | 336250.38 | 11.65 |

v3.0:

| Threads | QPS | 95% latency (ms) |
| :------ | :-------- | :--------------- |
| 150 | 334219.04 | 0.64 |
| 300 | 456444.86 | 1.10 |
| 600 | 512177.48 | 2.11 |
| 900 | 525945.13 | 3.13 |
| 1200 | 534577.36 | 4.18 |
| 1500 | 533944.64 | 5.28 |

(Figure: Point Select test results)

Update Non-Index test

v2.1:

| Threads | QPS | 95% latency (ms) |
| :------ | :------- | :--------------- |
| 150 | 21785.37 | 8.58 |
| 300 | 28979.27 | 13.70 |
| 600 | 34629.72 | 24.83 |
| 900 | 36410.06 | 43.39 |
| 1200 | 37174.15 | 62.19 |
| 1500 | 37408.88 | 87.56 |

v3.0:

| Threads | QPS | 95% latency (ms) |
| :------ | :------- | :--------------- |
| 150 | 28045.75 | 6.67 |
| 300 | 39237.77 | 9.91 |
| 600 | 49536.56 | 16.71 |
| 900 | 55963.73 | 22.69 |
| 1200 | 59904.02 | 29.72 |
| 1500 | 62247.95 | 42.61 |

(Figure: Update Non-Index test results)

Update Index test

v2.1:

| Threads | QPS | 95% latency (ms) |
| :------ | :------- | :--------------- |
| 150 | 14378.24 | 13.22 |
| 300 | 16916.43 | 24.38 |
| 600 | 17636.11 | 57.87 |
| 900 | 17740.92 | 95.81 |
| 1200 | 17929.24 | 130.13 |
| 1500 | 18012.80 | 161.51 |

v3.0:

| Threads | QPS | 95% latency (ms) |
| :------ | :------- | :--------------- |
| 150 | 19047.32 | 10.09 |
| 300 | 24467.64 | 16.71 |
| 600 | 28882.66 | 31.94 |
| 900 | 30298.41 | 57.87 |
| 1200 | 30419.40 | 92.42 |
| 1500 | 30643.55 | 125.52 |

(Figure: Update Index test results)

Read Write test

v2.1:

| Threads | QPS | 95% latency (ms) |
| :------ | :-------- | :--------------- |
| 150 | 85140.60 | 44.98 |
| 300 | 96773.01 | 82.96 |
| 600 | 105139.81 | 153.02 |
| 900 | 110041.83 | 215.44 |
| 1200 | 113242.70 | 277.21 |
| 1500 | 114542.19 | 337.94 |

v3.0:

| Threads | QPS | 95% latency (ms) |
| :------ | :-------- | :--------------- |
| 150 | 105692.08 | 35.59 |
| 300 | 129769.69 | 58.92 |
| 600 | 141430.86 | 114.72 |
| 900 | 144371.76 | 170.48 |
| 1200 | 143344.37 | 223.34 |
| 1500 | 144567.91 | 277.21 |

(Figure: Read Write test results)
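To summarize the comparison, the relative QPS improvement of v3.0 over v2.1 at 1500 threads can be computed directly from the result tables above. The following is a small helper sketch; the numbers are copied from the tables and are not additional measurements:

```shell
# QPS at 1500 threads, v3.0 vs. v2.1, taken from the result tables above.
awk 'BEGIN {
    printf "Point Select:     +%.1f%%\n", (533944.64 / 336250.38 - 1) * 100;
    printf "Update Non-Index: +%.1f%%\n", (62247.95  / 37408.88  - 1) * 100;
    printf "Update Index:     +%.1f%%\n", (30643.55  / 18012.80  - 1) * 100;
    printf "Read Write:       +%.1f%%\n", (144567.91 / 114542.19 - 1) * 100;
}'
```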
