TiDB Sysbench Performance Test Report - v5.0 vs. v4.0

Test purpose

This test compares the performance of TiDB v5.0 and TiDB v4.0 in the OLTP scenario.

Test environment (AWS EC2)

Hardware configuration

| Service type | EC2 type   | Instance count |
|:-------------|:-----------|:---------------|
| PD           | m5.xlarge  | 3              |
| TiKV         | i3.4xlarge | 3              |
| TiDB         | c5.4xlarge | 3              |
| Sysbench     | c5.9xlarge | 1              |

Software version

| Service type | Software version |
|:-------------|:-----------------|
| PD           | 4.0 and 5.0      |
| TiDB         | 4.0 and 5.0      |
| TiKV         | 4.0 and 5.0      |
| Sysbench     | 1.0.20           |

Parameter configuration

TiDB v4.0 configuration

```yaml
log.level: "error"
performance.max-procs: 20
prepared-plan-cache.enabled: true
tikv-client.max-batch-wait-time: 2000000
```

TiKV v4.0 configuration

```yaml
storage.scheduler-worker-pool-size: 5
raftstore.store-pool-size: 3
raftstore.apply-pool-size: 3
rocksdb.max-background-jobs: 3
raftdb.max-background-jobs: 3
raftdb.allow-concurrent-memtable-write: true
server.grpc-concurrency: 6
readpool.unified.min-thread-count: 5
readpool.unified.max-thread-count: 20
readpool.storage.normal-concurrency: 10
pessimistic-txn.pipelined: true
```

TiDB v5.0 configuration

```yaml
log.level: "error"
performance.max-procs: 20
prepared-plan-cache.enabled: true
tikv-client.max-batch-wait-time: 2000000
```

TiKV v5.0 configuration

```yaml
storage.scheduler-worker-pool-size: 5
raftstore.store-pool-size: 3
raftstore.apply-pool-size: 3
rocksdb.max-background-jobs: 8
raftdb.max-background-jobs: 4
raftdb.allow-concurrent-memtable-write: true
server.grpc-concurrency: 6
readpool.unified.min-thread-count: 5
readpool.unified.max-thread-count: 20
readpool.storage.normal-concurrency: 10
pessimistic-txn.pipelined: true
server.enable-request-batch: false
```

TiDB v4.0 global variable configuration

```sql
set global tidb_hashagg_final_concurrency=1;
set global tidb_hashagg_partial_concurrency=1;
```

TiDB v5.0 global variable configuration

```sql
set global tidb_hashagg_final_concurrency=1;
set global tidb_hashagg_partial_concurrency=1;
set global tidb_enable_async_commit = 1;
set global tidb_enable_1pc = 1;
set global tidb_guarantee_linearizability = 0;
set global tidb_enable_clustered_index = 1;
```
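
Before benchmarking, it can be worth confirming that these globals actually took effect. The following is a minimal sketch, not part of the original procedure, assuming the mysql command-line client and the same NLB endpoint variables used in the sysbench commands below:

```bash
# Minimal sketch (not from the original report): verify the v5.0 global
# variables before benchmarking. Assumes the mysql client is installed and
# $aws_nlb_host / $aws_nlb_port point at the cluster's NLB endpoint.
mysql -h "$aws_nlb_host" -P "$aws_nlb_port" -u root -ppassword -e "
  SHOW GLOBAL VARIABLES WHERE Variable_name IN (
    'tidb_enable_async_commit',
    'tidb_enable_1pc',
    'tidb_guarantee_linearizability',
    'tidb_enable_clustered_index'
  );"
```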

Test plan

  1. Deploy TiDB v5.0 and v4.0 using TiUP.
  2. Use Sysbench to import 16 tables, each with 10 million rows of data.
  3. Execute the `analyze table` statement on each table (see the sketch after this list).
  4. Back up the data so it can be restored before each concurrency test, ensuring every run starts from identical data.
  5. Start the Sysbench client and run the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Apply load to TiDB through AWS NLB, with a 1-minute warmup and a 5-minute measured run per round.
  6. After each round, stop the cluster, overwrite the data with the earlier backup, and restart the cluster.
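
A minimal sketch of steps 3 and 4, assuming the mysql client is available and BR is used for backup and restore; the PD address and backup path here are placeholders, since the report does not specify the backup tooling:

```bash
# Sketch of steps 3-4 (assumptions: mysql client, BR for backup/restore,
# PD reachable at $pd_addr, backups kept under /tmp/backup; none of these
# specifics appear in the original report).

# Step 3: refresh optimizer statistics for each sysbench table.
for i in $(seq 1 16); do
  mysql -h "$aws_nlb_host" -P "$aws_nlb_port" -u root -ppassword \
    -e "ANALYZE TABLE sbtest.sbtest${i};"
done

# Step 4: take a full backup to restore from before each concurrency run.
br backup full --pd "$pd_addr" --storage "local:///tmp/backup"

# Step 6, before each round: restore the backup into the cleaned cluster.
# br restore full --pd "$pd_addr" --storage "local:///tmp/backup"
```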

Prepare test data

Run the following command to prepare the test data:

```bash
sysbench oltp_common \
    --threads=16 \
    --rand-type=uniform \
    --db-driver=mysql \
    --mysql-db=sbtest \
    --mysql-host=$aws_nlb_host \
    --mysql-port=$aws_nlb_port \
    --mysql-user=root \
    --mysql-password=password \
    prepare --tables=16 --table-size=10000000
```
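
The command above, and the test command below, read the load-balancer endpoint from shell variables that the report leaves undefined. The values here are placeholders (4000 is TiDB's default client port):

```bash
# Placeholders (not from the report): point these at your AWS NLB endpoint.
export aws_nlb_host=tidb-nlb.example.internal   # hypothetical hostname
export aws_nlb_port=4000                        # TiDB's default client port
```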

Perform the test

Run the following command to execute the test:

```bash
sysbench $testname \
    --threads=$threads \
    --time=300 \
    --report-interval=1 \
    --rand-type=uniform \
    --db-driver=mysql \
    --mysql-db=sbtest \
    --mysql-host=$aws_nlb_host \
    --mysql-port=$aws_nlb_port \
    run --tables=16 --table-size=10000000
```
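
The report does not include a driver script, so the sketch below shows one way the test plan's outer loop could look, iterating the four workloads over the tested concurrency levels. `restore_cluster` is a hypothetical helper standing in for step 6 of the test plan, and the `oltp_`-prefixed names are sysbench's built-in script names for the workloads listed in step 5:

```bash
#!/usr/bin/env bash
# Sketch of the outer test loop implied by the test plan (not part of the
# original report). restore_cluster is a hypothetical helper that stops the
# cluster, restores the backup, and restarts it (step 6 of the test plan).
run_sysbench() {
  sysbench "$1" \
      --threads="$2" \
      --time="$3" \
      --report-interval=1 \
      --rand-type=uniform \
      --db-driver=mysql \
      --mysql-db=sbtest \
      --mysql-host="$aws_nlb_host" \
      --mysql-port="$aws_nlb_port" \
      run --tables=16 --table-size=10000000
}

for testname in oltp_point_select oltp_read_write oltp_update_index oltp_update_non_index; do
  for threads in 150 300 600 900 1200 1500; do
    restore_cluster                          # reset to the backed-up data
    run_sysbench "$testname" "$threads" 60   # 1-minute warmup, discarded
    run_sysbench "$testname" "$threads" 300  # 5-minute measured run
  done
done
```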

Test results

Point Select performance

| Threads | v4.0 QPS  | v4.0 95% latency (ms) | v5.0 QPS  | v5.0 95% latency (ms) | QPS improvement |
|:--------|:----------|:----------------------|:----------|:----------------------|:----------------|
| 150     | 159451.19 | 1.32                  | 177876.25 | 1.23                  | 11.56%          |
| 300     | 244790.38 | 1.96                  | 252675.03 | 1.82                  | 3.22%           |
| 600     | 322929.05 | 3.75                  | 331956.84 | 3.36                  | 2.80%           |
| 900     | 364840.05 | 5.67                  | 365655.04 | 5.09                  | 0.22%           |
| 1200    | 376529.18 | 7.98                  | 366507.47 | 7.04                  | -2.66%          |
| 1500    | 368390.52 | 10.84                 | 372476.35 | 8.90                  | 1.11%           |

Compared with v4.0, the Point Select performance of v5.0 has improved by 2.7%.
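
The per-concurrency figures in the last column follow directly from the QPS columns; for example, at 600 threads:

$$
\text{QPS improvement} = \frac{\text{QPS}_{v5.0} - \text{QPS}_{v4.0}}{\text{QPS}_{v4.0}} \times 100\% = \frac{331956.84 - 322929.05}{322929.05} \times 100\% \approx 2.80\%
$$

The headline 2.7% is the arithmetic mean of the six per-concurrency improvements; the same convention applies to the summaries of the other three workloads below.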

(Figure: Point Select)

Update Non-index performance

| Threads | v4.0 QPS | v4.0 95% latency (ms) | v5.0 QPS | v5.0 95% latency (ms) | QPS improvement |
|:--------|:---------|:----------------------|:---------|:----------------------|:----------------|
| 150     | 17243.78 | 11.04                 | 30866.23 | 6.91                  | 79.00%          |
| 300     | 25397.06 | 15.83                 | 45915.39 | 9.73                  | 80.79%          |
| 600     | 33388.08 | 25.28                 | 60098.52 | 16.41                 | 80.00%          |
| 900     | 38291.75 | 36.89                 | 70317.41 | 21.89                 | 83.64%          |
| 1200    | 41003.46 | 55.82                 | 76376.22 | 28.67                 | 86.27%          |
| 1500    | 44702.84 | 62.19                 | 80234.58 | 34.95                 | 79.48%          |

Compared with v4.0, the Update Non-index performance of v5.0 has improved by 81%.

(Figure: Update Non-index)

Update Index performance

| Threads | v4.0 QPS | v4.0 95% latency (ms) | v5.0 QPS | v5.0 95% latency (ms) | QPS improvement |
|:--------|:---------|:----------------------|:---------|:----------------------|:----------------|
| 150     | 11736.21 | 17.01                 | 15631.34 | 17.01                 | 33.19%          |
| 300     | 15435.95 | 28.67                 | 19957.06 | 22.69                 | 29.29%          |
| 600     | 18983.21 | 49.21                 | 23218.14 | 41.85                 | 22.31%          |
| 900     | 20855.29 | 74.46                 | 26226.76 | 53.85                 | 25.76%          |
| 1200    | 21887.64 | 102.97                | 28505.41 | 69.29                 | 30.24%          |
| 1500    | 23621.15 | 110.66                | 30341.06 | 82.96                 | 28.45%          |

Compared with v4.0, the Update Index performance of v5.0 has improved by 28%.

(Figure: Update Index)

Read Write performance

| Threads | v4.0 QPS  | v4.0 95% latency (ms) | v5.0 QPS  | v5.0 95% latency (ms) | QPS improvement |
|:--------|:----------|:----------------------|:----------|:----------------------|:----------------|
| 150     | 59979.91  | 61.08                 | 66098.57  | 55.82                 | 10.20%          |
| 300     | 77118.32  | 102.97                | 84639.48  | 90.78                 | 9.75%           |
| 600     | 90619.52  | 183.21                | 101477.46 | 167.44                | 11.98%          |
| 900     | 97085.57  | 267.41                | 109463.46 | 240.02                | 12.75%          |
| 1200    | 106521.61 | 331.91                | 115416.05 | 320.17                | 8.35%           |
| 1500    | 116278.96 | 363.18                | 118807.5  | 411.96                | 2.17%           |

Compared with v4.0, the Read Write performance of v5.0 has improved by 9%.

(Figure: Read Write)
