DM 1.0-GA Benchmark Report

This benchmark report describes the test purpose, environment, scenario, and results for DM 1.0-GA.

Test purpose

This test evaluates the performance of DM's full import and incremental replication features.

Test environment

Machine information

System information:

| Machine IP | Operating system | Kernel version | File system type |
| :--- | :--- | :--- | :--- |
| 172.16.4.39 | CentOS Linux release 7.6.1810 | 3.10.0-957.1.3.el7.x86_64 | ext4 |
| 172.16.4.40 | CentOS Linux release 7.6.1810 | 3.10.0-957.1.3.el7.x86_64 | ext4 |
| 172.16.4.41 | CentOS Linux release 7.6.1810 | 3.10.0-957.1.3.el7.x86_64 | ext4 |
| 172.16.4.42 | CentOS Linux release 7.6.1810 | 3.10.0-957.1.3.el7.x86_64 | ext4 |
| 172.16.4.43 | CentOS Linux release 7.6.1810 | 3.10.0-957.1.3.el7.x86_64 | ext4 |
| 172.16.4.44 | CentOS Linux release 7.6.1810 | 3.10.0-957.1.3.el7.x86_64 | ext4 |

Hardware information:

| Type | Specification |
| :--- | :--- |
| CPU | 40 CPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz |
| Memory | 192GB, 12 * 16GB DIMM DDR4 2133 MHz |
| Disk | Intel DC P4510 4TB NVMe PCIe 3.0 |
| Network card | 10 Gigabit Ethernet |

Others:

  • Network RTT between servers: rtt min/avg/max/mdev = 0.074/0.088/0.121/0.019 ms

Cluster topology

| Machine IP | Deployment instance |
| :--- | :--- |
| 172.16.4.39 | PD1, DM-worker1, DM-master |
| 172.16.4.40 | PD2, MySQL1 |
| 172.16.4.41 | PD3, TiDB |
| 172.16.4.42 | TiKV1 |
| 172.16.4.43 | TiKV2 |
| 172.16.4.44 | TiKV3 |

Version information

  • MySQL version: 5.7.27-log
  • TiDB version: v4.0.0-alpha-198-gbde7f440e
  • DM version: v1.0.1
  • Sysbench version: 1.0.17

Test scenario

This report uses the test scenario described in the performance test: MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41).
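For reference, this scenario maps onto a DM task configuration roughly as sketched below. This is a minimal sketch rather than the exact file used in the test: the task name, source-id, user, and password are placeholders, while the threads, pool-size, worker-count, and batch values are the ones exercised in the benchmark cases that follow.

```yaml
# Minimal DM task configuration sketch for the scenario above
# (placeholder task name, source-id, and credentials).
name: "benchmark-task"
task-mode: "all"                    # full dump + load, then incremental replication

target-database:
  host: "172.16.4.41"               # TiDB instance from the cluster topology
  port: 4000
  user: "root"
  password: ""

mysql-instances:
  - source-id: "mysql-replica-01"   # MySQL1 (172.16.4.40)
    mydumper-config-name: "global"
    loader-config-name: "global"
    syncer-config-name: "global"

mydumpers:
  global:
    threads: 32                     # dump thread count used in the full import case
    extra-args: "-r 320000 --regex '^sbtest.*'"

loaders:
  global:
    pool-size: 32                   # load unit concurrency used in the full import case

syncers:
  global:
    worker-count: 32                # sync unit concurrency used in the incremental case
    batch: 100                      # batch size used in the incremental case
```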

Full import benchmark case

For details, see Full Import Benchmark Case.

Full import benchmark result

| item | dump threads | mydumper extra-args | dump speed (MB/s) |
| :--- | :--- | :--- | :--- |
| enable single table concurrent | 32 | "-r 320000 --regex '^sbtest.*'" | 191.03 |
| disable single table concurrent | 32 | "--regex '^sbtest.*'" | 72.22 |

| item | transaction execution latency (s) | statements per transaction | data size (GB) | time (s) | import speed (MB/s) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| load data | 1.737 | 4878 | 38.14 | 2346.9 | 16.64 |

Benchmark results with different pool sizes in the load unit

In this test, the size of the data imported using sysbench is 3.78 GB. The following are the detailed test results:

| load pool size | txn execution latency (s) | import time (s) | import speed (MB/s) | TiDB 99 duration (s) |
| :--- | :--- | :--- | :--- | :--- |
| 2 | 0.250 | 425.9 | 9.1 | 0.23 |
| 4 | 0.523 | 360.1 | 10.7 | 0.41 |
| 8 | 0.986 | 267.0 | 14.5 | 0.93 |
| 16 | 2.022 | 265.9 | 14.5 | 2.68 |
| 32 | 3.778 | 262.3 | 14.7 | 6.39 |
| 64 | 7.452 | 281.9 | 13.7 | 8.00 |
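Only the load unit's pool-size was varied across these runs. In the task configuration this corresponds to the `loaders` section, as in this sketch (all other settings unchanged):

```yaml
loaders:
  global:
    pool-size: 16   # load unit concurrency; values 2 through 64 are tested above
```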

Benchmark result with different row counts per statement

In this benchmark case, the full import data size is 3.78 GB and the load unit pool size is 32. The row count per statement is controlled by the mydumper parameters shown in the table below.

| row count per statement | mydumper extra-args | txn execution latency (s) | import time (s) | import speed (MB/s) | TiDB 99 duration (s) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 7426 | -s 1500000 -r 320000 | 6.982 | 258.3 | 15.0 | 10.34 |
| 4903 | -s 1000000 -r 320000 | 3.778 | 262.3 | 14.7 | 6.39 |
| 2470 | -s 500000 -r 320000 | 1.962 | 271.36 | 14.3 | 2.00 |
| 1236 | -s 250000 -r 320000 | 1.911 | 283.3 | 13.7 | 1.50 |
| 618 | -s 125000 -r 320000 | 0.683 | 299.9 | 12.9 | 0.73 |
| 310 | -s 62500 -r 320000 | 0.413 | 322.6 | 12.0 | 0.49 |
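In mydumper, `-r` (`--rows`) splits each table into chunks of the given row count, and `-s` (`--statement-size`) caps the byte size of each generated INSERT statement, so lowering `-s` reduces the row count per statement. As a sketch, the first row of the table corresponds to a `mydumpers` section like the following (all other settings unchanged):

```yaml
mydumpers:
  global:
    threads: 32
    # -s 1500000 caps each INSERT statement at ~1.5 MB, which works out to
    # ~7426 rows per statement for this sysbench table schema
    extra-args: "-s 1500000 -r 320000"
```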

Incremental replication benchmark case

For details about the test method, see Incremental Replication Benchmark Case.

Benchmark result for incremental replication

In this benchmark case, the DM sync unit worker-count is 32 and the batch size is 100.

| item | qps | tps | 95% latency |
| :--- | :--- | :--- | :--- |
| MySQL | 42.79k | 42.79k | 1.18ms |
| DM relay log unit | - | 11.3MB/s | 45us (read duration) |
| DM binlog replication unit | 22.97k (binlog event received qps, not including skipped events) | - | 20ms (txn execution latency) |
| TiDB | 31.30k (Begin/Commit 3.93k, Insert 22.76k) | 4.16k | 95%: 6.4ms, 99%: 9ms |

Benchmark result with different sync unit concurrency

| sync unit worker-count | DM tps | DM execution latency (ms) | TiDB qps | TiDB 99 duration (ms) |
| :--- | :--- | :--- | :--- | :--- |
| 4 | 7074 | 63 | 7.1k | 3 |
| 8 | 14684 | 64 | 14.9k | 4 |
| 16 | 23486 | 56 | 24.9k | 6 |
| 32 | 23345 | 28 | 29.2k | 10 |
| 64 | 23302 | 30 | 31.2k | 16 |
| 1024 | 22225 | 70 | 56.9k | 70 |

Benchmark result with different SQL distribution

| sysbench type | relay log flush speed (MB/s) | DM tps | DM execution latency (ms) | TiDB qps | TiDB 99 duration (ms) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| insert_only | 11.3 | 23345 | 28 | 29.2k | 10 |
| write_only | 18.7 | 33470 | 129 | 34.6k | 11 |

Recommended parameters

dump unit

We recommend that the statement size be 200 KB~1 MB and that the row count per statement be approximately 1000~5000, adjusted according to the actual row size in your scenario.

load unit

We recommend that you set pool-size to 16.

sync unit

We recommend that you set batch size to 100 and worker-count to 16~32.
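
Put together, the three recommendations correspond to a task configuration sketch like the one below. The ~200-byte row size in the comment is an assumption for illustration (a sysbench sbtest row is roughly that size); recompute the statement size from the actual row size in your scenario.

```yaml
mydumpers:
  global:
    # Target ~1 MB INSERT statements; assuming ~200-byte rows, this yields
    # roughly 5000 rows per statement, at the top of the 1000~5000 range.
    extra-args: "-s 1000000"

loaders:
  global:
    pool-size: 16       # recommended load unit concurrency

syncers:
  global:
    worker-count: 16    # recommended range: 16~32
    batch: 100          # recommended batch size
```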