DM 5.3.0 Benchmark Report

This benchmark report describes the test purpose, environment, scenario, and results for DM 5.3.0.

Test purpose

The purpose of this test is to evaluate the performance of DM full import and incremental replication, and to derive recommended configurations for DM migration tasks based on the test results.

Test environment

Machine information

System information:

| Machine IP | Operating system | Kernel version | File system type |
|---|---|---|---|
| | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |

Hardware information:

| Component | Specification |
|---|---|
| CPU | Intel(R) Xeon(R) Silver 4214R @ 2.40GHz, 48 cores |
| Memory | 192 GB, 12 * 16 GB DIMM DDR4 2133 MHz |
| Disk | Intel SSDPE2KX040T8, 4 TB |
| Network card | 10 Gigabit Ethernet |


  • Network rtt between servers: rtt min/avg/max/mdev = 0.045/0.064/0.144/0.024 ms

Cluster topology

| Machine IP | Deployed instances |
|---|---|
| | TiDB1, TiKV1, MySQL1, DM-master1 |
| | TiDB2, TiKV2, DM-worker1 |
| | TiDB3, TiKV3 |

Version information

  • MySQL version: 5.7.36-log
  • TiDB version: v5.2.1
  • DM version: v5.3.0
  • Sysbench version: 1.1.0

Test scenario

You can use a simple data migration flow, that is, MySQL1 -> DM-worker -> TiDB (load balance), to do the test. For a detailed description of the test scenario, see performance test.

Full import benchmark case

For detailed full import test method, see Full Import Benchmark Case.

Full import benchmark results

To enable multi-threaded concurrent data export via Dumpling, configure the `threads` parameter in the `mydumpers` configuration item. This speeds up data export.
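For example, the export concurrency used in this test can be expressed in the DM task configuration file as follows (a partial, illustrative fragment; the task name and other required fields are omitted):

```yaml
# Partial DM task configuration (sketch): export concurrency for the dump unit.
mydumpers:
  global:
    threads: 32   # number of concurrent export threads used in this test
```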

| Item | Data size (GB) | Threads | Rows | Statement-size | Time (s) | Dump speed (MB/s) |
|---|---|---|---|---|---|---|
| dump data | 38.1 | 32 | 320000 | 1000000 | 458 | 46 |

| Item | Data size (GB) | Pool size | Statement per TXN | Max latency of TXN execution (s) | Time (s) | Import speed (MB/s) |
|---|---|---|---|---|---|---|
| load data | 38.1 | 32 | 4878 | 76 | 2740 | 13.9 |

Benchmark results with different pool sizes in load unit

In this test, the full amount of data imported, generated using sysbench, is 3.78 GB. The detailed test results are as follows:

| load unit pool-size | Max latency of TXN execution (s) | Import time (s) | Import speed (MB/s) | TiDB 99 duration (s) |
|---|---|---|---|---|
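The load unit concurrency varied in this case is set in the `loaders` section of the task configuration, for example (a partial, illustrative fragment):

```yaml
# Partial DM task configuration (sketch): import concurrency for the load unit.
loaders:
  global:
    pool-size: 32   # number of concurrent workers applying the dumped data
```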

Benchmark results with different row count per statement

In this test, the full amount of imported data is 3.78 GB and the `pool-size` of the load unit is set to 32. The row count per statement is controlled by the `statement-size`, `rows`, or `extra-args` parameters in the `mydumpers` configuration item.

| Row count per statement | mydumpers extra-args | Max latency of TXN execution (s) | Import time (s) | Import speed (MB/s) | TiDB 99 duration (s) |
|---|---|---|---|---|---|
| 7506 | -s 1500000 -r 320000 | 8.34 | 229 | 16.5 | 10.64 |
| 5006 | -s 1000000 -r 320000 | 6.12 | 218 | 17.3 | 7.23 |
| 2506 | -s 500000 -r 320000 | 4.27 | 232 | 16.2 | 3.24 |
| 1256 | -s 250000 -r 320000 | 2.25 | 235 | 16.0 | 1.92 |
| 629 | -s 125000 -r 320000 | 1.03 | 246 | 15.3 | 0.91 |
| 315 | -s 62500 -r 320000 | 0.63 | 249 | 15.1 | 0.44 |
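The extra-args values used in this case pass `-s` (statement-size, in bytes) and `-r` (rows) through to the export tool. In the task configuration file this can be written as follows (a partial, illustrative fragment using one of the tested settings):

```yaml
# Partial DM task configuration (sketch): controlling rows per statement in the dump unit.
mydumpers:
  global:
    threads: 32
    extra-args: "-s 1500000 -r 320000"   # -s: statement-size in bytes; -r: row count
```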

Incremental replication benchmark case

For detailed incremental replication test method, see Incremental Replication Benchmark Case.

Incremental replication benchmark result

In this test, the `worker-count` of the sync unit is set to 32 and `batch` is set to 100.
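These settings correspond to the `syncers` section of the task configuration, for example (a partial, illustrative fragment):

```yaml
# Partial DM task configuration (sketch): concurrency settings for the sync unit.
syncers:
  global:
    worker-count: 32   # number of concurrent threads applying binlog events
    batch: 100         # number of DML statements batched into one transaction
```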

| Items | QPS | TPS | 95% latency |
|---|---|---|---|
| DM binlog replication unit | 29.1k (the number of binlog events received per unit of time, not including skipped events) | - | 92ms (txn execution time) |
| TiDB | 32.0k (Begin/Commit 1.5 Insert 29.72k) | 3.52k | 95%: 6.2ms, 99%: 8.3ms |

Benchmark results with different sync unit concurrency

| sync unit worker-count | DM QPS | Max DM execution latency (ms) | TiDB QPS | TiDB 99 duration (ms) |
|---|---|---|---|---|

Benchmark results with different SQL distribution

| Sysbench type | DM QPS | Max DM execution latency (ms) | TiDB QPS | TiDB 99 duration (ms) |
|---|---|---|---|---|

Recommended parameters

dump unit

We recommend a statement size of 200 KB~1 MB and approximately 1000~5000 rows per statement, adjusted according to the actual row size in your scenario.

load unit

We recommend that you set pool-size to 16~32.

sync unit

We recommend that you set batch to 100 and worker-count to 16~32.
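Putting the three recommendations together, a task configuration fragment with values inside the recommended ranges might look like this (an illustrative sketch, not the only valid settings; tune to your workload):

```yaml
# Partial DM task configuration (sketch): recommended starting points.
mydumpers:
  global:
    extra-args: "-s 500000 -r 320000"  # aim for ~200 KB-1 MB statements, ~1000-5000 rows each
loaders:
  global:
    pool-size: 32                      # recommended range: 16-32
syncers:
  global:
    worker-count: 32                   # recommended range: 16-32
    batch: 100
```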
