DM 5.3.0 Benchmark Report

This benchmark report describes the test purpose, environment, scenario, and results for DM 5.3.0.

Test purpose

The purpose of this test is to evaluate the performance of DM full import and incremental replication, and to derive recommended configurations for DM migration tasks from the test results.

Test environment

Machine information

System information:

| Machine IP | Operating system | Kernel version | File system type |
| :--- | :--- | :--- | :--- |
| 172.16.6.1 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| 172.16.6.2 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| 172.16.6.3 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |

Hardware information:

| Type | Specification |
| :--- | :--- |
| CPU | Intel(R) Xeon(R) Silver 4214R @ 2.40GHz, 48 Cores |
| Memory | 192G, 12 * 16GB DIMM DDR4 2133 MHz |
| Disk | Intel SSDPE2KX040T8 4TB |
| Network card | 10 Gigabit Ethernet |

Others:

  • Network rtt between servers: rtt min/avg/max/mdev = 0.045/0.064/0.144/0.024 ms

Cluster topology

| Machine IP | Deployed instance |
| :--- | :--- |
| 172.16.6.1 | PD1, TiDB1, TiKV1, MySQL1, DM-master1 |
| 172.16.6.2 | PD2, TiDB2, TiKV2, DM-worker1 |
| 172.16.6.3 | PD3, TiDB3, TiKV3 |

Version information

  • MySQL version: 5.7.36-log
  • TiDB version: v5.2.1
  • DM version: v5.3.0
  • Sysbench version: 1.1.0

Test scenario

You can use a simple data migration flow, that is, MySQL1 (172.16.6.1) -> DM-worker (172.16.6.2) -> TiDB (load balance) (172.16.6.4), to perform the test. For a detailed description of the test scenario, see performance test.

Full import benchmark case

For detailed full import test method, see Full Import Benchmark Case.

Full import benchmark results

To enable multi-threaded concurrent data export via Dumpling, you can configure the threads parameter in the mydumpers configuration item, which speeds up data export.
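
For reference, the threads parameter lives under the mydumpers item of the DM task configuration file. The following is a minimal sketch; the key names follow the standard DM task configuration, and the configuration name "global" is an example:

```yaml
mydumpers:
  global:          # configuration name referenced from mysql-instances
    threads: 32    # number of concurrent dump threads (value used in this test)
```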

| Item | Data size (GB) | Threads | Rows | Statement-size | Time (s) | Dump speed (MB/s) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| dump data | 38.1 | 32 | 320000 | 1000000 | 45 | 846 |

| Item | Data size (GB) | Pool size | Statement per TXN | Max latency of TXN execution (s) | Time (s) | Import speed (MB/s) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| load data | 38.1 | 32 | 4878 | 76 | 2740 | 13.9 |

Benchmark results with different pool sizes in load unit

In this test, the full amount of data imported using sysbench is 3.78 GB. The detailed test data is as follows:

| load unit pool size | Max latency of TXN execution (s) | Import time (s) | Import speed (MB/s) | TiDB 99 duration (s) |
| :--- | :--- | :--- | :--- | :--- |
| 2 | 0.71 | 397 | 9.5 | 0.61 |
| 4 | 1.21 | 363 | 10.4 | 1.03 |
| 8 | 3.30 | 279 | 13.5 | 2.11 |
| 16 | 5.56 | 200 | 18.9 | 3.04 |
| 32 | 6.92 | 218 | 17.3 | 6.56 |
| 64 | 8.59 | 231 | 16.3 | 8.62 |

Benchmark results with different row count per statement

In this test, the full amount of imported data is 3.78 GB and the pool-size of the load unit is set to 32. The statement count is controlled by the statement-size, rows, or extra-args parameters in the mydumpers configuration item.

| Row count per statement | mydumpers extra-args | Max latency of TXN execution (s) | Import time (s) | Import speed (MB/s) | TiDB 99 duration (s) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 7506 | -s 1500000 -r 320000 | 8.34 | 229 | 16.5 | 10.64 |
| 5006 | -s 1000000 -r 320000 | 6.12 | 218 | 17.3 | 7.23 |
| 2506 | -s 500000 -r 320000 | 4.27 | 232 | 16.2 | 3.24 |
| 1256 | -s 250000 -r 320000 | 2.25 | 235 | 16.0 | 1.92 |
| 629 | -s 125000 -r 320000 | 1.03 | 246 | 15.3 | 0.91 |
| 315 | -s 62500 -r 320000 | 0.63 | 249 | 15.1 | 0.44 |
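
The -s and -r flags above are passed to Dumpling through the extra-args parameter of the mydumpers configuration item. A sketch of the corresponding fragment, using one value pair from this test (the configuration name "global" is an example):

```yaml
mydumpers:
  global:
    # -s: statement-size, the approximate size in bytes of each INSERT statement
    # -r: split tables into chunks of this many rows for concurrent dump
    extra-args: "-s 1000000 -r 320000"
```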

Incremental replication benchmark case

For detailed incremental replication test method, see Incremental Replication Benchmark Case.

Incremental replication benchmark result

In this test, the worker-count of the sync unit is set to 32 and batch is set to 100.

| Items | QPS | TPS | 95% latency |
| :--- | :--- | :--- | :--- |
| MySQL | 40.65k | 40.65k | 1.10ms |
| DM binlog replication unit | 29.1k (the number of binlog events received per unit of time, not including skipped events) | - | 92ms (txn execution time) |
| TiDB | 32.0k (Begin/Commit 1.5, Insert 29.72k) | 3.52k | 95%: 6.2ms, 99%: 8.3ms |
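
The sync unit settings used in this test would be expressed in the syncers item of the DM task configuration; a sketch (the configuration name "global" is an example):

```yaml
syncers:
  global:
    worker-count: 32   # number of concurrent workers applying binlog events
    batch: 100         # number of DML statements batched into one transaction
```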

Benchmark results with different sync unit concurrency

| sync unit worker-count | DM QPS | Max DM execution latency (ms) | TiDB QPS | TiDB 99 duration (ms) |
| :--- | :--- | :--- | :--- | :--- |
| 4 | 10.2k | 40 | 10.5k | 4 |
| 8 | 17.6k | 64 | 18.9k | 5 |
| 16 | 29.5k | 80 | 30.5k | 7 |
| 32 | 29.1k | 92 | 32.0k | 9 |
| 64 | 27.4k | 88 | 37.7k | 14 |
| 1024 | 22.9k | 85 | 57.5k | 25 |

Benchmark results with different SQL distribution

| Sysbench type | DM QPS | Max DM execution latency (ms) | TiDB QPS | TiDB 99 duration (ms) |
| :--- | :--- | :--- | :--- | :--- |
| insert_only | 29.1k | 64 | 32.0k | 8 |
| write_only | 23.5k | 296 | 24.2k | 18 |

Recommended parameters

dump unit

We recommend setting the statement size to 200 KB~1 MB, and the row count in each statement to approximately 1000~5000, depending on the actual row size in your scenario.
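
As a rough illustration of how statement size relates to row count, the helper below (hypothetical, not part of DM or Dumpling) estimates a statement-size value from an average row size and clamps it to the recommended window:

```python
def suggest_statement_size(avg_row_bytes, target_rows_per_stmt=2000):
    """Estimate a Dumpling statement-size (in bytes) so that each INSERT
    statement holds roughly target_rows_per_stmt rows, clamped to the
    recommended 200 KB ~ 1 MB window."""
    size = avg_row_bytes * target_rows_per_stmt
    return max(200 * 1024, min(size, 1024 * 1024))

# A 200-byte average row at 2000 rows per statement gives 400000 bytes,
# which already falls inside the recommended window.
print(suggest_statement_size(200))  # prints 400000
```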

load unit

We recommend that you set pool-size to 16~32.

sync unit

We recommend that you set batch to 100 and worker-count to 16~32.
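
Putting the three recommendations together, a DM task configuration fragment might look like the following sketch (the "global" configuration names are examples, and the -s and -r values should be tuned to your row sizes and table sizes):

```yaml
mydumpers:
  global:
    extra-args: "-s 500000 -r 320000"   # ~500 KB per statement; -r enables concurrent dump
loaders:
  global:
    pool-size: 16        # recommended range: 16~32
syncers:
  global:
    worker-count: 16     # recommended range: 16~32
    batch: 100
```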
