DM 2.0-GA Benchmark Report

This benchmark report describes the test purpose, environment, scenario, and results for DM 2.0-GA.

Test purpose

The purpose of this test is to evaluate the performance of DM full import and incremental replication and to conclude recommended configurations for DM migration tasks based on the test results.

Test environment

Machine information

System information:

| Machine IP | Operating system | Kernel version | File system type |
| :--- | :--- | :--- | :--- |
| 172.16.5.32 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| 172.16.5.33 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| 172.16.5.34 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| 172.16.5.35 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| 172.16.5.36 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |
| 172.16.5.37 | CentOS Linux release 7.8.2003 | 3.10.0-957.el7.x86_64 | ext4 |

Hardware information:

| Type | Specification |
| :--- | :--- |
| CPU | Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 40 cores |
| Memory | 128 GB, 8 * 16 GB DIMM DDR4 2133 MHz |
| Disk | Intel SSD DC P4800X 375 GB NVMe * 2 |
| Network card | 10 Gigabit Ethernet |

Others:

  • Network round-trip time (RTT) between servers: rtt min/avg/max/mdev = 0.074/0.116/0.158/0.042 ms

Cluster topology

| Machine IP | Deployed instance |
| :--- | :--- |
| 172.16.5.32 | PD1, DM-worker1, DM-master |
| 172.16.5.33 | PD2, MySQL1 |
| 172.16.5.34 | PD3, TiDB |
| 172.16.5.35 | TiKV1 (nvme0n1), TiKV2 (nvme1n1) |
| 172.16.5.36 | TiKV3 (nvme0n1), TiKV4 (nvme1n1) |
| 172.16.5.37 | TiKV5 (nvme0n1), TiKV6 (nvme1n1) |

Version information

  • MySQL version: 5.7.31-log
  • TiDB version: v4.0.7
  • DM version: v2.0.0
  • Sysbench version: 1.0.17

Test scenario

This test uses a simple data migration flow: MySQL1 (172.16.5.33) -> DM-worker (172.16.5.32) -> TiDB (172.16.5.34). For a detailed description of the test scenario, see performance test.

Full import benchmark case

For detailed full import test method, see Full Import Benchmark Case.

Full import benchmark results

To speed up data export, you can configure the threads parameter in the mydumpers configuration item, which enables multi-threaded concurrent export via Dumpling.
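For reference, the dump concurrency is set in the mydumpers configuration item of the DM task file. The following is a minimal sketch with the thread count used in this test; the config alias global and the source name mysql-replica-01 are illustrative placeholders rather than values taken from the test.

```yaml
# Minimal DM task file fragment (illustrative alias and source name)
mydumpers:
  global:               # config alias referenced by a mysql-instance
    threads: 32         # concurrent export threads used in this test

mysql-instances:
  - source-id: "mysql-replica-01"    # placeholder upstream source name
    mydumper-config-name: "global"
```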

| Item | Data size (GB) | Threads | Rows | Statement-size | Time (s) | Dump speed (MB/s) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| dump data | 38.1 | 32 | 320000 | 1000000 | 106.73 | 359.43 |

| Item | Data size (GB) | Pool size | Statement per TXN | Max latency of TXN execution (s) | Time (s) | Import speed (MB/s) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| load data | 38.1 | 32 | 4878 | 20.95 | 1580.54 | 24.11 |

Benchmark results with different pool sizes in load unit

In this test, the full amount of data imported using sysbench is 3.78 GB. The detailed test results are as follows:

| load unit pool size | Max latency of TXN execution (s) | Import time (s) | Import speed (MB/s) | TiDB 99 duration (s) |
| :--- | :--- | :--- | :--- | :--- |
| 2 | 0.35 | 438 | 8.63 | 0.32 |
| 4 | 0.65 | 305 | 12.30 | 0.55 |
| 8 | 1.82 | 231 | 16.36 | 2.26 |
| 16 | 3.46 | 228 | 16.57 | 3.04 |
| 32 | 5.92 | 208 | 18.17 | 6.56 |
| 64 | 8.59 | 221 | 17.10 | 9.62 |

Benchmark results with different row count per statement

In this test, the full amount of imported data is 3.78 GB and pool-size of the load unit is set to 32. The row count per statement is controlled by the statement-size, rows, or extra-args parameters in the mydumpers configuration item.

| Row count per statement | mydumpers extra-args | Max latency of TXN execution (s) | Import time (s) | Import speed (MB/s) | TiDB 99 duration (s) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 7506 | -s 1500000 -r 320000 | 8.74 | 218 | 17.3 | 10.49 |
| 5006 | -s 1000000 -r 320000 | 5.92 | 208 | 18.1 | 6.56 |
| 2506 | -s 500000 -r 320000 | 3.07 | 222 | 17.0 | 2.32 |
| 1256 | -s 250000 -r 320000 | 2.01 | 230 | 16.4 | 1.87 |
| 629 | -s 125000 -r 320000 | 0.98 | 241 | 15.6 | 0.94 |
| 315 | -s 62500 -r 320000 | 0.51 | 245 | 15.4 | 0.45 |
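As a concrete example, the 2506-rows-per-statement configuration in the table above corresponds to passing -s 500000 -r 320000 to the dump unit through extra-args. The following sketch shows how that might be expressed in the mydumpers configuration item, reusing the illustrative alias from the earlier fragment:

```yaml
mydumpers:
  global:
    threads: 32
    # -s caps the bytes per generated INSERT statement and -r sets the row
    # count per exported chunk; this combination yields roughly 2506 rows
    # per statement in this test (see the table above).
    extra-args: "-s 500000 -r 320000"
```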

Incremental replication benchmark case

For detailed incremental replication test method, see Incremental Replication Benchmark Case.

Incremental replication benchmark result

In this test, the worker-count of sync unit is set to 32 and batch is set to 100.
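For reference, worker-count and batch are set in the syncers configuration item of the DM task file. The following is a minimal sketch with the values used in this test; the alias global and the source name are illustrative placeholders.

```yaml
syncers:
  global:                # config alias referenced by a mysql-instance
    worker-count: 32     # concurrent workers applying binlog events downstream
    batch: 100           # DML statements batched into one downstream transaction

mysql-instances:
  - source-id: "mysql-replica-01"   # placeholder upstream source name
    syncer-config-name: "global"
```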

| Items | QPS | TPS | 95% latency |
| :--- | :--- | :--- | :--- |
| MySQL | 38.65k | 38.65k | 1.10ms |
| DM binlog replication unit | 21.33k (the number of binlog events received per unit of time, not including skipped events) | - | 66.75ms (txn execution time) |
| TiDB | 21.90k (Begin/Commit 2.32k, Insert 21.35k) | 3.52k | 95%: 5.2ms, 99%: 8.3ms |

Benchmark results with different sync unit concurrency

| sync unit worker-count | DM QPS | Max DM execution latency (ms) | TiDB QPS | TiDB 99 duration (ms) |
| :--- | :--- | :--- | :--- | :--- |
| 4 | 11.83k | 56 | 12.1k | 4 |
| 8 | 18.34k | 58 | 18.9k | 5 |
| 16 | 20.85k | 60 | 21.6k | 6 |
| 32 | 21.33k | 66 | 21.9k | 8 |
| 64 | 21.52k | 68 | 22.1k | 10 |
| 1024 | 20.45k | 85 | 50.5k | 52 |

Benchmark results with different SQL distribution

| Sysbench type | DM QPS | Max DM execution latency (ms) | TiDB QPS | TiDB 99 duration (ms) |
| :--- | :--- | :--- | :--- | :--- |
| insert_only | 21.33k | 66 | 21.9k | 8 |
| write_only | 10.2k | 87 | 11.2k | 8 |

Recommended parameters

dump unit

We recommend that you keep the statement size within 200 KB~1 MB and the row count in each statement at approximately 1000~5000, depending on the actual row size in your scenario.

load unit

We recommend that you set pool-size to 16.

sync unit

We recommend that you set batch to 100 and worker-count to 16~32.
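Putting the dump, load, and sync unit recommendations together, a DM task file might carry a fragment like the one below. This is a minimal sketch rather than a drop-in configuration: the alias global and the source name mysql-replica-01 are placeholders, and the extra-args values are one possible way (taken from the row-count table above) to land in the recommended 200 KB~1 MB and 1000~5000-row range.

```yaml
mydumpers:
  global:
    threads: 32                        # dump concurrency used in this test
    extra-args: "-s 500000 -r 320000"  # ~500 KB statements, ~2506 rows each (see table above)
loaders:
  global:
    pool-size: 16                      # recommended load unit concurrency
syncers:
  global:
    worker-count: 32                   # recommended range is 16~32
    batch: 100                         # recommended batch size

mysql-instances:
  - source-id: "mysql-replica-01"      # placeholder upstream source name
    mydumper-config-name: "global"
    loader-config-name: "global"
    syncer-config-name: "global"
```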
