# Handle Errors
This document introduces the error system and how to handle common errors when you use DM.
## Error system
In the error system, the information of a specific error usually includes the following fields:

- `code`: error code. DM uses the same error code for the same error type. An error code does not change as the DM version changes.

    Some errors might be removed during the DM iteration, while the error codes are not. DM uses a new error code instead of an existing one for a new error.

- `class`: error type. It is used to mark the component where an error occurs (error source). For all error types, error sources, and error samples, refer to the error code list.
- `scope`: error scope. It is used to mark the scope and source of DM objects when an error occurs. `scope` includes four types: `not-set`, `upstream`, `downstream`, and `internal`. If the logic of the error directly involves requests between the upstream and downstream databases, the scope is set to `upstream` or `downstream`; otherwise, it is currently set to `internal`.

- `level`: error level. The severity level of the error, including `low`, `medium`, and `high`.

    - The `low` level error usually relates to user operations and incorrect inputs. It does not affect migration tasks.
    - The `medium` level error usually relates to user configurations. It affects some newly started services; however, it does not affect the existing DM migration status.
    - The `high` level error usually needs your attention, since you need to resolve it to avoid the possible interruption of a migration task.

- `message`: error description. The detailed description of the error. To wrap and store every additional layer of error message on the error call chain, the `errors.Wrap` mode is adopted. The message description wrapped at the outermost layer indicates the error in DM, and the message description wrapped at the innermost layer indicates the error source.
- `workaround`: error handling methods (optional). The handling methods for this error. For some confirmed errors (such as configuration errors), DM gives the corresponding manual handling methods in `workaround`.

- Error stack information (optional). Whether DM outputs the error stack information depends on the error severity and the necessity. The error stack records the complete stack call information when the error occurs. If you cannot figure out the error cause based on the basic information and the error message, you can trace the execution path of the code when the error occurs using the error stack.
For the complete list of error codes, refer to the error code lists.
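To illustrate how these fields appear together, the following is a hypothetical error message in the `[code=...:class=...:scope=...:level=...]` shape described above. The specific code, class, and message are made up for illustration and do not correspond to a real DM error:

```
[code=10001:class=database:scope=downstream:level=high] database driver error: invalid connection
```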
## Troubleshooting
If you encounter an error while running DM, take the following steps to troubleshoot this error:
1. Execute the `query-status` command to check the task running status and the error output (see the sketch after these steps).

2. Check the log files related to the error. The log files are on the DM-master and DM-worker nodes. To get key information about the error, refer to the error system. Then check the Handle common errors section to find the solution.
3. If the error is not covered in this document, and you cannot solve the problem by checking the logs or monitoring metrics, contact the R&D team for support.
4. After the error is resolved, restart the task using dmctl:

    ```bash
    resume-task ${task name}
    ```
However, you need to reset the data migration task in some cases. For details, refer to Reset the Data Migration Task.
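A minimal sketch of step 1 above, where `${task-name}` is a placeholder for the name of your migration task:

```bash
# Check the running status of the task and any error output.
query-status ${task-name}
```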
## Handle common errors
### What can I do when a migration task is interrupted with the `invalid connection` error returned?
#### Reason
The `invalid connection` error indicates that anomalies have occurred in the connection between DM and the downstream TiDB database (such as network failure, TiDB restart, TiKV busy, and so on), and that a part of the data for the current request has been sent to TiDB.
#### Solutions
Because DM has the feature of concurrently migrating data to the downstream in migration tasks, several errors might occur when a task is interrupted. You can check these errors by using `query-status`.
- If only the `invalid connection` error occurs during the incremental replication process, DM retries the task automatically.
- If DM does not retry or fails to retry automatically because of version problems, use `stop-task` to stop the task and then use `start-task` to restart the task (see the sketch after this list).
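A minimal sketch of the manual restart, assuming `${task-name}` is your task name and `task.yaml` is the configuration file that the task was originally started with:

```bash
# Stop the interrupted task, then start it again; it resumes from its checkpoint.
stop-task ${task-name}
start-task task.yaml
```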
### A migration task is interrupted with the `driver: bad connection` error returned
#### Reason
The `driver: bad connection` error indicates that anomalies have occurred in the connection between DM and the downstream TiDB database (such as network failure, TiDB restart, and so on), and that the data of the current request has not yet been sent to TiDB at that moment.
#### Solution
The current version of DM retries automatically on this error. If you use an earlier version that does not support automatic retry, you can execute the `stop-task` command to stop the task, and then execute `start-task` to restart the task.
### The relay unit throws error `event from * in * diff from passed-in event *` or a migration task is interrupted with failing to get or parse binlog errors like `get binlog error ERROR 1236 (HY000)` and `binlog checksum mismatch, data may be corrupted` returned
#### Reason
During relay log pulling or incremental replication in DM, these two errors might occur if the size of the upstream binlog file exceeds 4 GB.

Cause: when writing relay logs, DM needs to perform event verification based on binlog positions and the size of the binlog file, and it stores the replicated binlog positions as checkpoints. However, the official MySQL uses `uint32` to store binlog positions. This means the binlog position for a binlog file over 4 GB overflows, and then the errors above occur.
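A minimal sketch of the overflow arithmetic, assuming nothing beyond the `uint32` storage described above; the offset used is illustrative:

```bash
# A uint32 position is effectively stored modulo 2^32 (4294967296).
# An offset just past the 4 GiB boundary wraps around to a small value:
echo $(( 4295000000 % 4294967296 ))   # prints 32704, not 4295000000
```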
#### Solutions
For relay processing units, manually recover the migration using the following steps (a sketch of steps 3 and 4 follows this list):

1. In the upstream, identify that the size of the corresponding binlog file exceeded 4 GB when the error occurred.

2. Stop the DM-worker.

3. Copy the corresponding binlog file in the upstream to the relay log directory as the relay log file.

4. In the relay log directory, update the corresponding `relay.meta` file to pull from the next binlog file. If you have specified `enable_gtid` as `true` for the DM-worker, you also need to modify the GTID corresponding to the next binlog file when updating the `relay.meta` file; otherwise, you do not need to modify the GTID.

    For example, if `binlog-name = "mysql-bin.004451"` and `binlog-pos = 2453` when the error occurs, update them to `binlog-name = "mysql-bin.004452"` and `binlog-pos = 4` respectively, and update `binlog-gtid` to `f0e914ef-54cf-11e7-813d-6c92bf2fa791:1-138218058`.

5. Restart the DM-worker.
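A minimal sketch of steps 3 and 4 above, assuming illustrative paths and using `sed` to edit only the named fields of `relay.meta` in place; the binlog file names and GTID set come from the example above. Adjust all paths to your deployment:

```bash
# Step 3: copy the over-4 GB upstream binlog file into the relay log
# directory so it serves as the relay log file (paths are illustrative).
cp /var/lib/mysql/mysql-bin.004451 /data/dm-worker/relay_log/

# Step 4: point relay.meta at the next binlog file.
cd /data/dm-worker/relay_log/
sed -i 's/^binlog-name = .*/binlog-name = "mysql-bin.004452"/' relay.meta
sed -i 's/^binlog-pos = .*/binlog-pos = 4/' relay.meta
# Only needed when enable_gtid is true for the DM-worker:
sed -i 's/^binlog-gtid = .*/binlog-gtid = "f0e914ef-54cf-11e7-813d-6c92bf2fa791:1-138218058"/' relay.meta
```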
For binlog replication processing units, manually recover the migration using the following steps (a checkpoint verification sketch follows this list):
1. In the upstream, identify that the size of the corresponding binlog file exceeded 4 GB when the error occurred.

2. Stop the migration task using `stop-task`.

3. Update the `binlog_name` in the global checkpoint and in each table checkpoint of the downstream `dm_meta` database to the name of the binlog file in error; update `binlog_pos` to a valid position value for which migration has completed, for example, 4.

    For example, if the name of the task in error is `dm_test`, the corresponding `source-id` is `replica-1`, and the corresponding binlog file is `mysql-bin|000001.004451`, execute the following command:

    ```sql
    UPDATE dm_test_syncer_checkpoint SET binlog_name='mysql-bin|000001.004451', binlog_pos = 4 WHERE id='replica-1';
    ```

4. Specify `safe-mode: true` in the `syncers` section of the migration task configuration to ensure that the task is re-entrant.

5. Start the migration task using `start-task`.

6. View the status of the migration task using `query-status`. You can restore `safe-mode` to the original value and restart the migration task when migration is done for the original error-triggering relay log files.
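A minimal sketch of verifying the checkpoint update in step 3, assuming the default `dm_meta` metadata schema, a downstream TiDB on the default port, and the task and source names from the example above; adjust the connection parameters to your deployment:

```bash
# Inspect the syncer checkpoints of the dm_test task in the downstream
# dm_meta database (host, port, and credentials are illustrative).
mysql -h 127.0.0.1 -P 4000 -u root -p -e \
  "SELECT id, binlog_name, binlog_pos FROM dm_meta.dm_test_syncer_checkpoint WHERE id='replica-1';"
```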
### `Access denied for user 'root'@'172.31.43.27' (using password: YES)` shows when you query the task or check the log
For database-related passwords in all the DM configuration files, it is recommended to use passwords encrypted by dmctl. If a database password is empty, there is no need to encrypt it. For how to encrypt the plaintext password, see Encrypt the database password using dmctl.
In addition, the user of the upstream and downstream databases must have the corresponding read and write privileges. Data Migration also prechecks the corresponding privileges automatically while starting the data migration task.
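As a minimal sketch of the encryption recommendation above, assuming the dmctl binary is in the current directory and supports the `--encrypt` flag described in the encryption document; the plaintext password below is illustrative:

```bash
# Encrypt a plaintext password for use in DM configuration files.
./dmctl --encrypt '123456'
```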
### The load processing unit reports the error `packet for query is too large. Try adjusting the 'max_allowed_packet' variable`
#### Reasons
- Both the MySQL client and the MySQL/TiDB server have quota limits on `max_allowed_packet`. If any `max_allowed_packet` exceeds the limit, the client receives an error message. Currently, for the latest version of DM and the TiDB server, the default value of `max_allowed_packet` is `64M`.
- The full data import processing unit in DM does not support splitting the SQL file exported by the Dump processing unit in DM.
#### Solutions
- It is recommended to set the `statement-size` option of `extra-args` for the Dump processing unit:

    According to the default `--statement-size` setting, the default size of an `Insert Statement` generated by the Dump processing unit is about `1M`. With this default setting, the load processing unit does not report the error `packet for query is too large. Try adjusting the 'max_allowed_packet' variable` in most cases.

    Sometimes you might receive the following `WARN` log during the data dump. This `WARN` log does not affect the dump process. It only means that wide tables are dumped.

    ```
    Row bigger than statement_size for xxx
    ```

- If a single row of a wide table exceeds `64M`, you need to modify the following configurations and make sure the configurations take effect (a sketch follows this list):

    1. In the TiDB server, execute `set @@global.max_allowed_packet=134217728` (`134217728` = 128 MB).

    2. First add `max-allowed-packet: 134217728` (128 MB) to the `target-database` section in the DM task configuration file. Then execute the `stop-task` command and then the `start-task` command.
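A minimal sketch of step 1 above, assuming illustrative connection parameters for the downstream TiDB server:

```bash
# Raise max_allowed_packet on the TiDB server to 128 MB (134217728 bytes).
mysql -h 127.0.0.1 -P 4000 -u root -p -e \
  "SET @@GLOBAL.max_allowed_packet=134217728;"

# Verify the new value; new connections pick it up.
mysql -h 127.0.0.1 -P 4000 -u root -p -e \
  "SELECT @@GLOBAL.max_allowed_packet;"
```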