
TiDB Binlog Error Handling

This document introduces common errors that you might encounter when you use TiDB Binlog, along with solutions to these errors.

"kafka server: Message was too large, server rejected it to avoid allocation" error is returned when Drainer replicates data to Kafka

Cause: Executing a large transaction in TiDB generates binlog data of a large size, which might exceed Kafka's limit on the message size.

Solution: Adjust the configuration parameters of Kafka as shown below:

message.max.bytes=1073741824
replica.fetch.max.bytes=1073741824
fetch.message.max.bytes=1073741824
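
For example, you might apply the broker-side settings in each broker's configuration file and then restart the broker. This is a minimal sketch, assuming the configuration lives in server.properties (the path depends on your deployment); note that fetch.message.max.bytes is a consumer-side setting and belongs in the configuration of the client that consumes from Kafka:

# server.properties on each Kafka broker (path is deployment-specific):
message.max.bytes=1073741824
replica.fetch.max.bytes=1073741824

# Consumer configuration of the downstream client that reads from Kafka:
fetch.message.max.bytes=1073741824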

Pump returns the "no space left on device" error

Cause: The local disk space is insufficient for Pump to write binlog data normally.

Solution: Clean up the disk space and then restart Pump.
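
A minimal sketch of the recovery steps, assuming Pump's data directory is /data/pump and Pump runs as a systemd service named pump (both are assumptions about your deployment):

# Check how much space is left on the disk that holds Pump's data directory.
df -h /data/pump
# Identify what is consuming the space before cleaning up.
du -sh /data/pump/*
# After freeing disk space, restart Pump (the service name is hypothetical;
# use the restart method of your deployment tool instead if applicable).
systemctl restart pump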

"fail to notify all living drainer" is returned when Pump is started

Cause: When Pump starts, it notifies all Drainer nodes that are in the online state. If Pump fails to notify any of them, this error is logged.

Solution: Use the binlogctl tool to check whether each Drainer node is working normally, and make sure that every Drainer node in the online state is actually running. If the recorded state of a Drainer node does not match its actual working status, use the binlogctl tool to change its state and then restart Pump, as shown in the sketch below.
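
For example, assuming PD is reachable at 127.0.0.1:2379 (the PD address and the node ID below are examples for illustration):

# List all Drainer nodes and the state recorded for each of them.
binlogctl -pd-urls=http://127.0.0.1:2379 -cmd drainers
# If a node's recorded state does not match its actual status, update it.
# For example, mark a Drainer that is no longer running as paused:
binlogctl -pd-urls=http://127.0.0.1:2379 -cmd update-drainer -node-id ip-127-0-0-1:8249 -state paused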

Data loss occurs during the TiDB Binlog replication

You need to confirm that TiDB Binlog is enabled on all TiDB instances and running normally. If the cluster version is later than v3.0, use the curl {TiDB_IP}:{STATUS_PORT}/info/all command to confirm the TiDB Binlog status on all TiDB instances, for example:
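
A sketch of the check against a TiDB instance at 127.0.0.1 with the default status port 10080 (both values are examples):

curl http://127.0.0.1:10080/info/all
# In the returned JSON, check the binlog status reported for each TiDB
# instance and confirm that binlog is enabled on all of them.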

When the upstream transaction is large, Pump reports the "rpc error: code = ResourceExhausted desc = trying to send message larger than max (2191430008 vs. 2147483647)" error

This error occurs because the gRPC message that TiDB sends to Pump exceeds the size limit. You can adjust the maximum size of a gRPC message that Pump allows by specifying max-message-size when starting Pump. For example:
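
As a sketch, assuming max-message-size is supplied as a command-line flag in bytes (whether it is a flag or a configuration file item may differ across releases, and the value below is only an example):

# Start Pump with a larger gRPC message size limit (the flag form and the
# 2.5 GiB value are assumptions; check the Pump options for your release).
pump -config pump.toml -max-message-size 2684354560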

Is there a cleaning mechanism for the incremental data that Drainer outputs in the file format? Will the data be deleted?

  • In Drainer v3.0.x, there is no cleaning mechanism for incremental data of the file format.
  • In Drainer v4.0.x, there is a time-based data cleaning mechanism. For details, refer to Drainer's retention-time configuration item; see the sketch after this list.
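
As a sketch, the relevant Drainer configuration might look like the following (the section layout, the example path, and the 7-day value reflect a reading of the v4.0 sample configuration; verify them against your release):

# drainer.toml: with file output, incremental files older than the
# retention period are cleaned up automatically.
[syncer]
db-type = "file"

[syncer.to]
# Directory where the incremental files are written (example path).
dir = "/data/drainer"
# Days to keep incremental files before they are deleted (example value).
retention-time = 7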
