Upgrade TiDB Binlog
This document describes how to upgrade TiDB Binlog, whether deployed with TiDB Ansible or deployed manually, to the latest cluster version. It also includes a section on how to upgrade TiDB Binlog from an earlier incompatible version (the Kafka/Local version) to the latest version.
Upgrade TiDB Binlog deployed with TiDB Ansible
Follow the steps in this section if you deployed TiDB Binlog with the TiDB Ansible playbook.
Upgrade Pump
First, upgrade the Pump component:
1. Copy the new version of the `pump` binary file into the `{{ resources_dir }}/bin` directory.
2. Execute the `ansible-playbook rolling_update.yml --tags=pump` command to perform a rolling update for Pump.
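For reference, the two steps combined might look like the following on the Ansible control machine. The paths are assumptions (a new binary unpacked to `/tmp/tidb-binlog/bin`, TiDB Ansible in `/home/tidb/tidb-ansible`); substitute your own:

```bash
# Hypothetical paths; adjust both to your setup.
cp /tmp/tidb-binlog/bin/pump /home/tidb/tidb-ansible/resources/bin/pump
cd /home/tidb/tidb-ansible
ansible-playbook rolling_update.yml --tags=pump   # rolling update, one Pump instance at a time
```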
Upgrade Drainer
Second, upgrade the Drainer component:
1. Copy the new version of the `drainer` binary file into the `{{ resources_dir }}/bin` directory.
2. Execute the `ansible-playbook stop_drainer.yml --tags=drainer` command.
3. Execute the `ansible-playbook start_drainer.yml --tags=drainer` command.
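With the same hypothetical paths as in the Pump sketch above, this might look like:

```bash
cp /tmp/tidb-binlog/bin/drainer /home/tidb/tidb-ansible/resources/bin/drainer
cd /home/tidb/tidb-ansible
ansible-playbook stop_drainer.yml --tags=drainer    # stop the old Drainer
ansible-playbook start_drainer.yml --tags=drainer   # start it on the new binary
```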
Upgrade TiDB Binlog deployed manually
Follow the steps in this section if you deployed TiDB Binlog manually.
Upgrade Pump
First, upgrade each Pump instance in the cluster one by one. This ensures that there are always Pump instances in the cluster available to receive binlogs from TiDB. The steps are as follows:

1. Replace the original file with the new version of `pump`.
2. Restart the Pump process.
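On each Pump node in turn, the replace-and-restart might look like the sketch below. The deployment directory, config path, and restart method are all assumptions; use whatever matches how Pump was originally started (systemd, supervise, nohup, and so on):

```bash
# Hypothetical layout: Pump deployed under /data/deploy with conf/pump.toml.
cd /data/deploy
mv bin/pump bin/pump.old                # keep the old binary as a fallback
cp /tmp/tidb-binlog/bin/pump bin/pump   # drop in the new binary
pkill -f bin/pump                       # stop the running Pump process
sleep 5                                 # give it a moment to exit cleanly
nohup bin/pump -config conf/pump.toml >> pump.log 2>&1 &
```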
Upgrade Drainer
Second, upgrade the Drainer component:
1. Replace the original file with the new version of `drainer`.
2. Restart the Drainer process.
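The Drainer side mirrors the Pump sketch above, again with hypothetical paths:

```bash
cd /data/deploy
mv bin/drainer bin/drainer.old
cp /tmp/tidb-binlog/bin/drainer bin/drainer
pkill -f bin/drainer && sleep 5
nohup bin/drainer -config conf/drainer.toml >> drainer.log 2>&1 &
```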
Upgrade TiDB Binlog from Kafka/Local version to the cluster version
The new TiDB versions (v2.0.8-binlog, v2.1.0-rc.5 or later) are not compatible with the Kafka or Local version of TiDB Binlog. If TiDB is upgraded to one of these versions, you must use the cluster version of TiDB Binlog. If you were using the Kafka or Local version of TiDB Binlog before the upgrade, upgrade it to the cluster version.
The corresponding relationship between TiDB Binlog versions and TiDB versions is shown in the following table:
| TiDB Binlog version | TiDB version | Note |
| --- | --- | --- |
| Local | TiDB 1.0 or earlier | |
| Kafka | TiDB 1.0 ~ TiDB 2.1 RC5 | TiDB 1.0 supports both the Local and Kafka versions of TiDB Binlog. |
| Cluster | TiDB v2.0.8-binlog, TiDB 2.1 RC5 or later | TiDB v2.0.8-binlog is a special 2.0 version supporting the cluster version of TiDB Binlog. |
Upgrade process
If importing the full data again is acceptable, you can abandon the old version and deploy TiDB Binlog by following TiDB Binlog Cluster Deployment.
If you want to resume replication from the original checkpoint, perform the following steps to upgrade TiDB Binlog:
1. Deploy the new version of Pump.

2. Stop the TiDB cluster service.

3. Upgrade TiDB and its configuration so that binlog data is written to the new Pump cluster.

4. Reconnect the TiDB cluster to the service.

5. Make sure that the old version of Drainer has completely replicated the data in the old version of Pump to the downstream.

    Query the `status` interface of Drainer. The command is as below:

    ```bash
    curl 'http://172.16.10.49:8249/status'
    ```

    ```
    {"PumpPos":{"172.16.10.49:8250":{"offset":32686}},"Synced": true ,"DepositWindow":{"Upper":398907800202772481,"Lower":398907799455662081}}
    ```

    If the value of `Synced` in the output is `true`, Drainer has completely replicated the data in the old version of Pump to the downstream (a polling helper is sketched after this list).

6. Start the new version of Drainer.

7. Close the old versions of Pump and Drainer, and the Kafka and ZooKeeper clusters they depend on.
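If you prefer not to eyeball the JSON in step 5, a small loop can poll the `status` interface until `Synced` becomes `true`. This is only a convenience sketch: the address is the example one from above, and `jq` is an extra dependency, not something the upgrade itself requires:

```bash
# Poll Drainer's status interface until Synced is true (assumes jq is installed).
until [ "$(curl -s 'http://172.16.10.49:8249/status' | jq -r '.Synced')" = "true" ]; do
  echo "Drainer not fully synced yet; retrying in 5s..."
  sleep 5
done
echo "Old Pump data has been fully replicated downstream."
```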