# TiDB Operator 1.1.10 Release Notes
Release date: January 28, 2021
TiDB Operator version: 1.1.10
## Rolling Update Changes
- Upgrading TiDB Operator will cause the TidbMonitor Pod to be recreated due to #3684
## New Features
- Support canary upgrade of TiDB Operator (#3548, @shonge, #3554, @cvvz)
- TidbMonitor supports `remotewrite` configuration (#3679, @mikechengwei)
- Support configuring init containers for components in the TiDB cluster (#3713, @handlerww)
- Add local backend support to the TiDB Lightning chart (#3644, @csuzhangxc)
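The new `remotewrite` support might be configured as a TidbMonitor fragment like the one below. This is a minimal sketch, not the authoritative schema: the field path `spec.prometheus.remoteWrite` and the endpoint URL are assumptions to verify against the TidbMonitor CRD shipped with your operator version.

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: basic
spec:
  clusters:
    - name: basic            # name of the TidbCluster to monitor
  prometheus:
    baseImage: prom/prometheus
    version: v2.18.1
    # Assumed field for #3679: push scraped metrics to a remote storage endpoint
    remoteWrite:
      - url: "http://remote-storage.example.com/api/v1/write"   # hypothetical endpoint
```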
## Improvements
- Support customizing the storage config for TiDB slow log (#3731, @BinChenn)
- Add the `tidb_cluster` label for the scrape jobs in TidbMonitor to support monitoring multiple clusters (#3750, @mikechengwei)
- Support persisting the checkpoint for the TiDB Lightning Helm chart (#3653, @csuzhangxc)
- Change the directory of the customized alert rules in TidbMonitor from `tidb:${tidb_image_version}` to `tidb:${initializer_image_version}` so that when the TiDB cluster is upgraded afterwards, the TidbMonitor Pod will not be recreated (#3684, @BinChenn)
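The slow-log storage customization above might look like the following TidbCluster fragment; `separateSlowLog`, `slowLogVolumeName`, and `storageVolumes` reflect my understanding of the TidbCluster API and should be verified against the CRD before use.

```yaml
# Fragment of a TidbCluster spec (assumed field names; verify against the CRD)
spec:
  tidb:
    separateSlowLog: true
    slowLogVolumeName: slowlog       # write slow logs to the dedicated volume below
    storageVolumes:
      - name: slowlog
        storageSize: "10Gi"
        mountPath: /var/log/tidbslow
```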
## Bug Fixes
- Fix the issue that when TLS is enabled for the TiDB cluster, if `spec.from` or `spec.to` is not configured, backup and restore jobs with BR might fail (#3707, @BinChenn)
- Fix the bug that if the advanced StatefulSet is enabled and `delete-slots` annotations are added for PD or TiKV, the Pods whose ordinal is bigger than `replicas - 1` will be terminated directly without any pre-delete operations such as evicting leaders (#3702, @cvvz)
- Fix the issue that after the Pod has been evicted or killed, the status of backup or restore is not updated to `Failed` (#3696, @csuzhangxc)
- Fix the issue that when the TiKV cluster is not bootstrapped due to incorrect configuration, the TiKV component could not be recovered by editing the `TidbCluster` CR (#3694, @cvvz)
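For context on the `delete-slots` fix above: with the advanced StatefulSet enabled, these annotations request deletion of specific Pod ordinals during scale-in. The annotation key and JSON value format below are assumptions based on my reading of the advanced StatefulSet feature, not taken from this release note:

```yaml
# Fragment of a TidbCluster manifest (assumed annotation key; verify before use)
metadata:
  annotations:
    # Ask the advanced StatefulSet controller to remove the TiKV Pod with ordinal 1
    tikv.tidb.pingcap.com/delete-slots: '[1]'
spec:
  tikv:
    replicas: 3   # desired replica count after the slot is deleted
```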