TiDB Operator 1.1.10 Release Notes
Release date: January 28, 2021
TiDB Operator version: 1.1.10
Compatibility Changes
Due to the changes of #3638, the `apiVersion` of ClusterRoleBinding, ClusterRole, RoleBinding, and Role created in the TiDB Operator chart is changed from `rbac.authorization.k8s.io/v1beta1` to `rbac.authorization.k8s.io/v1`. In this case, upgrading TiDB Operator through `helm upgrade` may report the following error: `Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: namespace:, name: tidb-operator:tidb-controller-manager, existing_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole, new_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole`

For details, refer to helm/helm#7697. In this case, you need to delete TiDB Operator through `helm uninstall` and then reinstall it (deleting TiDB Operator will not affect the current TiDB clusters).
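If you hit this error, the following is a minimal sketch of the re-install, assuming the release name `tidb-operator`, the namespace `tidb-admin`, and the `pingcap` chart repository; the `--version=v1.1.10` flag assumes you are installing this release. Adjust these names to match your deployment.

```shell
# Remove the existing TiDB Operator release. As noted above, this does not
# affect running TiDB clusters.
helm uninstall tidb-operator --namespace=tidb-admin

# Reinstall so that the RBAC objects are recreated with
# apiVersion rbac.authorization.k8s.io/v1.
helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.1.10
```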
Rolling Update Changes
- Upgrading TiDB Operator will cause the recreation of the TidbMonitor Pod due to #3684
New Features
- Support canary upgrade of TiDB Operator (#3548, @shonge, #3554, @cvvz)
- TidbMonitor supports `remotewrite` configuration (#3679, @mikechengwei); see the configuration sketch after this list
- Support configuring init containers for components in the TiDB cluster (#3713, @handlerww)
- Add local backend support to the TiDB Lightning chart (#3644, @csuzhangxc)
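The following is a minimal, hypothetical sketch of enabling remote write on an existing TidbMonitor. It assumes a TidbMonitor named `basic` in the `monitoring` namespace, a receiver at `http://remote-write-host:9090/api/v1/write`, and the field path `spec.prometheus.remoteWrite`; verify the exact field name against the TidbMonitor API reference for your TiDB Operator version.

```shell
# Hypothetical example: add one remote write target to TidbMonitor "basic".
# The field path spec.prometheus.remoteWrite is an assumption; check the
# TidbMonitor CRD reference before applying.
kubectl -n monitoring patch tidbmonitor basic --type merge \
  -p '{"spec":{"prometheus":{"remoteWrite":[{"url":"http://remote-write-host:9090/api/v1/write"}]}}}'
```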
Improvements
- Support customizing the storage config for TiDB slow log (#3731, @BinChenn)
- Add the `tidb_cluster` label for the scrape jobs in TidbMonitor to support monitoring multiple clusters (#3750, @mikechengwei)
- Supports persisting checkpoint for the TiDB Lightning helm chart (#3653, @csuzhangxc)
- Change the directory of the customized alert rules in TidbMonitor from `tidb:${tidb_image_version}` to `tidb:${initializer_image_version}` so that when the TiDB cluster is upgraded afterwards, the TidbMonitor Pod will not be recreated (#3684, @BinChenn)
Bug Fixes
- Fix the issue that when TLS is enabled for the TiDB cluster, if `spec.from` or `spec.to` is not configured, backup and restore jobs with BR might fail (#3707, @BinChenn)
- Fix the bug that if the advanced StatefulSet is enabled and `delete-slots` annotations are added for PD or TiKV, the Pods whose ordinal is bigger than `replicas - 1` will be terminated directly without any pre-delete operations such as evicting leaders (#3702, @cvvz); an annotation example follows this list
- Fix the issue that after the Pod has been evicted or killed, the status of backup or restore is not updated to `Failed` (#3696, @csuzhangxc)
- Fix the issue that when the TiKV cluster is not bootstrapped due to incorrect configuration, the TiKV component could not be recovered by editing `TidbCluster` CR (#3694, @cvvz)
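For context on the `delete-slots` fix above, the following is a minimal sketch of how such annotations are typically set on a TidbCluster when the advanced StatefulSet controller is enabled. The cluster name `basic`, the namespace `tidb`, and the slot value `[1]` are placeholders, and the annotation key should be verified against the advanced StatefulSet documentation for your TiDB Operator version.

```shell
# Hypothetical example: mark TiKV Pod ordinal 1 of TidbCluster "basic" for
# deletion via a delete-slots annotation. With the fix in #3702, pre-delete
# operations such as evicting leaders run before the Pod is terminated.
kubectl -n tidb annotate tidbcluster basic \
  tikv.tidb.pingcap.com/delete-slots='[1]' --overwrite
```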