TiDB Operator 1.1.10 Release Notes

Release date: January 28, 2021

TiDB Operator version: 1.1.10

Compatibility Changes

  • Due to the changes in #3638, the apiVersion of the ClusterRoleBinding, ClusterRole, RoleBinding, and Role resources created in the TiDB Operator chart is changed from rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1. In this case, upgrading TiDB Operator through helm upgrade may report the following error:

    Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: namespace:, name: tidb-operator:tidb-controller-manager, existing_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole, new_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole

    For details, refer to helm/helm#7697. To work around this error, delete TiDB Operator through helm uninstall and then reinstall it (deleting TiDB Operator does not affect the running TiDB clusters).
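A minimal sketch of the reinstall path described above. The release name `tidb-operator`, namespace `tidb-admin`, and chart version flag are assumptions; adjust them to match your deployment:

```shell
# Remove the existing TiDB Operator release (running TiDB clusters are not affected).
helm uninstall tidb-operator -n tidb-admin

# Reinstall with the new chart, which creates the RBAC resources
# under rbac.authorization.k8s.io/v1.
helm install tidb-operator pingcap/tidb-operator -n tidb-admin --version v1.1.10
```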

Rolling Update Changes

  • Upgrading TiDB Operator will cause the TidbMonitor Pod to be recreated due to #3684.

Improvements

  • Support customizing the storage config for TiDB slow log (#3731, @BinChenn)
  • Add the tidb_cluster label for the scrape jobs in TidbMonitor to support monitoring multiple clusters (#3750, @mikechengwei)
  • Support persisting checkpoints for the TiDB Lightning helm chart (#3653, @csuzhangxc)
  • Change the directory of the customized alert rules in TidbMonitor from tidb:${tidb_image_version} to tidb:${initializer_image_version} so that when the TiDB cluster is later upgraded, the TidbMonitor Pod will not be recreated (#3684, @BinChenn)

Bug Fixes

  • Fix the issue that when TLS is enabled for the TiDB cluster, backup and restore jobs with BR might fail if spec.from or spec.to is not configured (#3707, @BinChenn)
  • Fix the bug that if the advanced StatefulSet is enabled and delete-slots annotations are added for PD or TiKV, the Pods whose ordinal is greater than replicas - 1 are terminated directly without any pre-delete operations, such as evicting leaders (#3702, @cvvz)
  • Fix the issue that after the Pod has been evicted or killed, the status of backup or restore is not updated to Failed (#3696, @csuzhangxc)
  • Fix the issue that when the TiKV cluster is not bootstrapped due to incorrect configuration, the TiKV component could not be recovered by editing TidbCluster CR (#3694, @cvvz)
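The delete-slots workflow mentioned in the advanced StatefulSet fix above can be sketched as follows. The cluster name `basic`, namespace `tidb-cluster`, and the slot ordinal are placeholders for illustration:

```shell
# Mark TiKV Pod ordinal 2 for removal via the advanced StatefulSet
# delete-slots annotation on the TidbCluster CR; with the fix in #3702,
# TiDB Operator evicts region leaders before terminating the Pod.
kubectl -n tidb-cluster annotate tidbcluster basic \
  tikv.tidb.pingcap.com/delete-slots='[2]' --overwrite
```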