TiDB Operator 1.2.0-beta.1 Release Notes

Release date: April 7, 2021

TiDB Operator version: 1.2.0-beta.1

Compatibility Changes

  • Due to the changes in #3638, the apiVersion of the ClusterRoleBinding, ClusterRole, RoleBinding, and Role resources created in the TiDB Operator chart is changed from rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1. In this case, upgrading TiDB Operator through helm upgrade may report the following error:

    Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: namespace:, name: tidb-operator:tidb-controller-manager, existing_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole, new_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole

    For details, refer to helm/helm#7697. In this case, you need to delete TiDB Operator through helm uninstall and then reinstall it (deleting TiDB Operator will not affect the current TiDB clusters).
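
    For example, a minimal sketch of the reinstallation, assuming TiDB Operator is installed as the Helm release tidb-operator in the tidb-admin namespace (the release name, namespace, and values file below are assumptions; adjust them to your deployment):

        # Uninstall the existing TiDB Operator release; running TiDB clusters are not affected.
        helm uninstall tidb-operator -n tidb-admin
        # Reinstall TiDB Operator from the chart repository with the new version and your existing values.
        helm install tidb-operator pingcap/tidb-operator -n tidb-admin --version v1.2.0-beta.1 -f values-tidb-operator.yaml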

Rolling Update Changes

  • Upgrading TiDB Operator causes the TidbMonitor Pod to be recreated due to #3785.

New Features

  • Support setting customized environment variables for backup and restore job containers (#3833, @dragonly)
  • Add additional volume and volumeMount configurations to TidbMonitor (#3855, @mikechengwei)
  • Support affinity and tolerations in the backup/restore CR (#3835, @dragonly); see the sketch after this list
  • The resources in the tidb-operator chart use the new service account when appendReleaseSuffix is set to true (#3819, @DanielZhangQD)
  • Support configuring durations for leader election (#3794, @july2993)
  • Add the tidb_cluster label for the scrape jobs in TidbMonitor to support monitoring multiple clusters (#3750, @mikechengwei)
  • Support setting customized store labels according to the node labels (#3784, @L3T)
  • Support customizing the storage config for TiDB slow log (#3731, @BinChenn)
  • TidbMonitor supports remote write configuration (#3679, @mikechengwei)
  • Support configuring init containers for components in the TiDB cluster (#3713, @handlerww)
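
The new backup/restore options above can be combined in one CR. The following Backup CR is a minimal sketch, not an excerpt from this release: the object names, node label, environment variable, and BR/S3 settings are placeholder assumptions used to illustrate the env, affinity, and tolerations fields; adjust them to your environment.

    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo-backup            # hypothetical name
      namespace: backup-test       # hypothetical namespace
    spec:
      env:                         # customized environment variables for the backup job container
        - name: BACKUP_LOG_LEVEL   # hypothetical variable
          value: debug
      affinity:                    # schedule the backup job onto dedicated nodes
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: dedicated
                    operator: In
                    values:
                      - backup
      tolerations:                 # tolerate the taint on those dedicated nodes
        - key: dedicated
          operator: Equal
          value: backup
          effect: NoSchedule
      br:
        cluster: demo-cluster      # hypothetical TidbCluster name
        clusterNamespace: demo     # hypothetical namespace of the TidbCluster
      s3:
        provider: aws
        secretName: s3-secret      # hypothetical Secret with storage credentials
        bucket: backup-bucket      # hypothetical bucket name

According to #3833 and #3835, the same environment variable, affinity, and toleration options also apply to the Restore CR.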

Improvements

  • Add retry for DNS lookup failures in TiDBInitializer (#3884, @handlerww)
  • Optimize the Thanos example YAML files (#3726, @mikechengwei)
  • Delete the evict leader scheduler after the TiKV Pod is recreated during the rolling update (#3724, @handlerww)
  • Support multiple PVCs for PD during scaling and failover (#3820, @dragonly)
  • Support multiple PVCs for TiKV during scaling (#3816, @dragonly)
  • Support PVC resizing for TiDB (#3891, @dragonly)
  • Add TiFlash rolling upgrade logic to avoid all TiFlash stores being unavailable at the same time during the upgrade (#3789, @handlerww)
  • Retrieve the region leader count directly from the TiKV Pod instead of from PD to get the accurate count (#3801, @DanielZhangQD)
  • Print RocksDB and Raft logs to stdout to support collecting and querying the logs in Grafana (#3768, @baurine)

Bug Fixes

  • Fix the issue that PVCs are set to an incorrect size if multiple PVCs are configured for PD/TiKV (#3858, @dragonly)
  • Fix the panic issue when .spec.tidb is not set in the TidbCluster CR with TLS enabled (#3852, @dragonly)
  • Fix the issue that some unrecognized environment variables are included in the external labels of the TidbMonitor (#3785, @mikechengwei)
  • Fix the issue that after the Pod has been evicted or killed, the status of backup or restore is not updated to Failed (#3696, @csuzhangxc)
  • Fix the bug that if the advanced StatefulSet is enabled and delete-slots annotations are added for PD or TiKV, the Pods whose ordinal is larger than replicas - 1 are terminated directly without any pre-delete operations, such as evicting leaders (#3702, @cvvz)
  • Fix the issue that when TLS is enabled for the TiDB cluster, if spec.from or spec.to is not configured, backup and restore jobs with BR might fail (#3707, @BinChenn)
  • Fix the issue that when the TiKV cluster is not bootstrapped due to incorrect configuration, the TiKV component could not be recovered by editing TidbCluster CR (#3694, @cvvz)
