This document describes how to use shards for TidbMonitor.
TidbMonitor collects monitoring data for a single TiDB cluster or multiple TiDB clusters. When the amount of monitoring data is large, the computing capacity of a single TidbMonitor might hit a bottleneck. In this case, it is recommended to use the shards feature of Prometheus. This feature performs modulus on the `__address__` of the monitoring targets (Targets) to divide the monitoring data of multiple targets among multiple TidbMonitor Pods.
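Under the hood, this follows the standard Prometheus `hashmod` relabeling pattern. The following rules are a minimal sketch (not TidbMonitor's actual generated configuration) of how a Pod with shard index 0 out of 2 shards keeps only its own targets:

```yaml
relabel_configs:
  # Hash each target's address into one of `shards` buckets.
  - source_labels: [__address__]
    modulus: 2               # corresponds to spec.shards
    target_label: __tmp_hash
    action: hashmod
  # Keep only targets whose bucket matches this Pod's shard index.
  - source_labels: [__tmp_hash]
    regex: "0"               # this Pod's shard ordinal
    action: keep
```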
To use shards for TidbMonitor, you need a data aggregation plan. The Thanos method is recommended.
To enable shards for TidbMonitor, you need to specify the `shards` field. For example:
```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: monitor
spec:
  replicas: 1
  shards: 2
  clusters:
    - name: basic
  prometheus:
    baseImage: prom/prometheus
    version: v2.27.1
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v5.2.1
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1
  prometheusReloader:
    baseImage: quay.io/prometheus-operator/prometheus-config-reloader
    version: v0.49.0
  imagePullPolicy: IfNotPresent
```
- The number of Pods corresponding to TidbMonitor is the product of `replicas` and `shards`. For example, when `replicas` is `1` and `shards` is `2`, TiDB Operator creates 2 TidbMonitor Pods.
- When the value of `shards` changes, `Targets` are reallocated. However, the monitoring data already stored on the Pods is not reallocated.
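The Pod-count rule above can be sketched in a few lines of Python (`tidbmonitor_pod_count` is a hypothetical helper name, not part of TiDB Operator):

```python
def tidbmonitor_pod_count(replicas: int, shards: int) -> int:
    """Total TidbMonitor Pods: each of the `shards` shards runs `replicas` Pods."""
    return replicas * shards

# With replicas: 1 and shards: 2, TiDB Operator creates 2 Pods.
print(tidbmonitor_pod_count(1, 2))  # 2
```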
For details on the configuration, refer to the shards example.