Enable Shards for TidbMonitor
This document describes how to use shards for TidbMonitor.
Shards
TidbMonitor collects monitoring data for a single TiDB cluster or multiple TiDB clusters. When the amount of monitoring data is large, the computing capacity of a single TidbMonitor Pod can become a bottleneck. In this case, it is recommended to use shards based on Prometheus modulus: this feature performs hashmod on the __address__ label to divide the monitoring targets (Targets) among multiple TidbMonitor Pods.
To use shards for TidbMonitor, you need a data aggregation solution. Thanos is the recommended one.
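As a sketch of such an aggregation setup (the thanos sidecar fields and the image versions below are assumptions; check them against your TiDB Operator and Thanos releases), you can enable the Thanos sidecar in the TidbMonitor Prometheus spec and then deploy a Thanos Query component that reads from all shards:

spec:
  prometheus:
    baseImage: prom/prometheus
    version: v2.27.1
    thanos:                          # assumed sidecar configuration
      baseImage: thanosio/thanos
      version: v0.17.2               # a Thanos Query component must be deployed separately
                                     # to aggregate the data exposed by each shard's sidecar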
Enable shards
To enable shards for TidbMonitor, you need to specify the shards field. For example:
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: monitor
spec:
  replicas: 1
  shards: 2
  clusters:
    - name: basic
  prometheus:
    baseImage: prom/prometheus
    version: v2.27.1
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v5.2.1
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1
  prometheusReloader:
    baseImage: quay.io/prometheus-operator/prometheus-config-reloader
    version: v0.49.0
  imagePullPolicy: IfNotPresent
Note
- The number of Pods corresponding to TidbMonitor is the product of replicas and shards. For example, when replicas is 1 and shards is 2, TiDB Operator creates 2 TidbMonitor Pods.
- After shards is changed, Targets are reallocated. However, the monitoring data already stored on the Pods is not reallocated.
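As an illustration of the Pod count calculation, the following fragment (with values chosen only for this example) would make TiDB Operator create 2 * 3 = 6 TidbMonitor Pods:

spec:
  replicas: 2   # replicas per shard
  shards: 3     # Targets are divided across 3 shards by hashmod on __address__
  # total TidbMonitor Pods = replicas * shards = 2 * 3 = 6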
For details on the configuration, refer to the shards example.