# Key Metrics

If you use TiUP to deploy the TiDB cluster, the monitoring system (Prometheus & Grafana) is deployed at the same time. For more information, see TiDB Monitoring Framework Overview.
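
If you want to confirm that the co-deployed monitoring components are up before opening Grafana, you can probe their built-in health endpoints. The following is a minimal sketch that assumes the TiUP default ports (Prometheus on 9090, Grafana on 3000) and a local deployment; replace the addresses with the hosts in your topology file.

```python
# Minimal sketch: verify the TiUP-deployed monitoring stack is reachable.
# The hosts and ports below are the TiUP defaults and are assumptions;
# adjust them to match your topology file.
from urllib.request import urlopen
from urllib.error import URLError

ENDPOINTS = {
    "Prometheus": "http://127.0.0.1:9090/-/healthy",  # Prometheus health endpoint
    "Grafana": "http://127.0.0.1:3000/api/health",    # Grafana health endpoint
}

for name, url in ENDPOINTS.items():
    try:
        with urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}")
    except URLError as exc:
        print(f"{name}: unreachable ({exc.reason})")
```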

The Grafana dashboard is divided into a series of sub-dashboards, including Overview, PD, TiDB, TiKV, Node_exporter, Disk Performance, and Performance_overview. These sub-dashboards provide a large number of metrics to help you diagnose cluster issues.

For routine operations, you can get an overview of the component (PD, TiDB, TiKV) status and the entire cluster from the Overview dashboard, where the key metrics are displayed. This document provides a detailed description of these key metrics.
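
Every panel on the Overview dashboard is backed by a PromQL query against Prometheus, so you can also read the same data programmatically. As a minimal sketch, the script below queries the standard Prometheus `up` metric, which records whether each scrape target (tidb, tikv, pd, and so on) is reachable and underlies service-status panels such as Services Up; the Prometheus address is an assumption to adjust for your cluster.

```python
# Minimal sketch: list the liveness of all scrape targets via the
# Prometheus instant-query API. The Prometheus address is an assumption.
import json
from urllib.request import urlopen
from urllib.parse import urlencode

PROM = "http://127.0.0.1:9090"

def instant_query(expr):
    """Run a PromQL instant query and return the result vector."""
    qs = urlencode({"query": expr})
    with urlopen(f"{PROM}/api/v1/query?{qs}", timeout=5) as resp:
        return json.load(resp)["data"]["result"]

# Each series carries `job` and `instance` labels and a 0/1 value.
for series in instant_query("up"):
    labels = series["metric"]
    value = series["value"][1]  # the value pair is [timestamp, "value"]
    print(f'{labels.get("job", "?"):12s} {labels.get("instance", "?"):28s} up={value}')
```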

## Key metrics description

To understand the key metrics displayed on the Overview dashboard, check the following table:

| Service | Panel name | Description | Normal range |
| ------- | ---------- | ----------- | ------------ |
| Services Port Status | Services Up | The number of online nodes for each service. | |
| PD | PD role | The role of the current PD instance. | |
| PD | Storage capacity | The total storage capacity of the TiDB cluster. | |
| PD | Current storage size | The occupied storage capacity of the TiDB cluster, including the space occupied by TiKV replicas. | |
| PD | Normal stores | The number of nodes in the normal state. | |
| PD | Abnormal stores | The number of nodes in the abnormal state. | 0 |
| PD | Number of Regions | The total number of Regions in the current cluster. Note that the number of Regions is independent of the number of replicas. | |
| PD | 99% completed_cmds_duration_seconds | The 99th percentile duration to complete a pd-server request. | Less than 5 ms |
| PD | Handle_requests_duration_seconds | The network duration of a PD request. | |
| PD | Region health | The state of each Region. | Generally, the number of pending peers is less than 100, and the number of missing peers should not remain greater than 0 for a long time. |
| PD | Hot write Region's leader distribution | The total number of leaders that are write hotspots on each TiKV instance. | |
| PD | Hot read Region's leader distribution | The total number of leaders that are read hotspots on each TiKV instance. | |
| PD | Region heartbeat report | The count of heartbeats reported to PD per instance. | |
| PD | 99% Region heartbeat latency | The heartbeat latency per TiKV instance (P99). See the query sketch after this table. | |
| TiDB | Statement OPS | The number of SQL statements executed per second, counted by statement type (SELECT, INSERT, UPDATE, and so on). | |
| TiDB | Duration | The execution time.<br/>1. The duration between the time that the client's network request is sent to TiDB and the time that the request is returned to the client after TiDB has executed it. In general, client requests are sent in the form of SQL statements; however, this duration can also include the execution time of commands such as COM_PING, COM_SLEEP, COM_STMT_FETCH, and COM_SEND_LONG_DATA.<br/>2. Because TiDB supports Multi-Query, a client can send multiple SQL statements at one time, such as `select 1; select 1; select 1;`. In this case, the total execution time of the query includes the execution time of all its statements. | |
| TiDB | CPS By Instance | The command statistics on each TiDB instance, classified according to the success or failure of the command execution result. | |
| TiDB | Failed Query OPM | The statistics of errors (such as syntax errors and primary key conflicts) that occur when SQL statements are executed per minute on each TiDB instance. The module in which the error occurs and the error code are included. | |
| TiDB | Connection Count | The number of connections on each TiDB instance. | |
| TiDB | Memory Usage | The memory usage statistics of each TiDB instance, divided into the memory occupied by the process and the heap memory allocated by Golang. | |
| TiDB | Transaction OPS | The number of transactions executed per second. | |
| TiDB | Transaction Duration | The execution time of a transaction. | |
| TiDB | KV Cmd OPS | The number of executed KV commands. | |
| TiDB | KV Cmd Duration 99 | The execution time of KV commands (P99). | |
| TiDB | PD TSO OPS | The number of gRPC requests that TiDB sends to PD per second (cmd) and the number of TSO requests (request). Each gRPC request contains a batch of TSO requests. | |
| TiDB | PD TSO Wait Duration | The duration that TiDB waits for PD to return the TSO. | |
| TiDB | TiClient Region Error OPS | The number of Region-related errors returned by TiKV. | |
| TiDB | Lock Resolve OPS | The number of TiDB operations that resolve locks. When a TiDB read or write request encounters a lock, it tries to resolve the lock. | |
| TiDB | KV Backoff OPS | The number of errors returned by TiKV. | |
| TiKV | leader | The number of leaders on each TiKV node. | |
| TiKV | region | The number of Regions on each TiKV node. | |
| TiKV | CPU | The CPU usage ratio on each TiKV node. | |
| TiKV | Memory | The memory usage on each TiKV node. | |
| TiKV | store size | The size of storage space used by each TiKV instance. | |
| TiKV | cf size | The size of each column family (CF). | |
| TiKV | channel full | The number of "channel full" errors on each TiKV instance. | 0 |
| TiKV | server report failures | The number of error messages reported by each TiKV instance. | 0 |
| TiKV | scheduler pending commands | The number of pending commands on each TiKV instance. | |
| TiKV | coprocessor executor count | The number of Coprocessor operations received by TiKV per second, counted separately for each type of Coprocessor. | |
| TiKV | coprocessor request duration | The time consumed to process Coprocessor read requests. | |
| TiKV | raft store CPU | The CPU usage ratio of the raftstore thread. | The default number of threads is 2 (configured by `raftstore.store-pool-size`). A value over 80% for a single thread indicates that the CPU usage is very high. |
| TiKV | Coprocessor CPU | The CPU usage ratio of the Coprocessor thread. | |
| System Info | Vcores | The number of CPU cores. | |
| System Info | Memory | The total memory. | |
| System Info | CPU Usage | The CPU usage ratio, 100% at a maximum. | |
| System Info | Load [1m] | The 1-minute load average. | |
| System Info | Memory Available | The size of the available memory. | |
| System Info | Network Traffic | The statistics of network traffic. | |
| System Info | TCP Retrans | The frequency of TCP retransmissions. | |
| System Info | IO Util | The disk usage ratio, 100% at a maximum. Generally, consider adding a new node when the usage ratio reaches 80% to 90%. | |
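
Several of the panels above (for example, 99% completed_cmds_duration_seconds and 99% Region heartbeat latency) are percentiles computed from Prometheus histograms with `histogram_quantile()`. The sketch below shows the general pattern; the bucket metric name `pd_scheduler_region_heartbeat_latency_seconds_bucket` and the `address` grouping label are assumptions, so confirm the exact names in your Prometheus UI before relying on the query.

```python
# Minimal sketch: compute a Grafana-style P99 panel from Prometheus
# histogram buckets. Both the bucket metric name and the grouping label
# below are assumptions; verify them against your own Prometheus.
import json
from urllib.request import urlopen
from urllib.parse import urlencode

PROM = "http://127.0.0.1:9090"  # assumed Prometheus address

# P99 Region heartbeat latency per instance over a 5-minute window.
EXPR = (
    "histogram_quantile(0.99, sum(rate("
    "pd_scheduler_region_heartbeat_latency_seconds_bucket[5m]"
    ")) by (le, address))"
)

qs = urlencode({"query": EXPR})
with urlopen(f"{PROM}/api/v1/query?{qs}", timeout=5) as resp:
    for series in json.load(resp)["data"]["result"]:
        addr = series["metric"].get("address", "?")
        print(f"{addr}: p99 = {series['value'][1]}s")
```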

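Where the table documents a normal range, you can turn it into a quick scripted check. The following is a sketch under assumed metric names (`pd_cluster_status{type="store_down_count"}` for abnormal stores and `tikv_channel_full_total` for "channel full" errors); verify both names against your own Prometheus before using this for alerting.

```python
# Minimal sketch: flag panels whose documented normal range (0) is violated.
# Both metric names in CHECKS are assumptions based on PD/TiKV conventions.
import json
from urllib.request import urlopen
from urllib.parse import urlencode

PROM = "http://127.0.0.1:9090"  # assumed Prometheus address

CHECKS = {
    "Abnormal stores (normal: 0)": 'sum(pd_cluster_status{type="store_down_count"})',
    "channel full (normal: 0)": "sum(increase(tikv_channel_full_total[5m]))",
}

def query(expr):
    qs = urlencode({"query": expr})
    with urlopen(f"{PROM}/api/v1/query?{qs}", timeout=5) as resp:
        return json.load(resp)["data"]["result"]

for name, expr in CHECKS.items():
    result = query(expr)
    # An empty result vector means no matching series; treat it as 0.
    value = float(result[0]["value"][1]) if result else 0.0
    print(f"{name}: {value:g} -> {'OK' if value == 0 else 'CHECK'}")
```
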
## Interface of the Overview dashboard

(Figure: the Overview dashboard)
