If your cluster is deployed on AWS and uses EBS storage, it is recommended to use EBS encryption. See AWS documentation - EBS Encryption. If you are using non-EBS storage on AWS, such as local NVMe storage, it is recommended to use the encryption at rest introduced in this document.
Encryption at rest means that data is encrypted when it is stored. For databases, this feature is also referred to as TDE (transparent data encryption), as opposed to encryption in flight (TLS) or encryption in use (rarely used). Encryption at rest can be performed at different layers (the SSD drive, the file system, the cloud vendor, and so on), but having TiKV encrypt data before it is stored helps ensure that attackers must authenticate with the database to gain access to data. For example, even if an attacker gains access to the physical machine, data cannot be accessed by copying files on disk.
In a TiDB cluster, different components use different encryption methods. This section introduces the encryption support in different TiDB components, such as TiKV, TiFlash, PD, and Backup & Restore (BR).
When a TiDB cluster is deployed, the majority of user data is stored on TiKV and TiFlash nodes. Some metadata is stored on PD nodes (for example, secondary index keys used as TiKV Region boundaries). To get the full benefits of encryption at rest, you need to enable encryption for all components. Backups, log files, and data transmitted over the network should also be considered when you implement encryption.
TiKV supports encryption at rest. This feature allows TiKV to transparently encrypt data files using AES in CTR mode. To enable encryption at rest, the user must provide an encryption key, which is called the master key. TiKV automatically rotates the data keys that it uses to encrypt actual data files, and the master key can be rotated manually on occasion. Note that encryption at rest only encrypts data at rest (namely, on disk), not data transferred over the network. It is advised to use TLS together with encryption at rest.
Optionally, you can use AWS KMS for both cloud and on-premises deployments. You can also supply the plaintext master key in a file.
TiKV currently does not exclude encryption keys and user data from core dumps. It is advised to disable core dumps for the TiKV process when using encryption at rest. This is not currently handled by TiKV itself.
TiKV tracks encrypted data files using the absolute paths of the files. As a result, once encryption is turned on for a TiKV node, the user should not change data file path configurations such as storage.data-dir, raftstore.raftdb-path, rocksdb.wal-dir, and raftdb.wal-dir.
TiFlash supports encryption at rest. Data keys are generated by TiFlash. All files (including data files, schema files, and temporary files) written into TiFlash (including TiFlash Proxy) are encrypted using the current data key. The encryption algorithms and the encryption configuration (in the tiflash-learner.toml file) supported by TiFlash, as well as the meanings of the monitoring metrics, are consistent with those of TiKV.
If you have deployed TiFlash with Grafana, you can check the TiFlash-Proxy-Details -> Encryption panel.
Encryption-at-rest for PD is an experimental feature, which is configured in the same way as in TiKV.
BR supports S3 server-side encryption (SSE) when backing up data to S3. A customer-owned AWS KMS key can also be used together with S3 server-side encryption. See BR S3 server-side encryption for details.
TiKV, TiDB, and PD info logs might contain user data for debugging purposes. The info log and the data in it are not encrypted. It is recommended to enable log redaction.
TiKV currently supports encrypting data using AES128, AES192, or AES256 in CTR mode. TiKV uses envelope encryption. As a result, two types of keys are used in TiKV when encryption is enabled.
- Master key. The master key is provided by the user and is used to encrypt the data keys that TiKV generates. Management of the master key is external to TiKV.
- Data key. The data key is generated by TiKV and is the key actually used to encrypt data.
The same master key can be shared by multiple instances of TiKV. The recommended way to provide a master key in production is via AWS KMS. Create a customer master key (CMK) through AWS KMS, and then provide the CMK key ID to TiKV in the configuration file. The TiKV process needs access to the KMS CMK while it is running, which can be done by using an IAM role. If TiKV fails to get access to the KMS CMK, it will fail to start or restart. Refer to AWS documentation for KMS and IAM usage.
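As an illustrative sanity check (not a substitute for the IAM setup described in the AWS documentation), you can verify from a TiKV host that its IAM role can reach the CMK at all. The sketch below only confirms access to the key's metadata, not the full set of KMS permissions TiKV needs, and reuses the example key ID from this page:

```shell
# Run on a TiKV host. A successful response returns the CMK metadata and shows that
# the host's IAM role can reach AWS KMS; a permission error suggests the IAM policy
# or instance profile needs to be fixed before starting TiKV.
aws --region us-west-2 kms describe-key --key-id 0987dcba-09fe-87dc-65ba-ab0987654321
```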
Alternatively, if you want to use a custom key, supplying the master key via a file is also supported. The file must contain a 256-bit (32-byte) key encoded as a hex string, end with a newline (namely, \n), and contain nothing else. Persisting the key on disk, however, leaks the key, so the key file is only suitable to be stored on a tmpfs in RAM.
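As a minimal sketch of that recommendation, you could mount a small tmpfs and keep the key file there so the plaintext key never touches persistent storage. The mount point, size, and permissions below are arbitrary illustrative choices:

```shell
# Create a RAM-backed mount point for the key file; its contents disappear on reboot,
# so the key must be provisioned again each time the machine starts.
mkdir -p /mnt/tikv-master-key
mount -t tmpfs -o size=1M,mode=0700 tmpfs /mnt/tikv-master-key
```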
Data keys are passed to the underlying storage engine (namely, RocksDB). All files written by RocksDB, including SST files, WAL files, and the MANIFEST file, are encrypted by the current data key. Other temporary files used by TiKV that may include user data are also encrypted using the same data key. Data keys are automatically rotated by TiKV every week by default, but the period is configurable. On key rotation, TiKV does not rewrite all existing files to replace the key; instead, RocksDB compactions are expected to rewrite old data into new data files with the most recent data key, provided that the cluster gets a constant write workload. TiKV keeps track of the key and encryption method used to encrypt each of the files and uses this information to decrypt the content on reads.
Regardless of the data encryption method, data keys are encrypted using AES256 in GCM mode for additional authentication. This requires the master key to be 256 bits (32 bytes) when it is passed in from a file instead of KMS.
To create a key on AWS, follow these steps:
- Go to AWS KMS in the AWS console.
- Make sure that you have selected the correct region in the top-right corner of your console.
- Click Create key and select Symmetric as the key type.
- Set an alias for the key.
You can also perform the operations using the AWS CLI:
```shell
aws --region us-west-2 kms create-key
aws --region us-west-2 kms create-alias --alias-name "alias/tidb-tde" --target-key-id 0987dcba-09fe-87dc-65ba-ab0987654321
```
The value of --target-key-id to enter in the second command is in the output of the first command.
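If you script the key creation, you can capture the key ID directly with the AWS CLI's standard --query and --output options instead of copying it from the JSON output by hand. A minimal sketch:

```shell
# Create the CMK, keep only its key ID, and use it to create the alias.
KEY_ID=$(aws --region us-west-2 kms create-key --query KeyMetadata.KeyId --output text)
aws --region us-west-2 kms create-alias --alias-name "alias/tidb-tde" --target-key-id "$KEY_ID"
echo "Created CMK: $KEY_ID"   # this is the value to put in the TiKV configuration
```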
To enable encryption, you can add the encryption section in the configuration files of TiKV and PD:
```toml
[security.encryption]
data-encryption-method = "aes128-ctr"
data-key-rotation-period = "168h" # 7 days
```
Possible values for data-encryption-method are "aes128-ctr", "aes192-ctr", "aes256-ctr", and "plaintext". The default value is "plaintext", which means encryption is not turned on. data-key-rotation-period defines how often TiKV rotates the data key.

Encryption can be turned on for a fresh TiKV cluster or an existing TiKV cluster, though only data written after encryption is enabled is guaranteed to be encrypted. To disable encryption, remove data-encryption-method in the configuration file, or reset it to "plaintext", and restart TiKV. To change the encryption method, update data-encryption-method in the configuration file and restart TiKV.
The master key has to be specified if encryption is enabled (that is, data-encryption-method is not "plaintext"). To specify an AWS KMS CMK as the master key, add the encryption.master-key section after the encryption section:
[security.encryption.master-key] type = "kms" key-id = "0987dcba-09fe-87dc-65ba-ab0987654321" region = "us-west-2" endpoint = "https://kms.us-west-2.amazonaws.com"
key-id specifies the key ID for the KMS CMK. The region is the AWS region name for the KMS CMK. The endpoint is optional and you normally do not need to specify it unless you are using an AWS KMS-compatible service from a non-AWS vendor or need to use a VPC endpoint for KMS.
You can also use multi-Region keys in AWS. For this, you need to set up a primary key in a specific region and add replica keys in the regions you require.
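A sketch of doing this with the AWS CLI, assuming your installed CLI version supports multi-Region keys (the regions below are arbitrary examples):

```shell
# Create a multi-Region primary key in us-west-2, then replicate it to us-east-1.
PRIMARY_KEY_ID=$(aws --region us-west-2 kms create-key --multi-region --query KeyMetadata.KeyId --output text)
aws --region us-west-2 kms replicate-key --key-id "$PRIMARY_KEY_ID" --replica-region us-east-1
```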
To specify a master key that's stored in a file, the master key configuration would look like the following:
[security.encryption.master-key] type = "file" path = "/path/to/key/file"
path is the path to the key file. The file must contain a 256-bit (32-byte) key encoded as a hex string, end with a newline (\n), and contain nothing else. Example of the file content:
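As an illustration (the output shown is a randomly generated value, not a key you should reuse), you could produce a conforming key file with OpenSSL; openssl rand -hex 32 prints 64 hex characters followed by a newline, which matches the required format:

```shell
# Generate a random 256-bit master key as a single 64-character hex line.
openssl rand -hex 32 > /path/to/key/file
chmod 600 /path/to/key/file
cat /path/to/key/file
# e.g. 3b5896b5be691006e0f71c3040a29495ddcad20b14aff61806940ebd780d3c62
```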
To rotate the master key, you have to specify both the new master key and the old master key in the configuration, and then restart TiKV. Use security.encryption.master-key to specify the new master key, and use security.encryption.previous-master-key to specify the old master key. The configuration format for security.encryption.previous-master-key is the same as that of encryption.master-key. On restart, TiKV must be able to access both the old and the new master keys, but once TiKV is up and running, it only needs access to the new key. It is okay to leave the encryption.previous-master-key configuration in the configuration file from then on; even on restart, TiKV only tries to use the old key if it fails to decrypt existing data using the new master key.
Currently, online master key rotation is not supported, so you need to restart TiKV. It is advised to do a rolling restart of a running TiKV cluster that is serving online queries.
Here is an example configuration for rotating the KMS CMK:
[security.encryption.master-key] type = "kms" key-id = "50a0c603-1c6f-11e6-bb9e-3fadde80ce75" region = "us-west-2" [security.encryption.previous-master-key] type = "kms" key-id = "0987dcba-09fe-87dc-65ba-ab0987654321" region = "us-west-2"
If you deploy TiKV with Grafana, you can monitor encryption at rest by looking at the Encryption panel in the TiKV-Details dashboard. There are a few metrics to look for:
- Encryption initialized: 1 if encryption is initialized during TiKV startup, 0 otherwise. In the case of master key rotation, after encryption is initialized, TiKV does not need access to the previous master key.
- Encryption data keys: the number of existing data keys. The number is bumped by 1 each time data key rotation happens. Use this metric to monitor whether data key rotation works as expected.
- Encrypted files: the number of encrypted data files that currently exist. Compare this number to the number of existing data files in the data directory to estimate the portion of data being encrypted when turning on encryption for a previously unencrypted cluster.
- Encryption meta file size: the size of the encryption metadata files.
- Read/Write encryption meta duration: the extra overhead of operating on the metadata for encryption.
For debugging, the tikv-ctl command can be used to dump encryption metadata, such as the encryption method and the data key ID used to encrypt a file, as well as the list of data keys. Because this operation can expose sensitive data, it is not recommended for use in production. Refer to the TiKV Control document.
To reduce the overhead caused by I/O and mutex contention when TiKV manages the encryption metadata, an optimization is introduced in TiKV v4.0.9 and controlled by security.encryption.enable-file-dictionary-log in the TiKV configuration file. This configuration parameter takes effect only on TiKV v4.0.9 or later versions.
When it is enabled (by default), the data format of the encryption metadata is unrecognizable by TiKV v4.0.8 or earlier versions. For example, assume that you use TiKV v4.0.9 or later with encryption at rest and the default enable-file-dictionary-log configuration. If you downgrade your cluster to TiKV v4.0.8 or earlier, TiKV will fail to start, with an error in the info log similar to the following one:
```
[2020/12/07 07:26:31.106 +08:00] [ERROR] [mod.rs:110] ["encryption: failed to load file dictionary."]
[2020/12/07 07:26:33.598 +08:00] [FATAL] [lib.rs:483] ["called `Result::unwrap()` on an `Err` value: Other(\"[components/encryption/src/encrypted_file/header.rs:18]: unknown version 2\")"]
```
To avoid the error above, you can first set security.encryption.enable-file-dictionary-log to false and start TiKV with v4.0.9 or later. Once TiKV starts successfully, the data format of the encryption metadata is downgraded to the version recognizable by earlier TiKV versions. At this point, you can then downgrade your TiKV cluster to an earlier version.
To enable S3 server-side encryption when backing up to S3 using BR, pass the --s3.sse argument and set the value to "aws:kms". S3 will use its own KMS key for encryption. Example:
./br backup full --pd <pd-address> --storage "s3://<bucket>/<prefix>" --s3.sse aws:kms
To use a custom AWS KMS CMK that you created and own, pass --s3.sse-kms-key-id in addition. In this case, both the BR process and all the TiKV nodes in the cluster need access to the KMS CMK (for example, via AWS IAM), and the KMS CMK needs to be in the same AWS region as the S3 bucket used to store the backup. It is advised to grant access to the KMS CMK to the BR process and TiKV nodes via AWS IAM. Refer to the AWS documentation for usage of IAM. For example:
./br backup full --pd <pd-address> --storage "s3://<bucket>/<prefix>" --s3.region <region> --s3.sse aws:kms --s3.sse-kms-key-id 0987dcba-09fe-87dc-65ba-ab0987654321
When restoring the backup, both --s3.sse and --s3.sse-kms-key-id should NOT be used; S3 will figure out the encryption settings by itself. The BR process and the TiKV nodes in the cluster that the backup is restored to also need access to the KMS CMK, or the restore will fail. Example:
./br restore full --pd <pd-address> --storage "s3://<bucket>/<prefix> --s3.region <region>"