
TiDB FAQs in Kubernetes

This document collects frequently asked questions (FAQs) about the TiDB cluster in Kubernetes.

How to modify time zone settings?

The default time zone setting for each component container of a TiDB cluster in Kubernetes is UTC. To modify this setting, take the steps below based on your cluster status:

  • If you are deploying the cluster for the first time:

    In the values.yaml file of the TiDB cluster, modify the timezone setting. For example, you can set it to timezone: Asia/Shanghai before you deploy the TiDB cluster.

  • If the cluster is running:

    • In the values.yaml file of the TiDB cluster, modify the timezone setting. For example, you can set it to timezone: Asia/Shanghai and then upgrade the TiDB cluster, as sketched after this list.
    • Refer to Time Zone Support to modify TiDB service time zone settings.
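
A minimal sketch of the values.yaml change, assuming a Helm-based deployment (the release name and chart version below are placeholders):

    # values.yaml — excerpt; only the timezone key is shown.
    # Asia/Shanghai is an example; use any valid IANA time zone name.
    timezone: Asia/Shanghai

For a running cluster, apply the change with an upgrade, for example:

    helm upgrade <release-name> pingcap/tidb-cluster -f values.yaml --version=<chart-version>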

Can HPA or VPA be configured on TiDB components?

Currently, the TiDB cluster does not support HPA (Horizontal Pod Autoscaling) or VPA (Vertical Pod Autoscaling), because it is difficult to autoscale stateful applications such as databases. Autoscaling cannot be achieved merely from CPU and memory monitoring data.

What scenarios require manual intervention when I use TiDB Operator to orchestrate a TiDB cluster?

Besides the operation of the Kubernetes cluster itself, the following two scenarios might require manual intervention when you use TiDB Operator:

  • Adjusting the cluster after the auto-failover of TiKV. See Auto-Failover for details;
  • Maintaining or dropping the specified Kubernetes nodes. See Maintaining Nodes for details.

What is the recommended deployment topology when I use TiDB Operator to orchestrate a TiDB cluster on a public cloud?

To achieve high availability and data safety, it is recommended that you deploy the TiDB cluster in at least three availability zones in a production environment.

In terms of the deployment topology between the TiDB cluster and TiDB services, TiDB Operator supports the following three deployment modes. Each mode has its own merits and demerits, so base your choice on your actual application needs.

  • Deploy the TiDB cluster and TiDB services in the same Kubernetes cluster of the same VPC;
  • Deploy the TiDB cluster and TiDB services in different Kubernetes clusters of the same VPC;
  • Deploy the TiDB cluster and TiDB services in different Kubernetes clusters of different VPCs.

Does TiDB Operator support TiSpark?

TiDB Operator does not yet support automatically orchestrating TiSpark.

If you want to add the TiSpark component to TiDB in Kubernetes, you must maintain Spark on your own in the same Kubernetes cluster. You must ensure that Spark can access the IPs and ports of the PD and TiKV instances, and install the TiSpark plugin for Spark. TiSpark offers a detailed guide for installing the TiSpark plugin.
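
As a rough sketch of that wiring, the Spark configuration below points TiSpark at PD through its Kubernetes Service DNS name. The service name basic-pd.tidb-cluster is a hypothetical placeholder; spark.tispark.pd.addresses and spark.sql.extensions are standard TiSpark settings, but verify them against the TiSpark version you install:

    # spark-defaults.conf — hypothetical excerpt; adjust names to your cluster.
    # PD is assumed reachable via its Kubernetes Service on the default port 2379.
    spark.tispark.pd.addresses  basic-pd.tidb-cluster:2379
    # Load the TiSpark extensions (TiSpark 2.x and later).
    spark.sql.extensions        org.apache.spark.sql.TiExtensions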

To maintain Spark in Kubernetes, refer to Spark on Kubernetes.

How to check the configuration of the TiDB cluster?

To check the configuration of the PD, TiKV, and TiDB components of the current cluster, run the following commands:

  • Check the PD configuration file:

    kubectl exec -it <pd-pod-name> -n <namespace> -- cat /etc/pd/pd.toml
  • Check the TiKV configuration file:

    kubectl exec -it <tikv-pod-name> -n <namespace> -- cat /etc/tikv/tikv.toml
  • Check the TiDB configuration file:

    kubectl exec -it <tidb-pod-name> -c tidb -n <namespace> -- cat /etc/tidb/tidb.toml
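
If you are not sure which Pod names to use in the commands above, you can list the Pods of each component by label. The label key below follows the app.kubernetes.io/component convention used by TiDB Operator-managed clusters; verify it against your own deployment:

    # List the PD Pods in the cluster's namespace; use tikv or tidb for the other components.
    kubectl get pods -n <namespace> -l app.kubernetes.io/component=pd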

Why does TiDB Operator fail to schedule Pods when I deploy the TiDB clusters?

Three possible reasons:

  • Insufficient resources or the HA policy causes the Pod to be stuck in the Pending state. Refer to Troubleshoot TiDB in Kubernetes for more details.

  • A taint is applied to some nodes, which prevents the Pod from being scheduled to these nodes unless the Pod has a matching toleration. Refer to taint & toleration for more details. To inspect the taints on your nodes, see the example after this list.

  • A scheduling conflict, which causes the Pod to be stuck in the ContainerCreating state. In such cases, check whether more than one TiDB Operator is deployed in the Kubernetes cluster. Conflicts occur when the custom schedulers of multiple TiDB Operators schedule the same Pod in different phases.

    You can execute the following command to verify whether there is more than one TiDB Operator deployed. If more than one record is returned, delete the extra TiDB Operator to resolve the scheduling conflict.

    kubectl get deployment --all-namespaces | grep tidb-scheduler
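
To inspect the taints mentioned above, the following generic commands use only standard kubectl output options; <pod-name> and <namespace> are placeholders:

    # List the taints configured on every node.
    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
    # Show the tolerations of a pending Pod for comparison.
    kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.tolerations}'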