
Build a Distributed TiDB System

2022/08/08

Background

TiDB is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. Please refer to the official documentation for details.

Server preparation

We use several ECS instances to build a real distributed system, instead of running all the services on a single server.

If you want to simulate a production deployment on a single machine, refer to the official quick start documentation.

Ensure the following:

  • All ECS instances can communicate with each other through their firewalls (a quick reachability check is sketched after the IP list below)
  • You can log in to all servers as root

In this article, we use three ECS instances as an example. Their IP addresses are as follows:

10.2.103.149
10.2.103.81
10.2.103.43
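
To sanity-check the first requirement, a minimal reachability test like the one below can be run from each instance. This is only a rough sketch using the example IPs above: ICMP being allowed does not guarantee the TCP ports TiDB needs are open, and vice versa.

for ip in 10.2.103.149 10.2.103.81 10.2.103.43; do
  ping -c 1 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip unreachable"
done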

Log in to ECS

We log in to all ECS instances with public keys; the key pair is stored as ~/.ssh/jinshan and ~/.ssh/jinshan.pub.

ssh -i ~/.ssh/jinshan_rsa root@10.2.103.43
# ssh -i ~/.ssh/jinshan_rsa root@10.2.103.149
# ssh -i ~/.ssh/jinshan_rsa root@10.2.103.81

SSH mutual trust

Log in to each target machine using the root user account, create the tidb user, and set its login password.

useradd tidb && \
passwd tidb

To configure passwordless sudo, run the following command and add tidb ALL=(ALL) NOPASSWD: ALL to the end of the file:

visudo
# add this line at the end of the file:
tidb ALL=(ALL) NOPASSWD: ALL
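
To confirm that passwordless sudo works before going further, a quick check can be run on the target machine as root (a sketch only; sudo -n fails instead of prompting if a password would still be required):

su - tidb -c "sudo -n whoami"
# expected output: root, printed without any password prompt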

Use the tidb user to log in to the control machine and run the following commands. Replace 10.2.103.43 with the IP address of your target machine, and enter the tidb user's password on that machine when prompted. Once the commands finish, SSH mutual trust to that machine is established; repeat the ssh-copy-id step for the other machines. A newly created tidb user has no .ssh directory; generating the RSA key creates it. If TiDB components will also be deployed on the control machine, configure mutual trust from the control machine to itself as well.

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub 10.2.103.43
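
The same ssh-copy-id step covers the other targets too; a small loop over the three example IPs from this article (including the control machine itself) saves some typing, prompting for the tidb password of each machine in turn:

for ip in 10.2.103.149 10.2.103.81 10.2.103.43; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$ip"
done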

Log in to the control machine using the tidb user account, then log in to the target machine's IP address using ssh. If you can log in successfully without entering a password, SSH mutual trust is configured correctly.

ssh 10.2.103.43

Attention: if you have problems copying keys remotely, log in to the target machine and append the public key to ~/.ssh/authorized_keys manually.
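
For example, assuming the key pair generated above, the manual steps would look roughly like this (the key string below is a placeholder; paste the real content of id_rsa.pub):

# on the control machine, print the tidb user's public key
cat ~/.ssh/id_rsa.pub
# on the target machine, as the tidb user, append that key by hand
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-rsa AAAA... tidb@control" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys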


Others

For further preparation, please refer to the TiDB Environment and System Configuration Check.

Install TiUP

Starting with TiDB 4.0, TiUP, as the package manager, makes it far easier to manage different cluster components in the TiDB ecosystem. Now you can run any component with a single TiUP command. You can refer to the TiUP documentation for details.

Install the package

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

Reload the shell profile

The above command installs TiUP in the $HOME/.tiup folder. The installed components and the data generated by their operation are also placed in this folder. The command also automatically adds $HOME/.tiup/bin to the PATH environment variable in the shell profile file, so after reloading the profile you can use TiUP directly.
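
A typical way to pick up the change in the current session and verify the installation is shown below; the exact profile file depends on your shell, and the install script prints which file it modified:

source ~/.bash_profile   # or ~/.profile / ~/.zshrc, depending on your shell
tiup --version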

Deploy clusters

Write the configuration file

Referring to complex-multi-instance.yaml and the TiUP documentation, we write the YAML like this:


  ## Global variables are applied to all deployments and used as the default value of
  ## the deployments if a specific deployment value is missing.
  global:
    user: "tidb"
    ssh_port: 22
    deploy_dir: "/tidb-deploy"
    data_dir: "/tidb-data"
  monitored:
    node_exporter_port: 9100
    blackbox_exporter_port: 9115
    deploy_dir: "/tidb-deploy/monitored-9100"
    data_dir: "/tidb-data-monitored-9100"
    log_dir: "/tidb-deploy/monitored-9100/log"
  server_configs:
    tidb:
      log.slow-threshold: 300
    tikv:
      readpool.unified.max-thread-count: 1
      readpool.storage.use-unified-pool: true
      readpool.coprocessor.use-unified-pool: true
      storage.block-cache.capacity: 8GB
      raftstore.capacity: 250GB
    pd:
      replication.location-labels: ["resource_pool", "host"]
      schedule.leader-schedule-limit: 4
      schedule.region-schedule-limit: 2048
      schedule.replica-schedule-limit: 64
  pd_servers:
    - host: 10.2.103.43
    - host: 10.2.103.81
    - host: 10.2.103.149
  tidb_servers:
    - host: 10.2.103.43
      port: 4000
      status_port: 10080
      deploy_dir: "/tidb-deploy/tidb-4000"
      log_dir: "/tidb-deploy/tidb-4000/log"
      # numa_node: "0"
    - host: 10.2.103.43
      port: 4001
      status_port: 10081
      deploy_dir: "/tidb-deploy/tidb-4001"
      log_dir: "/tidb-deploy/tidb-4001/log"
      # numa_node: "1"
    - host: 10.2.103.81
      port: 4000
      status_port: 10080
      deploy_dir: "/tidb-deploy/tidb-4000"
      log_dir: "/tidb-deploy/tidb-4000/log"
      # numa_node: "0"
    - host: 10.2.103.81
      port: 4001
      status_port: 10081
      deploy_dir: "/tidb-deploy/tidb-4001"
      log_dir: "/tidb-deploy/tidb-4001/log"
      # numa_node: "1"
    - host: 10.2.103.149
      port: 4000
      status_port: 10080
      deploy_dir: "/tidb-deploy/tidb-4000"
      log_dir: "/tidb-deploy/tidb-4000/log"
      # numa_node: "0"
    - host: 10.2.103.149
      port: 4001
      status_port: 10081
      deploy_dir: "/tidb-deploy/tidb-4001"
      log_dir: "/tidb-deploy/tidb-4001/log"
      # numa_node: "1"
  tikv_servers:
    - host: 10.2.103.43
      port: 20160
      status_port: 20180
      deploy_dir: "/tidb-deploy/tikv-20160"
      data_dir: "/tidb-data/tikv-20160"
      log_dir: "/tidb-deploy/tikv-20160/log"
      # numa_node: "0"
      config:
        server.labels: { host: "tikv1" ,resource_pool: "pool1"}
    - host: 10.2.103.43
      port: 20161
      status_port: 20181
      deploy_dir: "/tidb-deploy/tikv-20161"
      data_dir: "/tidb-data/tikv-20161"
      log_dir: "/tidb-deploy/tikv-20161/log"
      # numa_node: "1"
      config:
        server.labels: { host: "tikv1" ,resource_pool: "pool2"}
    - host: 10.2.103.81
      port: 20160
      status_port: 20180
      deploy_dir: "/tidb-deploy/tikv-20160"
      data_dir: "/tidb-data/tikv-20160"
      log_dir: "/tidb-deploy/tikv-20160/log"
      # numa_node: "0"
      config:
        server.labels: { host: "tikv2" ,resource_pool: "pool1"}
    - host: 10.2.103.81
      port: 20161
      status_port: 20181
      deploy_dir: "/tidb-deploy/tikv-20161"
      data_dir: "/tidb-data/tikv-20161"
      log_dir: "/tidb-deploy/tikv-20161/log"
      # numa_node: "1"
      config:
        server.labels: { host: "tikv2" ,resource_pool: "pool2"}
    - host: 10.2.103.149
      port: 20160
      status_port: 20180
      deploy_dir: "/tidb-deploy/tikv-20160"
      data_dir: "/tidb-data/tikv-20160"
      log_dir: "/tidb-deploy/tikv-20160/log"
      # numa_node: "0"
      config:
        server.labels: { host: "tikv3" ,resource_pool: "pool1"}
    - host: 10.2.103.149
      port: 20161
      status_port: 20181
      deploy_dir: "/tidb-deploy/tikv-20161"
      data_dir: "/tidb-data/tikv-20161"
      log_dir: "/tidb-deploy/tikv-20161/log"
      # numa_node: "1"
      config:
        server.labels: { host: "tikv3",resource_pool: "pool2" }
  monitoring_servers:
    - host: 10.2.103.43
      # ssh_port: 22
      # port: 9090
      # deploy_dir: "/tidb-deploy/prometheus-8249"
      # data_dir: "/tidb-data/prometheus-8249"
      # log_dir: "/tidb-deploy/prometheus-8249/log"
  grafana_servers:
    - host: 10.2.103.43
      # port: 3000
      # deploy_dir: /tidb-deploy/grafana-3000
  alertmanager_servers:
    - host: 10.2.103.43
      # ssh_port: 22
      # web_port: 9093
      # cluster_port: 9094
      # deploy_dir: "/tidb-deploy/alertmanager-9093"
      # data_dir: "/tidb-data/alertmanager-9093"
      # log_dir: "/tidb-deploy/alertmanager-9093/log"

Check and deploy

# check the target machines and automatically fix the issues that can be fixed
tiup cluster check ./complex-multi-instance.yaml --apply --user tidb -i /home/tidb/.ssh/id_rsa
# deploy the cluster named "xiabee" with TiDB v6.1.0
tiup cluster deploy xiabee v6.1.0 ./complex-multi-instance.yaml --user tidb -i /home/tidb/.ssh/id_rsa
# start the cluster and initialize it
tiup cluster start xiabee --init

Refer to the deployment documentation for details.


After initialization, you will see the password of the root user.
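
As a quick smoke test (assuming a MySQL client is installed on the control machine), you can connect to any of the tidb_servers defined in the topology with that root password:

mysql -h 10.2.103.43 -P 4000 -u root -p
# enter the root password printed by "tiup cluster start xiabee --init"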


Display clusters

# display all cluster names
tiup cluster list
# display a specific cluster's details
tiup cluster display xiabee


Then check the TiDB Dashboard to see the topology.
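
The dashboard address is part of the display output; a quick way to locate it (assuming your TiUP version prints a "Dashboard URL" line, as recent versions do):

tiup cluster display xiabee | grep -i dashboard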


References

https://docs.pingcap.com/tidb/stable/overview

https://docs.pingcap.com/tidb/stable/check-before-deployment

https://docs.pingcap.com/tidb/stable/tiup-overview

https://docs.pingcap.com/tidb/stable/production-deployment-using-tiup
