op-guide: split binary deployment into three sections (prometheus#1025)
* Update binary-deployment.md
* Add local deployment guide
* Add testing guide
morgo authored Apr 8, 2019
1 parent 59a7a26 commit 3bdcd36
Showing 4 changed files with 286 additions and 101 deletions.
5 changes: 4 additions & 1 deletion TOC.md
@@ -89,7 +89,10 @@
+ Deploy
- [Ansible Deployment (Recommended)](op-guide/ansible-deployment.md)
- [Offline Deployment Using Ansible](op-guide/offline-ansible-deployment.md)
- [Binary Deployment](op-guide/binary-deployment.md)
+ Binary Deployment
- [Local Install](op-guide/binary-local-deployment.md)
- [Testing Environment](op-guide/binary-testing-deployment.md)
- [Production Deployment](op-guide/binary-deployment.md)
- [Docker Deployment](op-guide/docker-deployment.md)
- [Docker Compose Deployment](op-guide/docker-compose.md)
- [Cross-DC Deployment Solutions](op-guide/cross-dc-deployment.md)
104 changes: 4 additions & 100 deletions op-guide/binary-deployment.md
@@ -1,18 +1,14 @@
---
title: Deploy TiDB Using the Binary
title: Production Deployment from Binary Tarball
summary: Use the binary to deploy a TiDB cluster.
category: operations
---

# Deploy TiDB Using the Binary
# Production Deployment from Binary Tarball

This guide provides installation instructions from tarball on Linux. A complete TiDB cluster contains PD, TiKV, and TiDB. To start the database service, follow the order of PD -> TiKV -> TiDB. To stop the database service, follow the order of stopping TiDB -> TiKV -> PD.
This guide provides installation instructions from a binary tarball on Linux. A complete TiDB cluster contains PD, TiKV, and TiDB. To start the database service, follow the order of PD -> TiKV -> TiDB. To stop the database service, follow the order of stopping TiDB -> TiKV -> PD.

This document describes the binary deployment of three scenarios:

1. [Single node cluster deployment](#single-node-cluster-deployment) for trying out TiDB.
2. [Multiple nodes cluster deployment for testing](#multiple-nodes-cluster-deployment-for-test) TiDB across multiple nodes and exploring features in more detail.
3. [Multiple nodes cluster deployment](#multiple-nodes-cluster-deployment) for production deployments.
See also the [local deployment](../op-guide/binary-local-deployment.md) and [testing environment](../op-guide/binary-testing-deployment.md) deployment guides.

## Prepare

@@ -121,98 +117,6 @@ $ tar -xzf tidb-latest-linux-amd64.tar.gz
$ cd tidb-latest-linux-amd64
```
## Single node cluster deployment
After downloading the TiDB binary package, you can run and test the TiDB cluster on a standalone server. Follow the steps below to start PD, TiKV and TiDB:
1. Start PD.
```bash
$ ./bin/pd-server --data-dir=pd \
--log-file=pd.log &
```
2. Start TiKV.
```bash
$ ./bin/tikv-server --pd="127.0.0.1:2379" \
--data-dir=tikv \
--log-file=tikv.log &
```
3. Start TiDB.
```bash
$ ./bin/tidb-server --store=tikv \
--path="127.0.0.1:2379" \
--log-file=tidb.log &
```
4. Use the MySQL client to connect to TiDB.
```sh
$ mysql -h 127.0.0.1 -P 4000 -u root -D test
```
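The startup order above (PD, then TiKV, then TiDB) can be scripted. Below is a minimal sketch that waits for each component's default port before starting the next; the `wait_port` helper, the timeouts, and the guard on the binary's presence are illustrative assumptions, not part of the official tooling:

```shell
#!/usr/bin/env bash
# Start PD -> TiKV -> TiDB, waiting for each default port to open.
# wait_port and its timeout are assumptions for illustration.
wait_port() {
  local host="$1" port="$2" tries="${3:-30}"
  for _ in $(seq "$tries"); do
    # bash opens /dev/tcp/<host>/<port> only if something is listening
    (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null && return 0
    sleep 1
  done
  return 1
}

if [ -x ./bin/pd-server ]; then
  ./bin/pd-server --data-dir=pd --log-file=pd.log &
  wait_port 127.0.0.1 2379 || { echo "PD did not come up" >&2; exit 1; }

  ./bin/tikv-server --pd="127.0.0.1:2379" --data-dir=tikv --log-file=tikv.log &
  wait_port 127.0.0.1 20160 || { echo "TiKV did not come up" >&2; exit 1; }

  ./bin/tidb-server --store=tikv --path="127.0.0.1:2379" --log-file=tidb.log &
  wait_port 127.0.0.1 4000 || { echo "TiDB did not come up" >&2; exit 1; }
fi
```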
## Multiple nodes cluster deployment for test
If you want to test TiDB but have a limited number of nodes, you can use one PD instance to test the entire cluster.
Assuming that you have four nodes, you can deploy 1 PD instance, 3 TiKV instances, and 1 TiDB instance. See the following table for details:
| Name | Host IP | Services |
| :-- | :-- | :------------------- |
| Node1 | 192.168.199.113 | PD1, TiDB |
| Node2 | 192.168.199.114 | TiKV1 |
| Node3 | 192.168.199.115 | TiKV2 |
| Node4 | 192.168.199.116 | TiKV3 |
Follow the steps below to start PD, TiKV and TiDB:
1. Start PD on Node1.
```bash
$ ./bin/pd-server --name=pd1 \
--data-dir=pd \
--client-urls="http://192.168.199.113:2379" \
--peer-urls="http://192.168.199.113:2380" \
--initial-cluster="pd1=http://192.168.199.113:2380" \
--log-file=pd.log &
```
2. Start TiKV on Node2, Node3 and Node4.
```bash
$ ./bin/tikv-server --pd="192.168.199.113:2379" \
--addr="192.168.199.114:20160" \
--data-dir=tikv \
--log-file=tikv.log &
$ ./bin/tikv-server --pd="192.168.199.113:2379" \
--addr="192.168.199.115:20160" \
--data-dir=tikv \
--log-file=tikv.log &
$ ./bin/tikv-server --pd="192.168.199.113:2379" \
--addr="192.168.199.116:20160" \
--data-dir=tikv \
--log-file=tikv.log &
```
3. Start TiDB on Node1.
```bash
$ ./bin/tidb-server --store=tikv \
--path="192.168.199.113:2379" \
--log-file=tidb.log &
```
4. Use the MySQL client to connect to TiDB.
```sh
$ mysql -h 192.168.199.113 -P 4000 -u root -D test
```
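With more than one PD instance (as a production deployment would use), every PD node must be started with the same `--initial-cluster` value. A small sketch of assembling that flag from `name=host` pairs; the `build_initial_cluster` helper is a hypothetical convenience, and 2380 is the default peer port:

```shell
#!/usr/bin/env bash
# Build the --initial-cluster value for a multi-PD deployment from
# "name=host" pairs. Hypothetical helper, shown for illustration.
build_initial_cluster() {
  local out="" pair name host
  for pair in "$@"; do
    name="${pair%%=*}"   # text before the first '='
    host="${pair#*=}"    # text after the first '='
    out+="${out:+,}${name}=http://${host}:2380"
  done
  printf '%s\n' "$out"
}

build_initial_cluster pd1=192.168.199.113 pd2=192.168.199.114 pd3=192.168.199.115
# → pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380
```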
## Multiple nodes cluster deployment
For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Requirements](/op-guide/recommendation.md).
101 changes: 101 additions & 0 deletions op-guide/binary-local-deployment.md
@@ -0,0 +1,101 @@
---
title: Local Deployment from Binary Tarball
summary: Use the binary to deploy a TiDB cluster.
category: operations
---

# Local Deployment from Binary Tarball

This guide provides installation instructions for all TiDB components on a single developer machine. It is intended for evaluation purposes, and does not match the recommended usage for production systems.

See also the [testing environment](../op-guide/binary-testing-deployment.md) and [production environment](../op-guide/binary-deployment.md) deployment guides.

The following local TCP ports will be used:

| Component | Port | Protocol | Description |
| --------- | ----- | -------- | ----------- |
| TiDB | 4000 | TCP | the communication port for the application and DBA tools |
| TiDB | 10080 | TCP | the communication port to report TiDB status |
| TiKV | 20160 | TCP | the TiKV communication port |
| PD | 2379 | TCP | the communication port between TiDB and PD |
| PD | 2380 | TCP | the inter-node communication port within the PD cluster |
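Before starting the cluster, you can check that none of these ports is already taken. A minimal sketch using bash's `/dev/tcp` redirection (an assumption; `ss -ltn` would work equally well):

```shell
#!/usr/bin/env bash
# Return 0 if nothing is listening on host:port, 1 otherwise.
port_free() {
  local host="$1" port="$2"
  # the subshell closes fd 3 automatically on exit
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    return 1   # something accepted the connection
  fi
  return 0
}

for port in 4000 10080 20160 2379 2380; do
  if port_free 127.0.0.1 "$port"; then
    echo "port $port is free"
  else
    echo "port $port is already in use" >&2
  fi
done
```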


### Prepare

This guide is for deployment on Linux only. It is recommended to use RHEL/CentOS 7.3 or higher. TiKV requires you to raise the open files limit:

```bash
tidbuser="tidb"

cat << EOF > /tmp/tidb.conf
$tidbuser soft nofile 1000000
$tidbuser hard nofile 1000000
EOF

sudo cp /tmp/tidb.conf /etc/security/limits.d/
sudo sysctl -w fs.file-max=1000000
```
See the [production deployment](../op-guide/binary-deployment.md) guide for optional kernel tuning parameters.
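After logging back in as the `tidb` user, you can confirm the limit took effect. A small check, where the 1000000 threshold mirrors the values configured in the limits file:

```shell
#!/usr/bin/env bash
# Verify the soft open-files limit meets the configured value.
need=1000000
cur="$(ulimit -n)"
# "unlimited" short-circuits before the numeric comparison
if [ "$cur" = "unlimited" ] || [ "$cur" -ge "$need" ] 2>/dev/null; then
  echo "open files limit OK: $cur"
else
  echo "open files limit too low: $cur (want >= $need)" >&2
fi
```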

### Create a database running user account

1. Log in to the machine using the `root` user account and create a database running user account (`tidb`) using the following command:

```bash
# useradd tidb -m
```

2. Switch the user from `root` to `tidb` by using the following command. You can use this `tidb` user account to deploy your TiDB cluster.

```bash
# su - tidb
```

### Download the official binary package

```
# Download the package.
$ wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
$ wget http://download.pingcap.org/tidb-latest-linux-amd64.sha256

# Check the file integrity. If the result is OK, the file is correct.
$ sha256sum -c tidb-latest-linux-amd64.sha256

# Extract the package.
$ tar -xzf tidb-latest-linux-amd64.tar.gz
$ cd tidb-latest-linux-amd64
```
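The `sha256sum -c` step above is what guards against a corrupted download. A self-contained demonstration with a throwaway file (file names here are placeholders, not the real tarball):

```shell
#!/usr/bin/env bash
# Demonstrate the integrity check on a throwaway file.
tmp="$(mktemp -d)"
echo "example payload" > "$tmp/pkg.tar.gz"

# Record the checksum, then verify it the same way the guide does.
( cd "$tmp" && sha256sum pkg.tar.gz > pkg.tar.gz.sha256 )
if ( cd "$tmp" && sha256sum -c pkg.tar.gz.sha256 ); then
  echo "integrity OK"
else
  echo "checksum mismatch, refusing to extract" >&2
fi
rm -rf "$tmp"
```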
### Start
Follow the steps below to start PD, TiKV and TiDB:
1. Start PD.
```bash
$ ./bin/pd-server --data-dir=pd \
--log-file=pd.log &
```
2. Start TiKV.
```bash
$ ./bin/tikv-server --pd="127.0.0.1:2379" \
--data-dir=tikv \
--log-file=tikv.log &
```
3. Start TiDB.
```bash
$ ./bin/tidb-server --store=tikv \
--path="127.0.0.1:2379" \
--log-file=tidb.log &
```
4. Use the MySQL client to connect to TiDB.
```sh
$ mysql -h 127.0.0.1 -P 4000 -u root -D test
```
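To shut the local cluster down, reverse the startup order (stop TiDB, then TiKV, then PD), as the guide requires. A minimal sketch; the `pkill -f` patterns are assumptions and match any process whose command line contains the component name:

```shell
#!/usr/bin/env bash
# Stop the local cluster in reverse startup order: TiDB -> TiKV -> PD.
# pkill -f patterns are assumptions for illustration.
for comp in tidb-server tikv-server pd-server; do
  if pkill -f "$comp"; then
    echo "stopped $comp"
  else
    echo "$comp was not running"
  fi
  sleep 1
done
```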
