example for tiflash (#2327) (#2333)
sre-bot authored Apr 29, 2020
1 parent a786dce commit 1cb1571
Showing 4 changed files with 177 additions and 3 deletions.
6 changes: 3 additions & 3 deletions examples/basic/README.md
@@ -54,10 +54,10 @@ Explore the TiDB sql interface:
Explore the monitoring dashboards:

```bash
-> kubectl -n <namespace> port-forward svc/basic-grafana 4000:4000 &>/tmp/pf-grafana.log &
+> kubectl -n <namespace> port-forward svc/basic-grafana 3000:3000 &>/tmp/pf-grafana.log &
```

-Browse [localhost:4000](http://localhost:4000).
+Browse [localhost:3000](http://localhost:3000).

## Destroy

@@ -68,6 +68,6 @@ Browse [localhost:4000](http://localhost:4000).
The PVCs used by the TiDB cluster will not be deleted in the above process; therefore, the PVs will not be released either. You can delete the PVCs and release the PVs with the following command:

```bash
-> kubectl -n <namespace> delete pvc app.kubernetes.io/instance=basic,app.kubernetes.io,app.kubernetes.io/managed-by=tidb-operator
+> kubectl -n <namespace> delete pvc -l app.kubernetes.io/instance=basic,app.kubernetes.io/managed-by=tidb-operator
```

76 changes: 76 additions & 0 deletions examples/tiflash/README.md
@@ -0,0 +1,76 @@
# A Basic TiDB cluster with TiFlash and Monitoring

> **Note:**
>
> This setup is for testing or demo purposes only and **IS NOT** suitable for critical environments. Refer to the [Documents](https://pingcap.com/docs/stable/tidb-in-kubernetes/deploy/prerequisites/) for a production setup.

The following steps create a TiDB cluster with TiFlash and monitoring deployed.

**Prerequisites**:
- TiDB Operator `v1.1.0-rc.3` or a later version installed. [Doc](https://pingcap.com/docs/stable/tidb-in-kubernetes/deploy/tidb-operator/)
- An available `StorageClass` configured, with enough PVs of that class (by default, 9 PVs are required).

The available `StorageClass` can be checked with the following command:

```bash
> kubectl get storageclass
```

The output is similar to the following:

```bash
NAME                 PROVISIONER                    AGE
standard (default)   kubernetes.io/gce-pd           1d
gold                 kubernetes.io/gce-pd           1d
local-storage        kubernetes.io/no-provisioner   189d
```
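
If you rely on statically provisioned volumes (as with `local-storage`), you can also check that enough unbound PVs of the chosen class exist. A quick sketch, assuming the `local-storage` class from the output above:

```bash
> kubectl get pv -o wide | grep local-storage | grep Available
```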

The default `storageClassName` in `tidb-cluster.yaml` and `tidb-monitor.yaml` is set to `local-storage`; please update it in both files to a storageClass available in your cluster.
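
For example, a minimal way to switch both manifests to another class (here a hypothetical `standard` class; GNU `sed` assumed) is:

```bash
> sed -i 's/local-storage/standard/g' tidb-cluster.yaml tidb-monitor.yaml
```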

## Install

The following commands are assumed to be executed in this directory.

Install the cluster:

```bash
> kubectl create ns <namespace>
> kubectl -n <namespace> apply -f ./
```

Wait for the cluster Pods to become ready:

```bash
> watch kubectl -n <namespace> get pod
```
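
Alternatively, a sketch that blocks until every Pod in the namespace reports `Ready` (the timeout is arbitrary):

```bash
> kubectl -n <namespace> wait --for=condition=Ready pod --all --timeout=10m
```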

## Explore

Explore the TiDB SQL interface:

```bash
> kubectl -n <namespace> port-forward svc/demo-tidb 4000:4000 &>/tmp/pf-tidb.log &
> mysql -h 127.0.0.1 -P 4000 -u root
```
Refer to the [doc](https://pingcap.com/docs/stable/reference/tiflash/use-tiflash/) to try TiFlash.
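
As a minimal sketch of what the doc describes (assuming a table `test.t` already exists), you can add a TiFlash replica and watch its sync progress from the SQL interface:

```bash
> mysql -h 127.0.0.1 -P 4000 -u root -e "ALTER TABLE test.t SET TIFLASH REPLICA 1"
> mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT * FROM information_schema.tiflash_replica WHERE table_schema = 'test' AND table_name = 't'"
```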

Explore the monitoring dashboards:

```bash
> kubectl -n <namespace> port-forward svc/demo-grafana 3000:3000 &>/tmp/pf-grafana.log &
```

Browse [localhost:3000](http://localhost:3000).
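
One optional way to confirm the port-forward is up before opening a browser (Grafana exposes a health endpoint):

```bash
> curl -s http://localhost:3000/api/health
```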

## Destroy

```bash
> kubectl -n <namespace> delete -f ./
```

The PVCs used by the TiDB cluster will not be deleted by the above command; therefore, the PVs will not be released either. You can delete the PVCs and release the PVs with the following command:

```bash
> kubectl -n <namespace> delete pvc -l app.kubernetes.io/instance=demo,app.kubernetes.io/managed-by=tidb-operator
```
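
To double-check that nothing is left behind (optional), list the PVCs with the same label selector; the command should return no resources:

```bash
> kubectl -n <namespace> get pvc -l app.kubernetes.io/instance=demo,app.kubernetes.io/managed-by=tidb-operator
```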

73 changes: 73 additions & 0 deletions examples/tiflash/tidb-cluster.yaml
@@ -0,0 +1,73 @@
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: demo
spec:
  configUpdateStrategy: RollingUpdate
  enablePVReclaim: false
  imagePullPolicy: IfNotPresent
  pd:
    baseImage: pingcap/pd
    config:
      log:
        level: info
      replication:
        enable-placement-rules: "true"
        location-labels:
        - zone
        - host
        max-replicas: 3
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    replicas: 3
    requests:
      storage: 10Gi
    storageClassName: local-storage
  pvReclaimPolicy: Delete
  schedulerName: tidb-scheduler
  services:
  - name: pd
    type: ClusterIP
  tidb:
    baseImage: pingcap/tidb
    config:
      log:
        file:
          max-backups: 3
        level: info
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    replicas: 2
    separateSlowLog: true
    service:
      type: NodePort
    slowLogTailer:
      image: busybox:1.26.2
      imagePullPolicy: IfNotPresent
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 20m
        memory: 5Mi
  tiflash:
    baseImage: pingcap/tiflash
    maxFailoverCount: 3
    replicas: 2
    storageClaims:
    - resources:
        requests:
          storage: 10Gi
      storageClassName: local-storage
  tikv:
    baseImage: pingcap/tikv
    config:
      log-level: info
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    replicas: 3
    requests:
      storage: 10Gi
    storageClassName: local-storage
  timezone: UTC
  version: v4.0.0-rc
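
Note that `enable-placement-rules: "true"` in the PD config is what allows TiFlash replicas to be scheduled. Once the cluster is running, the TiFlash Pods can be listed with a label selector (the component label follows the usual tidb-operator conventions, so treat this as a sketch):

```bash
> kubectl -n <namespace> get pod -l app.kubernetes.io/instance=demo,app.kubernetes.io/component=tiflash
```
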
25 changes: 25 additions & 0 deletions examples/tiflash/tidb-monitor.yaml
@@ -0,0 +1,25 @@
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: demo
spec:
  clusters:
  - name: demo
  prometheus:
    baseImage: prom/prometheus
    version: v2.11.1
  grafana:
    baseImage: grafana/grafana
    version: 6.0.1
    service:
      type: NodePort
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v4.0.0-rc
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1
  persistent: true
  imagePullPolicy: IfNotPresent
  storage: 10Gi
  storageClassName: local-storage
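
Because the Grafana Service is of type `NodePort`, it can also be reached without a port-forward; a sketch for looking up the assigned node port (assuming the Service keeps the `demo-grafana` name used above):

```bash
> kubectl -n <namespace> get svc demo-grafana -o jsonpath='{.spec.ports[0].nodePort}'
```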
