DOCS: Fixed all the suggestions
HarshCasper committed Dec 15, 2021
1 parent d386eb3 commit 831db78
Showing 11 changed files with 70 additions and 115 deletions.
6 changes: 3 additions & 3 deletions docs/source/admin_guide/backup.md
@@ -1,4 +1,4 @@
# Manual Backups
# Manual backups

Your cloud provider may have native ways to backup your Kubernetes cluster and volumes.

@@ -10,7 +10,7 @@ There are three main locations that you need to backup:
2. The Keycloak user/group database
3. The JupyterHub database (for Dashboard configuration)

## Network File System
## Network file system

This amounts to:

@@ -162,7 +162,7 @@ gsutil cp gs://<your_bucket_name>/backups/2021-04-23.tar .
Similar instructions, but use Digital Ocean spaces. This guide explains installation of the command-line tool:
https://www.digitalocean.com/community/tutorials/how-to-migrate-from-amazon-s3-to-digitalocean-spaces-with-rclone
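However the archive is retrieved, restoring it is essentially an extraction step into the NFS share. A minimal sketch, assuming the backup tarball from the example above and a hypothetical mount point of `/data` for the share:

```shell
# Assumption: the NFS share is mounted at /data (adjust to the actual mount point)
tar -xvf 2021-04-23.tar -C /data
```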

## Keycloak User/Group Database
## Keycloak user/group database

QHub provides a simple script to export the important user/group database. Your new QHub cluster will recreate a lot of Keycloak config (including new Keycloak clients which will have new secrets), so only the high-level Group and User info is exported.

8 changes: 4 additions & 4 deletions docs/source/admin_guide/troubleshooting.md
@@ -43,18 +43,18 @@ After completing these steps, `kubectl` should be able to access the cluster.

#### Debug your Kubernetes cluster

[K9](https://k9scli.io/) is a terminal-based UI to manage Kubernetes clusters that aims to simplify navigating, observing, and managing your applications in K8. K9 continuously monitors Kubernetes clusters for changes and provides shortcut commands to interact with the observed resources becoming a fast way to review and resolve day-to-day issues in Kubernetes. It's definitely a huge improvement to the general workflow, and a best-to-have tool for debugging your Kubernetes cluster sessions.
[`k9s`](https://k9scli.io/) is a terminal-based UI to manage Kubernetes clusters that aims to simplify navigating, observing, and managing your applications in `k8s`. `k9s` continuously monitors Kubernetes clusters for changes and provides shortcut commands to interact with the observed resources, becoming a fast way to review and resolve day-to-day issues in Kubernetes. It's a big improvement to the general workflow, and a must-have tool for debugging your Kubernetes cluster sessions.

Installation can be done on macOS, Windows, and Linux. Instructions for each operating system can be found [here](https://github.com/derailed/k9s). Complete the installation to follow along.
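The linked instructions cover several install methods; as one common route (an assumption, not a QHub requirement), the package managers below provide `k9s` directly, after which it can be pointed at the Minikube context used in this guide:

```shell
# macOS, via Homebrew
brew install k9s

# Windows, via Chocolatey
choco install k9s

# Launch k9s against the current kubectl context (Minikube in this guide)
k9s --context minikube
```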

By default, K9 starts with the standard directory that's set as the context (in this case Minikube). To view all the current process press `0`:
By default, `k9s` starts with the standard directory that's set as the context (in this case Minikube). To view all the current processes, press `0`:

![Image of K9 terminal UI](../images/k9s_UI.png)
![Image of the `k9s` terminal UI](../images/k9s_UI.png)

> **NOTE**: In some circumstances you will need to inspect services launched by your cluster at your ‘localhost’. For instance, if your cluster has a problem
with the network traffic tunnel configuration, it may limit or block the user's access to destination resources over the connection.

K9 port-forward option <kbd>shift</kbd> + <kbd>f</kbd> allows you to access and interact with internal Kubernetes cluster processes from your localhost you can then use this method to investigate issues and adjust your services locally without the need to expose them beforehand.
The `k9s` port-forward option <kbd>shift</kbd> + <kbd>f</kbd> allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can then use this method to investigate issues and adjust your services locally without the need to expose them beforehand.

---

2 changes: 1 addition & 1 deletion docs/source/admin_guide/upgrade.md
@@ -30,7 +30,7 @@ qhub upgrade -c qhub-config.yaml

This will output a newer version of qhub-config.yaml that's compatible with the new version of qhub. The process will list any changes it has made. It will also tell you where it has stored a backup of the original file.

If you are deploying QHub from your local machine (that'sn't using CI/CD) then you will now have a qhub-config.yaml file that you can use to `qhub deploy -c qhub-config.yaml` through the latest version of the QHub command package.
If you are deploying QHub from your local machine (not using CI/CD) then you will now have a qhub-config.yaml file that you can use to `qhub deploy -c qhub-config.yaml` through the latest version of the QHub command package.
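Putting those steps together, a typical local (non-CI/CD) upgrade session might look like the sketch below; the `pip` step assumes the `qhub` package was originally installed with pip:

```shell
# Upgrade the qhub package itself (assumes a pip-based install)
pip install --upgrade qhub

# Rewrite qhub-config.yaml for the new version; a backup of the original is saved
qhub upgrade -c qhub-config.yaml

# Redeploy using the upgraded configuration
qhub deploy -c qhub-config.yaml
```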

## Special customizations

19 changes: 9 additions & 10 deletions docs/source/dev_guide/minikube.md
@@ -4,8 +4,7 @@

It's possible to run QHub on Minikube, and this can allow quicker feedback loops for development, as well as being less expensive than running cloud Kubernetes clusters.

Local testing is a great way to test the components of QHub. It's important to highlight that while It's possible to test most of QHub
with this version, components that are Cloud provisioned such as VPCs, managed Kubernetes cluster and managed container registries can't be locally tested, due to their Cloud dependencies.
Local testing is a great way to test the components of QHub. It's important to highlight that while it's possible to test most of QHub with this version, components that are Cloud provisioned, such as VPCs, managed Kubernetes clusters, and managed container registries, can't be locally tested due to their Cloud dependencies.

## Compatibility

@@ -161,7 +160,7 @@ An example subnet range looks like `192.168.49.2/24`. This CIDR range has a start

For this example case, the user assigns `metallb` a start IP address of `192.168.49.100` and an end of `192.168.49.150`.

The user can the `metallb` below command-line tool interface which prompts for the start and stop IP range:
The user can enable `metallb` as shown below. The command-line tool interface prompts the user for the start and stop IP range:

```shell
minikube addons configure metallb
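# The configure step prompts for the address range worked out above; for the example
# subnet this would be 192.168.49.100 as the start IP and 192.168.49.150 as the end IP.
# Once configured, enable the addon:
minikube addons enable metallb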
@@ -186,7 +185,7 @@ The output should be `The 'metallb' addon is enabled`.
<details>
<summary>Click to expand note</summary>

The browser can have trouble reaching the load balancer running on WSL2. A workaround is to port forward the proxy-pod to the host IP 0.0.0.0. Get the ip address of the WSL2 machine via ```ip a```, which should be a 127.x.x.x address. To change the port forwarding after opening k9 you can type ```:pods <enter>```, hover over the proxy-... pod and type ```<shift-s>```, and enter the ip addresses.
The browser can have trouble reaching the load balancer running on WSL2. A workaround is to port forward the proxy-pod to the host IP 0.0.0.0. Get the IP address of the WSL2 machine via `ip a`, which should be a 127.x.x.x address. To change the port forwarding after opening `k9s`, you can type `:pods <enter>`, hover over the proxy-... pod, type `<shift-s>`, and enter the IP addresses.

</details>

Expand Down Expand Up @@ -253,7 +252,7 @@ curl -k https://github-actions.qhub.dev/hub/login

It's also possible to visit `https://github-actions.qhub.dev` in your web browser to select the deployment.

Since this is a local deployment, hence It's not visible to the internet; `https` certificates isn't signed by [Let's Encrypt](https://letsencrypt.org/). Thus, the certificates is [self-signed by Traefik](https://en.wikipedia.org/wiki/Self-signed_certificate).
Since this is a local deployment, it's not visible to the internet; the `https` certificates aren't signed by [Let's Encrypt](https://letsencrypt.org/). Thus, the certificates are [self-signed by Traefik](https://en.wikipedia.org/wiki/Self-signed_certificate).

Several browsers make it difficult to view a self-signed certificate that's not added to the certificate registry.

@@ -284,7 +283,7 @@ The command deletes all instances of QHub, cleaning up the deployment environment.

# Minikube on Mac

The earlier instructions for minikube on Linux _nearly_ works on Mac except things that break without clever use of port forwarding at the right times.
The earlier instructions for Minikube on Linux _nearly_ work on Mac, except for a few things that break without clever use of port forwarding at the right times.

1 - When working out the IP addresses to configure metallb try this:
```
@@ -363,7 +362,7 @@ This should show all instances, so work out which one you need if there are multiple.

Using the instance ID you obtained just above (for example `i-01bd8a4ee6016e1fe`), first query for the 'security GroupSet ID' (for example `sg-96f73feb`).

Then use that to open up port 22 for the security group (and hence for the instance). Multiple instances running in this security group, is exposed on Port 22.
Then use that to open up port 22 for the security group (and hence for the instance). If multiple instances are running in this security group, all of them are now exposed on port 22.

```bash
aws ec2 describe-instance-attribute --instance-id i-01bd8a4ee6016e1fe --attribute groupSet
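# Follow-up sketch: once the security group ID is known (sg-96f73feb in this example),
# authorize inbound SSH on port 22. The wide-open 0.0.0.0/0 CIDR is illustrative only;
# restrict it to your own IP where possible.
aws ec2 authorize-security-group-ingress --group-id sg-96f73feb --protocol tcp --port 22 --cidr 0.0.0.0/0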
@@ -443,7 +442,7 @@ sed -i -E 's/(cpu_guarantee):\s+[0-9\.]+/\1: 1/g' "qhub-config.yaml"
sed -i -E 's/(mem_guarantee):\s+[A-Za-z0-9\.]+/\1: 1G/g' "qhub-config.yaml"
```

The last two commands preceding reduce slightly the memory and CPU requirements of JupyterLab sessions etc. Make any other changes needed to the qhub-config.yaml file.
The preceding two commands slightly reduce the memory and CPU requirements of JupyterLab sessions etc. Make any other changes needed to the `qhub-config.yaml` file.
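To confirm the substitutions landed as expected, a quick check of the edited keys is enough (a simple sketch; the key names come from the `sed` commands above):

```shell
grep -E 'cpu_guarantee|mem_guarantee' qhub-config.yaml
```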

Then deploy:

@@ -453,7 +452,7 @@ qhub deploy --config qhub-config.yaml --disable-prompt

## Enable Kubernetes access from Mac

This step is optional, but allows you to use kubectl and K9 directly from your Mac. It's not needed if you are satisfied to use kubectl within an SSH session on AWS instead.
This step is optional, but allows you to use `kubectl` and `k9s` directly from your Mac. It's not needed if you are happy to use `kubectl` within an SSH session on AWS instead.

On your Mac laptop:

@@ -525,6 +524,6 @@ And then the users can add an extra port forward when they SSH into their AWS instance:
sudo ssh -i ~/.ssh/${MYKEYNAME}.pem ubuntu@ec2-35-177-109-173.eu-west-2.compute.amazonaws.com -L 127.0.0.1:8443:192.168.49.2:8443 -L github-actions.qhub.dev:443:192.168.49.100:443
```

This is executed with the sudo access because It's desired to forward a low-numbered port, like 443, which is otherwise not allowed.
This is executed with `sudo` privileges because forwarding a low-numbered port, like 443, is not allowed otherwise.
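The second forward also assumes that `github-actions.qhub.dev` resolves to the loopback address on the Mac; one common way to arrange that, assumed here, is a hosts-file entry:

```shell
# Map the QHub hostname to localhost so the forwarded port is reachable by name
echo "127.0.0.1 github-actions.qhub.dev" | sudo tee -a /etc/hosts
```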

Now you can access https://github-actions.qhub.dev/ in a browser and you should be able to use your QHub. You have to bypass the self-signed cert warnings though - see [verify the local deployment](#verify-the-local-deployment) for instructions.
9 changes: 4 additions & 5 deletions docs/source/dev_guide/testing.md
@@ -71,9 +71,9 @@ Hadolint will report `error`, `warning`, `info` and `style` while linting Docker

## Debug Kubernetes clusters

To debug Kubernetes clusters, we advise you to use [K9](https://k9scli.io/), a terminal-based UI that aims to simplify navigation, observation, and management of applications in Kubernetes. K9 continuously monitors Kubernetes clusters for changes and provides shortcut commands to interact with the observed resources becoming a fast way to review and resolve day-to-day issues in deployed clusters.
To debug Kubernetes clusters, check out [`k9s`](https://k9scli.io/), a terminal-based UI that aims to simplify navigation, observation, and management of applications in Kubernetes. `k9s` continuously monitors Kubernetes clusters for changes and provides shortcut commands to interact with the observed resources, becoming a fast way to review and resolve day-to-day issues in deployed clusters.

Installation can be done on a macOS, in Windows, and Linux and instructions can be found [here](https://github.com/derailed/k9s). For more details on usage, check out the [Troubleshooting documentation](https://docs.qhub.dev/en/stable/source/admin_guide/troubleshooting.html#debug-your-kubernetes-cluster).
Installation can be done on macOS, Windows, and Linux; instructions are located [here](https://github.com/derailed/k9s). For more details on usage, review the [Troubleshooting documentation](https://docs.qhub.dev/en/stable/source/admin_guide/troubleshooting.html#debug-your-kubernetes-cluster).

## Cypress Tests

@@ -90,8 +90,7 @@ export CYPRESS_EXAMPLE_USER_PASSWORD=<password>
npm run cypress:open
```

The Base URL can point anywhere that should be accessible - it can be the URL of a QHub cloud deployment. The QHub Config Path should point to the associated yaml file for that site. Most importantly, the tests will inspect the yaml file to understand what tests are relevant. To start with, it checks security.authentication.type to determine what should be available on the login page, and how to test it. If the login type is 'password' then it uses the value in CYPRESS_EXAMPLE_USER_PASSWORD as the password (default username is
`example-user` but this can be changed by setting CYPRESS_EXAMPLE_USER_NAME).
The Base URL can point anywhere that should be accessible - it can be the URL of a QHub cloud deployment. The QHub Config Path should point to the associated yaml file for that site. Most importantly, the tests will inspect the yaml file to understand what tests are relevant. To start with, it checks `security.authentication.type` to determine what should be available on the login page, and how to test it. If the login type is 'password' then it uses the value in `CYPRESS_EXAMPLE_USER_PASSWORD` as the password (default username is `example-user` but this can be changed by setting `CYPRESS_EXAMPLE_USER_NAME`).

The final command, in the preceding code-snippet, opens the Cypress UI where you can run the tests manually and see the actions in the browser.
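As a concrete sketch of that flow, the environment variables named above can be exported before launching Cypress; the values here are placeholders:

```shell
export CYPRESS_EXAMPLE_USER_NAME=example-user
export CYPRESS_EXAMPLE_USER_PASSWORD=<password>
npm run cypress:open
```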

@@ -108,7 +107,7 @@ pytest tests_deployment/ -v

# Cloud Testing

Cloud testing on aws, gcp, azure, and digital ocean can be significantly more complicated and time consuming. But It's the only way to truly test the cloud deployments, including infrastructure, of course. To test on cloud Kubernetes, just deploy qhub in the normal way on those clouds, but using the [linked pip install](./index.md) of the qhub package.
Cloud testing on AWS, GCP, Azure, and Digital Ocean can be significantly more complicated and time consuming. But it's the only way to truly test the cloud deployments, including infrastructure, of course. To test on cloud Kubernetes, just deploy QHub in the normal way on those clouds, but using the [linked pip install](./index.md) of the QHub package.

Even with the dev install of the qhub package, you may find that the deployed cluster doesn't actually reflect any development changes, for example to the Docker images for JupyterHub or JupyterLab. That will be because your qhub-config.yaml references fully released versions. See [Using a development branch](#using-a-development-branch) above for how to encourage the Docker images to be specified based on the latest development code.

8 changes: 4 additions & 4 deletions docs/source/installation/configuration.md
@@ -211,7 +211,7 @@ security:

#### Password based authentication

For Password based authentication. Ultimately, this just defers to however Keycloak is configured. That's also true for GitHub/Auth0 cases, except that for the single-sign on providers the deployment will also configure those providers in Keycloak to save manual configuration. But ultimately, It's also possible to add GitHub, or Google etc, as an Identity Provider in Keycloak even if you formally select 'password' authentication in the `qhub-config.yaml` file.
Password-based authentication ultimately just defers to however Keycloak is configured. That's also true for the GitHub/Auth0 cases, except that for the single sign-on providers the deployment will also configure those providers in Keycloak to save manual configuration. It's also possible to add GitHub, or Google etc., as an Identity Provider in Keycloak even if you formally select 'password' authentication in the `qhub-config.yaml` file.

```yaml
security:
@@ -307,7 +307,7 @@ and **Kubernetes versions** will be DIFFERENT. [duplicated info]
To take advantage of the auto-scaling and dask-distributed computing capabilities,
QHub can be deployed on a handful of the most commonly used cloud providers. QHub
utilizes many of the resources these cloud providers have to offer, however,
at It's core, is the Kubernetes engine (or service). Each cloud provider has slightly
at its core, is the Kubernetes engine (or service). Each cloud provider has slightly
different ways Kubernetes is configured but fear not, all of this is handled by QHub.

Listed below are the cloud providers QHub currently supports.
@@ -602,7 +602,7 @@ When configuring the memory and cpus for profiles there are some
important considerations to make. Two important terms to understand are:
- `limit`: the absolute max memory that a given pod can consume. If a
process within the pod consumes more than the `limit` memory the
linux OS will kill the process. LimIt's not used for scheduling
linux OS will kill the process. LimIt is not used for scheduling
purposes with kubernetes.
- `guarantee`: is the amount of memory the kubernetes scheduler uses
to place a given pod. In general the `guarantee` will be less than
@@ -751,7 +751,7 @@ configuration for the environment to appear.

## qhub_version

All qhub-config.yaml files must now contain a `qhub_version` field displaying the version of QHub which It's intended to be deployed with.
All `qhub-config.yaml` files must now contain a `qhub_version` field displaying the version of QHub which it's intended to be deployed with.

QHub will refuse to deploy if it doesn't contain the same version as that of the `qhub` command.
