From 831db78c9d6af3af778b890c832923ec95065af5 Mon Sep 17 00:00:00 2001
From: HarshCasper <erbeusgriffincasper@gmail.com>
Date: Wed, 15 Dec 2021 20:41:07 +0530
Subject: [PATCH] DOCS: Fixed all the suggestions

---
 docs/source/admin_guide/backup.md          |   6 +-
 docs/source/admin_guide/troubleshooting.md |   8 +-
 docs/source/admin_guide/upgrade.md         |   2 +-
 docs/source/dev_guide/minikube.md          |  19 ++--
 docs/source/dev_guide/testing.md           |   9 +-
 docs/source/installation/configuration.md  |   8 +-
 docs/source/installation/setup.md          | 124 +++++++--------------
 docs/source/user_guide/dask_gateway.md     |   4 +-
 docs/source/user_guide/faq.md              |   2 +-
 tests/vale/styles/Google/Headings.yml      |   1 +
 tests/vale/styles/Google/Units.yml         |   2 +-
 11 files changed, 70 insertions(+), 115 deletions(-)

diff --git a/docs/source/admin_guide/backup.md b/docs/source/admin_guide/backup.md
index 1de8a0fe5a..8dc94a025b 100644
--- a/docs/source/admin_guide/backup.md
+++ b/docs/source/admin_guide/backup.md
@@ -1,4 +1,4 @@
-# Manual Backups
+# Manual backups
 
 Your cloud provider may have native ways to backup your Kubernetes cluster and volumes.
 
@@ -10,7 +10,7 @@ There are three main locations that you need to backup:
 2. The Keycloak user/group database
 3. The JupyterHub database (for Dashboard configuration)
 
-## Network File System
+## Network file system
 
 This amounts to:
 
@@ -162,7 +162,7 @@ gsutil cp gs://<your_bucket_name>/backups/2021-04-23.tar .
 Similar instructions, but use Digital Ocean spaces. This guide explains installation of the command-line tool:
 https://www.digitalocean.com/community/tutorials/how-to-migrate-from-amazon-s3-to-digitalocean-spaces-with-rclone
 
-## Keycloak User/Group Database
+## Keycloak user/group database
 
 QHub provides a simple script to export the important user/group database. Your new QHub cluster will recreate a lot of Keycloak config (including new Keycloak clients which will have new secrets), so only the high-level Group and User info is exported.
 
diff --git a/docs/source/admin_guide/troubleshooting.md b/docs/source/admin_guide/troubleshooting.md
index 3f71a07740..ba4c7e3af2 100644
--- a/docs/source/admin_guide/troubleshooting.md
+++ b/docs/source/admin_guide/troubleshooting.md
@@ -43,18 +43,18 @@ After completing these steps. `kubectl` should be able to access the cluster.
 
 #### Debug your Kubernetes cluster
 
-[K9](https://k9scli.io/) is a terminal-based UI to manage Kubernetes clusters that aims to simplify navigating, observing, and managing your applications in K8. K9 continuously monitors Kubernetes clusters for changes and provides shortcut commands to interact with the observed resources becoming a fast way to review and resolve day-to-day issues in Kubernetes. It's definitely a huge improvement to the general workflow, and a best-to-have tool for debugging your Kubernetes cluster sessions.
+[`k9s`](https://k9scli.io/) is a terminal-based UI to manage Kubernetes clusters that aims to simplify navigating, observing, and managing your applications in `k8s`. `k9s` continuously monitors Kubernetes clusters for changes and provides shortcut commands to interact with the observed resources, making it a fast way to review and resolve day-to-day issues in Kubernetes. It's a huge improvement to the general workflow, and a must-have tool for debugging your Kubernetes cluster sessions.
 
 Installation can be done on macOS, Windows, and Linux. Instructions for each operating system can be found [here](https://github.com/derailed/k9s). Complete the installation to follow along.
 
-By default, K9 starts with the standard directory that's set as the context (in this case Minikube). To view all the current process press `0`:
+By default, `k9s` starts with the standard directory that's set as the context (in this case Minikube). To view all the current processes, press `0`:
 
-![Image of K9 terminal UI](../images/k9s_UI.png)
+![Image of the `k9s` terminal UI](../images/k9s_UI.png)
 
 > **NOTE**: In some circumstances you will be confronted with the need to inspect any services launched by your cluster at your ‘localhost’. For instance, if your cluster has problem
 with the network traffic tunnel configuration, it may limit or block the user's access to destination resources over the connection.
 
-K9 port-forward option <kbd>shift</kbd> + <kbd>f</kbd> allows you to access and interact with internal Kubernetes cluster processes from your localhost you can then use this method to investigate issues and adjust your services locally without the need to expose them beforehand.
+The `k9s` port-forward option <kbd>shift</kbd> + <kbd>f</kbd> allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can then use this method to investigate issues and adjust your services locally without the need to expose them beforehand.
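+
+For reference, a minimal sketch of the equivalent manual `kubectl` command, assuming a service named `proxy-public` in the `dev` namespace (both names are hypothetical here; substitute your own service and namespace):
+
+```shell
+# Forward local port 8080 to port 80 of the selected service
+kubectl port-forward svc/proxy-public 8080:80 -n dev
+```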
 
 ---
 
diff --git a/docs/source/admin_guide/upgrade.md b/docs/source/admin_guide/upgrade.md
index 022416c828..2ca2c2e73f 100644
--- a/docs/source/admin_guide/upgrade.md
+++ b/docs/source/admin_guide/upgrade.md
@@ -30,7 +30,7 @@ qhub upgrade -c qhub-config.yaml
 
 This will output a newer version of qhub-config.yaml that's compatible with the new version of qhub. The process will list any changes it has made. It will also tell you where it has stored a backup of the original file.
 
-If you are deploying QHub from your local machine (that'sn't using CI/CD) then you will now have a qhub-config.yaml file that you can use to `qhub deploy -c qhub-config.yaml` through the latest version of the QHub command package.
+If you are deploying QHub from your local machine (not using CI/CD), you will now have a `qhub-config.yaml` file that you can use to run `qhub deploy -c qhub-config.yaml` with the latest version of the QHub command package.
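+
+For example, assuming QHub was installed with `pip`, a minimal redeploy sequence might look like:
+
+```shell
+pip install --upgrade qhub          # pick up the latest QHub command package
+qhub deploy -c qhub-config.yaml     # redeploy using the upgraded configuration
+```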
 
 ## Special customizations
 
diff --git a/docs/source/dev_guide/minikube.md b/docs/source/dev_guide/minikube.md
index ba264005b5..b01616935a 100644
--- a/docs/source/dev_guide/minikube.md
+++ b/docs/source/dev_guide/minikube.md
@@ -4,8 +4,7 @@
 
 It's possible to run QHub on Minikube, and this can allow quicker feedback loops for development, as well as being less expensive than running cloud Kubernetes clusters.
 
-Local testing is a great way to test the components of QHub. It's important to highlight that while It's possible to test most of QHub
-with this version, components that are Cloud provisioned such as VPCs, managed Kubernetes cluster and managed container registries can't be locally tested, due to their Cloud dependencies.
+Local testing is a great way to test the components of QHub. It's important to highlight that while it's possible to test most of QHub with this version, components that are Cloud provisioned, such as VPCs, managed Kubernetes clusters, and managed container registries, can't be tested locally due to their Cloud dependencies.
 
 ## Compatibility
 
@@ -161,7 +160,7 @@ A example subnet range looks like `192.168.49.2/24`. This CIDR range has a start
 
 For this example case, the user assigns `metallb` a start IP address of `192.168.49.100` and an end of `192.168.49.150`.
 
-The user can the `metallb` below command-line tool interface which prompts for the start and stop IP range:
+The user can enable `metallb` as shown below. The command-line tool interface prompts the user for the start and stop IP range:
 
 ```shell
 minikube addons configure metallb
@@ -186,7 +185,7 @@ The output should be `The 'metallb' addon is enabled`.
 <details>
   <summary>Click to expand note</summary>
 
-The browser can have trouble reaching the load balancer running on WSL2. A workaround is to port forward the proxy-pod to the host IP 0.0.0.0. Get the ip address of the WSL2 machine via ```ip a```, which should be a 127.x.x.x address. To change the port forwarding after opening k9 you can type ```:pods <enter>```, hover over the proxy-... pod and type ```<shift-s>```, and enter the ip addresses.
+The browser can have trouble reaching the load balancer running on WSL2. A workaround is to port forward the proxy-pod to the host IP 0.0.0.0. Get the IP address of the WSL2 machine via ```ip a```, which should be a 127.x.x.x address. To change the port forwarding after opening `k9s` you can type ```:pods <enter>```, hover over the proxy-... pod and type ```<shift-s>```, and enter the IP addresses.
 
 </details>
 
@@ -253,7 +252,7 @@ curl -k https://github-actions.qhub.dev/hub/login
 
 It's also possible to visit `https://github-actions.qhub.dev` in your web browser to select the deployment.
 
-Since this is a local deployment, hence It's not visible to the internet; `https` certificates isn't signed by [Let's Encrypt](https://letsencrypt.org/). Thus, the certificates is [self-signed by Traefik](https://en.wikipedia.org/wiki/Self-signed_certificate).
+Since this is a local deployment, it's not visible to the internet; the `https` certificates aren't signed by [Let's Encrypt](https://letsencrypt.org/). Thus, the certificates are [self-signed by Traefik](https://en.wikipedia.org/wiki/Self-signed_certificate).
 
 Several browsers makes it difficult to view a self-signed certificate that's not added to the certificate registry.
 
@@ -284,7 +283,7 @@ The command deletes all instances of QHub, cleaning up the deployment environmen
 
 # Minikube on Mac
 
-The earlier instructions for minikube on Linux _nearly_ works on Mac except things that break without clever use of port forwarding at the right times.
+The earlier instructions for Minikube on Linux _nearly_ work on Mac, except for a few things that break without clever use of port forwarding at the right times.
 
 1 - When working out the IP addresses to configure metallb try this:
 ```
@@ -363,7 +362,7 @@ This should show all instances, so work out which one you need if there are mult
 
 Using the instance ID you obtained just preceding (for example `i-01bd8a4ee6016e1fe`), use that to first query for the 'security GroupSet ID' (for example `sg-96f73feb`).
 
-Then use that to open up port 22 for the security group (and hence for the instance). Multiple instances running in this security group, is exposed on Port 22.
+Then use that to open up port 22 for the security group (and hence for the instance). Any other instances running in this security group are now also exposed on port 22.
 
 ```bash
 aws ec2 describe-instance-attribute --instance-id i-01bd8a4ee6016e1fe --attribute groupSet
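+
+# A hypothetical follow-up, with the group ID and source CIDR adjusted to your setup:
+# open port 22 on that security group.
+aws ec2 authorize-security-group-ingress --group-id sg-96f73feb --protocol tcp --port 22 --cidr 0.0.0.0/0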
@@ -443,7 +442,7 @@ sed -i -E 's/(cpu_guarantee):\s+[0-9\.]+/\1: 1/g' "qhub-config.yaml"
 sed -i -E 's/(mem_guarantee):\s+[A-Za-z0-9\.]+/\1: 1G/g' "qhub-config.yaml"
 ```
 
-The last two commands preceding reduce slightly the memory and CPU requirements of JupyterLab sessions etc. Make any other changes needed to the qhub-config.yaml file.
+The preceding two commands slightly reduce the memory and CPU requirements of JupyterLab sessions and so on. Make any other changes needed to the `qhub-config.yaml` file.
 
 Then deploy:
 
@@ -453,7 +452,7 @@ qhub deploy --config qhub-config.yaml --disable-prompt
 
 ## Enable Kubernetes access from Mac
 
-This step is optional, but allows you to use kubectl and K9 directly from your Mac. It's not needed if you are satisfied to use kubectl within an SSH session on AWS instead.
+This step is optional, but allows you to use `kubectl` and `k9s` directly from your Mac. It's not needed if you are satisfied to use `kubectl` within an SSH session on AWS instead.
 
 On your Mac laptop:
 
@@ -525,6 +524,6 @@ And then the users can add an extra port forward when they SSH into their AWS in
 sudo ssh -i ~/.ssh/${MYKEYNAME}.pem ubuntu@ec2-35-177-109-173.eu-west-2.compute.amazonaws.com -L 127.0.0.1:8443:192.168.49.2:8443 -L github-actions.qhub.dev:443:192.168.49.100:443
 ```
 
-This is executed with the sudo access because It's desired to forward a low-numbered port, like 443, which is otherwise not allowed.
+This is executed with `sudo` privileges because forwarding a low-numbered port, like 443, is not allowed otherwise.
 
 Now you can access https://github-actions.qhub.dev/ in a browser and you should be able to use your QHub. You have to bypass the self-signed cert warnings though - see [verify the local deployment](#verify-the-local-deployment) for instructions.
diff --git a/docs/source/dev_guide/testing.md b/docs/source/dev_guide/testing.md
index 5f57c70baf..eacadebccf 100644
--- a/docs/source/dev_guide/testing.md
+++ b/docs/source/dev_guide/testing.md
@@ -71,9 +71,9 @@ Hadolint will report `error`, `warning`, `info` and `style` while linting Docker
 
 ## Debug Kubernetes clusters
 
-To debug Kubernetes clusters, we advise you to use [K9](https://k9scli.io/), a terminal-based UI that aims to simplify navigation, observation, and management of applications in Kubernetes. K9 continuously monitors Kubernetes clusters for changes and provides shortcut commands to interact with the observed resources becoming a fast way to review and resolve day-to-day issues in deployed clusters.
+To debug Kubernetes clusters, check out [`k9s`](https://k9scli.io/), a terminal-based UI that aims to simplify navigation, observation, and management of applications in Kubernetes. `k9s` continuously monitors Kubernetes clusters for changes and provides shortcut commands to interact with the observed resources, making it a fast way to review and resolve day-to-day issues in deployed clusters.
 
-Installation can be done on a macOS, in Windows, and Linux and instructions can be found [here](https://github.com/derailed/k9s). For more details on usage, check out the [Troubleshooting documentation](https://docs.qhub.dev/en/stable/source/admin_guide/troubleshooting.html#debug-your-kubernetes-cluster).
+Installation can be done on macOS, Windows, and Linux; instructions are located [here](https://github.com/derailed/k9s). For more details on usage, review the [Troubleshooting documentation](https://docs.qhub.dev/en/stable/source/admin_guide/troubleshooting.html#debug-your-kubernetes-cluster).
 
 ## Cypress Tests
 
@@ -90,8 +90,7 @@ export CYPRESS_EXAMPLE_USER_PASSWORD=<password>
 npm run cypress:open
 ```
 
-The Base URL can point anywhere that should be accessible - it can be the URL of a QHub cloud deployment. The QHub Config Path should point to the associated yaml file for that site. Most importantly, the tests will inspect the yaml file to understand what tests are relevant. To start with, it checks security.authentication.type to determine what should be available on the login page, and  how to test it. If the login type is 'password' then it uses the value in CYPRESS_EXAMPLE_USER_PASSWORD as the password (default username is
-`example-user` but this can be changed by setting CYPRESS_EXAMPLE_USER_NAME).
+The Base URL can point anywhere that should be accessible - it can be the URL of a QHub cloud deployment. The QHub Config Path should point to the associated YAML file for that site. Most importantly, the tests will inspect the YAML file to understand what tests are relevant. To start with, it checks `security.authentication.type` to determine what should be available on the login page, and how to test it. If the login type is 'password' then it uses the value in `CYPRESS_EXAMPLE_USER_PASSWORD` as the password (default username is `example-user` but this can be changed by setting `CYPRESS_EXAMPLE_USER_NAME`).
 
 The final command, in the preceding code-snippet, opens the Cypress UI where you can run the tests manually and see the actions in the browser.
 
@@ -108,7 +107,7 @@ pytest tests_deployment/ -v
 
 # Cloud Testing
 
-Cloud testing on aws, gcp, azure, and digital ocean can be significantly more complicated and time consuming. But It's the only way to truly test the cloud deployments, including infrastructure, of course. To test on cloud Kubernetes, just deploy qhub in the normal way on those clouds, but using the [linked pip install](./index.md) of the qhub package.
+Cloud testing on AWS, GCP, Azure, and Digital Ocean can be significantly more complicated and time-consuming. But it's the only way to truly test the cloud deployments, including infrastructure, of course. To test on cloud Kubernetes, just deploy QHub in the normal way on those clouds, but using the [linked pip install](./index.md) of the QHub package.
 
 Even with the dev install of the qhub package, you may find that the deployed cluster doesn't actually reflect any development changes, for example to the Docker images for JupyterHub or JupyterLab. That will be because your qhub-config.yaml references fully released versions. See [Using a development branch](#using-a-development-branch) above for how to encourage the Docker images to be specified based on the latest development code.
 
diff --git a/docs/source/installation/configuration.md b/docs/source/installation/configuration.md
index f664c0fa54..33c2a93866 100644
--- a/docs/source/installation/configuration.md
+++ b/docs/source/installation/configuration.md
@@ -211,7 +211,7 @@ security:
 
 #### Password based authentication
 
-For Password based authentication. Ultimately, this just defers to however Keycloak is configured. That's also true for GitHub/Auth0 cases, except that for the single-sign on providers the deployment will also configure those providers in Keycloak to save manual configuration. But ultimately, It's also possible to add GitHub, or Google etc, as an Identity Provider in Keycloak even if you formally select 'password' authentication in the `qhub-config.yaml` file.
+For password-based authentication, QHub ultimately just defers to however Keycloak is configured. That's also true for the GitHub/Auth0 cases, except that for the single-sign-on providers the deployment will also configure those providers in Keycloak to save manual configuration. But ultimately, it's also possible to add GitHub, or Google etc., as an Identity Provider in Keycloak even if you formally select 'password' authentication in the `qhub-config.yaml` file.
 
 ```yaml
 security:
@@ -307,7 +307,7 @@ and **Kubernetes versions** will be DIFFERENT. [duplicated info]
 To take advantage of the auto-scaling and dask-distributed computing capabilities,
 QHub can be deployed on a handful of the most commonly used cloud providers. QHub
 utilizes many of the resources these cloud providers have to offer, however,
-at It's core, is the Kubernetes engine (or service). Each cloud provider has slightly
+at its core, is the Kubernetes engine (or service). Each cloud provider has slightly
 different ways Kubernetes is configured but fear not, all of this is handled by QHub.
 
 Listed below are the cloud providers QHub currently supports.
@@ -602,7 +602,7 @@ When configuring the memory and cpus for profiles there are some
 important considerations to make. Two important terms to understand are:
  - `limit`: the absolute max memory that a given pod can consume. If a
    process within the pod consumes more than the `limit` memory the
-   linux OS will kill the process. LimIt's not used for scheduling
+   Linux OS will kill the process. The `limit` is not used for scheduling
    purposes with kubernetes.
  - `guarantee`: is the amount of memory the kubernetes scheduler uses
    to place a given pod. In general the `guarantee` will be less than
@@ -751,7 +751,7 @@ configuration for the environment to appear.
 
 ## qhub_version
 
-All qhub-config.yaml files must now contain a `qhub_version` field displaying the version of QHub which It's intended to be deployed with.
+All `qhub-config.yaml` files must now contain a `qhub_version` field displaying the version of QHub with which they're intended to be deployed.
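+
+For example, a minimal sketch (the version number below is illustrative):
+
+```yaml
+# qhub-config.yaml
+qhub_version: 0.4.0
+```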
 
 QHub will refuse to deploy if it doesn't contain the same version as that of the `qhub` command.
 
diff --git a/docs/source/installation/setup.md b/docs/source/installation/setup.md
index 0a087e2a39..8983d0a603 100644
--- a/docs/source/installation/setup.md
+++ b/docs/source/installation/setup.md
@@ -1,27 +1,17 @@
 # Setup Initialization
 
-QHub handles the initial setup and management of configurable data
-science environments, allowing users to attain seamless deployment
-with Github Actions or GitLab Workflows.
-
-QHub will be deployed on a cloud of your choice (AWS, Google Cloud, Azure, or Digital Ocean), first preparing requisite cloud infrastructure including a Kubernetes cluster.
+QHub handles the initial setup and management of configurable data science environments, allowing users to attain seamless deployment with GitHub Actions or GitLab Workflows. QHub will be deployed on a cloud of your choice (AWS, Google Cloud, Azure, or Digital Ocean), first preparing requisite cloud infrastructure including a Kubernetes cluster.
 
 It's suitable for most use cases, especially when:
+
 - You require scalable infrastructure
 - You aim to have a production environment with administration managed via simple configuration stored in git
 
-QHub requires a choice of [Cloud
-provider](#cloud-provider), [authentication (using Auth0, GitHub, or
-password based)](#authentication), [domain
-registration](#domain-registry), and CI provider (GitHub Actions).
+QHub requires a choice of [Cloud provider](#cloud-provider), [authentication (using Auth0, GitHub, or password based)](#authentication), [domain registration](#domain-registry), and CI provider (GitHub Actions).
 
-These services require global [environment
-variables](https://linuxize.com/post/how-to-set-and-list-environment-variables-in-linux/)
-that once set up will trigger QHub's automatic deploy using GitHub
-Actions.
+These services require global [environment variables](https://linuxize.com/post/how-to-set-and-list-environment-variables-in-linux/) that, once set up, will trigger QHub's automatic deploy using GitHub Actions.
 
-To find and set the environment variables, follow the steps described
-in the subsections below.
+To find and set the environment variables, follow the steps described in the subsections below.
 
 > NOTE: **Other QHub approaches**
 >
@@ -34,49 +24,39 @@ in the subsections below.
 
 ## Cloud Provider
 
-The first required step is to **choose a Cloud Provider to host the
-project deployment**. The cloud installation will be within a new Kubernetes cluster,
-but knowledge of Kubernetes is **NOT** required nor is in depth
-knowledge about the specific provider required either. QHub supports
-[Amazon AWS](#amazon-web-services-aws),
-[DigitalOcean](#digital-ocean), [GCP](#google-cloud-platform), and
-[Azure](#microsoft-azure).
+The first required step is to **choose a Cloud Provider to host the project deployment**. The cloud installation will be within a new Kubernetes cluster, but knowledge of Kubernetes is **NOT** required, nor is in-depth
+knowledge about the specific provider. QHub supports [Amazon AWS](#amazon-web-services-aws), [DigitalOcean](#digital-ocean), [GCP](#google-cloud-platform), and [Azure](#microsoft-azure).
 
-To deploy QHub, all access keys require fairly wide permissions to
-create all the resources. Hence, once the Cloud provider has been
-chosen, follow the steps below and set the environment variables as
-specified with **owner/admin** level permissions.
+To deploy QHub, all access keys require fairly wide permissions to create all the resources. Hence, once the Cloud provider has been chosen, follow the steps below and set the environment variables as specified with **owner/admin** level permissions.
 
 You will need to tell `qhub init` which cloud provider you have chosen, in the [Usage](usage.md) section, and this must correspond with the environment variables set for your chosen cloud as below:
 
 ### Amazon Web Services (AWS)
+
 <details><summary>Click for AWS configuration instructions </summary>
 
-Please see these instructions for [creating an IAM
-role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html)
-with administrator permissions. Upon generation, the IAM role will provide a public **access
+Please see these instructions for [creating an IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html) with administrator permissions. Upon generation, the IAM role will provide a public **access
 key ID** and a **secret key** which will need to be added to the environment variables.
 
 To define the environment variables paste the commands below with your respective keys.
 
-```bash
+```shell
 export AWS_ACCESS_KEY_ID="HAKUNAMATATA"
 export AWS_SECRET_ACCESS_KEY="iNtheJUng1etheMightyJUNgleTHEl10N51eEpsT0n1ghy;"
 ```
 </details>
 
 ### Digital Ocean
+
 <details><summary>Click to expand DigitalOcean configuration directions </summary>
 
-Please see these instructions for [creating a Digital Ocean
-token](https://www.digitalocean.com/docs/apis-clis/api/create-personal-access-token/). In
-addition to a `token`, a `spaces key` (similar to AWS S3) credentials are also required. Follow the instructions on the
-[official docs](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key) for more information.
+Please see these instructions for [creating a Digital Ocean token](https://www.digitalocean.com/docs/apis-clis/api/create-personal-access-token/). In addition to a `token`, `spaces key` credentials (similar to AWS S3) are also required. Follow the instructions on the [official docs](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key) for more information.
+
+> Note: DigitalOcean's permissions model isn't as fine-grained as that of the other supported Cloud providers.
 
 Set the required environment variables as specified below:
 
-```bash
+```shell
 export DIGITALOCEAN_TOKEN=""          # API token required to generate resources
 export SPACES_ACCESS_KEY_ID=""        # public access key for access spaces
 export SPACES_SECRET_ACCESS_KEY=""    # the private key for access spaces
@@ -89,19 +69,18 @@ export AWS_SECRET_ACCESS_KEY=""       # set this variable identical to `SPACES_S
 
 <details><summary>Click for CGP configuration specs </summary>
 
-Follow [these detailed instructions](https://cloud.google.com/iam/docs/creating-managing-service-accounts) to create a
-Google Service Account with **owner level** permissions. Then, follow the steps described on the official
-[GCP docs](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-console)
-to create and download a JSON credentials file. Store this credentials file in a well known location and make sure to
-set yourself exclusive permissions.
+Follow [these detailed instructions](https://cloud.google.com/iam/docs/creating-managing-service-accounts) to create a Google Service Account with **owner level** permissions. Then, follow the steps described on the official
+[GCP docs](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-console) to create and download a JSON credentials file. Store this credentials file in a well-known location and make sure to set yourself exclusive permissions.
 
 You can change the file permissions by running the command `chmod 600 <filename>` on your terminal.
 
 In this case the environment variables will be such as follows:
-```bash
+
+```shell
 export GOOGLE_CREDENTIALS="path/to/JSON/file/with/credentials"
 export PROJECT_ID="projectIDName"
 ```
+
 > NOTE: the [`PROJECT_ID` variable](https://cloud.google.com/resource-manager/docs/creating-managing-projects) can be
 > found at the Google Console homepage, under `Project info`.
 </details>
@@ -110,14 +89,15 @@ export PROJECT_ID="projectIDName"
 
 <details><summary>Click for Azure configuration details </summary>
 
-Follow [these instructions](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#creating-a-service-principal-in-the-azure-portal)
-to create a Service Principal in the Azure Portal. After completing the steps described on the link, set the following environment variables such as below:
-```bash
+Follow [these instructions](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#creating-a-service-principal-in-the-azure-portal) to create a Service Principal in the Azure Portal. After completing the steps described in the link, set the following environment variables:
+
+```shell
 export ARM_CLIENT_ID=""           # application (client) ID
 export ARM_CLIENT_SECRET=""       # client's secret
 export ARM_SUBSCRIPTION_ID=""     # value available at the `Subscription` section under the `Overview` tab
 export ARM_TENANT_ID=""           # field available under `Azure Active Directories` > `Properties` > `Tenant ID`
 ```
+
 > NOTE 1: Having trouble finding your Subscription ID? [Azure's official docs](https://docs.microsoft.com/en-us/azure/media-services/latest/how-to-set-azure-subscription?tabs=portal)
 > might help.
 
@@ -127,9 +107,7 @@ export ARM_TENANT_ID=""           # field available under `Azure Active Director
 
 ## Authentication
 
-User identity in QHub is now managed within Keycloak which is a robust and highly flexible open source identity and access management solution. A Keycloak instance will be deployed inside your QHub.
-
-It can be configured to work with many OAuth2 identity providers, it can federate users from existing databases (such as LDAP), or it can be used as a simple database of username/passwords.
+User identity in QHub is now managed within Keycloak, which is a robust and highly flexible open source identity and access management solution. A Keycloak instance will be deployed inside your QHub. It can be configured to work with many OAuth2 identity providers, it can federate users from existing databases (such as LDAP), or it can be used as a simple database of usernames/passwords.
 
 The full extent of possible configuration can't be covered here, so we provide three simple options that can be configured automatically by QHub when it sets up your new platform. These options are basic passwords, GitHub single-sign on, or Auth0 single-sign on (which in turn can be configured to allow identity to be provided by social login etc).
 
@@ -139,11 +117,8 @@ You will actually instruct `qhub init` which method you have chosen when you mov
 
 <details><summary>Click for Auth0 configuration details </summary>
 
-Auth0 is a great choice to enable flexible authentication via multiple
-providers. To create the necessary access tokens you will need to have
-an [Auth0](https://auth0.com/) account and be logged in. [Directions
-for creating an Auth0
-application](https://auth0.com/docs/applications/set-up-an-application/register-machine-to-machine-applications).
+Auth0 is a great choice to enable flexible authentication via multiple providers. To create the necessary access tokens you will need to have an [Auth0](https://auth0.com/) account and be logged in. See the [directions
+for creating an Auth0 application](https://auth0.com/docs/applications/set-up-an-application/register-machine-to-machine-applications).
 
 - Click on the `Applications` button on the left
 - Select `Create Application` > `Machine to Machine Applications` > `Auth0 Management API` from the dropdown menu
@@ -175,7 +150,7 @@ No environment variables are needed for this - you will be given the relevant in
 
 In the [Usage](usage.md) section, you will need to run `qhub init` (this only ever needs to be run once - it creates your configuration YAML file) and then `qhub deploy` to set up the cloud infrastructure and deploy QHub for the first time.
 
-For subsequent deployments, It's possible to run `qhub deploy` again in exactly the same way, providing the configuration YAML file as you would the first time. However, it's also possible to automate future deployments using 'DevOps' - the configuration YAML file stored in git will trigger automatic redeployment whenever it's edited.
+For subsequent deployments, it's possible to run `qhub deploy` again in exactly the same way, providing the configuration YAML file as you would the first time. However, it's also possible to automate future deployments using 'DevOps' - the configuration YAML file stored in git will trigger automatic redeployment whenever it's edited.
 
 This DevOps approach can be provided by GitHub Actions or GitLab Workflows. As for the other choices, you will only need to specify the CI/CD provider when you come to run `qhub init`, but you may need to set relevant environment variables unless you choose 'none' because you plan to always redeploy manually.
 
@@ -183,20 +158,10 @@ This DevOps approach can be provided by GitHub Actions or GitLab Workflows. As f
 
 <details><summary>Click for GitHub Actions configuration details </summary>
 
-QHub uses GitHub Actions to enable [Infrastructure as
-Code](https://en.wikipedia.org/wiki/Infrastructure_as_code) and
-trigger the CI/CD checks on the configuration file that automatically
-generates the deployment modules for the infrastructure. To
-do that, it will be necessary to set the GitHub username and token as
-environment variables. First create a github personal access token via
-[these
-instructions](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token). The
-token needs permissions to create a repo and create secrets on the
-repo. At the moment we don't have the permissions well scoped out so
-to be on the safe side enable all permissions.
-
- - `GITHUB_USERNAME`: your GitHub username
- - `GITHUB_TOKEN`: token generated by GitHub
+QHub uses GitHub Actions to enable [Infrastructure as Code](https://en.wikipedia.org/wiki/Infrastructure_as_code) and trigger the CI/CD checks on the configuration file that automatically generates the deployment modules for the infrastructure. To do that, it will be necessary to set the GitHub username and token as environment variables. First create a GitHub personal access token via [these instructions](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token). The token needs permissions to create a repo and create secrets on the repo. At the moment we don't have the permissions well scoped out so, to be on the safe side, enable all permissions.
+
+ - `GITHUB_USERNAME`: GitHub username
+ - `GITHUB_TOKEN`: GitHub-generated token
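+
+Set them such as below (both values are placeholders):
+
+```shell
+export GITHUB_USERNAME="qhub-user"          # your GitHub username
+export GITHUB_TOKEN="ghp_exampletoken123"   # the personal access token generated above
+```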
 </details>
 
 ### GitLab
@@ -212,28 +177,18 @@ After initial deploy, the documentation should tell you when to commit your conf
 
 ## Domain registry
 
-Finally, you will need to have a domain name for hosting QHub. This
-domain will be where your application will be exposed.
+Finally, you will need to have a domain name for hosting QHub. This domain will be where your application is exposed.
 
-Currently, QHub only supports CloudFlare for automatic DNS
-registration. If an alternate DNS provider is desired, change the
-`--dns-provider` flag from `cloudflare` to `none` on the `qhub deploy`
-command. The deployment then will be paused when it asks for an IP
-address (or CNAME, if using AWS) and prompt to register the desired
-URL. Setting a DNS record heavily depends on the provider thus It's
-not possible to have detailed docs on how to create a record on your
-provider. Googling `setting <A/CNAME> record on <provider name>`
-should yield good results on doing it for your specific provider.
+Currently, QHub only supports Cloudflare for automatic DNS registration. If an alternate DNS provider is desired, change the `--dns-provider` flag from `cloudflare` to `none` on the `qhub deploy` command. The deployment will then pause when it asks for an IP address (or CNAME, if using AWS) and prompt you to register the desired URL. Setting a DNS record heavily depends on the provider, thus it's not possible to have detailed docs on how to create a record on your provider. Googling `setting <A/CNAME> record on <provider name>` should yield good results for your specific provider.
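+
+For example, a deploy that skips automatic DNS registration might look like (using the conventional configuration file name):
+
+```shell
+qhub deploy -c qhub-config.yaml --dns-provider none
+```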
 
 ### Cloudflare
 
 <details><summary>Click for Cloudflare configuration details </summary>
 
-QHub supports Cloudflare as a DNS provider. If you choose to use Cloudflare, first
-create an account, then there are two possible following options:
-1. You can either register your application domain name on it, using the
-[Cloudflare nameserver](https://support.cloudflare.com/hc/en-us/articles/205195708-Changing-your-domain-nameservers-to-Cloudflare)
-(recommended), or
+QHub supports Cloudflare as a DNS provider. If you choose to use Cloudflare, first create an account; then there are two possible options:
+
+1. You can register your application domain name on it, using the [Cloudflare nameserver](https://support.cloudflare.com/hc/en-us/articles/205195708-Changing-your-domain-nameservers-to-Cloudflare)
+(recommended).
 2. You can outright buy a new domain with Cloudflare (this action isn't particularly recommended).
 
 To generate a token [follow these steps](https://developers.cloudflare.com/api/tokens/create):
@@ -252,7 +207,8 @@ To generate a token [follow these steps](https://developers.cloudflare.com/api/t
 - Click on the `Create Token` button and set the token generated as an environment variable on your machine.
 
 Finally, set the environment variable such as:
-```bash
+
+```shell
  export CLOUDFLARE_TOKEN="cloudflaretokenvalue"
 ```
 
diff --git a/docs/source/user_guide/dask_gateway.md b/docs/source/user_guide/dask_gateway.md
index 1dac0abd78..8664469405 100644
--- a/docs/source/user_guide/dask_gateway.md
+++ b/docs/source/user_guide/dask_gateway.md
@@ -1,6 +1,6 @@
 # Using Dask Gateway
 
-[Dask Gateway](https://gateway.dask.org/) provides a way for secure way to managing dask clusters. QHub uses dask-gateway to expose auto-scaling compute clusters automatically configured for the user. For a full guide on dask-gateway please [see the docs](https://gateway.dask.org/usage.html). However here we try and detail the important usage on qhub.
+[Dask Gateway](https://gateway.dask.org/) provides a secure way to manage dask clusters. QHub uses dask-gateway to expose auto-scaling compute clusters automatically configured for the user. For a full guide on dask-gateway please [see the docs](https://gateway.dask.org/usage.html). However, here we detail the important usage on QHub.
 
 QHub already has the connection information pre-configured for the user. If you would like to see the pre-configured settings.
 
@@ -38,7 +38,7 @@ cluster = gateway.new_cluster(options)
 cluster
 ```
 
-The user is presented with a gui where you can select to scale up the workers. You originally start with `0` workers. In addition you can scale up via python functions. Additionally the gui has a `dashboard` link that you can click to view [cluster diagnostics](https://docs.dask.org/en/latest/diagnostics-distributed.html). This link is especially useful for debugging and benchmarking.
+The user is presented with a GUI where they can select to scale up the number of workers. Users start with `0` workers. In addition, workers can be scaled up via Python functions. Additionally, the GUI has a `dashboard` link that you can click to view [cluster diagnostics](https://docs.dask.org/en/latest/diagnostics-distributed.html). This link is especially useful for debugging and benchmarking.
 
 ```python
 cluster.scale(1)
diff --git a/docs/source/user_guide/faq.md b/docs/source/user_guide/faq.md
index 28086b2e48..3ee6be197b 100644
--- a/docs/source/user_guide/faq.md
+++ b/docs/source/user_guide/faq.md
@@ -12,7 +12,7 @@ Anyone with access to the QHub deployment repo can add an environment, and there
 
 > Be careful of the YAML indentation as it differs from the conda `environment.yml`
 
-### What to do when the user requires `X` package and It's not available in the environment?
+### What to do when the user requires the `X` package and it's not available in the environment?
 
 The proper solution is to add the package to the `qhub_config.yml` (See #1). If they don't have the access to the deployment repo, the user needs to contact their QHub maintainer to get the required package. They *can* do a user install for pip packages in a pinch (this isn't recommended) but they aren't be available to Dask workers.
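+
+In a pinch, a user-level install might look like the sketch below (replace `<package>` with the actual package name):
+
+```shell
+pip install --user <package>
+```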
 
diff --git a/tests/vale/styles/Google/Headings.yml b/tests/vale/styles/Google/Headings.yml
index 5c51587f67..0db4e09826 100644
--- a/tests/vale/styles/Google/Headings.yml
+++ b/tests/vale/styles/Google/Headings.yml
@@ -57,3 +57,4 @@ exceptions:
   - JuptyerHub
   - SSL
   - Ingress
+  - Keycloak
diff --git a/tests/vale/styles/Google/Units.yml b/tests/vale/styles/Google/Units.yml
index 379fad6b8e..34558ccbcd 100644
--- a/tests/vale/styles/Google/Units.yml
+++ b/tests/vale/styles/Google/Units.yml
@@ -5,4 +5,4 @@ nonword: true
 level: error
 tokens:
   - \d+(?:B|kB|MB|GB|TB)
-  - \d+(?:ns|ms|s|min|h|d)
+  - \d+(?:ns|ms|min|h|d)