diff --git a/README.md b/README.md
index 08cec6295..04918949a 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,6 @@
[![Community Meetings](https://img.shields.io/badge/Community-Meetings-blue)](https://us05web.zoom.us/j/87535654586?pwd=CigbXigJPn38USc6Vuzt7qSVFoO79X.1)
[![built with nix](https://builtwithnix.org/badge.svg)](https://builtwithnix.org)
-
## Table of contents
---
@@ -23,7 +22,7 @@
- [Frequently asked questions](/doc/FAQ.md)
-Mayastor is a cloud-native declarative data plane written in Rust.
+Mayastor is a cloud-native declarative data plane written in Rust.
Our goal is to abstract storage resources and their differences through the data plane such that users only need to
supply the what and do not have to worry about the how
so that individual teams stay in control.
@@ -53,24 +52,30 @@ The official user documentation for the Mayastor Project is published at: [OpenE
## Overview
+![OpenEBS Mayastor](./doc/img/overview.drawio.png)
+
At a high-level, Mayastor consists of two major components.
### **Control plane:**
-- A microservices patterned control plane, centered around a core agent which publically exposes a RESTful API.
+- A microservices-patterned control plane, centered around a core agent and a RESTful API.
This is extended by a dedicated operator responsible for managing the life cycle of "Disk Pools"
(an abstraction for devices supplying the cluster with persistent backing storage) and a CSI compliant
- external provisioner (controller).
- Source code for the control plane components is located in its [own repository](https://github.com/openebs/mayastor-control-plane)
+ external provisioner (controller). \
-- A daemonset _mayastor-csi_ plugin which implements the identity and node grpc services from CSI protocol.
+ Source code for the control plane components is located in the [controller repository](https://github.com/openebs/mayastor-control-plane). \
+ The helm chart and other K8s-specific extensions (e.g. the kubectl-plugin) are located in the [extensions repository](https://github.com/openebs/mayastor-extensions).
+
+- CSI plugins:
+ - A daemonset _csi-node_ plugin which implements the identity and node services.
+ - A deployment _csi-controller_ plugin which implements the identity and controller services.
### **Data plane:**
-- Each node you wish to use for storage or storage services will have to run an IO Engine daemonset. Mayastor itself has
- two major components: the Nexus and a local storage component.
+- Each node you wish to use for storage or storage services will have to run an I/O Engine instance. The Mayastor data plane (io-engine) itself has
+ two major components: the volume target (nexus) and local storage pools, which can be carved out into logical volumes (replicas) that in turn can be shared with other io-engines via NVMe-oF.
-## Nexus
+## Volume Target / Nexus
The Nexus is responsible for attaching to your storage resources and making it available to the host that is
@@ -89,7 +94,7 @@ they way we do things. Moreover, due to hardware [changes](https://searchstorage
we in fact are forced to think about it.
Based on storage URIs the Nexus knows how to connect to the resources and will make these resources available as
-a single device to a protocol standard protocol. These storage URIs are generated automatically by MOAC and it keeps
+a single device using a standard protocol. These storage URIs are managed by the control plane, which keeps
track of what resources belong to what Nexus instance and subsequently to what PVC.
You can also directly use the nexus from within your application code. For example:
@@ -138,7 +143,7 @@ buf.as_slice().into_iter().map(|b| assert_eq!(b, 0xff)).for_each(drop);
We think this can help a lot of database projects as well, where they typically have all the smarts in their database engine
-and they want the most simple (but fast) storage device. For a more elaborate example see some of the tests in mayastor/tests.
+and they want the simplest (but fast) storage device. For a more elaborate example, see some of the tests in io-engine/tests.
To communicate with the children, the Nexus uses industry standard protocols. The Nexus supports direct access to local
storage and remote storage using NVMe-oF TCP. Another advantage of the implementation is that if you were to remove
@@ -159,8 +164,8 @@ What model fits best for you? You get to decide!
If you do not have a storage system, and just have local storage, i.e block devices attached to your system, we can
consume these and make a "storage system" out of these local devices such that
-you can leverage features like snapshots, clones, thin provisioning, and the likes. Our K8s tutorial does that under
-the water today. Currently, we are working on exporting your local storage implicitly when needed, such that you can
+you can leverage features like snapshots, clones, thin provisioning, and the like. Our K8s deployment does that under
+the hood. Currently, we are working on exporting your local storage implicitly when needed, such that you can
share storage between nodes. This means that your application, when re-scheduled, can still connect to your local storage
except for the fact that it is not local anymore.
@@ -192,12 +197,8 @@ In following example of a client session is assumed that mayastor has been
started and is running:
```
-$ dd if=/dev/zero of=/tmp/disk bs=1024 count=102400
-102400+0 records in
-102400+0 records out
-104857600 bytes (105 MB, 100 MiB) copied, 0.235195 s, 446 MB/s
-$ sudo losetup /dev/loop8 /tmp/disk
-$ io-engine-client pool create tpool /dev/loop8
+$ fallocate -l 100M /tmp/disk.img
+$ io-engine-client pool create tpool aio:///tmp/disk.img
$ io-engine-client pool list
NAME STATE CAPACITY USED DISKS
tpool 0 96.0 MiB 0 B tpool
@@ -232,5 +233,4 @@ Unless you explicitly state otherwise, any contribution intentionally submitted
inclusion in Mayastor by you, as defined in the Apache-2.0 license, licensed as above,
without any additional terms or conditions.
-
[![FOSSA Status](https://app.fossa.com/api/projects/custom%2B162%2Fgithub.com%2Fopenebs%2Fmayastor.svg?type=large&issueType=license)](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Fopenebs%2Fmayastor?ref=badge_large&issueType=license)
diff --git a/doc/csi.md b/doc/csi.md
index 0caf99b84..0b63aed42 100644
--- a/doc/csi.md
+++ b/doc/csi.md
@@ -7,10 +7,45 @@ document.
Basic workflow starting from registration is as follows:
1. csi-node-driver-registrar retrieves information about csi plugin (mayastor) using csi identity service.
-1. csi-node-driver-registrar registers csi plugin with kubelet passing plugin's csi endpoint as parameter.
-1. kubelet uses csi identity and node services to retrieve information about the plugin (including plugin's ID string).
-1. kubelet creates a custom resource (CR) "csi node info" for the CSI plugin.
-1. kubelet issues requests to publish/unpublish and stage/unstage volume to the CSI plugin when mounting the volume.
+2. csi-node-driver-registrar registers csi plugin with kubelet passing plugin's csi endpoint as parameter.
+3. kubelet uses csi identity and node services to retrieve information about the plugin (including plugin's ID string).
+4. kubelet creates a custom resource (CR) "csi node info" for the CSI plugin.
+5. kubelet issues requests to publish/unpublish and stage/unstage volume to the CSI plugin when mounting the volume.
-The registration of mayastor storage nodes with control plane (moac) is handled
-by a separate protocol using NATS message bus that is independent on CSI plugin.
+The registration of the storage nodes (i/o engines) with the control plane is handled
+by a gRPC service which is independent of the CSI plugin.
+
+
+
+```mermaid
+graph LR;
+ PublicApi{"Public
API"}
+ CO[["Container
Orchestrator"]]
+
+ subgraph "Mayastor Control-Plane"
+ Rest["Rest"]
+ InternalApi["Internal
API"]
+ InternalServices["Agents"]
+ end
+
+ subgraph "Mayastor Data-Plane"
+ IO_Node_1["Node 1"]
+ end
+
+ subgraph "Mayastor CSI"
+ Controller["Controller
Plugin"]
+ Node_1["Node
Plugin"]
+ end
+
+%% Connections
+ CO -.-> Node_1
+ CO -.-> Controller
+ Controller -->|REST/http| PublicApi
+ PublicApi -.-> Rest
+ Rest -->|gRPC| InternalApi
+ InternalApi -.->|gRPC| InternalServices
+ Node_1 <--> PublicApi
+ Node_1 -.->|NVMe-oF| IO_Node_1
+ IO_Node_1 <-->|gRPC| InternalServices
+```
diff --git a/doc/design/control-plane-behaviour.md b/doc/design/control-plane-behaviour.md
new file mode 100644
index 000000000..759c5c775
--- /dev/null
+++ b/doc/design/control-plane-behaviour.md
@@ -0,0 +1,171 @@
+# Control Plane Behaviour
+
+This document describes the types of behaviour that the control plane will exhibit under various situations. By
+providing a high-level view it is hoped that the reader will be able to more easily reason about the control plane. \
+
+
+## REST API Idempotency
+
+Idempotency is a term that is used a lot but often misconstrued. The following definition is taken from
+the [Mozilla Glossary](https://developer.mozilla.org/en-US/docs/Glossary/Idempotent):
+
+> An [HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP) method is **idempotent** if an identical request can be
+> made once or several times in a row with the same effect while leaving the server in the same state. In other words,
+> an idempotent method should not have any side-effects (except for keeping statistics). Implemented correctly, the `GET`,
+> `HEAD`, `PUT`, and `DELETE` methods are idempotent, but not the `POST` method.
+> All [safe](https://developer.mozilla.org/en-US/docs/Glossary/Safe) methods are also ***idempotent***.
+
+OK, so making multiple identical requests should produce the same result ***without side effects***. Great, so does the
+return value for each request have to be the same? The article goes on to say:
+
+> To be idempotent, only the actual back-end state of the server is considered, the status code returned by each request
+> may differ: the first call of a `DELETE` will likely return a `200`, while successive ones will likely return a`404`.
+
+The control plane will behave exactly as described above. If, for example, multiple `create volume` calls are made for
+the same volume, the first will return success (`HTTP 200` code) while subsequent calls will return a failure status
+code (`HTTP 409` code) indicating that the resource already exists. \
+
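+To make this concrete, here is a minimal, self-contained Rust sketch of the pattern; the types and the
+status-code mapping are illustrative assumptions for this example, not the control plane's actual API:
+
+```rust
+use std::collections::HashMap;
+
+/// Illustrative error type for the store.
+enum CreateError {
+    AlreadyExists,
+}
+
+/// Illustrative volume specification.
+struct VolumeSpec {
+    size: u64,
+}
+
+#[derive(Default)]
+struct VolumeStore {
+    volumes: HashMap<String, VolumeSpec>,
+}
+
+impl VolumeStore {
+    /// Create the volume only if it does not already exist: the back-end state
+    /// ends up the same no matter how many times the call is repeated.
+    fn create(&mut self, uuid: &str, spec: VolumeSpec) -> Result<(), CreateError> {
+        if self.volumes.contains_key(uuid) {
+            return Err(CreateError::AlreadyExists);
+        }
+        self.volumes.insert(uuid.to_string(), spec);
+        Ok(())
+    }
+}
+
+/// How a REST layer could map the outcome to an HTTP status code.
+fn http_status(result: &Result<(), CreateError>) -> u16 {
+    match result {
+        Ok(()) => 200,
+        Err(CreateError::AlreadyExists) => 409,
+    }
+}
+
+fn main() {
+    let mut store = VolumeStore::default();
+    let first = store.create("volume-1", VolumeSpec { size: 1 << 30 });
+    let second = store.create("volume-1", VolumeSpec { size: 1 << 30 });
+    assert_eq!(http_status(&first), 200);
+    assert_eq!(http_status(&second), 409);
+}
+```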
+
+## Handling Failures
+
+There are various ways in which the control plane could fail to satisfy a `REST` request:
+
+- Control plane dies in the middle of an operation.
+- Control plane fails to update the persistent store.
+- A gRPC request to Mayastor fails to complete successfully. \
+
+
+Regardless of the type of failure, the control plane has to decide what it should do:
+
+1. Fail the operation back to the caller but leave any created resources alone.
+
+2. Fail the operation back to the caller but destroy any created resources.
+
+3. Act like Kubernetes and keep retrying in the hope that it will eventually succeed. \
+
+
+Approach 3 is discounted. If we never responded to the caller, it would eventually time out and probably retry itself.
+This would likely present even more issues/complexity in the control plane.
+
+So the decision becomes, should we destroy resources that have already been created as part of the operation? \
+
+
+### Keep Created Resources
+
+Preventing the control plane from having to unwind operations is convenient as it keeps the implementation simple. A
+separate asynchronous process could then periodically scan for unused resources and destroy them.
+
+There is a potential issue with the above described approach. If an operation fails, it would be reasonable to assume
+that the user would retry it. Is it possible for this subsequent request to fail as a result of the existing unused
+resources lingering (i.e. because they have not yet been destroyed)? If so, this would hamper any retry logic
+implemented in the upper layers.
+
+### Destroy Created Resources
+
+This is the optimal approach. For any given operation, failure results in newly created resources being destroyed. The
+responsibility lies with the control plane to track which resources have been created and to destroy them in the event
+of a failure.
+
+However, what happens if destruction of a resource fails? It is possible for the control plane to retry the operation
+but at some point it will have to give up. In effect the control plane will do its best, but it cannot provide any
+guarantee. So does this mean that these resources are permanently leaked? Not necessarily. Like in
+the [Keep Created Resources](#keep-created-resources) section, there could be a separate process which destroys unused
+resources. \
+
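+A minimal, self-contained sketch of this strategy is shown below; the helpers, types and node names are
+purely illustrative and not the control plane's real API:
+
+```rust
+/// Illustrative replica handle; not the control plane's actual type.
+struct Replica {
+    uuid: String,
+}
+
+struct Error(String);
+
+/// Pretend to create a replica on a node; one node is made to fail.
+fn create_replica(node: &str) -> Result<Replica, Error> {
+    if node == "bad-node" {
+        Err(Error(format!("failed to create a replica on {node}")))
+    } else {
+        Ok(Replica { uuid: format!("replica-on-{node}") })
+    }
+}
+
+/// Best-effort destroy; anything left behind would be picked up by a separate
+/// garbage-collection pass, as discussed above.
+fn destroy_replica(replica: &Replica) {
+    println!("destroying {}", replica.uuid);
+}
+
+/// Create one replica per node, unwinding the replicas already created if any
+/// single creation fails.
+fn create_all(nodes: &[&str]) -> Result<Vec<Replica>, Error> {
+    let mut created = Vec::new();
+    for &node in nodes {
+        match create_replica(node) {
+            Ok(replica) => created.push(replica),
+            Err(error) => {
+                for replica in &created {
+                    destroy_replica(replica);
+                }
+                return Err(error);
+            }
+        }
+    }
+    Ok(created)
+}
+
+fn main() {
+    // The failing second node triggers destruction of the replica on "node-1".
+    assert!(create_all(&["node-1", "bad-node", "node-3"]).is_err());
+}
+```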
+
+## Use of the Persistent Store
+
+For a control plane to be effective it must maintain information about the system it is interacting with and take
+decisions accordingly. An in-memory registry is used to store such information.
+
+Because the registry is stored in memory, it is volatile - meaning all information is lost if the service is restarted.
+As a consequence critical information must be backed up to a highly available persistent store (for more detailed
+information see [persistent-store.md](./persistent-store.md)).
+
+The types of data that need persisting broadly fall into 3 categories:
+
+1. Desired state
+
+2. Actual state
+
+3. Control plane specific information \
+
+
+### Desired State
+
+This is the declarative specification of a resource provided by the user. As an example, the user may request a new
+volume with the following requirements:
+
+- Replica count of 3
+
+- Size
+
+- Preferred nodes
+
+- Number of nexuses
+
+Once the user has provided these constraints, the expectation is that the control plane should create a resource that
+meets the specification. How the control plane achieves this is of no concern.
+
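+As an illustration only, such a specification could be modelled along the following lines; the field names
+are assumptions made for this example, not the control plane's actual schema:
+
+```rust
+/// Illustrative desired-state (spec) for a volume.
+struct VolumeSpec {
+    uuid: String,
+    /// Requested size in bytes.
+    size: u64,
+    /// Number of replicas the volume must keep.
+    num_replicas: u8,
+    /// Nodes the scheduler should prefer when placing replicas.
+    preferred_nodes: Vec<String>,
+    /// Number of nexuses (volume targets) fronting the replicas.
+    num_nexuses: u8,
+}
+
+fn main() {
+    let spec = VolumeSpec {
+        uuid: "volume-1".to_string(),
+        size: 10 * 1024 * 1024 * 1024,
+        num_replicas: 3,
+        preferred_nodes: vec!["node-1".to_string(), "node-2".to_string()],
+        num_nexuses: 1,
+    };
+    println!(
+        "requested {} replica(s) of {} bytes for {}",
+        spec.num_replicas, spec.size, spec.uuid
+    );
+}
+```
+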
+So what happens if the control plane is unable to meet these requirements? The operation is failed. This prevents any
+ambiguity. If an operation succeeds, the requirements have been met and the user has exactly what they asked for. If the
+operation fails, the requirements couldn’t be met. In this case the control plane should provide an appropriate means of
+diagnosing the issue, e.g. a log message.
+
+What happens to resources created before the operation failed? This will be dependent on the chosen failure strategy
+outlined in [Handling Failures](#handling-failures).
+
+### Actual State
+
+This is the runtime state of the system as provided by Mayastor. Whenever this changes, the control plane must reconcile
+this state against the desired state to ensure that we are still meeting the user's requirements. If not, the control
+plane will take action to try to rectify this.
+
+Whenever a user makes a request for state information, it will be this state that is returned (Note: If necessary an API
+may be provided which returns the desired state also). \
+
+
+### Control Plane Information
+
+This information is required to aid the control plane across restarts. It will be used to store the state of a resource
+independent of the desired or actual state.
+
+The following sequence will be followed when creating a resource:
+
+1. Add resource specification to the store with a state of “creating”
+
+2. Create the resource
+
+3. Mark the state of the resource as “complete”
+
+If the control plane then crashes mid-operation, on restart it can query the state of each resource. Any resource not in
+the “complete” state can then be destroyed, as it will be a remnant of a failed operation. The expectation here is
+that the user will reissue the operation if they wish to.
+
+Likewise, deleting a resource will look like:
+
+1. Mark resources as “deleting” in the store
+
+2. Delete the resource
+
+3. Remove the resource from the store.
+
+For complex operations like creating a volume, all resources that make up the volume will be marked as “creating”. Only
+when all resources have been successfully created will their corresponding states be changed to “complete”. This will
+look something like:
+
+1. Add volume specification to the store with a state of “creating”
+
+2. Add nexus specifications to the store with a state of “creating”
+
+3. Add replica specifications to the store with a state of “creating”
+
+4. Create replicas
+
+5. Create nexus
+
+6. Mark replica states as “complete”
+
+7. Mark nexus states as “complete”
+
+8. Mark volume state as “complete”
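+
+The following self-contained sketch illustrates the state bookkeeping described above; `SpecState` and the
+surrounding types are illustrative assumptions rather than the control plane's actual definitions:
+
+```rust
+/// Illustrative resource states as recorded in the persistent store.
+#[derive(Clone, Copy, Debug, PartialEq)]
+enum SpecState {
+    Creating,
+    Complete,
+    Deleting,
+}
+
+struct StoredSpec {
+    key: String,
+    state: SpecState,
+}
+
+/// On restart, anything that never reached “complete” is a remnant of an
+/// interrupted operation and can be cleaned up.
+fn needs_cleanup(spec: &StoredSpec) -> bool {
+    spec.state != SpecState::Complete
+}
+
+fn main() {
+    let specs = vec![
+        StoredSpec { key: "/volume/v1".to_string(), state: SpecState::Complete },
+        StoredSpec { key: "/nexus/n1".to_string(), state: SpecState::Creating },
+        StoredSpec { key: "/replica/r1".to_string(), state: SpecState::Deleting },
+    ];
+    for spec in &specs {
+        if needs_cleanup(spec) {
+            println!("cleaning up {} ({:?})", spec.key, spec.state);
+        }
+    }
+}
+```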
diff --git a/doc/design/control-plane.md b/doc/design/control-plane.md
new file mode 100644
index 000000000..fe60f41f3
--- /dev/null
+++ b/doc/design/control-plane.md
@@ -0,0 +1,480 @@
+# Mayastor Control Plane
+
+This provides a high-level design description of the control plane and its main components. It does not, for example, explain in detail how a replica is retired.
+
+## Background
+
+The current control plane implementation started as _"just a [CSI] driver"_ that would provision volumes based on dynamic provisioning requests. The intent was to integrate this [CSI] driver within the `OpenEBS` control plane. As things progressed, it turned out that the control plane we wanted to integrate into had few control hooks to integrate with.
+
+As a result, more complex functionality was introduced into [Mayastor itself (the data plane or io-engine)][Mayastor] and [MOAC] (the `CSI` driver). The increasing complexity of `MOAC`, with its implicit dependency on [K8s], made it apparent that we needed to split up this functionality into Mayastor's own specific control plane.
+
+At the same time, however, we figured out how far the stateless approach in [K8s] could be married with the inherently stateful world of [CAS].
+
+We have concluded that we could not implement everything using the same existing primitives directly. However, **we can leverage the same patterns**.
+
+> "What [K8s] is to (stateless compute) we are to storage."
+
+We can leverage the majority and implement the specifics elsewhere. A side effect of this is that the control plane is not [K8s] dependent.
+
+## High-level overview
+
+The control plane is our locus of control. It is responsible for what happens to volumes as external events, planned or unexpected, occur. The control plane is extensible through agents. By default, several agents are provided and are part of the core services.
+
+At a high level, the architecture is depicted below. Core and scheduler are so-called agents. Agents implement a function that varies from inserting new specifications to reconciling the desired state.
+
+```mermaid
+graph TD;
+ LB["Clients"]
+ CSIController["CSI Controller"]
+ REST["REST OpenAPI"]
+
+ subgraph Agents["Core Agents"]
+ HA["HA Cluster"]
+ Watcher
+ Core
+ end
+
+ subgraph StorageNode
+ subgraph DataPlane["I/O Engine"]
+ RBAC
+ Nexus
+ Pools
+ Replicas
+ end
+
+ subgraph DataPlaneAgent["Data Plane Agent"]
+ CSINode["CSI Node"]
+ HANode["HA Node Agent"]
+ end
+ end
+
+ subgraph AppNode
+ subgraph DataPlaneAgent_2["Data Plane Agent"]
+ CSINode_2["CSI Node"]
+ HANode_2["HA Node Agent"]
+ end
+ end
+
+ CSIController --> REST
+ LB --> REST
+ REST --> Agents
+ Agents --> DataPlane
+ RBAC -.-> Nexus
+ RBAC -.-> Replicas
+ Nexus --> Replicas
+ Replicas -.-> Pools
+ Agents --> DataPlaneAgent
+ Agents --> DataPlaneAgent_2
+```
+
+Default functionality provided by the control plane through several agents is:
+
+- Provisioning of volumes according to specification (spec)
+
+- Ensuring that as external events take place, the desired state (spec) is reconciled
+
+- Recreates objects (pools, volumes, shares) after a restart of a data plane instance
+
+- Provides an OpenAPI v3 REST service to allow for customization.
+
+- Replica replacement
+
+- Garbage collection
+
+- CSI driver
+
+- CRD operator for the interaction with k8s to create pools
+
+
+
+### Some key points
+
+- The control plane is designed to be scalable. That is to say, multiple control planes can operate on the same objects, where the control plane guarantees mutual exclusion. This is achieved by applying distributed locks and/or leader elections. This is currently in a “should work” state. However, it is perhaps more practical to use namespacing, where each control plane operates on a cluster-ID.
+
+  > _**NOTE**_: support for multiple control planes in a single cluster has been shelved until further notice
+
+- The control plane does not take part in the IO path, except when there is a dynamic reconfiguration event. If the control plane cannot be accessed during such an event, the NVMe controller will remain frozen. The time we allow ourselves to retry operations during such an event is determined by the NVMe IO timeout and the controller loss timeout values.
+
+- The control plane uses well-known, existing technologies as its building blocks. Most notable technologies applied:
+
+ - etcd v3 and only version 3. 1 & 2 are not supported and will not get support
+
+ - Written in Rust
+
+ - gRPC
+
+- We need at least three control nodes, where five is preferred.
+The control plane is extensible by adding and removing agents, where each agent complements the control plane in some way.
+Example: the `HA *` agents allow for volume target failover by reconnecting the initiator to another replacement target.
+
+
+
+## Persistent Store (KVstore for configuration data)
+
+The Mayastor Control Plane requires a persistent store for storing information that it can use to make intelligent decisions. \
+A key-value store has been selected as the appropriate type of store. \
+[etcd] is very well known in the industry and provides the strong consistency models required for the control plane.
+
+> _NOTE_: [etcd] is also a fundamental component of Kubernetes itself.
+
+Throughout the control plane and data plane design, [etcd] is considered the source of truth.
+
+Some things to keep in mind when considering a persistent store implementation (a brief illustrative sketch follows the list):
+
+- **Performance**
+ - Paxos/Raft consensus is inherently latency-sensitive. Moreover, the KV is memory-mapped, meaning that it suffers greatly from random IO.
+ - As per their own docs, `etcd is designed to reliably store infrequently updated data…`
+  - Fortunately, NVMe does not suffer from this; however, it is not unlikely that some users will use rotational devices.
+ - This limitation is acceptable for the control plane as, by design, we shouldn’t be storing information at anywhere near the limits of etcd.
+
+- **Role-Based Access**
+  - Who is allowed to list what? Due to the linear keyspace, this is important to consider, and key prefixes help here.
+
+- **Queries**
+  - Range-based (prefix) queries are encouraged. There is no analogue of tables in a KV store.
+
+- **Notifications**
+  - Being notified of changes can be very useful to drive on-change reconciler events.
+
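+As a small illustration of prefix-based keys and range queries, here is a sketch using the `etcd-client`
+and `tokio` crates; the key layout, endpoint and crate choice are assumptions made for this example, not
+the control plane's actual code:
+
+```rust
+use etcd_client::{Client, GetOptions};
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let mut client = Client::connect(["http://127.0.0.1:2379"], None).await?;
+
+    // Store a spec under a per-resource-type prefix (illustrative key layout).
+    client
+        .put("/example/volume/spec/volume-1", r#"{"size":1073741824,"replicas":3}"#, None)
+        .await?;
+
+    // A range (prefix) query lists every volume spec in a single call.
+    let resp = client
+        .get("/example/volume/spec/", Some(GetOptions::new().with_prefix()))
+        .await?;
+    for kv in resp.kvs() {
+        println!("{} => {}", kv.key_str()?, kv.value_str()?);
+    }
+    Ok(())
+}
+```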
+
+
+### Persistent Information
+
+There are two categories of information that the control plane wishes to store:
+
+1. Configuration
+ - Specification for volumes, pools, etc
+ - Policies for various scheduling logic
+ - etc
+
+2. System state
+ - Volume states
+ - Node states
+ - Pool states
+ - etc
+
+#### System State
+
+The control plane requires visibility of the state of the system in order to make autonomous decisions. For example, should a volume transition from a
+healthy state to a degraded state, the control plane could inspect the state of its children and optionally (based on the policy) replace any that are
+unhealthy.
+
+Additionally, this state information would be useful for implementing an early warning system. If any resource (volume, node, pool) changed state, any
+ etcd watchers would be notified. We could then potentially have a service which watches for state changes and notifies the upper layers (i.e. operators)
+ that an error has occurred.
+
+##### Note
+
+Although initially planned, the system state is not currently persisted in [etcd] as the initial use-case for watchers could be fulfilled
+by making use of an internal in-memory cache of objects, thus moving this problem further down the line. \
+Even though etcd is only used for configuration, we've had users with etcd-related performance issues, which would no doubt get even further
+exacerbated if we also start placing the _**state in etcd**_. And so this will require very careful _**design**_ and _**consideration**_.
+
+## Control plane agents
+
+Agents perform a specific function and concern themselves with a particular problem. There are several agents. The provisioning of a volume (say) involves
+ pipelining between different agents. Each agent receives a request and produces a response, and the response _MAY_ be the input for a subsequent request.
+
+Agents can either be internal to the binary or be implemented as separate processes (containers).
+
+
+
+```mermaid
+sequenceDiagram
+ Actor User
+ participant REST
+
+ participant Core
+ participant PStor as PStor (etcd)
+ participant Scheduler
+ participant Pool
+    participant Replica
+
+ User ->> REST: Put Create
+ REST ->> Core: Create Request
+ Core ->> PStor: Insert Spec
+ PStor ->> Core:
+ Core ->> REST:
+ REST ->> User: 200 Ok
+
+ alt Core Agent currently handles this
+ Scheduler -->> PStor: Watch Specs
+    Scheduler ->> Pool: Pool(s) select
+ Pool -->> Scheduler:
+ Scheduler ->> Replica: Create
+ Replica -->> Scheduler:
+ Scheduler -->> Core:
+ Core ->> PStor: Update status
+ end
+```
+
+
+
+> _**NOTE**_: As things stand today, the Core agent has taken the role of reconciler and scheduler.
+
+
+
+## Reconcilers
+
+Reconcilers implement the logic that drives the desired state to the actual state. In principle it's the same model as the operator framework provided by K8s; however, as mentioned, it's tailored towards storage rather than stateless containers.
+
+Currently, reconcilers are implemented for pools, replicas, nexuses, volumes, nodes and etcd. When a volume enters the degraded state, the reconciler is notified of this event and will reconcile as a result. The exact heuristics for picking a new replica are likely to be subject to user preferences. As such, volume objects stored by the control plane will have fields to control this behaviour.
+
+```rust
+#[async_trait::async_trait]
+trait Reconciler {
+ /// Run the reconcile logic for this resource.
+ async fn reconcile(&mut self, context: &PollContext) -> PollResult;
+}
+
+#[async_trait::async_trait]
+trait GarbageCollect {
+ /// Run the `GarbageCollect` reconciler.
+ /// The default implementation calls all garbage collection methods.
+ async fn garbage_collect(&mut self, context: &PollContext) -> PollResult {
+ squash_results(vec![
+ self.disown_orphaned(context).await,
+ self.disown_unused(context).await,
+ self.destroy_deleting(context).await,
+ self.destroy_orphaned(context).await,
+ self.disown_invalid(context).await,
+ ])
+ }
+
+ /// Destroy resources which are in the deleting phase.
+ /// A resource goes into the deleting phase when we start to delete it and stay in this
+ /// state until we successfully delete it.
+ async fn destroy_deleting(&mut self, context: &PollContext) -> PollResult;
+
+ /// Destroy resources which have been orphaned.
+ /// A resource becomes orphaned when all its owners have disowned it and at that point
+ /// it is no longer needed and may be destroyed.
+ async fn destroy_orphaned(&mut self, context: &PollContext) -> PollResult;
+
+ /// Disown resources which are no longer needed by their owners.
+ async fn disown_unused(&mut self, context: &PollContext) -> PollResult;
+ /// Disown resources whose owners are no longer in existence.
+ /// This may happen as a result of a bug or manual edit of the persistent store (etcd).
+ async fn disown_orphaned(&mut self, context: &PollContext) -> PollResult;
+ /// Disown resources which have questionable existence, for example non reservable replicas.
+ async fn disown_invalid(&mut self, context: &PollContext) -> PollResult;
+ /// Reclaim unused capacity - for example an expanded but unused replica, which may
+ /// happen as part of a failed volume expand operation.
+ async fn reclaim_space(&mut self, _context: &PollContext) -> PollResult {
+ PollResult::Ok(PollerState::Idle)
+ }
+}
+
+#[async_trait::async_trait]
+trait ReCreate {
+ /// Recreate the state according to the specification.
+ /// This is required when an io-engine instance crashes/restarts as it always starts with no
+ /// state.
+ /// This is because it's the control-plane's job to recreate the state since it has the
+ /// overview of the whole system.
+ async fn recreate_state(&mut self, context: &PollContext) -> PollResult;
+}
+```
+
+## Data-Plane Agent
+
+The data plane agent is the trojan horse. It runs on all nodes that want to consume storage provided by Mayastor.
+It implements the CSI node specification, but it will also offer the ability to register itself as a service with the control plane.
+This provides us with the ability to manipulate the storage topology on the node(s) to control, for example, various aspects of asymmetric namespace
+access.
+
+> _**NOTE**_: the data-plane agent doesn't exist as its own entity per se today, rather we have the csi-node plugin and the agent-ha-node which perform
+> the role of what was to become the data-plane agent.
+
+Consider the following scenario:
+
+Given: A node(W) is connected to a Mayastor NVMe controller on node(1)
+
+When: Node(1) needs to be taken out of service
+
+Then: A new NVMe controller on node(2) that provides access to the same replicas needs to be added to node(W)
+
+This can only be achieved if the control plane can provision a new Nexus and then dynamically add a new path to the node.
+
+```mermaid
+graph TD;
+ subgraph 1
+ AppNode_1["App Node"] ==> Node_1["Node 1"]
+ Node_1 --> Replicas_1[("Replicas")]
+ style Node_1 fill:#00C853
+ end
+
+ subgraph 2
+ AppNode_2["App Node"] -.-> Node_2["Node 1"]
+ Node_2 --> Replicas_2[("Replicas")]
+ Node_N["Node 2"] --> Replicas_2[("Replicas")]
+ style Node_2 fill:#D50000
+ end
+
+ subgraph 3
+ AppNode_3["App Node"] -.-> Node_3["Node 1"]
+ AppNode_3["App Node"] --> Node_N2
+ Node_3 --> Replicas_3[("Replicas")]
+ Node_N2["Node 2"] --> Replicas_3[("Replicas")]
+ style Node_3 fill:#D50000
+ style Node_N2 fill:#00C853
+ end
+
+ subgraph 4
+ Node_4["Node 1"]
+ AppNode_4["App Node"] ==> Node_N3
+ Node_N3["Node 2"] --> Replicas_4[("Replicas")]
+ style Node_4 fill:#D50000
+ style Node_N3 fill:#00C853
+ end
+```
+
+The above picture depicts the sequence of steps. The steps are taken by the control plane but executed by the agent.
+The value-add is not the ANA feature itself, but rather what you do with it.
+
+## NATS & Fault management
+
+We used to use NATS as a message bus within Mayastor as a whole, but have since switched to gRPC for p2p communications. \
+We will continue to use NATS for async notifications. Async in the sense that we send a message, but we do NOT wait for a reply. This mechanism does not
+ do any form of "consensus", retries, and the like. Information transported over NATS will typically be error telemetry that is used to diagnose problems. No work has started yet on this subject.
+
+At a high level, error detectors are placed in parts of the code where it makes sense; for example, consider the following:
+
+```rust
+fn handle_failure(
+ &mut self,
+ child: &dyn BlockDevice,
+ status: IoCompletionStatus,
+) {
+ // We have experienced a failure on one of the child devices. We need to
+ // ensure we do not submit more IOs to this child. We do not
+ // need to tell other cores about this because
+ // they will experience the same errors on their own channels, and
+ // handle it on their own.
+ //
+ // We differentiate between errors in the submission and completion.
+ // When we have a completion error, it typically means that the
+ // child has lost the connection to the nexus. In order for
+ // outstanding IO to complete, the IO's to that child must be aborted.
+ // The abortion is implicit when removing the device.
+ if matches!(
+ status,
+ IoCompletionStatus::NvmeError(
+ NvmeCommandStatus::GenericCommandStatus(
+ GenericStatusCode::InvalidOpcode
+ )
+ )
+ ) {
+ debug!(
+ "Device {} experienced invalid opcode error: retiring skipped",
+ child.device_name()
+ );
+ return;
+ }
+ let retry = matches!(
+ status,
+ IoCompletionStatus::NvmeError(
+ NvmeCommandStatus::GenericCommandStatus(
+ GenericStatusCode::AbortedSubmissionQueueDeleted
+ )
+ )
+ );
+}
+```
+
+In the above snippet, we do not handle any errors other than aborted ones and silently ignore invalid opcodes. If, for example, we experience a class of
+ error, we would emit an error report. Example classes are:
+
+```text
+err.io.nvme.media.* = {}
+err.io.nvme.transport.* = {}
+err.io.nexus.* = {}
+```
+
+Subscribers to these events will keep track of payloads and apply corrective actions. In its most simplistic form, this results in a model where one can
+define, per error class, an action that needs to be taken. This error handling can be applied to IO but also to agents.
+
+The content of the event can vary, containing some general metadata fields as well as event-specific information.
+An example of the event message capsule:
+
+```protobuf
+// Event Message
+message EventMessage {
+ // Event category
+ EventCategory category = 1;
+ // Event action
+ EventAction action = 2;
+ // Target id for the category against which action is performed
+ string target = 3;
+ // Event meta data
+ EventMeta metadata = 4;
+}
+```
+
+An up-to-date definition of the event format can be fetched
+ [here](https://github.com/openebs/mayastor-dependencies/blob/develop/apis/events/protobuf/v1/event.proto).
+
+## Distributed Tracing
+
+Tracing means different things at different levels. In this case, we are referring to tracing across component boundaries.
+
+Tracing is implemented using OpenTelemetry and, by default, we provide a subscriber for Jaeger. From Jaeger, the information can be
+forwarded to Elasticsearch, Cassandra, Kafka, or whatever else. In order to achieve full tracing support, all the gRPC requests and replies should add
+HTTP headers such that we can easily tie them together in whatever tooling is used. This is standard practice but requires a significant amount of work.
+The key reason is to ensure that all requests and responses pass along the headers, from REST to the scheduling pipeline.
+
+We also need to support several types of transport and serialization mechanisms. For example, an HTTP/1.1 REST request to an HTTP/2 gRPC request to
+ a KV store operation to etcd. For this, we will use [Tower]. \
+[Tower] provides a not-so-easy-to-use abstraction of Request to Response mapping.
+
+```rust
+pub trait Service {
+ /// Responses given by the service.
+ type Response;
+ /// Errors produced by the service.
+ type Error;
+ /// The future response value.
+ type Future: Future