
docs: Add ModelMesh documentation #110

Merged
merged 10 commits into from
Oct 18, 2023
Update overview, add config, move payload
Signed-off-by: Rafael Vasquez <raf.vasquez@ibm.com>
rafvasq committed Oct 11, 2023
commit 5b41cfa15ad3052f4814063146b524f58588d6b4
4 changes: 2 additions & 2 deletions README.md
@@ -10,8 +10,8 @@ For more information on supported features and design details, see [these charts

## Get Started

To get started with the ModelMesh framework, check out [this guide](/docs/overview.md).
To get started with the ModelMesh framework, check out [this overview](/docs/overview.md).

## Developer guide

Check out the [developer guide](developer-guide.md) to learn about development practices for the project.
Use the [developer guide](developer-guide.md) to learn about development practices for the project.
74 changes: 74 additions & 0 deletions docs/configuration/README.md
@@ -0,0 +1,74 @@
A core goal of the ModelMesh framework is to minimize the amount of custom configuration required. It should be possible to get up and running without changing most of the settings described below.

## Model Runtime Configuration

There are a few basic parameters (some optional) that the model runtime implementation must report in a `RuntimeStatusResponse` to the `ModelRuntime.runtimeStatus` rpc method once it has successfully initialized:

- `uint64 capacityInBytes`
- `uint32 maxLoadingConcurrency`
- `uint32 modelLoadingTimeoutMs`
- `uint64 defaultModelSizeInBytes`
- `string runtimeVersion` (optional)
- ~~`uint64 numericRuntimeVersion`~~ (deprecated, unused)
- `map<string,MethodInfo> methodInfos` (optional)
- `bool allowAnyMethod` - applicable only if one or more `methodInfos` are provided.
- `bool limitModelConcurrency` - (experimental)

It's expected that all model runtime instances in the same cluster (with the same Kubernetes deployment config, including image version) will report the same values for these, although it's not strictly necessary.
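
As a concrete (hypothetical) illustration, a Java-based runtime might report these values as follows. This is a minimal sketch assuming the classes generated from [model-runtime.proto](/src/main/proto/current/model-runtime.proto); all numbers are placeholders, not recommendations:

```java
import io.grpc.stub.StreamObserver;

// Sketch of a runtime reporting its status; assumes the Java classes generated
// from model-runtime.proto. Capacity/timeout values are illustrative only.
public class MyModelRuntime extends ModelRuntimeGrpc.ModelRuntimeImplBase {

    @Override
    public void runtimeStatus(RuntimeStatusRequest request,
                              StreamObserver<RuntimeStatusResponse> response) {
        response.onNext(RuntimeStatusResponse.newBuilder()
                .setStatus(RuntimeStatusResponse.Status.READY)
                .setCapacityInBytes(8L << 30)           // 8 GiB usable for loaded models
                .setMaxLoadingConcurrency(2)            // load at most 2 models at once
                .setModelLoadingTimeoutMs(90_000)       // fail loads after 90 seconds
                .setDefaultModelSizeInBytes(256L << 20) // 256 MiB assumed before measurement
                .setRuntimeVersion("example-runtime-0.1")
                .build());
        response.onCompleted();
    }

    // loadModel(), unloadModel(), modelSize() and the inferencing service omitted.
}
```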

## TLS (SSL) Configuration

This can be configured via environment variables on the ModelMesh container; refer to [the TLS documentation](/docs/configuration/tls.md).

## Model Auto-Scaling

Model auto-scaling is enabled by default and requires no configuration. A single optional configuration parameter can be used to tune the sensitivity of the scaling, based on the rate of requests per model. Note that this applies to scaling copies of models within existing pods, not to scaling of the pods themselves.

The scale-up RPM threshold specifies a target request rate per model **copy**, measured in requests per minute. Model-mesh balances requests evenly between loaded copies of a given model; if one copy's share of requests increases above this threshold, more copies will be added where possible, in instances (replicas) that do not currently have the model loaded.

The default for this parameter is 2000 RPM. It can be overridden by setting either the `MM_SCALEUP_RPM_THRESHOLD` environment variable or the `scaleup_rpm_threshold` etcd/zookeeper dynamic config parameter, with the latter taking precedence.
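
To make the arithmetic concrete, here is an illustrative sketch (not ModelMesh source code) of how the threshold relates aggregate load to a desired copy count, assuming a model receiving 7000 RPM under the default threshold:

```java
// Illustrative only: ModelMesh's real scaling logic also considers capacity,
// load history, and per-instance placement. All values here are hypothetical.
int scaleupRpmThreshold = 2000; // default MM_SCALEUP_RPM_THRESHOLD
int aggregateRpm = 7000;        // observed total requests/min for one model
int deploymentReplicas = 5;     // upper bound: one copy per pod

int desiredCopies = Math.min(deploymentReplicas,
        (int) Math.ceil((double) aggregateRpm / scaleupRpmThreshold)); // -> 4
```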

Other points to note:

- Scale-up can happen by more than one additional copy at a time if the request rate breaches the configured threshold by a sufficient amount.
- The number of replicas in the deployment dictates the maximum number of copies that a given model can be scaled to (one in each Pod).
- Models that have been used recently will scale to two copies regardless of load; the autoscaling behaviour applies between 2 and N>2 copies.
- Scale-down occurs slowly, once the per-copy load has remained below the configured threshold for long enough.
- If the runtime is in latency-based auto-scaling mode (i.e., it returns a non-default `limitModelConcurrency = true` in the `RuntimeStatusResponse`), scaling is triggered by measured latencies/queuing rather than request rates, and the RPM threshold parameter has no effect.

## Request Header Logging

To have particular gRPC request metadata headers included in any request-scoped log messages, set the `MM_LOG_REQUEST_HEADERS` environment variable to a JSON string-to-string map (object) whose keys are the header names to log, and whose values are the names of the corresponding entries to insert into the logger thread context map (MDC).

Values can be either raw ASCII or base64-encoded UTF-8; in the latter case the corresponding header name must end with `-bin`. For example:
```
{
"transaction_id": "txid",
"user_id-bin": "user_id"
}
```
**Note**: this does not generate new log messages by itself, and successful requests aren't logged by default. To log a message for every request, additionally set the `MM_LOG_EACH_INVOKE` environment variable to true.
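
On the client side, these headers are ordinary gRPC metadata. A grpc-java sketch matching the example mapping above (the header names come from that example; grpc-java base64-encodes `-bin` keys on the wire automatically):

```java
import io.grpc.Metadata;
import java.nio.charset.StandardCharsets;

// Client-side sketch: attaching headers that the example mapping above would log.
Metadata headers = new Metadata();
headers.put(
        Metadata.Key.of("transaction_id", Metadata.ASCII_STRING_MARSHALLER),
        "tx-12345"); // would appear in the MDC under "txid"
headers.put(
        Metadata.Key.of("user_id-bin", Metadata.BINARY_BYTE_MARSHALLER),
        "alice".getBytes(StandardCharsets.UTF_8)); // would appear under "user_id"
// Attach to a stub, e.g. via MetadataUtils.newAttachHeadersInterceptor(headers).
```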

## Other Optional Parameters

Set via environment variables on the ModelMesh container:

- `MM_SVC_GRPC_PORT` - external grpc port, default 8033
- `INTERNAL_GRPC_SOCKET_PATH` - unix domain socket, which should be a file location on a persistent volume mounted in both the model-mesh and model runtime containers; defaults to `/tmp/mmesh/grpc.sock`
- `INTERNAL_SERVING_GRPC_SOCKET_PATH` - unix domain socket to use for inferencing requests; defaults to the same as the primary domain socket
- `INTERNAL_GRPC_PORT` - pod-internal grpc port (model runtime localhost), default 8056
- `INTERNAL_SERVING_GRPC_PORT` - pod-internal grpc port to use for inferencing requests; defaults to the same as the primary pod-internal grpc port
- `MM_SVC_GRPC_MAX_MSG_SIZE` - max message size in bytes, default 16MiB
- `MM_SVC_GRPC_MAX_HEADERS_SIZE` - max headers size in bytes, defaults to gRPC default
- `MM_METRICS` - metrics configuration, see Metrics wiki page
- `MM_MULTI_PARALLELISM` - max multi-model request parallelism, default 4
- `KV_READ_ONLY` (advanced) - run in "read only" mode where new (v)models cannot be registered or unregistered
- `MM_LOG_EACH_INVOKE` - log an INFO level message for every request; default is false, set to true to enable
- `MM_SCALEUP_RPM_THRESHOLD` - see [Model auto-scaling](#model-auto-scaling) above

**Note**: only one of `INTERNAL_GRPC_SOCKET_PATH` and `INTERNAL_GRPC_PORT` can be set. The same goes for `INTERNAL_SERVING_GRPC_SOCKET_PATH` and `INTERNAL_SERVING_GRPC_PORT`.

Set dynamically in kv-store (etcd or zookeeper):
- `log_each_invocation` - dynamic override of `MM_LOG_EACH_INVOKE` env var
- `logger_level` - TODO
- `scaleup_rpm_threshold` - dynamic override of `MM_SCALEUP_RPM_THRESHOLD` env var, see [auto-scaling](#model-auto-scaling) above.
26 changes: 26 additions & 0 deletions docs/configuration/payloads.md
@@ -0,0 +1,26 @@
## Payload Processing Overview
ModelMesh exchanges `Payloads` with models deployed within runtimes. In ModelMesh, a `Payload` consists of information about the model id and the method being called, together with some data (the actual binary request or response) and metadata (e.g., headers).

A `PayloadProcessor` is responsible for processing such `Payloads` for models served by ModelMesh. Examples would include loggers of prediction requests, data sinks for data visualization, model quality assessment, or monitoring tooling.

They can be configured to only look at payloads that are consumed and produced by certain models, payloads containing certain headers, etc. This configuration is performed at the ModelMesh instance level. Multiple `PayloadProcessors` can be configured per ModelMesh instance, and they can be set to care about specific portions of the payload (e.g., model inputs, model outputs, metadata, specific headers, etc.).

As an example, a `PayloadProcessor` can see input data as below:

```text
[mmesh.ExamplePredictor/predict, Metadata(content-type=application/grpc,user-agent=grpc-java-netty/1.51.1,mm-model-id=myModel,another-custom-header=custom-value,grpc-accept-encoding=gzip,grpc-timeout=1999774u), CompositeByteBuf(ridx: 0, widx: 2000004, cap: 2000004, components=147)]
```

and/or output data as `ByteBuf`:
```text
java.nio.HeapByteBuffer[pos=0 lim=65 cap=65]
```

A `PayloadProcessor` can be configured by means of a whitespace-separated `String` of URIs. For example, in a URI like `logger:///*?pytorch1234#predict`:
- the scheme represents the type of processor, e.g., `logger`
- the query represents the model id to observe, e.g., `pytorch1234`
- the fragment represents the method to observe, e.g., `predict`

## Featured `PayloadProcessors`
- `logger` : logs requests/responses payloads to `model-mesh` logs (_INFO_ level), e.g., use `logger://*` to log every `Payload`
- `http` : sends requests/responses payloads to a remote service (via _HTTP POST_), e.g., use `http://10.10.10.1:8080/consumer/kserve/v2` to send every `Payload` to the specified HTTP endpoint
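
For illustration, a custom processor in Java might look like the following sketch. It assumes an interface shaped like the `PayloadProcessor` in `com.ibm.watson.modelmesh.payload`; the method names and the ownership-flag semantics here are assumptions, not the definitive interface:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical processor that just counts observed payloads; the interface and
// its exact signatures are assumed for illustration and may differ in practice.
public class CountingPayloadProcessor implements PayloadProcessor {

    private final AtomicLong count = new AtomicLong();

    @Override
    public String getName() {
        return "counter";
    }

    @Override
    public boolean process(Payload payload) {
        long seen = count.incrementAndGet();
        // Inspect the payload (model id, method, data, metadata) here, e.g.
        // forward it to a logging or monitoring sink.
        return false; // assumption: false = payload ownership not taken
    }
}
```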
68 changes: 68 additions & 0 deletions docs/configuration/tls.md
@@ -0,0 +1,68 @@
## Enable TLS/SSL

TLS between the ModelMesh container and the model runtime container isn't currently required or supported, since this communication happens within a single pod over localhost.

However, TLS must be enabled in production deployments for the external gRPC service interfaces exposed by ModelMesh itself (which include your proxied custom gRPC interface).

To do this, you must provide both a private key and the corresponding certificate file in PEM format, volume-mounting them into the ModelMesh container from a kubernetes secret. TLS is then enabled by setting the values of the following env vars on the ModelMesh container to the paths of those mounted files, as demonstrated [here](https://github.com/kserve/modelmesh/blob/main/config/base/patches/tls.yaml#L39-L42).

The same certificate pair will then also be used for "internal" communication between the model-mesh pods, which is otherwise unencrypted (in prior versions this internal traffic was unconditionally encrypted, but used "hardcoded" certs baked into the image, which have now been removed).

## Client Authentication

To additionally enable TLS Client Auth (aka Mutual Auth, mTLS):

- Set the `MM_TLS_CLIENT_AUTH` env var to either `REQUIRE` or `OPTIONAL` (case-insensitive)
- Mount pem-format cert(s) to use for trust verification into the container, and set the `MM_TLS_TRUST_CERT_PATH` env var to a comma-separated list of the mounted paths to these files
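
As an illustration, a grpc-java client for a deployment with `MM_TLS_CLIENT_AUTH` set to `REQUIRE` might build its channel as in the sketch below; the file paths and service address are placeholders:

```java
import io.grpc.ChannelCredentials;
import io.grpc.Grpc;
import io.grpc.ManagedChannel;
import io.grpc.TlsChannelCredentials;
import java.io.File;
import java.io.IOException;

// Hypothetical mTLS client setup; paths and address are placeholders.
static ManagedChannel buildMtlsChannel() throws IOException {
    ChannelCredentials creds = TlsChannelCredentials.newBuilder()
            .keyManager(new File("client.pem"), new File("client-key.pem")) // client identity
            .trustManager(new File("ca.pem")) // CA used to verify the server's cert
            .build();
    return Grpc.newChannelBuilderForAddress("modelmesh-service", 8033, creds).build();
}
```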

## Certificate Format

A `PKCS8` format key is required due to netty [only supporting PKCS8 keys](https://github.com/netty/netty/wiki/SslContextBuilder-and-Private-Key).

For a key cert pair, `server.crt` and `server.key`, you can convert an unencrypted `PKCS1` key to `PKCS8` as follows:

```
$ openssl pkcs8 -topk8 -nocrypt -in server.key -out mmesh.key
```

To verify that a certificate and private key match (either the original `server.crt`/`server.key` pair or the converted `mmesh.key`), compare their public-key modulus hashes, e.g. `openssl x509 -modulus -noout -in server.crt | openssl md5` and `openssl rsa -modulus -noout -in server.key | openssl md5`. If only one distinct hash is displayed, they match.

### cert-manager
If you are using [cert-manager](https://github.com/cert-manager/cert-manager) on Kubernetes/OpenShift to generate certificates, just ensure that the `.spec.privateKey.encoding` field of your Certificate CR is set to `PKCS8` (it defaults to `PKCS1`).

## Updating and Rotating Private Keys

Because the provided certificates are also used for intra-cluster communication, care must be taken when updating to a new private key to avoid temporary impact to the service. All pods inter-communicate during rolling-upgrade transitions, so the new pods must be able to connect to the old pods and vice versa. If new trust certs are required for the new private key, an update must first be performed to ensure both old and new trust certs are in use, and both must remain present for the subsequent key update. Note that these additional steps are not required if a common, unchanged CA certificate is used for trust purposes.

There is a dedicated env var, `MM_INTERNAL_TRUST_CERTS`, which can be used to specify additional trust (public) certificates for intra-cluster communication only. It can be set to one or more comma-separated paths, each pointing to either an individual pem-formatted cert file or a directory containing certs with `.pem` and/or `.crt` extensions. These paths would correspond to Kube-mounted secrets. Here is an example of the three distinct updates required:

1. Add `MM_INTERNAL_TRUST_CERTS` pointing to the new cert:
```
- name: MM_TLS_KEY_CERT_PATH
value: /path/to/existing-keycert.pem
- name: MM_TLS_PRIVATE_KEY_PATH
value: /path/to/existing-key.pem
- name: MM_INTERNAL_TRUST_CERTS
value: /path/to/new-cacert.pem
```
2. Switch to the new private key pair, with `MM_INTERNAL_TRUST_CERTS` now pointing to the old cert:
```
- name: MM_TLS_KEY_CERT_PATH
value: /path/to/new-keycert.pem
- name: MM_TLS_PRIVATE_KEY_PATH
value: /path/to/new-key.pem
- name: MM_INTERNAL_TRUST_CERTS
value: /path/to/existing-keycert.pem
```
3. Optionally remove `MM_INTERNAL_TRUST_CERTS`:
```
- name: MM_TLS_KEY_CERT_PATH
value: /path/to/new-keycert.pem
- name: MM_TLS_PRIVATE_KEY_PATH
value: /path/to/new-key.pem
```

**Note**: these additional steps shouldn't be required if either:

- The same CA is used for both the old and new public certs (so they are not self-signed)
- Some temporary service disruption is acceptable - this will likely manifest as some longer response times during the upgrade, possibly with some timeouts and failures. It should not persist beyond the rolling update process and the exact magnitude of the impact depends on various factors such as cluster size, loading time, request volume and patterns, etc.
16 changes: 8 additions & 8 deletions docs/overview.md
@@ -14,16 +14,16 @@ In ModelMesh, a **model** refers to an abstraction of machine learning models. I

### Implement a model runtime

1. Wrap your model-loading and invocation logic in this [model-runtime.proto](/src/main/proto/current/model-runtime.proto) gRPC service interface
- `runtimeStatus()` - called only during startup to obtain some basic configuration parameters from the runtime, such as version, capacity, model-loading timeout
- `loadModel()` - load the specified model into memory from backing storage, returning when complete
- `modelSize()` - determine size (mem usage) of previously-loaded model. If very fast, can be omitted and provided instead in the response from `loadModel`
- `unloadModel()` - unload previously loaded model, returning when complete
1. Wrap your model-loading and invocation logic in this [model-runtime.proto](/src/main/proto/current/model-runtime.proto) gRPC service interface.
- `runtimeStatus()` - called only during startup to obtain some basic configuration parameters from the runtime, such as version, capacity, model-loading timeout.
- `loadModel()` - load the specified model into memory from backing storage, returning when complete.
- `modelSize()` - determine size (memory usage) of previously-loaded model. If very fast, can be omitted and provided instead in the response from `loadModel`.
- `unloadModel()` - unload previously loaded model, returning when complete.
- Use a separate, arbitrary gRPC service interface for model inferencing requests. It can have any number of methods and they are assumed to be idempotent. See [predictor.proto](/src/test/proto/predictor.proto) for a very simple example.
- The methods of your custom applier interface will be called only for already fully-loaded models.
2. Build a grpc server docker container which exposes these interfaces on localhost port 8085 or via a mounted unix domain socket
3. Extend the [Kustomize-based Kubernetes manifests](/config) to use your docker image, and with appropriate mem and cpu resource allocations for your container
4. Deploy to a Kubernetes cluster as a regular Service, which will expose [this grpc service interface](/src/main/proto/current/model-mesh.proto) via kube-dns (you do not implement this yourself), consume using grpc client of your choice from your upstream service components
2. Build a grpc server docker container which exposes these interfaces on localhost port 8085 or via a mounted unix domain socket.
3. Extend the [Kustomize-based Kubernetes manifests](/config) to use your docker image, and with appropriate memory and CPU resource allocations for your container.
4. Deploy to a Kubernetes cluster as a regular Service, which will expose [this grpc service interface](/src/main/proto/current/model-mesh.proto) via kube-dns (you do not implement this yourself); consume it from your upstream service components using the grpc client of your choice.
- `registerModel()` and `unregisterModel()` for registering/removing models managed by the cluster
- Any custom inferencing interface methods to make a runtime invocation of a previously-registered model, making sure to set an `mm-model-id` or `mm-vmodel-id` metadata header (or the `-bin` suffix equivalents for UTF-8 ids)
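
For example, a hypothetical grpc-java client sketch for the inferencing path (the service address, port, and `ExamplePredictor` stub are assumptions based on the defaults and examples referenced above):

```java
import io.grpc.Grpc;
import io.grpc.InsecureChannelCredentials;
import io.grpc.ManagedChannel;
import io.grpc.Metadata;
import io.grpc.stub.MetadataUtils;

// Hypothetical client: route an inference call to a registered model by id.
ManagedChannel channel = Grpc.newChannelBuilderForAddress(
        "modelmesh-service", 8033, InsecureChannelCredentials.create()) // use TLS in production
        .build();

Metadata headers = new Metadata();
headers.put(Metadata.Key.of("mm-model-id", Metadata.ASCII_STRING_MARSHALLER), "myModel");

// ExamplePredictorGrpc would be generated from your own inferencing proto
// (see predictor.proto for the simple example referenced above).
ExamplePredictorGrpc.ExamplePredictorBlockingStub predictor =
        ExamplePredictorGrpc.newBlockingStub(channel)
                .withInterceptors(MetadataUtils.newAttachHeadersInterceptor(headers));
// predictor.predict(request) is now routed to a loaded copy of "myModel".
```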

34 changes: 0 additions & 34 deletions src/main/java/com/ibm/watson/modelmesh/payload/README.md

This file was deleted.