diff --git a/docs/quickstart.md b/docs/quickstart.md
index ffa86a94..a75f70e9 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -191,7 +191,7 @@ since a normal Service has issues load balancing gRPC requests. See more info

 ### gRPC request

-To test out **gRPC** inference requests, you can port-forward the headless service _in a separate terminal window_:
+To test out gRPC inference requests, you can port-forward the headless service **in a separate terminal window**:

 ```shell
 kubectl port-forward --address 0.0.0.0 service/modelmesh-serving 8033 -n modelmesh-serving
@@ -200,10 +200,9 @@ kubectl port-forward --address 0.0.0.0 service/modelmesh-serving 8033 -n modelme

 Then a gRPC client generated from the KServe [grpc_predict_v2.proto](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/grpc_predict_v2.proto)
 file can be used with `localhost:8033`. A ready-to-use Python example of this can be found [here](https://github.com/pvaneck/model-serving-sandbox/tree/main/grpc-predict).

-Alternatively, you can test inference with [grpcurl](https://github.com/fullstorydev/grpcurl). This can easily be installed with `brew install grpcurl` if on macOS.
+Alternatively, you can test inference using [grpcurl](https://github.com/fullstorydev/grpcurl). This can be installed easily with `brew install grpcurl` if on macOS.

-With `grpcurl`, a request can be sent to the SKLearn MNIST model like the following. Make sure that the `MODEL_NAME`
-variable below is set to the name of your `InferenceService`.
+An example that uses `grpcurl` to send a request to the SKLearn MNIST model is provided below. The example should be run from `modelmesh-serving`'s root directory and `MODEL_NAME` should be set to the name of the deployed `InferenceService`.

 ```shell
 MODEL_NAME=example-sklearn-isvc
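
The hunk above is truncated right after `MODEL_NAME=example-sklearn-isvc`, so for review context, a `grpcurl` invocation along the lines the edited text describes can be sketched as follows. This is a minimal sketch, not the command from the actual quickstart: the proto path (`fvt/proto/kfs_inference_v2.proto`), the input tensor name `predict`, and the placeholder tensor values are assumptions; the real SKLearn MNIST model expects a 1x64 FP32 input, shortened here to 1x2 for readability.

```shell
# Sketch of a KServe v2 ModelInfer request via grpcurl.
# Assumptions: proto path, input name, and tensor contents are illustrative;
# the real MNIST model expects shape [1, 64].
MODEL_NAME=example-sklearn-isvc

# ModelInferRequest payload (KServe v2 gRPC inference protocol).
PAYLOAD='{"model_name": "'"${MODEL_NAME}"'", "inputs": [{"name": "predict", "shape": [1, 2], "datatype": "FP32", "contents": {"fp32_contents": [0.0, 1.0]}}]}'

# Requires the port-forward from the section above to be running
# in another terminal; guarded so the sketch degrades gracefully.
if command -v grpcurl >/dev/null 2>&1; then
  grpcurl -plaintext \
    -proto fvt/proto/kfs_inference_v2.proto \
    -d "$PAYLOAD" \
    localhost:8033 \
    inference.GRPCInferenceService.ModelInfer
else
  echo "grpcurl not found; install it first (e.g. brew install grpcurl)"
fi
```

On success, the response is a `ModelInferResponse` JSON object whose `outputs` field carries the model's prediction tensor.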