
Commit

update ci tests for mnist example
jinchihe committed Dec 3, 2019
1 parent 341decc commit 0adaf2e
Showing 19 changed files with 411 additions and 438 deletions.
106 changes: 11 additions & 95 deletions mnist/README.md
@@ -6,9 +6,10 @@
- [Prerequisites](#prerequisites)
- [Deploy Kubeflow](#deploy-kubeflow)
- [Local Setup](#local-setup)
- [GCP Setup](#gcp-setup)
- [Modifying existing examples](#modifying-existing-examples)
- [Prepare model](#prepare-model)
- [Build and push model image.](#build-and-push-model-image)
- [(Optional) Build and push model image.](#optional-build-and-push-model-image)
- [Preparing your Kubernetes Cluster](#preparing-your-kubernetes-cluster)
- [Training your model](#training-your-model)
- [Local storage](#local-storage)
@@ -53,6 +54,9 @@ You also need the following command line tools:

**Note:** kustomize [v2.0.3](https://github.com/kubernetes-sigs/kustomize/releases/tag/v2.0.3) is recommended because of a [problem](https://github.com/kubernetes-sigs/kustomize/issues/1295) in kustomize v2.1.0.

### GCP Setup

If you are using GCP, you need to enable [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) before executing the steps below.
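
As a rough, hedged sketch (the cluster, zone, and project values below are placeholders, and flag names have varied across gcloud releases — follow the linked guide for the authoritative steps), enabling Workload Identity on an existing GKE cluster looks roughly like this:

```
# Hedged sketch only; CLUSTER_NAME, ZONE and PROJECT_ID are placeholders.
gcloud container clusters update CLUSTER_NAME \
  --zone=ZONE \
  --workload-pool=PROJECT_ID.svc.id.goog
```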

## Modifying existing examples

@@ -70,9 +74,9 @@ Basically, we must:

The resulting model is [model.py](model.py).

### Build and push model image.
### (Optional) Build and push model image.

With our code ready, we will now build/push the docker image.
With our code ready, we will now build/push the docker image; alternatively, you can skip building and pushing and use the existing image `gcr.io/kubeflow-ci/mnist/model:latest`.

```
DOCKER_URL=docker.io/reponame/mytfmodel:tag # Put your docker registry here
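# (Hedged addition, not part of the original snippet.) With the registry URL set,
# the image is typically built and pushed roughly as follows, assuming a Dockerfile
# in the current directory:
docker build -t ${DOCKER_URL} .
docker push ${DOCKER_URL}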
```

@@ -200,7 +204,7 @@ kustomize edit add configmap mnist-map-training --from-literal=name=mnist-train-
Optionally, if you want to use your custom training image, configure it as shown below.

```
kustomize edit set image training-image=$DOCKER_URL:$TAG
kustomize edit set image training-image=$DOCKER_URL
```

Next we configure it to run distributed training by setting the number of parameter servers and workers to use: `numPs` is the number of parameter servers and `numWorkers` is the number of workers.
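
The commands that set these values sit in a collapsed part of this diff; following the configmap pattern used throughout this guide, they presumably look roughly like this (the key names `numPs` and `numWorkers` are taken from the prose above):

```
kustomize edit add configmap mnist-map-training --from-literal=numPs=1
kustomize edit add configmap mnist-map-training --from-literal=numWorkers=2
```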
@@ -225,94 +229,6 @@ kustomize edit add configmap mnist-map-training --from-literal=modelDir=gs://${B

```
kustomize edit add configmap mnist-map-training --from-literal=exportDir=gs://${BUCKET}/${MODEL_PATH}/export
```

In order to write to GCS we need to supply the TFJob with GCP credentials. We do
this by telling our training code to use a [Google service account](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually).

If you followed the [getting started guide for GKE](https://www.kubeflow.org/docs/started/getting-started-gke/)
then a number of steps have already been performed for you:

1. We created a Google service account named `${DEPLOYMENT}-user`

* You can run the following command to list all service accounts in your project

```
gcloud --project=${PROJECT} iam service-accounts list
```
2. We stored the private key for this account in a K8s secret named `user-gcp-sa`
* To see the secrets in your cluster
```
kubectl get secrets
```
3. We granted this service account permission to read/write GCS buckets in this project
* To see the IAM policy you can do
```
gcloud projects get-iam-policy ${PROJECT} --format=yaml
```
* The output should look like the following
```
bindings:
...
- members:
  - serviceAccount:${DEPLOYMENT}-user@${PROJECT}.iam.gserviceaccount.com
  ...
  role: roles/storage.admin
...
etag: BwV_BqSmSCY=
version: 1
```
To use this service account we perform the following steps:
1. Mount the secret `user-gcp-sa` into the pod and configure the mount path of the secret.
```
kustomize edit add configmap mnist-map-training --from-literal=secretName=user-gcp-sa
kustomize edit add configmap mnist-map-training --from-literal=secretMountPath=/var/secrets
```
* Note: ensure your environment is pointed at the same `kubeflow` namespace as the `user-gcp-sa` secret.
2. Next we need to set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` so that our code knows where to look for the service account key.
```
kustomize edit add configmap mnist-map-training --from-literal=GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/user-gcp-sa.json
```
* If we look at the spec for our job we can see that the environment variable `GOOGLE_APPLICATION_CREDENTIALS` is set.
```
kustomize build .
```
```
apiVersion: kubeflow.org/v1beta2
kind: TFJob
metadata:
  ...
spec:
  tfReplicaSpecs:
    Chief:
      replicas: 1
      template:
        spec:
          containers:
          - command:
            ..
            env:
            ...
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/user-gcp-sa.json
            ...
          ...
        ...
```

You can now submit the job.
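
A hedged sketch of submitting and then checking the job (assuming the kustomize overlay directory used above and a `kubeflow` namespace):

```
kustomize build . | kubectl apply -f -
kubectl get tfjobs -n kubeflow
```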

@@ -385,21 +301,21 @@ In order to write to S3 we need to supply the TensorFlow code with AWS credentia
export S3_MODEL_EXPORT_URI=s3://${BUCKET_NAME}/export
```
1. Create a K8s secret containing your AWS credentials
2. Create a K8s secret containing your AWS credentials
```
kustomize edit add secret aws-creds --from-literal=awsAccessKeyID=${AWS_ACCESS_KEY_ID} \
--from-literal=awsSecretAccessKey=${AWS_SECRET_ACCESS_KEY}
```
1. Pass secrets as environment variables into pod
3. Pass secrets as environment variables into pod
```
kustomize edit add configmap mnist-map-training --from-literal=awsAccessKeyIDName=awsAccessKeyID
kustomize edit add configmap mnist-map-training --from-literal=awsSecretAccessKeyName=awsSecretAccessKey
```
1. Next we need to set a whole bunch of S3 related environment variables so that TensorFlow knows how to talk to S3
4. Next we need to set several S3-related environment variables so that TensorFlow knows how to talk to S3
```
kustomize edit add configmap mnist-map-training --from-literal=S3_ENDPOINT=${S3_ENDPOINT}
```
17 changes: 0 additions & 17 deletions mnist/serving/GCS/deployment_patch.yaml

This file was deleted.

8 changes: 0 additions & 8 deletions mnist/serving/GCS/kustomization.yaml
@@ -3,11 +3,3 @@ kind: Kustomization

bases:
- ../base

patchesJson6902:
- path: deployment_patch.yaml
target:
group: extensions
kind: Deployment
name: $(svcName)
version: v1beta1
98 changes: 95 additions & 3 deletions mnist/testing/conftest.py
@@ -1,14 +1,62 @@
import os
import pytest

def pytest_addoption(parser):

parser.addoption(
"--tfjob_name", help="Name for the TFjob.",
type=str, default="mnist-test-" + os.getenv('BUILD_ID'))

parser.addoption(
"--namespace", help=("The namespace to run in. This should correspond to"
"a namespace associated with a Kubeflow namespace."),
type=str, default="kubeflow-kubeflow-testing")

parser.addoption(
"--repos", help="The repos to checkout; leave blank to use defaults",
type=str, default="")

parser.addoption(
"--trainer_image", help="TFJob training image",
type=str, default="gcr.io/kubeflow-ci/mnist/model:build-" + os.getenv('BUILD_ID'))

parser.addoption(
"--train_steps", help="train steps for mnist testing",
type=str, default="200")

parser.addoption(
"--batch_size", help="batch size for mnist trainning",
type=str, default="100")

parser.addoption(
"--learning_rate", help="mnist learning rate",
type=str, default="0.01")

parser.addoption(
"--num_ps", help="The number of PS",
type=str, default="1")

parser.addoption(
"--num_workers", help="The number of Worker",
type=str, default="2")

parser.addoption(
"--model_dir", help="Path for model saving",
type=str, default="gs://kubeflow-ci-deployment_ci-temp/mnist/models/" + os.getenv('BUILD_ID'))

parser.addoption(
"--export_dir", help="Path for model exporting",
type=str, default="gs://kubeflow-ci-deployment_ci-temp/mnist/models/" + os.getenv('BUILD_ID'))

parser.addoption(
"--deploy_name", help="Name for the service deployment",
type=str, default="mnist-test-" + os.getenv('BUILD_ID'))

parser.addoption(
"--master", action="store", default="", help="IP address of GKE master")

parser.addoption(
"--service", action="store", default="mnist-test-" + os.getenv('BUILD_ID'),
help="The name of the mnist K8s service")

@pytest.fixture
@@ -22,3 +70,47 @@ def namespace(request):
@pytest.fixture
def service(request):
return request.config.getoption("--service")

@pytest.fixture
def tfjob_name(request):
return request.config.getoption("--tfjob_name")

@pytest.fixture
def repos(request):
return request.config.getoption("--repos")

@pytest.fixture
def trainer_image(request):
return request.config.getoption("--trainer_image")

@pytest.fixture
def train_steps(request):
return request.config.getoption("--train_steps")

@pytest.fixture
def batch_size(request):
return request.config.getoption("--batch_size")

@pytest.fixture
def learning_rate(request):
return request.config.getoption("--learning_rate")

@pytest.fixture
def num_ps(request):
return request.config.getoption("--num_ps")

@pytest.fixture
def num_workers(request):
return request.config.getoption("--num_workers")

@pytest.fixture
def model_dir(request):
return request.config.getoption("--model_dir")

@pytest.fixture
def export_dir(request):
return request.config.getoption("--export_dir")

@pytest.fixture
def deploy_name(request):
return request.config.getoption("--deploy_name")

