Commit

add cloud-platform scope in the test to reclaim the ai platform sample models (#2355)

* add cloud-platform scope
* fix bug in the client wait_for_run_completion
gaoning777 authored Oct 11, 2019
1 parent 1045b10 commit 93e7b8e
Showing 5 changed files with 21 additions and 3 deletions.
1 change: 1 addition & 0 deletions samples/contrib/parameterized_tfx_oss/README.md
@@ -16,6 +16,7 @@ Finally, run `python setup.py install` from `tfx/tfx`. After that, running
 `chicago_taxi_pipeline_simple.py` compiles the TFX pipeline into KFP pipeline package.
 This pipeline requires google storage permission to run.
 
+
 ## Caveats
 
 This sample uses pipeline parameters in a TFX pipeline, which is not yet fully supported.
18 changes: 17 additions & 1 deletion samples/core/ai_platform/ai_platform.ipynb
@@ -280,7 +280,23 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "kfp.Client().create_run_from_pipeline_func(pipeline, arguments={}, experiment_name=experiment_name)"
+    "pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={}, experiment_name=experiment_name)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Wait for the pipeline to finish"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "pipeline.wait_for_run_completion(timeout=1800)"
+   ]
+  },
   {
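Outside the notebook, the new wait step amounts to keeping the RunPipelineResult handle that create_run_from_pipeline_func returns and blocking on it. A minimal sketch of that pattern, assuming a reachable Kubeflow Pipelines endpoint; the pipeline body and the experiment name are illustrative placeholders, not the sample's actual code:

import kfp
from kfp import dsl

@dsl.pipeline(name='example', description='Illustrative stub pipeline.')
def my_pipeline():
    pass  # real pipeline steps would go here

# Keep the returned RunPipelineResult instead of discarding it,
# so the script can block until the run finishes.
result = kfp.Client().create_run_from_pipeline_func(
    my_pipeline, arguments={}, experiment_name='ai-platform-sample')

# Poll the run until it completes; 1800 seconds matches the notebook cell.
result.wait_for_run_completion(timeout=1800)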
1 change: 1 addition & 0 deletions samples/core/xgboost_training_cm/README.md
@@ -12,6 +12,7 @@ or not.
 
 Preprocessing uses Google Cloud DataProc. Therefore, you must enable the [DataProc API](https://cloud.google.com/endpoints/docs/openapi/enable-api) for the given GCP project.
 
+
 ## Compile
 
 Follow the guide to [building a pipeline](https://www.kubeflow.org/docs/guides/pipelines/build-pipeline/) to install the Kubeflow Pipelines SDK and compile the sample Python into a workflow specification. The specification takes the form of a YAML file compressed into a `.zip` file.
2 changes: 1 addition & 1 deletion sdk/python/kfp/_client.py
@@ -358,7 +358,7 @@ def __init__(self, client, run_info):
 
   def wait_for_run_completion(self, timeout=None):
     timeout = timeout or datetime.datetime.max - datetime.datetime.min
-    return self._client.wait_for_run_completion(timeout)
+    return self._client.wait_for_run_completion(self.run_id, timeout)
 
   def __str__(self):
     return '<RunPipelineResult(run_id={})>'.format(self.run_id)
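This one-line change is the client bug named in the commit message: Client.wait_for_run_completion takes the run id as its first positional parameter, so the old wrapper passed the timeout where the run id belongs. A sketch of the corrected wrapper in context; only _client, run_id, and the method body come from the diff, and the constructor shape is an assumption for illustration:

import datetime

class RunPipelineResult:
  def __init__(self, client, run_info):
    self._client = client
    self.run_id = run_info.id  # assumed shape of run_info

  def wait_for_run_completion(self, timeout=None):
    # No timeout means wait effectively forever.
    timeout = timeout or datetime.datetime.max - datetime.datetime.min
    # Fixed call: the run id goes first, then the timeout. The buggy
    # version passed only `timeout`, which landed in the run_id slot.
    return self._client.wait_for_run_completion(self.run_id, timeout)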
2 changes: 1 addition & 1 deletion test/deploy-cluster.sh
@@ -48,7 +48,7 @@ else
   SHOULD_CLEANUP_CLUSTER=true
   # "storage-rw" is needed to allow VMs to push to gcr.io
   # reference: https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam
-  SCOPE_ARG="--scopes=storage-rw"
+  SCOPE_ARG="--scopes=storage-rw,cloud-platform"
   # Machine type and cluster size is the same as kubeflow deployment to
   # easily compare performance. We can reduce usage later.
   NODE_POOL_CONFIG_ARG="--num-nodes=2 --machine-type=n1-standard-8 \
