
SDK - Client - Added a way to set experiment name using environment variables #2292

6 changes: 3 additions & 3 deletions samples/README.md
@@ -75,11 +75,11 @@ For better readability and integrations with the sample test infrastructure, sam
* The sample file should be either `*.py` or `*.ipynb`, and its file name is consistent with its directory name.
* For `*.py` sample, it's recommended to have a main invoking `kfp.compiler.Compiler().compile()` to compile the
pipeline function into pipeline yaml spec.
* For `*.ipynb` sample, parameters (e.g., `experiment_name` and `project_name`)
* For `*.ipynb` sample, parameters (e.g., `project_name`)
should be defined in a dedicated cell and tagged as parameter.
(If the author would like the sample test infra to run it by setting the `run_pipeline` flag to True in
the associated `config.yaml` file, the sample test infra will expect a parameter `experiment_name`
to inject so that it can run in the sample test experiment.)
the associated `config.yaml` file, the sample test infra will expect the sample to use the
`kfp.Client().create_run_from_pipeline_func` method for starting the run so that the sample test can watch the run.)
Detailed guideline is
[here](https://github.com/nteract/papermill). Also, all the environment setup and
preparation should be within the notebook, such as by `!pip install packages`
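The guideline above removes the per-sample `experiment_name` parameter; the sample test infra can instead inject an experiment externally. A minimal sketch of that injection (the environment variable name comes from this PR's change to `kfp/_client.py`; the rest is illustrative, not real infra code):

```python
import os

# Pin every subsequently started run to one experiment before the
# sample itself executes. The sample then only needs to call
# kfp.Client().create_run_from_pipeline_func(...) with no
# experiment_name argument of its own.
os.environ['KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME'] = 'notebook-sample-test'
```

Because the override variable takes precedence inside the SDK, this works even for samples that still pass an explicit experiment name.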
@@ -48,7 +48,6 @@
"\n",
"**Please fill in the below environment variables with your own settings.**\n",
"\n",
"- **EXPERIMENT_NAME**: A unique experiment name that will be created for this notebook demo.\n",
"- **KFP_PACKAGE**: The latest release of kubeflow pipeline platform library.\n",
"- **KUBEFLOW_PIPELINE_LINK**: The link to access the KubeFlow pipeline API.\n",
"- **MOUNT**: The mount configuration to map data above into the training job. The format is 'data:/directory'\n",
@@ -61,8 +60,6 @@
"metadata": {},
"outputs": [],
"source": [
"EXPERIMENT_NAME = 'myjob'\n",
"RUN_ID=\"run\"\n",
"KFP_SERVICE=\"ml-pipeline.kubeflow.svc.cluster.local:8888\"\n",
"KFP_PACKAGE = 'http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp/0.1.14/kfp.tar.gz'\n",
"KFP_ARENA_PACKAGE = 'http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp-arena/kfp-arena-0.3.tar.gz'\n",
7 changes: 2 additions & 5 deletions samples/contrib/ibm-samples/ffdl-seldon/ffdl_pipeline.ipynb
@@ -90,10 +90,7 @@
"# KUBEFLOW_PIPELINE_LINK = ''\n",
"# client = kfp.Client(KUBEFLOW_PIPELINE_LINK)\n",
"\n",
"client = kfp.Client()\n",
"\n",
"\n",
"EXPERIMENT_NAME = 'FfDL-Seldon Experiments'"
"client = kfp.Client()\n"
]
},
{
@@ -179,7 +176,7 @@
" 'model-class-file': 'gender_classification.py'}\n",
"\n",
"\n",
"run = client.create_run_from_pipeline_func(ffdlPipeline, arguments=parameters, experiment_name=EXPERIMENT_NAME).run_info\n",
"run = client.create_run_from_pipeline_func(ffdlPipeline, arguments=parameters).run_info\n",
"\n",
"import IPython\n",
"html = ('<p id=\"link\"> </p> <script> document.getElementById(\"link\").innerHTML = \"Actual Run link <a href=//\" + location.hostname + \"%s/#/runs/details/%s target=_blank >here</a>\"; </script>'\n",
@@ -135,7 +135,6 @@
"outputs": [],
"source": [
"# Kubeflow project settings\n",
"EXPERIMENT_NAME = 'Image Captioning'\n",
"PROJECT_NAME = '[YOUR-PROJECT-NAME]' \n",
"PIPELINE_STORAGE_PATH = GCS_BUCKET + '/ms-coco/components' # path to save pipeline component images\n",
"BASE_IMAGE = 'gcr.io/%s/img-cap:latest' % PROJECT_NAME # using image created in README instructions\n",
@@ -913,7 +912,7 @@
" 'training_batch_size': 16, # has to be smaller since only training on 80/100 examples \n",
"}\n",
"\n",
"kfp.Client().create_run_from_pipeline_func(pipeline, arguments=arguments, experiment_name=EXPERIMENT_NAME)"
"kfp.Client().create_run_from_pipeline_func(pipeline, arguments=arguments)"
]
},
{
7 changes: 3 additions & 4 deletions samples/core/ai_platform/ai_platform.ipynb
@@ -31,7 +31,7 @@
"%%capture\n",
"\n",
"# Install the SDK (Uncomment the code if the SDK is not installed before)\n",
"!python3 -m pip install kfp --upgrade -q\n",
"!python3 -m pip install 'kfp>=0.1.31' --quiet\n",
"!python3 -m pip install pandas --upgrade -q"
]
},
@@ -79,8 +79,7 @@
"source": [
"# Required Parameters\n",
"project_id = '<ADD GCP PROJECT HERE>'\n",
"output = 'gs://<ADD STORAGE LOCATION HERE>' # No ending slash\n",
"experiment_name = 'Chicago Crime Prediction'"
"output = 'gs://<ADD STORAGE LOCATION HERE>' # No ending slash\n"
]
},
{
@@ -280,7 +279,7 @@
"metadata": {},
"outputs": [],
"source": [
"pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={}, experiment_name=experiment_name)"
"pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})"
]
},
{
5 changes: 2 additions & 3 deletions samples/core/component_build/component_build.ipynb
@@ -30,7 +30,7 @@
"outputs": [],
"source": [
"# Install Pipeline SDK - This only needs to be run once in the environment.\n",
"!pip3 install kfp --upgrade --quiet"
"!python3 -m pip install 'kfp>=0.1.31' --quiet\n"
]
},
{
@@ -65,7 +65,6 @@
},
"outputs": [],
"source": [
"experiment_name = 'container_building'"
]
},
{
@@ -202,7 +201,7 @@
"outputs": [],
"source": [
"arguments = {'a': '7', 'b': '8'}\n",
"kfp.Client().create_run_from_pipeline_func(pipeline_func=calc_pipeline, arguments=arguments, experiment_name=experiment_name)\n",
"kfp.Client().create_run_from_pipeline_func(pipeline_func=calc_pipeline, arguments=arguments)\n",
"\n",
"# This should output link that leads to the run information page. \n",
"# Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working"
8 changes: 3 additions & 5 deletions samples/core/dataflow/dataflow.ipynb
@@ -74,8 +74,7 @@
"outputs": [],
"source": [
"project = 'Input your PROJECT ID'\n",
"output = 'Input your GCS bucket name' # No ending slash\n",
"experiment_name = 'Dataflow - Launch Python'"
"output = 'Input your GCS bucket name' # No ending slash\n"
]
},
{
@@ -95,8 +94,7 @@
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"!pip3 install kfp --upgrade"
"!python3 -m pip install 'kfp>=0.1.31' --quiet\n"
]
},
{
@@ -368,7 +366,7 @@
}
],
"source": [
"kfp.Client().create_run_from_pipeline_func(pipeline, arguments={}, experiment_name=experiment_name)"
"kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})"
]
},
{
@@ -162,7 +162,7 @@
}
],
"source": [
"!pip3 install kfp --upgrade"
"!python3 -m pip install 'kfp>=0.1.31' --quiet\n"
]
},
{
5 changes: 2 additions & 3 deletions samples/core/kubeflow_tf_serving/kubeflow_tf_serving.ipynb
@@ -132,7 +132,7 @@
],
"source": [
"# Install Pipeline SDK - This only needs to be run once in the environment.\n",
"!pip3 install kfp --upgrade\n",
"!python3 -m pip install 'kfp>=0.1.31' --quiet\n",
"!pip3 install tensorflow==1.14 --upgrade"
]
},
@@ -172,7 +172,6 @@
"# Set your output and project. !!!Must Do before you can proceed!!!\n",
"project = 'Your-Gcp-Project-ID' #'Your-GCP-Project-ID'\n",
"model_name = 'model-name' # Model name matching TF_serve naming requirements \n",
"experiment_name = 'serving_component'\n",
"import time\n",
"ts = int(time.time())\n",
"model_version = str(ts) # Here we use timestamp as version to avoid conflict \n",
@@ -323,7 +322,7 @@
}
],
"source": [
"kfp.Client().create_run_from_pipeline_func(model_server, arguments={}, experiment_name=experiment_name)\n",
"kfp.Client().create_run_from_pipeline_func(model_server, arguments={})\n",
"\n",
"#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)"
]
@@ -29,7 +29,6 @@
},
"outputs": [],
"source": [
"experiment_name = 'lightweight python components'"
]
},
{
@@ -39,7 +38,7 @@
"outputs": [],
"source": [
"# Install the SDK\n",
"!pip3 install kfp --upgrade"
"#!pip3 install 'kfp>=0.1.31.2' --quiet"
]
},
{
@@ -243,7 +242,7 @@
"arguments = {'a': '7', 'b': '8'}\n",
"\n",
"#Submit a pipeline run\n",
"kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments, experiment_name=experiment_name)\n",
"kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)\n",
"\n",
"#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)"
]
7 changes: 3 additions & 4 deletions samples/core/multiple_outputs/multiple_outputs.ipynb
@@ -30,7 +30,7 @@
},
"outputs": [],
"source": [
"!pip install kfp --upgrade"
"!python3 -m pip install 'kfp>=0.1.31' --quiet\n"
]
},
{
@@ -51,8 +51,7 @@
"outputs": [],
"source": [
"output = 'gs://[BUCKET-NAME]' # GCS bucket name\n",
"project_id = '[PROJECT-NAME]' # GCP project name\n",
"experiment_name = 'Multiple Outputs Sample'"
"project_id = '[PROJECT-NAME]' # GCP project name\n"
]
},
{
@@ -161,7 +160,7 @@
" 'b': 2.5,\n",
" 'c': 3.0,\n",
"}\n",
"run_result = kfp.Client().create_run_from_pipeline_func(pipeline, arguments=arguments, experiment_name=experiment_name)"
"run_result = kfp.Client().create_run_from_pipeline_func(pipeline, arguments=arguments)"
]
}
],
2 changes: 1 addition & 1 deletion samples/core/tfx-oss/TFX Example.ipynb
@@ -18,7 +18,7 @@
"outputs": [],
"source": [
"!pip3 install tfx==0.13.0 --upgrade\n",
"!pip3 install kfp --upgrade"
"!python3 -m pip install 'kfp>=0.1.31' --quiet\n"
]
},
{
9 changes: 8 additions & 1 deletion sdk/python/kfp/_client.py
@@ -64,6 +64,8 @@ def camel_case_to_snake_case(name):

KF_PIPELINES_ENDPOINT_ENV = 'KF_PIPELINES_ENDPOINT'
KF_PIPELINES_UI_ENDPOINT_ENV = 'KF_PIPELINES_UI_ENDPOINT'
KF_PIPELINES_DEFAULT_EXPERIMENT_NAME = 'KF_PIPELINES_DEFAULT_EXPERIMENT_NAME'
KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME = 'KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME'

class Client(object):
""" API Client for KubeFlow Pipeline.
@@ -365,7 +367,12 @@ def __str__(self):

#TODO: Check arguments against the pipeline function
pipeline_name = os.path.basename(pipeline_file)
experiment_name = experiment_name or 'Default'
experiment_name = experiment_name or os.environ.get(KF_PIPELINES_DEFAULT_EXPERIMENT_NAME, None)
overridden_experiment_name = os.environ.get(KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME, experiment_name)
if overridden_experiment_name != experiment_name:
import warnings
warnings.warn('Changing experiment name from "{}" to "{}".'.format(experiment_name, overridden_experiment_name))
experiment_name = overridden_experiment_name or 'Default'
run_name = run_name or pipeline_name + ' ' + datetime.now().strftime('%Y-%m-%d %H-%M-%S')
experiment = self.create_experiment(name=experiment_name)
run_info = self.run_pipeline(experiment.id, run_name, pipeline_file, arguments)
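The precedence rules in the hunk above are the core of this PR. Restated as a standalone sketch (a hypothetical helper that mirrors the diff's logic, not the actual `kfp.Client` method): explicit argument beats the default env var, the override env var beats both, and `'Default'` is the final fallback.

```python
import os
import warnings

# Environment variable names added by this PR in kfp/_client.py.
KF_PIPELINES_DEFAULT_EXPERIMENT_NAME = 'KF_PIPELINES_DEFAULT_EXPERIMENT_NAME'
KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME = 'KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME'


def resolve_experiment_name(experiment_name=None):
    """Mirror of the resolution order in the diff above."""
    # Fall back from the explicit argument to the default env var.
    experiment_name = experiment_name or os.environ.get(
        KF_PIPELINES_DEFAULT_EXPERIMENT_NAME, None)
    # The override env var (used e.g. by test infrastructure) wins over both,
    # with a warning when it actually changes the name.
    overridden = os.environ.get(
        KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME, experiment_name)
    if overridden != experiment_name:
        warnings.warn('Changing experiment name from "{}" to "{}".'.format(
            experiment_name, overridden))
    return overridden or 'Default'
```

With no env vars set and no argument, this resolves to `'Default'`, preserving the SDK's previous behavior; setting the override variable redirects even runs that pass an explicit `experiment_name`.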
7 changes: 4 additions & 3 deletions test/sample-test/check_notebook_results.py
@@ -21,26 +21,27 @@


class NoteBookChecker(object):
def __init__(self, testname, result, run_pipeline, namespace='kubeflow'):
def __init__(self, testname, result, run_pipeline, experiment_name, namespace='kubeflow'):
""" Util class for checking notebook sample test running results.

:param testname: test name in the json xml.
:param result: name of the file that stores the test result
:param run_pipeline: whether to submit for a pipeline run.
:param namespace: where the pipeline system is deployed.
:param experiment_name: Name of the experiment to monitor
"""
self._testname = testname
self._result = result
self._exit_code = None
self._run_pipeline = run_pipeline
self._namespace = namespace
self._experiment_name = experiment_name

def run(self):
""" Run the notebook sample as a python script. """
self._exit_code = str(
subprocess.call(['ipython', '%s.py' % self._testname]))


def check(self):
""" Check the pipeline running results of the notebook sample. """
test_cases = []
@@ -63,7 +64,7 @@ def check(self):
test_timeout = raw_args['test_timeout']

if self._run_pipeline:
experiment = self._testname + '-test'
experiment = self._experiment_name
###### Initialization ######
host = 'ml-pipeline.%s.svc.cluster.local:8888' % self._namespace
client = Client(host=host)
1 change: 0 additions & 1 deletion test/sample-test/configs/ai_platform.config.yaml
@@ -16,4 +16,3 @@ test_name: ai_platform
notebook_params:
output:
project_id: ml-pipeline-test
experiment_name: ai_platform-test
1 change: 0 additions & 1 deletion test/sample-test/configs/component_build.config.yaml
@@ -14,5 +14,4 @@

test_name: component_build
notebook_params:
experiment_name: component_build-test
PROJECT_NAME: ml-pipeline-test
1 change: 0 additions & 1 deletion test/sample-test/configs/dataflow.config.yaml
@@ -16,5 +16,4 @@ test_name: dataflow
notebook_params:
output:
project: ml-pipeline-test
experiment_name: dataflow-test
run_pipeline: False
1 change: 0 additions & 1 deletion test/sample-test/configs/kubeflow_tf_serving.config.yaml
@@ -16,4 +16,3 @@ test_name: kubeflow_tf_serving
notebook_params:
output:
project: ml-pipeline-test
experiment_name: kubeflow_tf_serving-test
1 change: 0 additions & 1 deletion test/sample-test/configs/multiple_outputs.config.yaml
@@ -16,4 +16,3 @@ test_name: multiple_outputs
notebook_params:
output:
project_id: ml-pipeline-test
experiment_name: multiple_outputs-test
7 changes: 4 additions & 3 deletions test/sample-test/run_sample_test.py
@@ -25,16 +25,18 @@


class PySampleChecker(object):
def __init__(self, testname, input, output, result, namespace='kubeflow'):
def __init__(self, testname, input, output, result, experiment_name, namespace='kubeflow'):
"""Util class for checking python sample test running results.

:param testname: test name.
:param input: The path of a pipeline file that will be submitted.
:param output: The path of the test output.
:param result: The path of the test result that will be exported.
:param namespace: namespace of the deployed pipeline system. Default: kubeflow
:param experiment_name: Name of the experiment to monitor
"""
self._testname = testname
self._experiment_name = experiment_name
self._input = input
self._output = output
self._result = result
@@ -68,8 +70,7 @@ def run(self):
exit(1)

###### Create Experiment ######
experiment_name = self._testname + ' sample experiment'
response = self._client.create_experiment(experiment_name)
response = self._client.create_experiment(self._experiment_name)
self._experiment_id = response.id
utils.add_junit_test(self._test_cases, 'create experiment', True)
