
Add step by step tutorial using mnist as use case #2716

Merged
merged 5 commits into from
Jan 6, 2020
Conversation

luotigerlsx
Member

@luotigerlsx luotigerlsx commented Dec 11, 2019

In this tutorial, we designed a series of notebooks to demonstrate, step by step, how to interact with Kubeflow Pipelines through the Python SDK. In particular:

  • 00 Kubeflow Cluster Setup: this notebook helps you deploy a Kubeflow cluster through the CLI. The UI method of deploying a Kubeflow cluster does not support Kubeflow v0.7 yet.

Then, notebooks 01-04 use one concrete use case, i.e.,
MNIST classification, to demonstrate different ways of authoring a pipeline component:

  • 01 Lightweight Python Components: this notebook demonstrates how to build a
    component by defining a stand-alone Python function and then calling kfp.components.func_to_container_op(func) to convert it into a component that can be used in a pipeline (a minimal sketch follows this list).

  • 02 Local Development with Docker Image Components: this notebook guides you through creating a pipeline component with kfp.dsl.ContainerOp from an existing Docker image, which should contain the program that performs the task required in a particular step of your ML workflow.

  • 03 Reusable Components: this notebook describes the manual way of writing a full component program (in any language) and a component definition for it. Below is a summary of the steps involved in creating and using a component.

    • Write the program that contains your component’s logic. The program must use files and command-line arguments
      to pass data to and from the component.
    • Containerize the program.
    • Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.
    • Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline.
  • 04 Reusable and Pre-built Components as Pipeline: this notebook combines our built components, a pre-built GCP AI Platform component, and a lightweight component to compose a pipeline with three steps:

    • Train an MNIST model and export it to GCS
    • Deploy the exported TensorFlow model on AI Platform prediction service
    • Test the deployment by calling the end point with test data
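To make the difference between the lightweight (01) and reusable (03) styles concrete, here is a minimal sketch using the Kubeflow Pipelines SDK. It is not taken from the notebooks; the function, component names, and the YAML snippet are illustrative only.

```python
import kfp
from kfp import components, dsl

# 01 Lightweight Python component: a stand-alone function converted in place.
def add(a: float, b: float) -> float:
    """Return the sum of two numbers."""
    return a + b

add_op = components.func_to_container_op(add)

# 03 Reusable component: load a component from a YAML definition (illustrative spec).
echo_op = components.load_component_from_text("""
name: Echo
description: Prints a message.
inputs:
- {name: message, type: String}
implementation:
  container:
    image: alpine
    command: [echo, {inputValue: message}]
""")

@dsl.pipeline(name='demo-pipeline', description='Combines both component styles.')
def demo_pipeline(message: str = 'hello'):
    add_task = add_op(1, 2)       # step built from the lightweight component
    echo_task = echo_op(message)  # step built from the YAML-defined component
```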


@k8s-ci-robot
Contributor

Hi @luotigerlsx. Thanks for your PR.

I'm waiting for a kubeflow member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@luotigerlsx
Member Author

Adding @saurabh24292 here, who also contributed to this effort.

@gaoning777
Contributor

These are really good samples. Thanks for the contributions. Adding these samples in the tutorials looks good to me. Probably add sample tests for these tutorials such that they are tested before releases.
However, the current sample test infra might need some improvement to cover the tests in the tutorial directory. Here is the sample test infra code: https://github.com/kubeflow/pipelines/tree/master/test/sample-test. Would you mind adding the required code for the sample test infra?
Thanks

@gaoning777
Contributor

/cc @numerology

@gaoning777
Contributor

/ok-to-test

@numerology

Thanks @luotigerlsx ! This PR is very useful.

One small question: the full-fledged Kubeflow deployment looks good to me. Did you also consider demonstrating the standalone deployment?

@luotigerlsx
Member Author

Hey @gaoning777 and @numerology , thanks for the review. We will address your comments and get back to you :). Have a great weekend.

@luotigerlsx
Member Author

Thanks @luotigerlsx ! This PR is very useful.

One small question: the full-fledged Kubeflow deployment looks good to me. Did you also consider demonstrating the standalone deployment?

We want to demonstrate the full-fledged deployment, actually. The standalone deployment is intentionally not covered here, to avoid confusion.

@luotigerlsx
Member Author

luotigerlsx commented Dec 16, 2019

These are really good samples. Thanks for the contributions. Adding these samples in the tutorials looks good to me. Probably add sample tests for these tutorials such that they are tested before releases.
However, the current sample test infra might need some improvement to cover the tests in the tutorial directory. Here is the sample test infra code: https://github.com/kubeflow/pipelines/tree/master/test/sample-test. Would you mind adding the required code for the sample test infra?
Thanks

Hey @gaoning777 , I am trying to follow the instructions mentioned for samples/core, which seem to only work for the case of having a single notebook in a folder with the same name. Currently, I have tested the notebooks manually.

I don't know the details of the current test infra setup, and it may take quite a bit of time to figure out how to make it work. Maybe it would be more efficient to have someone who knows the process work on it. Sorry, and I hope I can be more helpful.

@luotigerlsx
Member Author

Hey @gaoning777 and @numerology , thanks again for the review. We have tried to address all your comments. Please kindly help to have a look.

@luotigerlsx
Member Author

Hey @gaoning777 and @numerology , thanks again for the review. We have tried to address all your comments. Please kindly help to have a look.

Hi @gaoning777 , would you kindly have a look at whether the concern is addressed and the PR can be merged. Thanks, Shixin

Before you follow the instructions below to deploy your own Kubeflow cluster, you should

- have a [GCP project setup](https://www.kubeflow.org/docs/gke/deploy/project-setup/) for your Kubeflow deployment
with you having the [owner role](https://cloud.google.com/iam/docs/understanding-roles#primitive_role_definitions)
Contributor

Owner role is sufficient but not necessary, correct?
If so, could you adjust the role to the minimum required one?

Member Author

Hey, we actually cite from here that owner is required.

We have also experimented; it seems owner is necessary.

kfctl apply -V -f ${CONFIG_URI}
```
### Running Notebook
Please not that the above configuration is required for notebook service running outside Kubeflow environment.
Contributor

typo: not->note?

Member Author

@luotigerlsx luotigerlsx Dec 21, 2019

Yes, a typo. Thanks :)

Please not that the above configuration is required for notebook service running outside Kubeflow environment.
And the examples demonstrated are fully tested on notebook service for the following three situations:
- Notebook running on your personal computer
- Notebook on AI Platform, Google Cloud Platform
Contributor

It would be useful to also post the link to the AI Platform notebook.

Member Author

Sure, added.

- Notebook on AI Platform, Google Cloud Platform
- Essentially notebook on any environment outside Kubeflow cluster

For notebook running inside Kubeflow cluster, for example JupytHub will be deployed together with kubeflow, the
Contributor

Typo: JupytHub -> JupyterHub

Member Author

Thanks, changed accordingly.

@gaoning777
Contributor

Hi, Shixin,
you can add an owner file in this directory such that you can approve future PRs that update this directory.

@gaoning777
Contributor

/cc @joeliedtke could you proofread this? Thanks

@luotigerlsx
Member Author

@gaoning777 thanks a lot for the comments. I have pushed a new commit addressing them. I have also added an owner file. Please take a look :)

Member

@joeliedtke joeliedtke left a comment

Thank you for writing these guides, they look like they will be very helpful!

Here is my feedback for 00 to 02 (the feedback on 02 was pretty quick, so I may need to make another pass through it). I also included the readme in this review. I'll try to take a look at 03 and 04 later tonight or tomorrow.

"cell_type": "markdown",
"metadata": {},
"source": [
"# Cluster Deployment and Environment Setup\n",
Member

Suggestion: revise to make "deploy" an action

Deploying a Kubeflow Cluster on Google Cloud (GCP)

Member Author

Modified accordingly :)

"source": [
"# Cluster Deployment and Environment Setup\n",
"This notebook helps you deploy kubeflow cluster, and necessary setup for running tutorial from different environment\n",
"- Notebook server outside kubeflow cluster\n",
Member

Notebook server outside of a Kubeflow cluster

Member Author

Modified accordingly :)

"# Cluster Deployment and Environment Setup\n",
"This notebook helps you deploy kubeflow cluster, and necessary setup for running tutorial from different environment\n",
"- Notebook server outside kubeflow cluster\n",
"- Notebook server on AI platform\n",
Member

AI platform -> AI Platform

Member Author

Modified accordingly :)

"This notebook helps you deploy kubeflow cluster, and necessary setup for running tutorial from different environment\n",
"- Notebook server outside kubeflow cluster\n",
"- Notebook server on AI platform\n",
"- Notebook server within kubeflow cluster"
Member

Notebook server within a Kubeflow cluster

Member Author

Modified accordingly :)

"metadata": {},
"source": [
"## Summary\n",
"### Pre-requisites\n",
Member

Change to be a second-level heading instead of a third-level heading.

Member Author

Thanks :), modified.

"source": [
"Now that we have created our Dockerfile we can create our Docker image. Then we need to push the image to a registry to host the image. \n",
"- We are going to use the `kfp.containers.build_image_from_working_dir` to build the image and push to the Google Container Registry (GCR), which makes use of [kaniko](https://cloud.google.com/blog/products/gcp/introducing-kaniko-build-container-images-in-kubernetes-and-google-container-builder-even-without-root-access).\n",
"- It is definitely possible to build the image using Docker and push to GCR."
Member

It is definitely possible to build the image using Docker and push to GCR.

Do you mean this as "you can build a Docker image and push it to GCR without Kaniko"? If so, I would suggest rephrasing as something like:

It is possible to build the image locally using Docker and then to push it to GCR.

Member Author

Exactly, modified as suggested.
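For context, a sketch of the image-building step discussed above might look like the following. The exact keyword arguments of kfp.containers.build_image_from_working_dir can differ across SDK versions; working_dir, base_image, and the printed image reference are assumptions here, not the notebook's actual code.

```python
import kfp

# Build the image from the local working directory and push it to GCR via Kaniko.
# Parameter names below are assumptions about the SDK version the tutorial targets.
image_name = kfp.containers.build_image_from_working_dir(
    working_dir='.',          # directory containing the Dockerfile and sources
    base_image='python:3.7',  # hypothetical base image
)
print(image_name)  # e.g. an image reference under gcr.io/<your-project>/ with a digest
```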

"metadata": {},
"source": [
"**Note**:\n",
"If you run the following code from a notebook **within kubeflow cluster** and **with kubeflow version >= 0.7**, you need to make sure that there is valid credential under your notebook's namespace, since the namespace of the notebook server is no long `kubeflow`. \n",
Member

Suggested revision:

If you run this notebook within a Kubeflow cluster, with version >= 0.7, you need to ensure that valid credentials are created within your notebook's namespace.

Member Author

Modified as suggested.

"source": [
"**Note**:\n",
"If you run the following code from a notebook **within kubeflow cluster** and **with kubeflow version >= 0.7**, you need to make sure that there is valid credential under your notebook's namespace, since the namespace of the notebook server is no long `kubeflow`. \n",
"- With kubeflow version >= 0.7, the credentail is supposed to be copied automatically while creating notebook through `Configurations`, which doesn't work properly at the time of creating this notebook. \n",
Member

credentail -> credential

Member Author

Corrected.

"**Note**:\n",
"If you run the following code from a notebook **within kubeflow cluster** and **with kubeflow version >= 0.7**, you need to make sure that there is valid credential under your notebook's namespace, since the namespace of the notebook server is no long `kubeflow`. \n",
"- With kubeflow version >= 0.7, the credentail is supposed to be copied automatically while creating notebook through `Configurations`, which doesn't work properly at the time of creating this notebook. \n",
"- You can also add credentials to the new namespace by either copying them from an existing Kubeflow namespace or by creating a new service account as explained [here](https://www.kubeflow.org/docs/gke/authentication/#kubeflow-v0-6-and-before-gcp-service-account-key-as-secret).\n",
Member

Suggested revision:

You can also add credentials to the new namespace by either copying credentials from an existing Kubeflow namespace, or by creating a new service account.

Member Author

Modified as suggested.

"metadata": {},
"source": [
"### Define each component\n",
"Define a component by creating an instance of `kfp.dsl.ContainerOp` that describes the interactions with the Docker container image created in the previous step. You need to specify the component name, the image to use, the command to run after the container starts, the input arguments, and the file outputs. ."
Member

Remove extra period

Member

Is the command required (seems like something that the container may define by default)? Are the outputs guaranteed to be files?

Member Author

Since no default entry point is defined in the Dockerfile above, I put it here. But sure, it could be defined in the Dockerfile instead. And for the outputs, I think they need to be in files (either the raw content or the string path to the content).

Member Author

To make it clearer, I have changed it to:

Define each component

Define a component by creating an instance of kfp.dsl.ContainerOp that describes the interactions with the Docker container image created in the previous step. You need to specify

  • component name
  • the image to use
  • the command to run after the container starts (If None, uses the default CMD defined in the container.)
  • the input arguments
  • the file outputs (In the app.py above, the path of the trained model is written to /output.txt.)
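As an illustration of the list above, a minimal kfp.dsl.ContainerOp sketch could look like this; the image name, command, and output path are placeholders rather than the tutorial's actual values.

```python
import kfp.dsl as dsl

def mnist_train_op(bucket: str) -> dsl.ContainerOp:
    # Placeholder image and paths, for illustration only.
    return dsl.ContainerOp(
        name='mnist-train',                         # component name
        image='gcr.io/<your-project>/mnist-train',  # the image to use
        command=['python', '/app/app.py'],          # command to run after the container starts
        arguments=['--bucket', bucket],             # input arguments
        file_outputs={'gcs_path': '/output.txt'},   # file outputs written by app.py
    )
```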

@luotigerlsx
Member Author

Thank you for writing these guides, they look like they will be very helpful!

Here is my feedback for 00 to 02 (the feedback on 02 was pretty quick, so I may need to make another pass through it). I also included the readme in this review. I'll try to take a look at 03 and 04 later tonight or tomorrow.

@joeliedtke I really appreciate your thorough review. I will start to address your comments once you finish the remaining two notebooks, to avoid any possible back and forth. Thanks again and happy Christmas in advance :)

Member

@joeliedtke joeliedtke left a comment

I did a quick pass through notebooks 03 and 04. Please let me know if you have any questions or concerns.

"source": [
"## Create client\n",
"\n",
"**If submit outside the kubeflow cluster, need the following**\n",
Member

kubeflow -> Kubeflow

Suggested revision:

If you run this notebook outside of a Kubeflow cluster, run the following command:

Member Author

Modified as suggested, and also all other notebooks.

"client = kfp.Client(host, client_id, other_client_id, other_client_secret)\n",
"```\n",
"\n",
"**If you run and submit within the kubeflow cluster**, the following is enough\n",
Member

If you run this notebook within a Kubeflow cluster, run the following command:

Member Author

Modified as suggested, and also all other notebooks.
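For reference, the two client-creation variants discussed in this thread look roughly like the following; the host and OAuth client values are placeholders.

```python
import kfp

# Outside the Kubeflow cluster: point the client at the IAP-protected endpoint.
client = kfp.Client(
    host='https://<your-deployment>.endpoints.<your-project>.cloud.goog/pipeline',
    client_id='<oauth-client-id>',
    other_client_id='<other-oauth-client-id>',
    other_client_secret='<other-oauth-client-secret>',
)

# Inside the Kubeflow cluster: the in-cluster Pipelines service is discovered automatically.
client = kfp.Client()
```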

@@ -0,0 +1,604 @@
{
Member

Please remove your notebook output before committing the file.

Member Author

All cleared.

"## Create client\n",
"\n",
"**If submit outside the kubeflow cluster, need the following**\n",
"- `host`: the host name to use to talk to Kubeflow Pipelines, i.e., \"https://`<your-deployment>`.endpoints.`<your-project>`.cloud.goog/pipeline\"\n",
Member

Please apply the same feedback that was specified for this section in notebook 01.

Member Author

Yes, modified accordingly.

"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell creates a file `app.py` that contains a Python script. The script takes a GCS bucket name as an input argument, gets the lists of blobs in that bucket, prints the list of blobs and also writes them to an output file."
Member

GCS -> Cloud Storage

Member

prints the list of blobs and also writes

->

prints the list of blobs, and writes

Member Author

This has been modified to align with Notebook-02.
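A sketch of the app.py behavior described in that cell is shown below; it is not the notebook's exact code, and the default output path is a placeholder.

```python
import argparse
from google.cloud import storage

def main():
    parser = argparse.ArgumentParser(description='List blobs in a Cloud Storage bucket.')
    parser.add_argument('--bucket', required=True, help='Name of the Cloud Storage bucket')
    parser.add_argument('--output', default='/output.txt', help='File to write the blob list to')
    args = parser.parse_args()

    # List the blobs in the bucket, print them, and write them to the output file.
    client = storage.Client()
    blob_names = [blob.name for blob in client.list_blobs(args.bucket)]
    print('\n'.join(blob_names))
    with open(args.output, 'w') as f:
        f.write('\n'.join(blob_names))

if __name__ == '__main__':
    main()
```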

"cell_type": "markdown",
"metadata": {},
"source": [
"## Create client\n",
Member

Please apply the same comments as previous notebooks.

Member Author

This has been changed to align with previous notebooks.

"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell creates a file `app.py` that contains a Python script. The script takes a GCS bucket name as an input argument, gets the lists of blobs in that bucket, prints the list of blobs and also writes them to an output file."
Member

GCS -> Cloud Storage
list of blobs and also writes -> list of blobs, and writes

Member Author

This has been changed to align with previous notebooks.

"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Docker container\n",
Member

Please apply the same comments as previous notebooks.

Member Author

This has been changed to align with previous notebooks.

"cell_type": "markdown",
"metadata": {},
"source": [
"## Writing your component definition file\n",
Member

Please apply the same feedback as notebook 03

Member Author

This has been changed to align with previous notebooks.

"cell_type": "markdown",
"metadata": {},
"source": [
"Define your pipeline as a Python function. ` @kfp.dsl.pipeline` is a required decoration including `name` and `description` properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created."
Member

Please apply the same feedback as notebook 03

Member Author

This has been changed to align with previous notebooks.
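To make the decorated-pipeline and compile steps concrete, a sketch in the style the notebooks describe might look like this; the pipeline name, parameters, and package filename are placeholders.

```python
import kfp
import kfp.dsl as dsl

@dsl.pipeline(
    name='mnist-pipeline',                         # required `name` property
    description='Train, deploy, and test MNIST.'   # required `description` property
)
def mnist_pipeline(bucket: str = 'gs://<your-bucket>'):
    # Placeholder step; the notebooks use their real training/deployment components here.
    echo = dsl.ContainerOp(
        name='echo-bucket',
        image='alpine',
        command=['echo'],
        arguments=[bucket],
    )

# Compile the pipeline function; the resulting package file can be shared and rerun.
kfp.compiler.Compiler().compile(mnist_pipeline, 'mnist_pipeline.zip')
```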

@luotigerlsx
Member Author

@joeliedtke, I really appreciate your thorough review. I have addressed all of your comments and pushed the latest commit. The whole tutorial is definitely of much higher quality with your suggestions. Please have a look and let me know if you have any further concerns.

@luotigerlsx
Member Author

Hi @joeliedtke, would you take another look and let me know if anything is left unaddressed? Thanks!

"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup a Kubeflow cluster through [CLI](https://www.kubeflow.org/docs/gke/deploy/deploy-cli/)\n",
Member

I would recommend that you move the link out of the header so you can add some additional context. (Since the notebook walks users through the setup process, and they would only need to refer to the link for additional help deploying Kubeflow with the CLI.)

Also, please make the following change: Setup -> Setting up

For example, here is an option:

Setting up a Kubeflow cluster

This notebook provides instructions for setting up a Kubeflow cluster on GCP using the command-line interface (CLI). For additional help, see the guide to deploying Kubeflow using the CLI.

Member Author

Thanks for the suggestion. I have removed the link from this header as suggested.

@@ -632,7 +482,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Alternative, can compile the pipeline into a package.**\n",
"**As an alternative, you can compile the pipeline into a package.** The compiled package can be easily shared and reused by others to reproduce the pipeline.\n",
Member

Replace "The compiled package" -> "The compiled pipeline"
Replace "to reproduce the pipeline" -> "to execute the pipeline" or "to run the pipeline"

Member Author

Modified and applied to all the relevant notebooks.

- Deploy the exported Tensorflow model on AI Platform prediction service
- Test the deployment by calling the end point with test data

## Setups Overview:
Member

I would still recommend removing the Setups Overview section. Users may be confused by the links in this content and try to set up their environment using this section as their instructions. Since your guide provides better instructions in notebook 00, it is better to leave this content until that point. Or, put the complete instructions on this page and remove notebook 00.


## Content Overview:
In this tutorial, we designed a series of notebooks to demonstrate how to interact with `Kubeflow Pipelines` through
[Python SDK](https://github.com/kubeflow/pipelines/tree/master/sdk/python/kfp). In particular
Member

through Python SDK -> through the Kubeflow Pipelines SDK

Member Author

I have modified accordingly, and also removed the setup section from the readme.

@joeliedtke
Member

I've added a few more comments. Once these changes are made I am comfortable with the PR being merged. There are a few additional changes that I would suggest, but I can provide a follow up PR for that.

@luotigerlsx
Member Author

I've added a few more comments. Once these changes are made I am comfortable with the PR being merged. There are a few additional changes that I would suggest, but I can provide a follow up PR for that.

Hi @joeliedtke , again thanks a lot for the additional comments. I have pushed a new commit to address them. Please take a look.

For your additional suggestions, it's great that you would also contribute. Please also let me know if there is anything I can do.

@joeliedtke
Member

LGTM, though I see that this PR still has the do-not-merge/invalid-owners-file label.

@gaoning777 and @numerology, would it be possible for you to help @luotigerlsx resolve that issue?

@numerology

@gaoning777 and @numerology, would it be possible for you to help @luotigerlsx resolve that issue?

If I remember correctly, @luotigerlsx and @saurabh24292 need to add themselves [here] to become Kubeflow members.

The general rule is that everyone listed in an OWNERS file needs to be a Kubeflow member.

@luotigerlsx
Member Author

@gaoning777 and @numerology, would it be possible for you to help @luotigerlsx resolve that issue?

If I remember correctly, @luotigerlsx and @saurabh24292 need to add themselves [here] to become Kubeflow members.

The general rule is that everyone listed in an OWNERS file needs to be a Kubeflow member.

Hi @numerology , I guess you are trying to direct me to some file but the link is missing. And to add us as Kubeflow members, does it need to be initiated by me, or can it be done from your side?

@numerology

@luotigerlsx Sorry the link should be this. You can simply send a PR to do that and refer to this PR in that one.

@luotigerlsx
Member Author

@luotigerlsx Sorry the link should be this. You can simply send a PR to do that and refer to this PR in that one.

Got it, will do that. Thanks again !

@luotigerlsx
Member Author

/verify-owners

1 similar comment

@luotigerlsx
Member Author

Hi @numerology @joeliedtke , we have both joined the kubeflow org and the label has been removed. Please kindly help to merge the PR.

@numerology

Thanks! @luotigerlsx

@numerology

/lgtm
/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: numerology

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit 3a94ae3 into kubeflow:master Jan 6, 2020
Jeffwan pushed a commit to Jeffwan/pipelines that referenced this pull request Dec 9, 2020
* add step by step tutorial using mnist as use case

* fix mnist typo and change job submit default

* add owner file; modify setup and readme about ui deployment statement

* Refine notebooks and readme to incorporate reviewers comment

* fine tune of the documentation
magdalenakuhn17 pushed a commit to magdalenakuhn17/pipelines that referenced this pull request Oct 22, 2023
Signed-off-by: rachitchauhan43 <rachitchauhan43@gmail.com>