quota info (#3092)
Jiaxiao Zheng authored Feb 17, 2020
1 parent 9b723bf commit 9f328a7
9 changes: 9 additions & 0 deletions samples/core/xgboost_training_cm/README.md
@@ -20,6 +20,15 @@ general [guideline](https://cloud.google.com/endpoints/docs/openapi/enable-api)
If KFP was deployed through K8S marketplace, please follow instructions in [the guideline](https://github.com/kubeflow/pipelines/blob/master/manifests/gcp_marketplace/guide.md#gcp-service-account-credentials)
to make sure the service account used has the role `storage.admin` and `dataproc.admin`.

### Quota

By default, Dataproc `create_cluster` provisions one master instance and two worker instances,
all of machine type `n1-standard-4` (4 vCPUs each). The request therefore consumes 12 vCPUs
of Compute Engine quota, and the user's GCP project must have at least that much quota
available in the target region for this sample to work.
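The quota arithmetic above can be sketched as follows; the node counts and machine size are the defaults stated above, and the region in the `gcloud` comment is only an example.

```shell
# Default cluster shape created by Dataproc create_cluster:
# 1 master + 2 workers, each n1-standard-4 (4 vCPUs per node).
MASTERS=1
WORKERS=2
VCPUS_PER_NODE=4

# Total vCPU quota the cluster request will consume.
TOTAL_VCPUS=$(( (MASTERS + WORKERS) * VCPUS_PER_NODE ))
echo "Required vCPU quota: ${TOTAL_VCPUS}"

# To inspect the quota actually available in your project, you can run
# (us-central1 is an example region; substitute your own):
#   gcloud compute regions describe us-central1 --format="table(quotas)"
```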

> :warning: A free-tier GCP account might not be able to satisfy this quota requirement. To upgrade your account, please follow [this link]().

## Compile

Follow the guide to [building a pipeline](https://www.kubeflow.org/docs/guides/pipelines/build-pipeline/) to install the Kubeflow Pipelines SDK and compile the sample Python into a workflow specification. The specification takes the form of a YAML file compressed into a `.zip` file.
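The compile step can be sketched as a shell invocation. The `dsl-compile` command ships with the Kubeflow Pipelines SDK; the input file name `xgboost_training_cm.py` is an assumption based on the sample's directory name, so adjust it if the actual file differs.

```shell
# Install the Kubeflow Pipelines SDK (unpinned here; pin a version as needed).
pip install kfp

# Compile the sample pipeline into a workflow specification
# (a YAML file compressed into a .zip file).
# The input file name is assumed from the sample directory.
dsl-compile --py xgboost_training_cm.py --output xgboost_training_cm.zip
```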
