Kubeflow auto-deployments from master failing; error setting project #471
I think the problem is that we are trying to pull the config from kubeflow/kubeflow but the manifest has moved to kubeflow/manifests. Here's the invocation.
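The original invocation is not reproduced above. As a rough illustration of the fix being suggested, here is a minimal Python sketch (not the actual auto-deploy code) that rewrites a KfDef-style spec so its manifest repo URI points at kubeflow/manifests instead of kubeflow/kubeflow; the field names and URIs are assumptions for illustration only.

```python
# Hypothetical sketch, not the real auto-deploy script: rewrite any repo URI
# in a KfDef-style spec that still points at kubeflow/kubeflow so it pulls
# from kubeflow/manifests instead. Field names and URIs are illustrative.
import yaml

EXAMPLE_KFDEF = """
apiVersion: kfdef.apps.kubeflow.org/v1beta1
kind: KfDef
spec:
  repos:
  - name: manifests
    uri: https://github.com/kubeflow/kubeflow/archive/master.tar.gz
"""


def point_repos_at_manifests(kfdef_text):
    """Return the spec with manifest repos redirected to kubeflow/manifests."""
    kfdef = yaml.safe_load(kfdef_text)
    for repo in kfdef.get("spec", {}).get("repos", []):
        repo["uri"] = repo["uri"].replace(
            "github.com/kubeflow/kubeflow",
            "github.com/kubeflow/manifests",
        )
    return yaml.safe_dump(kfdef)


if __name__ == "__main__":
    print(point_repos_at_manifests(EXAMPLE_KFDEF))
```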
jlewi pushed a commit to jlewi/testing that referenced this issue on Sep 27, 2019
k8s-ci-robot pushed a commit that referenced this issue on Oct 18, 2019
Still failing
Here's the latest error.
jlewi pushed a commit to jlewi/testing that referenced this issue on Oct 18, 2019
jlewi pushed a commit to jlewi/testing that referenced this issue on Oct 18, 2019
Related to kubeflow#471
* Don't set name in the spec because we want to infer it from the directory.
jlewi pushed a commit to jlewi/testing that referenced this issue on Oct 18, 2019
Related to kubeflow#471
* Don't set name in the spec because we want to infer it from the directory.
* Create a new script to deploy with a unique name
* Related to: kubeflow#444
* Update cleanup script to clean up new auto-deployed clusters
jlewi pushed a commit to jlewi/testing that referenced this issue on Oct 23, 2019
Related to kubeflow#471
* Don't set name in the spec because we want to infer it from the directory.
* Create a new script to deploy with a unique name
* Related to: kubeflow#444
* Update cleanup script to clean up new auto-deployed clusters
k8s-ci-robot pushed a commit that referenced this issue on Oct 23, 2019
Auto deploy job needs to use the new kfctl syntax; also use unique names. Related to #471
* Don't set name in the spec because we want to infer it from the directory.
* Create a new script to deploy with a unique name
* Related to: #444
* Update cleanup script to clean up new auto-deployed clusters
* In the cron job, get code from master.
* Fix lint.
* Revert changes to create_kf_instance
* Update to v1beta1 spec.
* We need to use a self-signed certificate with the auto-deployed clusters because otherwise we hit Let's Encrypt rate limiting.
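For context on the "unique names" and "infer the name from the directory" points in the commit above, here is a minimal Python sketch under assumed conventions; it is not the kubeflow/testing implementation. The deployment name gets a date and random suffix so repeated auto-deploys from master don't collide, and the app name is derived from the app directory rather than hard-coded in the spec.

```python
# Minimal sketch, assuming conventions only loosely based on the commit above;
# this is not the kubeflow/testing code.
import datetime
import os
import uuid


def unique_deploy_name(prefix="kf-vmaster"):
    """Build a unique deployment name, e.g. kf-vmaster-1023-3fa2, so
    repeated auto-deploys from master don't collide."""
    stamp = datetime.datetime.utcnow().strftime("%m%d")
    suffix = uuid.uuid4().hex[:4]
    # Keep it short enough for GCP resource-name limits.
    return f"{prefix}-{stamp}-{suffix}"[:40]


def app_name_from_dir(app_dir):
    """Infer the Kubeflow app name from its directory instead of setting
    metadata.name in the spec."""
    return os.path.basename(os.path.abspath(app_dir.rstrip("/")))


if __name__ == "__main__":
    name = unique_deploy_name()
    print(name)
    print(app_name_from_dir(f"/tmp/{name}"))
```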
/kind bug
I think this is obsolete.
Here's the stack trace from the most recent failure.