
PVC used from cache, causing issues when the PVC is manually deleted #5867

Closed
divyadilip91 opened this issue Jun 16, 2021 · 6 comments

Comments

@divyadilip91

Environment

  • How did you deploy Kubeflow Pipelines (KFP)?
    I deployed Kubeflow on GCP using AI Platform.

  • KFP version:
    1.4.1

  • KFP SDK version:
    1.4.1

Steps to reproduce

  1. Create a pipeline that first creates a PVC backed by a dynamically provisioned 10Gi disk, which is then used by subsequent ContainerOps.
  2. After a successful execution, I manually delete the job pods and the PVC, which in turn automatically deletes the dynamically provisioned disk.
  3. The next time I schedule a pipeline run, I change the PVC name but not the size. The pod still uses the deleted old PVC, and the subsequent ContainerOps show the error below:
    This step is in Pending state with this message: Unschedulable: persistentvolumeclaim "{{tasks.new-disk.outputs.parameters.new-disk-name}}" not found
    This is frustrating, since I have to change the disk size each time just to create a new pipeline.
    How can I prevent the PVC from being served from cache?
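The behavior in the steps above is consistent with fingerprint-based step caching: the cache key is computed from the step's spec, not from cluster state, so an identical VolumeOp spec returns the previously recorded PVC name even after that PVC has been deleted. A minimal, self-contained sketch of this mechanism (not KFP's actual implementation; the specs, names, and `run_step` helper here are illustrative):

```python
import hashlib
import json

cache = {}  # fingerprint -> recorded outputs of a previous run


def fingerprint(step_spec: dict) -> str:
    """Deterministic hash of the step specification."""
    canonical = json.dumps(step_spec, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()


def run_step(step_spec: dict, execute):
    """Return cached outputs if this spec was seen before, else execute."""
    key = fingerprint(step_spec)
    if key in cache:
        return cache[key]  # cache hit: the stale PVC name comes back
    outputs = execute(step_spec)
    cache[key] = outputs
    return outputs


# First run: the volume step creates a 10Gi PVC and records its generated name.
volume_spec = {"op": "VolumeOp", "size": "10Gi", "modes": ["ReadWriteOnce"]}
first = run_step(volume_spec, lambda s: {"name": "pvc-run1-abc"})

# Second run with the same size: the spec (and so the fingerprint) is
# unchanged, so the old PVC name is returned even though that PVC was
# deleted from the cluster in the meantime.
second = run_step(volume_spec, lambda s: {"name": "pvc-run2-def"})
assert second == first
```

As a workaround, the KFP v1 SDK lets you opt a task out of caching by setting `task.execution_options.caching_strategy.max_cache_staleness = "P0D"`, which should force the VolumeOp to re-execute on every run.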

Expected result

A new PVC of the same size (10Gi) should be created instead of reusing the old one.

Materials and Reference

https://github.com/kubeflow/pipelines/blob/1.4.1/samples/contrib/volume_ops/volumeop_sequential.py

A pipeline similar to this one is used.

Kindly help regarding this issue as soon as possible.

Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

@divyadilip91
Author

divyadilip91 commented Jun 16, 2021

#1327

@skogsbrus

I think this is the same issue as #5844

@divyadilip91
Author

divyadilip91 commented Jun 17, 2021

@skogsbrus Yes, it is. Does anyone know whether this issue has been solved in the latest Kubeflow version, which is 1.6?

@skogsbrus

Don't know, but I suggest you close this and give a 👍 to the other issue.

@elikatsis
Member

Hi, as mentioned in this comment (#5844 (comment)), this issue is a duplicate of #5257 and #5844.

I also suggest we close this one in favor of #5257.

@divyadilip91
Author

Closing the issue; redirected to #5257 and #5844.
