Separated data PVC #12345
Conversation
except Exception as e:
    log.debug('Failed to parse `k8s_data_volume_claim` parameter in the kubernetes runner configuration')
    raise e
inputs = job_wrapper.get_input_fnames()
This won't work with extra files paths and metadata files, I assume.
This logic shouldn't be happening at this layer.
@jmchilton Where do you think it should be happening though? I'm not sure this logic can be moved elsewhere, because mounting PVCs is inherently tied to the k8s runner? The extra files path is a fair point, but the overall way this works is that the extra metadata can be mounted into the job and made available, out-of-band if necessary.
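To make the discussion concrete, here is a minimal sketch of how a per-job data claim could be translated into Kubernetes volume and volumeMount dicts inside the runner. This assumes a `"claim-name/subpath"` convention for the parameter value; the function name, mount path, and dict shapes are illustrative, not Galaxy's actual API.

```python
def build_data_volume(data_claim_name, mount_path="/galaxy/data"):
    """Build Kubernetes volume/volumeMount specs for a per-job data PVC.

    `data_claim_name` may carry an optional "/subpath" suffix, e.g.
    "galaxy-data/files". All names here are hypothetical.
    """
    # Split off an optional subpath; partition returns "" when absent.
    claim, _, subpath = data_claim_name.partition("/")
    volume = {
        "name": claim,
        "persistentVolumeClaim": {"claimName": claim},
    }
    mount = {"name": claim, "mountPath": mount_path}
    if subpath:
        # Mount only the subdirectory of the claim into the job pod.
        mount["subPath"] = subpath
    return volume, mount
```

The point of keeping this in the runner is that the volume/volumeMount specs are pure Kubernetes concepts, so the per-job mounting decision has nowhere else natural to live.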
self._init_monitor_thread()
self._init_worker_threads()
except Exception as e:
    log.debug('Failed to parse `k8s_data_volume_claim` parameter in the kubernetes runner configuration')
    raise e
data_volume_name = data_claim_name if "/" not in data_claim_name else data_claim_name.split("/")[0]
The parsing could be moved to a separate function and shared between `k8s_persistent_volume_claims`, `k8s_data_volume_claim`, and `k8s_working_volume_claim`.
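The suggested consolidation could look roughly like this. This is a sketch only; `parse_volume_claim` is a hypothetical name, not Galaxy's actual helper, and the error message mirrors the one in the diff above.

```python
def parse_volume_claim(claim_spec):
    """Split a 'claim-name/subpath' spec into (claim_name, subpath).

    Returns the subpath as None when the spec names a bare claim.
    Raises ValueError on an empty spec, so each caller can log its
    own parameter-specific message and re-raise.
    """
    if not claim_spec or not claim_spec.strip():
        raise ValueError("Empty volume claim specification")
    # partition keeps everything after the first "/" as the subpath.
    claim_name, _, subpath = claim_spec.strip().partition("/")
    return claim_name, subpath or None
```

A single helper like this would make the `split("/")[0]` logic in the hunk above, and its equivalents for the other two parameters, one code path instead of three.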
This is working well, and since it's backward compatible, it should pose no issues even if we have to make changes in the future.
Separates the volumes that should be mounted in all jobs (CVMFS, scripts, cache, etc.) from the PVCs that should be mounted per job (input data and the job directory).
Companion PR: galaxyproject/galaxy-helm#314
Example volume mounts per job:
Todo
(Continue to use k8s_persistent_volume_claims to mount the entire shared FS in all jobs as before, but want to test that it handles empty/null values for k8s_data_volume_claim properly)
How to test the changes?
(Select all options that apply)
License