Fix volume handling in podman #2229
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: rhatdan. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
libpod/runtime_ctr.go
Outdated
if removeVolume {
	volumes, err = c.namedVolumes()
	if err != nil {
		logrus.Errorf("unable to retreive builtin volumes: %v", err)
s/retreive/retrieve
It would probably be good to add the container ID to the message:
logrus.Errorf("unable to retrieve builtin volumes %v for container %v", err, c.ID())
Fixed
@rhatdan looks like you've got some test unhappiness
libpod/container_internal.go
Outdated
if err != nil {
	return errors.Wrapf(ErrCtrStateInvalid, "failed to create volume for %s in %s for container %s", k, mountPoint, c.ID())
}
volumePath := filepath.Join(volume.config.MountPoint, k)
Even better error, ty.
LGTM, assuming happy tests
@umohnani8 PTAL
/test images
Hi, I have tested this PR and it looks good to me. The volume is now listed under "podman volume ls" while the container is running. However, the created volume is NOT always removed when the container exits (even though it was launched with the --rm option). For example:
$ podman volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
$ podman run --rm -ti test sh -c "sleep 5"
$ podman run --rm -ti test sh -c "sleep 5"
$ podman run --rm -ti test sh -c "sleep 5"
$ podman run --rm -ti test sh -c "sleep 5"
$ podman run --rm -ti test sh -c "sleep 5"
$ podman volume ls
DRIVER VOLUME NAME
local a5254033fb420e7c3495b30f869ac07df307b3ff0701be7613dcc306e73d2048
local e34bb6cf87eefe4683b1a0c18f0b4ca1fc4085dd208d6a0c9b045248255f0fca
local eaa4c5b1a89e951484d1b068977e339edf8b1125596361413db2b149b2493e8a
Might be a race condition. Weird, I was trying that out in testing. Let's get this merged and then we can continue playing with it. I have seen another issue where you can remove a builtin volume and it does not complain if there is a container using it.
The DB backend for built-in volumes needs a partial rewrite on top of that. Leaving them as volumes in the spec is really iffy and can lead to these situations.
I'm not sure this patch is safe until that happens. Honestly, I'm not sure I want any built-in volume work merged until the backend cleanup can land.
I don't think we want to completely drop the old volume handling here, either. The per-container dirs and tmpfs volume drivers have merit. This needs to be switchable. Libpod.conf?
More detailed review tomorrow, but I think this is probably safe to go in even without those DB changes. It does need some changes.
libpod/runtime_ctr.go
Outdated
@@ -393,6 +402,14 @@ func (r *Runtime) removeContainer(ctx context.Context, c *Container, force bool)
		}
	}

	for _, v := range volumes {
		if volume, err := runtime.state.Volume(v); err == nil {
			if err := runtime.removeVolume(ctx, volume, true, true); err != nil {
If we get an error that the volume is still in use, we should log nothing
Fixed to ignore the error if I get back errnovolume or errcontainerbeingused.
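A minimal sketch of that pattern, mirroring the hunk above; the sentinel error names (ErrNoSuchVolume, ErrVolumeBeingUsed) and the errors.Cause unwrapping are illustrative assumptions, not necessarily the PR's exact identifiers:

for _, v := range volumes {
	volume, err := runtime.state.Volume(v)
	if err != nil {
		continue
	}
	if err := runtime.removeVolume(ctx, volume, true, true); err != nil {
		cause := errors.Cause(err)
		// Silently skip volumes that are already gone or still shared
		// with another container; only log unexpected failures.
		if cause == ErrNoSuchVolume || cause == ErrVolumeBeingUsed {
			continue
		}
		logrus.Errorf("cleanup volume (%s) for container %s: %v", v, c.ID(), err)
	}
}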
libpod/runtime_ctr.go
Outdated
@@ -221,17 +221,18 @@ func (r *Runtime) newContainer(ctx context.Context, rSpec *spec.Spec, options ..

// RemoveContainer removes the given container
// If force is specified, the container will be stopped first
// If removeVolume is specified, the volume contents will be removed also
"If removeVolume is specified, named volumes used by the container will be removed also if and only if the container is the sole user"
Fixed
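For concreteness, a sketch of how the revised comment and the four-argument signature (visible in the cmd/podman/start.go hunk below) fit together; the locking shown is an assumption, not the exact merged code:

// RemoveContainer removes the given container.
// If force is specified, the container will be stopped first.
// If removeVolume is specified, named volumes used by the container will
// also be removed, if and only if the container is their sole user.
func (r *Runtime) RemoveContainer(ctx context.Context, c *Container, force bool, removeVolume bool) error {
	r.lock.Lock()
	defer r.lock.Unlock()
	return r.removeContainer(ctx, c, force, removeVolume)
}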
cmd/podman/start.go
Outdated
@@ -147,7 +147,7 @@ func startCmd(c *cli.Context) error {
	logrus.Errorf("unable to detect if container %s should be deleted", ctr.ID())
}
if createArtifact.Rm {
-	if rmErr := runtime.RemoveContainer(ctx, ctr, true); rmErr != nil {
+	if rmErr := runtime.RemoveContainer(ctx, ctr, true, true); rmErr != nil {
I don't think this is safe at all. What if there are actual named volumes attached that people are using, not image volumes? We're risking killing people's data here.
Yes, this is questionable. We need a mechanism to know whether the volume was created just for the container or not. But I agree we should remove the volume removal here.
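A sketch of the kind of mechanism being asked for; the field and helper are entirely hypothetical and do not exist in this PR:

// VolumeConfig sketches a marker recording which container, if any,
// a volume was auto-created for (hypothetical field).
type VolumeConfig struct {
	Name string
	// CreatedForCtr holds the ID of the container this volume was
	// auto-created for; empty for user-created named volumes.
	CreatedForCtr string
}

// safeToRemoveWithCtr reports whether removing container ctrID may also
// remove this volume without risking user data.
func safeToRemoveWithCtr(cfg *VolumeConfig, ctrID string) bool {
	return cfg.CreatedForCtr == ctrID
}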
libpod/container_internal.go
Outdated
if err != nil {
	return errors.Wrapf(ErrCtrStateInvalid, "failed to create volume for %s in %s for container %s", k, mountPoint, c.ID())
}
volumePath := filepath.Join(volume.config.MountPoint, k)
Nuke the filepath.Join() - why do we need it? There's only one thing in the volume, no need to make a subdir in it
The k in this case is the directory that is being volume-mounted into. In my tests it happens to be /myvol. I believe this is necessary.
Shouldn't be - we're bind-mounting it into the given path in the container; no need to have the volume path appended again after the path for the named volume.
Right, we have already created a new volume with a generated name, which will be bind-mounted to /myvol inside the container. So no need to do the filepath.Join() and create a sub-dir inside that.
Fixed
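A minimal sketch of the agreed fix: the new volume's own mount point becomes the bind source for the in-container path k, with no sub-directory joined on. The MountPoint() accessor and the mount options are assumptions for illustration:

volume, err := c.runtime.NewVolume(ctx)
if err != nil {
	return errors.Wrapf(err, "failed to create volume for %s in container %s", k, c.ID())
}
// Bind-mount the volume's mount point straight onto k; no
// filepath.Join(volume.config.MountPoint, k) sub-directory needed.
g.AddMount(spec.Mount{
	Destination: k,
	Source:      volume.MountPoint(),
	Type:        "bind",
	Options:     []string{"rbind", "rw"},
})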
libpod/container_internal.go
Outdated
@@ -1082,8 +1082,11 @@ func (c *Container) addLocalVolumes(ctx context.Context, g *generate.Generator,
	if MountExists(g.Mounts(), k) {
		continue
	}
-	volumePath := filepath.Join(c.config.StaticDir, "volumes", k)
+	volume, err := c.runtime.NewVolume(ctx, []VolumeCreateOption{}...)
We need to give the volume a name, at least - something based on the container's name. containername-volumepath?
The volume gets named automatically if no name is specified. This is how it works in Docker as well. The problem with us naming it is that there could be multiple built-in volumes within an image.
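Illustrative only: a self-contained sketch of how auto-generated names of the shape shown in the podman volume ls output above (64 hex characters) can be produced:

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// generateVolumeName returns a random 256-bit identifier rendered as
// 64 hex characters, the shape of the auto-generated names above.
func generateVolumeName() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	name, err := generateVolumeName()
	if err != nil {
		panic(err)
	}
	fmt.Println(name)
}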
@@ -1290,3 +1293,23 @@ func getExcludedCGroups() (excludes []string) {
	excludes = []string{"rdma"}
	return
}

// namedVolumes returns named volumes for the container
func (c *Container) namedVolumes() ([]string, error) {
It's a fundamental problem that we don't have the actual name of the volume in here, just the path it's mounted to in the container
Can't fix in this PR, just making a note for myself to go over when I do the DB stuff
I agree. Long-range we need to revisit this, especially with the eventual addition of non-local Docker volumes.
Fixed
libpod/container_internal.go
Outdated
// namedVolumes returns named volumes for the container
func (c *Container) namedVolumes() ([]string, error) {
	var volumes []string
	spec, err := c.specFromState()
Use c.config.spec instead; specFromState is expensive and pointless, since we don't add any named volumes to the generated spec on disk.
That is what I had originally, but the volume is not listed in the spec; it is only listed when I load it from state.
Wait, what? We're not adding these when we generate the spec... At least, we shouldn't be... This is in a really bad spot if we can't find volumes without the running spec
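A rough completion of the snippet above under the constraint this thread describes: the named volumes only show up in the spec loaded from state, and all the helper can return is each mount's in-container destination. The bind-source filter is an assumption for illustration:

// namedVolumes returns named volumes for the container
func (c *Container) namedVolumes() ([]string, error) {
	var volumes []string
	spec, err := c.specFromState()
	if err != nil {
		return nil, err
	}
	for _, m := range spec.Mounts {
		// Treat bind mounts whose source lives in the volume store as
		// named volumes; we only have the destination path here, not
		// the volume's actual name (the fundamental problem noted above).
		if m.Type == "bind" && strings.Contains(m.Source, "/volumes/") {
			volumes = append(volumes, m.Destination)
		}
	}
	return volumes, nil
}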
LGTM once comments are addressed.
Force-pushed e05393e to b484cc5
Force-pushed 5e68469 to 9732931
Also fixes #2301
bot, retest this please
I think this is only a partial solution, but a full solution is a ways off, and I really want this in 1.0. LGTM
Sorry, 1.1 not 1.0
☔ The latest upstream changes (presumably #2307) made this pull request unmergeable. Please resolve the merge conflicts.
Code LGTM (just a minor nit). But note that I am not very familiar with those parts of the code. Tests would be nice. Currently, both commits seem to be merge commits, which seems odd.
☔ The latest upstream changes (presumably #2309) made this pull request unmergeable. Please resolve the merge conflicts.
☔ The latest upstream changes (presumably #2319) made this pull request unmergeable. Please resolve the merge conflicts.
@rhatdan code LGTM but the merge commits in the PR may indicate that you need to rebase the branch and repush.
/lgtm
Fix builtin volumes to work with podman volume
Currently builtin volumes are not recorded in podman volumes when
they are created automatically. This patch fixes this.
Remove container volumes when requested
Currently the --volume option on podman remove does nothing.
This will implement the changes needed to remove the volumes
if the user requests it.
When removing a volume make sure that no container uses the volume.
Signed-off-by: Daniel J Walsh dwalsh@redhat.com