
lib/repository2: fix parallel pull synchronisation #167

Merged
merged 1 commit into appc:master on May 30, 2016

Conversation

jonboulle
Contributor

The parallel pull refactoring in 47f2cb9 introduced an unfortunate and subtle race condition. Because i) we were asynchronously starting the fetching of each layer [1][1] (which implies that the fetch is added to the progress bar asynchronously, via getLayerV2 [2][2] [3][3]), and ii) the progressutil package has no guards against calling PrintAndWait and AddCopy in indeterminate order [4][4], it was possible for us to enter the PrintAndWait loop [5][5] [6][6] before all of the layer fetches had actually been added to it. Then, in the event that the first layer was particularly fast, the CopyProgressPrinter could actually think it was done [7][7] before the other layer(s) had finished. In this situation, docker2aci would happily continue forward and try to generate an ACI from each layer [8][8], and for any layer(s) that had not actually finished downloading, the GenerateACI->writeACI->tarball.Walk call [9][9] would be operating on a partially written file and hence result in the errors we've been seeing ("unexpected EOF", "corrupt input before offset 45274", and so forth).

A great case to reproduce this is the `docker:busybox` image because of its particular characteristics: two layers, one very small (32B) and one relatively larger (676KB) layer. Typical output looks like the following:

```
% $(exit 0); while [ $? -eq 0 ]; do ./bin/docker2aci docker://busybox; done

Downloading sha256:385e281300c [===============================] 676 KB / 676 KB
Downloading sha256:a3ed95caeb0 [===============================]     32 B / 32 B

Generated ACI(s):
library-busybox-latest.aci

Downloading sha256:a3ed95caeb0 [==============================]      32 B / 32 B
Downloading sha256:385e281300c [==============================]  676 KB / 676 KB

Generated ACI(s):
library-busybox-latest.aci

Downloading sha256:a3ed95caeb0 [===================================] 32 B / 32 B
Error: conversion error: error generating ACI: error writing ACI: Error reading tar entry: unexpected EOF
```

Note that i) the order in which the layers are registered with the
progress printer is indeterminate, and ii) every failure case (observed
so far) is when the small layer is retrieved first, and the stdout contains no
progress output at all from retrieving the other layer. This indeed
suggests that the progress printer returns before realising the second
layer is still being retrieved.

Tragically, @dgonyeo's test case [10][10] probably gives a false sense of security (i.e. it cannot reproduce this issue), most likely because the `dgonyeo/test` image contains a number of layers (5) of varying sizes, and I suspect it's much less likely for this particular race to occur.

Almost certainly fixes #166 - with this patch I'm unable to reproduce.

[1]: https://github.com/appc/docker2aci/blob/ba503aa9b84b6c1ffbab03ec0589415ef598e5e0/lib/internal/backend/repository/repository2.go#L89
[2]: https://github.com/appc/docker2aci/blob/ba503aa9b84b6c1ffbab03ec0589415ef598e5e0/lib/internal/backend/repository/repository2.go#L115
[3]: https://github.com/appc/docker2aci/blob/ba503aa9b84b6c1ffbab03ec0589415ef598e5e0/lib/internal/backend/repository/repository2.go#L335
[4]: coreos/pkg#63
[5]: https://github.com/appc/docker2aci/blob/ba503aa9b84b6c1ffbab03ec0589415ef598e5e0/lib/internal/backend/repository/repository2.go#L123
[6]: https://github.com/coreos/pkg/blob/master/progressutil/iocopy.go#L115
[7]: https://github.com/coreos/pkg/blob/master/progressutil/iocopy.go#L144
[8]: https://github.com/appc/docker2aci/blob/ba503aa9b84b6c1ffbab03ec0589415ef598e5e0/lib/internal/backend/repository/repository2.go#L149
[9]: https://github.com/appc/docker2aci/blob/4e051449c0079ba8df59a51c14b7d310de1830b8/lib/internal/internal.go#L427
[10]: coreos/pkg#61 (comment)
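To make the ordering hazard concrete, here is a minimal Go sketch of the same pattern. The `printer` type below is a simplified stand-in for progressutil's CopyProgressPrinter (real signatures and the actual progress printing are omitted); it only shows how a waiter that snapshots the copies it already knows about can return while another layer is still being registered:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// printer is a simplified stand-in for a copy-progress printer: it waits for
// every copy it knows about, but has no guard against copies being added
// concurrently with (or after) the wait starting.
type printer struct {
	mu   sync.Mutex
	done map[string]chan struct{}
}

// AddCopy registers a copy and returns a channel to close when it finishes.
func (p *printer) AddCopy(name string) chan struct{} {
	p.mu.Lock()
	defer p.mu.Unlock()
	ch := make(chan struct{})
	p.done[name] = ch
	return ch
}

// PrintAndWait (progress printing elided) snapshots the copies registered so
// far and returns once those - and only those - have finished.
func (p *printer) PrintAndWait() {
	p.mu.Lock()
	var chans []chan struct{}
	for _, ch := range p.done {
		chans = append(chans, ch)
	}
	p.mu.Unlock()
	for _, ch := range chans {
		<-ch
	}
}

func main() {
	p := &printer{done: map[string]chan struct{}{}}

	// Each layer fetch is started asynchronously, so its AddCopy call lands
	// at an indeterminate point after the goroutine starts.
	layers := map[string]time.Duration{
		"small": 1 * time.Millisecond,   // e.g. the 32 B layer
		"large": 100 * time.Millisecond, // e.g. the 676 KB layer
	}
	for name, d := range layers {
		go func(name string, d time.Duration) {
			time.Sleep(time.Millisecond) // registry round-trip before AddCopy
			finished := p.AddCopy(name)
			time.Sleep(d) // the download itself
			close(finished)
		}(name, d)
	}

	// If this runs before both AddCopy calls have happened, it returns while
	// the large layer is still in flight - and the caller then reads a
	// partially written file.
	p.PrintAndWait()
	fmt.Println("printer returned; some layers may still be downloading")
}
```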

@jonboulle
Contributor Author

coreos/pkg#61, coreos/pkg#62 and coreos/pkg#63 are non-critical for this but would be nice to have in asap too.

jonboulle added a commit to jonboulle/pkg that referenced this pull request May 29, 2016
Rather than maintaining a list of results and individual synchronised
booleans signifying goroutine completion, just use a single channel to
propagate the result of each copyReader. This means we can use a
for/select loop as His Lord and Holiness Rob Pike always intended.

Notably, it is now explicitly prohibited to:
i) add additional copy operations to a CopyProgressPrinter after it has
   been started (i.e. after PrintAndWait has been called)
ii) call PrintAndWait more than once
If either of these is attempted, ErrAlreadyStarted is returned.

To achieve i), we would need to either change the interface of the
CopyProgressPrinter to accept a pre-defined size (i.e. number of copy
operations), or otherwise synchronise the "shutting down" of the
CopyProgressPrinter (i.e. to block further copies being added just
before the CopyProgressPrinter is about to return - otherwise we can
never be sure that another will not be added as we're finishing). Both
of these seem overly complex for now and I suggest we only explore them
if use cases arise.

Similarly, ii) would require more delicate refactoring and it's hard to
imagine a use case since this package is typically designed to be used
with a single output stream (i.e. stdout). Adding the safety check in
this PR helps to mitigate potential abuse of the package, though (e.g.
appc/docker2aci#167)
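The shape of that fix, sketched below with hypothetical names rather than the actual coreos/pkg progressutil code: all copies are added up front, each goroutine reports on a single results channel, a for/select loop drains exactly as many results as were added, and any attempt to add a copy or start the printer after it has started fails with ErrAlreadyStarted.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// ErrAlreadyStarted mirrors the guard described above: once the printer has
// started, further AddCopy or PrintAndWait calls are rejected.
var ErrAlreadyStarted = errors.New("progress printer already started")

// progressPrinter is a hedged sketch of the pattern, not the real
// CopyProgressPrinter: one results channel, one for/select loop.
type progressPrinter struct {
	mu      sync.Mutex
	started bool
	copies  int
	results chan error
}

// AddCopy launches one copy operation; it must be called before PrintAndWait.
func (p *progressPrinter) AddCopy(run func() error) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.started {
		return ErrAlreadyStarted
	}
	p.copies++
	go func() { p.results <- run() }()
	return nil
}

// PrintAndWait drains exactly p.copies results from the single channel,
// printing progress on each tick, and may only be called once.
func (p *progressPrinter) PrintAndWait(interval time.Duration) error {
	p.mu.Lock()
	if p.started {
		p.mu.Unlock()
		return ErrAlreadyStarted
	}
	p.started = true
	n := p.copies
	p.mu.Unlock()

	tick := time.NewTicker(interval)
	defer tick.Stop()
	for finished := 0; finished < n; {
		select {
		case err := <-p.results:
			if err != nil {
				return err
			}
			finished++
		case <-tick.C:
			fmt.Printf("%d/%d copies finished\n", finished, n)
		}
	}
	return nil
}

func main() {
	p := &progressPrinter{results: make(chan error)}
	// All copies are registered before the printer starts, so it can never
	// return early the way the unsynchronised version could.
	p.AddCopy(func() error { time.Sleep(10 * time.Millisecond); return nil })
	p.AddCopy(func() error { time.Sleep(60 * time.Millisecond); return nil })
	if err := p.PrintAndWait(25 * time.Millisecond); err != nil {
		fmt.Println("copy failed:", err)
	}
	fmt.Println(p.PrintAndWait(time.Millisecond)) // progress printer already started
}
```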
@alban
Member

alban commented May 30, 2016

LGTM

alban merged commit a2baa79 into appc:master May 30, 2016
lucab pushed a commit to lucab/pkg that referenced this pull request May 30, 2016
lucab pushed a commit to lucab/pkg that referenced this pull request May 30, 2016
lucab pushed a commit to lucab/pkg that referenced this pull request May 30, 2016
alban added this to the v0.11.1 milestone May 30, 2016
Development

Successfully merging this pull request may close these issues.

random errors downloading images