
Only the first docker-compose up -d succeeds #10795

Closed
x-yuri opened this issue Jun 27, 2021 · 2 comments · Fixed by #10893
Assignees
Labels
In Progress This issue is actively being worked by the assignee, please do not work on this at this time. kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@x-yuri

x-yuri commented Jun 27, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

The first docker-compose up -d succeeds, but the following ones fail. The workaround is to do docker-compose down before up -d.

Steps to reproduce the issue:

docker-compose.yml:

version: '3'
services:
    app:
        build: .
        command: sleep 1000

Dockerfile:

FROM alpine
WORKDIR /app

$ sudo /usr/local/bin/docker-compose up -d
Creating network "a1_default" with the default driver
Creating a1_app_1 ... 
Creating a1_app_1 ... done

Change 1000 to 2000 in docker-compose.yml.

$ sudo /usr/local/bin/docker-compose up -d
Recreating a1_app_1 ... 
[29100] Failed to execute script docker-compose

ERROR: for a1_app_1  'NoneType' object has no attribute 'get'

ERROR: for app  'NoneType' object has no attribute 'get'
Traceback (most recent call last):
  File "docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 81, in main
  File "compose/cli/main.py", line 203, in perform_command
  File "compose/metrics/decorator.py", line 18, in wrapper
  File "compose/cli/main.py", line 1186, in up
  File "compose/cli/main.py", line 1182, in up
  File "compose/project.py", line 702, in up
  File "compose/parallel.py", line 108, in parallel_execute
  File "compose/parallel.py", line 206, in producer
  File "compose/project.py", line 688, in do
  File "compose/service.py", line 581, in execute_convergence_plan
  File "compose/service.py", line 503, in _execute_convergence_recreate
  File "compose/parallel.py", line 108, in parallel_execute
  File "compose/parallel.py", line 206, in producer
  File "compose/service.py", line 496, in recreate
  File "compose/service.py", line 615, in recreate_container
  File "compose/service.py", line 334, in create_container
  File "compose/service.py", line 922, in _get_container_create_options
  File "compose/service.py", line 962, in _build_container_volume_options
  File "compose/service.py", line 1549, in merge_volume_bindings
  File "compose/service.py", line 1579, in get_container_data_volumes
AttributeError: 'NoneType' object has no attribute 'get'

$ sudo /usr/local/bin/docker-compose ps
        Name             Command      State     Ports
-----------------------------------------------------
3b86cade646a_a1_app_1   sleep 1000   Exit 137        

$ sudo /usr/local/bin/docker-compose down
Removing 3b86cade646a_a1_app_1 ... 
Removing 3b86cade646a_a1_app_1 ... done
Removing network a1_default

$ sudo /usr/local/bin/docker-compose up -d
Creating network "a1_default" with the default driver
Creating a1_app_1 ... 
Creating a1_app_1 ... done

Describe the results you received:

up -d doesn't work w/o prior down.

Describe the results you expected:

I expected up -d to work w/o down.

Additional information you deem important (e.g. issue happens only occasionally):

It supposedly also fails when the image changes. And somehow it can't be reproduced without WORKDIR (perhaps a custom image is needed?).

Output of podman version:

Version:      3.0.2-dev
API Version:  3.0.0
Go Version:   go1.15.13
Built:        Tue Jun  8 07:52:06 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.8
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.26-3.module+el8.4.0+11311+9da8acfb.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.26, commit: a35bb9ea67d5a83c7da53202f2fcd505c036d29c'
  cpus: 1
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: ip-172-31-25-128.eu-central-1.compute.internal
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 4.18.0-305.el8.x86_64
  linkmode: dynamic
  memFree: 182263808
  memTotal: 845565952
  ociRuntime:
    name: runc
    package: runc-1.0.0-73.rc93.module+el8.4.0+11311+9da8acfb.x86_64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.13
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    selinuxEnabled: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.4.0+11311+9da8acfb.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 0
  swapTotal: 0
  uptime: 58m 22.27s
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/ec2-user/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.4.0-3.module+el8.4.0+11311+9da8acfb.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.4
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/ec2-user/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 2
  runRoot: /run/user/1000/containers
  volumePath: /home/ec2-user/.local/share/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 1623138726
  BuiltTime: Tue Jun  8 07:52:06 2021
  GitCommit: ""
  GoVersion: go1.15.13
  OsArch: linux/amd64
  Version: 3.0.2-dev

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.0.1-7.module+el8.4.0+11311+9da8acfb.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

I checked the Podman Troubleshooting Guide.

Additional environment details (AWS, VirtualBox, physical, etc.):

AWS. I created it like so:

  • click Launch instances
  • (step 1) check Free tier only
  • type "red hat" and press Enter
  • choose Red Hat Enterprise Linux 8 with High Availability - ami-06ec8443c2a35b0ba
  • (step 2) t2.micro (the default)
  • (step 6) choose Create a new security group
  • specify a name and a description
  • create 1 record:
    • ssh tcp 22 my ip

Then:

$ sudo dnf install docker
$ sudo systemctl enable --now podman.socket
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod u+x /usr/local/bin/docker-compose
@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 27, 2021
@mheon
Member

mheon commented Jun 27, 2021

I'll take a look on Monday

@mheon mheon self-assigned this Jun 27, 2021
@stewartadam

I've reproduced this with podman-3.2.2-1.fc34.x86_64 as well. This bug puts affected containers into an exited state; if you clean up manually with docker rm $(docker ps -q -f 'status=exited'), then docker-compose up -d succeeds again (because it thinks it's creating new containers instead of re-creating them).

@baude baude unassigned mheon Jul 8, 2021
@baude baude added the In Progress This issue is actively being worked by the assignee, please do not work on this at this time. label Jul 8, 2021
@baude baude self-assigned this Jul 8, 2021
baude added a commit to baude/podman that referenced this issue Jul 9, 2021
With docker-compose, there is a use case where you can `docker-compose
up -d`, then change a file like docker-compose.yml and run up again.
This requires a ContainerConfig with at least Volumes populated in
the inspect data.  This PR adds just that.

Fixes: containers#10795

Signed-off-by: Brent Baude <bbaude@redhat.com>
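The failure mode the commit describes can be sketched in Python. This is a hypothetical minimal reconstruction, not the actual docker-compose code: the function and key names (get_container_volumes, "ContainerConfig", "Volumes") mirror the traceback and commit message above, but the real logic in compose/service.py is more involved.

```python
def get_container_volumes(inspect_data):
    # docker-compose reads the old container's inspect data to reuse its
    # volumes when recreating. Before the Podman fix, the "ContainerConfig"
    # key was absent from Podman's inspect output, so this lookup
    # returned None...
    container_config = inspect_data.get("ContainerConfig")
    # ...and calling .get() on None raised:
    # AttributeError: 'NoneType' object has no attribute 'get'
    return container_config.get("Volumes") or {}


# After the fix, Podman populates ContainerConfig with at least Volumes,
# so the lookup succeeds (sample data, for illustration only):
fixed_inspect = {"ContainerConfig": {"Volumes": {"/app": {}}}}
```

With the fixed inspect data the call returns the volume map; with the pre-fix data (no "ContainerConfig" key) it raises the AttributeError seen in the traceback.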
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023