
Add production Dockerfile and CI image upload workflow #70

Open · wants to merge 15 commits into base: develop
Changes from 4 commits
31 changes: 31 additions & 0 deletions .github/workflows/.ci.yml
@@ -106,3 +106,34 @@ jobs:
- name: Output docker logs (minio)
if: failure()
run: docker logs object-storage-api-minio-1
docker:
# This job triggers only if all the other jobs succeed. It builds the Docker image and if successful,
# it pushes it to Harbor.
Collaborator:
The comment needs updating, as the image won't be pushed every time anymore.

needs: [linting, unit-tests, e2e-tests]
name: Docker
runs-on: ubuntu-latest
steps:
- name: Check out repo
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4.2.1

- name: Login to Harbor
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
registry: ${{ secrets.HARBOR_URL }}
username: ${{ secrets.HARBOR_USERNAME }}
password: ${{ secrets.HARBOR_TOKEN }}

- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81 # v5.5.1
with:
images: ${{ secrets.HARBOR_URL }}/object-storage-api

- name: Build and push Docker image to Harbor
uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75 # v6.9.0
with:
context: .
file: ./Dockerfile.prod
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
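Given the review note that the image is no longer pushed on every run, a conditional push could be sketched like this (the branch condition here is an assumption for illustration, not necessarily what the PR settled on):

```yaml
# Sketch: build on every run, but only push to Harbor from develop (assumption)
- name: Build and push Docker image to Harbor
  uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75 # v6.9.0
  with:
    context: .
    file: ./Dockerfile.prod
    push: ${{ github.ref == 'refs/heads/develop' }}
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
```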
28 changes: 28 additions & 0 deletions Dockerfile.prod
@VKTB (Collaborator), Dec 12, 2024:
You might want to look at the Dockerfile in ral-facilities/scigateway-auth#134 and what I did there. If you like what I did then it would be nice to keep things consistent (thoughts on it in that PR are welcome). It just means that we can have a single file with multiple stages/targets.
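A single multi-stage Dockerfile along those lines might be shaped roughly like this (a sketch only — the stage names, commands, and the test/ path are illustrative assumptions, not the actual scigateway-auth file):

```dockerfile
# Illustrative sketch of one Dockerfile with dev/test/prod targets
FROM python:3.12-alpine AS base
WORKDIR /object-storage-api-run
COPY requirements.txt ./
RUN python3 -m pip install --no-cache-dir -r requirements.txt
COPY object_storage_api/ object_storage_api/

FROM base AS dev
CMD ["uvicorn", "object_storage_api.main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"]

FROM base AS test
COPY test/ test/
CMD ["pytest"]

FROM base AS prod
USER nobody
CMD ["uvicorn", "object_storage_api.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Each stage can then be selected at build time with `docker build --target <stage>`.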

Contributor Author:

I had a look at the PR, and I like the idea of merging the different Dockerfiles into one file with multiple stages. My only suggestion, if it is implemented here, is for us to make a script to run everything succinctly.

Looking at the README of that PR, the commands to build the image and run tests/start the container are very long, so it would be useful to have shortened commands like we do on IMS.

@VKTB (Collaborator), Jan 22, 2025:

> My only suggestion, if it is implemented here, is for us to make a script to run everything succinctly.
>
> Looking at the README of that PR, the commands to build the image and run tests/start the container are very long, so it would be useful to have shortened commands like we do on IMS.

Sorry, I am not sure I fully understand what you mean. Could you please explain with examples if possible?

@asuresh-code (Contributor Author), Jan 23, 2025:

(screenshot of the README commands omitted)
It's a small point, but the commands for running the containers seem very long (compared to just using docker compose up), and I'm not sure if you need to build an image each time you: 1. switch from testing to developing, or 2. make changes. I think you could use a docker-compose.yml to run them (shortening the commands), although I'm not sure if/how you would configure it to work with different targets for the same Dockerfile (i.e. docker compose test up, docker compose dev up, etc.).

@VKTB (Collaborator), Jan 23, 2025:

Thanks for your explanation, I see what you mean now. This is why, in the README, the "Using docker-compose.yml for local development" section is at the top of the "Inside of Docker" section, telling the reader that this is the easiest way to run the app for local development.

As for testing, given that this app requires other services like MongoDB, MinIO etc. to be spun up for the e2e tests to run, it makes sense to have a second file called docker-compose.test.yml like you suggested. In that file you can define the services that are needed for testing and set the object-storage-api container to use the test target from the Dockerfile. You can then use docker compose up and docker compose down to run the tests using that new file. At the same time, you should rename the current docker-compose.yml file to docker-compose.dev.yml and only use that for local development. As both Docker Compose files have the source code mounted through volumes, you will not need to rebuild an image when you make changes to the code.

SciGateway Auth doesn't require any services to be spun up so it is easier (at least for me) to just run the tests with the docker run command as I can just copy and paste it.

Please let me know if I was not clear enough with my suggestion above or if you need other help.
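A minimal sketch of what that suggested docker-compose.test.yml could look like (the service names, images, and volume paths here are illustrative assumptions, not the final file):

```yaml
# docker-compose.test.yml (illustrative sketch)
services:
  object-storage-api:
    build:
      context: .
      target: test            # the Dockerfile's test stage runs the tests
    volumes:
      - ./object_storage_api:/object-storage-api-run/object_storage_api
    depends_on:
      - mongo-db
      - minio

  mongo-db:
    image: mongo

  minio:
    image: minio/minio
    command: server /data
```

With a file like this, `docker compose -f docker-compose.test.yml up` would spin up the dependencies and run the test container.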

@asuresh-code (Contributor Author), Jan 24, 2025:

Thanks for the response, it helped clarify a lot of points. I've made all of the changes covered here now, but I had a few clarifying questions to ask:

  • Does prod also need its own compose file? I assume it doesn't, and that you only need the stage to push to Harbor.
  • In the Dockerfile test stage, I'm not sure if the CMD line should run fastapi or pytest. It's currently running pytest (as I can see you did the same in the FastAPI PR), but I don't really understand why this decision was made. Also, if I use the docker-compose.test.yml file, the tests don't automatically run?

Also, if both compose files are meant to be that similar (the only difference being the Dockerfile stage target), I think we could configure it to accept an environment variable specifying the stage and just keep one compose file.
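The single-file idea could use Compose variable interpolation with a default; a sketch (the variable name is an assumption):

```yaml
# docker-compose.yml (sketch): pick the Dockerfile stage via an env var,
# e.g. `TARGET_STAGE=test docker compose up`
services:
  object-storage-api:
    build:
      context: .
      target: ${TARGET_STAGE:-dev}   # defaults to the dev stage
```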

@VKTB (Collaborator), Jan 24, 2025:

> Does prod also need its own compose file? I assume it doesn't, and that you only need the stage to push to Harbor.

Not really, because we are not spinning up our own instances of S3 and MongoDB in production. Even if we did not have these additional services (like is the case with SciGateway Auth), having a prod compose file would not fit all cases because the prod setup can vary, e.g. one can decide to use a different reverse proxy and deploy some sort of metrics service.

This also highlights a case where a developer may have S3 and MongoDB set up locally on their machine outside of Docker, so the instructions (in SciGateway Auth) on running the tests with docker run (without using the docker compose file) would be useful here.

> In the Dockerfile test stage, I'm not sure if the CMD line should run fastapi or pytest. It's currently running pytest (as I can see you did the same in the FastAPI PR), but I don't really understand why this decision was made. Also, if I use the docker-compose.test.yml file, the tests don't automatically run?

That's because the test stage is meant for running the tests (unit, e2e etc.) locally, as opposed to running the application, so when you start a container with that image, it runs the tests. The reason we decided to do this is so that we always run the tests in the same environment. The dev stage should be used for running the app locally, whereas the prod stage is for running in prod.

Regarding the tests not running, I will have a look next week because they should do if everything is configured correctly.

> Also, if both compose files are meant to be that similar (the only difference being the Dockerfile stage target), I think we could configure it to accept an environment variable specifying the stage and just keep one compose file.

While I am a fan of minimising duplicated code/configuration, I think this could get messy if the number of differences grows in future, so I am not sure. I am also not sure if you can do something like that with env vars, but we can look into it.

Contributor Author:

In the latest commit I have an example implementation of using environment variables for the compose file.

I do agree that if there were more differences in the future, keeping them separate would be a good idea; but since both commands are of similar length and there likely wouldn't be more compose files in the future, one file seems fine for now.

@VKTB (Collaborator), Jan 27, 2025:

> Regarding the tests not running, I will have a look next week because they should do if everything is configured correctly.

I spent too long on this today… I discovered that Docker Compose doesn't respect the target if there is already a Docker image for the container. For example, if I run docker compose -f docker-compose.dev.yml up, it will build an image for the object-storage-api container and spin up a container that will start the app. If I then stop the containers (using Ctrl+C) and run TARGET_STAGE=test docker compose -f docker-compose.dev.yml up, it will not build a new image for the object-storage-api but use the one it built before, which is why it doesn't run the tests. Rebuilding the images (docker compose -f docker-compose.dev.yml build) before running the Docker Compose up command each time the target is switched seems to solve this problem.

However, this issue does not arise (and no rebuilding of images is required) if the two compose files are used and different service and container names are used in each of the files. For example, naming the service object-storage-api-test and the container object_storage_api_test_container in the docker-compose.test.yml.
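Using the names from that comment, the relevant fragment of the test compose file would look roughly like this (a sketch; only the service and container names come from the comment above):

```yaml
# docker-compose.test.yml fragment: a distinct service and container name,
# so Compose does not reuse the image built from the dev file's service
services:
  object-storage-api-test:
    container_name: object_storage_api_test_container
    build:
      context: .
      target: test
```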

Contributor Author:

Ah I see; well, in that case it seems even more appropriate to keep separate compose files.

@@ -0,0 +1,28 @@
FROM python:3.12.7-alpine3.20@sha256:edd1d8559c585e1e9a9b79de44ac27f8ac32cb0c7323e112ae6870ceeecd8dbf AS builder

COPY requirements.txt ./

RUN set -eux; \
\
# Install pip dependencies \
python3 -m pip install --no-cache-dir -r requirements.txt;

FROM python:3.12.7-alpine3.20@sha256:edd1d8559c585e1e9a9b79de44ac27f8ac32cb0c7323e112ae6870ceeecd8dbf

WORKDIR /object-storage-api-run

COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

COPY README.md ./
COPY object_storage_api/ object_storage_api/

RUN set -eux; \
\
# Create logging.ini from its .example file \
cp object_storage_api/logging.example.ini object_storage_api/logging.ini;
Collaborator:

We don't do this in other projects. We have instructions to tell the user to create the file manually before building the image. I think we should stay consistent.

Contributor Author:

Ah, I saw we did this in the IMS api repo, so I assumed we were following that model.

Collaborator:

My bad, we do this in other projects, you are right. However, I think this is unnecessary because the instructions tell the user to create the file manually before building the image. The new SciGateway Auth Dockerfile does not copy this in, and at some point I am going to refactor IMS API and LDAP JWT Auth to be consistent with SciGateway Auth: a single Dockerfile that does not copy this in and has stages for dev and prod. I suggested in my other comment that you could do the same for this repo.


USER nobody

CMD ["uvicorn", "object_storage_api.main:app", "--app-dir", "/object-storage-api-run", "--host", "0.0.0.0", "--port", "8000"]
EXPOSE 8000