Add production Dockerfile and CI image upload workflow #70
The comment needs updating as the workflow won't push the image every time anymore.
You might want to look at the `Dockerfile` in ral-facilities/scigateway-auth#134 and what I did there. If you like what I did, then it would be nice to keep things consistent (thoughts on it in that PR are welcome). It just means that we can have a single file with multiple stages/targets.
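For anyone following along, here is a rough sketch of what a single multi-stage `Dockerfile` could look like; the base image, module paths and FastAPI commands are assumptions for illustration, not the actual contents of scigateway-auth#134:

```Dockerfile
# Hypothetical multi-stage Dockerfile sketch; base image, paths and commands
# are placeholders, not the real scigateway-auth#134 or object-storage-api setup.
FROM python:3.11-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Local development target
FROM base AS dev
CMD ["fastapi", "dev", "object_storage_api/main.py", "--host", "0.0.0.0"]

# Test target: a container started from this image runs the test suite
FROM base AS test
CMD ["pytest"]

# Production target
FROM base AS prod
CMD ["fastapi", "run", "object_storage_api/main.py"]
```

A specific stage can then be built with, for example, `docker build --target prod -t object-storage-api:prod .`.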
I had a look at the PR, and I like the idea of merging the different Dockerfiles into one file with multiple stages. My only suggestion is, if it is implemented here, that we add a script to run everything succinctly.
Looking at the README of that PR, the commands to build the image and run the tests/start the container are very long, so it would be useful to have shortened commands like we do on IMS.
Sorry, I am not sure I fully understand what you mean. Could you please explain with examples if possible?
It's a small point: the commands for running the containers seem very long (compared to just using `docker compose up`), and I'm not sure whether you need to build an image each time you 1. switch from testing to developing or 2. make changes. I think you could use a `docker-compose.yml` to run them (shortening the commands), although I'm not sure if/how you would configure it to work with different targets for the same Dockerfile (i.e. `docker compose test up`, `docker compose dev up`, etc.).
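To make the command-length point concrete, a hypothetical comparison (the image name, port and paths are made up for illustration, not taken from this PR):

```bash
# Without a compose file: build and run by hand (hypothetical names/paths)
docker build --target dev -t object-storage-api:dev .
docker run --rm -p 8000:8000 -v "$(pwd)":/app object-storage-api:dev

# With a compose file, the same workflow collapses to
docker compose up
```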
Thanks for your explanation, I see what you mean now. This is why, in the README, the Using `docker-compose.yml` for local development section is at the top of the Inside of Docker section, telling the reader that this is the easiest way to run the app for local development.

As for testing, given that this app requires other services like MongoDB, MinIO etc. to be spun up for the e2e tests to run, it makes sense to have a second file called `docker-compose.test.yml` like you suggested. In that file you can define the services that are needed for testing and set the `object-storage-api` container to use the `test` target from the `Dockerfile`. You can then use `docker compose up` and `docker compose down` to run the tests using that new file. At the same time, you should rename the current `docker-compose.yml` file to `docker-compose.dev.yml` and only use that for local development. As both Docker Compose files have the source code mounted through volumes, you will not need to rebuild an image when you make changes to the code.

SciGateway Auth doesn't require any services to be spun up, so it is easier (at least for me) to just run the tests with the `docker run` command as I can just copy and paste it.

Please let me know if I was not clear enough with my suggestion above or you need other help.
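A rough sketch of what such a `docker-compose.test.yml` could look like, assuming a `test` Dockerfile stage as described above; the image versions, mount paths and service details are placeholders rather than this repo's real configuration:

```yaml
# Hypothetical docker-compose.test.yml sketch; image versions and settings
# are placeholders, not this repo's actual values.
services:
  object-storage-api:
    build:
      context: .
      target: test        # test stage, whose CMD runs the test suite
    volumes:
      - .:/app            # mount the source so no rebuild is needed on changes
    depends_on:
      - mongo-db
      - minio

  mongo-db:
    image: mongo:7.0

  minio:
    image: minio/minio
    command: server /data
```

`docker compose -f docker-compose.test.yml up` would then spin up the services and run the tests, and `docker compose -f docker-compose.test.yml down` would tear everything down.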
Thanks for the response, it helped clarify a lot of points. I've made all of the changes covered here now, but I had a few clarifying questions:

I wasn't sure whether the `CMD` line should run `fastapi` or `pytest`. It's currently running `pytest` (as I can see you did the same in the FastAPI PR), but I don't really understand why this decision was made. If I use the `docker-compose.test.yml` file, the tests don't automatically run?

Also, if both compose files are meant to be that similar (the only difference being the Dockerfile stage target), I think we could configure it to accept an environment variable specifying the stage and just keep one compose file.
Not really, because we are not spinning up our own instances of S3 and MongoDB in production. Even if we did not have these additional services (as is the case with SciGateway Auth), having a prod compose file would not fit all cases because the prod setup can vary, i.e. one can decide to use a different reverse proxy and deploy some sort of metrics services.

This highlights a case where a developer can have S3 and MongoDB set up locally on their machine outside of Docker, so the instructions (in SciGateway Auth) on running the tests with `docker run` (without using the Docker Compose file) would be useful there.

That's because the `test` stage is meant for running the tests (unit, e2e etc.) locally as opposed to running the application, so when you start a container with that image, it runs the tests. The reason we decided to do this is that we always run the tests in the same environment. The `dev` stage should be used for running the app locally, whereas the `prod` stage is for running in prod. Regarding the tests not running, I will have a look next week because they should do if everything is configured correctly.

While I am a fan of minimising duplicated code/configuration, I think this could get messy if the number of differences grows in future, so I am not sure. I am also not sure if you can do something like that with ENV VARs, but we can look into it.
In the latest commit I have an example implementation of using environment variables for the compose file.

I do agree that if there were more differences in the future, then keeping them separate would sound like a good idea, since both commands are of similar length and there likely wouldn't be more compose files in the future.
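For reference, a sketch of how the build target could be driven by an environment variable in a single compose file, using the `TARGET_STAGE` variable referenced in the next comment; the actual implementation in the commit may differ:

```yaml
# Hypothetical single compose file selecting the build target via an
# environment variable (defaulting to dev); may differ from the actual commit.
services:
  object-storage-api:
    build:
      context: .
      target: ${TARGET_STAGE:-dev}
    volumes:
      - .:/app
```

`docker compose up` would then use the default `dev` target, while `TARGET_STAGE=test docker compose up` would select the `test` stage.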
I spent too long on this today... I discovered that Docker Compose doesn't respect the target if there is already a Docker image for the container. For example, if I run `docker compose -f docker-compose.dev.yml up`, it will build an image for the `object-storage-api` container and spin up a container that will start the app. If I then stop the containers (using `Ctrl+C`) and run `TARGET_STAGE=test docker compose -f docker-compose.dev.yml up`, it will not build a new image for the `object-storage-api` but use the one it built before, which is why it doesn't run the tests. Rebuilding the images (`docker compose -f docker-compose.dev.yml build`) before running the Docker Compose `up` command each time the target is switched seems to solve this problem.

However, this issue does not arise (and no rebuilding of images is required) if the two compose files are used and different service and container names are used in each of the files, for example naming the service `object-storage-api-test` and the container `object_storage_api_test_container` in the `docker-compose.test.yml`.
Ah I see, well in that case it seems even more appropriate to keep separate compose files.
We don't do this in other projects. We have instructions to tell the user to create the file manually before building the image. I think we should stay consistent.
Ah, I saw we did this in the IMS API repo, so I assumed we were following that model.
My bad, we do this in other projects, you are right. However, I think this is unnecessary because the instructions tell the user to create the file manually before building the image. The new SciGateway Auth Dockerfile does not copy this in, and at some point I am going to refactor IMS API and LDAP JWT Auth to be consistent with SciGateway Auth and have only one Dockerfile that does not copy this in and has stages for dev and prod. I suggested in my other comment that you could do the same for this repo.