Add production Dockerfile and ci image upload workflow #70
base: develop
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff            @@
##           develop      #70     +/-  ##
===========================================
+ Coverage    99.28%   99.57%   +0.28%
===========================================
  Files           19       19
  Lines          422      468      +46
===========================================
+ Hits           419      466      +47
+ Misses           3        2       -1
Still waiting on the Harbor project. The README will need to be updated at the end to include instructions.
Dockerfile.prod (Outdated)
RUN set -eux; \
    \
    # Create logging.ini from its .example file \
    cp object_storage_api/logging.example.ini object_storage_api/logging.ini;
We don't do this in other projects. We have instructions to tell the user to create the file manually before building the image. I think we should stay consistent.
Ah, I saw we did this in the IMS API repo, so I assumed we were following that model.
My bad, we do this in other projects, you are right. However, I think this is unnecessary because the instructions tell the user to create the file manually before building the image. The new SciGateway Auth Dockerfile does not copy this in, and at some point I am going to refactor IMS API and LDAP JWT Auth to be consistent with SciGateway Auth: a single Dockerfile that does not copy this in and has stages for dev and prod. I suggested in my other comment that you could do the same for this repo.
Dockerfile.prod (Outdated)
You might want to look at the Dockerfile in ral-facilities/scigateway-auth#134 and what I did there. If you like what I did then it would be nice to keep things consistent (thoughts on it in that PR are welcome). It just means that we can have a single file with multiple stages/targets.
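For reference, a minimal sketch of what such a single multi-stage Dockerfile could look like for this repo — the stage layout, the [dev] extras name, and the app entry point path are assumptions, not the actual SciGateway Auth file; the test CMD is the one from this PR:

FROM python:3.12.8-alpine3.20 AS base
WORKDIR /app
COPY pyproject.toml ./
COPY object_storage_api/ object_storage_api/

FROM base AS dev
# Install the project with its dev/test extras ("[dev]" is an assumed extras name)
RUN pip install .[dev]
CMD ["fastapi", "dev", "object_storage_api/main.py", "--host", "0.0.0.0"]

FROM dev AS test
COPY test/ test/
CMD ["pytest", "--config-file", "test/pytest.ini", "test/", "--cov"]

FROM base AS prod
# Only the production dependencies are installed here
RUN pip install .
CMD ["fastapi", "run", "object_storage_api/main.py"]

Each stage can then be selected at build time, e.g. docker build --target prod -t object-storage-api:prod .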
I had a look at the PR, and I like the idea of merging the different Dockerfiles into one file with multiple stages. My only suggestion, if it were implemented here, is that we make a script to run everything succinctly.
Looking at the README of that PR, the commands to build the image and run tests/start the container are very long, so it would be useful to have shortened commands like we do on IMS.
My only suggestion, if it were implemented here, is that we make a script to run everything succinctly. Looking at the README of that PR, the commands to build the image and run tests/start the container are very long, so it would be useful to have shortened commands like we do on IMS.
Sorry, I am not sure I fully understand what you mean. Could you please explain with examples if possible?
It's a small point: the commands for running the containers seem very long (compared to just using docker compose up), and I'm not sure if you need to build an image each time you 1. switch from testing to developing or 2. make changes? I think you could use a docker-compose.yml to run them (shortening the commands), although I'm not sure if/how you would configure it to work with different targets for the same Dockerfile (i.e. docker compose test up, docker compose dev up, etc.)
Thanks for your explanation, I see what you mean now. This is why in the README, the "Using docker-compose.yml for local development" section is at the top of the "Inside of Docker" section, telling the reader that this is the easiest way to run the app for local development.
As for testing, given that this app requires other services like MongoDB, MinIO etc. to be spun up for the e2e tests to run, it makes sense to have a second file called docker-compose.test.yml like you suggested. In that file you can define the services that are needed for testing and set the object-storage-api container to use the test target from the Dockerfile. You can then use docker compose up and docker compose down to run the tests using that new file. At the same time, you should rename the current docker-compose.yml file to docker-compose.dev.yml and only use that for local development. As both Docker Compose files have the source code mounted through volumes, you will not need to rebuild an image when you make changes to the code.
SciGateway Auth doesn't require any services to be spun up, so it is easier (at least for me) to just run the tests with the docker run command as I can just copy and paste it.
Please let me know if I was not clear enough with my suggestion above or if you need other help.
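As a rough illustration of that suggestion, a docker-compose.test.yml could look something like this — the images, ports, and volume paths are assumptions; only the test target and the container name come from this thread:

services:
  object-storage-api:
    container_name: object_storage_api_test_container
    build:
      context: .
      target: test
    volumes:
      # Source and tests mounted so code changes need no image rebuild
      - ./object_storage_api:/app/object_storage_api
      - ./test:/app/test
    depends_on:
      - mongo-db
      - minio

  mongo-db:
    image: mongo:7.0
    ports:
      - "27017:27017"

  minio:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"

docker compose -f docker-compose.test.yml up would then build the test target and run the test suite, and docker compose -f docker-compose.test.yml down would tear the services back down.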
Thanks for the response, it helped clarify a lot of points. I've made all of the changes covered here now, but I had a few clarifying questions:
- Does prod also need its own compose file? I assume it doesn't, and that you only need the stage to push to Harbor.
- In the Dockerfile test stage, I'm not sure if the CMD line should run fastapi or pytest. It's currently running pytest (as I can see you did the same in the FastAPI PR), but I don't really understand why this decision was made? If I use the docker-compose.test.yml file, the tests don't automatically run?
Also, if both compose files are meant to be that similar (the only difference being the Dockerfile stage target), I think we could configure it to accept an environment variable specifying the stage and just keep one compose file.
Does prod also need its own compose file? I assume it doesn't, and that you only need the stage to push to Harbor.
Not really, because we are not spinning up our own instances of S3 and MongoDB in production. Even if we did not have these additional services (as is the case with SciGateway Auth), having a prod compose file would not fit all cases because the prod setup can vary, i.e. one can decide to use a different reverse proxy and deploy some sort of metric services.
This highlights a case where a developer can have S3 and MongoDB set up locally on their machine outside of Docker, so the instructions (in SciGateway Auth) on running the tests with docker run (without using the docker compose file) would be useful here.
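For completeness, a hedged sketch of that docker run route against services on the host — the environment variable names and ports here are illustrative assumptions, not this repo's actual settings:

# Build the test image and point it at MongoDB/S3 running on the host;
# host.docker.internal needs the --add-host mapping on plain Linux Docker
docker build --target test -t object-storage-api:test .
docker run --rm --add-host=host.docker.internal:host-gateway \
  -e MONGODB_URL=mongodb://host.docker.internal:27017 \
  -e S3_ENDPOINT_URL=http://host.docker.internal:9000 \
  object-storage-api:test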
In the Dockerfile test stage, I'm not sure if the CMD line should run fastapi or pytest. It's currently running pytest (as I can see you did the same in the FastAPI PR), but I don't really understand why this decision was made? If I use the docker-compose.test.yml file, the tests don't automatically run?
That's because the test stage is meant for running the tests (unit, e2e etc.) locally, as opposed to running the application, so when you start a container with that image, it will run the tests. The reason we decided to do this is so that we always run the tests in the same environment. The dev stage should be used for running the app locally, whereas the prod stage is for running in prod.
Regarding the tests not running, I will have a look next week because they should do if everything is configured correctly.
Also, if both compose files are meant to be that similar (the only difference being the Dockerfile stage target), I think we could configure it to accept an environment variable specifying the stage and just keep one compose file.
While I am a fan of minimising duplicated code/configurations, I think this could get messy if the number of differences grows in the future, so I am not sure. I am also not sure if you can do something like that with env vars, but we can look into it.
In the latest commit I have an example implementation of using environment variables for the compose file.
I do agree that if there were more differences in the future, then keeping them separate sounds like a good idea, since both commands are of similar length and there wouldn't likely be more compose files in the future.
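Presumably something along these lines, judging by the TARGET_STAGE variable used later in this thread (a sketch, not the actual commit):

services:
  object-storage-api:
    build:
      context: .
      # Compose variable substitution: falls back to the dev stage
      # when TARGET_STAGE is not set in the environment
      target: ${TARGET_STAGE:-dev}

With this, TARGET_STAGE=test docker compose up selects the test stage, while a plain docker compose up keeps dev.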
Regarding the tests not running, I will have a look next week because they should do if everything is configured correctly.
I spent too long on this today... I discovered that Docker Compose doesn't respect the target if there is already a Docker image for the container. For example, if I run docker compose -f docker-compose.dev.yml up, it will build an image for the object-storage-api container and spin up a container that will start the app. If I then stop the containers (using Ctrl+C) and run TARGET_STAGE=test docker compose -f docker-compose.dev.yml up, it will not build a new image for the object-storage-api but use the one it built before, which is why it doesn't run the tests. Rebuilding the images (docker compose -f docker-compose.dev.yml build) before running the docker compose up command each time the target is switched seems to solve this problem.
However, this issue does not arise (and no rebuilding of images is required) if the two compose files are used and different service and container names are used in each of the files. For example, naming the service object-storage-api-test and the container object_storage_api_test_container in the docker-compose.test.yml.
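That is, something like this excerpt in docker-compose.test.yml (the service and container names are the ones mentioned above):

services:
  object-storage-api-test:
    container_name: object_storage_api_test_container
    build:
      context: .
      # A distinct service name means Compose builds and tags its own
      # image for the test target instead of reusing the dev one
      target: test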
Ah I see, well in that case it seems even more appropriate to keep separate compose files.
Force-pushed from bccd108 to 29adf12
Force-pushed from 29adf12 to a2d28e0
.github/workflows/.ci.yml (Outdated)
# This job triggers only if all the other jobs succeed. It builds the Docker image and if successful,
# it pushes it to Harbor.
Comment needs updating as it won't push it every time anymore.
docker-compose.yml (Outdated)
This is only used for local dev, not prod. Please revert the changes so it uses the Dockerfile.
Force-pushed from dc3e4b3 to 5fb8ba9
Force-pushed from 5fb8ba9 to 9a87e9c
Dockerfile (Outdated)
@@ -1,4 +1,4 @@
-FROM python:3.12.8-alpine3.20@sha256:0c4f778362f30cc50ff734a3e9e7f3b2ae876d8386f470e0c3ee1ab299cec21b
+FROM python:3.12.8-alpine3.20@sha256:0c4f778362f30cc50ff734a3e9e7f3b2ae876d8386f470e0c3ee1ab299cec21b as base
You don't really need a base stage in this case. You can just move this to the dev stage considering that the test stage uses it as the base.
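In other words, roughly this (a sketch of the suggestion, not the exact diff):

FROM python:3.12.8-alpine3.20@sha256:0c4f778362f30cc50ff734a3e9e7f3b2ae876d8386f470e0c3ee1ab299cec21b AS dev
# ...dev stage setup...

# test builds directly on dev, so no separate base stage is needed
FROM dev AS test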
COPY test/ test/

CMD ["pytest", "--config-file", "test/pytest.ini", "test/", "--cov"]
This would fail because only the prod dependencies are installed.
I've opted to install from the pyproject.toml instead to get the test dependencies, which I believe should solve the issue.
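Assuming the test dependencies are declared as an optional dependency group in pyproject.toml, the test stage could then do something like this ("[test]" as the extras name is an assumption):

COPY pyproject.toml ./
# Install the project together with its test extras from pyproject.toml
RUN pip install .[test]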
.github/workflows/.ci.yml (Outdated)
      sleep 10
  - name: Create MinIO buckets
    run: |
-     docker compose up minio_create_buckets
+     TARGET_STAGE=test docker compose -f docker-compose.dev.yml up minio_create_buckets

  - name: Run e2e tests
    run: pytest -c test/pytest.ini test/e2e/ --cov
Tests should now run in a Docker container. This requires some thinking given there are multiple jobs for the tests.
One solution I've thought of is, in the Dockerfile, to split up the build stage for test into a unit test stage and an e2e test stage (the only difference being the command run). Then in the workflow, in the run e2e/unit test step of the separate jobs, we build the image targeting the unit or e2e stage respectively, and then run the image instead of using the docker compose file. For example:
docker build --target unit-test -t object-storage-api:unit-test .
docker run object-storage-api:unit-test
This saves us having to create a new compose file for unit/e2e testing, or introduce environment variables into the compose file. This does create a new problem: the docker-compose.test.yml file no longer has a target stage that runs all of the tests. This could be addressed by:
- Defining a 3rd test stage in the Dockerfile that runs all tests (essentially the same as the existing test stage), and setting that as the target for the compose file.
- Creating 2 separate services + containers in the compose file, 1 for unit testing and 1 for e2e testing. The separate jobs can then just run each service for their respective tests, and to run all tests you can just run everything? I'm less sure about this approach; I don't know if the services would conflict with each other if you try to run them at the same time. I think the only other thing that would need to be changed is their port numbers, so that they're not the same?
services:
  object-storage-api-unit-test:
    container_name: object_storage_api_container_unit_test
    build:
      context: .
      target: unit-test
    ...
  object-storage-api-e2e-test:
    container_name: object_storage_api_container_e2e_test
    build:
      context: .
      target: e2e-test
    ...
docker compose -f docker-compose.test.yml up object-storage-api-unit-test
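In the workflow, the per-job test steps could then look roughly like this (step names and job wiring are illustrative; the commands mirror the ones above):

- name: Build unit test image
  run: docker build --target unit-test -t object-storage-api:unit-test .

- name: Run unit tests
  run: docker run --rm object-storage-api:unit-test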
@asuresh-code Thanks for all your work on this PR so far; as agreed, I will continue working on it.
Description
Enter a description of the changes here
Testing instructions
Add a set of instructions describing how the reviewer should test the code
Agile board tracking
closes #5