Add production Dockerfile and ci image upload workflow #70

Open · wants to merge 15 commits into base: develop · showing changes from 12 commits
42 changes: 40 additions & 2 deletions .github/workflows/.ci.yml
@@ -1,6 +1,11 @@
name: CI
on:
  workflow_dispatch:
    inputs:
      push-docker-image-to-harbor:
        description: 'Push Docker Image to Harbor'
        type: boolean
        default: false
  pull_request:
  push:
    branches:
@@ -90,11 +95,11 @@ jobs:
      # Sleep 10 seconds to give time for containers to start
      - name: Start MongoDB and MinIO
        run: |
-         docker compose up -d mongo-db minio minio_create_buckets
+         TARGET_STAGE=test docker compose -f docker-compose.dev.yml up -d mongo-db minio minio_create_buckets
          sleep 10
      - name: Create MinIO buckets
        run: |
-         docker compose up minio_create_buckets
+         TARGET_STAGE=test docker compose -f docker-compose.dev.yml up minio_create_buckets

      - name: Run e2e tests
        run: pytest -c test/pytest.ini test/e2e/ --cov
Collaborator:
Tests should now run in a Docker container. This requires some thought, given that there are multiple jobs for the tests.

Contributor Author (@asuresh-code), Jan 29, 2025:
One solution I've thought of: in the Dockerfile, we can split the test build stage into a unit-test stage and an e2e-test stage (the only difference being the command run).
[image: proposed split of the Dockerfile test stage]
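That split might look something like the following sketch. This is illustrative only — the `unit-test` and `e2e-test` stage names and the `test/unit/` path are assumptions, not part of this PR:

```dockerfile
# Hypothetical sketch: split the existing test stage into two stages that
# differ only in the pytest command they run.
FROM dev as unit-test

COPY test/ test/

# Assumed layout: unit tests live under test/unit/
CMD ["pytest", "--config-file", "test/pytest.ini", "test/unit/", "--cov"]

FROM dev as e2e-test

COPY test/ test/

CMD ["pytest", "--config-file", "test/pytest.ini", "test/e2e/", "--cov"]
```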

Then in the workflow, in the run-e2e/unit-test step of the separate jobs, we build the image targeting the unit or e2e stage respectively, and then run the image instead of using the docker compose file.

docker build --target unit-test -t object-storage-api:unit-test .
docker run object-storage-api:unit-test

This saves us having to create a new compose file for unit/e2e testing, or introduce environment variables into the compose file. It does create a new problem: the docker-compose.test.yml file no longer has a target stage that runs all of the tests. This could be addressed by:

  1. Defining a third test stage in the Dockerfile that runs all tests (essentially the same as the existing test stage), and setting that as the target for the compose file.
  2. Creating two separate services/containers in the compose file, one for unit testing and one for e2e testing. The separate jobs can then run their respective service, and to run all tests you can run everything. I'm less sure about this approach; I don't know if the services would conflict with each other if you try to run them at the same time. I think the only other thing that would need to change is their port numbers, so that they're not the same.
services:
  object-storage-api-unit-test:
    container_name: object_storage_api_container_unit_test
    build: 
      context: .
      target: unit-test
    ...

  object-storage-api-e2e-test:
    container_name: object_storage_api_container_e2e_test
    build: 
      context: .
      target: e2e-test
    ...

docker compose -f docker-compose.test.yml up object-storage-api-unit-test

@@ -106,3 +111,36 @@ jobs:
      - name: Output docker logs (minio)
        if: failure()
        run: docker logs object-storage-api-minio-1

  docker:
(VKTB marked this conversation as resolved.)
    # This job triggers only if all the other jobs succeed. It builds the Docker image
    # and, if run manually from GitHub Actions, it pushes the image to Harbor.
    needs: [linting, unit-tests, e2e-tests]
    name: Docker
    runs-on: ubuntu-latest
    env:
      PUSH_DOCKER_IMAGE_TO_HARBOR: ${{ inputs.push-docker-image-to-harbor != null && inputs.push-docker-image-to-harbor || 'false' }}
    steps:
      - name: Check out repo
        uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4.2.1

      - name: Login to Harbor
        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
        with:
          registry: ${{ secrets.HARBOR_URL }}
          username: ${{ secrets.HARBOR_USERNAME }}
          password: ${{ secrets.HARBOR_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81 # v5.5.1
        with:
          images: ${{ secrets.HARBOR_URL }}/object-storage-api

      - name: ${{ fromJSON(env.PUSH_DOCKER_IMAGE_TO_HARBOR) && 'Build and push Docker image to Harbor' || 'Build Docker image' }}
        uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75 # v6.9.0
        with:
          context: .
          push: ${{ fromJSON(env.PUSH_DOCKER_IMAGE_TO_HARBOR) }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          target: prod
38 changes: 37 additions & 1 deletion Dockerfile
@@ -1,4 +1,4 @@
-FROM python:3.12.8-alpine3.20@sha256:0c4f778362f30cc50ff734a3e9e7f3b2ae876d8386f470e0c3ee1ab299cec21b
+FROM python:3.12.8-alpine3.20@sha256:0c4f778362f30cc50ff734a3e9e7f3b2ae876d8386f470e0c3ee1ab299cec21b as base

Collaborator:
You don't really need a base stage in this case. You can just move this to the dev stage, considering that the test stage uses it as its base.
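A sketch of how that suggestion could look, with the dependency install folded into the `dev` stage (illustrative only — this is not the Dockerfile in this PR, and the pinned digest is omitted for brevity):

```dockerfile
# Hypothetical: no separate base stage; dev installs the dependencies
# and test builds directly FROM dev.
FROM python:3.12.8-alpine3.20 as dev

WORKDIR /object-storage-api-run

COPY requirements.txt ./
RUN python3 -m pip install -r requirements.txt

COPY object_storage_api/ object_storage_api/

CMD ["fastapi", "dev", "object_storage_api/main.py", "--host", "0.0.0.0", "--port", "8000"]
EXPOSE 8000

FROM dev as test

COPY test/ test/

CMD ["pytest", "--config-file", "test/pytest.ini", "test/", "--cov"]
```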

WORKDIR /object-storage-api-run

@@ -10,5 +10,41 @@ RUN --mount=type=cache,target=/root/.cache \
    \
    python3 -m pip install -r requirements.txt;

FROM python:3.12.8-alpine3.20@sha256:0c4f778362f30cc50ff734a3e9e7f3b2ae876d8386f470e0c3ee1ab299cec21b as dev

WORKDIR /object-storage-api-run

COPY --from=base /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=base /usr/local/bin /usr/local/bin
COPY object_storage_api/ object_storage_api/

CMD ["fastapi", "dev", "object_storage_api/main.py", "--host", "0.0.0.0", "--port", "8000"]
EXPOSE 8000

FROM dev as test

WORKDIR /object-storage-api-run

COPY test/ test/

CMD ["pytest", "--config-file", "test/pytest.ini", "test/", "--cov"]

Collaborator:
This would fail because only the prod dependencies are installed.

Contributor Author:
I've opted to install from the pyproject.toml instead to get the test dependencies, which I believe should solve the issue.
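A sketch of what that might look like, assuming the test dependencies are declared as an optional extra named `dev` in `pyproject.toml` (the extra name is an assumption, not confirmed by this PR):

```dockerfile
# Hypothetical test stage installing from pyproject.toml so that the test
# dependencies (pytest, coverage, etc.) are present in the image.
FROM dev as test

WORKDIR /object-storage-api-run

COPY pyproject.toml ./
# Installs the package plus the assumed "dev" extra containing test deps
RUN python3 -m pip install .[dev]

COPY test/ test/

CMD ["pytest", "--config-file", "test/pytest.ini", "test/", "--cov"]
```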

FROM python:3.12.8-alpine3.20@sha256:0c4f778362f30cc50ff734a3e9e7f3b2ae876d8386f470e0c3ee1ab299cec21b as prod

WORKDIR /object-storage-api-run

COPY requirements.txt ./
COPY object_storage_api/ object_storage_api/

RUN --mount=type=cache,target=/root/.cache \
    set -eux; \
    \
    python3 -m pip install --no-cache-dir -r requirements.txt; \
    # Create a non-root user to run as \
    addgroup -S object-storage-api; \
    adduser -S -D -G object-storage-api -H -h /object-storage-api-run object-storage-api;

USER object-storage-api

CMD ["fastapi", "run", "object_storage_api/main.py", "--host", "0.0.0.0", "--port", "8000"]
EXPOSE 8000
64 changes: 64 additions & 0 deletions docker-compose.dev.yml
@@ -0,0 +1,64 @@
services:
  object-storage-api:
    container_name: object_storage_api_container
    build:
      context: .
      target: ${TARGET_STAGE:-dev}
    volumes:
      - ./object_storage_api:/object-storage-api-run/object_storage_api
      - ./keys:/object-storage-api-run/keys
    restart: on-failure
    ports:
      - 8002:8000
    depends_on:
      - mongo-db
      - minio
    environment:
      DATABASE__HOST_AND_OPTIONS: object_storage_api_mongodb_container:27017/?authMechanism=SCRAM-SHA-256&authSource=admin
    extra_hosts:
      # Want to use localhost for the MinIO connection so the presigned URLs are correct, but also
      # want to avoid using host networking
      - "localhost:host-gateway"

  mongo-db:
    image: mongo:7.0-jammy
    container_name: object_storage_api_mongodb_container
    volumes:
      - ./mongodb/data:/data/db
    restart: always
    ports:
      - 27018:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example

  minio:
    image: minio/minio:RELEASE.2024-09-13T20-26-02Z
    container_name: object_storage_minio_container
    command: minio server /data
    volumes:
      - ./minio/data:/data
    ports:
      - 9000:9000
      - 9001:9001
    environment:
      MINIO_ROOT_USER: root
      MINIO_ROOT_PASSWORD: example_password
      MINIO_ADDRESS: ":9000"
      MINIO_CONSOLE_ADDRESS: ":9001"
    network_mode: "host"

  # From https://stackoverflow.com/questions/66412289/minio-add-a-public-bucket-with-docker-compose
  minio_create_buckets:
    image: minio/mc
    container_name: object_storage_minio_mc_container
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc alias set object-storage http://localhost:9000 root example_password;
      /usr/bin/mc mb object-storage/object-storage;
      /usr/bin/mc mb object-storage/test-object-storage;
      exit 0;
      "
    network_mode: "host"
4 changes: 3 additions & 1 deletion docker-compose.yml → docker-compose.test.yml
@@ -1,7 +1,9 @@
 services:
   object-storage-api:
     container_name: object_storage_api_container
-    build: .
+    build:
+      context: .
+      target: test
     volumes:
       - ./object_storage_api:/object-storage-api-run/object_storage_api
       - ./keys:/object-storage-api-run/keys