Split and parallelize CI jobs #74
Conversation
- "py3.10"
- "py3.11"
- "py3.12"
- "py3.13"
I like this idea, however it only solves the issue partially, as we still have different Django, psycopg, and Postgres versions. Is it possible to move the tox matrix that includes the Python, Django, psycopg, and Postgres versions fully into the GitHub Actions matrix?
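Fully expanding the tox matrix into GitHub Actions could look roughly like this. This is a hypothetical sketch, not this repo's config: the version lists and the tox env naming scheme are assumptions for illustration. Note that it multiplies the job count (4 × 3 × 2 × 4 = 96 jobs in this example), which is the fine-grained end of the trade-off discussed below.

```yaml
# Hypothetical sketch: one GitHub Actions job per tox env.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13"]
        django-version: ["4.2", "5.0", "5.1"]
        psycopg: ["psycopg2", "psycopg3"]
        postgres-version: ["14", "15", "16", "17"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Run the single tox env for this combination
        # Assumes tox envs are named from these factors,
        # e.g. py3.10-django4.2-psycopg2-pg14 (an assumption).
        run: tox -e "py${{ matrix.python-version }}-django${{ matrix.django-version }}-${{ matrix.psycopg }}-pg${{ matrix.postgres-version }}"
```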
Yes, it's true that it's a partial solution. It's a trade-off between CI workflow configuration complexity and convenience. If the split is fine-grained down to one CI job per tox env, your logs are super clear, but the YAML can get quite complicated.
I find that having one job per Python version is usually a good balance... I know a few Django packages I use do that (whitenoise, django-cors-headers), and that's the approach in the @django-commons org as well.
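The one-job-per-Python-version balance described above could be sketched like this (hypothetical: the version list and the tox invocation are assumptions, not this repo's actual workflow):

```yaml
# Hypothetical sketch: one CI job per Python version, each running
# every tox env for that interpreter.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Run all tox envs for this interpreter
        # tox 4 can select envs by factor with -f; this assumes
        # factors like py310 (dots stripped via bash expansion).
        run: tox -f "py${PYVER//./}"
        env:
          PYVER: ${{ matrix.python-version }}
```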
OK, your approach is better than the current one anyway.
I'm OK with going with the Python version split. I have a few comments:
- it would be nice to build the image before usage
- it would be nice to add the changes to CHANGES.md
.github/workflows/check.yml
Outdated
- name: build and pull images
  run: |
    docker compose build
    docker compose pull --quiet
this step builds the image for each matrix item; let's build the image first and then reuse it in the check job
That needs the build in a separate job, which means the image needs to be pushed somewhere in between: https://docs.docker.com/build/ci/github-actions/share-image-jobs/
I've also looked at using GitHub's container registry, but that might be a challenge to use from a fork...
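The pattern from the linked Docker docs is roughly: build once, export the image to a tar file, upload it as an artifact, then download and load it in each downstream job. A sketch under assumptions (the image name and compose service name are made up for illustration):

```yaml
# Hypothetical sketch of sharing a built image between jobs via an artifact.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and export image
        run: |
          docker compose build
          docker save some-image:latest -o /tmp/image.tar  # image name is an assumption
      - uses: actions/upload-artifact@v4
        with:
          name: image
          path: /tmp/image.tar
  check:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: image
          path: /tmp
      - name: Load image and run checks
        run: |
          docker load -i /tmp/image.tar
          docker compose run --rm tests  # service name is an assumption
```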
I couldn't get the solution from the above link to work nicely with `docker compose pull`. I had to either repeat the services to pull, which would cause duplication when adding or removing Postgres versions, or ignore errors, which could hide a real problem down the line.
Here are my attempts: https://github.com/browniebroke/django-pg-zero-downtime-migrations/actions?query=branch%3Aci%2Fseparate-build-push
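The two workarounds described above could look like this (the Postgres service names are made up for illustration):

```shell
# Workaround 1: pull only explicitly named services, duplicating the
# Postgres service list (service names here are assumptions):
docker compose pull --quiet postgres-14 postgres-15 postgres-16

# Workaround 2: pull everything but ignore services that fail to pull
# (e.g. the locally built image), at the risk of hiding real failures:
docker compose pull --quiet --ignore-pull-failures
```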
OK, now it works on pushes to my fork, but it fails in the context of a pull request, as my fork doesn't have permission to push to the GH container registry.
I'll leave this for now and get back to it later when/if I have more time, but I'm starting to run out of ideas on how to solve this properly...
Closing for now. Did a less ambitious first pass in #75
While contributing to this project recently, I struggled a bit to read through CI failures: the logs were super long and noisy, mixing the build and test phases. This was made worse because all jobs were mixed together.
We can split the build and run steps to make this a bit easier to read, as well as run each Python version in a separate CI job. The main drawback I see is that the build is duplicated in each job, which isn't optimal, but it's a small one compared to the maintainability gains.
I appreciate that you may feel differently and would understand if you decline this change.