[CI] Add punet export test #623
Conversation
.github/workflows/ci-sharktank.yml (Outdated)

    - name: Run punet tests
      run: |
        pytest -v sharktank/ -m "model_punet and export"
Adding 40 lines of boilerplate to run tests matching a combination of marks for one particular model is a pretty bad code smell. Workflow changes generally shouldn't be required to add test coverage - tests should just get picked up by existing workflows automatically.
The work in #584 will simplify the pip install steps, but the test setup still needs more refactoring. Why are these tests not running as part of pytest -n 4 sharktank/ below? Maybe we could add an "integration test" mark or directory path and run those in a dedicated job, instead of specializing this to punet? (Remember: we're going to be supporting 10s-100s of models, and we can't scale this sort of test or workflow code as currently written.)
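For context, a minimal sketch of what registering such an "integration" mark could look like, assuming a conftest.py at the sharktank/ root (the mark name is just the suggestion above, not something that exists in the repo yet):

    # conftest.py - hypothetical sketch of registering an "integration" mark
    # so long-running tests can be selected with: pytest -m integration
    def pytest_configure(config):
        # Register the mark so pytest does not warn about unknown marks.
        config.addinivalue_line(
            "markers",
            "integration: long-running integration tests "
            "(deselect with -m 'not integration')",
        )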
Yeah, Nithin is going to add an integration test mark for now. We could also run it all under pytest -n 4 sharktank/, but it's probably good to have a bit of separation, so it stays clear what each job is testing as we bring in a lot more models.
Added this test as the first of many integration tests. I think it's better to keep it as a separate, tagged integration test, as Sai suggested.
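As an illustration (a hypothetical sketch, not the actual test code in this PR), tagging the export test with the proposed mark could look like this; the model_punet and export marks are the ones the workflow snippet above already selects:

    # test_punet_export.py - hypothetical usage of the proposed mark.
    import pytest

    @pytest.mark.integration
    @pytest.mark.export
    @pytest.mark.model_punet
    def test_punet_export():
        ...  # export punet and check the resulting artifacts

A separate CI job could then run pytest -v sharktank/ -m integration while the unit-test job deselects it with -m "not integration".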
As part of #584, I think we should keep integration tests in separate workflow files that use package builds, while unit tests keep building from source.
This should merge with https://github.com/nod-ai/shark-ai/blob/main/.github/workflows/ci-llama-quick-tests.yaml and https://github.com/nod-ai/shark-ai/blob/main/.github/workflows/ci_eval_short.yaml as just "short integration tests" (20-30 minute target).
It can start here for now, but it will need to move soon IMO.
Force-pushed from 2df50d3 to bed64cb
Force-pushed from bed64cb to 4591bf5
Seems running … 15 minutes should be fine. If that's too long, we can switch to the mi250 runners.
Please add a PR description explaining what new test coverage this adds and how much CI time it takes.
Adds the punet export test back to the CI, running on pre-commit. The tests take about 15 minutes to run.