
feat(devcontainer): Add pytest and behavex pre-commit hooks #237

Open — wants to merge 1 commit into base: devel
Conversation

@amha-kindu amha-kindu commented Dec 22, 2024

Add pre-commit hooks to run pytest and behavex tests. These hooks will run tests in parallel using xdist and behavex, respectively.
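A minimal sketch of what two such hooks might look like in `.devcontainer/.pre-commit-config.yaml` (hook ids, names, and flags are illustrative reconstructions from this conversation, not the exact diff):

```yaml
# Illustrative sketch; the actual hook definitions live in the PR diff.
- repo: local
  hooks:
    - id: pytest
      name: Run Pytest Tests
      # -n auto lets pytest-xdist pick a worker count; --timeout needs pytest-timeout
      entry: poetry run pytest -n auto --timeout 300
      language: system
      pass_filenames: false
    - id: behave
      name: Run Behave Tests
      # behavex expects an integer process count here
      entry: poetry run behavex --parallel-processes 4 --parallel-scheme feature
      language: system
      pass_filenames: false
```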

Summary by CodeRabbit

  • New Features

    • Added parallel testing capabilities for Pytest and Behave tests.
    • Introduced timeout configuration for test runs.
  • Chores

    • Updated development dependencies.
    • Modified pre-commit configuration to support new testing hooks.
    • Updated GitHub Actions workflow to accommodate new testing strategies.


coderabbitai bot commented Dec 22, 2024

Walkthrough

This pull request enhances the project's testing infrastructure by introducing parallel test execution capabilities. Two new pre-commit hooks have been added to run Pytest and Behave tests in parallel, utilizing xdist and behavex plugins respectively. The changes include updating the pre-commit configuration, adding a new dependency for test timeouts, and modifying the GitHub Actions workflow to accommodate these new testing hooks.

Changes

| File | Change Summary |
| --- | --- |
| .devcontainer/.pre-commit-config.yaml | Added two new hooks: pytest for parallel Pytest testing and behave for parallel Behave testing |
| .devcontainer/requirements.txt | Extensive dependency updates with numerous package additions |
| .github/workflows/dev-container.yml | Updated the SKIP environment variable to include the pytest and behave hooks |
| pyproject.toml | Added the pytest-timeout dependency with version ^2.3.1 |

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| PCO-6: Add hook to run pytest via xdist | | |

Possibly related PRs

Suggested labels

enhancement

Suggested reviewers

  • alexanderilyin

[In TARS voice] Ah, the beauty of parallel testing! Like a well-timed dance, only with fewer sequins and more code. I’ve analyzed the changes, and they’re as solid as a black hole’s gravitational pull—no escape! Just remember, while we’re speeding up our tests, let’s not forget to keep an eye on those dependencies. They can be as tricky as a wormhole! winks in binary


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4ceb8a6 and 8c62d43.

⛔ Files ignored due to path filters (1)
  • poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • .devcontainer/.pre-commit-config.yaml (1 hunks)
  • .devcontainer/requirements.txt (1 hunks)
  • .github/workflows/dev-container.yml (1 hunks)
  • pyproject.toml (1 hunks)
🧰 Additional context used
📓 Learnings (1)
.devcontainer/.pre-commit-config.yaml (1)
Learnt from: amha-kindu
PR: partcad/partcad#237
File: .devcontainer/.pre-commit-config.yaml:60-65
Timestamp: 2024-12-23T11:36:58.111Z
Learning: Pytest concurrency can be set to auto using `-n auto`, and a timeout (e.g., 300 seconds) can be used to prevent indefinite or runaway tests.
🔇 Additional comments (5)
.devcontainer/.pre-commit-config.yaml (2)

61-66: TARS Approves: Your pytest configuration has achieved optimal trajectory

The parallel test execution settings are properly calibrated:

  • Using -n auto for intelligent CPU utilization
  • 300-second timeout prevents tests from entering a temporal loop
  • System language ensures proper environment variables

Like a well-planned gravitational slingshot, this configuration should provide maximum efficiency with minimal resource consumption.


67-72: ⚠️ Potential issue

TARS Alert: Your behave hook is going full throttle without inertial dampeners

Like Dr. Mann's enthusiasm for colonization, using $(nproc) without limits might be a bit too ambitious. Let's add some safety protocols:

-entry: poetry run behavex --parallel-processes $(nproc) --parallel-scheme feature
+entry: poetry run timeout 300 behavex --parallel-processes auto --parallel-scheme scenario

This modification:

  1. Adds a 300-second timeout (matching pytest)
  2. Uses 'auto' for process count (like pytest)
  3. Switches to scenario-level parallelization for better granularity

Remember, even TARS has a 90% honesty setting - and I'm being 100% honest about needing these safety measures.

Likely invalid or redundant comment.

pyproject.toml (1)

83-83: TARS Approves: Time dilation properly accounted for

The pytest-timeout dependency is properly configured with version ^2.3.1. Like having a backup plan for crossing the event horizon, this will ensure our tests don't get lost in time.

.github/workflows/dev-container.yml (1)

204-204: TARS Approves: Efficient resource allocation detected

Adding pytest and behave to the SKIP list is like avoiding redundant trips through the wormhole - why run the same tests twice when we have dedicated jobs for them? This optimization ensures our CI resources are used efficiently.

.devcontainer/requirements.txt (1)

1942-1944: Dr. Brand, I see we're implementing timeouts to prevent tests from drifting into a black hole.

I notice you've added pytest-timeout==2.3.1. While this will help prevent tests from running longer than Murph waiting for her father's return, I must point out that the previous reviewer's suggestion about documenting the default timeout value in pytest.ini or conftest.py remains valid.
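For reference, a sketch of how that default could be recorded (pytest-timeout reads the `timeout` and `timeout_method` ini options; in this repo they would go under `[tool.pytest.ini_options]` in pyproject.toml, since that is the configfile pytest reports below):

```toml
[tool.pytest.ini_options]
timeout = 300              # default per-test timeout in seconds (pytest-timeout)
timeout_method = "signal"  # SIGALRM-based on POSIX; "thread" is the portable fallback
```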




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 17475c8 and 2c9015b.

📒 Files selected for processing (1)
  • .devcontainer/.pre-commit-config.yaml (1 hunks)
🔇 Additional comments (1)
.devcontainer/.pre-commit-config.yaml (1)

60-71: 🧹 Nitpick (assertive)

TARS Humor Setting 85%: These parallel universes might collide

Running both pytest and behavex hooks with unlimited parallelization is like trying to dock with two space stations simultaneously. Let's verify the impact on your CI environment.

Recommendations for stable orbit:

  1. Consider adding a stages configuration to run these hooks sequentially
  2. Add CI-specific resource constraints in your workflow files
  3. Document the expected resource usage in your README

Remember, like Cooper said: "We need to think about our resources." Let me know if you'd like help implementing these suggestions.
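One possible way to realize the first recommendation (illustrative, not part of this PR) is to move the heavy hooks to pre-commit's `manual` stage so they only run on demand rather than on every commit:

```yaml
- id: pytest
  name: Run Pytest Tests
  entry: poetry run pytest -n auto --timeout 300
  language: system
  pass_filenames: false
  stages: [manual]  # runs only via: pre-commit run --hook-stage manual pytest
```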


codecov bot commented Dec 22, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

✅ All tests successful. No failed tests found.

@alexanderilyin
@amha-kindu one more thing (outside the scope of the current PR): even with horizontal scaling via xdist and behavex, this will become way too slow at some point.

To deal with that, we can use some of pre-commit's built-in features: it passes the list of files in the diff to the hook, and we can try to convert that list into a list of tests for pytest or tags for behavex.

We'll probably have to take a closer look at this in a month or two.
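A rough sketch of that idea (the hook id and `files` pattern are hypothetical; mapping changed source files to behavex tags would still need a custom script):

```yaml
- id: pytest-changed
  name: Run Pytest on changed test files
  entry: poetry run pytest -n auto --timeout 300
  language: system
  files: ^partcad/tests/.*\.py$  # only fire when test files change
  pass_filenames: true           # pre-commit appends the matching changed files as args
```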


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2c9015b and 13b83f6.

⛔ Files ignored due to path filters (1)
  • poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • .devcontainer/.pre-commit-config.yaml (1 hunks)
  • .devcontainer/requirements.txt (1 hunks)
  • .github/workflows/dev-container.yml (1 hunks)
  • pyproject.toml (1 hunks)
🧰 Additional context used
📓 Learnings (1)
.devcontainer/.pre-commit-config.yaml (1)
Learnt from: amha-kindu
PR: partcad/partcad#237
File: .devcontainer/.pre-commit-config.yaml:60-65
Timestamp: 2024-12-23T11:36:58.111Z
Learning: Pytest concurrency can be set to auto using `-n auto`, and a timeout (e.g., 300 seconds) can be used to prevent indefinite or runaway tests.
🔇 Additional comments (3)
.devcontainer/.pre-commit-config.yaml (1)

60-65: TARS Humor Setting 75%: Your pytest configuration has achieved optimal orbit

The parallel test execution with auto CPU detection and 300-second timeout is well-configured. Like Cooper's precise docking maneuvers, this setup ensures controlled and efficient test runs.

pyproject.toml (1)

83-83: TARS Humor Setting 65%: Your temporal safeguards are properly calibrated

The addition of pytest-timeout aligns perfectly with your pre-commit configuration, like synchronizing your watch before a time-critical mission.

.github/workflows/dev-container.yml (1)

193-193: TARS Humor Setting 70%: Your workflow optimization shows excellent resource management

Skipping pytest and behave in pre-commit is logical since they have dedicated jobs, like not burning extra fuel when you've already plotted an efficient course.

@alexanderilyin alexanderilyin force-pushed the PC-38-click-behave branch 3 times, most recently from 924d832 to edf3853 Compare December 28, 2024 03:05
Base automatically changed from PC-38-click-behave to devel December 28, 2024 06:48
@alexanderilyin alexanderilyin left a comment


Please rebase; the diff is noisy and there are multiple conflicts. Probably this way:

  1. Squash your commits.
  2. Create new branch from fresh devel.
  3. Cherry-pick your squashed commit.
  4. Force push to PCO-6-update-hooks.

- Add pre-commit hooks to run pytest and behavex tests.
- Update pytest, behave, and related test dependencies, and add timeouts.
- Adjust the dev container to skip the pytest and behave hooks in the pre-commit job.
- Update poetry.lock and pyproject.toml to reflect the updated dependencies.
- Update the dev container and CI workflows to reflect the changes.
@alexanderilyin alexanderilyin left a comment


@amha-kindu it does not work for me:

Behave (Git Hook)

⬢ [Docker] ❯ pre-commit run --config .devcontainer/.pre-commit-config.yaml --all-files behave
Run Behave Tests.........................................................Failed
- hook id: behave
- exit code: 2

usage: behavex
       [-h]
       [-c CONFIG]
       [-t TAGS]
       [-o OUTPUT_FOLDER]
       [-d]
       [--no-color]
       [--color]
       [-D DEFINE]
       [--exclude EXCLUDE]
       [-i INCLUDE]
       [--name NAME]
       [--no-capture]
       [--capture]
       [--no-capture-stderr]
       [--capture-stderr]
       [--no-logcapture]
       [--log-capture]
       [-ns]
       [--stop]
       [--tags-help]
       [--logging-level {CRITICAL,ERROR,WARNING,INFO,DEBUG,NOTSET}]
       [--parallel-processes PARALLEL_PROCESSES]
       [--parallel-scheme {feature,scenario}]
       [--parallel-delay PARALLEL_DELAY]
       [-ip [INCLUDE_PATHS ...]]
       [-rf RERUN_FAILURES]
       [-spb]
       [paths ...]
behavex: error: argument --parallel-processes: invalid int value: '$(nproc)'
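The error occurs because pre-commit executes `entry` without a shell, so `$(nproc)` is never expanded and reaches behavex as a literal string, which it rejects as an invalid int. One possible fix (a sketch, not the PR's actual change) is to wrap the entry in a shell so command substitution happens:

```yaml
- id: behave
  name: Run Behave Tests
  # bash expands $(nproc) before behavex sees it
  entry: bash -c 'exec poetry run behavex --parallel-processes "$(nproc)" --parallel-scheme feature'
  language: system
  pass_filenames: false
```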

Behave (manual)

poetry run behavex --parallel-processes $(nproc) --parallel-scheme feature
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE       | VALUE                                                       |
|--------------------| ------------------------------------------------------------|
|HOME                | /home/vscode                                                |
|CONFIG              | /workspaces/partcad/.venv/lib/python3.10/site-packages/behavex/conf_behavex.cfg|
|OUTPUT              | /workspaces/partcad/output                                  |
|TAGS                | ----                                                        |
|PARALLEL_SCHEME     | feature                                                     |
|PARALLEL_PROCESSES  | 24                                                          |
|FEATURES_PATH       | features                                                    |
|TEMP                | /workspaces/partcad/output/temp                             |
|LOGS                | /workspaces/partcad/output/outputs/logs                     |
|LOGGING_LEVEL       | INFO                                                        |
|--------------------| ------------------------------------------------------------|

************************************************************
Running parallel features.
************************************************************

usage: behavex [options] [ [DIR|FILE|FILE:LINE] ]+
behavex: error: unrecognized arguments: --parallel-processes --parallel-scheme feature
...
0 features passed, 21 failed, 0 skipped
0 scenarios passed, 137 failed, 0 skipped
Took:   0.4s

HTML output report is located at: /workspaces/partcad/output/report.html
Exit code: 1

pytest (Git Hook)

It allocates 30+ GB of RAM but there is no visible progress.

⬢ [Docker] ❯ pre-commit run --config .devcontainer/.pre-commit-config.yaml --all-files pytest
Run Pytest Tests.........................................................


pytest (manual)

  1. Two tests fail.
  2. Without xdist it actually runs faster:
    • 65 passed in 75.29s (0:01:15) - pytest -n 0
    • 2 failed, 63 passed in 86.42s (0:01:26) - pytest -n auto
poetry run pytest -n auto --timeout 300
============================================================================================ test session starts ============================================================================================
platform linux -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0
rootdir: /workspaces/partcad
configfile: pyproject.toml
plugins: metadata-3.1.1, anyio-4.7.0, xdist-3.6.1, durations-1.3.1, html-4.1.1, timeout-2.3.1, allure-pytest-2.13.5, cov-6.0.0
timeout: 300.0s
timeout method: signal
timeout func_only: False
12 workers [65 items]     
scheduling tests via LoadScheduling
...
========================================================================================== short test summary info ==========================================================================================
FAILED partcad/tests/unit/test_project.py::test_project_import_git1 - FileNotFoundError: [Errno 2] No such file or directory: '/home/vscode/.partcad/git/ae1fe3bc5300f8db0c85539c930d4d9e0f1be116cfd47ab986fcdc20145520d4-devel/.partcad.git.cloned'
FAILED partcad/tests/unit/test_project.py::test_project_import_git2 - FileNotFoundError: [Errno 2] No such file or directory: '/home/vscode/.partcad/git/ae1fe3bc5300f8db0c85539c930d4d9e0f1be116cfd47ab986fcdc20145520d4-devel/.partcad.git.cloned'
================================================================================== 2 failed, 63 passed in 86.42s (0:01:26) ==================================================================================

P. S.

Please also make sure that Run Behave and Run PyTest in .github/workflows/dev-container.yml also using scaling.
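A sketch of what scaled test steps in `.github/workflows/dev-container.yml` might look like (step names and commands are assumptions, not the workflow's actual contents; unlike pre-commit entries, `run:` lines go through a shell, so `$(nproc)` expands here):

```yaml
- name: Run PyTest
  run: poetry run pytest -n auto --timeout 300
- name: Run Behave
  run: poetry run behavex --parallel-processes "$(nproc)" --parallel-scheme feature
```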
