Fix `-vv` overriding `--durations-min` (#12938) #13100

Conversation
7e06d44 to 7e5c1b6
        tested = "3"
        for x in tested:
            for y in ("call",):  # 'setup', 'call', 'teardown':
Not sure what the rationale for this comment is. It was added with `--durations` in 3b9fd3a, 13 years ago. I removed it.
        for x in "123":
            for y in ("call",):  # 'setup', 'call', 'teardown':
Looking for `"call"` is not sufficient: the output in question contains `test_calls_showall_verbose.py::test_X PASSED` (e.g.) for every test number X, which contains `call` as well, even when the call entry is missing from the durations list. Added a space before and after `call` to fix this.
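For illustration, a minimal sketch of the false positive (the line contents are made up to match the description above, not copied from the real output):

```python
# Verbose output mixes progress lines and durations entries; a bare substring
# check for "call" matches both, while " call " only matches the durations entry.
lines = [
    "test_calls_showall_verbose.py::test_1 PASSED",           # progress line
    "0.10s call     test_calls_showall_verbose.py::test_3",   # durations entry
]

assert all("call" in line for line in lines)                   # too permissive
assert [" call " in line for line in lines] == [False, True]   # only the entry matches
```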
        tested = "3"
        for x in tested:
This only checks whether test 3 is in the output, but not whether the other tests are absent. Fixed this.
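In other words, the stricter check asserts absence as well as presence, roughly like this (illustrative values only, not the real captured output):

```python
# Hypothetical durations section after filtering; only test 3 should remain.
durations_lines = ["0.30s call     test_calls.py::test_3"]
output = "\n".join(durations_lines)

assert "test_3" in output      # the slow test is reported
assert "test_1" not in output  # the filtered-out tests are not
assert "test_2" not in output
```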
            break
        else:
            raise AssertionError(f"not found {x} {y}")

    def test_calls_showall_durationsmin(self, pytester: Pytester, mock_timing) -> None:
A test for `--durations-min` was missing. Added one.
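A rough sketch of what such a test might look like, assuming the pytester fixture and illustrative sleep times and thresholds (the PR's real test also uses the mock_timing fixture and the existing example module):

```python
def test_calls_showall_durationsmin(pytester) -> None:
    # Sketch: run two tests and check that only the one above the
    # --durations-min threshold shows up in the durations section.
    pytester.makepyfile(
        """
        import time

        def test_fast():
            pass

        def test_slow():
            time.sleep(0.2)
        """
    )
    result = pytester.runpytest("--durations=10", "--durations-min=0.05")
    assert result.ret == 0
    output = "\n".join(result.stdout.lines)
    assert "test_slow" in output      # above the threshold, reported
    assert "test_fast" not in output  # below the threshold, hidden
```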
        assert result.ret == 0
        TestDurations.check_tests_in_output(result.stdout.lines, "3")

    def test_calls_showall_durationsmin_verbose(
This is the actual test that fails without the change in `src/_pytest/runner.py`.
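Presumably the verbose variant differs mainly by adding -vv; a hedged sketch (again not the PR's actual code):

```python
def test_calls_showall_durationsmin_verbose(pytester) -> None:
    # Same setup as the sketch above, but with -vv the progress lines mention
    # every test name, so the check is restricted to " call " duration lines.
    pytester.makepyfile(
        """
        import time

        def test_fast():
            pass

        def test_slow():
            time.sleep(0.2)
        """
    )
    result = pytester.runpytest("-vv", "--durations=10", "--durations-min=0.05")
    assert result.ret == 0
    call_lines = [line for line in result.stdout.lines if " call " in line]
    # With the fix, --durations-min is respected even under -vv.
    assert any(line.endswith("test_slow") for line in call_lines)
    assert not any(line.endswith("test_fast") for line in call_lines)
```

Per the PR title, -vv previously caused every duration to be shown regardless of --durations-min, so the last assertion would fail without the fix.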
testing/acceptance_test.py (Outdated)
        TestDurations.check_tests_in_output(result.stdout.lines, "3")

    @staticmethod
    def check_tests_in_output(lines: Sequence[str], expected_test_numbers: str) -> None:
Reduced code duplication by refactoring common code into a separate function.
I'd prefer to take `expected_test_numbers: Collection[int]`, or better yet `*expected_test_numbers: int`, then check for more than just 1/2/3 in the condition below, and finally assert a comparison between e.g. lists.
Wanted to have a somewhat minimal diff, that's why I used strings as well. Changed it to `*expected_test_numbers: int`.

> check for more than just 1/2/3 in the condition below

I don't see how we could do this. As mentioned in #13100 (comment), the test now also checks whether the other tests are not in the output. For this, we need to know how many tests there are.
Right, so I'd make `number_of_tests: int` an argument to the helper function rather than a constant defined in the body. That way it works for cases where there are other numbers of tests too 🙂
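Put together, the suggested helper might look roughly like this (a sketch of the suggestion, not the code that was merged; in the PR it lives as a @staticmethod on TestDurations):

```python
from typing import Sequence


def check_tests_in_output(
    lines: Sequence[str],
    *expected_test_numbers: int,
    number_of_tests: int,
) -> None:
    # Collect which test numbers appear on a durations "call" line, then compare
    # against the expected set so that missing *and* unexpected tests both fail.
    found = {
        number
        for number in range(1, number_of_tests + 1)
        for line in lines
        if f"test_{number}" in line and " call " in line
    }
    assert found == set(expected_test_numbers)
```

Called e.g. as `check_tests_in_output(result.stdout.lines, 3, number_of_tests=3)`; note that the plain substring check here still has the test_1 / test_10 pitfall addressed below.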
Just a few things I noticed. Not a complete review.
d8ccca7 to a811765
The change itself and your fixes to the tests look good to me - thanks! One comment below about clarifying the helper function, and then I think we're ready to merge 🙂
Use `endswith()` because the code would break for `number_of_tests >= 10` (as `test_10` also contains `test_1`).
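For illustration (duration line format assumed):

```python
# "test_1" is a substring of "test_10", so a plain `in` check gives a false
# positive once there are ten or more tests; endswith() does not.
line = "0.30s call     test_mod.py::test_10"
assert "test_1" in line              # false positive with a substring check
assert not line.endswith("test_1")   # endswith() tells test_1 and test_10 apart
assert line.endswith("test_10")
```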
The tests fail, but the runs of other PRs fail as well, so this is probably unrelated. `acceptance_test` succeeds on my machine.
Once the fix is in and this PR gets rebased, hopefully things are all green again!
Thanks again @JulianJvn!
Fix `--durations-min` argument not respected if `-vv` is used. Fixes #12938.
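For context, a rough sketch of the behavioral change (names and structure are assumed; this is not the actual src/_pytest/runner.py code): previously full verbosity showed every duration entry even when --durations-min was passed explicitly, whereas with the fix an explicitly given threshold is honored.

```python
def entries_to_show(durations, *, durations_min, verbose, durations_min_explicit):
    # Sketch of the fixed behavior: -vv only bypasses the threshold when
    # --durations-min was left at its default, not when the user set it.
    show_all = verbose >= 2 and not durations_min_explicit
    return [d for d in durations if show_all or d >= durations_min]


# With -vv and an explicit --durations-min=0.05, the 0.01s entry is now hidden:
assert entries_to_show(
    [0.30, 0.01], durations_min=0.05, verbose=2, durations_min_explicit=True
) == [0.30]
```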