CI: Initial additions for fp32 and windows GPU test support #1778
```diff
@@ -66,7 +66,8 @@ def test_dense_self_rbf_kernel(queue):
     result = rbf_kernel(X, queue=queue)
     expected = sklearn_rbf_kernel(X)
 
-    assert_allclose(result, expected, rtol=1e-14)
+    tol = 1e-5 if result.dtype == np.float32 else 1e-14
+    assert_allclose(result, expected, rtol=tol)
 
 
 def _test_dense_small_rbf_kernel(queue, gamma, dtype):
```

Review comment: Yikes, a 10^9 loosening of the tolerance. Would this also warrant an investigation?

Reply: I am not sure why 1e-14 was used here specifically; generally we don't go more specific than 1e-7. For rtol, 1e-5 should be acceptable, as this is a typical threshold used in other fp32 testing.
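Since this dtype-conditional tolerance pattern recurs across the fp32 test changes, it could be factored into a small helper. The sketch below is only illustrative; `rtol_for` is a hypothetical name, not an existing helper in scikit-learn-intelex, and the default tolerances simply mirror the values discussed above:

```python
import numpy as np
from numpy.testing import assert_allclose


def rtol_for(array, fp32_rtol=1e-5, fp64_rtol=1e-14):
    """Hypothetical helper: pick a relative tolerance from the result's
    floating-point width, so fp32 runs reuse the same assertions as fp64."""
    return fp32_rtol if array.dtype == np.float32 else fp64_rtol


# Usage mirroring the diff above: an fp32 result compared against an
# fp64 reference passes with the looser fp32 tolerance.
result = np.array([1.0, 2.0, 3.0], dtype=np.float32) * np.float32(0.1)
expected = np.array([0.1, 0.2, 0.3])
assert_allclose(result, expected, rtol=rtol_for(result))
```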
Review comment: What was the logic behind the desired accuracy -1?

Reply: It matches a minor threshold change added to test_kmeans (https://github.com/intel/scikit-learn-intelex/blob/main/onedal/cluster/tests/test_kmeans.py#L87). Not sure exactly how the desired_accuracy was set, but it seems the threshold doesn't necessarily match up with actual performance.

Reply: I see, the kmeans++ init issue may get fixed with the changes proposed here: uxlfoundation/oneDAL#2796. The onedal code uses 1 trial at the moment.
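For context on the trials discussion: scikit-learn's greedy k-means++ seeding evaluates several candidate centers per step (`n_local_trials = 2 + int(np.log(n_clusters))`), while a single trial reduces to classic D² sampling, which tends to give noisier initializations. The sketch below illustrates the difference under the assumption of dense NumPy input; it is not the onedal or scikit-learn implementation:

```python
import numpy as np


def kmeans_plusplus_init(X, n_clusters, n_local_trials=None, rng=None):
    """Minimal k-means++ seeding sketch (illustrative only).

    With the scikit-learn-style default, several candidates are tried per
    step and the one minimizing the total potential is kept (greedy
    variant); with n_local_trials=1 this degrades to single-trial D^2
    sampling, which is what the onedal code reportedly uses at the moment.
    """
    rng = np.random.default_rng(rng)
    n_samples = X.shape[0]
    if n_local_trials is None:
        n_local_trials = 2 + int(np.log(n_clusters))

    centers = np.empty((n_clusters, X.shape[1]), dtype=X.dtype)
    centers[0] = X[rng.integers(n_samples)]
    # Squared distance of every point to its closest chosen center so far.
    closest_sq = ((X - centers[0]) ** 2).sum(axis=1)

    for c in range(1, n_clusters):
        # Sample candidate centers proportionally to the D^2 distribution.
        probs = closest_sq / closest_sq.sum()
        candidates = rng.choice(n_samples, size=n_local_trials, p=probs)
        # Greedy step: keep the candidate that lowers total potential most.
        best_pot, best_idx, best_sq = None, None, None
        for idx in candidates:
            sq = np.minimum(closest_sq, ((X - X[idx]) ** 2).sum(axis=1))
            pot = sq.sum()
            if best_pot is None or pot < best_pot:
                best_pot, best_idx, best_sq = pot, idx, sq
        centers[c] = X[best_idx]
        closest_sq = best_sq
    return centers


# Comparing greedy seeding against the single-trial variant:
X = np.random.default_rng(0).normal(size=(200, 2))
greedy = kmeans_plusplus_init(X, 5, rng=0)
single = kmeans_plusplus_init(X, 5, n_local_trials=1, rng=0)
```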