remove obsolete test skip #235
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…)
@rgommers @charris @r-devulap @seiko2plus Not sure why this is not consistent across python versions. Test suite error log: (…)
@h-vetinari Can you confirm if numpy/numpy#18933 is part of the build? I would think it should be...
Hey @r-devulap, thanks for stopping by! As I wrote above, I checked that numpy/numpy#18933 was part of 1.21.0rc1, as can be seen from numpy/numpy@cd73ab7 and https://github.com/numpy/numpy/commits/v1.21.0rc1/numpy/core/src/umath/loops_exponent_log.dispatch.c.src. Since we're verifying the hash of the source tar for 1.21.0rc1, I'm 99.9% sure that it is included already.
Actually, I just noticed that the release artefact changed (since the hashes don't match anymore). Let's try again and see if it works now.
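For reference, a minimal sketch of the kind of hash check conda-build performs on the downloaded source tarball (the file name and the pinned value below are placeholders, not the real 1.21.0rc1 ones):
# Hedged illustration: compute the sha256 of a downloaded tarball and compare
# it against the value pinned in the recipe. Path and expected hash are placeholders.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# e.g. print(sha256_of("v1.21.0rc1.tar.gz")) and compare with the sha256 in meta.yaml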
The error persists on py37 & py39. To triple-check, I also downloaded https://github.com/numpy/numpy/archive/refs/tags/v1.21.0rc1.tar.gz and verified that numpy/numpy#18933 is contained in the release.
Could you execute the following code before running the tests, so we can determine what kind of CPU features we're dealing with and also check the current NumPy tag?
python -c "import numpy; numpy._pytesttester._show_numpy_info()"
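For context, a rough sketch of the kind of information that helper prints, using introspectable attributes (assumption: __cpu_features__ lives in numpy.core._multiarray_umath for NumPy 1.20/1.21; the real helper's output format differs):
# Hedged approximation of _show_numpy_info(), not the actual helper.
import numpy as np
from numpy.core._multiarray_umath import __cpu_features__  # NumPy >= 1.20

print("NumPy version:", np.__version__)
enabled = sorted(name for name, on in __cpu_features__.items() if on)
print("Enabled CPU features:", ", ".join(enabled))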
So, on python 3.7, the addition of the line requested by @seiko2plus made the error go away (not sure if this run got a differently-specced CI agent or if this is a heisenbug), here's the output:
On python 3.9, the failure remained (with the following):
The difference seems to be …
It makes sense now that the error disappears when the machine has AVX512 support; it seems we are dealing with a compiler/libc that doesn't respect the underflow error, since the current exp/f64 SIMD kernel only optimizes CPUs with AVX512 Skylake features. I'm not sure what exactly you're doing here, since I see you requesting the compiler to tune the whole lib for Haswell CPUs through the env var CFLAGS="-mtune=haswell". However, could you try the following patch?
index 41e0bf37b..89798f74a 100644
--- a/numpy/core/src/umath/loops_exponent_log.dispatch.c.src
+++ b/numpy/core/src/umath/loops_exponent_log.dispatch.c.src
@@ -1248,6 +1248,7 @@ NPY_NO_EXPORT void NPY_CPU_DISPATCH_CURFX(FLOAT_@func@)
/**begin repeat
* #func = exp, log#
* #scalar = npy_exp, npy_log#
+ * #underflow = 1, 0#
*/
NPY_NO_EXPORT void NPY_CPU_DISPATCH_CURFX(DOUBLE_@func@)
(char **args, npy_intp const *dimensions, npy_intp const *steps, void *NPY_UNUSED(data))
@@ -1260,6 +1261,13 @@ NPY_NO_EXPORT void NPY_CPU_DISPATCH_CURFX(DOUBLE_@func@)
#endif
UNARY_LOOP {
const npy_double in1 = *(npy_double *)ip1;
+ #if @underflow@
+ if (NPY_UNLIKELY(!npy_isnan(in1) && in1 < -0x1.74910d52d3053p+9)) {
+ *(npy_double *)op1 = 0;
+ npy_set_floatstatus_underflow();
+ continue;
+ }
+ #endif
*(npy_double *)op1 = @scalar@(in1);
}
}
If the patch passes the test, then we will need to determine which versions of glibc are affected by this bug.
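To make the expectation concrete, here is a hedged, standalone illustration (not the upstream test itself) of the behaviour the patch is meant to restore: exp of a double below the -0x1.74910d52d3053p+9 threshold (≈ -745) should return 0 and set the underflow floating-point status.
# Standalone illustration of the underflow expectation discussed above;
# this is not the numpy test suite, just a sketch.
import numpy as np

with np.errstate(under='raise'):
    try:
        np.exp(np.float64(-1000.0))   # well below the ~-745 threshold
        print("underflow status was NOT set -- the failure mode discussed above")
    except FloatingPointError:
        print("underflow status was set, as expected")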
I don't know where this is coming from, the …
Could this be a similar issue to numpy/numpy#15179? This test is currently being skipped here (after I verified it still fails), but it does pass with the following patch (suggested by @r-devulap):
diff --git a/numpy/core/tests/test_umath.py b/numpy/core/tests/test_umath.py
index 9d1b13b53..faa2ca8f0 100644
--- a/numpy/core/tests/test_umath.py
+++ b/numpy/core/tests/test_umath.py
@@ -1188,8 +1188,6 @@ def test_sincos_float32(self):
M = np.int_(N/20)
index = np.random.randint(low=0, high=N, size=M)
x_f32 = np.float32(np.random.uniform(low=-100.,high=100.,size=N))
- # test coverage for elements > 117435.992f for which glibc is used
- x_f32[index] = np.float32(10E+10*np.random.rand(M))
x_f64 = np.float64(x_f32)
assert_array_max_ulp(np.sin(x_f32), np.float32(np.sin(x_f64)), maxulp=2)
assert_array_max_ulp(np.cos(x_f32), np.float32(np.cos(x_f64)), maxulp=2)
The comment also mentions glibc, which is why I'm asking...
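For readers without the test file at hand, a hedged, self-contained sketch of the check in question (simplified from test_sincos_float32; the array size and the lack of a fixed seed are simplifications, and the comment on the glibc fallback reflects the discussion above rather than the exact upstream wording):
# Simplified, standalone version of the sin/cos ULP comparison; inputs beyond
# the SIMD kernels' range fall back to glibc, which is where an old glibc
# (e.g. 2.12 on CentOS 6) can exceed the 2-ULP tolerance.
import numpy as np
from numpy.testing import assert_array_max_ulp

N = 1000
x_f32 = np.float32(np.random.uniform(low=-100., high=100., size=N))
x_f64 = np.float64(x_f32)
assert_array_max_ulp(np.sin(x_f32), np.float32(np.sin(x_f64)), maxulp=2)
assert_array_max_ulp(np.cos(x_f32), np.float32(np.cos(x_f64)), maxulp=2)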
With the patch, it passed.
Any follow-up you'd like me to do? Open an issue in numpy?
Ping @seiko2plus @r-devulap: For 1.21.0 (once it arrives), should we skip the test? Carry your patch? Something else?
Opened an issue: numpy/numpy#19192
Note: the last commits were to test whether numpy/numpy#19192 disappears in the face of a newer glibc (2.17 instead of 2.12), and indeed, this is the case. Interestingly, there seems to be a non-timeout failure on pypy3.7+aarch; not sure what caused it.
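For anyone trying to reproduce the glibc comparison locally, a hedged sketch for checking which glibc the interpreter is linked against (platform.libc_ver() can come back empty on some builds, hence the ctypes fallback; only meaningful on glibc-based Linux):
# Report the glibc version of the running interpreter.
import ctypes
import platform

print("platform.libc_ver():", platform.libc_ver())
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print("gnu_get_libc_version():", libc.gnu_get_libc_version().decode())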
@conda-forge/numpy @conda-forge/core In the upstream issue uncovered by testing the rc, the question came up whether this recipe could move to CentOS 7 already. Any thoughts on that?
I think we should in general move conda-forge to …
Found it: conda-forge/conda-forge.github.io#1436
This was mostly superseded by #236, but for reasons of skip-hygiene, we should still remove the skip that has become obsolete in 1.21.0 (because numpy is currently xfailing it itself).
for reasons of skip-hygiene, we should still remove the skip that has become obsolete in 1.21.0 (because numpy is currently xfailing it itself).
Agreed.
All jobs are green except for one timeout, so in it goes. Thanks @h-vetinari
Builds on #234