
np.linalg.svd gives weird output #13401

Closed

mortonjt opened this issue Apr 24, 2019 · 35 comments

@mortonjt commented Apr 24, 2019

It appears that numpy.linalg.svd gives different outputs under 1.15.4 and 1.16.3 -- and I don't see any hint in the changelog that this should happen.

Reproducing code example:

Running numpy==1.15.4 with the following input gives:

In [20]: np.random.seed(0)

In [21]: X = np.random.randn(100, 100) * 10

In [22]: u, s, v = np.linalg.svd(X)

In [23]: s[:5]
Out[23]:
array([193.6995948 , 183.01489365, 180.94983595, 178.47342134,
       174.07146491])

In [24]: u[:5, :5]
Out[24]:
array([[-0.12777193,  0.1473861 , -0.04626332,  0.09269663,  0.12149756],
       [-0.23535239,  0.06543527, -0.02588301, -0.05401032, -0.02436388],
       [ 0.09956489, -0.07108259, -0.10083294,  0.16710062, -0.0333519 ],
       [-0.05718053, -0.11184903, -0.12280465,  0.04707958,  0.16647603],
       [ 0.25422798,  0.01464547, -0.15071626, -0.03225463, -0.12620621]])

In [25]: np.mean(X - u @ np.diag(s) @ v)
Out[25]: 1.336599234069702e-16

In [26]: np.std(X - u @ np.diag(s) @ v)
Out[26]: 2.0947185697750298e-14

However, running svd with numpy==1.16.3 gives:

In [13]: np.random.seed(0)

In [14]: X = np.random.randn(100, 100) * 10

In [15]: u, s, v = np.linalg.svd(X)

In [16]: s[:5]
Out[16]:
array([193.6995948 , 183.01489365, 180.94983595, 178.47342134,
       174.07146491])

In [17]: u[:5, :5]
Out[17]:
array([[-0.07680886,  0.17809124, -0.41179481,  0.14073815,  0.14494919],
       [ 0.46612298, -0.38573887,  0.34177276, -0.11314353, -0.03170648],
       [-0.62584307,  0.10539512,  0.20991414, -0.24513665, -0.11505925],
       [-0.43937521, -0.07100245,  0.2941617 , -0.03515797,  0.04093411],
       [ 0.00359183, -0.64178218,  0.69484207, -0.09871542,  0.28272959]])

In [18]: np.mean(X - u @ np.diag(s) @ v)
Out[18]: 1.0356975839816995

In [19]: np.std(X - u @ np.diag(s) @ v)
Out[19]: 92.84521345330656
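
As an aside for anyone reproducing this: taking np.mean of the residual can understate the error through sign cancellation, so a max-abs check gives a sharper pass/fail signal. A minimal sketch, equivalent to the checks above:

import numpy as np

np.random.seed(0)
X = np.random.randn(100, 100) * 10
u, s, v = np.linalg.svd(X)
# (u * s) @ v equals u @ np.diag(s) @ v for this square case, without
# building the diagonal matrix; a healthy LAPACK/BLAS gives a residual
# near machine epsilon (~1e-13 here), the broken build gives ~1e2.
print(np.abs(X - (u * s) @ v).max())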

Numpy/Python version information:

numpy version 1.15

In [18]: np.__config__.show()
mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/include']
blas_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/include']
blas_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/include']
lapack_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/include']
lapack_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/jmorton/miniconda3/envs/qiime2-2019.1/include']
In [27]:  import sys, numpy; print(numpy.__version__, sys.version)
1.15.4 3.6.5 | packaged by conda-forge | (default, Apr  6 2018, 13:44:09)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)]

numpy version 1.16

In [3]: np.__config__.show()
blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]

In [20]:  import sys, numpy; print(numpy.__version__, sys.version)
1.16.3 3.6.7 | packaged by conda-forge | (default, Feb 28 2019, 02:16:08)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]

As you can see, the output from 1.16.3 is broken: the SVD factors no longer reconstruct the input.

@charris (Member) commented Apr 24, 2019

This is probably a conda issue related to MKL. Can you check whether you are linked to MKL? You should be able to see that with numpy.__config__.show(). BTW, it helps for reproducing these examples if you explicitly seed the random generator.

@mortonjt (Author) commented Apr 25, 2019

@charris that would make a lot of sense, except the 1.16.3 version was pip-installed (within a conda environment) -- config is attached. The examples have been updated with random seeds (sorry about that...).

@mortonjt (Author)

It also looks like SciPy is having similar issues:

In [12]: from scipy.linalg import svd

In [16]: np.random.seed(0)

In [17]: X = np.random.randn(100, 100) * 10

In [18]: u, s, v = svd(X)

In [19]: np.mean(X - u @ np.diag(s) @ v)
Out[19]: 1.0356975839819398

In [20]: np.std(X - u @ np.diag(s) @ v)
Out[20]: 92.84521345330933
In [22]: import scipy

In [23]: scipy.__version__
Out[23]: '1.2.1'

@charris (Member) commented Apr 25, 2019

I can't duplicate the problem on Linux with pip-installed 1.16.3. @eric-wieser Do you see this on Windows? @mortonjt Are you running 32- or 64-bit Python?

@charris (Member) commented Apr 25, 2019

Note that pip and conda do not play well together. It would be helpful if you could duplicate the results in a non-conda environment.

@mattip (Member) commented Apr 25, 2019

The 1.15.4 results are using MKL; the 1.16.3 results are without any acceleration. They also use a different clang and Python:

1.15.4 3.6.5 | packaged by conda-forge | (default, Apr  6 2018, 13:44:09)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)]

The 1.16.3 results use a different clang:

1.16.3 3.6.7 | packaged by conda-forge | (default, Feb 28 2019, 02:16:08)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]

@mortonjt (Author)

@charris, I was able to reproduce the results with just pip, outside of my conda environment:

>>> import numpy as np
>>> np.__version__
'1.16.3'
>>> np.random.seed(0)
>>> X = np.random.randn(100, 100) * 10
>>> u, s, v = np.linalg.svd(X)
>>> np.mean(X - u @ np.diag(s) @ v)
1.0356975839816995
>>> np.std(X - u @ np.diag(s) @ v)
92.84521345330656
>>> np.__config__.show()
blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
>>> import sys, numpy; print(numpy.__version__, sys.version)
1.16.3 3.7.3 (default, Mar 27 2019, 09:23:39)
[Clang 10.0.0 (clang-1000.11.45.5)]

@mattip - I don't think it is compiler-specific, because in that same environment with numpy==1.15.4 I get correct results:

>>> import numpy as np
>>> np.random.seed(0)
>>> X = np.random.randn(100, 100) * 10
>>> u, s, v = np.linalg.svd(X)
>>> np.mean(X - u @ np.diag(s) @ v)
1.0218109995283053e-16
>>> np.std(X - u @ np.diag(s) @ v)
2.1302064719026226e-14
>>> np.__config__.show()
blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
>>> import sys, numpy; print(numpy.__version__, sys.version)
1.15.4 3.7.3 (default, Mar 27 2019, 09:23:39)
[Clang 10.0.0 (clang-1000.11.45.5)]

@charris (Member) commented Apr 26, 2019

Looks like it is probably an OpenBLAS issue on Windows. We have a different OpenBLAS version for the forthcoming 1.17, and there are wheels for that at https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a83.ssl.cf2.rackcdn.com/, although the most recent Windows version seems to be from April 2. If you could try that as well, it would be helpful.

@mortonjt (Author) commented Apr 26, 2019

@charris the errors above were run on a Mac.

But I can confirm that this error does not arise on Linux - see the output below.

In [1]: import numpy as np

In [2]: np.random.seed(0)

In [3]: X = np.random.randn(100, 100) * 10

In [4]: u, s, v = np.linalg.svd(X)

In [5]: np.mean(X - u @ np.diag(s) @ v)
Out[5]: 9.420362710385599e-17

In [6]: np.std(X - u @ np.diag(s) @ v)
Out[6]: 2.086820837986988e-14

In [7]: np.__config__.show()
blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]

In [8]: import sys, numpy; print(numpy.__version__, sys.version)
1.16.3 3.6.2 (default, Aug  2 2017, 14:29:03)
[GCC 5.4.0]

@charris (Member) commented Apr 26, 2019

Hmm. @tylerjereddy @matthew-brett Is there anything special about how OpenBLAS is compiled on the Mac? Compiler used, that sort of thing.

@matthew-brett (Contributor)

I believe it uses clang - flags etc. here: https://github.com/MacPython/openblas-libs/blob/master/travis-ci/build_steps.sh#L81

I wonder if it's an interaction with the OpenBLAS kernel. @mortonjt - can you try some different values for the OpenBLAS kernel, with:

export OPENBLAS_VERBOSE=2
export OPENBLAS_CORETYPE=haswell

in the shell, before starting Python?

My Mac laptop and Mac desktop give correct 1e-15ish results, choosing default values of 'haswell' and 'sandybridge' for the OpenBLAS kernel, respectively.

Examples of values for OPENBLAS_CORETYPE are in https://github.com/matthew-brett/manylinux-testing/blob/master/.travis.yml
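
If it is more convenient to stay inside Python, the same override can be set via os.environ, as long as it happens before NumPy is imported, since OpenBLAS reads these variables when the shared library is first loaded. A minimal sketch:

import os

# Must run before NumPy is imported: OpenBLAS reads these variables
# when the shared library is loaded into the process.
os.environ["OPENBLAS_VERBOSE"] = "2"
os.environ["OPENBLAS_CORETYPE"] = "haswell"

import numpy as np  # OpenBLAS now reports and uses the forced core type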

@tylerjereddy (Contributor)

For diagnostic purposes, with 1.16.3 you should also be able to probe the OpenBLAS config data directly. From Matti's code block:

import numpy, ctypes

# Load NumPy's compiled extension module as a shared library so the
# OpenBLAS symbols linked into it can be called directly.
dll = ctypes.CDLL(numpy.core._multiarray_umath.__file__)
get_config = dll.openblas_get_config
get_config.restype = ctypes.c_char_p  # the config string is a C char*
res = get_config()
print(res)

On Linux I get something like this: b'OpenBLAS 0.3.5.dev DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64'
I get the same config result on Mac, and I can't reproduce your original issue. Maybe providing us with that ctypes output would be helpful too.
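
An alternative, assuming the threadpoolctl package is installed and recent enough to report the OpenBLAS architecture (an extra dependency beyond what this thread uses), is to let it report the same information without ctypes; a minimal sketch:

import numpy as np  # ensure NumPy's OpenBLAS is loaded into the process
from threadpoolctl import threadpool_info

# threadpool_info() scans the loaded shared libraries and returns a list
# of dicts describing each BLAS/OpenMP library it finds.
for info in threadpool_info():
    if info.get("internal_api") == "openblas":
        # 'architecture' is the kernel OpenBLAS selected, e.g. 'SkylakeX'
        print(info.get("version"), info.get("architecture"))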

@matthew-brett (Contributor)

I can't get a failure on any kernel on my laptop or desktop:

#!/bin/bash
cat << EOF > np_check.py
import numpy as np
print(np.__version__)
np.random.seed(0)
X = np.random.randn(100, 100) * 10
u, s, v = np.linalg.svd(X)
print(np.mean(X - u @ np.diag(s) @ v))
print(np.std(X - u @ np.diag(s) @ v))
EOF
export OPENBLAS_CORETYPE
export OPENBLAS_VERBOSE=2
for kernel in penryn sandybridge prescott nehalem atom haswell; do
    echo Testing with kernel $kernel
    export OPENBLAS_CORETYPE=$kernel
    python np_check.py
done

With Tyler's script I get:

b'OpenBLAS 0.3.5.dev DYNAMIC_ARCH NO_AFFINITY Core2 MAX_THREADS=64'

(Laptop)

b'OpenBLAS 0.3.5.dev DYNAMIC_ARCH NO_AFFINITY Sandybridge MAX_THREADS=64'

(Desktop)

@matthew-brett (Contributor)

@mortonjt - what do you get from my script above?

@charris (Member) commented Apr 28, 2019

I wonder if there is another OpenBLAS library floating around? @mortonjt Could you do a file search for '*openblas*'?

@mortonjt (Author) commented Apr 29, 2019

@matthew-brett, I think you are on to something. When setting the flags,

export OPENBLAS_VERBOSE=2
export OPENBLAS_CORETYPE=haswell

I get the right answer. And when I run the script you provided, it gets the right answer every time.

Testing with kernel penryn
Core: Core2
1.16.3
3.666055848651484e-16
2.1619569755102583e-14
Testing with kernel sandybridge
Core: Sandybridge
1.16.3
4.3135485417941654e-16
2.2037472763482323e-14
Testing with kernel prescott
Core: Prescott
1.16.3
3.654898107254001e-16
2.162364554532177e-14
Testing with kernel nehalem
Core: Atom
1.16.3
1.424538330178915e-16
2.1591982659274054e-14
Testing with kernel atom
Core: Atom
1.16.3
1.424538330178915e-16
2.1591982659274054e-14
Testing with kernel haswell
Core: Haswell
1.16.3
1.0218109995283053e-16
2.1302064719026226e-14

So, I'm guessing I had a faulty openblas library floating around somewhere. Still not sure what is going on there, but I'm going to close this issue, since it looks like I have a working solution now.

Thank you @matthew-brett @charris @tylerjereddy for the detective work!

@matthew-brett (Contributor)

@mortonjt - just to check - what do you get after:

export OPENBLAS_VERBOSE=2
unset OPENBLAS_CORETYPE
python np_check.py

What do you get for Tyler's script? It would be good to be sure what the original problem was.

@mortonjt (Author)

Below are the results for np_check.py:

python3 np_check.py
Core: SkylakeX
1.16.3
1.0356975839816995
92.84521345330656

And here are the results for Tyler's script:

>>> import numpy, ctypes, os
Core: SkylakeX
>>> dll = ctypes.CDLL(numpy.core._multiarray_umath.__file__)
>>> get_config = dll.openblas_get_config
>>> get_config.restype=ctypes.c_char_p
>>> res = get_config()
>>> res
b'OpenBLAS 0.3.5.dev DYNAMIC_ARCH NO_AFFINITY SkylakeX MAX_THREADS=64'

@matthew-brett (Contributor)

OK - so I suppose this must be an error in the OpenBLAS SkylakeX kernel, which I can't test; I don't have the hardware.

Are you running Linux on the same machine? Can you test on Linux with:

export OPENBLAS_VERBOSE=2
export OPENBLAS_CORETYPE=skylakex
python np_check.py

?

@mortonjt (Author)

I'm running Linux on another machine, and it doesn't seem to have the OpenBLAS SkylakeX kernel.

@charris (Member) commented Apr 29, 2019

There is a reported Skylake OpenBLAS bug that should be fixed in 0.3.5, so maybe this is related.

@matthew-brett (Contributor)

Chuck - which bug were you thinking of? I was just looking at OpenMathLib/OpenBLAS#1955. I think the relevant kernel is being disabled in OpenBLAS 0.3.6 - OpenMathLib/OpenBLAS#2061.

@charris (Member) commented Apr 29, 2019

@matthew-brett I didn't have a particular one in mind, but I note that one of the issues referenced in your link involves pinv (Julia), so that is probably SVD-related also.

@charris (Member) commented Apr 29, 2019

Sounds like we want OpenBLAS 0.3.6 until the Skylake bugs are ironed out, probably for NumPy 1.16.4 also.

@tylerjereddy (Contributor)

@stefanv was asking about unit testing for this issue. I suspect we do have tests that would fail here (perhaps many things in linalg?), but they don't fail in CI because a specific CPU architecture is required to induce the failure. Any creative way we could detect the issue in CI with tests, and is that worth the effort?

@rgommers It looks like this means that SciPy 1.2.1 / 1.3.x are hosed for some linear algebra calculations on Skylake, since the user report above is for 1.2.1, and I don't think I've bumped OpenBLAS for the 1.3.x series (yet, anyway). Proof / regression guards with tests would also be better for SciPy, of course.

But do we really want to invest effort in generating creative tests that somehow guard against regressions specific to N architectures upstream?
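
For concreteness, a regression guard of the kind discussed here might look like the following sketch (an illustration under pytest conventions, not an actual NumPy or SciPy test):

import numpy as np

def test_svd_reconstruction():
    # Regression guard for the SkylakeX DGEMM bug: on a healthy BLAS the
    # reconstruction residual is ~1e-14, while the broken kernel gave ~1e2.
    rng = np.random.RandomState(0)
    X = rng.randn(100, 100) * 10
    u, s, vh = np.linalg.svd(X)
    assert np.abs(X - (u * s) @ vh).max() < 1e-10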

@matthew-brett (Contributor)

We can try to run specific BLAS tests with different architectures, as set by the OPENBLAS_CORETYPE variable, and that seems like a good idea to me - but we can only test the architectures supported by the CI virtual machines, which might not include SkylakeX, for example. We could also ask people with specific hardware to test for us before release.

@rgommers (Member) commented May 1, 2019

they don't fail in CI because of a specific CPU architecture requirement to induce the failure?

looks like it

Any creative way we could detect the issue in CI with tests, and is that worth the effort?

Not easy, and no, I don't think it is worth the effort to investigate. There's no way we can cover all common architectures, and we have no control over what hardware CI runs on anyway.

@mortonjt could you check if numpy.linalg.test() and scipy.linalg.test() fail for you? If they don't fail, we should add regression tests for this.

We can try to run specific BLAS tests with different architectures, as set by the OPENBLAS_CORETYPE variable, and that seems like a good idea to me - but we can only test the architectures supported by the CI virtual machines, which might not include SkylakeX, for example. We could also ask people with specific hardware to test for us before release.

Really this is an OpenBLAS issue though, and it happens with pretty much every release. Rather than patch around these problems in projects downstream of OpenBLAS, it would make more sense to help OpenBLAS improve its CI and release QA. Then these things are a lot less likely to hit us in the first place, and it also helps other libraries and communities (R, for example).

We can also try to downgrade back to the same OpenBLAS version we used for 1.15.3, but I guess we upgraded to avoid some other bug...

@matthew-brett (Contributor)

Yes, but we use OpenBLAS for all our binary releases, so it becomes our problem too. It's surely easier for us to make sure that OpenBLAS is working for NumPy than to make sure that they are doing it.

Actually, they have been running some tests here:

http://build.openblas.net/builders

But it doesn't look as if they are testing Skylake.

@rgommers (Member) commented May 1, 2019

It's surely easier for us to make sure that OpenBLAS is working for NumPy than to make sure that they are doing it.

I don't think it is. At least not in CI. We can push people to test whether it works for them with a release candidate, but that's apparently not working, and if we find a problem it immediately becomes a release blocker, and there may not be a working OpenBLAS version at all that we can switch to.

Detecting problems where they're created is almost always the best idea...

@matthew-brett (Contributor)

I'm sure that's better in the abstract, but the issue is agency. We've got lots of agency testing NumPy, but much less testing OpenBLAS. It's difficult for me to imagine that any of us is going to put a lot of effort into - for example - trying to keep the OpenBLAS buildbots maintained, so I suspect that won't happen. That means we're at risk of releasing wheels that give wrong results unless we check. And it's pretty easy for us to check - maybe something like:

export OPENBLAS_CORETYPE
export OPENBLAS_VERBOSE=2
for kernel in penryn sandybridge prescott nehalem atom haswell skylake skylakex; do
    echo Testing with kernel $kernel
    export OPENBLAS_CORETYPE=$kernel
    python -c 'import numpy.linalg as npl; npl.test()'
done

It could go, for example, in nightly wheel tests.

@rgommers (Member) commented May 1, 2019

That very likely won't work because the hardware doesn't support all that (no control over what you get with Travis CI & co). If it were that easy, OpenBLAS wouldn't need buildbots that run outside regular CI, right?

@matthew-brett (Contributor)

Try it on your own machine. I think you'll find that all the kernels work up until Skylake, where they might segfault trying to use assembly instructions your CPU does not have. So the trick is just working out which kernels the CI machine will run, and then running those. From time to time we can prod the machines to see if we can get away with another kernel. I imagine the chances of losing support for a kernel in a VM update are pretty small.

For example, I used this trick a lot for the initial manylinux / OpenBLAS testing.

So no, you don't need special hardware for most of the kernels, but you will for the most cutting-edge ones, like SkylakeX.

@rgommers (Member) commented May 1, 2019

Try it on your own machine.

I have a Ryzen CPU here...

So no, you don't need special hardware for most of the kernels, but you will for the most cutting-edge ones, like SkylakeX.

The most cutting-edge ones are probably the most likely to be a problem, though; at least that's the case here. Anyway, if it is that easy then we should do it. It just doesn't solve the real issue IMHO, so we should also do something more systematic.

@matthew-brett (Contributor)

It would be good to try to gather interest in wider testing by OpenBLAS as well. I guess the Julia people will be interested; they found the SkylakeX bug first. But I suppose we might be OpenBLAS's biggest customers, in terms of users.

@rgommers (Member) commented May 1, 2019

Yep, we have the biggest user base, but Julia may have as much or more developer time available. R uses OpenBLAS too, IIRC. There really aren't too many flavors to choose from...

TiborGY added a commit to TiborGY/OpenBLAS that referenced this issue Jul 7, 2019

Relevant entries from the commit message:

* Disable the AVX512 DGEMM kernel (again), due to as yet unresolved errors seen in OpenMathLib#1955 and OpenMathLib#2029
* Disable the SkyLakeX DGEMMITCOPY kernel as well, as a stopgap measure for numpy/numpy#13401 as mentioned in OpenMathLib#1955
* Disable DGEMMINCOPY as well for now (OpenMathLib#1955)