
[enhancement]: Ship a correctly rounded threaded OpenBLAS as an Artifact #131

Open · orkolorko opened this issue Jan 31, 2023 · 7 comments
Labels: enhancement (New feature or request)

@orkolorko (Collaborator) commented Jan 31, 2023

Feature description

I think it would be a good idea to ship a version of OpenBLAS compiled with the CONSISTENT_FPCSR=1 flag as an Artifact together with the library, or to compile it during installation.

The main reason is that the system (or Julia) OpenBLAS distribution may not have this flag enabled.
Even if Julia is started with only 1 thread, OpenBLAS may still run with multiple threads unless told otherwise, and each thread may then use a different rounding mode.

Currently, a fix that allows consistent rounding is to launch Julia with OPENBLAS_NUM_THREADS=1, but this affects performance.
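
The same effect can also be obtained from within a running session. A minimal sketch (my illustration, not part of the proposal), using only the standard LinearAlgebra BLAS interface:

```julia
using LinearAlgebra

# Inspect how many threads the currently loaded BLAS uses.
@show BLAS.get_num_threads()

# Pin BLAS to a single thread so only one rounding mode is ever in play.
# This is equivalent to launching Julia with OPENBLAS_NUM_THREADS=1 and has
# the same performance cost on large matrix products.
BLAS.set_num_threads(1)
```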

See:
- Julia Threads + BLAS Threads
- Using directed rounding in Octave/Matlab

@lucaferranti (Member)

Hi @orkolorko, apologies for the delay in answering.

This sounds very interesting!

Exploiting BLAS multithreading is also what makes Rump's multiplication algorithm faster. We can use matrix multiplication as a benchmark to see how this affects performance.
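
A rough benchmark sketch of what I have in mind (the matrix sizes are placeholders, and it assumes BenchmarkTools is installed):

```julia
using LinearAlgebra, BenchmarkTools

A, B = rand(2000, 2000), rand(2000, 2000)

BLAS.set_num_threads(1)                 # single-threaded baseline
@btime $A * $B;

BLAS.set_num_threads(Sys.CPU_THREADS)   # let OpenBLAS use all cores
@btime $A * $B;
```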

@OlivierHnt (Member)

Glancing at the code, it seems that OpenBLASConsistentFPCSR_jll is being used; so I am curious what the missing piece is that prevents this issue from being closed.
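
For context, here is a sketch of how a package can route Julia's BLAS calls to that artifact via libblastrampoline. This is my own illustration, assuming the JLL exposes the library path under the conventional name used by other OpenBLAS JLLs:

```julia
using LinearAlgebra
using OpenBLASConsistentFPCSR_jll  # assumed to provide an OpenBLAS built with CONSISTENT_FPCSR=1

# Forward all BLAS/LAPACK calls to the consistently rounded OpenBLAS.
# `libopenblas_path` is the conventional export of OpenBLAS JLLs; check the
# JLL's actual exports if this name differs.
BLAS.lbt_forward(OpenBLASConsistentFPCSR_jll.libopenblas_path; clear = true)

BLAS.get_config()  # should now list the forwarded library
```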

Also, do I understand correctly that this resolves the issue raised in "Parallel Implementation of Interval Matrix Multiplication", by N. Revol and P. Théveny? I quote (cf. page 2):

> The difficulties one has to face when implementing an interval matrix multiplication are manifold. Implementing interval arithmetic through floating-point arithmetic relies on changes of the rounding modes, either rounding downwards and upwards with the representation by endpoints, or rounding to nearest and upwards with the so-called midpoint-radius representation, using the midpoints and radii. Whether the rounding mode is kept unchanged or modified by BLAS routines is undocumented. Furthermore, whether the rounding mode is properly saved and restored at context switches, as in a multithreaded execution, is not documented either [...].

If so, this means Rump's algorithms, which are faster than the ones presented in the article, can be safely used for rigorous numerics.

@orkolorko (Collaborator, Author)

Hi @OlivierHnt, together with @lucaferranti we implemented some of Rump's algorithms in https://github.com/JuliaBallArithmetic/BallArithmetic.jl, with the idea of bringing them back into IntervalLinearAlgebra.jl in the future. Feel free to have a look.

@OlivierHnt (Member)

Thanks for the link. However, I am not sure these algorithms are rigorous, since I did not see how you impose that $A \cdot B$ and $|A| \cdot |B|$ are both computed in the same order.
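
To make the concern concrete, here is a schematic sketch (my own illustration, not code from BallArithmetic.jl) of the kind of bound at stake; it is only justified when the two products below are accumulated in the same order, which a threaded BLAS does not promise:

```julia
using LinearAlgebra

# Schematic a-posteriori error bound for a floating-point matrix product.
# A * B and abs.(A) * abs.(B) must be accumulated in the same order for a
# bound of this shape to be turned into a rigorous enclosure; note also that
# the radius below is itself computed in floating point, so further care is
# needed to make it fully rigorous.
function mmul_with_bound(A::Matrix{Float64}, B::Matrix{Float64})
    n = size(A, 2)
    C = A * B                      # computed midpoint
    Γ = abs.(A) * abs.(B)          # the same product on the absolute values
    u = eps(Float64) / 2           # unit roundoff
    γ = n * u / (1 - n * u)        # classical accumulation-error factor
    return C, γ .* Γ               # entrywise radius bound
end
```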

I glanced at JuliaPackaging/Yggdrasil#6215, which added OpenBLASConsistentFPCSR_jll. You did not support aarch64 at the time. I have an M1 chip, so maybe I could give you a hand with debugging this.
Do you have instructions I could follow to do so?

@orkolorko (Collaborator, Author) commented Aug 26, 2024

It has been a long time since I did this, so I don't remember it very well. Is this flag supported on aarch64?

I understand what you mean by the order, as in Theorem 3.4 on p. 42. So probably the best thing is to fall back to a single-threaded computation to guarantee the bound; I will think about this.

Edit: I checked; at the time we compiled the library, CONSISTENT_FPCSR was not supported on aarch64, and there was an open issue about it on OpenBLAS... I don't know if it is supported now.

Edit 2: It seems it was added in version 0.3.22 (26 March 2023).

@OlivierHnt (Member)

> I understand what you mean by the order, as in Theorem 3.4 on p. 42. So probably the best thing is to fall back to a single-threaded computation to guarantee the bound; I will think about this.

The other option (and a better one in terms of performance) is to not use this theorem, at the cost of one additional matrix multiplication, as described in "Fast interval matrix multiplication" by Rump (e.g. Algorithm 4.5 instead of Algorithm 4.7).

> Edit 2: It seems it was added in version 0.3.22 (26 March 2023).

Ah, that's good to hear. How does one update the artifact?

@orkolorko (Collaborator, Author) commented Aug 26, 2024

I will update BallArithmetic.jl with MMul4 as default, thanks!

About the artifact, I think what is needed is to update this line so that the platform list contains aarch64.

Something like

```julia
platforms = expand_gfortran_versions(supported_platforms(; exclude = p -> (arch(p) != "x86_64" && arch(p) != "aarch64")))
```

In BallArithmetic.jl there are some "smoking gun" tests to check that everything is working fine, if you want to try them.
