Concurrencies in stdlib (`do concurrent`) #429
Comments
Would this be mutually exclusive with #213 (OpenMP)? If so, I would think OpenMP is preferable due to better compiler support, no? For those with less experience with `do concurrent` (me), are you able to elaborate on what you mean by parallel and non-parallel concurrencies in `do concurrent`, with examples, and the issues you mentioned with the latter? Another basic question (sorry): is `do concurrent` deterministic, since this will affect whether we can effectively test it in our CI?
I think for our purpose concurrencies and OpenMP are in principle similar and therefore more or less mutually exclusive. So far it seems like I was only able to explore how `do concurrent` … Given that OpenMP offers more flexibility than just parallelization of loops (like tasks, sections, ...) and coarrays/collectives are still incompatible with library applications (they require a Fortran main program), OpenMP seems to be indeed the most appealing choice.
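For illustration, here is the same elementwise update written both ways (a minimal sketch; the array names and sizes are made up, not taken from stdlib):

```fortran
program loop_styles
  implicit none
  integer :: i
  real :: x(1000), y(1000)
  x = 1.0
  y = 2.0

  ! do concurrent: the programmer asserts iteration independence;
  ! the compiler may (but need not) parallelize it.
  do concurrent (i = 1:size(x))
    y(i) = 2.0*x(i) + y(i)
  end do

  ! OpenMP: an explicit directive, silently ignored as a comment
  ! when OpenMP support is not enabled at compile time.
  !$omp parallel do
  do i = 1, size(x)
    y(i) = 2.0*x(i) + y(i)
  end do
  !$omp end parallel do

  print *, y(1)
end program loop_styles
```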
Just my two cents here:
The compiler is supposed to be very conservative with DO CONCURRENT. That is, unless it can prove that each iteration is independent of the other iterations, DO CONCURRENT will become an ordinary DO loop with slightly different syntax. One thing that prevents parallelism in this case is a write statement.
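A minimal sketch of the point above: the first loop has trivially independent iterations, while the second contains a write statement, which a conservative compiler will typically execute serially to keep the I/O ordered:

```fortran
program concurrent_conservative
  implicit none
  integer :: i
  real :: a(100)

  ! Independent iterations: safe for the compiler to parallelize.
  do concurrent (i = 1:100)
    a(i) = real(i)**2
  end do

  ! A write statement inside the loop: typically executed serially,
  ! since ordered I/O prevents parallel execution.
  do concurrent (i = 1:3)
    write (*, '(a, i0)') 'iteration ', i
  end do
end program concurrent_conservative
```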
On Mon, 7 Jun 2021 at 16:21, Laurence Kedward wrote:
> Would this be mutually exclusive with #213 (OpenMP)? If so, I would think OpenMP is preferable due to better compiler support, no? For those with less experience with `do concurrent` (me), are you able to elaborate on what you mean by parallel and non-parallel concurrencies in `do concurrent`, with examples, and the issues you mentioned with the latter? Another basic question (sorry): is `do concurrent` deterministic, since this will affect whether we can effectively test it in our CI?
That was my initial assumption as well with `do concurrent`. Therefore, I started this thread with the question of whether we want to use it as parallel concurrency.
Yes, I think we should use `do concurrent`.
With the recent announcement of nvfortran offloading `do concurrent` to GPUs, …
@milancurcic do you have a project where you make use of `do concurrent`?
Here are some examples: UMWM, neural-fortran. How exactly does …
Thanks, this actually looks pretty straightforward. My first encounter with `do concurrent` … I tried …
Yes, I don't think you can rely on `do concurrent` for parallelism. For parts of the program that I need to run in parallel, I use coarrays or MPI. And that code can also have `do concurrent` loops in it.
Is it a problem if a procedure that includes `do concurrent` is called inside an OpenMP parallel region?
In at least some of your loops you could also use array intrinsics, i.e. the explicit loop could be replaced with an intrinsic call, or even with whole-array arithmetic, assuming the ranges signify the whole array.
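The original code snippets from this comment were lost in the scrape; as a hypothetical illustration of the suggestion, an elementwise loop can often collapse into whole-array arithmetic:

```fortran
program array_intrinsics
  implicit none
  integer :: i
  real :: x(10), y(10)
  x = [(real(i), i = 1, 10)]

  ! An explicit elementwise loop ...
  do concurrent (i = 1:10)
    y(i) = sqrt(x(i))
  end do

  ! ... can be replaced with whole-array arithmetic when the
  ! loop ranges span the whole array:
  y = sqrt(x)

  print *, sum(y)
end program array_intrinsics
```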
Judging by the current state of compiler support, relying on `do concurrent` feels quite risky to me at this early stage.
Keep in mind that I'm very open to looking more into parallel concurrencies; the possibility to avoid pragmas for shared-memory parallelism seems like a very significant advantage to me. I don't see a fundamental issue with `do concurrent` itself. So far I found a few issues which discourage the usage of this language feature for me: …
At least the scheduling issue blocks the usage of `do concurrent`.
@jvdp1 I don't have experience with OpenMP, but it looks like @awvwgk had issues combining OpenMP and `do concurrent`.
It's controversial only in the context of whether a concurrency should imply parallel execution.
What do you think is risky about it? A counter-argument: if we use it early and liberally, we'll increase its surface area (i.e. the number of users that rely on it), which will incentivize vendors to make it better (e.g. more stable, better performance, offloading to various GPUs, etc.). We're in an experimental stage, so if there are problems it will be easy to step back from it.
@awvwgk Is that really true? I think all …
Regarding OpenMP, it might be a construct that works with `do concurrent` …
So far OpenMP and `do concurrent` …
In applications where the programmer has gone to great lengths to parallelize code themselves, they may not want stdlib library functions to have their own internal parallelism. For example, I'm working on an application where I've carefully arranged teams of threads and pinned them to specific cores, with one such team per NUMA node on the machine. Work items are dealt out by team, with each team collaborating on a given work item. It's extremely fast, but took some careful planning. I'd be annoyed if I were calling library functions that were internally creating their own threads.

The above is perhaps not the most typical use case, but I'd be in favour of using OpenMP or `do concurrent` inside stdlib only if it can be turned off at build time. That way, even if it's on by default, I could rebuild stdlib and disable parallelism when I want to manage that stuff myself.
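One way such a build-time switch could look (a sketch only: the `STDLIB_PARALLEL` macro and `axpy` routine are hypothetical, assuming the build system preprocesses the source, e.g. via a `.F90` extension):

```fortran
! Hypothetical sketch of a build-time parallelism toggle.
module stdlib_axpy
  implicit none
contains
  subroutine axpy(a, x, y)
    real, intent(in) :: a, x(:)
    real, intent(inout) :: y(:)
    integer :: i
#ifdef STDLIB_PARALLEL
    ! Parallel build: let the compiler parallelize the concurrency.
    do concurrent (i = 1:size(x))
      y(i) = a*x(i) + y(i)
    end do
#else
    ! Serial build: a plain loop, no internal threading.
    do i = 1, size(x)
      y(i) = a*x(i) + y(i)
    end do
#endif
  end subroutine axpy
end module stdlib_axpy
```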
Should we use `do concurrent` in stdlib as parallel concurrency?

There has been a lot of discussion about the `do concurrent` construct in Fortran (j3-fortran, discourse), especially with respect to the question of whether a concurrency should imply parallel execution, which some compilers (Intel, NVIDIA) already support. I don't want to open a discussion about the `do concurrent` construct here; instead I want to discuss how we can make best use of `do concurrent` in stdlib.

From my experience, concurrent but not parallel constructs inside `do concurrent` can cause issues with compilers that enable aggressive parallelization for concurrencies. I therefore suggest to only use `do concurrent` for parallel concurrencies, unless the locality specification is explicitly given (a Fortran 2018 feature).

It is important that we test the parallel concurrencies in our continuous integration workflows. This means we actually have to compile a parallel version of stdlib and enable compiler support for parallelization of `do concurrent` in our build files, which we are currently not doing.
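The Fortran 2018 locality specification mentioned above can be written as follows (a minimal sketch; note that compiler support for locality specifiers still varies):

```fortran
program locality_demo
  implicit none
  integer :: i, n
  real :: tmp, scale
  real :: a(100), b(100)
  n = 100
  scale = 0.5
  b = 1.0

  ! Explicit locality (Fortran 2018): tmp is private to each
  ! iteration, while b and scale are shared read-only inputs
  ! and a receives the results.
  do concurrent (i = 1:n) local(tmp) shared(a, b, scale)
    tmp = scale*b(i)
    a(i) = tmp + 1.0
  end do

  print *, a(1)
end program locality_demo
```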