[c11] define mono atomics in terms of standard atomics #91489

Merged: 17 commits, Sep 7, 2023
4 changes: 3 additions & 1 deletion src/mono/mono/tools/offsets-tool/offsets-tool.py
@@ -94,7 +94,9 @@ def require_emscipten_path (args):
self.target_args += ["-target", args.abi]
else:
require_emscipten_path (args)
self.sys_includes = [args.emscripten_path + "/system/include", args.emscripten_path + "/system/include/libc", args.emscripten_path + "/system/lib/libc/musl/arch/emscripten", args.emscripten_path + "/system/lib/libc/musl/include", args.emscripten_path + "/system/lib/libc/musl/arch/generic"]
clang_path = os.path.dirname(args.libclang)
self.sys_includes = [args.emscripten_path + "/system/include", args.emscripten_path + "/system/include/libc", args.emscripten_path + "/system/lib/libc/musl/arch/emscripten", args.emscripten_path + "/system/lib/libc/musl/include", args.emscripten_path + "/system/lib/libc/musl/arch/generic",
clang_path + "/../lib/clang/16/include"]
self.target = Target ("TARGET_WASM", None, [])
self.target_args += ["-target", args.abi]

257 changes: 256 additions & 1 deletion src/mono/mono/utils/atomic.h
@@ -25,7 +25,262 @@ F/MonoDroid( 1568): shared runtime initialization error: Cannot load library: re
Apple targets have historically been problematic; Xcode 4.6 would miscompile the intrinsic.
*/

#if defined(HOST_WIN32)
/* Decide if we will use stdatomic.h */
/*
* Generally, we can enable C11 atomics if the header is available and if all the primitive types we
* care about (int, long, void*, long long) are lock-free.
*
* Note that we generally don't want the compiler's locking implementation because it may take a
* global lock, in which case if the atomic is used by both the GC implementation and runtime
* internals we may have deadlocks during GC suspend.
*
* It might be possible to use a Mono-specific implementation for specific types on some
* platforms if the standard atomics for that type are not lock-free (for example: long
* long). We might be able to use a GC-aware lock, for example.
*
*/
#if defined(_MSC_VER)
/*
* we need two things:
*
* 1. MSVC atomics support is not experimental, or we pass /experimental:c11atomics
*
* 2. We build our C++ code with C++23 or later (otherwise MSVC will complain about including
* stdatomic.h)
*
*/
# undef MONO_USE_STDATOMIC
#elif defined(HOST_IOS) || defined(HOST_OSX) || defined(HOST_WATCHOS) || defined(HOST_TVOS)
# define MONO_USE_STDATOMIC 1
#elif defined(HOST_ANDROID)
/* on Android-x86 ATOMIC_LLONG_LOCK_FREE == 1, not 2 like we want. */
/* on Android-x64 ATOMIC_LONG_LOCK_FREE == 1, not 2 */
/* on Android-armv7 ATOMIC_INT_LOCK_FREE == 1, not 2 */
# if defined(HOST_ARM64)
# define MONO_USE_STDATOMIC 1
# endif
#elif defined(HOST_LINUX)
/* FIXME: probably need arch checks */
# define MONO_USE_STDATOMIC 1
#elif defined(HOST_WASI) || defined(HOST_BROWSER)
# define MONO_USE_STDATOMIC 1
#else
# undef MONO_USE_STDATOMIC
#endif
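
As an aside (an illustration, not part of this change): the ATOMIC_*_LOCK_FREE macros consulted above report whether the corresponding type is always lock-free (2), sometimes lock-free (1), or never lock-free (0). A minimal stand-alone probe for a new target, assuming a C11 compiler, might look like this:

/* probe_lock_free.c: hypothetical helper, illustration only */
#include <stdatomic.h>
#include <stdio.h>

int
main (void)
{
	/* Print the lock-free guarantees for the types mono cares about. */
	printf ("ATOMIC_INT_LOCK_FREE     = %d\n", (int)ATOMIC_INT_LOCK_FREE);
	printf ("ATOMIC_LONG_LOCK_FREE    = %d\n", (int)ATOMIC_LONG_LOCK_FREE);
	printf ("ATOMIC_LLONG_LOCK_FREE   = %d\n", (int)ATOMIC_LLONG_LOCK_FREE);
	printf ("ATOMIC_POINTER_LOCK_FREE = %d\n", (int)ATOMIC_POINTER_LOCK_FREE);
	return 0;
}
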

#ifdef MONO_USE_STDATOMIC

#include <stdatomic.h>

static inline gint32
mono_atomic_cas_i32 (volatile gint32 *dest, gint32 exch, gint32 comp)
{
g_static_assert (sizeof(atomic_int) == sizeof(*dest) && ATOMIC_INT_LOCK_FREE == 2);
(void)atomic_compare_exchange_strong ((atomic_int*)dest, &comp, exch);
Review thread on this line (resolved):

lateralusX (Member), Sep 6, 2023:
Do we have strong guarantees for the mono_atomic_cas_* functions? I believe we normally call these in loops to handle potential spurious failures on platforms that could trigger them. If we want the default implementation to use strong guarantees, then maybe we should also offer the option to use weak where we already do CAS in loops, to potentially be more performant on platforms that take advantage of spurious failures.

lambdageek (Member, Author):
Yea, good point. I need to look at the callers and check if they're all in loops. My feeling is that we should make weak the default and add strong as a second option. I didn't want to change the mono functions as part of this PR (and strong seemed like a safer default). But I can audit and report back, and then we can add the non-default versions.

lambdageek (Member, Author):
Found 173 occurrences of mono_atomic_cas in src/mono/mono. Looked through about 2/3rds of them, chasing down callers where it's in a static helper method or a macro. About half are weak uses and about half are strong, so strong is definitely the right default. In a few places we could switch over to weak (particularly inside sgen there are some loops that could use weak), but that should be follow-up work, IMO.

Member:
That is what I expected; we have made different assumptions around the strong/weak guarantees across the runtime over the years. Moving to strong will potentially correct some false assumptions previously made in the runtime (at least it won't break anything), so it should be safe, and I agree that we could identify and use weak versions where it might benefit.

lambdageek (Member, Author):
Created #91747 for the follow-up work.
return comp;
}
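
For reference, the strong/weak distinction raised in the thread above can be sketched as follows (an illustration with a hypothetical helper name, not part of this diff): atomic_compare_exchange_weak may fail spuriously even when the stored value matches the expected one, so it is only appropriate inside a retry loop, while the strong form fails only on a real mismatch.

static inline gint32
example_weak_cas_increment (volatile gint32 *dest)
{
	/* Illustration only: weak CAS in a retry loop. A spurious failure simply
	 * retries; compare_exchange refreshes `old` with the currently stored value. */
	gint32 old = atomic_load ((atomic_int *)dest);
	while (!atomic_compare_exchange_weak ((atomic_int *)dest, &old, old + 1))
		;
	return old + 1;
}
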

static inline gint64
mono_atomic_cas_i64 (volatile gint64 *dest, gint64 exch, gint64 comp)
{
#if SIZEOF_LONG == 8
g_static_assert (sizeof (atomic_long) == sizeof (*dest) && ATOMIC_LONG_LOCK_FREE == 2);
(void)atomic_compare_exchange_strong ((atomic_long*)dest, (long*)&comp, exch);
return comp;
#elif SIZEOF_LONG_LONG == 8
g_static_assert (sizeof (atomic_llong) == sizeof (*dest) && ATOMIC_LLONG_LOCK_FREE == 2);
(void)atomic_compare_exchange_strong ((atomic_llong*)dest, (long long*)&comp, exch);
return comp;
#else
#error gint64 not same size atomic_llong or atomic_long, define MONO_IGNORE_STDATOMIC
#endif
}

static inline gpointer
mono_atomic_cas_ptr (volatile gpointer *dest, gpointer exch, gpointer comp)
{
g_static_assert(ATOMIC_POINTER_LOCK_FREE == 2);
(void)atomic_compare_exchange_strong ((_Atomic gpointer *)dest, &comp, exch);
return comp;
}

static inline gint32
mono_atomic_fetch_add_i32 (volatile gint32 *dest, gint32 add);
static inline gint64
mono_atomic_fetch_add_i64 (volatile gint64 *dest, gint64 add);

static inline gint32
mono_atomic_add_i32 (volatile gint32 *dest, gint32 add)
{
// mono_atomic_add_* is supposed to return the value that is stored.
// atomic_fetch_add returns the previous value instead,
// so we return prev + add, which is the new value.
return mono_atomic_fetch_add_i32 (dest, add) + add;
}
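
A small usage sketch of the return-value convention described in the comment above (illustration only, hypothetical helper name): atomic_fetch_add reports the previous value, while mono_atomic_add_i32 reports the newly stored value.

static inline void
example_add_return_values (void)
{
	gint32 counter = 5;
	/* fetch-add returns the old value: prev == 5, counter becomes 8 */
	gint32 prev = mono_atomic_fetch_add_i32 (&counter, 3);
	/* mono_atomic_add_i32 returns the new value: now == 11, counter becomes 11 */
	gint32 now = mono_atomic_add_i32 (&counter, 3);
	g_assert (prev == 5 && now == 11);
}
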

static inline gint64
mono_atomic_add_i64 (volatile gint64 *dest, gint64 add)
{
return mono_atomic_fetch_add_i64 (dest, add) + add;
}

static inline gint32
mono_atomic_inc_i32 (volatile gint32 *dest)
{
return mono_atomic_add_i32 (dest, 1);
}

static inline gint64
mono_atomic_inc_i64 (volatile gint64 *dest)
{
return mono_atomic_add_i64 (dest, 1);
}

static inline gint32
mono_atomic_dec_i32 (volatile gint32 *dest)
{
return mono_atomic_add_i32 (dest, -1);
}

static inline gint64
mono_atomic_dec_i64 (volatile gint64 *dest)
{
return mono_atomic_add_i64 (dest, -1);
}

static inline gint32
mono_atomic_xchg_i32 (volatile gint32 *dest, gint32 exch)
{
g_static_assert (sizeof(atomic_int) == sizeof(*dest) && ATOMIC_INT_LOCK_FREE == 2);
return atomic_exchange ((atomic_int*)dest, exch);
}

static inline gint64
mono_atomic_xchg_i64 (volatile gint64 *dest, gint64 exch)
{
#if SIZEOF_LONG == 8
g_static_assert (sizeof (atomic_long) == sizeof (*dest) && ATOMIC_LONG_LOCK_FREE == 2);
return atomic_exchange ((atomic_long*)dest, exch);
#elif SIZEOF_LONG_LONG == 8
g_static_assert (sizeof (atomic_llong) == sizeof (*dest) && ATOMIC_LLONG_LOCK_FREE == 2);
return atomic_exchange ((atomic_llong*)dest, exch);
#else
#error gint64 not same size atomic_llong or atomic_long, define MONO_IGNORE_STDATOMIC
#endif
}

static inline gpointer
mono_atomic_xchg_ptr (volatile gpointer *dest, gpointer exch)
{
g_static_assert (ATOMIC_POINTER_LOCK_FREE == 2);
return atomic_exchange ((_Atomic gpointer*)dest, exch);
}

static inline gint32
mono_atomic_fetch_add_i32 (volatile gint32 *dest, gint32 add)
{
g_static_assert (sizeof(atomic_int) == sizeof(*dest) && ATOMIC_INT_LOCK_FREE == 2);
return atomic_fetch_add ((atomic_int*)dest, add);
}

static inline gint64
mono_atomic_fetch_add_i64 (volatile gint64 *dest, gint64 add)
{
#if SIZEOF_LONG == 8
g_static_assert (sizeof (atomic_long) == sizeof (*dest) && ATOMIC_LONG_LOCK_FREE == 2);
return atomic_fetch_add ((atomic_long*)dest, add);
#elif SIZEOF_LONG_LONG == 8
g_static_assert (sizeof (atomic_llong) == sizeof (*dest) && ATOMIC_LLONG_LOCK_FREE == 2);
return atomic_fetch_add ((atomic_llong*)dest, add);
#else
#error gint64 not same size atomic_llong or atomic_long, define MONO_IGNORE_STDATOMIC
#endif
}

static inline gint8
mono_atomic_load_i8 (volatile gint8 *src)
{
g_static_assert (sizeof(atomic_char) == sizeof(*src) && ATOMIC_CHAR_LOCK_FREE == 2);
return atomic_load ((atomic_char *)src);
}

static inline gint16
mono_atomic_load_i16 (volatile gint16 *src)
{
g_static_assert (sizeof(atomic_short) == sizeof(*src) && ATOMIC_SHORT_LOCK_FREE == 2);
return atomic_load ((atomic_short*)src);
}

static inline gint32 mono_atomic_load_i32 (volatile gint32 *src)
{
g_static_assert (sizeof(atomic_int) == sizeof(*src) && ATOMIC_INT_LOCK_FREE == 2);
return atomic_load ((atomic_int*)src);
}

static inline gint64
mono_atomic_load_i64 (volatile gint64 *src)
{
#if SIZEOF_LONG == 8
g_static_assert (sizeof (atomic_long) == sizeof (*src) && ATOMIC_LONG_LOCK_FREE == 2);
return atomic_load ((atomic_long*)src);
#elif SIZEOF_LONG_LONG == 8
g_static_assert (sizeof (atomic_llong) == sizeof (*src) && ATOMIC_LLONG_LOCK_FREE == 2);
return atomic_load ((atomic_llong*)src);
#else
#error gint64 not same size atomic_llong or atomic_long, define MONO_IGNORE_STDATOMIC
#endif
}

static inline gpointer
mono_atomic_load_ptr (volatile gpointer *src)
{
g_static_assert (ATOMIC_POINTER_LOCK_FREE == 2);
return atomic_load ((_Atomic gpointer*)src);
}

static inline void
mono_atomic_store_i8 (volatile gint8 *dst, gint8 val)
{
g_static_assert (sizeof(atomic_char) == sizeof(*dst) && ATOMIC_CHAR_LOCK_FREE == 2);
atomic_store ((atomic_char*)dst, val);
}

static inline void
mono_atomic_store_i16 (volatile gint16 *dst, gint16 val)
{
g_static_assert (sizeof(atomic_short) == sizeof(*dst) && ATOMIC_SHORT_LOCK_FREE == 2);
atomic_store ((atomic_short*)dst, val);
}

static inline void
mono_atomic_store_i32 (volatile gint32 *dst, gint32 val)
{
g_static_assert (sizeof(atomic_int) == sizeof(*dst) && ATOMIC_INT_LOCK_FREE == 2);
atomic_store ((atomic_int*)dst, val);
}

static inline void
mono_atomic_store_i64 (volatile gint64 *dst, gint64 val)
{
#if SIZEOF_LONG == 8
g_static_assert (sizeof (atomic_long) == sizeof (*dst) && ATOMIC_LONG_LOCK_FREE == 2);
atomic_store ((atomic_long*)dst, val);
#elif SIZEOF_LONG_LONG == 8
g_static_assert (sizeof (atomic_llong) == sizeof (*dst) && ATOMIC_LLONG_LOCK_FREE == 2);
atomic_store ((atomic_llong*)dst, val);
#else
#error gint64 not same size atomic_llong or atomic_long, define MONO_IGNORE_STDATOMIC
#endif
}

static inline void
mono_atomic_store_ptr (volatile gpointer *dst, gpointer val)
{
g_static_assert (ATOMIC_POINTER_LOCK_FREE == 2);
atomic_store ((_Atomic gpointer*)dst, val);
}

#elif defined(HOST_WIN32)

#ifndef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN