Made LazyValue virtual thread friendly #34479
Conversation
I don't think this is a good idea: the overhead of reentrant locks while contended is higher (it actually creates real garbage due to adding waiters to the lock's waiter list!) and, in case the synchronized region is not held for "long", you don't want to be descheduled. So, in short: are you observing pinning in some load scenario @imperatorx?
Yes, some of my lazy beans are accessing remote services or databases in their `@PostConstruct` methods.
@mkouba is this a common usage scenario for users?
I don't think it's a good idea to perform a long-running or I/O intensive operation in a `@PostConstruct` callback.
FTR we have the same class (although not public) in Qute. It's only used to lazily init a map of template fragments (no I/O intensive operation).
@franz1981 would a `ReadWriteLock` perform any better?
Nope @mkouba, a read-write lock is even worse because it tracks read lock acquisitions, which again requires atomics, while now we just perform a volatile load on the steady state.
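For illustration only, here is what a hypothetical read-write-lock variant could look like (nobody is proposing this; the class is invented to make the point above concrete): even after the value is set, every `get()` still performs an atomic acquire and release on the read lock, whereas the current fast path is a single volatile load.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Hypothetical variant, only to illustrate the cost described above.
class ReadWriteLockLazyValue<T> {

    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Supplier<T> supplier;
    private T value; // guarded by rwLock

    ReadWriteLockLazyValue(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        // even on the steady state, every reader pays a CAS on the lock state here
        rwLock.readLock().lock();
        try {
            if (value != null) {
                return value;
            }
        } finally {
            rwLock.readLock().unlock();
        }
        rwLock.writeLock().lock();
        try {
            if (value == null) {
                value = supplier.get();
            }
            return value;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```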
I see.
It's not a good practice because you can easily block various things. Unfortunately, there is no async variant of `@PostConstruct`. In theory, a user can leverage CDI async events (or any other async facility) to trigger some I/O intensive init, e.g. something like:

```java
@ApplicationScoped
public class MyBean {

    volatile Object data;

    @Inject
    Event<MyBean> event;

    @PostConstruct
    void init() {
        event.fireAsync(this);
    }
}

public class SomeOtherBean {

    void initMyBean(@ObservesAsync MyBean myBean) {
        asyncClient.doSomething().thenAccept(data -> myBean.data = data);
    }
}
```

But it's not very convenient and you need to make sure the bean is in the correct state before it's actually used...
I'm not sure about the performance impact of ReentrantLock vs synchronized in mostly uncontended scenarios; for example, JEP 353 replaced the synchronized blocks with locks in the JDK socket API, which is on the hot path of most applications.
Well, we could replace locking with a simple CAS-based state machine, something like:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.util.function.Supplier;

public class LazyValue<T> {

    private static final VarHandle VALUE;
    private static final Object INITIALIZATION_IN_PROGRESS = new Object();

    static {
        try {
            VALUE = MethodHandles.lookup().findVarHandle(LazyValue.class, "value", Object.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private final Supplier<T> supplier;

    // this may be:
    // - `null` in case the value has not been computed yet (or `clear()` was called)
    // - `INITIALIZATION_IN_PROGRESS` in case the value is currently being computed
    // - otherwise: the value has been computed
    private transient volatile T value;

    public LazyValue(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    public T get() {
        T value = this.value;
        if (value != null && value != INITIALIZATION_IN_PROGRESS) {
            return value;
        }
        if (VALUE.compareAndSet(this, null, INITIALIZATION_IN_PROGRESS)) {
            value = supplier.get();
            this.value = value;
            return value;
        } else {
            // TODO block until `value` is computed
            throw new UnsupportedOperationException("not implemented in this sketch");
        }
    }

    public T getIfPresent() {
        T value = this.value;
        return value == INITIALIZATION_IN_PROGRESS ? null : value;
    }

    public void clear() {
        value = null;
    }

    public boolean isSet() {
        T value = this.value;
        return value != null && value != INITIALIZATION_IN_PROGRESS;
    }
}
```

The TODO is left as an exercise for the reader. Actually no, I have no idea how I would implement that TODO without implementing a lock :-) Perhaps this is a terrible idea.
@Ladicek it is fine, just use a CompletableFuture, which can be used to await the computation (and will be friendly with Loom). I hope it won't deadlock and will just "wait" without being (virtually) context switched.
They have to support Loom there, but there's currently some ongoing work in the JDK to make synchronized more Loom-friendly as well. As said, I think we need to better capture the users' requirements for these specific cases.
I don't immediately see how I would use a `CompletableFuture` here. (Also, a trivially correct but pretty much horrible implementation of the TODO is of course a spin lock:

```java
while (!isSet()) {
    Thread.onSpinWait();
}
return this.value;
```

I have no idea how Loom-friendly that is 😆)
@Ladicek you can compareAndSet a CompletableFuture as a promise of the future computed value, and then use it to both wait and wake up after the computation has completed (whoever wins the compareAndSet wins the race to populate it). And yes... it's terribly costly (at this point ReentrantLock is fine and better, I think!).
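A minimal sketch of that idea, just to make it concrete (the class and field names here are invented, and this is not what the PR does):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Invented class, not part of Quarkus: the promise is published with a CAS,
// the winner computes the value, everyone else awaits the same future.
class PromiseLazyValue<T> {

    private final Supplier<T> supplier;
    private final AtomicReference<CompletableFuture<T>> ref = new AtomicReference<>();

    PromiseLazyValue(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        CompletableFuture<T> promise = ref.get();
        if (promise == null) {
            CompletableFuture<T> candidate = new CompletableFuture<>();
            if (ref.compareAndSet(null, candidate)) {
                // we won the race: compute the value and complete the promise
                try {
                    candidate.complete(supplier.get());
                } catch (Throwable t) {
                    candidate.completeExceptionally(t);
                }
            }
            promise = ref.get();
        }
        // losers (and later callers) wait here; join() parks the thread
        // in a Loom-friendly way instead of pinning a carrier thread
        return promise.join();
    }
}
```

The extra CompletableFuture per instance and its waiter management are exactly the cost referred to above.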
Ah yeah, that would work, but as you mention, it has a pretty high cost. I guess we could implement a tailored lock by subclassing `AbstractQueuedSynchronizer`. This PR seems good enough to me. (Of course, I'd much prefer being able to give the JVM a hint that synchronization is not an externally visible contract here and let it do whatever it wants...)
Ok, so if I understand the discussion above correctly, we do want to use the `ReentrantLock` here?
Yeah, I think we should accept this (if we care enough, that is). It is a little sad that the size of the `LazyValue` instance grows, though.
It would be interesting to see the numbers from some of our benchmarks before we merge this PR...
```
@@ -34,9 +42,9 @@ public T getIfPresent() {
    }

    public void clear() {
        synchronized (this) {
```
You can check outside of this critical section whether it has already happened and save entering it.
@mkouba do we expect to be able to init it again after a clear?
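A minimal sketch of that early check, assuming the PR's ReentrantLock-based version keeps a volatile `value` field and a `lock` field (both names are assumptions here, not the PR's exact code):

```java
public void clear() {
    if (value == null) {
        return; // nothing to clear: skip the lock entirely (relies on value being volatile)
    }
    lock.lock();
    try {
        value = null;
    } finally {
        lock.unlock();
    }
}
```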
Yes, that can happen - the first thing coming to my mind is dynamic resolution and caching of its result, which can then be cleared and subsequently cached again - the cache is a LazyValue. See https://github.com/quarkusio/quarkus/blob/main/independent-projects/arc/runtime/src/main/java/io/quarkus/arc/impl/InstanceImpl.java#L304
In terms of memory footprint [object layout dumps omitted]: compared to what we have now, just the additional lock internals account for 28 bytes, which ends up as roughly +44 bytes more per instance. In short, depending on how many instances we've got, it could be relevant (or a drop in the ocean too?).
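The numbers above came from object layout dumps that are not reproduced here; one way to obtain that kind of figure is JOL (this snippet assumes the org.openjdk.jol:jol-core dependency and is not from the original comment):

```java
import java.util.concurrent.locks.ReentrantLock;

import org.openjdk.jol.info.ClassLayout;
import org.openjdk.jol.info.GraphLayout;

public class LockFootprint {

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        // field-by-field layout of the ReentrantLock object itself
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        // total retained size, including the internal Sync object
        System.out.println(GraphLayout.parseInstance(lock).totalSize() + " bytes retained");
    }
}
```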
I think that it's acceptable. Or rather not ;-)
What's the status of this?
I think it's mergeable unless @franz1981 has anything else we should address here.
When a bean is first created by ArC while a caller on a virtual thread accesses it, the synchronized block pins the carrier thread. This simple change replaces the synchronized blocks with a `ReentrantLock`.
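A sketch of what such a change amounts to, based on the discussion above rather than the exact diff in this PR:

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

public class LazyValue<T> {

    private final Supplier<T> supplier;
    // a blocked virtual thread parks on the ReentrantLock instead of
    // pinning its carrier thread, unlike a contended synchronized block
    private final ReentrantLock lock = new ReentrantLock();
    private volatile T value;

    public LazyValue(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    public T get() {
        T val = value;
        if (val != null) {
            return val; // steady state: a single volatile load, no locking
        }
        lock.lock();
        try {
            if (value == null) {
                value = supplier.get();
            }
            return value;
        } finally {
            lock.unlock();
        }
    }

    public void clear() {
        lock.lock();
        try {
            value = null;
        } finally {
            lock.unlock();
        }
    }
}
```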