Fixes #8007 - Support Loom. #8035

Closed
wants to merge 3 commits

Conversation


@sbordet sbordet commented May 22, 2022

Implemented virtual threads support for HTTP/1.1, HTTP/2 and HTTP/3.

Signed-off-by: Simone Bordet [email protected]

@sbordet sbordet requested review from gregw, joakime and lorban May 22, 2022 20:27
@sbordet sbordet linked an issue May 22, 2022 that may be closed by this pull request

sbordet commented May 22, 2022

@gregw @lorban @joakime here is the proposal to support Loom.

I have 2 concerns:

  • For HTTP/1.1 it seems too coarse-grained to run the whole ReadCallback in a virtual thread. The other implementations only run the HttpChannel.handle() Runnable in a virtual thread, and in HttpConnection we could do the same where we call httpChannel.handle() directly (see the sketch after this list).
    However, doing so could cause CPU cache misses: the request has just been parsed on one core, and the virtual thread that calls HttpChannel.handle() may be scheduled on another core.
    Still, it is worth discussing (e.g. in HttpConnection we do non-blocking reads and parsing, which is not the use case for Loom). Also, for H2 and H3 we may do better if AdaptiveExecutionStrategy is virtual-thread aware.
  • What about HttpClient? The current contract is that callback methods implemented by applications are supposed to be non-blocking. We could invoke each of them in a virtual thread, but we would lose CPU cache locality as discussed above -- not sure I want to do that. I'm leaning towards applications having to use virtual threads themselves if they have blocking code.
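A minimal sketch of the kind of dispatch discussed in the first point, assuming the JDK virtual-thread API (Executors.newVirtualThreadPerTaskExecutor(), final in Java 21); the class, dispatch() and handleRequest names are hypothetical stand-ins for HttpConnection invoking the HttpChannel.handle() Runnable, not code from this PR:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

// Hypothetical sketch, not the PR's code: parse on the native I/O thread,
// but run only the application-facing handling step on a virtual thread.
public class VirtualThreadDispatch
{
    // One virtual thread per task (JDK 21 API).
    private final Executor virtual = Executors.newVirtualThreadPerTaskExecutor();
    private final boolean useVirtualThreads;

    public VirtualThreadDispatch(boolean useVirtualThreads)
    {
        this.useVirtualThreads = useVirtualThreads;
    }

    // "handleRequest" stands in for the HttpChannel.handle() Runnable;
    // parsing has already happened on the calling (native) thread.
    public void dispatch(Runnable handleRequest)
    {
        if (useVirtualThreads)
            virtual.execute(handleRequest); // blocking app code won't pin a pooled thread
        else
            handleRequest.run();
    }
}
```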

Thoughts?

Updates after review.

Signed-off-by: Simone Bordet <[email protected]>
@sbordet sbordet requested a review from gregw May 23, 2022 10:58

sbordet commented May 23, 2022

Reworked this PR to have useVirtualThreadsToInvokeRootHandler only on the server-side.


@gregw gregw left a comment


I feel this is under-analysed. I'm not exactly sure when virtual threads are being used, and I suspect they are being over-used. For example, they should not be used for async callbacks.

We need to support this PR with benchmarks. I'd also like to see how this plays out in jetty-12, as we don't want to add a feature that will then be significantly different in 12.

@@ -34,18 +33,19 @@ public abstract class AbstractConnectionFactory extends ContainerLifeCycle imple
 {
     private final String _protocol;
     private final List<String> _protocols;
-    private int _inputbufferSize = 8192;
+    private int _inputBufferSize = 8192;
+    private boolean _useVirtualThreadToInvokeRootHandler;
Contributor


I think _useVirtualThreads is sufficient for the name. Make it clear in the javadoc that we use virtual threads where it is appropriate to do so, specifically for invoking the root handler.

Comment on lines +79 to +87
protected void execute(Runnable task)
{
    if (_httpConnection.isUseVirtualThreadToInvokeRootHandler())
    {
        if (VirtualThreads.startVirtualThread(task))
            return;
    }
    super.execute(task);
}
Contributor


Is this using virtual threads too much? It is certainly using them more than the current name, "InvokeRootHandler", claims.

Comment on lines +270 to +276
public void fillInterested()
{
    if (_readCallback != null)
        getEndPoint().fillInterested(_readCallback);
    else
        super.fillInterested();
}
Contributor


This feels asymmetric, and it adds another "if" in the critical path. It would be better to make the _readCallback in AbstractConnection extensible rather than duplicating it.


lorban commented May 24, 2022

I second @gregw's comment, as I fail to see what benefit virtual threads bring here in exchange for the added complexity.

IMHO it's worth investigating from a side-branch, but not merging into any main branch for the moment.


@gregw gregw left a comment


@sbordet Thinking about this a bit more, all we need to do is start using virtual threads for any fillInterested callbacks that are blocking.

This could be done at several levels:

  1. In AbstractConnection, we could replace the default blocking ReadCallback with a non-blocking version that starts a virtual thread. This would catch anything that just calls the no-args fillInterested().
  2. Alternatively, we could intercept in the AbstractEndPoint#fillInterested(Callback) method, so that any blocking callback passed in is immediately wrapped in a non-blocking callback that uses a virtual thread.
  3. Or we could do the same in the FillInterested class itself.

I think 1 is the simplest and lowest cost, but 2 is probably the more thorough solution (sketched below). Our execution strategy around the selector would then never see a blocking callback.
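A rough sketch of option 2, assuming the JDK virtual-thread API (Thread.ofVirtual(), final in Java 21); the Callback interface below is a simplified stand-in for Jetty's callback/invocation-type abstraction, not the real API:

```java
// Hypothetical sketch of option 2: wrap any blocking callback before
// registering read interest, so the selector only ever sees a non-blocking one.
public class FillInterestWrapping
{
    // Simplified stand-in for Jetty's callback abstraction.
    interface Callback
    {
        void succeeded();
        void failed(Throwable cause);
        default boolean isBlocking() { return true; }
    }

    static Callback wrapIfBlocking(Callback callback)
    {
        if (!callback.isBlocking())
            return callback;
        return new Callback()
        {
            // The selector thread only starts a virtual thread; the original
            // (possibly blocking) callback runs inside it.
            @Override
            public void succeeded()
            {
                Thread.ofVirtual().start(callback::succeeded);
            }

            @Override
            public void failed(Throwable cause)
            {
                Thread.ofVirtual().start(() -> callback.failed(cause));
            }

            @Override
            public boolean isBlocking()
            {
                return false;
            }
        };
    }
}
```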

We could perhaps do the same on the write side... but I'm not sure we have many instances of blocking callbacks passed to a write?


gregw commented Jul 6, 2022

@sbordet actually change that! The best place to do this is with a pluggable execution strategy (which we used to have??? still do???).
The Loom execution strategy would be:

  • If the task is NON_BLOCKING, then consume it in PC (ProduceConsume) mode, as always.
  • Otherwise, if a reserved thread is available, use it to keep producing and consume the task as EPC (ExecuteProduceConsume), using a real thread with a hot cache.
  • Otherwise, start a virtual thread (instead of PEC, ProduceExecuteConsume), substituting our QTP queue delay and handoff with a virtual thread startup and handoff.

There can then be variations of that once we have pluggable strategies.
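A tiny illustrative sketch of that decision logic; the mode names echo Jetty's execution-strategy vocabulary (PC, EPC, PEC), but this is a hand-written illustration, not the real AdaptiveExecutionStrategy:

```java
// Illustrative only: how a "Loom" execution strategy might pick a mode for a task.
public class LoomStrategySketch
{
    enum Mode { PRODUCE_CONSUME, EXECUTE_PRODUCE_CONSUME, VIRTUAL_THREAD }

    static Mode selectMode(boolean taskIsNonBlocking, boolean reservedThreadAvailable)
    {
        if (taskIsNonBlocking)
            return Mode.PRODUCE_CONSUME;         // run it inline (PC), as always
        if (reservedThreadAvailable)
            return Mode.EXECUTE_PRODUCE_CONSUME; // hand off production, consume with a hot cache (EPC)
        return Mode.VIRTUAL_THREAD;              // start a virtual thread instead of queueing (PEC)
    }
}
```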


sbordet commented Jul 6, 2022

@gregw I think about it differently: if I deploy my webapps in Jetty and I enable Loom, I really want my applications to be called by a virtual thread.

In your example above, if we have a reserved thread, then we use a native thread to call the application, and that would be unexpected.

For example, AWS has a signing service - you give it bytes, it gives back a crypto signature - but it's a blocking AWS API call.
If an application must use that AWS service, it wants to be invoked in a virtual thread (so it does not need to change its code, Jetty does all the magic, and it scales better because it does not block waiting for the AWS service).

So I think we must always invoke user code with a virtual thread if Jetty is configured to do so.
Even in the case of an application using Servlet 3.1 non-blocking I/O, we don't know what the user is doing in the Servlet API callbacks (onDataAvailable(), onWritePossible(), etc.), so we must always use a virtual thread (in case the application performs a blocking operation).

So perhaps we tag the Runnables we know are calling user code with a UserCode interface, and we only execute them in a virtual thread if they implement UserCode?
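A minimal sketch of that tagging idea, assuming the JDK virtual-thread API (Thread.ofVirtual(), final in Java 21); the UserCode interface and UserCodeDispatch class are hypothetical, not existing Jetty types:

```java
import java.util.concurrent.Executor;

// Hypothetical sketch: only Runnables explicitly tagged as user code are
// dispatched to a virtual thread; everything else stays on the thread pool.
public class UserCodeDispatch
{
    // Marker interface for tasks that invoke application code.
    interface UserCode extends Runnable
    {
    }

    static void execute(Runnable task, Executor pool)
    {
        if (task instanceof UserCode)
            Thread.ofVirtual().start(task); // may block; the virtual thread absorbs it
        else
            pool.execute(task);             // internal tasks keep using native threads
    }
}
```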


gregw commented Jul 6, 2022

@sbordet application code should not care what kind of thread it gets. It can make the blocking AWS call with either type.

But if you really want that, then it's just a different execution strategy that doesn't use reserved threads.


sbordet commented Jul 6, 2022

application code should not care what kind of thread it gets

They actually do; it's the whole point of Loom. Call apps with a virtual thread and they can block with no problems. Otherwise they have to go non-blocking/reactive to scale.
Existing apps that currently block will get an automatic benefit just by turning on Loom, with no code changes.

But if you really want that, then it's just a different execution strategy that doesn't use reserved threads.

You mean a strategy with reserved=0 that always executes using virtual threads?
Maybe, but wouldn't that be too coarse, since we would execute Runnables that don't call application code in virtual threads even when not necessary?
And it would be less efficient too, due to reserved=0?


gregw commented Jul 6, 2022

@sbordet, you've not understood my proposal.

The application is able to block no matter what type of thread it is called by. All we have to do is ensure that we don't use the last available real thread to call it, i.e. if no reserved threads are available, then use a virtual thread to call the application.

If all the real threads but one end up blocked in the application, then there is no problem, as virtual threads are used from then on, whilst the last real thread does the producing. Less hot caches, but it still works.

Or configure with one real thread to do the producing, and every other thread will be virtual.

Either way, the solution is to have a pluggable strategy again.


sbordet commented Jul 6, 2022

@gregw we're not on the same page. Loom is an application feature, not a server feature.
An application that enabled Loom and is then not called by a virtual thread will be surprised.

We don't care about using Loom inside the server. We have to use Loom to call the application, no matter how many reserved threads we have, etc.


gregw commented Jul 6, 2022

agreed. We are not on the same page :) let's chat.


sbordet commented Aug 10, 2022

Closed in favor of #8360.

@sbordet sbordet closed this Aug 10, 2022
@sbordet sbordet deleted the jetty-10.0.x-8007-support-loom branch August 10, 2022 07:11