Long-running tasks blocking the WebSocket process #585
Replies: 4 comments 1 reply
-
Jordi,
I've used ReactPHP for long-running tasks and in small setups it's OK. For
larger-scale applications I've given up and moved to Java/Vert.x. I don't
think this is specific to ReactPHP but rather to PHP in general and its
garbage collection. The only tip I can give for now is to disable circular
garbage collection with `gc_disable()`. This does not disable garbage
collection altogether, but it helps. It means memory will grow and the
process will need to be restarted regardless, but that's better than pegging
a single process/core at 100% CPU. Over time the garbage collection alone
causes delays so long that clients start disconnecting and reconnecting
because the server isn't responding fast enough.
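The `gc_disable()` tip above can be sketched as follows. This is a minimal illustration under stated assumptions, not code from the project; the periodic-timer alternative assumes a ReactPHP-style `$loop` variable is in scope:

```php
<?php
// Sketch: bootstrap of a long-running WebSocket daemon.
// gc_disable() turns off only the cycle (circular-reference) collector;
// refcount-based freeing still happens, so non-cyclic data is reclaimed
// as usual. Trade-off: cyclic garbage accumulates, so the process must
// be restarted periodically.
gc_disable();

// ... set up the event loop / WebSocket server here ...

// Alternative: keep the cycle collector off during normal operation and
// run it explicitly at a quiet moment you choose, e.g. on a timer
// (assumes $loop is a ReactPHP LoopInterface):
// $loop->addPeriodicTimer(300, function () {
//     gc_collect_cycles(); // pay the collection cost on your schedule
// });
```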
However that said, this depends on your setup and you may not run into this
issue. It could be something as simple as using PDO rather than mysqli. PHP
in general does not have built-in async calls for IO, so you have to be very
careful with every call you make. It's very easy to block the event loop
that ReactPHP relies on.
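To see how easily one blocking call stalls every connected client, here is a small sketch. It assumes `react/event-loop` is installed via Composer; `sleep()` stands in for any synchronous call such as a slow PDO query:

```php
<?php
// Sketch: a single blocking call freezes the whole event loop.
require 'vendor/autoload.php';

use React\EventLoop\Loop;

// A heartbeat standing in for WebSocket traffic to all clients.
Loop::addPeriodicTimer(1.0, function () {
    echo "tick\n";
});

Loop::addTimer(2.0, function () {
    // sleep() is a stand-in for a blocking call like PDO::query().
    // For its full duration, no ticks fire and no WebSocket frames
    // are sent or received: every client stalls.
    sleep(5);
});

Loop::run();
</imports>
```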
Thanks,
Joseph Montanez
…On Tue, Oct 1, 2024 at 1:35 AM Jordi Bassagana ***@***.***> wrote:
Until recently, we had implemented our chess functionality on two
different servers: The WebSocket server
<https://github.com/chesslablab/chess-server> and the web server
<https://github.com/chesslablab/chess-api>. The former was intended to
run real-time tasks while the latter hosted a REST-like API for
long-running tasks like database queries, ad hoc reporting, and so on,
which would take a few seconds to run.
This separation of concerns was perfectly fine.
However, at some point it was decided to get rid of the web server and try
out an implementation completely based on WebSockets. All functionality,
which is to say real-time operations and long-running operations, was moved
to the WebSocket server, mainly because this setup looked simpler and
cheaper.
The thing is, whether using Workerman or Ratchet, the staging server has
demonstrated that there is something wrong with this setup.
If two users are playing chess online (real-time) while another user is
generating an ad hoc report (long-running), the two users playing online
will experience a bottleneck because the report generation seems to be
blocking the WebSocket process for a few seconds.
The current WebSocket server is pretty much unusable if there are a few
users connected at the same time:
- Re: While loops blocking send?
<https://groups.google.com/g/ratchet-php/c/Ry4VWh-xnts>
- ratchet event loop getting blocked
<https://stackoverflow.com/questions/38824506/ratchet-event-loop-getting-blocked>
- Is Workerman appropriate for database queries?
<walkor/workerman#1045>
Are we missing something?
Could you please provide some guidance on how to implement long-running
tasks with WebSockets? Or should we get back to the previous API
implementation for the long-running tasks?
🙏 Thank you for the help, it is very much appreciated!
-
My first suggestion would be to split off the report generation into another process/thread so the main event loop isn't blocked. Also, WebSockets and HTTP APIs in general are only about communication and shouldn't be the basis for where work runs. Have a look at message queues like RabbitMQ, or use Redis, for these kinds of things. Also, don't use PHP's built-in functions/classes for database connections; that will block your event loop. Which database are you using?
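Splitting the report off into another process can be sketched with `react/child-process`, which keeps the event loop free while the child runs. This is an illustration under assumptions: `report.php`, the `--id` flag, and the plain-stdout result format are all invented for the example:

```php
<?php
// Sketch: offload a slow report to a child process so the WebSocket
// loop keeps serving real-time traffic. Assumes react/child-process
// is installed; "report.php" and its arguments are hypothetical.
require 'vendor/autoload.php';

use React\ChildProcess\Process;

$process = new Process('php report.php --id=123');
$process->start(); // uses the default ReactPHP loop

$output = '';
$process->stdout->on('data', function ($chunk) use (&$output) {
    $output .= $chunk; // arrives asynchronously; the loop keeps running
});

$process->on('exit', function ($exitCode) use (&$output) {
    // Back in the main process: push the finished report to the
    // requesting WebSocket client here.
    echo "report done (exit $exitCode): $output\n";
});
```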
-
Cees-Jan,
You should never have to turn off garbage collection. If you have to do
that you have a memory leak somewhere causing the issue, some reference to
an object or something that keeps it in memory without needing to.
`gc_disable()` does not turn off garbage collection (completely). It's
specific to circular references, which get more expensive to collect as the
number of objects grows. The more objects you have living in your
application, the slower it gets; that specific act of circular garbage
collection becomes much more expensive to do, and it happens at random. If
you want to avoid `gc_disable()`, then try your best to avoid circular
references. With the JVM you have a choice of how GC is handled and can
pick from Parallel GC, G1 GC, ZGC, etc. PHP's garbage collection has been a
long-standing issue, and while it's a lot better than in the PHP 5.x days,
it's still an issue. I've pushed PHP into gaming, where tight loops are
needed and JIT plus `gc_disable()` are almost necessary. Working with
`gc_disable()` also helps you detect and eliminate code built on circular
references, since otherwise memory usage will quickly get out of hand.
https://www.php.net/manual/en/features.gc.performance-considerations.php
If you read the bottom of that link, it comes to the same conclusion I
have:
*The benefits are most apparent for longer-running scripts, such as lengthy
test suites or daemon scripts. Also, for » PHP-GTK applications that
generally tend to run longer than scripts for the Web, the new mechanism
should make quite a bit of a difference regarding memory leaks creeping in
over time.*
*Also, don't use PHP's built-in functions/classes for database connections;
that will block your event loop.*
If you look at their code, they are using PDO, which is blocking. They are
also using MySQL, so they can use the mysqli extension, which can execute
one async query (at a time) per MySQL connection. They can look for
libraries that leverage mysqli specifically, because it is designed for
async calls:
https://www.php.net/manual/en/mysqli.poll.php
https://www.php.net/manual/en/mysqli.reap-async-query.php
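The mysqli async facility linked above can be sketched like this. The credentials, database, and query are placeholders; in a real server the poll would run from the event loop rather than once, and only one async query can be in flight per connection:

```php
<?php
// Sketch: mysqli's async query support (placeholder credentials/query).
$mysqli = new mysqli('127.0.0.1', 'user', 'pass', 'chess');

// MYSQLI_ASYNC returns immediately instead of blocking on the result.
$mysqli->query('SELECT COUNT(*) FROM games', MYSQLI_ASYNC);

// Poll with a zero timeout so the call returns at once if nothing is
// ready; an event loop would repeat this check on each tick.
$links = $errors = $rejects = [$mysqli];
if (mysqli_poll($links, $errors, $rejects, 0) > 0) {
    foreach ($links as $link) {
        $result = $link->reap_async_query();
        if ($result instanceof mysqli_result) {
            var_dump($result->fetch_row());
            $result->free();
        }
    }
}
```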
This way they don't have to complicate things by introducing another tech
stack just to add a message queue. While that is a good idea at some point,
what I've learned is to keep nothing open... forever. Don't keep
long-running MySQLi connections, don't keep a message queue connection open
forever, etc. Always terminate at some point, whether that's after every
call or after several hours. PHP has really weird issues around some of the
simplest things when long-running. For example, I had a PHP Gearman worker
whose only job was to make a cURL request. After running for months it
could no longer complete any cURL requests; they all timed out. The Gearman
worker was still responsive and quick to take in a new job, but every
response hit the 30-second timeout. It had to be restarted and then it was
fine, so now it terminates after every request. I tend to just stick to
HTTP interactions in my ReactPHP applications, as it's the only reliable
pattern: the connection is opened, the request is made, the connection is
closed. It's simple, and in my long-running use of ReactPHP it has always
proven more reliable.
Thanks,
Joseph Montanez
…On Tue, Oct 1, 2024 at 1:47 PM Cees-Jan Kiewiet ***@***.***> wrote:
You should never have to turn off garbage collection. If you have to do
that you have a memory leak somewhere causing the issue, some reference to
an object or something that keeps it in memory without needing to.
-
Thank you for the help. PHP Chess Server is a flexible asynchronous PHP chess server that supports multiple async PHP frameworks. At the moment it is using Workerman and Ratchet, with Workerman as the default. We'd also like to support AMPHP. This is made possible by a polymorphic, object-oriented WebSocket implementation that provides the chess functionality. The database being used is MySQL with PDO. We're thinking along the lines of solving the concurrency issue using PCNTL functions with the help of spatie/async, agnostic to the async PHP framework.
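The spatie/async approach mentioned above could look roughly like this. It is a sketch, not the project's code: `generateAdHocReport()` and `sendToClient()` are hypothetical placeholders, and integrating the pool with a running event loop is left out:

```php
<?php
// Sketch: run the long-running report in a forked worker (PCNTL via
// spatie/async) so the WebSocket process keeps serving moves.
require 'vendor/autoload.php';

use Spatie\Async\Pool;

$pool = Pool::create();

$pool->add(function () {
    // Runs in a child process, so blocking PDO is safe here.
    return generateAdHocReport(); // hypothetical slow query
})->then(function ($report) {
    // Back in the parent process once the child finishes.
    sendToClient($report); // hypothetical WebSocket push
})->catch(function (Throwable $e) {
    // Handle failure without killing the server process.
});

// wait() blocks until all tasks finish; inside an event loop you would
// instead check the pool periodically, a detail omitted in this sketch.
$pool->wait();
```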
-
Until recently, we had implemented our chess functionality on two different servers: The WebSocket server and the web server. The former was intended to run real-time tasks while the latter hosted a REST-like API for long-running tasks like database queries, ad hoc reporting, and so on, which would take a few seconds to run.
This separation of concerns was perfectly fine.
However, at some point it was decided to get rid of the web server and try out an implementation completely based on WebSockets. All functionality, which is to say real-time operations and long-running operations, was moved to the WebSocket server, mainly because this setup looked simpler and cheaper.
The thing is, whether using Workerman or Ratchet, the staging server has demonstrated that there is something wrong with this setup.
If two users are playing chess online (real-time) while another user is generating an ad hoc report (long-running), the two users playing online will experience a bottleneck because the report generation seems to be blocking the WebSocket process for a few seconds.
The current WebSocket server is pretty much unusable if there are a few users connected at the same time:
Are we missing something?
Could you please provide some guidance on how to implement long-running tasks with WebSockets? Or should we get back to the previous API implementation for the long-running tasks?
🙏 Thank you for the help, it is very much appreciated!