Is there an existing issue for this?
Product
Hot Chocolate
Describe the bug
Subscriptions.Postgres uses the Postgres pg_notify function as the messaging backplane. This DB function has a hard limit of 8000 bytes for the message payload (https://www.postgresql.org/docs/14/sql-notify.html). Currently this limit is not enforced client-side, and if it is exceeded it poisons the subscription system so that the only recovery method is a process restart. Obviously this will lose all in-flight messages not yet written to the backplane.
When messages fail to be written to the DB, they are retried by enqueuing them back into the channel, where they are immediately consumed again. Messages are written in batches of 256 (with the default configuration), so once there are more than that many messages in the channel/queue it is possible that the poison message is not included in a given batch and some messages will still be published.
The message size includes the topic, the payload, and some Hot Chocolate-controlled delimiters, so it can be hard to tell whether a message will be oversized before it is published. Users should be aware of this limit and stay under it (a rough application-side check is sketched below). However, once a message has been published via the ITopicEventSender, Hot Chocolate has no choice but to discard it in order to keep the system running.
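As a rough guard on the application side, one could estimate the encoded payload size before calling ITopicEventSender and refuse to publish anything close to the limit. This is only a sketch under the assumption that the payload is ultimately serialized as JSON by the configured message serializer; the class name and the 7500-byte safety margin are illustrative, not part of Hot Chocolate.

using System.Text;
using System.Text.Json;

public static class SubscriptionPayloadCheck
{
    // Leave headroom below pg_notify's 8000-byte limit, because the topic name
    // and Hot Chocolate's delimiters are added on top of the payload.
    // 7500 is an arbitrary safety margin, not a documented number.
    private const int SafePayloadBytes = 7500;

    public static bool IsProbablySafe<T>(T payload)
    {
        // Assumes the payload ends up JSON-serialized; adjust if a custom
        // message serializer is configured.
        var json = JsonSerializer.Serialize(payload);
        return Encoding.UTF8.GetByteCount(json) <= SafePayloadBytes;
    }
}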
I suggest adding a length check inside the for loop below, as the message is formatted, to ensure it is less than 8000 bytes; if it is not, the message should be discarded with an appropriate diagnostic event. With some refactoring it might be possible to validate the message as it is passed into the ITopicEventSender, but because the relevant encoding only happens as the message is being sent, this may not be practical.
The 8000-byte limit is not configurable in Postgres and is unlikely to change, so this validation can be hard-coded into the PostgresChannelWriter (graphql-platform/src/HotChocolate/Core/src/Subscriptions.Postgres/PostgresChannelWriter.cs, lines 89 to 122 in 14ad505):
// if we cannot send the message we put it back into the channel
foreach (var message in messages)
{
    await _channel.Writer.WriteAsync(message, ct);
}
}
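To illustrate the suggested fix, here is a minimal sketch of the kind of check that could run over each formatted message before it is handed to pg_notify, assuming the writer has the fully formatted string at that point. All names here (PgNotifyPayloadGuard, FilterOversized, onDiscarded) are hypothetical; the real change would live inside PostgresChannelWriter and raise one of its diagnostic events instead of the callback used here.

using System;
using System.Collections.Generic;
using System.Text;

public static class PgNotifyPayloadGuard
{
    // pg_notify's hard payload limit; not configurable in Postgres.
    public const int MaxPayloadBytes = 8000;

    // Returns the formatted messages that fit within the limit. Oversized
    // messages are reported and dropped rather than re-enqueued, so a single
    // poison message cannot keep the retry loop spinning forever.
    public static List<string> FilterOversized(
        IEnumerable<string> formattedMessages,
        Action<string, int> onDiscarded)
    {
        var accepted = new List<string>();

        foreach (var message in formattedMessages)
        {
            // Measure the payload exactly as it will be sent: the UTF-8 byte
            // count of the already formatted topic + delimiters + payload.
            var byteCount = Encoding.UTF8.GetByteCount(message);

            if (byteCount > MaxPayloadBytes)
            {
                onDiscarded(message, byteCount);
                continue;
            }

            accepted.Add(message);
        }

        return accepted;
    }
}

Dropping the oversized message here, rather than putting it back into the channel, is what breaks the endless retry cycle described above.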
Symptoms of this issue include elevated network and CPU usage on the database server. In my case we were observing roughly 20k/sec to the database, resulting in significant logging in the database error log.
Steps to reproduce
1. Publish a message of more than 8000 bytes via ITopicEventSender.
2. The system is now broken; no subsequent publish will make it to the database.
Relevant log output
(Postgres logs, not Hot Chocolate logs)
2023-11-01 19:28:10 UTC:10.11.0.228(48950):user@db:[22886]:STATEMENT: SELECT pg_notify($3, $4)
2023-11-01 19:28:10 UTC:10.11.0.228(48950):user@db:[22886]:ERROR: payload string too long
Additional Context?
No response
Version
13.7