This repository has been archived by the owner on Oct 17, 2022. It is now read-only.
Logs fill up with 'available tasks is 0' upon network saturation #759
Labels: bug (Something isn't working)
Comments
huitseeker added a commit to huitseeker/narwhal that referenced this issue on Aug 14, 2022:
We operate an executor with a bound on the concurrent number of messages (see MystenLabs#463, MystenLabs#559, MystenLabs#706). We expect the executors to operate for a long time at this limit (e.g. in a recovery situation), so the spammy logging is not useful. This removes the logging of the concurrency bound being hit. Fixes MystenLabs#759
huitseeker added a commit to huitseeker/narwhal that referenced this issue on Aug 14, 2022:
We operate an executor with a bound on the concurrent number of messages (see MystenLabs#463, MystenLabs#559, MystenLabs#706). PR MystenLabs#472 added logging for the bound being hit. We expect the executors to operate for a long time at this limit (e.g. in a recovery situation), so the spammy logging is not useful. This removes the logging of the concurrency bound being hit. Fixes MystenLabs#759
Further commits with the same message referenced this issue: added by huitseeker on Aug 15, 2022 and Aug 16, 2022 (in MystenLabs/narwhal and huitseeker/narwhal, via MystenLabs#763), and pushed by mwtian to mwtian/sui on Sep 30, 2022.
Description
The log files fill up, spammed with the message 'available tasks is 0'.
Explanation from @huitseeker:
It means that the network has reached its maximum number of concurrent network messages. Further messages are queued and will be sent as semaphore tickets become available.
A network sending attempt can move off the semaphore in 3 ways:
Steps to reproduce
Run a docker-compose setup with multiple Narwhal nodes. Saturate the network so that the send queue fills up.
Possible solutions:
Have the network throttling code not spam a warning every time the bound is hit -- for example, set a flag and warn only once out of every N invocations.