immediate_buffer_flush should be in configuration file, defaulting to true. #718

Closed
jflamy opened this issue Feb 7, 2023 · 8 comments · Fixed by #738

Comments

jflamy commented Feb 7, 2023

Expected behavior

Correct performance behavior, similar to Mosquitto or Aedes (less than 20 ms for a message to propagate from emitter to subscriber on localhost).
The parameter immediate_buffer_flush is not listed in the default conf file, and not mentioned in the documentation.
The default value does not seem appropriate for normal use cases.

Actual behavior

Published messages are buffered, resulting in delays of about 1 second.

Steps to reproduce

Run with default configuration.

Minimal yet complete reproducer code (or URL to code) or complete log file

Moquette MQTT version

0.16, 0.17-SNAPSHOT

JVM version (e.g. java -version)

n/a

OS version (e.g. uname -a)

n/a

dentex commented Mar 3, 2023

Nice catch!
By adding props.put(BrokerConstants.IMMEDIATE_BUFFER_FLUSH_PROPERTY_NAME, "true"); I saved about 1 second, as you said.
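For reference, this is roughly how I set it when embedding the broker; the class and package names below are as I recall them from 0.16, so double-check them against the version you use.

import java.util.Properties;

import io.moquette.BrokerConstants;
import io.moquette.broker.Server;
import io.moquette.broker.config.MemoryConfig;

public class LowLatencyBroker {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(BrokerConstants.PORT_PROPERTY_NAME, "1883");
        // Flush on every write instead of waiting for the periodic auto-flush.
        props.put(BrokerConstants.IMMEDIATE_BUFFER_FLUSH_PROPERTY_NAME, "true");

        Server broker = new Server();
        broker.startServer(new MemoryConfig(props));
        Runtime.getRuntime().addShutdownHook(new Thread(broker::stopServer));
    }
}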

andsel (Collaborator) commented Mar 3, 2023

That flag is used to force a flush on every channel write operation.

// in the broker's write path: either flush each write or let writes accumulate
if (brokerConfig.isImmediateBufferFlush()) {
    channelFuture = channel.writeAndFlush(retainedDup);
} else {
    channelFuture = channel.write(retainedDup);
}

This means writes are not batched and the underlying network library cannot make the best use of resources. It also implies that every write is followed by a flush syscall, which under heavy data load could impact performance in terms of events per second.

Flushing on every write obviously reduces latency, so whether to set the flag depends on the use case.
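To illustrate the difference (generic Netty usage, not Moquette code): batched writes accumulate in the channel's outbound buffer and are pushed to the socket by a single flush, while immediate flush pays the cost of a flush per message.

// channel is an io.netty.channel.Channel; msg1..msg3 stand for outgoing MQTT frames.
// Batched: queue several writes, then one flush pushes them out together,
// typically a single syscall for all three messages.
channel.write(msg1);
channel.write(msg2);
channel.write(msg3);
channel.flush();

// Immediate: each message is flushed (and hits the socket) right away,
// lower latency but one flush per message.
channel.writeAndFlush(msg1);
channel.writeAndFlush(msg2);
channel.writeAndFlush(msg3);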

jflamy (Author) commented Mar 3, 2023

Perhaps the buffering delay should be configurable.

andsel (Collaborator) commented Mar 3, 2023

Actually there is an auto-flusher handler in the Netty pipeline that commands a flush after a period (1 second) of write idleness.

pipeline.addLast("autoflush", new AutoFlushHandler(1, TimeUnit.SECONDS));

We could just make it configurable, but what is the expected value range? Max 5 seconds? Does 0 mean "no flush"?
WDYT?
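To make the idea concrete, the pipeline setup could read the interval from the broker configuration, along these lines (the property name and the 0-means-disabled convention are only a sketch, not existing settings):

// config would be Moquette's IConfig (intProp with a default is assumed here);
// AutoFlushHandler is the existing handler shown in the snippet above.
int flushIntervalSeconds = config.intProp("autoflush_interval_seconds", 1);
if (flushIntervalSeconds > 0) {
    pipeline.addLast("autoflush", new AutoFlushHandler(flushIntervalSeconds, TimeUnit.SECONDS));
}
// With 0 the handler is simply not installed, so there is no periodic flush;
// writes are then flushed only by immediate_buffer_flush or when the buffer fills.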

jflamy (Author) commented Mar 3, 2023

That would work, and it would make things more obvious. When switching from Mosquitto or Aedes (neither of which buffers), this Moquette behaviour looks like a bug. I would actually switch the behavior around: I don't think many people do high volume, and they could pick a buffering value that makes sense for their special case. In other words, make it configurable, and make 0 the default :-)

andsel (Collaborator) commented Mar 3, 2023

OK, thanks for the suggestion; I'll try to keep it on the radar for 0.17.
Just one question. When you say:

make it configurable, and make 0 the default

what does 0 mean? "Immediate flush", or "no flush at all, just flush when the write buffers are full"?

jflamy (Author) commented Mar 3, 2023

Same behaviour as Mosquitto or Aedes: a flushing delay of 0 = no buffering, immediate mode. I guess "flush when the buffer is full" could be -1.

hylkevds (Collaborator) commented Mar 4, 2023

Having 0 = immediate flush and -1 = flush when the buffer is full makes sense.
It would be good to be able to configure this in milliseconds. 1 second is a long time in some use cases, but buffering for 100 ms may be beneficial.

Just noticed that #738 does just that :)
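Roughly the semantics that would give (only a sketch with a made-up property name, not necessarily how #738 actually implements it):

// Hypothetical "buffer_flush_millis" setting:
//   0  -> flush on every write (Mosquitto/Aedes-like, lowest latency)
//  -1  -> never auto-flush; rely on the outbound buffer filling up
//  >0  -> batch writes and auto-flush after that many milliseconds of write idle
// config would be Moquette's IConfig; AutoFlushHandler is the existing handler shown above.
int bufferFlushMillis = config.intProp("buffer_flush_millis", 0);
if (bufferFlushMillis > 0) {
    pipeline.addLast("autoflush", new AutoFlushHandler(bufferFlushMillis, TimeUnit.MILLISECONDS));
}
boolean flushEveryWrite = bufferFlushMillis == 0;
// the write path would then use channel.writeAndFlush(...) when flushEveryWrite is true,
// and plain channel.write(...) otherwise (see the isImmediateBufferFlush snippet above).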
