This repository has been archived by the owner on Jan 29, 2023. It is now read-only.

QNEthernet higher latency #38

Closed
5b4wn opened this issue Feb 26, 2022 · 10 comments

@5b4wn

5b4wn commented Feb 26, 2022

First, many thanks for an awesome library!

I'd like to report a bug and a possible solution. I'm not sure whether it introduces any other issues, but it seems to have fixed the problem for me.

Describe the bug

When using QNEthernet with a WebSocket server on a Teensy 4.1, sending data with client.send shows roughly 5-6x the latency of NativeEthernet.

Reading https://github.com/ssilverman/QNEthernet#write-immediacy

It seems that client.write waits for a timeout or for enough data to accumulate before sending, which adds about 150 ms of delay to each call.

This change in Teensy41_QNEthernet_tcp.hpp seems to correct the problem:

void send(const WSString& data) override
{
  client.write(reinterpret_cast<uint8_t*>(const_cast<char*>(data.c_str())), data.size());
  client.flush();
}

void send(const WSString&& data) override
{
  client.write(reinterpret_cast<uint8_t*>(const_cast<char*>(data.c_str())), data.size());
  client.flush();
}

void send(const uint8_t* data, const uint32_t len) override
{
  client.write(data, len);
  client.flush();
}

BW
Marios

@khoih-prog
Owner

khoih-prog commented Feb 26, 2022

Hi @5b4wn

Thanks for the bug report as well as the good solution, which I'll add to the new release.

Could you also try QNEthernet's new and better client.writeFully() instead of client.write(), together with client.flush(), to see if there is any improvement? @ssilverman, do you have any suggestions here?

It would be even better if you made a new PR for this enhancement.

void send(const WSString& data) override
{
  //client.write(reinterpret_cast<uint8_t*>(const_cast<char*>(data.c_str())), data.size());
  client.writeFully(reinterpret_cast<uint8_t*>(const_cast<char*>(data.c_str())), data.size());
  client.flush();
}

void send(const WSString&& data) override
{
  //client.write(reinterpret_cast<uint8_t*>(const_cast<char*>(data.c_str())), data.size());
  client.writeFully(reinterpret_cast<uint8_t*>(const_cast<char*>(data.c_str())), data.size());
  client.flush();
}

void send(const uint8_t* data, const uint32_t len) override
{
  //client.write(data, len);
  client.writeFully(data, len);
  client.flush();
}

I'm waiting for your new tests and reply.

Regards,

@khoih-prog
Owner

Hi @5b4wn

Just tested the modifications using client.writeFully(), and they work OK.

Can you test with your own code and use case, and make sure everything is OK and better?

@5b4wn
Author

5b4wn commented Feb 26, 2022 via email

@khoih-prog
Owner

khoih-prog commented Feb 26, 2022

that seems to work OK

Good news. Will you make a PR, or shall I do it? Either way, your contribution will be noted.

Of note is that I find QNEthernet (since @ssilverman's suggestion) more reliable than NativeEthernet for multiple sockets on the Teensy 4.1.

I agree it's more reliable for many more applications, and I have spent more time on that new and better QNEthernet library.

Is websockets2 potentially capable of supporting wss as a server (at least on the Teensy 4.1)?

Currently a ws server is OK, but not wss. I'm also very reluctant, even opposed, to doing so: a wss server requires much more processing power to handle the TLS encryption/decryption, and the certificates have to be checked and updated regularly to stay safe.

I suggest you use an RPi 3 or 4, or some other cheap embedded computer running Linux (Raspbian, Ubuntu, etc.), for that, without the headache.

khoih-prog added a commit that referenced this issue Feb 27, 2022
### Release v1.10.1

1. Reduce QNEthernet latency. Check [QNEthernet higher latency #38](#38)
2. Update `Packages' Patches`
@khoih-prog
Owner

Hi @5b4wn

The new WebSockets2_Generic release v1.10.1 has just been published. Your contribution is noted in Contributions and Thanks.

I'm looking forward to receiving more of your contributions (bug reports, enhancements, PRs, etc.).

Best Regards,


Release v1.10.1

  1. Reduce QNEthernet latency. Check QNEthernet higher latency #38
  2. Update Packages' Patches

@ssilverman
Contributor

ssilverman commented Feb 27, 2022

In my view, there should be an option to not send all data immediately. It’s not something that’s always desired. I would add a flush() function to TcpClient so that it gives users an option. Right now, always flushing in each send() means there’s no option to accumulate data.

It’s not the case that QNEthernet has high latency, it’s that there’s no option in the websockets library to flush data that’s been accumulated. It’s true that some libraries always send data immediately, but I wanted to provide both options: one option to accumulate and another to send immediately.

Another option (and I haven’t played with it much) is to disable Nagle’s algorithm with a call to EthernetClient::setNoDelay(true). That might remove the need for flush().

@5b4wn
Author

5b4wn commented Feb 27, 2022

Hi @ssilverman
Thanks for the input. I tried EthernetClient::setNoDelay(true) and the latency is still high, i.e. the same as without flush().
Is there a way to add a method to QNEthernet, such as writeFullyFlush(), that calls flush() after writeFully()?

OR

@khoih-prog is there a way to add another method such as sendImmediately(), or an optional flush parameter in send()?
e.g.

void send(const uint8_t* data, const uint32_t len, bool flush=false) override
{
  client.writeFully(data, len);
  if (flush) {
    client.flush();
  }
}

or

void flush() {
  client.flush();
}

@ssilverman
Contributor

ssilverman commented Feb 27, 2022

Thanks for trying out setNoDelay(true).

My opinion is that, if it’s decided to provide an optional flush-after-write, it’s better to have two functions rather than one with a Boolean because Booleans, when reading the code, don’t have an obvious meaning. Some options:

  1. send() and flush()
  2. sendWithFlush()/sendNow() and send()
  3. sendNoFlush() and send()/sendNow()

etc.
It depends how TcpClient::send() is documented and what user expectations are. Look at the Arduino-style Print API as an example: there are write() functions and a flush() function. Most I/O libraries I've used operate this way: "send/write something with the expectation it may be buffered," then "flush all buffered data." This is why I prefer option 1, send()/write() and flush().

But the word “send” does seem to imply that the data actually gets sent, so if that was the real intent behind this function, maybe option 3 would better fit expectations of the names.

I also like your sendImmediately() suggestion (sendNow() would be shorter, but that’s a personal preference). That’s similar to options 2 and 3.

@khoih-prog
Owner

My opinion is that, if it’s decided to provide an optional flush-after-write, it’s better to have two functions rather than one with a Boolean because Booleans, when reading the code, don’t have an obvious meaning.

I also like your sendImmediately() suggestion (sendNow() would be shorter, but that’s a personal preference). That’s similar to options 2 and 3.

void send(const uint8_t* data, const uint32_t len, bool flush=false) override

In a normal, independent library, I agree it's better to create new functions with correct names. But this enhancement, IMO, has to follow the standard way, good or bad, so that users can have portable code across different settings.

That's why I agree with @5b4wn to use the same send(), but with the default flush = true for better performance. In the MCU world, I believe this is the better and more efficient way: send as fast as possible, since we don't need, and don't have, a large buffer in which to store data and wait.

The best way to do this, IMO, is inside the QNEthernet library, with write() auto-flushing after a short delay (similar to NativeEthernet ???).

Anyway, I prefer to keep things as they are now, until users report a real issue showing the current implementation is buggy.

@ssilverman
Contributor

ssilverman commented Feb 28, 2022

There isn’t a “standard” way, nor is one way “good” and the other “bad”. Nor does one way provide “better performance”; it depends what you’re optimizing for. Some libraries I use (e.g. implementations of Arduino’s own Print class) buffer up to some amount and then send when the buffer fills or when flush() is called. That’s why the flush() function exists. Some do it the other way.

This isn’t my library so I don’t really have a say, but my opinion is that the library should support both approaches, and it sounds like you’re doing that with an additional Boolean flush parameter. If you want the default to be to always flush then that’s the way it’s going to be. Every use case is different, and I’m only advocating that both ways are supported, not for a specific one. Both flush-by-default and don’t-flush-by-default have their use cases. The API design is up to you.

One reason to flush for every call is it avoids delays. One reason to not flush for every call is that it reduces the network traffic, which is desirable in other cases.

It’s not appropriate for QNEthernet to flush on every write() call as that’s not the intent of the Print Interface nor is it something I want the library to do. It’s a building block for a larger system, and that decision is up to the higher layers, one example being this websockets library. If the designer of this library and its users prefer to always flush upon write(), then that’s possible and enabled by QNEthernet. If that option is not preferable, then QNEthernet enables that too. There’s choice.

Bottom line: I’m glad you’re using my library and I’m glad you’re using it in a manner that works for you. We all want to make great software here.

Side point: QNEthernet does send buffered traffic if it hasn’t been flushed after a short time. That time depends on how the underlying lwIP stack is configured.
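As a hedged pointer for readers who want to tune that "short time": lwIP's periodic TCP processing is governed by the TCP_TMR_INTERVAL option (a real lwIP macro, defaulting to 250 ms in stock lwIP). The exact knob, where it is overridden, and its precise effect on unflushed output should be verified against your specific lwIP/QNEthernet version; this fragment is illustrative only.

```c
/* lwipopts.h -- illustrative only; verify against your lwIP/QNEthernet
   version. TCP_TMR_INTERVAL (default 250 ms in stock lwIP) sets how
   often lwIP's periodic TCP timer runs. */
#define TCP_TMR_INTERVAL 250
```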
