Did you set this zero latency on one side or on both? Because the effective latency is the maximum of the two. If you have two endpoints, A and B, then the latency in the direction A -> B is the maximum of the SRTO_PEERLATENCY value set on A and the SRTO_RCVLATENCY value set on B.
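For reference, a minimal sketch of how these options are set (the option names are SRT's, but the values and the `configure_latency_A` helper are made up for illustration; error handling omitted, and the options must be set before connecting):

```cpp
#include <srt/srt.h>

// Sketch: latency configuration on endpoint A, in milliseconds.
// SRTO_PEERLATENCY on A is A's proposal for the A -> B direction,
// SRTO_RCVLATENCY on A is A's requirement for the B -> A direction;
// the effective value per direction is the maximum of the two peers' settings.
void configure_latency_A(SRTSOCKET sock)
{
    const int rcv_ms  = 80;   // example value, not a recommendation
    const int peer_ms = 200;  // example value, not a recommendation
    srt_setsockflag(sock, SRTO_RCVLATENCY,  &rcv_ms,  sizeof rcv_ms);
    srt_setsockflag(sock, SRTO_PEERLATENCY, &peer_ms, sizeof peer_ms);
}
```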
Of course, you can set the same latency in both directions, in which case simply set SRTO_LATENCY on both endpoints. The latency currently serves two purposes: it defines when a received packet is delivered to the application, and it defines for how long lost packets preceding it still have a chance to get recovered.

The packet is kept in the buffer up to this packet's PTS, and only up to that time do lost packets preceding it have a chance to get recovered. PTS is more or less defined this way: the base time is taken from packet 0, the first ever received data packet. Then every subsequent packet's ETS is that base time plus the timestamp the sender stamped into the packet (plus the drift correction), and PTS = ETS + LATENCY.
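In pseudo-code form (a sketch only; the names are mine, not SRT's internal identifiers, and all times are in microseconds):

```cpp
#include <cstdint>

// ETS: the time the packet "should" become ready, on the local clock.
// base_time_us - local arrival time of packet 0 (the first received data packet)
// packet_ts_us - the timestamp the sender put into the packet header
// drift_us     - the receiver's current clock-drift estimate
int64_t ets_us(int64_t base_time_us, uint32_t packet_ts_us, int64_t drift_us)
{
    return base_time_us + packet_ts_us + drift_us;
}

// PTS: the time the packet is actually handed over to the application.
// With latency_us == 0 (and drift ignored), PTS collapses to ETS, so the
// packet is released as soon as it is recognized as "on time".
int64_t pts_us(int64_t base_time_us, uint32_t packet_ts_us, int64_t drift_us,
               int64_t latency_us)
{
    return ets_us(base_time_us, packet_ts_us, drift_us) + latency_us;
}
```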
So, if you simply set the latency to 0, packets will be delivered to the application immediately, and lost packets will be dropped immediately upon reception of the packet following the loss, without any chance of recovery. This happens already in the TSBPD thread, so even before the application is informed that a packet is ready to deliver. Also, according to the tests, the RTS of further packets is most likely to be later than their ETS (which is equal to PTS in the case of zero latency, and when we ignore DRIFT). This means that practically every packet will be ready to deliver immediately upon reception, and at that very moment will cause every lost packet preceding it to be dropped. In effect, with zero latency and TLPKTDROP turned on (the default), you will get from the SRT transmission exactly the same as from bare UDP.

You can, of course, turn TLPKTDROP off, but then you risk HOL blocking, which in live mode can even lead to breaking the connection due to receiver buffer overflow. To avoid this, you'd need some mechanism to force dropping lost packets only on request, which doesn't exist, and it's not exactly easy to add.

I'm not sure what you are trying to achieve. TSBPD is required to ensure that the time when an I-frame is delivered to the decoder is at the same distance from the previous I-frame's delivery time as the difference between their PTS values. Without this mechanism, the application would have to ensure that the information is delivered at the right time. And there's still no way to avoid the risk of breaking the transmission when waiting for a packet recovery takes too long.
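If you still want to experiment with that, the switch itself is trivial (a sketch; set before connecting, error handling omitted), but it does nothing to address the HOL-blocking risk described above:

```cpp
#include <srt/srt.h>

// Sketch: disabling too-late packet drop on a socket.
// With TLPKTDROP off (and TSBPD still on), a single unrecovered loss blocks
// delivery of everything behind it and lets the receiver buffer fill up.
void disable_tlpktdrop(SRTSOCKET sock)
{
    const int32_t no = 0;  // boolean options take an int32 0/1
    srt_setsockflag(sock, SRTO_TLPKTDROP, &no, sizeof no);
}
```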
I want the received packets to be delivered to the application as fast as possible (without TSBPD buffering, without retransmission).
Currently, if I turn off TSBPD, it doesn't allow packet gaps, so it waits until the missing packet arrives.
However, I want it to deliver packets to the application directly even if there is a packet gap (thus accepting packet loss).
By doing this, I want each packet's end-to-end latency (= from the time when `srt_sendmsg` is called to the time when the packet is received with `srt_recvmsg`) to always be ~= RTT/2, regardless of the packet loss rate.

My try

`SRTO_TRANSTYPE` = `SRTT_LIVE`, `SRTO_RCVLATENCY` = 0. In this setting, the end-to-end latency is set to be RTT_estimated/2.
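In code, that configuration is roughly this (a sketch; the helper name is made up, error handling omitted, and the options are set before connecting):

```cpp
#include <srt/srt.h>

// Sketch: live mode with zero receiver latency.
void configure_zero_latency_live(SRTSOCKET sock)
{
    const SRT_TRANSTYPE tt = SRTT_LIVE;  // live mode (also the default)
    srt_setsockflag(sock, SRTO_TRANSTYPE, &tt, sizeof tt);

    const int rcv_latency_ms = 0;        // deliver as soon as the packet is "on time"
    srt_setsockflag(sock, SRTO_RCVLATENCY, &rcv_latency_ms, sizeof rcv_latency_ms);
}
```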
Test environment

Packets are sent with `srt_sendmsg`, with some interval in-between.

I have some questions regarding this approach.
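For reference, the sending side of the test is roughly the following sketch (a hypothetical helper, assuming an already-connected live-mode socket; 1316 bytes is the usual live-mode payload size):

```cpp
#include <srt/srt.h>
#include <chrono>
#include <thread>
#include <vector>

// Sketch: send `count` fixed-size payloads with a configurable gap between calls.
void send_test_stream(SRTSOCKET sock, int count, std::chrono::milliseconds interval)
{
    std::vector<char> payload(1316, 'x');
    for (int i = 0; i < count; ++i)
    {
        // ttl = -1: no sender-side drop deadline; inorder is ignored in live mode
        srt_sendmsg(sock, payload.data(), (int)payload.size(), -1, 0);
        std::this_thread::sleep_for(interval);
    }
}
```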
1. When the interval between `srt_sendmsg` calls varies, the end-to-end latency fluctuates. SRT seems to adjust the packet sending period based on the `srt_sendmsg` interval. To the best of my knowledge, this should not happen in LiveCC, since `PKT_SND_PERIOD` = PktSize * 1000000 / MAX_BW (link). So with the same PktSize of 1316, the send period should be constant, and I wonder why this happens (see the small calculation sketch after these questions).
2. If the estimated RTT is longer than the actual RTT, the frame buffering time (until RTT_estimated/2) may still exist. I haven't seen this case yet, but I wonder if it may happen.
3. It doesn't turn off retransmission, so it may send unnecessary NAKs.
Do these drawbacks actually exist?
And would there be a better approach?
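For question 1, this is the arithmetic I'm basing my expectation on (a sketch of the quoted formula, not actual SRT code; MAXBW in bytes per second):

```cpp
#include <cstdint>

// Sending period, in microseconds, per the quoted formula.
// With a fixed packet size and a fixed MAXBW the result is constant,
// which is why the observed fluctuation is surprising.
double pkt_snd_period_us(int pkt_size_bytes, int64_t maxbw_bytes_per_sec)
{
    return pkt_size_bytes * 1000000.0 / static_cast<double>(maxbw_bytes_per_sec);
}
// Example: pkt_snd_period_us(1316, 125000000)  // ~1 Gbps -> about 10.5 us per packet
```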