Datagram loss increases with decreasing datagram length #1422
6 comments · 10 replies
-
I can answer the second question... the reason you can have 128 KB blocks with TCP tests is that TCP is a stream-oriented protocol. It doesn't preserve the boundaries between blocks (which are basically just chunks of memory passed to a send() system call), so the TCP implementation is free to break that 128 KB send into IP packets of an appropriate size for the network (it looks like each packet carries a maximum of 1460 payload bytes, which is typical for Ethernet networks). UDP operation is different: every "block" sent corresponds to one UDP datagram, which corresponds to one IP packet. So the block size of a UDP test has to fit into a single IP packet. (There might be some fragmentation and reassembly of packets in the IP layer, but that's transparent to the sending and receiving iperf3 processes.)
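To make the stream-vs-datagram distinction concrete, here is a minimal sketch (not iperf3 code; the address 192.168.1.93 is taken from the commands in this thread, and 5201 is iperf3's default port) showing how the same 128 KB send behaves on a TCP socket versus a UDP socket:

```c
/* Sketch only: contrast TCP stream send vs. UDP datagram send.
 * Address and port are placeholders (iperf3's default port is 5201). */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void) {
    static char block[131072];                  /* one 128 KB "block" */
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5201);
    inet_pton(AF_INET, "192.168.1.93", &dst.sin_addr);

    /* TCP: a byte stream. The kernel may slice this 128 KB into many
     * ~1460-byte segments; block boundaries are not preserved. */
    int tcp = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(tcp, (struct sockaddr *)&dst, sizeof(dst)) == 0)
        send(tcp, block, sizeof(block), 0);     /* fine: many segments */
    close(tcp);

    /* UDP: one send == one datagram. 128 KB exceeds the ~65,507-byte
     * UDP payload limit, so the kernel refuses to split it for us. */
    int udp = socket(AF_INET, SOCK_DGRAM, 0);
    if (sendto(udp, block, sizeof(block), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("udp sendto");                   /* expect EMSGSIZE */
    close(udp);
    return 0;
}
```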
-
@bmah888 and @davidBar-On, is the effect of packet size on bitrate and loss an actual issue, or am I misunderstanding something?
-
The lost UDP with
-
@PraptiSarker02, thanks for the information you provided. I still don't understand the reason for the lost packets ... My current guess is that it is related to the Cygwin/Windows UDP implementation, and that for some reason every 2nd packet "send" fails or transmits no data. Currently, if no data is transmitted, iperf3 still increases the packet count, so on the server side it would seem as if packets were lost, although no data is really lost (see PR #1380 with a proposed solution for this issue). The following information can help to further evaluate the issue: …
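For illustration, the general idea behind the fix described above (counting a datagram as sent only when the kernel actually accepted it) could look something like the sketch below. This shows only the concept, not the actual code from PR #1380:

```c
#include <sys/types.h>
#include <sys/socket.h>

/* Sketch of the concept only, not the PR #1380 patch: count a datagram
 * as "sent" only if the kernel accepted the whole payload, so failed
 * sends don't inflate the apparent loss on the server side. */
static long packets_sent = 0;

int send_counted(int sock, const void *buf, size_t len,
                 const struct sockaddr *dst, socklen_t dstlen)
{
    ssize_t n = sendto(sock, buf, len, 0, dst, dstlen);
    if (n == (ssize_t)len) {
        packets_sent++;     /* datagram actually queued for transmission */
        return 0;
    }
    return -1;              /* failed or short send: don't count it */
}
```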
-
I just observed that the total datagrams sent don't match the total datagrams received, although Wireshark says they were received. So my assumption is that the datagrams that were not reported on were dropped due to iperf's processing overhead. But the question that arises here is: is the lost datagram count then just the number of corrupted datagrams that were received, based on the UDP checksum bytes?
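For reference: iperf3 detects UDP loss from sequence numbers it embeds in each datagram's payload, not from checksums. A datagram with a bad UDP checksum is discarded by the kernel and never reaches the application, so it shows up as a sequence gap (a loss), not as a corrupted-but-received datagram. A rough sketch of gap-based counting, illustrative only and not iperf3's actual implementation:

```c
#include <stdint.h>

/* Illustrative sequence-gap accounting: each received datagram carries
 * a sequence number in its payload; gaps are counted as losses. */
static uint64_t next_expected = 0;
static uint64_t lost = 0, out_of_order = 0;

void account(uint64_t seq)
{
    if (seq == next_expected) {
        next_expected = seq + 1;           /* arrived in order */
    } else if (seq > next_expected) {
        lost += seq - next_expected;       /* gap: skipped datagrams */
        next_expected = seq + 1;
    } else {
        out_of_order++;                    /* late arrival */
        if (lost > 0)
            lost--;                        /* it wasn't lost after all */
    }
}
```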
-
If you named the pcap files correctly, then the server got all 600K packets and the missing packets are reported on the client side... That can happen when, for various reasons (usually CPU or network load), Wireshark is not able to capture all packets. I assume this is the case here, and it may be that the client's CPU is overloaded during the test.
As the server received all 600K packets, it seems that the 17% loss may be on the iperf3 server side (and not the client, as I thought before). To evaluate that, it would really help if you could provide the client and server pcap files for the 49% loss case. In addition, as I wrote before, it would help if you ran another test with setting …
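One way to double-check totals like the 600K figure independently of the Wireshark GUI is to count packets in each pcap with the same small tool on both sides. A minimal libpcap sketch (assumes libpcap is installed; compile with -lpcap; the filename argument is whatever you named the captures):

```c
#include <stdio.h>
#include <pcap/pcap.h>

/* Count packets in a capture file, e.g. client.pcap vs. server.pcap. */
int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.pcap\n", argv[0]);
        return 1;
    }
    pcap_t *p = pcap_open_offline(argv[1], errbuf);
    if (p == NULL) {
        fprintf(stderr, "%s\n", errbuf);
        return 1;
    }
    long count = 0;
    struct pcap_pkthdr *hdr;
    const u_char *data;
    while (pcap_next_ex(p, &hdr, &data) == 1)
        count++;
    printf("%s: %ld packets\n", argv[1], count);
    pcap_close(p);
    return 0;
}
```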
-
I am using iperf 3.12.0 on Windows. I know iperf3 isn't officially supported on Windows, but my question is more conceptual than technical, so the OS shouldn't matter.
iperf command used: iperf3 -c 192.168.1.93 -t 15 -w4M -u -b 1G -V
iperf command used: iperf3 -c 192.168.1.93 -t 15 -w4M -l 8000 -u -b 1G -V
Can someone explain why the loss and bitrate decrease when the datagram payload length is increased from 1460 bytes to 8000 bytes?
One further question: how can the block/datagram length be 131072 bytes when doing a TCP test, where the theoretical maximum is around 65535 bytes? (see image below)
iperf command used: iperf3 -c 192.168.1.93 -t 15 -w4M -V
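For a sense of scale (assuming a standard 1500-byte Ethernet MTU): at -b 1G, a 1460-byte payload means about 10^9 / (1460 × 8) ≈ 85,600 send() calls and datagrams per second, each fitting in one IP packet. An 8000-byte payload means only about 10^9 / (8000 × 8) ≈ 15,600 send() calls per second, though the IP layer fragments each 8008-byte UDP datagram (payload plus 8-byte UDP header) into ⌈8008/1480⌉ = 6 packets on the wire. Roughly 5.5× fewer system calls per second leaves the sending and receiving hosts much less loaded per unit of data, which is one plausible reason the measured loss changes so much with -l.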