From 27c9632acdb0982e4d1a76a93e075272a442dfa1 Mon Sep 17 00:00:00 2001
From: Dante Stancato <45296507+dantecit0@users.noreply.github.com>
Date: Sat, 27 May 2023 08:44:48 +0200
Subject: [PATCH 1/2] Update expressroute-troubleshooting-network-performance.md

Added a note obtained from a recent case about throughput performance over a
long-distance link (from a Denver DC to a Virginia ExpressRoute circuit). By
tuning the TCP window as described below, we could obtain more than 200 Mbps
in a single session over a distance of about 2,500 km. I can't retest all of
the values and update the table accordingly, because I don't have a 10-Gbps
ExpressRoute circuit available for testing:

[!NOTE]
While these numbers should be taken into consideration, they were tested using AzureCT which is based in IPERF in Windows via PowerShell. In this scenario, IPERF does not honor default Windows TCP options for Scaling Factor, and uses a way lower Shift Count for the TCP Window size. The numbers represented here were performed using default IPERF values and are for general reference only. By tuning IPERF commands with "-w" switch and a big TCP Window size, better throughput can be obtained over long distances, showing significantly better throughput figures. Also, to ensure an ExpressRoute is using the full bandwidth, it's ideal to run the IPERF in multi threaded option from multiple machines simultaneously to ensure computing capacity is able to reach maximum link performance and is not limited by processing capacity of a single VM.
---
 .../expressroute-troubleshooting-network-performance.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/articles/expressroute/expressroute-troubleshooting-network-performance.md b/articles/expressroute/expressroute-troubleshooting-network-performance.md
index 22590fc7fe13..617eaba94b2d 100644
--- a/articles/expressroute/expressroute-troubleshooting-network-performance.md
+++ b/articles/expressroute/expressroute-troubleshooting-network-performance.md
@@ -207,6 +207,9 @@ Test setup:
 
 \* The latency to Brazil is a good example where the straight-line distance significantly differs from the fiber run distance. The expected latency would be in the neighborhood of 160 ms, but is actually 189 ms. The difference in latency would seem to indicate a network issue somewhere. But the reality is the fiber line doesn't go to Brazil in a straight line. So you should expect an extra 1,000 km or so of travel to get to Brazil from Seattle.
 
+>[!NOTE]
+>While these numbers should be taken into consideration, they were tested using AzureCT which is based in IPERF in Windows via PowerShell. In this scenario, IPERF does not honor default Windows TCP options for Scaling Factor, and uses a way lower Shift Count for the TCP Window size. The numbers represented here were performed using default IPERF values and are for general reference only. By tuning IPERF commands with "-w" switch and a big TCP Window size, better throughput can be obtained over long distances, showing significantly better throughput figures. Also, to ensure an ExpressRoute is using the full bandwidth, it's ideal to run the IPERF in multi threaded option from multiple machines simultaneously to ensure computing capacity is able to reach maximum link performance and is not limited by processing capacity of a single VM.
+
 ## Next steps
 
 - Download the [Azure Connectivity Toolkit](https://aka.ms/AzCT)
@@ -216,4 +219,4 @@ Test setup:
 [Performance Doc]: https://github.com/Azure/NetworkMonitoring/blob/master/AzureCT/PerformanceTesting.md
 [Availability Doc]: https://github.com/Azure/NetworkMonitoring/blob/master/AzureCT/AvailabilityTesting.md
 [Network Docs]: ../index.yml
-[Ticket Link]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview
\ No newline at end of file
+[Ticket Link]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview

From 17dd809d85f7030a856b7d238333efa5d8c6d33c Mon Sep 17 00:00:00 2001
From: Jak Koke
Date: Wed, 5 Jul 2023 14:10:05 -0700
Subject: [PATCH 2/2] Update articles/expressroute/expressroute-troubleshooting-network-performance.md

---
 .../expressroute-troubleshooting-network-performance.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/articles/expressroute/expressroute-troubleshooting-network-performance.md b/articles/expressroute/expressroute-troubleshooting-network-performance.md
index 617eaba94b2d..f863fa81d21c 100644
--- a/articles/expressroute/expressroute-troubleshooting-network-performance.md
+++ b/articles/expressroute/expressroute-troubleshooting-network-performance.md
@@ -208,7 +208,7 @@ Test setup:
 \* The latency to Brazil is a good example where the straight-line distance significantly differs from the fiber run distance. The expected latency would be in the neighborhood of 160 ms, but is actually 189 ms. The difference in latency would seem to indicate a network issue somewhere. But the reality is the fiber line doesn't go to Brazil in a straight line. So you should expect an extra 1,000 km or so of travel to get to Brazil from Seattle.
 
 >[!NOTE]
->While these numbers should be taken into consideration, they were tested using AzureCT which is based in IPERF in Windows via PowerShell. In this scenario, IPERF does not honor default Windows TCP options for Scaling Factor, and uses a way lower Shift Count for the TCP Window size. The numbers represented here were performed using default IPERF values and are for general reference only. By tuning IPERF commands with "-w" switch and a big TCP Window size, better throughput can be obtained over long distances, showing significantly better throughput figures. Also, to ensure an ExpressRoute is using the full bandwidth, it's ideal to run the IPERF in multi threaded option from multiple machines simultaneously to ensure computing capacity is able to reach maximum link performance and is not limited by processing capacity of a single VM.
+>While these numbers should be taken into consideration, they were tested using AzureCT which is based in IPERF in Windows via PowerShell. In this scenario, IPERF does not honor default Windows TCP options for Scaling Factor and uses a much lower Shift Count for the TCP Window size. The numbers represented here were performed using default IPERF values and are for general reference only. By tuning IPERF commands with `-w` switch and a big TCP Window size, better throughput can be obtained over long distances, showing significantly better throughput figures. Also, to ensure an ExpressRoute is using the full bandwidth, it's ideal to run the IPERF in multi-threaded option from multiple machines simultaneously to ensure computing capacity is able to reach maximum link performance and is not limited by processing capacity of a single VM.
 
 ## Next steps
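
A minimal sketch of the tuning the proposed note describes: raise the iperf TCP window with `-w` to at least the path's bandwidth-delay product, and add parallel streams with `-P` (or run clients from several machines) so a single flow or a single VM's processing capacity doesn't become the bottleneck. The round-trip time, server address, stream count, and window size below are illustrative assumptions, not measurements from the Denver-to-Virginia case; only the roughly 200 Mbps over ~2,500 km figure comes from the commit message above.

```powershell
# Illustrative only: size the TCP window from the bandwidth-delay product (BDP),
# then pass it to iperf3. All values here are placeholder assumptions.
$targetMbps = 200      # throughput goal (the commit message cites ~200 Mbps in one session)
$rttMs      = 50       # assumed round-trip time; measure your own RTT first

# BDP (bytes) = rate (bits/s) * RTT (s) / 8
$bdpBytes = $targetMbps * 1e6 * ($rttMs / 1000) / 8
"Bandwidth-delay product: {0:N0} bytes (~{1:N2} MB)" -f $bdpBytes, ($bdpBytes / 1MB)

# On the remote test VM, start an iperf3 server:  .\iperf3.exe -s
# On the local VM, request a window at or above the BDP (-w) and a few
# parallel streams (-P); 10.0.0.4 is a placeholder server address.
.\iperf3.exe -c 10.0.0.4 -w 2M -P 4 -t 30
```

A window at or slightly above the BDP lets one TCP stream keep a long path full; the parallel streams, or clients on more than one machine as the note suggests, cover the case where a single flow or a single VM's CPU tops out before the circuit does.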