Context and scope
The relayer processes Warp messages from separate chains concurrently, but within a single chain messages are processed serially (see #31). There is therefore a per-chain throughput limit that we should measure and characterize.
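For illustration, here is a minimal Go sketch of that concurrency model: one worker goroutine per source chain drains its queue serially, while separate chains proceed in parallel. The types and names are placeholders, not the relayer's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

// WarpMessage is a placeholder for a relayed message; fields are illustrative.
type WarpMessage struct {
	SourceChainID string
	Payload       []byte
}

// relayChain drains one chain's queue serially: each message must finish
// (including all of its network round trips) before the next one starts.
func relayChain(chainID string, msgs <-chan WarpMessage, wg *sync.WaitGroup) {
	defer wg.Done()
	for msg := range msgs {
		// Placeholder for signature aggregation + destination tx submission.
		fmt.Printf("chain %s: relayed %d-byte message\n", chainID, len(msg.Payload))
	}
}

func main() {
	queues := map[string]chan WarpMessage{
		"C-Chain": make(chan WarpMessage, 16),
		"Subnet1": make(chan WarpMessage, 16),
	}

	var wg sync.WaitGroup
	// Separate chains are processed concurrently: one worker goroutine per chain.
	for chainID, q := range queues {
		wg.Add(1)
		go relayChain(chainID, q, &wg)
	}

	// Enqueue a couple of example messages, then close the queues.
	queues["C-Chain"] <- WarpMessage{SourceChainID: "C-Chain", Payload: []byte("hello")}
	queues["Subnet1"] <- WarpMessage{SourceChainID: "Subnet1", Payload: []byte("world")}
	for _, q := range queues {
		close(q)
	}
	wg.Wait()
}
```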
Discussion and alternatives
Throughput is very likely network bound, as the application-side processing cost is insignificant compared to the number of network round trips required to relay a message. We should aim to answer the following questions (a rough instrumentation sketch follows the list):
Is single message relaying in fact network bound?
What's the breakdown between network latency and application-side processing latency for a single message?
What's the minimum and maximum number of sequential network round trips needed to relay a single message?
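One way to start answering the first two questions is to wrap each relay step in a timer and accumulate separate network and application-side buckets. The steps and durations below are hypothetical stand-ins, not the relayer's real call graph.

```go
package main

import (
	"fmt"
	"time"
)

// timed wraps an arbitrary step and returns how long it took, so network
// round trips and local processing can be accumulated into separate buckets.
func timed(step func() error) (time.Duration, error) {
	start := time.Now()
	err := step()
	return time.Since(start), err
}

func main() {
	var networkTime, processingTime time.Duration

	// Hypothetical relay steps; replace the sleeps with the relayer's real calls.
	steps := []struct {
		name    string
		network bool
		run     func() error
	}{
		{"fetch unsigned message", true, func() error { time.Sleep(40 * time.Millisecond); return nil }},
		{"aggregate signatures", true, func() error { time.Sleep(120 * time.Millisecond); return nil }},
		{"build destination tx", false, func() error { time.Sleep(2 * time.Millisecond); return nil }},
		{"submit tx and await inclusion", true, func() error { time.Sleep(900 * time.Millisecond); return nil }},
	}

	for _, s := range steps {
		d, _ := timed(s.run)
		if s.network {
			networkTime += d
		} else {
			processingTime += d
		}
		fmt.Printf("%-30s %v\n", s.name, d)
	}
	fmt.Printf("network: %v, processing: %v\n", networkTime, processingTime)
}
```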
Open questions
At what load level do concurrent database writes become a bottleneck?
Alongside implementing #31, load testing was performed using https://github.com/ava-labs/as-simulator. The tests were run on Fuji and measured on-chain, end-to-end Teleporter message latency. At a sustained 10 TPS, the measured average latency was ~2 seconds, which is close to optimal given an expected time to finality of ~1s on each of the source and destination chains.
The observed bottleneck in raw throughput was the limit on the number of simultaneous transactions from a single address that AvalancheGo nodes will keep in the mempool before ejecting further transactions. #256 addresses this corner case.
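As a rough illustration of why a per-address pending-transaction cap bites under sustained load, the toy model below assumes a submission rate slightly above the rate at which the chain includes transactions from that address; the rates and the cap value are made up and are not AvalancheGo's actual mempool parameters.

```go
package main

import "fmt"

func main() {
	const (
		submitRate    = 10.0 // relayer txs per second from one address (assumed)
		inclusionRate = 8.0  // txs per second the chain includes for that address (assumed)
		mempoolCap    = 16   // illustrative per-address pending-tx cap, not AvalancheGo's actual value
	)

	// Pending txs from the single address grow whenever submission outpaces inclusion;
	// once the cap is exceeded, further txs from that address are ejected.
	pending := 0.0
	for sec := 1; sec <= 20; sec++ {
		pending += submitRate - inclusionRate
		if pending < 0 {
			pending = 0
		}
		ejecting := pending > mempoolCap
		fmt.Printf("t=%2ds pending≈%4.1f ejecting=%v\n", sec, pending, ejecting)
	}
}
```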
This round of testing provides enough evidence to conclude that the relayer's concurrency model scales well enough that it is unlikely to be the bottleneck in an end-to-end cross-chain system. Closing this ticket as completed. Future profiling and optimization work will be tracked in new tickets with more focused target areas.