core(lantern): divide throughput only on network node count #14564

Merged (2 commits) on Nov 29, 2022

Conversation

brendankenny (Member)

Fixes a bug in how the simulator divides throughput across network nodes. Previously it was counting any in-progress CPU tasks in the number of requests currently being made.

Since the simulator only simulates a single thread, this was only ever off by (at most) 1, but that can be significant, especially in early page load where the request count is usually lower (e.g. with two active requests and a busy CPU, each request is allocated 1/3 of the available throughput instead of 1/2).

Looking at the updated tests gives a good overview: every simulated load metric gets a little faster, to varying degrees. The tests may show more pronounced improvements than many real pages because they tend to have fewer requests.

Impact on lantern test accuracy is mixed, but the current behavior is incorrect, so... :)

@brendankenny brendankenny requested a review from a team as a code owner November 28, 2022 20:38
@brendankenny brendankenny requested review from adamraine and removed request for a team November 28, 2022 20:38
connorjclark (Collaborator) left a comment

nice find!

```js
for (const connection of this._connectionPool.connectionsInUse()) {
  connection.setThroughput(this._throughput / this._nodes[NodeState.InProgress].size);
}
```
brendankenny (Member, Author) replied on Nov 28, 2022:

For the historical record: the reason it did this is that the original prototype graph only had network nodes, not CPU ones. @paulirish actually noted at the time that we should make sure not to keep including all active nodes once CPU nodes were added to the graph, but we did not remember.

5 participants