
[Perf -11%] System.Collections.Tests.Add_Remove_SteadyState<Int32>.ConcurrentQueue #43905

Closed
DrewScoggins opened this issue Oct 27, 2020 · 10 comments

DrewScoggins commented Oct 27, 2020

Run Information

Architecture: x64
OS: Windows 10.0.18362
Baseline: c9d3d3e60dd9bf049c47d39895f1c2d4fddeb5df
Compare: 095539fd800775fc9542a07f411cbc1b7e72084e
Diff: Diff

Regressions in System.Collections.Tests.Add_Remove_SteadyState

| Benchmark | Baseline | Test | Test/Base | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ConcurrentQueue | 20.72 ns | 23.01 ns | 1.11 | 124.02802102440427 | 142.22494413403405 | 1.1467162255701013 | Trace | Trace |

Graph: Historical Data in Reporting System

Repro

```
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f netcoreapp5.0 --filter 'System.Collections.Tests.Add_Remove_SteadyState<Int32>*'
```
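
For reference, a minimal sketch of what this benchmark measures (an approximation for illustration, not the actual dotnet/performance source; the class and member names here are invented): the queue is pre-filled with Count items, and each iteration dequeues one item and enqueues it back, so the collection stays at a steady size.

```csharp
using System.Collections.Concurrent;

// Sketch of Add_Remove_SteadyState<T>.ConcurrentQueue (hypothetical names):
// the queue is filled once during setup; each iteration then removes one
// item and adds one back, keeping the collection at a steady state.
public class AddRemoveSteadyStateSketch
{
    private const int Count = 512; // matches the (Count: 512) case above
    private readonly ConcurrentQueue<int> _queue = new ConcurrentQueue<int>();

    public AddRemoveSteadyStateSketch()
    {
        for (int i = 0; i < Count; i++)
            _queue.Enqueue(i);
    }

    // One measured iteration: a dequeue/enqueue pair at steady state.
    public void ConcurrentQueueIteration()
    {
        _queue.TryDequeue(out int item);
        _queue.Enqueue(item);
    }
}
```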

Histogram

System.Collections.Tests.Add_Remove_SteadyState.ConcurrentQueue(Count: 512)

```
[20.662 ; 21.152) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[21.152 ; 21.643) |
[21.643 ; 22.297) |
[22.297 ; 22.843) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[22.843 ; 23.407) | @@
```

Docs

Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository

@DrewScoggins added the arch-x86, os-windows, tenet-performance, tenet-performance-benchmarks, and arch-x64 labels on Oct 27, 2020
@Dotnet-GitSync-Bot added the area-System.Collections and untriaged labels on Oct 27, 2020
ghost commented Oct 27, 2020

Tagging subscribers to this area: @eiriktsarpalis, @jeffhandley
See info in area-owners.md if you want to be subscribed.

@DrewScoggins commented:

We are also seeing regressions in other similar tests across the same commit diff. I will be posting them below.


DrewScoggins commented Oct 27, 2020

Run Information

Architecture: x86
OS: Windows 10.0.18362
Baseline: c9d3d3e60dd9bf049c47d39895f1c2d4fddeb5df
Compare: 095539fd800775fc9542a07f411cbc1b7e72084e
Diff: Diff

Regressions in System.Collections.Concurrent.Count

| Benchmark | Baseline | Test | Test/Base | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Queue_EnqueueCountDequeue | 28.06 ns | 30.47 ns | 1.09 | 191.38764338788695 | 209.92582783749634 | 1.0968619714494205 | Trace | Trace |

Graph: Historical Data in Reporting System

Repro

```
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f netcoreapp5.0 --filter 'System.Collections.Concurrent.Count<Int32>*'
```
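
For reference, a rough sketch of the measured pattern (an approximation for illustration; the names are invented, not the dotnet/performance source): on a queue pre-filled to Size, each iteration enqueues one item, reads Count, and dequeues one item, so the Count read is the interesting part of the work.

```csharp
using System.Collections.Concurrent;

// Sketch of Count.Queue_EnqueueCountDequeue (hypothetical names): the queue
// is pre-filled to Size during setup; each iteration enqueues, reads Count,
// then dequeues, so the size stays constant across iterations.
public class QueueCountSketch
{
    private const int Size = 512; // matches the (Size: 512) case above
    private readonly ConcurrentQueue<int> _queue = new ConcurrentQueue<int>();

    public QueueCountSketch()
    {
        for (int i = 0; i < Size; i++)
            _queue.Enqueue(i);
    }

    // One measured iteration: enqueue, read Count, dequeue.
    public int EnqueueCountDequeue()
    {
        _queue.Enqueue(42);
        int count = _queue.Count;
        _queue.TryDequeue(out _);
        return count;
    }
}
```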

Histogram

System.Collections.Concurrent.Count.Queue_EnqueueCountDequeue(Size: 512)

```
[27.435 ; 28.034) | @
[28.034 ; 29.015) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[29.015 ; 29.815) |
[29.815 ; 30.746) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
```

Docs

Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository

@DrewScoggins commented:

Run Information

Architecture: x64
OS: Windows 10.0.18362
Baseline: c9d3d3e60dd9bf049c47d39895f1c2d4fddeb5df
Compare: 095539fd800775fc9542a07f411cbc1b7e72084e
Diff: Diff

Regressions in System.Threading.Channels.Tests.UnboundedChannelPerfTests

| Benchmark | Baseline | Test | Test/Base | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TryWriteThenTryRead | 44.32 ns | 47.24 ns | 1.07 | 266.96108897183603 | 284.0653911793766 | 1.0640703942039473 | Trace | Trace |

Repro

```
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f netcoreapp5.0 --filter 'System.Threading.Channels.Tests.UnboundedChannelPerfTests*'
```
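
For reference, a minimal sketch of the measured pattern (an approximation for illustration; the exact setup in dotnet/performance may differ): a single unbounded channel where each iteration synchronously writes one item and immediately reads it back.

```csharp
using System.Threading.Channels;

// Sketch of UnboundedChannelPerfTests.TryWriteThenTryRead (approximate):
// one unbounded channel; each iteration does a synchronous write
// immediately followed by a synchronous read of the same item.
public class UnboundedChannelSketch
{
    private readonly Channel<int> _channel = Channel.CreateUnbounded<int>();

    // One measured iteration: TryWrite then TryRead on the same channel.
    public bool TryWriteThenTryRead()
    {
        _channel.Writer.TryWrite(42);
        return _channel.Reader.TryRead(out _);
    }
}
```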

Histogram

System.Threading.Channels.Tests.UnboundedChannelPerfTests.TryWriteThenTryRead

```
[44.027 ; 45.843) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[45.843 ; 47.685) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
```

Docs

Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository


DrewScoggins commented Oct 27, 2020

Run Information

Architecture: x64
OS: Windows 10.0.18362
Baseline: c9d3d3e60dd9bf049c47d39895f1c2d4fddeb5df
Compare: 095539fd800775fc9542a07f411cbc1b7e72084e
Diff: Diff

Regressions in System.Collections.Concurrent.Count

| Benchmark | Baseline | Test | Test/Base | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Queue_EnqueueCountDequeue | 28.61 ns | 35.81 ns | 1.25 | 226.0080356935066 | 242.67086925105576 | 1.0737267305846856 | Trace | Trace |

Graph: Historical Data in Reporting System

Repro

```
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f netcoreapp5.0 --filter 'System.Collections.Concurrent.Count<String>*'
```

Histogram

System.Collections.Concurrent.Count.Queue_EnqueueCountDequeue(Size: 512)

```
[28.174 ; 29.284) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[29.284 ; 30.044) |
[30.044 ; 31.515) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[31.515 ; 32.626) | @@@@
[32.626 ; 33.736) |
[33.736 ; 34.847) |
[34.847 ; 36.362) | @
[36.362 ; 37.535) |
[37.535 ; 38.646) | @
[38.646 ; 39.847) | @
```

Docs

Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository

@DrewScoggins commented:

My only guess is that this is related to #38225.


naricc commented Oct 28, 2020

For the Mono interpreter runs, it looks like there are actually two regressions in there: one caused by #38225 and another possibly caused by dd23c81.

I am going to create a separate issue for that.

@eiriktsarpalis removed the untriaged label on Oct 30, 2020
@danmoseley commented:

@kouvel your commit 2a234f9 was during this time -- the commit message says it is all off by default though, is that right?

@kouvel added this to the 6.0.0 milestone on Nov 30, 2020
@kouvel self-assigned this on Nov 30, 2020

kouvel commented Nov 30, 2020

The change was enabled by default in coreclr in a subsequent commit. Part of the regression may be the same issue as in #44211, I'll take a look.


kouvel commented Aug 4, 2021

I'm not sure which change would have caused the regression. It seems unlikely that #38225 caused it, since the portable thread pool was not enabled by that change, and the tests currently perform similarly with and without it enabled. In any case, the large regressions appear to have recovered in all of the tests above, and #46120 seems to have improved perf in most of them (maybe due to better code gen rather than the spin-waiting change).
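
For anyone reproducing the with/without comparison mentioned above, one way to toggle the portable thread pool is the runtime config environment variable shown below; treat the exact switch name and values as an assumption and verify against the runtime docs for your build.

```
:: Assumption: COMPlus_ThreadPool_UsePortableThreadPool toggles the portable
:: thread pool (0 = disabled, 1 = enabled) on .NET 5/6-era builds.
set COMPlus_ThreadPool_UsePortableThreadPool=0
py .\performance\scripts\benchmarks_ci.py -f netcoreapp5.0 --filter 'System.Threading.Channels.Tests.UnboundedChannelPerfTests*'
```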

@kouvel closed this as completed on Aug 4, 2021
@ghost locked this as resolved and limited conversation to collaborators on Sep 3, 2021