Currently, the CPU time we report for GC in the gctrace is based on the assumption that each GC worker actually gets the CPU for the full wall-clock duration it runs for. This is a bad assumption on over-subscribed systems. We've seen this internally when users set GOMAXPROCS too high or have other processes in the same cgroup that consume too much CPU and the cgroup throttles execution. This may also be relevant to #19378.
We should consider asking the OS for actual CPU consumed by GC and reporting that. Comparing this with wall-clock time in the gctrace will give both us and users a better sense of any environmental problems causing long GC phases.
We should also consider how GC pacing would be affected by measuring actual CPU time versus wall-clock CPU time.
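The comparison described above could be sketched roughly as follows. This is not the runtime's actual implementation, just a minimal illustration using `getrusage(2)` for process-wide CPU time (a real per-worker measurement would use something like `CLOCK_THREAD_CPUTIME_ID` per thread, and this POSIX call is not available on Windows):

```go
package main

import (
	"fmt"
	"syscall"
	"time"
)

// cpuTime returns this process's total CPU time (user + system) as
// reported by the OS via getrusage(2). This is process-wide and
// POSIX-only; it stands in for a hypothetical per-worker measurement.
func cpuTime() time.Duration {
	var ru syscall.Rusage
	if err := syscall.Getrusage(syscall.RUSAGE_SELF, &ru); err != nil {
		return 0
	}
	user := time.Duration(ru.Utime.Sec)*time.Second +
		time.Duration(ru.Utime.Usec)*time.Microsecond
	sys := time.Duration(ru.Stime.Sec)*time.Second +
		time.Duration(ru.Stime.Usec)*time.Microsecond
	return user + sys
}

func main() {
	wallStart := time.Now()
	cpuStart := cpuTime()

	// Busy work standing in for a GC worker's mark phase.
	sum := 0
	for i := 0; i < 50_000_000; i++ {
		sum += i
	}

	wall := time.Since(wallStart)
	cpu := cpuTime() - cpuStart

	// On an over-subscribed or throttled system, cpu can be far less
	// than wall -- exactly the gap the current gctrace accounting hides.
	fmt.Printf("sum=%d wall=%v cpu=%v\n", sum, wall, cpu)
}
```

Reporting both numbers would let a user see at a glance when a long GC phase is caused by CPU starvation (cgroup throttling, oversubscription) rather than by the GC itself.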
/cc @RLH