I found an issue when using automaxprocs in Kubernetes pods managed and autoscaled by a VPA.
For containers with a fractional CPU limit between 1 and 2 cores, the current implementation rounds GOMAXPROCS down to 1, which means the container will never use more than one CPU-second per second because it has a single active thread, and the VPA won't scale it up because the pod still appears to have spare capacity.
For some components that depend on automaxprocs, like Prometheus, it would be helpful to have an option to round the quota up instead of down, so that the process can exceed the autoscaling threshold and trigger a scale-up.
Technically, any fractional number of CPUs would have the same problem, but a higher number of cores increases the probability that the VPA threshold is below the quota rounded down. The problem is more frequent in pods with fractional CPUs between 1 and 2.
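The difference between the two rounding modes can be sketched in a few lines. The helper names below (`floorQuota`, `ceilQuota`) are hypothetical and are not part of automaxprocs' API; this only illustrates how a round-up option would map a fractional CFS quota to GOMAXPROCS:

```go
package main

import (
	"fmt"
	"math"
)

// floorQuota mirrors the current behavior: a 1.5-CPU quota
// yields GOMAXPROCS=1, capping the process at one core.
func floorQuota(quota float64) int {
	n := int(math.Floor(quota))
	if n < 1 {
		n = 1 // GOMAXPROCS must be at least 1
	}
	return n
}

// ceilQuota rounds up instead: a 1.5-CPU quota yields GOMAXPROCS=2,
// so the process can reach its quota and cross the VPA threshold.
func ceilQuota(quota float64) int {
	n := int(math.Ceil(quota))
	if n < 1 {
		n = 1
	}
	return n
}

func main() {
	for _, q := range []float64{0.5, 1.0, 1.5, 2.5} {
		fmt.Printf("quota=%.1f floor=%d ceil=%d\n", q, floorQuota(q), ceilQuota(q))
	}
	// quota=1.5 floor=1 ceil=2 is the problematic case described above.
}
```

With floor rounding, every quota in (1, 2) collapses to a single thread; with ceil rounding the process can briefly run two threads and actually consume its fractional allocation, at the cost of possible CFS throttling.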
#13 altered the default behavior from ceil to floor. I doubt throttling is a real problem here. If a process is frequently throttled, we should consider allocating more CPUs to the container. The Go service running out of CPUs does not necessarily deprive other services of CPUs; the OS will distribute CPU time between them.
#14 tried to add such an option. Unfortunately, the author closed it due to prolonged inactivity.