
add a flag for cpu_hard_limit #3825

Merged: 8 commits merged into hashicorp:master on Feb 9, 2018

Conversation

@jaininshah9 (Contributor) commented Feb 1, 2018

Fixes #3810

@@ -120,6 +120,11 @@ const (
// https://docs.docker.com/engine/reference/run/#block-io-bandwidth-blkio-constraint
dockerBasicCaps = "CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID," +
"SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE"

// This is cpu.cfs_period_us: the length of a period. The default value is 100 microseconds represented in nanoseconds; below is the documentation
Contributor

Wrap the line to 80 chars

@jaininshah9 (Contributor Author) commented Feb 2, 2018

Once I made the changes to the code, I built the nomad binary and started the agent in dev mode. After that I started a Nomad Docker job running stress (docker run --rm -it progrium/stress --cpu 4). In my job I passed 1500 MHz of CPU (out of 24,800 MHz total), which equates to about 6.04 percent of the CPU. With cpu_hard_limit = true, CPU usage did not go over ~6%. Attached are 3 screenshots:

1. Screenshot of starting the agent
2. Screenshot of the job's CPU usage with cpu_hard_limit = true
3. Screenshot of CPU usage with cpu_hard_limit = false

@schmichael (Member)

I'm sorry for not responding to this PR sooner. We should support this in a cross-driver way which means adding cpu_hard_limit to the resources stanza of the job file.

I'll leave more details on the issue. Sorry again for not letting you know this requirement earlier!

@schmichael closed this Feb 6, 2018
@jaininshah9 (Contributor Author)

@schmichael: Sure, I can add that to the resources stanza. I will get started on that, but if you have other specific instructions, let me know whenever you can.

@schmichael reopened this Feb 6, 2018
@schmichael (Member) commented Feb 6, 2018

@jaininshah9 That's quite a bit more work, so on second thought let's get this work in and we can deprecate it once there's a cross-driver solution.

Sorry for the confusion!

// Below is the documentation:
// https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt
// https://docs.docker.com/engine/admin/resource_constraints/#cpu
defaultCFSPeriod = 100000
Member

So for a task allocated 10% of the CPU it could pause for 90ms at a time? That seems awfully long, perhaps 10000 (10ms) or 1000 (1ms) increments would be a better level of granularity to ensure responsiveness for low priority interactive services?

Contributor Author

Yes, if a task is allocated 10% of the CPU and we keep the default 100 microsecond cfs_period, then it will be paused for 90 microseconds. We can change that to 10 ms; for that we will have to pass CPUPeriod as well (which would always be 10 ms).

Member

Side note: Sorry, I meant "ms" as in milliseconds. I know the kernel doc uses "ms" to mean microseconds, so I'm sorry for confusing the matter. Within Nomad code/comments/docs let's always use "ms" for millis and "us" or "μs" for micros.

I'd be in favor of lowering this to 10000μs (10ms) to minimize pause duration unless somebody has a strong preference.

Since it's only used for an internal calculation tweaking it in the future should be ok (and it should probably even be configurable once we make this cross-driver).
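
To make the trade-off concrete, here is a minimal sketch of the period/quota arithmetic under discussion (illustrative numbers only, not Nomad code):

package main

import "fmt"

func main() {
	taskShare := 0.10 // task allocated 10% of the node's CPU
	for _, periodUS := range []float64{100000, 10000} {
		quotaUS := taskShare * periodUS
		// Worst case: the task burns its whole quota at the start of a
		// period and is throttled for the remainder of that period.
		fmt.Printf("period %6.0f us -> quota %5.0f us, max pause %6.0f us\n",
			periodUS, quotaUS, periodUS-quotaUS)
	}
}

With the 100000 us (100 ms) default, a 10% task can be throttled for up to 90 ms per period; a 10000 us (10 ms) period caps the pause at 9 ms, at the cost of more frequent scheduler bookkeeping.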

Contributor

@schmichael The default CFS period in Docker is 100000; changing this would mean people's experience of using the quota flag would differ between the normal docker CLI and Nomad. If you want a lower value, maybe introduce another flag to tweak that, but keep the default as is?

Contributor Author

@schmichael I am testing the changes that you recommended and running into an issue. When I specify cpu.cfs_period_us=10000 (10 ms) and want to use 6% of the CPU, I have to specify cpu.cfs_quota_us=600 (0.6 ms), which falls below the kernel's minimum quota. When I do that, it gives me an error: CPU cfs quota cannot be less than 1ms (i.e. 1000).

@schmichael (Member) Feb 7, 2018

@diptanu has a good point. Matching Docker's setting is the right thing to do. Sorry for the noise!

Member

Although now I'm confused by the error @jaininshah9 mentioned. According to Docker's API docs this setting should be in microseconds (like the kernel itself expects), not nanoseconds as the code comment implies.

The Docker docs lead me to believe this should be the value?

defaultCFSPeriod = 100

Contributor Author

@schmichael You may be right; that got me confused as well. I am digging into the error and will let you know.

Contributor Author

Okay, I think I initially made a mistake in saying it's a nanosecond value. The reason I said nanoseconds is that the Docker docs say (see screenshot below) the default value is 100 microseconds, which is wrong; it should say 100 milliseconds (which is what the kernel doc says).

(screenshot of the Docker docs)

I also tested, and the Docker API does not do any translation. Whatever value we pass in CPUPeriod is what we find under /sys/fs/cgroups...
So the value in the code is correct; I need to update the comment on it, though.

Sorry for the confusion I created. I will add a change with a clear comment on what defaultCFSPeriod refers to.
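
One quick way to verify that no translation happens is to read the period back from the cgroup filesystem. A minimal sketch, assuming cgroup v1 and Docker's default "docker" cgroup parent (the path layout and container-ID argument are assumptions, not part of this PR):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	containerID := os.Args[1] // full container ID, e.g. from docker inspect
	// Path assumes cgroup v1 with Docker's default cgroup parent.
	p := filepath.Join("/sys/fs/cgroup/cpu/docker", containerID, "cpu.cfs_period_us")
	b, err := os.ReadFile(p)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("cfs_period_us = %s\n", strings.TrimSpace(string(b)))
}

The value printed should match whatever CPUPeriod was passed to the Docker API, with no unit conversion.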

@@ -1119,6 +1130,12 @@ func (d *DockerDriver) createContainerConfig(ctx *ExecContext, task *structs.Tas
VolumeDriver: driverConfig.VolumeDriver,
}

// Calculate CPU Quota
if driverConfig.CPUHardLimit {
percentTicks := float64(task.Resources.CPU) / shelpers.TotalTicksAvailable()
Member

You should use d.node.Resources.CPU instead as we don't always properly detect the available CPU so some users have to override it.
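
A minimal sketch of what that change might look like, assuming d.node.Resources.CPU holds the node's total CPU in MHz (as fingerprinted or overridden in the client config) and hostConfig is the Docker host config being built above (illustrative, not the exact merged code):

// Calculate CPU Quota against the node's advertised CPU rather than the
// detected tick total, so user overrides of the client's CPU are respected.
if driverConfig.CPUHardLimit {
	percentTicks := float64(task.Resources.CPU) / float64(d.node.Resources.CPU)
	hostConfig.CPUQuota = int64(percentTicks * defaultCFSPeriodUS)
}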

// https://docs.docker.com/engine/admin/resource_constraints/#cpu
defaultCFSPeriod = 100000
// https://docs.docker.com/engine/api/v1.35/#
defaultCFSPeriod_us = 100000
Contributor

No _s in Go. Rename this to `defaultCFSPeriodUS`.
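
For reference, the renamed constant with a clarified comment might read as follows (a sketch of the naming and comment fix, not necessarily the exact merged text):

const (
	// defaultCFSPeriodUS is the length of one CFS period, in microseconds.
	// 100000 us (100 ms) matches the kernel and Docker defaults:
	// https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt
	// https://docs.docker.com/engine/admin/resource_constraints/#cpu
	defaultCFSPeriodUS = 100000
)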

@schmichael (Member)

LGTM! Just need to fix the merge conflicts before I can pull it.

@jaininshah9 (Contributor Author) commented Feb 9, 2018 via email

@schmichael merged commit c7c4564 into hashicorp:master Feb 9, 2018
@schmichael (Member)

I went and resolved the merge conflicts and updated the documentation. Thanks for sticking with this one @jaininshah9!

@github-actions (bot)

I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions bot locked as resolved and limited conversation to collaborators Mar 13, 2023