Add node "status", "scheduling eligibility" to all client metrics #8925
Conversation
pete-woods commented Sep 18, 2020 (edited)
- We previously added these labels to the client host metrics, but it's useful to have them on all client metrics.
- For example, you can then exclude draining nodes from charts showing how much memory / CPU capacity your fleet has (see the example query below).
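For instance, once these labels are present, fleet capacity can be computed while skipping ineligible (draining) nodes straight from the agent's metrics endpoint. A minimal sketch, assuming a local agent on localhost:4646 and the label names shown in the review below:
# Sum allocated memory across eligible client nodes only
$ curl -s http://localhost:4646/v1/metrics | \
    jq '[.Gauges[]
         | select(.Name == "nomad.client.allocated.memory"
                  and .Labels.node_scheduling_eligibility == "eligible")
         | .Value] | add'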
pete-woods force-pushed from 9f2a6c7 to a44d8db
It would be awesome if we could backport this to the […]
Apologies for the ping @langmartin and @schmichael, but you've reviewed my PRs in this area before, and I was hoping for a quick look over this one! :)
Hi @pete-woods! Thanks for this PR!
LGTM. I built this PR and smoke-tested it with the default example job. I can then query the metrics and get results like the following:
$ curl -s http://localhost:4646/v1/metrics | jq '.Gauges[] | select(.Name == "nomad.client.allocated.memory")'
{
  "Labels": {
    "node_scheduling_eligibility": "eligible",
    "host": "linux",
    "node_id": "3dc8f04b-9158-8045-ba03-c41748ef8a5d",
    "datacenter": "dc1",
    "node_class": "none",
    "node_status": "ready"
  },
  "Name": "nomad.client.allocated.memory",
  "Value": 256
}
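A quick follow-up check (a hypothetical extra step, not part of the review above): mark the node ineligible and re-run the query, and the node_scheduling_eligibility label on the same gauge should flip from "eligible" to "ineligible", which is what lets draining nodes be filtered out of capacity charts.
# Mark the node from the output above as ineligible for scheduling
$ nomad node eligibility -disable 3dc8f04b
# The gauge should now carry the updated label value
$ curl -s http://localhost:4646/v1/metrics | jq '.Gauges[] | select(.Name == "nomad.client.allocated.memory") | .Labels.node_scheduling_eligibility'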
Just a heads up @pete-woods that Lang has moved on to a new project, but the whole team is always happy to review PRs!
Thanks for merging, @tgross! Is it in any way possible to backport this fix to the […]
Hi @pete-woods. Our intent at this point is not to ship another release in the 0.12.x series unless there's a critical bug or security flaw.
Fair enough! I shall await […]
I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.