
[RFC] Add host metric fields to ECS #950

Merged

webmat merged 4 commits into elastic:master from add_host_fields on Oct 13, 2020

Conversation

@kaiyan-sheng (Contributor) commented on Aug 21, 2020:

This PR is stage 1 of the RFC for adding host metric fields into ECS.

Preview of the RFC

@kaiyan-sheng kaiyan-sheng self-assigned this Aug 21, 2020
exekias previously approved these changes Aug 25, 2020

@andrewkroh (Member) left a comment:

One concern I have is about putting network data under host.network when ECS already has a network namespace containing related values like network.total.bytes and network.total.packets.


| field | type | description |
| --- | --- | --- |
| `host.cpu.pct` | scaled_float | Percent CPU used. This value is normalized by the number of CPU cores and it ranges from 0 to 1. |
A Member commented:

It's important to specify the scaling_factor so that we know how the value should be interpreted.
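
For reference, here's a minimal sketch of what such a mapping could look like (the index name is made up, and 1000 is only an illustrative factor; the actual choice is settled later in this thread):

```
PUT hostmetrics-example
{ "mappings": { "properties": {
    "host": { "properties": {
      "cpu": { "properties": {
        "pct": { "type": "scaled_float", "scaling_factor": 1000 }
      }}
    }}
}}}
```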

A Contributor commented:

Can we document how this CPU percentage works on a server with multiple CPU cores?
For example, if I have a server with 12 CPU cores, will this "CPU percentage" max out at 1200% or at 100%?

A Contributor commented:

I believe the way that the endpoint team has done this in the past is that 12 CPU cores have a max of 1200.0123... right @ferullo ?

@kaiyan-sheng (Contributor Author) replied:

Thanks for the input! This host.cpu.pct will be the normalized value, ranging from 0 to 1 only. If the server has 12 CPU cores, then this normalized value will be the total CPU percentage / 12.
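
A worked example of that arithmetic (my numbers; this assumes "total CPU percentage" means the sum of per-core utilizations, each on a 0-100 scale):

```
host.cpu.pct = (sum of per-core usage) / (100 * number of cores)
             = 600 / (100 * 12)        # e.g. 6 of 12 cores fully busy
             = 0.5
```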

@ebeahan (Member) commented:

Would that make scaling_factor: 100?

This would be the first time a scaled_float is defined in ECS, and we'll need to add support for defining scaling_factor in the schema. I don't see any issue with the addition - just noting that dependency.

@kaiyan-sheng (Contributor Author) replied:

@ebeahan I was thinking of keeping scaling_factor at its default value, so a value of 0.12, for example, will be stored in ES as 0.12 itself. Maybe I should just use float here instead? Thanks!

@webmat (Contributor) commented on Sep 2, 2020:

> If the server has 12 CPU cores, then this normalized value will be total cpu percentage/12.

Unless there's disagreement on this (I'm fine with it), let's update the definition in the RFC to make this a bit clearer. I'm not sure the current phrasing "normalized by the number of cores" makes it clear enough.

Could be as simple as adding an example.

Example: For a two-core host, if one core is at 100% and the other is at zero, the value should be 0.5.

Or we could make it more literal and say "if there are 12 cores, this should be the average of the 12 cores, between 0 and 1".

@webmat (Contributor) commented:

On the scaling_factor: it's a required parameter for this datatype; there's no default value here.

Note that the source doc is not affected by it, and would always contain the full float value.

A scaling factor of 100 would mean 0.123 (for 12.3%) would only store 0.12 in the index, for aggregations & so on. If we want to aggregate on full digit percents, a scaling_factor of 100 is appropriate.

@kaiyan-sheng or @cyrille-leclerc is 100 appropriate for this use case? Who can chime in on how observability intends to query this field?

Experiment with scaled_float

I was hoping that scaled_float being backed by a long meant it wouldn't have float artifacts, but with the simple test below, the last two documents ingested show up with ...0000001 in the aggregation 🤷‍♂️

```
# Two decimal places of precision (scaling_factor 100)
PUT sfloat-1
{ "mappings" : { "properties": {
  "pct" : { "type" : "scaled_float", "scaling_factor": 100 }
}}}
PUT sfloat-1/_doc/1
{ "pct": 0.12 }
PUT sfloat-1/_doc/2
{ "pct": 0.1234 }

# Four decimal places of precision (scaling_factor 10000)
PUT sfloat-2
{ "mappings" : { "properties": {
  "pct" : { "type" : "scaled_float", "scaling_factor": 10000 }
}}}
PUT sfloat-2/_doc/1
{ "pct": 0.12 }
PUT sfloat-2/_doc/2
{ "pct": 0.1234 }

# Aggregate across both indices to inspect the stored values
GET sfloat-*/_search
{ "size": 0, "aggs" : {
    "percentages" : { "terms" : { "field" : "pct" }
}}}
```

@kaiyan-sheng (Contributor Author) replied:

@webmat Thank you for the explanation!! I think I would prefer a scaling_factor of 1000 in this case, so that for CPU usage we keep one decimal place when displaying it in percentage format. For example, I would like 0.1234 to be stored as 0.123 in the index, so it will show as 12.3% in percentage. But if saving space when storing these values is a concern, a scaling_factor of 100 is also fine for me. @exekias WDYT?

@exekias replied:

We can try to do some testing and compare sizes. 100 has always looked a bit limiting; some more precision would be good.

| field | type | description |
| --- | --- | --- |
| `host.cpu.pct` | scaled_float | Percent CPU used. This value is normalized by the number of CPU cores and it ranges from 0 to 1. |
| `host.network.in.bytes` | long | The number of bytes received on all network interfaces by the host in a given period of time. |
@andrewkroh (Member) commented:

In the observer namespaces, ECS uses "ingress" and "egress". For consistency we might want to consider using those terms.

@cyrille-leclerc (Contributor) commented:

Can we document the nature of this counter? Is it a monotonic counter that is sometimes reset (e.g. on system restart)?

@kaiyan-sheng (Contributor Author) replied:

@cyrille-leclerc These values will actually be gauges. For example, host.network.in.bytes will be the total bytes received, aggregated across all network interfaces, over 10 seconds. In the next collection period this value will change; it might go up or down depending on the network usage during the next 10 seconds.
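
To illustrate the gauge semantics, here is a hypothetical pair of documents from two consecutive 10-second collection periods (index name, timestamps, and values are made up):

```
# Each document carries the total for its own 10s period,
# so the value can go down as well as up between documents.
POST metrics-example/_doc
{ "@timestamp": "2020-08-21T00:00:00Z", "host": { "network": { "in": { "bytes": 1048576 } } } }
POST metrics-example/_doc
{ "@timestamp": "2020-08-21T00:00:10Z", "host": { "network": { "in": { "bytes": 524288 } } } }
```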

A Member commented:

++ on aligning on ingress and egress

@cyrille-leclerc (Contributor) replied:

Thanks @kaiyan-sheng. I was used to ever-increasing counters for this kind of metric, but I don't know what the state of the art is today.
One benefit of ever-increasing counters is that the collector can miss some collections without losing data.

@webmat (Contributor) commented:

I agree with monotonic counters being superior here. Capturing rates may make it easier to render the data; however, rates are lossy.

As Cyrille pointed out, counters are more resilient to an agent missing a beat (pun intended 😄). But counters are also superior for data rollups. Rolling up initial /10s metrics to /5m or hourly percentiles with a counter is trivial. With rates it's not so easy, as the basic piece of data is already an average. Not sure if we can do better in the Elastic Stack than the tool I was using in the past, though...

I'm not opposed to capturing rates if they're dramatically easier to work with in general, but I would like them to be accompanied with the counter backing them.

WDYT?
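
For comparison, the counter rollup described above is straightforward in the query DSL, assuming a monotonic counter field (a hypothetical sketch with made-up index name and interval; the proposed fields are gauges, so this wouldn't apply to them as is):

```
GET metrics-example/_search
{ "size": 0, "aggs": {
    "per_5m": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "5m" },
      "aggs": {
        "counter_max":  { "max": { "field": "host.network.in.bytes" } },
        "bytes_per_5m": { "derivative": { "buckets_path": "counter_max" } }
      }
    }
}}
```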

@sorantis commented on Sep 15, 2020:

I think we're confusing two different things here.
The main purpose of this set of host metrics is to align the different resource providers, such as public cloud infrastructures, physical hosts, VMware clusters, and other hypervisors, i.e. bring hosts/VMs to a common denominator. The purpose is not to capture high-fidelity system metrics, because some providers (AWS, GCP, Azure, etc.) simply don't provide these.
Granted, the Metricbeat system module can do much more than just the proposed CPU metrics (it includes raw counters) and should be used wherever possible.
By making these metrics part of ECS we want to improve the user experience, i.e. we can always show basic health information about any host anywhere, and recommend that users enable the system module for enhanced details.

@kaiyan-sheng (Contributor Author) replied:

@webmat @cyrille-leclerc Agreed on the benefit of ever-increasing counters. We will definitely keep all the counters as they are right now in the system module. We only added extra calculations on top of these counters to get gauges in the system module, to match whatever we get from other resource providers such as AWS, Azure, and GCP. Unfortunately, they don't provide ever-increasing counters in their monitoring metrics, as @sorantis mentioned above.

Also, on the UI side, it's definitely easier to work with metrics as gauges than as counters.

@jonathan-buttner (Contributor) left a comment:

This may be out of scope for this proposal, but for host.cpu it may also be useful to have these fields:

| field | type | description |
| --- | --- | --- |
| `host.cpu.mean` | a float of some sort | Average CPU load since reboot or since the program observing the CPU was started |
| `host.cpu.latest` | a float of some sort | Average CPU load for some time interval |
| `host.cpu.histogram` | histogram | This field defines an Elasticsearch histogram field (https://www.elastic.co/guide/en/elasticsearch/reference/current/histogram.html#histogram) |

The histogram field would allow visualizations to plot the CPU over time easily, I think.

@ferullo not sure if the endpoint team has current plans to ship other host metrics? Any other ideas?
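
For reference, this is roughly what the suggested histogram field could look like as an Elasticsearch mapping plus a pre-aggregated document (a minimal sketch; the index name and bucket values are made up):

```
PUT cpu-example
{ "mappings": { "properties": {
    "host": { "properties": {
      "cpu": { "properties": {
        "histogram": { "type": "histogram" }
      }}
    }}
}}}
PUT cpu-example/_doc/1
{ "host": { "cpu": { "histogram": {
    "values": [0.1, 0.25, 0.5, 0.75],
    "counts": [12, 5, 3, 1]
}}}}
```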


@kaiyan-sheng (Contributor Author) replied:

> This may be out of scope for this proposal, but for host.cpu it may also be useful to have these fields: host.cpu.mean, host.cpu.latest, host.cpu.histogram. The histogram field would allow visualizations to plot the CPU over time easily, I think.

@jonathan-buttner These would be great additions, but unfortunately we won't be able to collect these metrics for all hosts. That's why they are not included in the schema.

@jonathan-buttner (Contributor) replied:

> These would be great additions, but unfortunately we won't be able to collect these metrics for all hosts.

Ah ok 👍

@webmat (Contributor) left a comment:

Thanks @kaiyan-sheng for this subsequent PR, and for the follow-up so far.

First, here's a bit of virtual paperwork around this stage 1 RFC :-)

  • Could you update to "Stage: 1 (proposal)" at the top of the Markdown?
  • Please add Cyrille as the sponsor representing Observability, in the "People" section at the bottom.
  • Please add Carlos as a subject matter expert.

I'm also commenting on some of the discussion below.



* VMs
* AWS EC2 instances
* GCP compute engines
* Azure compute VMs
A Contributor commented:

I'm not sure if the following is applicable, but would we eventually want to capture the same metrics for containers as well?

If that's the case, perhaps we should consider defining this set of metrics independently, and make them nestable both under "host" and "container".

@kaiyan-sheng (Contributor Author) replied:

Good point! We haven't gotten to containers in the inventory schema work yet. @exekias Can we potentially treat containers as their own VMs and report the same set of host metrics?



@webmat self-requested a review September 2, 2020 20:04
| field | type | description |
| --- | --- | --- |
| `host.cpu.usage` | scaled_float | Percent CPU used with scaling_factor of 1000. This value is normalized by the number of CPU cores and it ranges from 0 to 1. For example: For a two core host, this value should be the average of the 2 cores, between 0 and 1. |
| `host.network.in.bytes` | long | The number of bytes received on all network interfaces by the host in a given period of time. |
| `host.network.in.packets` | long | The number of packets received on all network interfaces by the host in a given period of time. |
| `host.network.out.bytes` | long | The number of bytes sent out on all network interfaces by the host in a given period of time. |
@ruflin (Member) commented:

We now have _meta in ES, which also supports byte as a unit: elastic/elasticsearch#61941. Even though it is unfortunate that older metrics still carry the .bytes postfix, I think all new values we add should skip the unit postfix and use _meta instead.

@webmat ECS should add support for _meta.
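
For context, this is roughly what field-level unit metadata looks like in a mapping (a minimal sketch; the index name is made up, and per the linked issue "byte" is one of the supported units):

```
PUT meta-example
{ "mappings": { "properties": {
    "host": { "properties": {
      "network": { "properties": {
        "in": { "properties": {
          "bytes": { "type": "long", "meta": { "unit": "byte" } }
        }}
      }}
    }}
}}}
```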

@exekias replied:

This is a good point, and I think we should avoid postfixes that exist just for the sake of making the unit explicit. In the case of the system.network.out object (this metric comes from it), this is what we have today:

```
"system": {
  "network": {
    "in": {
      "bytes": 37904869172,
      "dropped": 32,
      "errors": 0,
      "packets": 32143403
    }
  }
}
```

I wonder how we should differentiate all the different metrics there, especially between bytes and packets.

@ruflin (Member) replied:

Ha, interesting case. I'm tempted to say that in this context the unit name might make sense. In any case, we should still add the info to _meta, as this is what should be used by Kibana.

Alternatively, bytes could be called data, but I'm not sure that is better.

@kaiyan-sheng (Contributor Author) replied:

Good point! Thank you @exekias and @ruflin. In this case, I still feel like bytes makes more sense in the metric name than data (too broad). bytes here, in contrast with packets, identifies the type of the value rather than merely being a unit postfix.

@webmat (Contributor) left a comment:

Thanks for the feedback everyone, and thanks for addressing concerns @kaiyan-sheng

My understanding is that the following concerns have been addressed in the discussions above:

  • type & scaling factor: if scaled_float with a factor of 1000 is what you need, we're good 👍
  • normalization for multi-core: how to normalize is clearly captured in the RFC 👍
  • monotonic counters: they can't always be captured, so you're going with what you can get across all sources 👍
  • .bytes suffix: it looks like a suffix, but it's actually the clearest field name for this. 👍
    • I've made the same observation on the bytes fields in ECS, like network.bytes, source.bytes and so on. I agree there isn't another word that more clearly describes the content of the field. So I'm good with keeping the .bytes field names.

I'm still seeing two points that haven't been addressed:

  • using "ingress" and "egress" instead of "in" and "out"
    • It would be good to align with the rest of ECS on the naming here, ideally. We strive for consistency as much as possible, to make the schema predictable and easier to learn.
    • As Andrew mentioned, this wording is used in the observer fields. And in the upcoming ECS 1.7, these are two new expected values for network.direction as well.
    • Are there concerns with using host.network.ingress.* and host.network.egress.*?
  • Does this apply to containers?

Of the two points above, I'd really like to see if ingress/egress is possible.

The second about containers is not a blocker to merge this stage 1 PR IMO. We can always discuss this again if/when necessary.

One nitpick:

  • The mention of scaling_factor in the host.cpu.usage description is confusing. Could you move this to the "type" column instead? Can be as simple as scaled_float (scaling_factor 1000).

@kaiyan-sheng (Contributor Author) replied:

@webmat Thank you for the summary. I made the changes for ingress/egress and scaled_float. Regarding your question about containers: we will work on a common schema for containers in an upcoming release, so we would like to keep this PR for host metrics only.

@webmat (Contributor) left a comment:

Alright, thanks for the quick turnaround :-)

Ack on addressing containers separately 👍

I'll merge this now. When I've done that, would you mind opening the PR for the next stage of this discussion?

For now there's no need to overthink this next PR; this RFC looks really good already. You could simply open the PR and set "stage: 2 (draft)" at the top. This way we'll already have a place to capture the next action items and further feedback.

@webmat webmat merged commit b8d008c into elastic:master Oct 13, 2020
@kaiyan-sheng kaiyan-sheng deleted the add_host_fields branch October 15, 2020 04:17