Cherry-pick #14829 to 7.6: [Metricbeat] Add Google Cloud Platform module #15575
Cherry-pick of PR #14829 to 7.6 branch. Original message:
ONGOING work on docs, but most of the code is ready to go.
Seed PR for the Google Cloud Platform module for Metricbeat.
It includes the following:
Ignore the following metricsets, which are included in the PR for testing purposes but are not going to be merged yet (they'll be removed before merging):
Some vocabulary for people new to Google Cloud
You can find some rough AWS equivalents for GCP services:
Labels / Metadata
You'll see lots of mentions of metadata inside the code. This refers to two different entities within GCP: labels and metadata. For Elasticsearch purposes both can be considered metadata, so whenever you read "label" or "metadata", it is treated as the same thing at the end of the pipeline.
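As a sketch only (the helper name and fields below are assumptions, not the module's actual code), both kinds of values can be folded into the same map before being written into the event:

```go
package main

import "fmt"

// mergeLabels is a hypothetical sketch: GCP "labels" and GCP "metadata"
// are folded into a single map, since the pipeline treats them the same.
func mergeLabels(gcpLabels, gcpMetadata map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range gcpLabels {
		merged[k] = v
	}
	for k, v := range gcpMetadata {
		merged[k] = v
	}
	return merged
}

func main() {
	labels := map[string]string{"env": "prod"}
	metadata := map[string]string{"created-by": "instance-group-1"}
	fmt.Println(mergeLabels(labels, metadata))
}
```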
Grouping of events
The way GCP labels its metrics makes it somewhat complex to generate "service based" events. Metrics are exported individually, so you don't request "compute metrics" or "the metrics of this compute instance"; instead you have to request "all cpu_utilization values of compute instances", and a single response brings one or more values per instance, for all your instances, over the specified timeframe.
For example, a request for CPU utilization can return (in pseudocode):
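The sketch below is only an illustration of that shape; the type, field names and values are assumptions, not the real Stackdriver response format:

```go
package main

import "fmt"

// point is an illustrative stand-in for one sample in a Stackdriver
// time series response; it is not the real API type.
type point struct {
	InstanceID string
	Timestamp  string
	Value      float64
}

func main() {
	// A single request for cpu_utilization returns samples for every
	// instance in the project, possibly several per instance.
	response := []point{
		{InstanceID: "instance-a", Timestamp: "2020-01-01T10:00:00Z", Value: 0.12},
		{InstanceID: "instance-a", Timestamp: "2020-01-01T10:05:00Z", Value: 0.18},
		{InstanceID: "instance-b", Timestamp: "2020-01-01T10:00:00Z", Value: 0.73},
	}
	fmt.Println(response)
}
```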
Then, a new call must be made (in this example, to the Compute API) to request instance metadata (like working group, network group, user labels or user metadata, which is associated only with the instance and not with a particular metric like CPU). You then get data like this (again, in pseudocode):
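Again, a rough illustrative sketch of that per-instance metadata (field names are assumptions, not the real Compute API types):

```go
package main

import "fmt"

// instanceMetadata is an illustrative stand-in for the per-instance data
// returned by the Compute API; field names are assumptions.
type instanceMetadata struct {
	InstanceID  string
	Zone        string
	MachineType string
	UserLabels  map[string]string
}

func main() {
	meta := instanceMetadata{
		InstanceID:  "instance-a",
		Zone:        "us-central1-a",
		MachineType: "n1-standard-1",
		UserLabels:  map[string]string{"team": "ops"},
	}
	fmt.Printf("%+v\n", meta)
}
```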
At the end, both responses for that particular metric must be grouped into a single event that shares some common metadata. For Compute this includes instance_id and availability zone, apart from the timestamp. Each service requires a specific implementation to get non-Stackdriver metadata. The service metadata implementation is only developed for Compute at the moment and can be seen in googlecloud/stackdriver/compute; the rest of the services use only the metadata provided by Stackdriver.
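As a sketch of the grouping idea only (not the module's actual implementation), the two responses can be merged keyed by instance ID:

```go
package main

import "fmt"

func main() {
	// Illustrative inputs: metric samples and per-instance zones, as in
	// the sketches above.
	points := []struct {
		InstanceID, Timestamp string
		Value                 float64
	}{
		{"instance-a", "2020-01-01T10:00:00Z", 0.12},
		{"instance-b", "2020-01-01T10:00:00Z", 0.73},
	}
	zones := map[string]string{
		"instance-a": "us-central1-a",
		"instance-b": "europe-west1-b",
	}

	// Group both sources into one event per instance, sharing the common
	// metadata: instance ID, availability zone and timestamp.
	events := map[string]map[string]interface{}{}
	for _, p := range points {
		events[p.InstanceID] = map[string]interface{}{
			"@timestamp":              p.Timestamp,
			"cloud.instance.id":       p.InstanceID,
			"cloud.availability_zone": zones[p.InstanceID],
			"cpu.utilization":         p.Value,
		}
	}
	fmt.Println(events)
}
```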
ECS
Metadata returned from Stackdriver is ECS compliant for Compute metadata (mainly availability zone, account ID, cloud provider, instance ID and instance name). Some of the metadata might be written outside of the ECS fields. More deployment configurations plus testing are needed to find them all.
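For illustration, an event carrying those ECS fields could look roughly like this (all values made up):

```go
package main

import "fmt"

func main() {
	// Illustrative event showing the ECS cloud.* fields that can be filled
	// from Stackdriver metadata for Compute; all values are made up.
	event := map[string]string{
		"cloud.provider":          "gcp",
		"cloud.account.id":        "my-project-id",
		"cloud.instance.id":       "1234567890123456789",
		"cloud.instance.name":     "instance-a",
		"cloud.availability_zone": "us-central1-a",
	}
	fmt.Println(event)
}
```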
Modules
All services from https://cloud.google.com/monitoring/api/metrics_gcp can be added with more configuration. Tests so far show no problems, but their specific metadata must be developed separately for each of them.
Limitations
You cannot set a period under 300s (you can right now, but it won't return any metrics). I think it's some kind of limitation of Stackdriver, because their metrics are sampled every 60 to 300 seconds.
Happy reviewing :)
Sorry for the big PR, it was impossible to make it smaller