
[exporter/awsemfexporter] namespace is always set to cloudwatch namespace #1712

Closed
crigertg opened this issue Dec 14, 2022 · 4 comments

@crigertg

crigertg commented Dec 14, 2022

Describe the bug
I'm exporting Kubernetes metrics to CloudWatch and want the namespace of the containers/services/pods to be visible as a Dimension. Currently this is not working, because the awsemf exporter has a namespace configuration parameter for the CloudWatch metrics namespace, which seems to overwrite the Namespace attribute that is provided by the awscontainerinsightreceiver (see here: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/awscontainerinsightreceiver ).
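
To make the two different "namespace" notions explicit: the exporter's namespace option sets the CloudWatch metrics namespace inside the _aws.CloudWatchMetrics section of the emitted EMF document, while the receiver's Namespace attribute should end up as a top-level key that backs the Namespace dimension. A rough sketch of the relevant parts of one emitted document (abbreviated from the CloudWatch Logs sample under Additional context below; this is only an illustration, not the exporter's code):

# Sketch of one emitted EMF document, abbreviated from the log sample below.
# Illustration of the two "Namespace" fields only, not the exporter's code.
emf_document = {
    # Top-level key carrying the Kubernetes namespace from the
    # awscontainerinsightreceiver; the "Namespace" dimension is backed by this key.
    "Namespace": "loki",
    "_aws": {
        "CloudWatchMetrics": [
            {
                # The CloudWatch metrics namespace, taken from the exporter's
                # `namespace` configuration option.
                "Namespace": "ContainerInsights",
                "Dimensions": [["ClusterName", "Namespace", "Service"], ["ClusterName"]],
                "Metrics": [{"Name": "service_number_of_running_pods", "Unit": "Count"}],
            }
        ]
    },
}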

Steps to reproduce

  1. Set up an EKS cluster
  2. Install ADOT (you might use this Terraform module)
  3. Deploy a configuration like the one shown under Environment below (you might want to use this Helm chart)

What did you expect to see?
I expected the namespace attribute from the metric to be set in the dimensions.

What did you see instead?
The CloudWatch namespace is set in the dimensions.

Environment

receivers:
  awscontainerinsightreceiver:
    collection_interval:  
    container_orchestrator:  
    add_service_as_attribute:  
    prefer_full_pod_name:  
    add_full_pod_name_metric_label:  
processors:
  batch/metrics:
    timeout: 60s
  attributes:
    actions:
      - key: foo
        action: insert
        value: bar
      - key: namespace
        action: insert
        from_attribute: Namespace
exporters:
  awsemf:
    namespace: ContainerInsights
    log_group_name: '/aws/containerinsights/eks-dev/performance'
    log_stream_name: InputNodeName
    region: eu-central-1
    resource_to_telemetry_conversion:
      enabled: true
    dimension_rollup_option: NoDimensionRollup
    parse_json_encoded_attr_values:
      - Sources
      - kubernetes.namespace_name
    metric_declarations:
      - dimensions: [[Service, Namespace, ClusterName], [ClusterName]]
        metric_name_selectors:
          - service_number_of_running_pods
service:
  pipelines:
    metrics:
      receivers:
      - awscontainerinsightreceiver
      processors:
      - batch/metrics
      - attributes
      exporters:
      - awsemf
  extensions:
  - health_check

Additional context

CloudWatch Logs output:

{
    "ClusterName": "eks-dev",
    "Namespace": "loki",
    "NodeName": "ip-10-1-79-141.eu-central-1.compute.internal",
    "Service": "loki",
    "Sources": [
        "apiserver"
    ],
    "Timestamp": "1670489791032",
    "Type": "ClusterService",
    "Version": "0",
    "_aws": {
        "CloudWatchMetrics": [
            {
                "Namespace": "ContainerInsights",
                "Dimensions": [
                    [
                        "ClusterName",
                        "Namespace",
                        "Service"
                    ],
                    [
                        "ClusterName"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "service_number_of_running_pods",
                        "Unit": "Count"
                    }
                ]
            }
        ],
        "Timestamp": 1670489791032
    },
    "foo": "bar",
    "kubernetes": "{\"namespace_name\":\"loki\",\"service_name\":\"loki\"}",
    "service_number_of_running_pods": 1
}

I've opened a ticket in opentelemetry-collector-contrib as well: open-telemetry/opentelemetry-collector-contrib#17024

@crigertg
Author

Is this somehow related to the order of the keys in the CloudWatchMetrics object? Looking at the logs from the CloudWatch agent (where the namespace is set correctly), it looks like this:

{
    "CloudWatchMetrics": [
        {
            "Metrics": [
                {
                    "Unit": "Count",
                    "Name": "service_number_of_running_pods"
                }
            ],
            "Dimensions": [
                [
                    "Service",
                    "Namespace",
                    "ClusterName"
                ],
                [
                    "ClusterName"
                ]
            ],
            "Namespace": "ContainerInsights"
        }
    ],
    "ClusterName": "eks-dev",
    "Namespace": "kube-system",
    "Service": "aws-load-balancer-webhook-service",
    "Sources": [
        "apiserver"
    ],
    "Timestamp": "1671023034918",
    "Type": "ClusterService",
    "Version": "0",
    "kubernetes": {
        "namespace_name": "kube-system",
        "service_name": "aws-load-balancer-webhook-service"
    },
    "service_number_of_running_pods": 2
}
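
For what it's worth, JSON object key order should not be significant, and as far as I understand the embedded metric format, dimension values are resolved by looking up the names listed in Dimensions among the top-level keys of the document, independent of ordering. A small Python sketch using an abbreviated copy of the collector's log output from above (illustration of the format only, not the exporter's or agent's code):

import json

# Abbreviated copy of the EMF document emitted by the collector (first log sample above).
emf = json.loads("""
{
  "ClusterName": "eks-dev",
  "Namespace": "loki",
  "Service": "loki",
  "_aws": {
    "CloudWatchMetrics": [
      {
        "Namespace": "ContainerInsights",
        "Dimensions": [["ClusterName", "Namespace", "Service"], ["ClusterName"]],
        "Metrics": [{"Name": "service_number_of_running_pods", "Unit": "Count"}]
      }
    ],
    "Timestamp": 1670489791032
  },
  "service_number_of_running_pods": 1
}
""")

directive = emf["_aws"]["CloudWatchMetrics"][0]
print("CloudWatch metrics namespace:", directive["Namespace"])  # -> ContainerInsights
for dimension_set in directive["Dimensions"]:
    # Dimension values are looked up by name among the document's top-level keys,
    # regardless of where those keys appear in the JSON text.
    print({name: emf[name] for name in dimension_set})
# -> {'ClusterName': 'eks-dev', 'Namespace': 'loki', 'Service': 'loki'}
# -> {'ClusterName': 'eks-dev'}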

@crigertg
Author

I've found the issue in the awsemfexporter and submitted a pull request to the opentelemetry-collector-contrib repository: open-telemetry/opentelemetry-collector-contrib#17030

Once that PR is merged and a new version is released, it should be picked up here as soon as possible to fix this issue.

@github-actions
Contributor

This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 30 days.

@github-actions github-actions bot added the stale label Feb 12, 2023
@crigertg
Author

This issue is not reproducible anymore.
