
Duplicated jobs for logs when using log_slots #56

Closed
chanchiwai-ray opened this issue Aug 15, 2023 · 6 comments · Fixed by #66

Comments

@chanchiwai-ray (Contributor)

Bug Description

When using log_slots with a snap that provides two slots, for example:

# in snapcraft.yaml
...
slots:
  ovn-logs:
    interface: content
    source:
      read:
        - $SNAP_COMMON/log/ovn
  ovs-logs:
    interface: content
    source:
      read:
        - $SNAP_COMMON/log/openvswitch 
...

and in the charm I use both slots:

# in ./src/charm.py
self._grafana_agent = COSAgentProvider(
    ...
    log_slots=["openstack-hypervisor:ovn-logs", "openstack-hypervisor:ovs-logs"],
    ...
)
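Each entry in `log_slots` is a `<snap>:<slot>` pair, so the expected outcome here is one distinct scrape job per slot. A minimal sketch of that expectation (the helper and the job-naming scheme are hypothetical, not the actual COSAgentProvider API):

```python
def expected_jobs(log_slots):
    """Illustrative only: one distinct scrape job per snap slot."""
    jobs = []
    for entry in log_slots:
        snap, slot = entry.split(":", 1)  # "<snap>:<slot>"
        jobs.append({
            # Hypothetical naming: include the slot so jobs stay distinct.
            "job_name": f"{snap}-{slot}",
            "path": f"/snap/grafana-agent/13/shared-logs/{slot}/**",
        })
    return jobs

print(expected_jobs([
    "openstack-hypervisor:ovn-logs",
    "openstack-hypervisor:ovs-logs",
]))
```

With two slots this would yield two jobs with different names and paths, rather than the duplicated output shown below.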

The logs section of /etc/grafana-agent.yaml contains two identical jobs (both pointing at the openvswitch path; the ovn path is missing entirely):

logs:
    ...
    - job_name: openstack-hypervisor
      pipeline_stages:
      - drop:
          expression: .*file is a directory.*
      relabel_configs:
      - replacement: /openvswitch
        source_labels:
        - __path__
        target_label: path
      static_configs:
      - labels:
          __path__: /snap/grafana-agent/13/shared-logs/openvswitch/**
          job: openstack-hypervisor
          juju_model: controller
          juju_model_uuid: d451ba9f-c875-4845-8564-adc06a1880a7
        targets:
        - localhost
    - job_name: openstack-hypervisor
      pipeline_stages:
      - drop:
          expression: .*file is a directory.*
      relabel_configs:
      - replacement: /openvswitch
        source_labels:
        - __path__
        target_label: path
      static_configs:
      - labels:
          __path__: /snap/grafana-agent/13/shared-logs/openvswitch/**
          job: openstack-hypervisor
          juju_model: controller
          juju_model_uuid: d451ba9f-c875-4845-8564-adc06a1880a7
        targets:
        - localhost

To Reproduce

See bug description

Environment

N/A

Relevant log output

N/A

Additional context

No response

@jneo8 (Contributor) commented Aug 31, 2023

Defining multiple sources in the same slot also does not work.

snapcraft.yaml

slots:
  service-logs:
    interface: content
    content: service-logs
    source:
      read:
        - $SNAP_COMMON/log/ovn
        - $SNAP_COMMON/log/openvswitch
        - $SNAP_COMMON/log/ovn/ovn-controller.log

/var/lib/snapd/mount/snap.grafana-agent.fstab

/var/snap/openstack-hypervisor/common/log/ovn /snap/grafana-agent/13/shared-logs/ovn none bind,ro 0 0
/var/snap/openstack-hypervisor/common/log/openvswitch /snap/grafana-agent/13/shared-logs/openvswitch none bind,ro 0 0
/var/snap/openstack-hypervisor/common/log/ovn/ovn-controller.log /snap/grafana-agent/13/shared-logs/ovn-controller.log none bind,ro 0 0
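The fstab above suggests that each `read` source in the slot is bind-mounted under `shared-logs/` using its basename. A sketch of that mapping (the rule is inferred from the mount table shown here, not from snapd documentation):

```python
import os

SHARED_LOGS = "/snap/grafana-agent/13/shared-logs"

def mount_target(source):
    """Map a slot 'read' source path to its bind-mount target by basename."""
    return os.path.join(SHARED_LOGS, os.path.basename(source))

print(mount_target("/var/snap/openstack-hypervisor/common/log/ovn"))
```

So all three sources are mounted, yet the generated config below only emits a single job for one of them.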

/etc/grafana-agent.yaml

    - job_name: openstack-hypervisor
      pipeline_stages:
      - drop:
          expression: .*file is a directory.*
      relabel_configs:
      - replacement: /openvswitch
        source_labels:
        - __path__
        target_label: path
      static_configs:
      - labels:
          __path__: /snap/grafana-agent/13/shared-logs/openvswitch/**
          job: openstack-hypervisor
          juju_model: controller
          juju_model_uuid: 010ca2a4-3209-44df-8471-d058ae9e0ee5
        targets:
        - localhost

This blocks the scenario where a user wants to collect logs from multiple services in the same snap.

@Abuelodelanada (Contributor)

Hi @chanchiwai-ray

Which snap are you using?
Could you provide a bundle so we can reproduce the same situation?

@chanchiwai-ray (Contributor, Author)

Hi @Abuelodelanada, I don't have a bundle to reproduce the situation, since we are developing and building the charm locally. However, I created a simple snap and a simple charm to reproduce it: https://github.com/chanchiwai-ray/grafana-agent-k8s-operator/tree/issue/234/grafana_agent_234. Following the "README.md" there, you should be able to reproduce the duplicated log configs.

@dragomirp (Contributor)

The Postgresql VM charm also seems to be affected by this. We have multiple sources in the logging slot:

slots:
  logs:
    interface: content
    source:
      read:
        - $SNAP_COMMON/var/log/patroni
        - $SNAP_COMMON/var/log/postgresql
        - $SNAP_COMMON/var/log/pgbackrest
        - $SNAP_COMMON/var/log/pgbouncer

We only get a single job in /etc/grafana.conf:

    - job_name: charmed-postgresql
      pipeline_stages:
      - drop:
          expression: .*file is a directory.*
      relabel_configs:
      - replacement: /postgresql
        source_labels:
        - __path__
        target_label: path
      static_configs:
      - labels:
          __path__: /snap/grafana-agent/13/shared-logs/postgresql/**
          job: charmed-postgresql
          juju_model: pg
          juju_model_uuid: 372cccb6-59f5-448b-8db6-93df5b88d48e
        targets:
        - localhost

And the contents of /var/lib/snapd/mount/snap.grafana-agent.fstab don't seem to be mounted on /snap/grafana-agent/13/shared-logs/.

It should be reproducible by deploying postgresql from channel 14/edge.

@simskij (Member) commented Sep 19, 2023

@Abuelodelanada ☝🏼 (re: @dragomirp's report above)

@shayancanonical
The MySQL and MySQLRouter VM charms are also affected by this bug. Similar to Postgres above:

slots:
  logs:
    interface: content
    source:
      read:
        - $SNAP_COMMON/var/log/mysql
        - $SNAP_COMMON/var/log/mysqlrouter

In /etc/grafana-agent.yaml

    - job_name: charmed-mysql
      pipeline_stages:
      - drop:
          expression: .*file is a directory.*
      relabel_configs:
      - replacement: /mysql
        source_labels:
        - __path__
        target_label: path
      static_configs:
      - labels:
          __path__: /snap/grafana-agent/16/shared-logs/mysql/**
          job: charmed-mysql
          juju_model: database
          juju_model_uuid: e14f3b7b-e352-4342-80c8-2d93c5ee2984
        targets:
        - localhost

The code is currently in PR form, but will soon be merged and released to edge.

@simskij simskij transferred this issue from canonical/grafana-agent-k8s-operator Feb 8, 2024
shayancanonical added a commit to canonical/mysql-router-operator that referenced this issue Mar 19, 2024
## Issue
We need to introduce COS support for MySQLRouter

## Solution
1. Introduce COS support for MySQLRouter
2. Introduce the scaffolding for secrets to be able to store the
monitoring password in the juju secrets backend

## Prerequisites
1. We need to merge the [snap
PR](canonical/charmed-mysql-snap#41) that adds
the COS machinery to the snap. After, we need to update the snap
revision in the `snap.py` file
2. We need to import the secrets lib after the [following secrets PR in
data-platform-libs](canonical/data-platform-libs#117)
is merged

## TODO
1. ~~Determine why the `All Connections Information` and `Route byte
to/from server` sections on the dashboard are not populated when more
than one router is deployed~~ (see
#93 (comment))
2. ~~Test router log collection (via Loki) when [the following issue is
resolved](canonical/grafana-agent-operator#56)~~ (see
#93 (comment))

## Demo

![image](https://github.com/canonical/mysql-router-operator/assets/99665202/e2173939-c2e8-4de2-a007-bb0dbb2269d4)
6 participants