
[BUG] Doubled haproxy entries in prometheus.yml after an upgrade #2997

Closed
7 of 18 tasks
rafzei opened this issue Feb 28, 2022 · 0 comments
rafzei commented Feb 28, 2022

Describe the bug
Prometheus fails to start after the upgrade because of a duplicated haproxy target in its configuration.
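
For context, the failure mode is that the upgrade appears to append a second haproxy scrape job instead of replacing the existing one. A duplicated entry would look roughly like the sketch below (the target name is illustrative, not taken from the actual generated config); Prometheus refuses to load a configuration in which two scrape jobs share the same job_name:

```yaml
scrape_configs:
  - job_name: 'haproxy'   # entry from the original 1.3 deployment
    static_configs:
      - targets: ['load-balancer-machine:9101']
  - job_name: 'haproxy'   # duplicate appended during the 2.0 upgrade
    static_configs:
      - targets: ['load-balancer-machine:9101']
```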

How to reproduce
Steps to reproduce the behavior:

  1. Execute epicli apply on version 1.3.
  2. Execute epicli upgrade on version 2.0.

Expected behavior
Prometheus is running after the upgrade.

Config files

...
  components:
    kubernetes_master:
      count: 1
      machine: kubernetes-master-machine
      configuration: default
      subnets:
      - address_pool: 10.1.1.0/24
    kubernetes_node:
      count: 1
      machine: kubernetes-node-machine
      configuration: default
      subnets:
      - address_pool: 10.1.1.0/24
    logging:
      count: 1
      machine: logging-machine
      configuration: default
      subnets:
      - address_pool: 10.1.3.0/24
    monitoring:
      count: 1
      machine: monitoring-machine
      configuration: default
      subnets:
      - address_pool: 10.1.4.0/24
    kafka:
      count: 1
      machine: kafka-machine
      configuration: default
      subnets:
      - address_pool: 10.1.5.0/24
    postgresql:
      count: 1
      machine: postgresql-machine
      configuration: default
      subnets:
      - address_pool: 10.1.6.0/24
    load_balancer:
      count: 1
      machine: load-balancer-machine
      configuration: default
      subnets:
      - address_pool: 10.1.7.0/24
    rabbitmq:
      count: 1
      machine: rabbitmq-machine
      configuration: default
      subnets:
      - address_pool: 10.1.8.0/24
    ignite:
      count: 0
      machine: ignite-machine
      configuration: default
      subnets:
      - address_pool: 10.1.9.0/24
    opendistro_for_elasticsearch:
      count: 0
      machine: logging-machine
      configuration: default
      subnets:
      - address_pool: 10.1.10.0/24
    repository:
      count: 1
      machine: repository-machine
      configuration: default
      subnets:
      - address_pool: 10.1.11.0/24
    single_machine:
      count: 0
      machine: single-machine
      configuration: default
      subnets:
      - address_pool: 10.1.1.0/24
version: 1.3.0

Environment
Faced on Ubuntu 20.04 on Azure.

epicli version: 1.3 -> 2.0
Additional context
This is most probably related to this change: 12b99bf#diff-5737624263c568e51436b0b0b7ac25c161beebf7ceae84954b7339f1e5b6aaa6R168
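
To help confirm the duplication, a quick way to spot the offending entries is to scan prometheus.yml for repeated job_name lines. This is a minimal stdlib-only sketch (the helper name and the sample config are hypothetical, not part of epicli):

```python
# Hypothetical helper: find scrape jobs whose job_name appears more
# than once in a prometheus.yml, since Prometheus rejects a config
# with duplicated job names at startup.
from collections import Counter

def find_duplicate_jobs(config_text):
    """Return job names that occur more than once in the config text."""
    jobs = []
    for line in config_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("- job_name:") or stripped.startswith("job_name:"):
            # Take the value after the colon and drop surrounding quotes.
            name = stripped.split(":", 1)[1].strip().strip("'\"")
            jobs.append(name)
    return [job for job, count in Counter(jobs).items() if count > 1]

sample = """\
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'haproxy'
    static_configs:
      - targets: ['load-balancer-machine:9101']
  - job_name: 'haproxy'
    static_configs:
      - targets: ['load-balancer-machine:9101']
"""
print(find_duplicate_jobs(sample))  # ['haproxy']
```

Running this against the upgraded cluster's prometheus.yml (or `promtool check config prometheus.yml`, which reports the same class of error) should make the doubled haproxy job visible.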


DoD checklist

  • Changelog
    • updated
    • not needed
  • COMPONENTS.md
    • updated
    • not needed
  • Schema
    • updated
    • not needed
  • Backport tasks
    • created
    • not needed
  • Documentation
    • added
    • updated
    • not needed
  • Feature has automated tests
  • Automated tests passed (QA pipelines)
    • apply
    • upgrade
    • backup/restore
  • Idempotency tested
  • All conversations in PR resolved