
[BUG] [Elasticsearch] Error when having multiple VMs and non-clustered mode #2332

Closed
6 of 10 tasks
to-bar opened this issue May 19, 2021 · 3 comments

@to-bar
Contributor

to-bar commented May 19, 2021

Describe the bug
Deployment fails when both conditions are true:

  • more than 1 VM for the logging component
  • specification.clustered: false

The ES API is not available and epicli fails on the Kibana task:

```
TASK [kibana : Wait for Kibana to be ready]
```

How to reproduce
Steps to reproduce the behavior:

  1. In the config file:

```yaml
kind: epiphany-cluster
specification:
  components:
    logging:
      count: 3
---
kind: configuration/logging
specification:
  clustered: false
```

  2. Execute epicli apply

Expected behavior
Cluster deployed with 3 independent (non-clustered) Elasticsearch instances.

Environment

  • Cloud provider: All
  • OS: All

epicli version: found in 0.8.2, but present in the develop branch (1.1.0) as well.

Additional context
On an affected host, elasticsearch.yml contains incorrect settings (3 nodes listed in cluster.initial_master_nodes even though clustering is disabled):

```yaml
network.host: ec2-A-B-C-D
cluster.initial_master_nodes: ["ec2-A-B-C-D","ec2-X-Y-Z-221","ec2-X-Y-Z-29"]
```
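For comparison, a genuinely non-clustered deployment would be expected to generate per-host settings along these lines (a sketch based on standard Elasticsearch single-node discovery, not the actual epicli output; the hostnames are the placeholders from above):

```yaml
# Sketch of expected per-host settings in non-clustered mode:
# each node discovers only itself instead of forming a cluster.
network.host: ec2-A-B-C-D
discovery.type: single-node
# cluster.initial_master_nodes must not be set together with
# discovery.type: single-node
```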

DoD checklist

  • Changelog updated (if affected version was released)
  • COMPONENTS.md updated / doesn't need to be updated
  • Automated tests passed (QA pipelines)
    • apply
    • upgrade
  • Case covered by automated test (if possible)
  • Idempotency tested
  • Documentation updated / doesn't need to be updated
  • All conversations in PR resolved
  • Backport tasks created / doesn't need to be backported
@to-bar to-bar self-assigned this May 19, 2021
@to-bar to-bar changed the title [BUG] [Elasticsearch] Error when having more than one VM and 'clustered: false' specified [BUG] [Elasticsearch] Error when having multiple VMs and non-clustered mode May 20, 2021
@to-bar to-bar added this to the S20210603 milestone May 20, 2021
@przemyslavic przemyslavic self-assigned this May 24, 2021
@przemyslavic
Collaborator

@to-bar

  1. It doesn't work as it should. There were no errors during installation, but there are other problems. With multiple non-clustered nodes, only one of them has admin_password set according to the specification; for the rest, the password remains the default admin. The same applies to kibanaserver_password and logstash_password.
    This is probably because the Set OpenDistro admin password task runs on only one node, so the other nodes keep the default values.
    The task [logging : Load vars into variable] also runs on only one node. It seems the whole logging role would have to be refactored to support non-clustered mode, but I don't know if that makes sense.
  2. Test scripts for logging, filebeat and kibana have to be adjusted for the clustered and non-clustered cases.
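The behavior described in point 1 matches the standard Ansible run_once pattern, sketched below (a hypothetical playbook fragment; only the task name comes from the comment above, the inventory group name is assumed):

```yaml
# Hypothetical sketch of the behavior described in point 1.
# A task with run_once executes on a single host of the play,
# so per-host secrets are applied there only and the remaining
# logging hosts keep their default passwords.
- hosts: logging                            # assumed inventory group
  tasks:
    - name: Set OpenDistro admin password   # task name from the comment
      run_once: true                        # runs on one host only
      debug:
        msg: "password would be changed here (placeholder)"
```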

Moving back to todo for some discussion.

@mkyc mkyc modified the milestones: S20210603, S20210617 Jun 7, 2021
@mkyc mkyc modified the milestones: S20210617, S20210701, S20210715 Jun 18, 2021
@mkyc mkyc removed this from the S20210715 milestone Jul 5, 2021
@erzetpe erzetpe assigned erzetpe and unassigned przemyslavic and to-bar Jul 9, 2021
@erzetpe
Contributor

erzetpe commented Jul 9, 2021

After consulting @mkyc and others, we agreed that we do not support multiple VMs in non-clustered mode, so this error occurs only with an unsupported configuration. I will update our documentation, and we should remove the clustered flag, because clustering will be automatic for any number of nodes greater than 1.

@erzetpe erzetpe linked a pull request Jul 12, 2021 that will close this issue
@przemyslavic przemyslavic self-assigned this Jul 16, 2021
@przemyslavic
Collaborator

✔️ Tested with 1 opendistro/logging node
✔️ Tested with 3 opendistro/logging nodes
