Module update and path to supporting 7.x #1068
We're using both Ansible and Puppet for managing our infrastructure.
Please ensure support for upgrading from 6.x to 7.x for instances that were installed using this module.
Since Ubuntu 20.04 is near and documentation for 8.x is starting to appear, please test at least some of this against those where possible, so the transition going forward is as easy as possible. Also, version 7.x can be run with this module if JVM settings are set directly. It's not clear to me why those JVM settings are not just pulled more directly from the base installation, with the ability to tweak them from there. Ideally this module would pull as much as possible from what's known to be a good base installation and provide the ability to change the important configurations from there. Thanks for your work on this!
@fatmcgav I honestly disagree with the multiple-instances removal for this module. The Docker options, while nice, should be handled by other methods like docker-compose or Helm charts for k8s. This is still a "puppet" module for deploying and maintaining Elastic on a full VM/bare-metal install. It definitely has a different audience than the "docker" folks. Unless the plan for this module is to use a Docker module as a dependency and spin up Elastic inside containers leveraging ECE or something, I feel that instance support should remain for the foreseeable future. The users of this module most likely wouldn't even be using it, or Puppet for that matter, in the case of deploying to a container setup. With that said, I don't think that allows for a "one size fits all" model unless, as mentioned above, this module will now support the installation of Elastic on containers via Puppet, in which case multi-instance would still be supported, but via containers rather than systemd/JVM processes.
Master installed on the same host as a data node should absolutely be removed. That is already covered by having both node.master and node.data as true. This is a better solution than having two instances of Elasticsearch running on the same VM. https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html Having two separate instances of Elastic running was cited by the previous maintainer as being one of the main challenges of moving this forward. It should be removed. I'm also using both Puppet and Docker images for our deploys. I think most shops working with an updated Puppet and Elastic will have some Docker workflow.
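To illustrate the node-roles approach from the linked docs, here is a minimal `elasticsearch.yml` sketch (the 6.x/7.x-era boolean settings; values are illustrative only):

```yaml
# Combined master + data node (the single-instance approach discussed above):
node.master: true
node.data: true

# A dedicated master-eligible node would instead use:
#   node.master: true
#   node.data: false
#   node.ingest: false
```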
Running data and master in the same node is great for non-prod use. However, at a production level, having the master and data on separate JVMs, so that ingestion OOM exceptions can't take down the master instance, is always best. In a VM world this is fine, but real bare-metal hardware can have tons of memory, so the best way to utilise hardware with 64GB+ of memory is to run multiple instances.
I think that is a pretty big assumption. If you are using containers, there is too much bloat in this module to maintain that deployment; you have no reason for half of what this controls. Unless you're running full-blown systemd containers, I'm not sure how this module would manage Elastic on Docker.
First of all, thank you very much for providing a Puppet module for Elasticsearch. Today I noticed this issue and I read this:
I don't understand it. If you use Elasticsearch in Docker, then you probably don't use this Puppet module. I don't want to have 5 Docker containers; I want 5 Elasticsearch instances on my machine. Also, to massively simplify this module, the first step from my point of view would be to make it more "puppet"-like. You have a lot of custom functions/types/etc. in it which are not needed; you could have done that with Puppet standard features.
Firstly, thank you to everyone who has taken the time to provide feedback, 👍's etc, and apologies for not updating sooner. In the background, we've been working on resolving some latent issues with the failing tests, updating dependencies etc. Please be assured that we don't make the decision to remove the multi-instance functionality lightly. With that being said, we appreciate that in certain scenarios, such as dev or test clusters etc, you are often forced to work with the hardware that's available. So we will be exploring whether we can provide some more guidance, examples etc around running multiple instances on a single node using other tools, such as Docker. I'll follow up and address more specific feedback in separate comments.
@zofog We will be working to make the upgrade and migration path between 6.x and 7.x as smooth as possible.
@andrewbierbaum Matching the "default" JVM options etc as closely as possible will be part of this work, as we are aware that the divergence is an issue.

@cdenneen I appreciate that when running on bare metal, you will want to maximise the resource utilisation of the underlying hardware. However, that is where Docker provides better guarantees around resource separation etc.
@mookie- It's worth noting that Puppet itself has changed a lot over recent years, with the introduction of richer typing etc. And the language is continually evolving and adding new functionality. So reviewing the existing "custom" pieces is part of the plan.
I honestly think that if you stop supporting multi-instance, people will move away from this module completely in favor of ECK or Elastic's official Helm chart. Not to say there wouldn't be some way to cobble this together in a container scenario, but honestly, if you are setting things up in a "container" fashion, you aren't going to use something that's basically shoehorning a "full stack" solution into a "container" model.
I am wondering about the instance notation that currently looks like this: `elasticsearch::instance { 'es-01':`. Are you planning to remove that when dealing with the multi-instance part of the module? If so, do you have any examples of how we have to deal with changing from the old to the new notation?
@gvenaas yes, the intention is to remove the `elasticsearch::instance` resource. With regards to migration: not currently. However, we will be working on making the migration as simple as possible.
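For anyone comparing the two notations, here is a rough sketch; the single-instance form is an assumption based on this thread, not final documentation:

```puppet
# Old multi-instance notation (module 6.x): one instance resource per ES process.
elasticsearch::instance { 'es-01':
  config => { 'node.name' => 'es-01' },
}

# Assumed new single-instance notation: everything on the main class.
class { 'elasticsearch':
  version => '7.5.0',
  config  => { 'node.name' => $facts['networking']['hostname'] },
}
```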
Can removing multi-instance be reconsidered? Removing multi-instance support and telling users to re-deploy an Elasticsearch cluster using Docker is a pretty large request, especially for the types of users that would go through the effort of using Puppet to deploy multiple instances; oftentimes those are the users with many hundreds of nodes per cluster and many clusters. Adding to what @cdenneen said above: once you get big enough, you often run Elasticsearch on bare metal and you use some heavy automation. VMs are way too expensive. Docker is an unnecessary complexity unless you are already a Docker-only shop. Your servers will have way more hardware than any single instance of Elasticsearch would be able to use on its own, and you have so many servers and nodes that it is impossible to manage them any other way than through automation (like this Puppet module). What is likely to happen is that many of the large users of this Puppet module will either hard-fork it or stop using it altogether, since the official stance from Elastic is to use Docker (and thus... not Puppet) if you have bare-metal hardware.
Any update on version 7 support?
+1 |
Hey there, firstly, apologies for the radio silence on this issue. The good news is that the `master` branch now contains support for Elasticsearch 7.x. The caveat is that the documentation hasn't been completely updated yet. We also don't have a supported upgrade path from 6.x to 7.x yet, due to the breaking changes around removing the multi-instance support, which changed all the paths on disk etc. The documentation updates and upgrade path should be coming in the near future. If anyone does try the new functionality, feedback is most definitely welcome. Thanks again for all your patience.
@fatmcgav I just did a quick deployment on Debian 10 (nothing fancy, just a quick "first run" in a VM to give me an idea and for the purpose of evaluating the changes to the module), and everything was smooth. Better: everything was as I expected: no need to re-apply a catalog to "fix" the configuration after an update; no need to set dubious env vars to actually update Elasticsearch ❤️ 🧡 💛 💚 💙 Two things I thought of while testing:
Trying to get version 7 installed from the master branch per @fatmcgav's advice above, using a super simple config, I get an error. Any ideas?
EDIT - removing or renaming the file `lib/puppet/provider/elastic_parsedfile.rb` has resolved the above issue, but that's not likely a long-term solution.
UPDATE - I think the above condition is resolved by upgrading the `elastic_stack` module to the latest Forge release, v7.0.0, in case anyone else encounters it.
From internal testing, it looks like the mandatory/required files `jvm.options` and `log4j2.properties` are not currently managed by the module. Although they're in the directory when the default installer runs, when transitioning clients from a legacy version of the module (e.g. the last Forge release, v6.4.0) these files are missing by design of the previous version of the module.
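Until the module manages those files itself, one possible workaround is to ship them with plain `file` resources alongside the module. This is only a sketch: the target paths assume a default RPM/DEB layout, and the `puppet:///modules/profile/...` source paths are hypothetical names for wherever you keep known-good copies:

```puppet
# Hypothetical workaround: manage jvm.options and log4j2.properties ourselves
# for clients migrated from the legacy module, which omitted them by design.
file { '/etc/elasticsearch/jvm.options':
  ensure => file,
  owner  => 'root',
  group  => 'elasticsearch',
  mode   => '0660',
  source => 'puppet:///modules/profile/elasticsearch/jvm.options',  # assumed path
  notify => Service['elasticsearch'],
}

file { '/etc/elasticsearch/log4j2.properties':
  ensure => file,
  owner  => 'root',
  group  => 'elasticsearch',
  mode   => '0660',
  source => 'puppet:///modules/profile/elasticsearch/log4j2.properties',  # assumed path
  notify => Service['elasticsearch'],
}
```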
Any updates on when this is ready to be released? |
What's the current timeline look like on getting the 7.x support added? We are definitely patiently waiting. :) |
@fatmcgav We're waiting too. |
And waiting as well |
Apologies again for the lack of progress here... Other priorities took over... With that being said, I'd like to announce that the new version of the module has now been released. I'd also like to call out the warning from the README:

> This module DOES NOT support upgrading or migrating existing multi-instance deployments! Our current recommendation is to deploy Elasticsearch to new nodes and migrate the data over.

We're planning on trying to make this story better, however I'm honestly not sure when that will happen. Feel free to reach out with any issues or questions, and thanks again for everyone's patience and comments on this issue. Hope everyone has a good festive period and the computer gods are kind!
Agreed.
Recommendation: deploy to fresh, new nodes. Even the official rolling-upgrade procedure from Elastic does not provide a rolling downgrade, so when you run into trouble during installation, good luck to you :(
I think adding an explicit variable for node role would help: node.master, node.data, etc.
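Pending an explicit parameter, something like this should already be expressible through the module's `config` hash, assuming it passes keys straight through to `elasticsearch.yml` (a sketch, not verified against the 7.x branch):

```puppet
# Sketch: a data-only node expressed via the module's config hash.
class { 'elasticsearch':
  config => {
    'node.master' => false,
    'node.data'   => true,
    'node.ingest' => true,
  },
}
```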
Any updates on the "upgrade path" challenge? How about allowing a module rename: one module for Elasticsearch 6, one for Elasticsearch 7, etc.? Then we could use them in parallel within the same Puppet environment.
Hi,
with the command used: `/opt/puppetlabs/bin/puppet generate types --environment nxms_production --force`
FWIW, I've tried to deploy Elasticsearch 7.x to a clean RHEL 8 server and the service won't start. The README is confusing, as it mentions support for 7 in some places but not others. Some clear details in the README on what is supported, and perhaps some clarification here from @fatmcgav, would be really helpful.
Hello, I am asking here about my case:
Hey all,
So I wanted to open this issue to give anyone watching an update on what's next for this module.
Firstly, apologies from Elastic that this module has not been given the time and effort it deserves.
The plan for the next steps for this module is as follows:
1. Refresh dependencies and supported versions. This will unfortunately require dropping support for any Puppet version older than `5.x`, and any Elasticsearch version older than `6.x`.
2. Remove support for installing and managing multiple instances on a single host. This was the "recommended" method a few years ago in order to maximise the memory utilisation of a given host containing more than 32GB of RAM. However, other methods have become more prevalent since, such as running Elasticsearch in Docker. Therefore, in order to massively simplify this module, we will be removing support for installing and managing multiple instances on a single host.
3. Update the module to support installing Elasticsearch 7. This is the biggest thing that most of you are asking for: support for installing and managing Elasticsearch `7.x`. The good news is: it's coming!
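As a rough illustration of the Docker alternative mentioned in point 2, here is a minimal Docker Compose sketch of two Elasticsearch 7.x nodes sharing one host. The image tag, ports, and heap sizes are placeholder assumptions, not maintainer recommendations:

```yaml
# Hypothetical docker-compose.yml: two ES 7.x containers forming one cluster
# on a single large host, replacing the module's old multi-instance layout.
version: "3"
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    environment:
      - node.name=es01
      - cluster.name=demo
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"   # example heap; size per host memory
    ports:
      - "9200:9200"
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    environment:
      - node.name=es02
      - cluster.name=demo
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
    ports:
      - "9201:9200"
```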
Feel free to ask any questions or raise any concerns you might have with any of the above, and we'll do our best to work with you.
And thanks again to everyone who's taken the time to open issues or PRs - it's really appreciated.