Address of remote node in CCS context is not available #44976
Comments
Pinging @elastic/es-distributed
@rtkjliviero could you please elaborate on the usage scenario?
I think that the user is asking if we can expose the http address of the remote nodes. We removed that because it required a remote call to each of the nodes when calling remote info: the http port is not part of the info kept around about each remote node, while the transport port is. Please correct me if I am wrong. I would also be curious to hear what the use case is for this.
@javanna is correct - I should have been more clear. Thanks! My particular use case is:
...but in general, there is no guarantee that a remote node is using port 9200 for http - it could be configured to use any arbitrary port. I'm using a single "coordinator" node connected to several remote clusters to facilitate CCS, but I'd like to be able to perform actions on them that are not covered by CCS functionality. To do so I'd like to have a way of discovering the correct port so that I can connect to the cluster directly when necessary.
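To make the gap concrete, here is a minimal sketch of what the remote info API returns today. The response shape below is an assumption modeled on Elasticsearch 7.x; the point is that the `seeds` field carries *transport* addresses (default port 9300), with no HTTP address anywhere in the payload.

```python
import json

# Sample _remote/info response (shape assumed from Elasticsearch 7.x docs).
# The seeds are transport addresses -- the HTTP port is simply not present.
sample_response = json.loads("""
{
  "cluster_one": {"seeds": ["127.0.0.1:9300"], "connected": true,
                  "num_nodes_connected": 1, "skip_unavailable": false},
  "cluster_two": {"seeds": ["127.0.0.1:9301"], "connected": true,
                  "num_nodes_connected": 1, "skip_unavailable": false}
}
""")

# The most you can recover per cluster alias: its seed transport addresses.
seeds = {alias: info["seeds"] for alias, info in sample_response.items()}
print(seeds)
```

Nothing in this response lets a client derive the HTTP endpoint when a node is not on the default port.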
Thanks for the explanation @rtkjliviero! Why would it be fragile to keep track of the different nodes and their http ports? I rather think that it would be a burden to have to maintain this redundant info (the http port of remote nodes), or make remote calls to be able to return it when it's not needed. It would be much cleaner to track which nodes run on which http port externally, as nodes are started.
That's kind of what I meant - I'll have to maintain a separate mapping which could change at any time (say I start using another hypothetical tool that reserves one of the non-9200 ports I have already chosen, and I have to change my config). Ideally I would just maintain a single docker-compose file, and detect the correct http.port values at runtime by using the `_remote/info` API.

In my case I can certainly track the nodes and their ports externally, since I'm defining the test environment. But in a "live" environment, I might not have that information. In that case I'd never be able to reliably connect to a node that wasn't using the default HTTP port.
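For illustration, the single-host test setup described above might look like the following docker-compose fragment. All service names, image tags, and port numbers here are hypothetical; the point is only that a remote node's HTTP port can be an arbitrary, non-default value.

```yaml
# Hypothetical docker-compose fragment: one coordinator plus one remote
# cluster on the same host, so the remote node cannot also use 9200.
services:
  coordinator:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
    ports:
      - "9200:9200"
  remote_one:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
    environment:
      - http.port=9250        # arbitrary, non-default HTTP port
      - transport.port=9350   # the port CCS seeds actually point at
    ports:
      - "9250:9250"
      - "9350:9350"
```

Only the transport port (9350) is discoverable through CCS seed configuration; the HTTP port (9250) must be known out of band.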
I see what you mean, it's a burden somehow, but it makes little sense for Elasticsearch to do this with the only goal of simplifying application code, given that it can be done externally.

On the mapping possibly changing at any time: the http port can only change by modifying the config file and restarting the node. I also don't follow the point around a "live" environment: if you want to connect to your nodes and send requests to them, you need to know their http ports, no matter which cluster they belong to. And that is decided when the node has started. That is why the record of all the nodes and their http addresses should be kept outside of Elasticsearch, in my opinion.
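The suggestion above can be sketched in a few lines: keep the HTTP addresses of the remote clusters in an external mapping, maintained wherever the nodes are started. The aliases, ports, and helper name here are all hypothetical.

```python
# A minimal sketch of tracking remote-cluster HTTP addresses externally,
# keyed by the same aliases used for CCS. All values are hypothetical.
remote_http = {
    "cluster_one": "http://127.0.0.1:9201",
    "cluster_two": "http://127.0.0.1:9202",
}

def http_url(alias: str, path: str) -> str:
    """Build a direct HTTP URL for a remote cluster known by its CCS alias."""
    return f"{remote_http[alias]}/{path.lstrip('/')}"

print(http_url("cluster_two", "/_cat/indices"))
```

This keeps the redundant information (alias to http address) in the deployment tooling rather than inside Elasticsearch, which is the trade-off being discussed.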
@rtkjliviero as far as I understand, you want to get remote node ports by invoking `_remote/info`.
@rtkjliviero I've just realized that you're asking about HTTP ports, not transport ports. The PR you've linked is about a different change.
Understood, that's fair. Thanks for taking the time to discuss this, @javanna and @andrershov! I'll go ahead and close this issue. |
Feature request
Expose (at least) seed node address information in `remote/info`. I see that this feature was actually removed as part of this PR. It would be very useful to have it back, even with the caveat that it is best-effort.
This is particularly relevant when nodes do not use 9200 as their HTTP port - for example, in a multi-node testing situation on a single host.