
Query http listeners #10

Closed
clintongormley opened this issue Feb 14, 2010 · 3 comments
@clintongormley
Contributor

In the same way as you can find out if a node is a data node or not, it'd be good to tell if a node has http enabled or not.

One of the things I'd like to be able to do is query one node about
the other http-enabled nodes in the cluster (in the same way as you can
find out which nodes are data nodes).

In other words, one of my clients starts up, queries the 'main' node
about which listeners are available, then randomly selects one of those
nodes.

The idea is to spread the load between the nodes, and also, if a node
goes down, my client already has a list of other nodes that it can
try connecting to.
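The selection strategy described above could be sketched roughly as follows (a hypothetical client-side helper; `pick_node` and the address list are illustrative, not part of any elasticsearch API):

```python
import random

def pick_node(http_nodes, down=frozenset()):
    """Randomly choose one http-enabled node, skipping nodes known to be down.

    http_nodes is the list of "host:port" strings the client fetched from
    the cluster at startup; returns None when no node is reachable.
    """
    candidates = [n for n in http_nodes if n not in down]
    return random.choice(candidates) if candidates else None

nodes = ["10.0.0.1:9200", "10.0.0.1:9201"]
chosen = pick_node(nodes)                   # spread load randomly
fallback = pick_node(nodes, down={chosen})  # failover: skip a dead node
```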

thanks

Clint

@kimchy
Member

kimchy commented Feb 14, 2010

Makes sense. This will be part of the admin cluster node info API (REST is: http://localhost:9200/_cluster/nodes?pretty=true).

The JSON will be:

{
  "clusterName" : "elasticsearch",
  "nodes" : {
    "mackimchy-45484" : {
      "name" : "Commander Kraken",
      "transportAddress" : "inet[10.0.0.1/10.0.0.1:9300]",
      "dataNode" : true,
      "httpAddress" : "inet[/10.0.0.1:9200]"
    },
    "mackimchy-13357" : {
      "name" : "Ramshot",
      "transportAddress" : "inet[/10.0.0.1:9301]",
      "dataNode" : true,
      "httpAddress" : "inet[/10.0.0.1:9201]"
    }
  }
}

Note that it will now wrap the nodes in the "nodes" element for simpler usage.
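A client consuming this response would need to turn the `inet[...]` notation of the `httpAddress` fields into plain `host:port` strings; a minimal sketch, assuming the response shape shown above (the `http_addresses` helper is hypothetical):

```python
import json
import re

# Sample response in the shape shown above (abbreviated to the fields used).
SAMPLE = """
{
  "clusterName" : "elasticsearch",
  "nodes" : {
    "mackimchy-45484" : { "name" : "Commander Kraken",
                          "httpAddress" : "inet[/10.0.0.1:9200]" },
    "mackimchy-13357" : { "name" : "Ramshot",
                          "httpAddress" : "inet[/10.0.0.1:9201]" }
  }
}
"""

def http_addresses(info):
    """Return plain host:port strings for every node that has http enabled.

    A node without http enabled would simply lack the httpAddress field,
    so such nodes are skipped rather than raising.
    """
    addresses = []
    for node in info["nodes"].values():
        addr = node.get("httpAddress")
        if addr:
            # "inet[/10.0.0.1:9200]" -> "10.0.0.1:9200"
            m = re.search(r"([\d.]+:\d+)\]$", addr)
            if m:
                addresses.append(m.group(1))
    return addresses

print(http_addresses(json.loads(SAMPLE)))  # ['10.0.0.1:9200', '10.0.0.1:9201']
```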

@kimchy
Member

kimchy commented Feb 14, 2010

Query http listeners. Closed by b5f3fc9.

@kimchy
Member

kimchy commented Feb 14, 2010

Oh, and I added a parameter to include node settings; just use: http://localhost:9200/_cluster/nodes?settings=true

This issue was closed.