
localhost:9300 is not being used #670

Open
anuhabi opened this issue Oct 16, 2015 · 4 comments

Comments


anuhabi commented Oct 16, 2015

My current installation:
Elasticsearch: 1.7.2
JDBC river: elasticsearch-jdbc-1.7.1.0-dist
MySQL JDBC driver: mysql-connector-java-5.1.33.jar
MySQL JDBC connection from the ES host: success (tested via a Java program)
Deployment: Google Cloud
Ports open: tcp:9200-9400

I noticed the failure after "[05:15:00,901][INFO ][BaseTransportClient ][pool-3-thread-1] trying to connect to [inet[localhost/127.0.0.1:9300]]". I don't know what is causing the issue. Could you please tell me whether it is a JDBC river plugin issue or a networking/port issue? I appreciate your time and work.

JDBC log:
[05:14:05,675][INFO ][importer.jdbc ][main] index name = wb_sesnsordata, concrete index name = wb_sesnsordata
[05:14:05,683][INFO ][importer ][main] schedule with cron expressions [0 0-59 0-23 ? * *]
[05:15:00,002][INFO ][importer.jdbc ][pool-2-thread-2] index name = wb_sesnsordata, concrete index name = wb_sesnsordata
[05:15:00,018][INFO ][importer.jdbc ][pool-3-thread-1] strategy standard: settings = {type=sensor data, metrics.lastexecutionend=2015-05-10T10:58:00.044Z, sql.0.parameter.0=$metrics.lastexecutions$
[05:15:00,068][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@32af84d3
[05:15:00,110][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@65babc5c
[05:15:00,124][INFO ][BaseTransportClient ][pool-3-thread-1] creating transport client, java version 1.7.0_79, effective settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false,$
[05:15:00,190][INFO ][org.elasticsearch.plugins][pool-3-thread-1] [importer] loaded [support-1.7.1.0-b344fa4], sites []
[05:15:00,901][INFO ][BaseTransportClient ][pool-3-thread-1] trying to connect to [inet[localhost/127.0.0.1:9300]]
[05:15:00,993][WARN ][org.elasticsearch.transport.netty][elasticsearch[importer][transport_client_worker][T#1]{New I/O worker #1}] [importer] Message not fully read (response) for [0] handler future(org.elas$
[05:15:00,991][INFO ][org.elasticsearch.client.transport][pool-3-thread-1] [importer] failed to get node info for [#transport#-1][instance-1.c.massive-seer-107519.internal][inet[localhost/127.0.0.1:9300]], d$
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse]
at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:157) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:132) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_79]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_79]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Caused by: java.lang.NoClassDefFoundError: org/apache/lucene/analysis/standard/StandardAnalyzer
at org.elasticsearch.Version.fromId(Version.java:462) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.Version.readVersion(Version.java:254) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.cluster.node.DiscoveryNode.readFrom(DiscoveryNode.java:324) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.cluster.node.DiscoveryNode.readNode(DiscoveryNode.java:307) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.action.support.nodes.NodeOperationResponse.readFrom(NodeOperationResponse.java:54) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.action.admin.cluster.node.info.NodeInfo.readFrom(NodeInfo.java:200) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.action.admin.cluster.node.info.NodeInfo.readNodeInfo(NodeInfo.java:194) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]

jprante (Owner) commented Oct 19, 2015

NoClassDefFoundError: org/apache/lucene/analysis/standard/StandardAnalyzer

Please add the Lucene jar of Elasticsearch 1.7 to the 'lib' directory to fix this.
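Something like this, as a sketch assuming default archive layouts (Elasticsearch 1.7.x bundles Lucene 4.10.4; the paths below are illustrative, adjust them to your installation):

    # Copy the Lucene core jar bundled with Elasticsearch into the
    # importer's lib directory so the uberjar can resolve StandardAnalyzer.
    # Paths are illustrative -- adjust to your install locations.
    cp /path/to/elasticsearch-1.7.2/lib/lucene-core-4.10.4.jar \
       /path/to/elasticsearch-jdbc-1.7.1.0/lib/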

anuhabi (Author) commented Oct 22, 2015

Hi, unfortunately that did not solve the problem.

When I ran "curl localhost:9300" I got "This is not HTTP port", so I filed a ticket with Google Cloud, where our ES is running. Interestingly, the engineer confirmed that port 9300 is indeed open. The additional information he gave was:

"However, since I have the IP I can see that the port 9300 is indeed open but with the default configuration is expected to receive that error message on that port. More information about this at [3].
On the other hand, I have found a guide that uses port 9200 [4].

[3] elastic/elasticsearch#12355
[4] https://github.com/jprante/elasticsearch-jdbc/wiki/Quickstart
"

My questions for you:

  1. Is your plugin code connecting to port 9300, or to 9200?
  2. As I understand it, internal node communication uses 9300. Since this is a single-node starter setup, it should not be throwing any exceptions, but the stack trace says "[05:37:01,019][ERROR][importer ][pool-3-thread-1] error while getting next input: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
    org.elasticsearch.client.transport.NoNodeAvailableException: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}"

Please help, and thank you so much. I appreciate your time.

[05:36:17,815][INFO ][importer.jdbc ][main] index name = wb_sesnsordata, concrete index name = wb_sesnsordata
[05:36:17,825][INFO ][importer ][main] schedule with cron expressions [0 0-59 0-23 ? * *]
[05:37:00,002][INFO ][importer.jdbc ][pool-2-thread-2] index name = wb_sesnsordata, concrete index name = wb_sesnsordata
[05:37:00,016][INFO ][importer.jdbc ][pool-3-thread-1] strategy standard: settings = {type=sensor_data, url=jdbc:mysql://104.197.83.28:3306/waterbitDB, sql=select md5(concat(sensorID, senseTime)) as "_id", sensorid, moisture1, moisture2, moisture3, moisture4, moisture5, moisture6, moisture7, moisture8, moisture9, airtemp, soiltemp, humidity, light, sensetime, recievetime from waterbitdb.sensordata, user=root, schedule=0 0-59 0-23 ? * *, index=wb_sesnsordata, password=sJp6FJWT}, context = org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext@2190b03c
[05:37:00,018][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@1510bd0b
[05:37:00,023][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@3f2c6850
[05:37:00,034][INFO ][BaseTransportClient ][pool-3-thread-1] creating transport client, java version 1.7.0_79, effective settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
[05:37:00,074][INFO ][org.elasticsearch.plugins][pool-3-thread-1] [importer] loaded [support-1.7.1.0-b344fa4], sites []
[05:37:00,744][INFO ][BaseTransportClient ][pool-3-thread-1] trying to connect to [inet[localhost/127.0.0.1:9300]]
[05:37:01,018][WARN ][org.elasticsearch.client.transport][pool-3-thread-1] [importer] node [#transport#-1][instance-1.c.massive-seer-107519.internal][inet[localhost/127.0.0.1:9300]] not part of the cluster Cluster [elasticsearch], ignoring...
[05:37:01,019][ERROR][importer ][pool-3-thread-1] error while getting next input: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
org.elasticsearch.client.transport.NoNodeAvailableException: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
at org.xbib.elasticsearch.support.client.BaseTransportClient.createClient(BaseTransportClient.java:52) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.elasticsearch.support.client.BaseIngestTransportClient.newClient(BaseIngestTransportClient.java:22) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.elasticsearch.support.client.transport.BulkTransportClient.newClient(BulkTransportClient.java:88) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext$1.create(StandardContext.java:440) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink.beforeFetch(StandardSink.java:94) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.beforeFetch(StandardContext.java:207) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.execute(StandardContext.java:188) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.tools.JDBCImporter.process(JDBCImporter.java:118) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.tools.Importer.newRequest(Importer.java:241) [elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.tools.Importer.newRequest(Importer.java:57) [elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:86) [elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:17) [elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_79]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_79]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_79]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]

jprante (Owner) commented Oct 22, 2015

You show the exception traces, but not the configuration you use, so it is very hard to comment. From the diagnostics I can only guess that you did not configure a host name to connect to, and that you kept the default cluster name elasticsearch, which, it seems, is not the cluster name you actually use.
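The importer definition should name both the host and the actual cluster name. A minimal sketch, assuming the parameter names and invocation from the Quickstart wiki (the host, cluster name, credentials, SQL, and index here are placeholders):

    # Run from the unpacked elasticsearch-jdbc directory. "cluster" must
    # match the name the node reports, and "host" must be reachable on 9300.
    echo '{
        "type" : "jdbc",
        "jdbc" : {
            "url" : "jdbc:mysql://localhost:3306/mydb",
            "user" : "root",
            "password" : "secret",
            "sql" : "select * from sensordata",
            "index" : "wb_sesnsordata",
            "elasticsearch" : {
                "cluster" : "your-actual-cluster-name",
                "host" : "localhost",
                "port" : 9300
            }
        }
    }' | java -cp "lib/*" org.xbib.tools.Runner org.xbib.tools.JDBCImporter

You can read the node's actual cluster name over HTTP first, e.g. curl 'http://localhost:9200/_cluster/health?pretty'.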

anuhabi (Author) commented Oct 25, 2015

It helped. Thank you so much. Hats off to your work.
