localhost:9300 is not being used #670
Comments
Please add the Lucene jar of Elasticsearch 1.7 to the 'lib' directory to fix this.
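For reference, a rough command-line sketch of that fix. The paths and the Lucene version (Elasticsearch 1.7.x ships Lucene 4.10.4, but check your own $ES_HOME/lib) are assumptions; adjust them to your installation:

  # copy the Lucene core jar bundled with Elasticsearch into the importer's lib directory
  # (the /usr/share/elasticsearch and /opt/elasticsearch-jdbc-1.7.1.0 paths are examples only)
  cp /usr/share/elasticsearch/lib/lucene-core-4.10.4.jar /opt/elasticsearch-jdbc-1.7.1.0/lib/
  # verify the jar is in place before re-running the importer
  ls /opt/elasticsearch-jdbc-1.7.1.0/lib | grep -i lucene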
Hi, unfortunately it did not solve the problem. This is what I got when I ran "curl localhost:9300": "However, since I have the IP, I can see that port 9300 is indeed open, but with the default configuration it is expected to receive that error message on that port. More information about this at [3]."
[3] elastic/elasticsearch#12355
The questions to you are:
Please help, and thank you so much! I appreciate your time.
[05:36:17,815][INFO ][importer.jdbc ][main] index name = wb_sesnsordata, concrete index name = wb_sesnsordata
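For what it's worth, that behaviour on 9300 is expected: 9200 is the HTTP port, while 9300 is the binary transport port that the importer's TransportClient uses, so curl is only meaningful against 9200. A quick check, assuming default ports on the same machine:

  # HTTP port: should return a small JSON document including cluster_name and version
  curl -s localhost:9200
  # transport port: not HTTP, so this is expected to fail or print an error
  # such as "This is not a HTTP port" instead of JSON
  curl -s localhost:9300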
You show the exception traces, but not the configuration you use, so it is very hard to comment. From the diagnostics I can only guess that you did not configure a host name to connect to, and you kept the default cluster name.
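For anyone hitting the same wall, a minimal sketch of an importer definition that sets the cluster name and host explicitly. The entry points (org.xbib.tools.Runner / org.xbib.tools.JDBCImporter) and the nested "elasticsearch" settings block follow the importer's README as far as I recall; the install path, database URL, credentials, SQL, index and cluster name below are placeholders to replace with your own values:

  JDBC_IMPORTER_HOME=/opt/elasticsearch-jdbc-1.7.1.0   # adjust to your install path
  lib=$JDBC_IMPORTER_HOME/lib
  echo '{
      "type" : "jdbc",
      "jdbc" : {
          "url" : "jdbc:mysql://localhost:3306/mydb",
          "user" : "myuser",
          "password" : "mypassword",
          "sql" : "select * from sensordata",
          "index" : "wb_sesnsordata",
          "elasticsearch" : {
              "cluster" : "my-cluster-name",
              "host" : "127.0.0.1",
              "port" : 9300
          }
      }
  }' | java -cp "${lib}/*" org.xbib.tools.Runner org.xbib.tools.JDBCImporter

The cluster value must match cluster.name in elasticsearch.yml, and the host must be an address the transport client can actually reach on port 9300.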
It helped. Thank you so much. Hats off to your work.
My current installation:
Elasticsearch: 1.7.2
JDBC River: elasticsearch-jdbc-1.7.1.0-dist
MySQL JDBC Driver: mysql-connector-java-5.1.33.jar
MySQL JDBC Connection from the ES host: Success; tested via java program.
Deployment: Google Cloud.
Ports Open: tcp:9200-9400
I noticed the failure after "[05:15:00,901][INFO ][BaseTransportClient ][pool-3-thread-1] trying to connect to [inet[localhost/127.0.0.1:9300]]". I don't know what is causing the issue. Could you please tell me whether it is a JDBC river plugin issue or a networking/port issue? I appreciate your time and work.
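One way to narrow that down, as a rough sketch (assuming default ports, shell access to the Elasticsearch host, and a package-style install with the config under /etc/elasticsearch): if the HTTP check answers and 9300 is listening on an address the importer can reach, the networking/port side is fine and the problem is on the importer side (classpath or connection settings).

  # does Elasticsearch answer over HTTP, and what cluster_name does it report?
  curl -s localhost:9200
  # is the transport port bound, and to which address (127.0.0.1, an internal IP, or 0.0.0.0)?
  netstat -tln | grep -E '9200|9300'
  # the bind address and cluster name are set in elasticsearch.yml
  grep -E 'network.host|cluster.name' /etc/elasticsearch/elasticsearch.yml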
JDBC.Log
[05:14:05,675][INFO ][importer.jdbc ][main] index name = wb_sesnsordata, concrete index name = wb_sesnsordata
[05:14:05,683][INFO ][importer ][main] schedule with cron expressions [0 0-59 0-23 ? * *]
[05:15:00,002][INFO ][importer.jdbc ][pool-2-thread-2] index name = wb_sesnsordata, concrete index name = wb_sesnsordata
[05:15:00,018][INFO ][importer.jdbc ][pool-3-thread-1] strategy standard: settings = {type=sensor data, metrics.lastexecutionend=2015-05-10T10:58:00.044Z, sql.0.parameter.0=$metrics.lastexecutions$
[05:15:00,068][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@32af84d3
[05:15:00,110][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@65babc5c
[05:15:00,124][INFO ][BaseTransportClient ][pool-3-thread-1] creating transport client, java version 1.7.0_79, effective settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false,$
[05:15:00,190][INFO ][org.elasticsearch.plugins][pool-3-thread-1] [importer] loaded [support-1.7.1.0-b344fa4], sites []
[05:15:00,901][INFO ][BaseTransportClient ][pool-3-thread-1] trying to connect to [inet[localhost/127.0.0.1:9300]]
[05:15:00,993][WARN ][org.elasticsearch.transport.netty][elasticsearch[importer][transport_client_worker][T#1]{New I/O worker #1}] [importer] Message not fully read (response) for [0] handler future(org.elas$
[05:15:00,991][INFO ][org.elasticsearch.client.transport][pool-3-thread-1] [importer] failed to get node info for [#transport#-1][instance-1.c.massive-seer-107519.internal][inet[localhost/127.0.0.1:9300]], d$
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse]
at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:157) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:132) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_79]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_79]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Caused by: java.lang.NoClassDefFoundError: org/apache/lucene/analysis/standard/StandardAnalyzer
at org.elasticsearch.Version.fromId(Version.java:462) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.Version.readVersion(Version.java:254) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.cluster.node.DiscoveryNode.readFrom(DiscoveryNode.java:324) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.cluster.node.DiscoveryNode.readNode(DiscoveryNode.java:307) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.action.support.nodes.NodeOperationResponse.readFrom(NodeOperationResponse.java:54) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.action.admin.cluster.node.info.NodeInfo.readFrom(NodeInfo.java:200) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]
at org.elasticsearch.action.admin.cluster.node.info.NodeInfo.readNodeInfo(NodeInfo.java:194) ~[elasticsearch-jdbc-1.7.1.0-uberjar.jar:?]