Pulsar SQL example cannot reproduce in master (2.11.0-SNAPSHOT) #16354

Closed

tisonkun opened this issue Jul 3, 2022 · 3 comments

Labels: type/bug

tisonkun commented Jul 3, 2022

Describe the bug

Following the instructions at https://pulsar.apache.org/docs/sql-getting-started doesn't give the expected result: the sql-worker fails to connect to the ZooKeeper server.

To Reproduce

  1. Compile Pulsar from the master branch (2.11.0-SNAPSHOT).
  2. Start a standalone Pulsar cluster (from pulsar/distribution/server/target/apache-pulsar-2.11.0-SNAPSHOT): ./bin/pulsar standalone
  3. Run the sql-worker: ./bin/pulsar sql-worker run
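
For reference, the same steps as one shell session. This is a minimal sketch: the Maven invocation is an assumption (any equivalent build of the distribution works); the rest matches the steps above.

```shell
# Build the server distribution from the master branch
# (build flags are an assumption, not from the report)
mvn install -DskipTests

# Start a standalone Pulsar cluster from the packaged distribution
cd distribution/server/target/apache-pulsar-2.11.0-SNAPSHOT
./bin/pulsar standalone

# In a second terminal, from the same directory, run the SQL worker
./bin/pulsar sql-worker run
```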

Expected behavior

No error; the subsequent commands in the example can be run.
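
For context, the getting-started guide linked above expects the Presto shell to then work end to end, starting with listing the catalogs, along these lines (output paraphrased from the doc, not captured from this environment):

```
$ ./bin/pulsar sql
presto> show catalogs;
 Catalog
---------
 pulsar
 system
(2 rows)
```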

Screenshots

The ZK issue has been resolved; see the further issues in #16354 (comment).

Desktop (please complete the following information):

  • OS: macOS 10.15.7

Additional context

java -version
openjdk version "17.0.2" 2022-01-18
OpenJDK Runtime Environment Temurin-17.0.2+8 (build 17.0.2+8)
OpenJDK 64-Bit Server VM Temurin-17.0.2+8 (build 17.0.2+8, mixed mode, sharing)

tisonkun commented Jul 3, 2022

OK. The ZK problem could be that, as PIP-117 (#13302) moves forward (we should update the docs for this), ZooKeeper no longer always starts by default. However, Pulsar SQL depends on ZK settings and thus generates errors like the above.

With PULSAR_STANDALONE_USE_ZOOKEEPER=1 ./bin/pulsar standalone, the problem above is overcome.
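
Concretely, the workaround looks like this (same distribution directory as in the reproduction steps):

```shell
# Force the ZooKeeper-backed metadata store; after PIP-117, standalone
# on master no longer starts ZK by default
PULSAR_STANDALONE_USE_ZOOKEEPER=1 ./bin/pulsar standalone

# In another terminal, the SQL worker can now reach ZK at 127.0.0.1:2181
./bin/pulsar sql-worker run
```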

However, the Pulsar SQL example still doesn't run correctly. Now the show catalogs; query hangs:

$ ./bin/pulsar sql 
presto> SHOW CATALOGS;

Query 20220703_154025_00001_c5n98, RUNNING, 1 node, 0 splits


tisonkun commented Jul 4, 2022

SQL Worker logs:

2022-07-04T19:05:53.256+0800	INFO	main	stdout	2022-07-04T19:05:53,256 - INFO  - [main:Environment@98] - Client environment:java.library.path=/Users/tison/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.

2022-07-04T19:05:53.256+0800	INFO	main	stdout	2022-07-04T19:05:53,256 - INFO  - [main:Environment@98] - Client environment:java.io.tmpdir=/var/folders/bl/ls5z3wdx67962l9v7gvn82sc0000gn/T/

2022-07-04T19:05:53.256+0800	INFO	main	stdout	2022-07-04T19:05:53,256 - INFO  - [main:Environment@98] - Client environment:java.compiler=<NA>

2022-07-04T19:05:53.257+0800	INFO	main	stdout	2022-07-04T19:05:53,257 - INFO  - [main:Environment@98] - Client environment:os.name=Mac OS X

2022-07-04T19:05:53.257+0800	INFO	main	stdout	2022-07-04T19:05:53,257 - INFO  - [main:Environment@98] - Client environment:os.arch=x86_64

2022-07-04T19:05:53.257+0800	INFO	main	stdout	2022-07-04T19:05:53,257 - INFO  - [main:Environment@98] - Client environment:os.version=10.15.7

2022-07-04T19:05:53.257+0800	INFO	main	stdout	2022-07-04T19:05:53,257 - INFO  - [main:Environment@98] - Client environment:user.name=tison

2022-07-04T19:05:53.257+0800	INFO	main	stdout	2022-07-04T19:05:53,257 - INFO  - [main:Environment@98] - Client environment:user.home=/Users/tison

2022-07-04T19:05:53.257+0800	INFO	main	stdout	2022-07-04T19:05:53,257 - INFO  - [main:Environment@98] - Client environment:user.dir=/Users/tison/Brittani/pulsar/distribution/server/target/apache-pulsar-2.11.0-SNAPSHOT/lib/presto

2022-07-04T19:05:53.257+0800	INFO	main	stdout	2022-07-04T19:05:53,257 - INFO  - [main:Environment@98] - Client environment:os.memory.free=858MB

2022-07-04T19:05:53.258+0800	INFO	main	stdout	2022-07-04T19:05:53,257 - INFO  - [main:Environment@98] - Client environment:os.memory.max=16384MB

2022-07-04T19:05:53.258+0800	INFO	main	stdout	2022-07-04T19:05:53,258 - INFO  - [main:Environment@98] - Client environment:os.memory.total=1024MB

2022-07-04T19:05:53.267+0800	INFO	main	stdout	2022-07-04T19:05:53,267 - INFO  - [main:ZooKeeper@637] - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=30000 watcher=org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase@37ec9c49

2022-07-04T19:05:53.272+0800	INFO	main	stdout	2022-07-04T19:05:53,272 - INFO  - [main:X509Util@77] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation

2022-07-04T19:05:53.275+0800	INFO	main	stdout	2022-07-04T19:05:53,275 - INFO  - [main:ClientCnxnSocket@239] - jute.maxbuffer value is 1048575 Bytes

2022-07-04T19:05:53.284+0800	INFO	main	stdout	2022-07-04T19:05:53,284 - INFO  - [main:ClientCnxn@1732] - zookeeper.request.timeout value is 0. feature enabled=false

2022-07-04T19:05:53.294+0800	INFO	main-SendThread(127.0.0.1:2181)	stdout	2022-07-04T19:05:53,294 - INFO  - [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1171] - Opening socket connection to server localhost/127.0.0.1:2181.

2022-07-04T19:05:53.295+0800	INFO	main-SendThread(127.0.0.1:2181)	stdout	2022-07-04T19:05:53,295 - INFO  - [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1173] - SASL config status: Will not attempt to authenticate using SASL (unknown error)

2022-07-04T19:05:53.296+0800	INFO	main-SendThread(127.0.0.1:2181)	stdout	2022-07-04T19:05:53,296 - INFO  - [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1005] - Socket connection established, initiating session, client: /127.0.0.1:50608, server: localhost/127.0.0.1:2181

2022-07-04T19:05:53.329+0800	INFO	main-SendThread(127.0.0.1:2181)	stdout	2022-07-04T19:05:53,329 - INFO  - [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1444] - Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x10031c53e74000d, negotiated timeout = 30000

2022-07-04T19:05:53.331+0800	INFO	main-EventThread	stdout	2022-07-04T19:05:53,331 - INFO  - [main-EventThread:ZooKeeperWatcherBase@130] - ZooKeeper client is connected now.

2022-07-04T19:05:53.461+0800	INFO	main	stdout	2022-07-04T19:05:53,461 - INFO  - [main:MetadataDrivers@106] - BookKeeper metadata driver manager initialized

2022-07-04T19:05:53.469+0800	INFO	main	stdout	2022-07-04T19:05:53,469 - INFO  - [main:ZKMetadataDriverBase@206] - Initialize zookeeper metadata driver at metadata service uri zk://127.0.0.1:2181/ledgers : zkServers = 127.0.0.1:2181, ledgersRootPath = /ledgers.

2022-07-04T19:05:53.471+0800	INFO	main	stdout	2022-07-04T19:05:53,471 - INFO  - [main:ZooKeeper@637] - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=10000 watcher=org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase@6e1d63ec

2022-07-04T19:05:53.472+0800	INFO	main	stdout	2022-07-04T19:05:53,471 - INFO  - [main:ClientCnxnSocket@239] - jute.maxbuffer value is 1048575 Bytes

2022-07-04T19:05:53.472+0800	INFO	main	stdout	2022-07-04T19:05:53,472 - INFO  - [main:ClientCnxn@1732] - zookeeper.request.timeout value is 0. feature enabled=false

2022-07-04T19:05:53.474+0800	INFO	main-SendThread(127.0.0.1:2181)	stdout	2022-07-04T19:05:53,473 - INFO  - [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1171] - Opening socket connection to server localhost/127.0.0.1:2181.

2022-07-04T19:05:53.474+0800	INFO	main-SendThread(127.0.0.1:2181)	stdout	2022-07-04T19:05:53,474 - INFO  - [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1173] - SASL config status: Will not attempt to authenticate using SASL (unknown error)

2022-07-04T19:05:53.475+0800	INFO	main-SendThread(127.0.0.1:2181)	stdout	2022-07-04T19:05:53,475 - INFO  - [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1005] - Socket connection established, initiating session, client: /127.0.0.1:50609, server: localhost/127.0.0.1:2181

2022-07-04T19:05:53.501+0800	INFO	main-SendThread(127.0.0.1:2181)	stdout	2022-07-04T19:05:53,501 - INFO  - [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1444] - Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x10031c53e74000e, negotiated timeout = 10000

2022-07-04T19:05:53.502+0800	INFO	main-EventThread	stdout	2022-07-04T19:05:53,501 - INFO  - [main-EventThread:ZooKeeperWatcherBase@130] - ZooKeeper client is connected now.

2022-07-04T19:05:53.548+0800	INFO	main	stdout	2022-07-04T19:05:53,542 - WARN  - [main:RackawareEnsemblePlacementPolicyImpl@272] - Failed to initialize DNS Resolver org.apache.bookkeeper.net.ScriptBasedMapping, used default subnet resolver 
java.lang.RuntimeException: No network topology script is found when using script based DNS resolver.
	at org.apache.bookkeeper.net.ScriptBasedMapping$RawScriptBasedMapping.validateConf(ScriptBasedMapping.java:163) ~[bookkeeper-server-4.15.0.jar:4.15.0]
	at org.apache.bookkeeper.net.AbstractDNSToSwitchMapping.setConf(AbstractDNSToSwitchMapping.java:81) ~[bookkeeper-server-4.15.0.jar:4.15.0]
	at org.apache.bookkeeper.net.ScriptBasedMapping.setConf(ScriptBasedMapping.java:123) ~[bookkeeper-server-4.15.0.jar:4.15.0]
	at org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicyImpl.initialize(RackawareEnsemblePlacementPolicyImpl.java:264) ~[bookkeeper-server-4.15.0.jar:4.15.0]
	at org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicyImpl.initialize(RackawareEnsemblePlacementPolicyImpl.java:80) ~[bookkeeper-server-4.15.0.jar:4.15.0]
	at org.apache.bookkeeper.client.BookKeeper.initializeEnsemblePlacementPolicy(BookKeeper.java:582) ~[bookkeeper-server-4.15.0.jar:4.15.0]
	at org.apache.bookkeeper.client.BookKeeper.<init>(BookKeeper.java:506) ~[bookkeeper-server-4.15.0.jar:4.15.0]
	at org.apache.bookkeeper.client.BookKeeper.<init>(BookKeeper.java:345) ~[bookkeeper-server-4.15.0.jar:4.15.0]
	at org.apache.bookkeeper.mledger.impl.ManagedLedgerFactoryImpl$DefaultBkFactory.<init>(ManagedLedgerFactoryImpl.java:219) ~[pulsar-presto-connector-original-2.11.0-SNAPSHOT.jar:2.11.0-SNAPSHOT]
	at org.apache.bookkeeper.mledger.impl.ManagedLedgerFactoryImpl.<init>(ManagedLedgerFactoryImpl.java:145) ~[pulsar-presto-connector-original-2.11.0-SNAPSHOT.jar:2.11.0-SNAPSHOT]
	at org.apache.pulsar.sql.presto.PulsarConnectorCache.initManagedLedgerFactory(PulsarConnectorCache.java:130) ~[pulsar-presto-connector-original-2.11.0-SNAPSHOT.jar:2.11.0-SNAPSHOT]
	at org.apache.pulsar.sql.presto.PulsarConnectorCache.<init>(PulsarConnectorCache.java:78) ~[pulsar-presto-connector-original-2.11.0-SNAPSHOT.jar:2.11.0-SNAPSHOT]
	at org.apache.pulsar.sql.presto.PulsarConnectorCache.getConnectorCache(PulsarConnectorCache.java:104) ~[pulsar-presto-connector-original-2.11.0-SNAPSHOT.jar:2.11.0-SNAPSHOT]
	at org.apache.pulsar.sql.presto.PulsarConnector.initConnectorCache(PulsarConnector.java:85) ~[pulsar-presto-connector-original-2.11.0-SNAPSHOT.jar:2.11.0-SNAPSHOT]
	at org.apache.pulsar.sql.presto.PulsarConnectorFactory.create(PulsarConnectorFactory.java:70) ~[pulsar-presto-connector-original-2.11.0-SNAPSHOT.jar:2.11.0-SNAPSHOT]
	at io.prestosql.connector.ConnectorManager.createConnector(ConnectorManager.java:349) ~[presto-main-334.jar:334]
	at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:208) ~[presto-main-334.jar:334]
	at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:200) ~[presto-main-334.jar:334]
	at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:186) ~[presto-main-334.jar:334]
	at io.prestosql.metadata.StaticCatalogStore.loadCatalog(StaticCatalogStore.java:88) ~[presto-main-334.jar:334]
	at io.prestosql.metadata.StaticCatalogStore.loadCatalogs(StaticCatalogStore.java:68) ~[presto-main-334.jar:334]
	at io.prestosql.server.Server.doStart(Server.java:114) ~[presto-main-334.jar:334]
	at io.prestosql.server.Server.lambda$start$0(Server.java:69) ~[presto-main-334.jar:334]
	at io.prestosql.$gen.Presto_334____20220704_110543_1.run(Unknown Source) [?:?]
	at io.prestosql.server.Server.start(Server.java:69) [presto-main-334.jar:334]
	at io.prestosql.server.PrestoServer.main(PrestoServer.java:39) [presto-server-main-334.jar:334]

2022-07-04T19:05:53.557+0800	INFO	main	stdout	2022-07-04T19:05:53,557 - INFO  - [main:RackawareEnsemblePlacementPolicyImpl@216] - Initialize rackaware ensemble placement policy @ <Bookie:127.0.0.1:0> @ /default-rack : org.apache.bookkeeper.client.TopologyAwareEnsemblePlacementPolicy$DefaultResolver.

2022-07-04T19:05:53.558+0800	INFO	main	stdout	2022-07-04T19:05:53,558 - INFO  - [main:RackawareEnsemblePlacementPolicyImpl@226] - Not weighted

2022-07-04T19:05:53.572+0800	INFO	main	stdout	2022-07-04T19:05:53,572 - INFO  - [main:BookKeeper@526] - Weighted ledger placement is not enabled

2022-07-04T19:05:53.698+0800	INFO	main-EventThread	stdout	2022-07-04T19:05:53,698 - INFO  - [main-EventThread:ZKRegistrationClient@270] - Update BookieInfoCache (writable bookie) 127.0.0.1:3181 -> BookieServiceInfo{properties={}, endpoints=[EndpointInfo{id=bookie, port=3181, host=127.0.0.1, protocol=bookie-rpc, auth=[], extensions=[]}]}

2022-07-04T19:05:53.700+0800	INFO	BookKeeperClientScheduler-OrderedScheduler-0-0	stdout	2022-07-04T19:05:53,700 - INFO  - [BookKeeperClientScheduler-OrderedScheduler-0-0:NetworkTopologyImpl@428] - Adding a new node: /default-rack/127.0.0.1:3181

2022-07-04T19:05:53.730+0800	INFO	main	stdout	2022-07-04T19:05:53,730 - INFO  - [main:RangeEntryCacheManagerImpl@68] - Initialized managed-ledger entry cache of 0.0 Mb

2022-07-04T19:05:53.737+0800	INFO	main	org.apache.pulsar.sql.presto.PulsarConnectorCache	No ledger offloader configured, using NULL instance
2022-07-04T19:05:53.741+0800	INFO	main	io.prestosql.metadata.StaticCatalogStore	-- Added catalog pulsar using connector pulsar --
2022-07-04T19:05:53.744+0800	INFO	main	io.prestosql.security.AccessControlManager	Using system access control allow-all
2022-07-04T19:05:53.782+0800	INFO	main	io.prestosql.server.Server	======== SERVER STARTED ========
2022-07-04T19:05:53.844+0800	WARN	node-state-poller-0	io.prestosql.metadata.RemoteNodeState	Node state update request to http://198.18.0.14:8081/v1/info/state has not returned in 835055.49s
2022-07-04T19:05:53.965+0800	WARN	query-management-1	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 835055.61s
2022-07-04T19:05:54.969+0800	WARN	query-management-0	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 835056.61s
2022-07-04T19:05:55.974+0800	WARN	query-management-4	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 835057.62s
2022-07-04T19:05:56.979+0800	WARN	query-management-3	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 835058.62s
2022-07-04T19:05:57.984+0800	WARN	query-management-2	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 835059.63s
2022-07-04T19:05:58.850+0800	WARN	node-state-poller-0	io.prestosql.metadata.RemoteNodeState	Node state update request to http://198.18.0.14:8081/v1/info/state has not returned in 835060.50s
2022-07-04T19:05:58.864+0800	WARN	http-client-node-manager-scheduler-1	io.prestosql.metadata.RemoteNodeState	Error fetching node state from http://198.18.0.14:8081/v1/info/state: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed
2022-07-04T19:05:58.988+0800	WARN	query-management-1	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 835060.63s
2022-07-04T19:05:59.953+0800	WARN	http-client-memoryManager-scheduler-1	io.prestosql.memory.RemoteNodeMemory	Error fetching memory info from http://198.18.0.14:8081/v1/memory: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed
2022-07-04T19:06:10.042+0800	WARN	query-management-0	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 10.09s
2022-07-04T19:06:11.004+0800	WARN	http-client-memoryManager-scheduler-1	io.prestosql.memory.RemoteNodeMemory	Error fetching memory info from http://198.18.0.14:8081/v1/memory: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed
2022-07-04T19:06:13.860+0800	WARN	http-client-node-manager-scheduler-1	io.prestosql.metadata.RemoteNodeState	Error fetching node state from http://198.18.0.14:8081/v1/info/state: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed
2022-07-04T19:06:19.038+0800	WARN	ContinuousTaskStatusFetcher-20220704_110607_00000_x95me.2.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error getting task status 20220704_110607_00000_x95me.2.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.2.0
2022-07-04T19:06:19.042+0800	WARN	TaskInfoFetcher-20220704_110607_00000_x95me.2.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error getting info for task 20220704_110607_00000_x95me.2.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.2.0
2022-07-04T19:06:19.058+0800	WARN	ContinuousTaskStatusFetcher-20220704_110607_00000_x95me.1.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error getting task status 20220704_110607_00000_x95me.1.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.1.0
2022-07-04T19:06:19.058+0800	WARN	TaskInfoFetcher-20220704_110607_00000_x95me.1.0-242	io.prestosql.server.remotetask.RequestErrorTracker	Error getting info for task 20220704_110607_00000_x95me.1.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.1.0
2022-07-04T19:06:19.061+0800	WARN	ContinuousTaskStatusFetcher-20220704_110607_00000_x95me.0.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error getting task status 20220704_110607_00000_x95me.0.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.0.0
2022-07-04T19:06:19.062+0800	WARN	TaskInfoFetcher-20220704_110607_00000_x95me.0.0-242	io.prestosql.server.remotetask.RequestErrorTracker	Error getting info for task 20220704_110607_00000_x95me.0.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.0.0
2022-07-04T19:06:19.144+0800	WARN	UpdateResponseHandler-20220704_110607_00000_x95me.2.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error updating task 20220704_110607_00000_x95me.2.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.2.0
2022-07-04T19:06:19.149+0800	WARN	UpdateResponseHandler-20220704_110607_00000_x95me.0.0-242	io.prestosql.server.remotetask.RequestErrorTracker	Error updating task 20220704_110607_00000_x95me.0.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.0.0
2022-07-04T19:06:19.150+0800	WARN	UpdateResponseHandler-20220704_110607_00000_x95me.1.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error updating task 20220704_110607_00000_x95me.1.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.1.0
2022-07-04T19:06:21.095+0800	WARN	query-management-2	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 10.09s
2022-07-04T19:06:22.055+0800	WARN	http-client-memoryManager-scheduler-1	io.prestosql.memory.RemoteNodeMemory	Error fetching memory info from http://198.18.0.14:8081/v1/memory: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed
2022-07-04T19:06:23.881+0800	WARN	node-state-poller-0	io.prestosql.metadata.RemoteNodeState	Node state update request to http://198.18.0.14:8081/v1/info/state has not returned in 10.02s
2022-07-04T19:06:28.876+0800	WARN	http-client-node-manager-scheduler-1	io.prestosql.metadata.RemoteNodeState	Error fetching node state from http://198.18.0.14:8081/v1/info/state: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed
2022-07-04T19:06:29.045+0800	WARN	ContinuousTaskStatusFetcher-20220704_110607_00000_x95me.2.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error getting task status 20220704_110607_00000_x95me.2.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.2.0
2022-07-04T19:06:29.060+0800	WARN	ContinuousTaskStatusFetcher-20220704_110607_00000_x95me.1.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error getting task status 20220704_110607_00000_x95me.1.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.1.0
2022-07-04T19:06:29.063+0800	WARN	ContinuousTaskStatusFetcher-20220704_110607_00000_x95me.0.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error getting task status 20220704_110607_00000_x95me.0.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.0.0
2022-07-04T19:06:29.150+0800	WARN	UpdateResponseHandler-20220704_110607_00000_x95me.2.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error updating task 20220704_110607_00000_x95me.2.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.2.0
2022-07-04T19:06:29.154+0800	WARN	UpdateResponseHandler-20220704_110607_00000_x95me.1.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error updating task 20220704_110607_00000_x95me.1.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.1.0
2022-07-04T19:06:29.154+0800	WARN	UpdateResponseHandler-20220704_110607_00000_x95me.0.0-242	io.prestosql.server.remotetask.RequestErrorTracker	Error updating task 20220704_110607_00000_x95me.0.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.0.0
2022-07-04T19:06:32.082+0800	WARN	TaskInfoFetcher-20220704_110607_00000_x95me.2.0-242	io.prestosql.server.remotetask.RequestErrorTracker	Error getting info for task 20220704_110607_00000_x95me.2.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.2.0
2022-07-04T19:06:32.082+0800	WARN	TaskInfoFetcher-20220704_110607_00000_x95me.1.0-241	io.prestosql.server.remotetask.RequestErrorTracker	Error getting info for task 20220704_110607_00000_x95me.1.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.1.0
2022-07-04T19:06:32.082+0800	WARN	TaskInfoFetcher-20220704_110607_00000_x95me.0.0-242	io.prestosql.server.remotetask.RequestErrorTracker	Error getting info for task 20220704_110607_00000_x95me.0.0: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed: http://198.18.0.14:8081/v1/task/20220704_110607_00000_x95me.0.0
2022-07-04T19:06:32.144+0800	WARN	query-management-3	io.prestosql.memory.RemoteNodeMemory	Memory info update request to http://198.18.0.14:8081/v1/memory has not returned in 10.09s
2022-07-04T19:06:33.110+0800	WARN	http-client-memoryManager-scheduler-1	io.prestosql.memory.RemoteNodeMemory	Error fetching memory info from http://198.18.0.14:8081/v1/memory: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed


tisonkun commented Jul 4, 2022

After I cleaned up the data dir and retried the example, it worked this time. Closed as invalid.

No. It's unrelated to the data; it's about the proxy. I confirm that after I turn off the proxy, it works.
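
For anyone hitting the same hang: the worker logs above show Presto timing out against http://198.18.0.14:8081, an address in the 198.18.0.0/15 benchmarking range that local proxy tools commonly hand out as fake IPs, which fits the proxy explanation. A minimal check, assuming the proxy was configured through the usual environment variables (the comment above doesn't say how the proxy was set up):

```shell
# Hypothetical check: clear any HTTP(S) proxy settings so Presto's
# node-to-node HTTP calls go directly to the local worker
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
./bin/pulsar sql-worker run
```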

tisonkun closed this as completed Jul 4, 2022