Search before asking
I have searched in the issues and found no similar issues.
Describe the bug
Current setup
I am running Kyuubi 1.9.1 on AKS; clients submit batch jobs using the Kyuubi REST API. I have set the KUBECONFIG environment variable to the path of a kubeconfig file containing the contexts for my Spark worker clusters (I run dedicated Kubernetes clusters for Spark jobs).
The problem
The issue occurs when we set the Spark master per context on the Kyuubi side. The batch job gets scheduled in the remote cluster, but Kyuubi is unable to retrieve its status. The logs contain:
ERROR OkHttp http://k8s/... io.fabric8.kubernetes.client.informers.impl.cache.Reflector: listSyncAndWatch failed for v1/namespaces/spark/pods, will stop
java.util.concurrent.CompletionException: java.net.UnknownHostException: k8s
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:957)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$1.onFailure(OkHttpClientImpl.java:330)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:211)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.net.UnknownHostException: k8s
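For reference, the `k8s` hostname in the UnknownHostException appears to come from the `server` URL of the matching cluster entry in the kubeconfig that KUBECONFIG points at; if that hostname only resolves from inside the worker cluster, the fabric8 informer running in the Kyuubi server pod cannot reach the API server. A minimal kubeconfig sketch showing where that hostname lives (all names and addresses are hypothetical):

```yaml
# Illustrative kubeconfig fragment (hypothetical names/addresses).
apiVersion: v1
kind: Config
clusters:
  - name: spark-cluster-1
    cluster:
      # If this hostname does not resolve from the Kyuubi server pod,
      # fabric8 fails with java.net.UnknownHostException: k8s
      server: https://k8s:6443
contexts:
  - name: spark-cluster-1
    context:
      cluster: spark-cluster-1
      user: spark-user
      namespace: spark
current-context: spark-cluster-1
users:
  - name: spark-user
    user: {}
```

Replacing `server` with an address resolvable from the Kyuubi pod (or adding a DNS entry / hostAlias for `k8s`) would be one way to confirm this diagnosis.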
Additionally, clients are REQUIRED to set the Spark master.
Is it supported to set the Spark master per context, so that clients don't have to know the API server IP of the backend cluster?
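To make the request concrete, here is a sketch of what a client currently has to send to the batch endpoint, including the `spark.master` it is required to know. The Kyuubi host, resource path, and the `kyuubi.kubernetes.context` conf key are assumptions to be verified against your deployment and Kyuubi version:

```python
# Sketch of a Kyuubi REST batch submission payload (hypothetical host/paths).
import json

KYUUBI_URL = "http://kyuubi.example.com:10099/api/v1/batches"  # hypothetical host

def build_batch_request(context: str, api_server: str) -> dict:
    """Build a batch submission payload for POST /api/v1/batches."""
    return {
        "batchType": "SPARK",
        "name": "example-batch",
        "resource": "local:///opt/spark/examples/jars/spark-examples.jar",
        "className": "org.apache.spark.examples.SparkPi",
        "conf": {
            # Today the client must know the backend cluster's API server address:
            "spark.master": f"k8s://{api_server}",
            # Context selection (assumed conf key; verify for your Kyuubi version):
            "kyuubi.kubernetes.context": context,
        },
    }

payload = build_batch_request("spark-cluster-1", "https://10.0.0.1:443")
print(json.dumps(payload, indent=2))
```

If the Spark master could be configured per context on the server side, the client could drop the `spark.master` entry and send only the context name.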
Affects Version(s)
1.9.1
Kyuubi Server Log Output
No response
Kyuubi Engine Log Output
No response
Kyuubi Server Configurations
No response
Kyuubi Engine Configurations
No response
Additional context
No response
Are you willing to submit PR?
Yes. I would be willing to submit a PR with guidance from the Kyuubi community to fix.
No. I cannot submit a PR at this time.