
io.fabric8.kubernetes.client.KubernetesClientException: not ready after 5000 MILLISECONDS #3795

Closed
rg970197 opened this issue Feb 1, 2022 · 29 comments
Labels
Waiting on feedback Issues that require feedback from User/Other community members

Comments

@rg970197

rg970197 commented Feb 1, 2022

Describe the bug

This is happening on my 4 Jenkins servers after upgrading Jenkins and AKS.

All of a sudden, all Jenkins agent pods started giving errors as below. A few pods are working and a few are giving errors. This happens 1-2 times out of 4-5 attempts.

AKS version: 1.20.13

Each cluster is running a different Jenkins/plugin version; I can reproduce this error in all of them.

AKS-1:
kubernetes:1.30.1
kubernetes-client-api:5.10.1-171.vaa0774fb8c20
kubernetes-credentials:0.8.0

AKS-2:
kubernetes:1.31.3
kubernetes-client-api:5.11.2-182.v0f1cf4c5904e
kubernetes-credentials:0.9.0

AKS-3:
kubernetes:1.30.1
kubernetes-client-api:5.10.1-171.vaa0774fb8c20
kubernetes-credentials:0.8.0

AKS-4:
kubernetes:1.31.3
workflow-job:1145.v7f2433caa07f
workflow-aggregator:2.6

Troubleshooting steps I tried:

  1. Reverted Jenkins to the old version => same error.
  2. Upgraded Jenkins and all plugins in use to the latest versions => same error.
  3. Downgraded the Jenkins Kubernetes and Kubernetes Client API plugins to a stable version, per some suggestions on GitHub => same error.
  4. Created a brand-new cluster and installed Jenkins; job pods gave the same error.

Not sure how to resolve this error. Please let me know if I'm missing anything.

Fabric8 Kubernetes Client version

5.10.1@latest

Steps to reproduce

Creating/running a job produces this error.

Expected behavior

The pod should execute successfully.

Runtime

Kubernetes (vanilla)

Kubernetes API Server version

1.21.6

Environment

Linux

Fabric8 Kubernetes Client Logs

18:23:33  [WS-CLEANUP] Deleting project workspace...
18:23:33  [WS-CLEANUP] Deferred wipeout is used...
18:23:33  [WS-CLEANUP] done
18:23:33  [Pipeline] }
18:23:33  [Pipeline] // container
18:23:33  [Pipeline] }
18:23:33  [Pipeline] // node
18:23:33  [Pipeline] }
18:23:33  [Pipeline] // timeout
18:23:33  [Pipeline] }
18:23:33  [Pipeline] // podTemplate
18:23:33  [Pipeline] End of Pipeline
18:23:33  io.fabric8.kubernetes.client.KubernetesClientException: not ready after 5000 MILLISECONDS
18:23:33  	at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:176)
18:23:33  	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:322)
18:23:33  	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:84)
18:23:33  	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:413)
18:23:33  	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:330)
18:23:33  	at hudson.Launcher$ProcStarter.start(Launcher.java:507)
18:23:33  	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
18:23:33  	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
18:23:33  	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:324)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:319)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:193)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
18:23:33  	at jdk.internal.reflect.GeneratedMethodAccessor6588.invoke(Unknown Source)
18:23:33  	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
18:23:33  	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
18:23:33  	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
18:23:33  	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
18:23:33  	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
18:23:33  	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
18:23:33  	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
18:23:33  	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
18:23:33  	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:163)
18:23:33  	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
18:23:33  	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:158)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:161)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:165)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33  	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33  	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
18:23:33  	at WorkflowScript.run(WorkflowScript:114)
18:23:33  	at ___cps.transform___(Native Method)
18:23:33  	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:86)
18:23:33  	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
18:23:33  	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
18:23:33  	at jdk.internal.reflect.GeneratedMethodAccessor210.invoke(Unknown Source)
18:23:33  	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
18:23:33  	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
18:23:33  	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
18:23:33  	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
18:23:33  	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
18:23:33  	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
18:23:33  	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
18:23:33  	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
18:23:33  	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
18:23:33  	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:402)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:96)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:314)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:278)
18:23:33  	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
18:23:33  	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
18:23:33  	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
18:23:33  	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
18:23:33  	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
18:23:33  	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
18:23:33  	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
18:23:33  	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
18:23:33  	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
18:23:33  	at java.base/java.lang.Thread.run(Thread.java:829)
18:23:33  [Bitbucket] Notifying commit build result
18:23:33  [Bitbucket] Build result notified
18:23:33  Finished: FAILURE

Additional context

No response

@shawkins
Contributor

shawkins commented Feb 2, 2022

Can you provide what the dsl call looks like leading up to the exec?

I can reproduce similar behavior if I omit handling for stderr:

ExecWatch execWatch = podResource
        .writingError(System.out)
        ...
        .exec("date");

works, but

ExecWatch execWatch = podResource
        ...
        .exec("date");

fails with the timeout
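
For anyone who wants to reproduce this outside of Jenkins, here is a minimal, self-contained sketch of the working variant. The namespace, pod name, and the plain KubernetesClientBuilder bootstrap are placeholder assumptions, not taken from the plugin (KubernetesClientBuilder exists from client 5.12/6.x; older 5.x would use new DefaultKubernetesClient() instead):

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.ExecWatch;

public class ExecSmokeTest {
    public static void main(String[] args) throws Exception {
        // Builds a client from the local kubeconfig / in-cluster config.
        try (KubernetesClient client = new KubernetesClientBuilder().build();
             ExecWatch watch = client.pods()
                     .inNamespace("default")      // placeholder namespace
                     .withName("my-pod")          // placeholder; must be a running pod
                     .writingOutput(System.out)   // stdout handler
                     .writingError(System.err)    // stderr handler; omitting this is what reproduced the timeout above
                     .exec("date")) {
            // Give the short-lived command a moment to stream its output back.
            Thread.sleep(2_000);
        }
    }
}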

@rg970197
Author

rg970197 commented Feb 2, 2022

Hi @shawkins, thanks for responding to this. How can I check that DSL call? Could you please help me with that?

@shawkins
Contributor

shawkins commented Feb 2, 2022

@rg970197
Author

rg970197 commented Feb 2, 2022

OK, got it, thanks; I had never checked this. I'm using the https://github.com/jenkinsci/helm-charts Helm chart to deploy this service. How do we fix this error? It has been blocking almost 4 Jenkins servers for 3 weeks.

@shawkins
Contributor

shawkins commented Feb 2, 2022

how do we fix this Error?

I'm not sure.

  • are there other entries in the log that seem related?
  • do you have a set of versions when this worked - that would narrow the set of possible changes
  • have you tried increasing the timeout? (see the sketch just below this list)
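
A minimal sketch of raising that timeout programmatically, assuming the kubernetes.websocket.timeout system property (the same one passed via -D flags later in this thread) is the knob feeding the 5000 ms wait; the 70000 value and the bare KubernetesClientBuilder bootstrap are illustrative only:

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class TimeoutExperiment {
    public static void main(String[] args) {
        // Raise the websocket readiness timeout before the client (and its Config) is built.
        // Equivalent to passing -Dkubernetes.websocket.timeout=70000 on the JVM command line.
        System.setProperty("kubernetes.websocket.timeout", "70000");

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // ... run the exec / pod operations that were timing out ...
            System.out.println("Connected to " + client.getMasterUrl());
        }
    }
}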

@rg970197
Author

rg970197 commented Feb 2, 2022

@shawkins,
Here are a few things that I tried:

  1. Reverted the Jenkins version back to what it was 2 months ago (before the upgrade, this was working fine).
  2. Updated Jenkins to the latest version with the latest plugins.
  3. Deleted Jenkins and recreated everything from scratch with the latest version.
  4. Downgraded only the plugins, per some reference docs: kubernetes-client-api:5.10.1-171.vaa0774fb8c20, kubernetes-credentials:0.8.0 and kubernetes:1.30.1.
  5. Manually overrode the default timeout values by increasing them, for example:

Custom values for Jenkins:

controller:
  tag: "2.319.2"
  imagePullPolicy: "Always"
  javaOpts: >-
    -Dorg.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator.websocketConnectionTimeout=90
    -Dorg.csanchez.jenkins.plugins.kubernetes.pipeline.websocketConnectionTimeout=80000
    -Dkubernetes.websocket.timeout=70000
    -Xms2G
    -Xmx2G

Output error: io.fabric8.kubernetes.client.KubernetesClientException: not ready after 70000 MILLISECONDS

  6. Added variables according to io.fabric8.kubernetes.client.KubernetesClientException: not ready after 10000 MILLISECONDS #3324.

  7. Increased pod resources.

Nothing worked. Not sure if I'm missing anything here.

I can share a few more logs; when these pods/jobs fail I see the error below. The hostname always differs across pods.

Jan 24, 2022 11:57:25 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
Agent address: jenkins-agent.jenkins.svc.cluster.local
Agent port: 50000
Identity: 3f:c7:f7:7b:41:45:42:6a:b2:52:8c:35:8d:f9:7f:7b
Jan 24, 2022 11:57:25 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Jan 24, 2022 11:57:25 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins-agent.jenkins.svc.cluster.local:50000
Jan 24, 2022 11:57:25 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Trying protocol: JNLP4-connect
Jan 24, 2022 11:57:25 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Remote identity confirmed: 3f:c7:f7:7b:41:45:42:6a:b2:52:8c:35:8d:f9:7f:7b
Jan 24, 2022 11:57:25 AM org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer onRecv
INFO: [JNLP4-connect connection to jenkins-agent.jenkins.svc.cluster.local/10.29.51.210:50000] Local headers refused by remote: Unknown client name: jenkins-agent-2031e5ba-eb0b-4f5c-bfae-2ba8719a5c07-804hv-dtf8c
Jan 24, 2022 11:57:25 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Protocol JNLP4-connect encountered an unexpected exception
java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Unknown client name: jenkins-agent-2031e5ba-eb0b-4f5c-bfae-2ba8719a5c07-804hv-dtf8c
at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223)
at hudson.remoting.Engine.innerRun(Engine.java:743)
at hudson.remoting.Engine.run(Engine.java:518)
Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Unknown client name: jenkins-agent-2031e5ba-eb0b-4f5c-bfae-2ba8719a5c07-804hv-dtf8c
at org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer.newAbortCause(ConnectionHeadersFilterLayer.java:378)
at org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer.onRecvClosed(ConnectionHeadersFilterLayer.java:433)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:172)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$1500(BIONetworkLayer.java:48)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:247)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:117)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.nio.channels.ClosedChannelException
... 7 more

Jan 24, 2022 11:57:25 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: The server rejected the connection: None of the protocols were accepted
java.lang.Exception: The server rejected the connection: None of the protocols were accepted
at hudson.remoting.Engine.onConnectionRejected(Engine.java:828)
at hudson.remoting.Engine.innerRun(Engine.java:768)
at hudson.remoting.Engine.run(Engine.java:518)


@DennisGlindhart

I'm also hitting this error on a regular basis after updating Jenkins and its installed plugins. Unfortunately I don't have a record of the old versions, but they were fairly old, so they would probably not be of much use either way :)

@manusa manusa moved this to Planned in Eclipse JKube Feb 23, 2022
@mateusvtt

We started receiving this error a lot after bumping Jenkins server and Kubernetes plugin:

Jenkins: 2.320 -> 2.337
Plugin: kubernetes:1.30.7 -> kubernetes:3546.v6103d89542d6

Stacktrace:

io.fabric8.kubernetes.client.KubernetesClientException: not ready after 5000 MILLISECONDS
	at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:176)
	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:322)
	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:84)
	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:413)
	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:330)
	at hudson.Launcher$ProcStarter.start(Launcher.java:509)
	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:324)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:319)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:193)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:47)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
	at namedSh.call(namedSh.groovy:18)
	at md5deep.call(md5deep.groovy:7)
	at WorkflowScript.run(WorkflowScript:293)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:86)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
	at jdk.internal.reflect.GeneratedMethodAccessor276.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.base/java.lang.reflect.Method.invoke(Unknown Source)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.CollectionLiteralBlock$ContinuationImpl.dispatch(CollectionLiteralBlock.java:55)
	at com.cloudbees.groovy.cps.impl.CollectionLiteralBlock$ContinuationImpl.item(CollectionLiteralBlock.java:45)
	at jdk.internal.reflect.GeneratedMethodAccessor279.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.base/java.lang.reflect.Method.invoke(Unknown Source)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.LocalVariableBlock$LocalVariable.get(LocalVariableBlock.java:39)
	at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
	at com.cloudbees.groovy.cps.impl.LocalVariableBlock.evalLValue(LocalVariableBlock.java:28)
	at com.cloudbees.groovy.cps.LValueBlock$BlockImpl.eval(LValueBlock.java:55)
	at com.cloudbees.groovy.cps.LValueBlock.eval(LValueBlock.java:16)
	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:403)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)

@mateusvtt

mateusvtt commented Mar 14, 2022

Hey @rg970197, were you able to fix it?
We already tried everything you mentioned, without success.

Update:
We were able to mitigate the issue with the following configuration:

kubernetes:1.30.1
kubernetes-client-api:5.4.1

The Jenkins server is on 2.338, but I think that is irrelevant.

@sunix
Collaborator

sunix commented Mar 18, 2022

Hey,
I discussed this with @akram.
Do you have a Jenkins Jira issue open for this problem? If not, could you log one?
From our side, it looks difficult to analyse, reproduce, and debug as we don't have the environment.

@akram
Contributor

akram commented Mar 18, 2022

@mateusvtt and @rg970197 indeed, the issue could be in a different place. A Jenkins issue would also help us make sure there was no configuration change in the way the kubernetes-plugin spawns podTemplates and allows agents to connect to the Jenkins server.

@mateusvtt

hey @sunix I'm following this issue https://issues.jenkins.io/browse/JENKINS-67664

@acechef

acechef commented Apr 20, 2022

I also encountered this problem, please solve it

@raeballz

Similar issues are showing up on a k8s Helm deployment of Jenkins at my company as well.

We've hit a stack trace like the one below:

io.fabric8.kubernetes.client.KubernetesClientException: not ready after 5000 MILLISECONDS
	at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:181)
	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:332)
	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:85)
	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:425)
	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:328)
	at hudson.Launcher$ProcStarter.start(Launcher.java:509)
	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:324)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:322)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:196)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:124)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:98)
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1225)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1034)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:41)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:163)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:158)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:161)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:165)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
	at WorkflowScript.run(WorkflowScript:81)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:86)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
	at jdk.internal.reflect.GeneratedMethodAccessor268.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:187)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:420)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:95)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:330)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:294)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)

Could not update commit status, please check if your scan credentials belong to a member of the organization or a collaborator of the repository and repo:status scope is selected


GitHub has been notified of this commit's build result

Finished: FAILURE

This started happening out of the blue after deploying a firewall rule to block some k8s ports, none of which are (as far as I am aware) pertinent to running Jenkins.

deny {
  protocol = "TCP"
  ports    = ["10250", "4149", "10255", "10256", "9099", "6443"]
}

deny {
  protocol = "UDP"
  ports    = ["10250", "4149", "10255", "10256", "9099", "6443"]
}

deny {
  protocol = "SCTP"
  ports    = ["10250", "4149", "10255", "10256", "9099", "6443"]
}

Any advice appreciated.

@Dohbedoh
Contributor

I have been trying to troubleshoot this for some time. There are a few things to note:

  • Jenkins users started to see this exception in the Jenkins Kubernetes plugin after upgrading it to 1.31.0 or later. That's where we bumped the k8s client from 5.4.1 to 5.10.2: a big jump that in particular contains several refactorings, including the move to CompletableFuture fixing #3001 and #3186, and changes to the logging of websocket exceptions and closure (#3197). That is the most notable change.
  • The change in exception handling means that an error that used to show up as Interrupted while starting websocket connection, you should increase the Max connections to Kubernetes API now shows up as io.fabric8.kubernetes.client.KubernetesClientException: not ready after <> MILLISECONDS, as the plugin is catching a KubernetesClientException.
  • Later versions of the k8s client (currently 5.12.2) do not solve the problem.

When collecting data in impacted environments, the problem seems to happen even when a single connection is being made from the client, so it is not necessarily related to, for example, the OkHttp thread pool managing concurrent connections.

@shawkins your comment about the stderr handling is interesting. I am curious why you thought of this. It is interesting because the Jenkins kubernetes plugin does indeed handle stdout/stderr with:

Execable execable = getClient().pods().inNamespace(getNamespace()).withName(getPodName()).inContainer(containerName) //
                        .redirectingInput(STDIN_BUFFER_SIZE) // JENKINS-50429
                        .writingOutput(stream)
                        .writingError(stream)
                        .writingErrorChannel(error)
                        [...]

Do you think this has an impact on the socket connection response? If so, I am wondering if 6.0.0 refactors such as #4115 and the onExit() might help here?
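
(For context on that question: my rough understanding is that in 6.x the ExecWatch exposes the remote exit status as a future via exitCode(), so a caller can wait on completion instead of parsing the error channel by hand. A sketch of that usage, with placeholder namespace/pod/container names and not the plugin's actual code:)

import java.util.concurrent.TimeUnit;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.ExecWatch;

public class ExitCodeSketch {
    public static void main(String[] args) throws Exception {
        try (KubernetesClient client = new KubernetesClientBuilder().build();
             ExecWatch watch = client.pods()
                     .inNamespace("default")      // placeholder namespace
                     .withName("my-pod")          // placeholder pod name
                     .inContainer("main")         // placeholder container name
                     .writingOutput(System.out)
                     .writingError(System.err)
                     .exec("sh", "-c", "exit 3")) {
            // 6.x: exitCode() returns a CompletableFuture<Integer> completed when the process ends.
            Integer code = watch.exitCode().get(30, TimeUnit.SECONDS);
            System.out.println("remote command exited with " + code);
        }
    }
}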

@shawkins
Contributor

Later versions of the k8s client (currently 5.12.2) do not solve the problem.

There have been some additional changes in 6; have you tried that as well? And possibly with an alternative client, JDK or Jetty? That would help determine whether it's an OkHttp-specific problem.

@shawkins your comment about the stderr handling is interesting. I am curious why you thought of this. It is interesting because the Jenkins kubernetes plugin does indeed handle stdout/stderr with:

It's been a while since I looked at this. My local test had no stdout/stderr handling and seemed to produce a similar exception - however, at least on 5.12/6.0 you can clearly see that it's not due to the timeout but to a bad request. In the exception chain:

...
Caused by: java.net.ProtocolException: Expected HTTP 101 response but was '400 Bad Request'

It seems that the API server wants to see that you are handling at least one of the streams before it accepts the request - which is not the case here, based upon the Jenkins code.
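
One way to tell the two cases apart when triaging (a generic sketch on my part, not an API the client provides) is to walk the cause chain of the KubernetesClientException and look for the protocol error versus a plain TimeoutException:

import java.net.ProtocolException;
import java.util.concurrent.TimeoutException;

import io.fabric8.kubernetes.client.KubernetesClientException;

public final class ExecFailureTriage {
    private ExecFailureTriage() {}

    /** Returns a short label describing why an exec call failed, based on the exception's cause chain. */
    public static String classify(KubernetesClientException e) {
        for (Throwable cause = e.getCause(); cause != null; cause = cause.getCause()) {
            if (cause instanceof ProtocolException) {
                return "websocket upgrade rejected: " + cause.getMessage(); // e.g. 400 Bad Request
            }
            if (cause instanceof TimeoutException) {
                return "genuine readiness timeout: " + cause.getMessage();  // "not ready after ... MILLISECONDS"
            }
        }
        return "other failure: " + e.getMessage();
    }
}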

Do you think this has an impact on the socket connection response? If so, I am wondering if 6.0.0 refactors such as #4115 and the onExit() might help here?

If you are still getting a timeout in 5.12, then I'm still not quite sure what is going on. It sounds like something highly related was already happening with the 5.4 kubernetes client - that would suggest to me that it's more of an environmental issue.

@jwh-hutchison

jwh-hutchison commented Jul 15, 2022

We've been troubleshooting this issue for days now and so far we have tried:

  • Setting the HTTPS_PROXY and HTTP_PROXY environment variables to null
  • Setting the HTTP2_DISABLE environment variable to true, in case this is a HTTP issue with OkHttp3
  • Setting the -Dkubernetes.websocket.timeout=60000 -Dorg.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator.websocketConnectionTimeout=90 environment variables in both the controller and the podTemplate for our builds
  • Increasing the JNLP version in the podTemplate from jenkins/jnlp-slave:4.9-1 to jenkins/inbound-agent:4.13.2-1 (in case a recent GKE version change has broken the method that the JNLP image uses to communicate on our cluster)
  • We've tried rolling back to kubernetes plugin v1.31.2 from v1c1e0ec5b_650
  • We've tried deleting our entire Jenkins Helm image and re-installing from scratch
  • We've increased the resource requests and limits on our podTemplate and the node pools that host the images
  • We've tried rolling back the Jenkins version from 2.346.2 to 2.332.3

None of these have made any difference. So we tried rolling the kubernetes plugin back to 1.30.0 (after @Dohbedoh's suggestion), but we still got the "not ready after 5000 MILLISECONDS" issue. Then we realized that Jenkins had retained our plugin version for kubernetes-client-api as 5.12.2-193.v26a_6078f65a_9, so we set this to 5.4.1, but it failed because Jenkins states that 1.30.0 depends on that 5.12.2-... version (see below), even though the dependency is listed as 5.4.1 in the release notes (https://github.com/jenkinsci/kubernetes-plugin/releases/tag/kubernetes-1.30.0). My guess is that a lot of the people having this issue are unable to roll back to a point where v1.30.0 aligns with v5.4.1 (the last working version of kubernetes-client-api for a lot of devs).

2022-07-15 15:56:40.040 BST
Multiple plugin prerequisites not met:
2022-07-15 15:56:40.040 BST
Plugin kubernetes:1.30.0 depends on kubernetes-client-api:5.12.2-193.v26a_6078f65a_9, but there is an older version defined on the top level - kubernetes-client-api:5.4.1,
2022-07-15 15:56:40.040 BST
Plugin kubernetes:1.30.0 (via kubernetes-credentials:0.9.0) depends on kubernetes-client-api:5.12.2-193.v26a_6078f65a_9, but there is an older version defined on the top level - kubernetes-client-api:5.4.1

Does anyone know how we can force kubernetes:1.30.0 to accept kubernetes-client-api:5.4.1 (as it should)? This appears to be the cause of everyone's trouble, if versions of kubernetes-client-api higher than 5.4.1 are the real problem here.

EDIT: it was installing the wrong version because of the order in which the Jenkins Helm chart installs dependencies; we had to explicitly state kubernetes-client-api:5.4.1 under installPlugins and before kubernetes:1.30.0.

@ChaddFrasier

I was getting this error recently on a corporate server. Because of that, I have no access to the Jenkins configuration, but I was able to get my builds to work by using an additional CPU and a little extra memory. Probably not a solution to the issue, but definitely a workaround for me. You may want to consider this if you need to get your builds past this issue.

Although, if this fixes the issue for many of the people experiencing it, the devs may need to consider this a resource-starvation problem.

@Anusha-Kolli

Hi, we are facing the same issue. We are using:
kubernetes plugin: 1.31.3
Jenkins server: 2.319.3

Our pipelines are failing with the above error.

  • Tried updating the plugins to recent versions.
  • Updated Jenkins to 3.346.3.
  • Downgraded the plugins as mentioned above.
  • Completely destroyed and recreated everything.

Nothing seems to work here. Please suggest.

@shawkins
Contributor

This may be related to #4319 and #4355 - if the exec is against a pod with multiple containers and no container name is supplied, the exception handling is very unclear for OkHttp.
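
In that case, naming the target container explicitly on the exec call should sidestep the ambiguity; a minimal sketch with placeholder namespace, pod, and container names:

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.ExecWatch;

public class ExplicitContainerExec {
    public static void main(String[] args) throws Exception {
        try (KubernetesClient client = new KubernetesClientBuilder().build();
             ExecWatch watch = client.pods()
                     .inNamespace("default")
                     .withName("my-pod")
                     .inContainer("jnlp")          // be explicit when the pod has several containers
                     .writingOutput(System.out)
                     .writingError(System.err)
                     .exec("date")) {
            // Give the short-lived command a moment to stream its output back.
            Thread.sleep(2_000);
        }
    }
}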

@NaveLevi

Does this issue still happen on the latest version?

@rajeshflash

Update to latest Kubernetes plugin, which introduces retry on the timeouts.

@manusa manusa added the Waiting on feedback Issues that require feedback from User/Other community members label Mar 22, 2023
@rohanKanojia
Member

The Jenkins Kubernetes Client API plugin is using Fabric8 Kubernetes Client v6.4.1 at the moment. Is this issue still reproducible on recent versions of the Jenkins Kubernetes Plugin (> 3893.v73d36f3b_9103)?

@akloss-cibo

I asked around and I hear that we haven't seen this issue lately.

@NaveLevi

Same here, I think this one can be closed :)

@github-project-automation github-project-automation bot moved this from Planned to Done in Eclipse JKube Apr 17, 2023
@lin-ket

lin-ket commented Aug 17, 2023

@shawkins
I had the same problem

This is my version information:

Jenkins version: 2.375.1
Kubernetes version: 3923.v294a_d4250b_91
kubernetes-cli: 1.12.0
kubernetes-client-api version: 6.4.1-215.v2ed17097a_8e9
kubernetes-credentials version: 0.10.0
kubernetes-pipeline-devops-steps version: 1.6

@lin-ket

lin-ket commented Aug 17, 2023

Failed to start websocket connection: io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:129)
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:122)
	at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:185)
	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.setupConnectionToPod(PodOperationsImpl.java:365)
	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:286)
	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:448)
	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:335)
	at hudson.Launcher$ProcStarter.start(Launcher.java:509)
	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:326)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:322)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:196)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:124)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:47)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:90)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:116)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:85)
	at jdk.internal.reflect.GeneratedMethodAccessor601.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.CollectionLiteralBlock$ContinuationImpl.dispatch(CollectionLiteralBlock.java:55)
	at com.cloudbees.groovy.cps.impl.CollectionLiteralBlock$ContinuationImpl.item(CollectionLiteralBlock.java:45)
	at jdk.internal.reflect.GeneratedMethodAccessor602.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:152)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:146)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:146)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:187)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:420)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:330)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:294)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:30)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:70)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.util.concurrent.TimeoutException: not ready after 5000 MILLISECONDS
	... 52 more
Retrying in 2s ... Retrying...
(The same exception and stack trace repeat before each subsequent attempt: "Retrying in 4s ... Retrying...", "Retrying in 8s ... Retrying...", "Retrying in 16s ... Retrying...")

@bshah0408

@lin-ket
How did you get rid of this issue? Any idea?

I am facing the same issue with:
Jenkins: 2.414.3
Kubernetes plugin: 3883.v4d70a_a_a_df034
Kubernetes Client API plugin: 6.4.1-215.v2ed17097a_8e9

@jwh-hutchison

jwh-hutchison commented Nov 28, 2024

@lin-ket How did you get rid of this issue? Any idea?

I am facing the same issue with: Jenkins 2.414.3, Kubernetes plugin 3883.v4d70a_a_a_df034, Kubernetes Client API plugin 6.4.1-215.v2ed17097a_8e9

3883.v4d70a_a_a_df034 is 2 years old; if you are running a more recent version of Kubernetes on your cluster, then you should update your Kubernetes plugin - they changed something a while back with the liveness/readiness probes that made deployments using the old kubernetes plugin fail.

I think your Jenkins version is fine though.
