Waiting on ExecWatch.exitCode().get(long, TimeUnit) times out regardless #5022

Closed
asafza opened this issue Apr 2, 2023 · 1 comment · Fixed by #5054

asafza commented Apr 2, 2023

Describe the bug

When executing a simple command in a container, waiting on ExecWatch.exitCode().get(long, TimeUnit) always times out. This worked in 6.4.1 and has been failing since 6.5.0.

Fabric8 Kubernetes Client version

6.5.1

Steps to reproduce

Example code:

var watch = k8s.pods().inNamespace("xxx")
    .withName("xxx")
    .inContainer("xxx")
    .redirectingOutput()
    .redirectingError()
    .exec("echo", "hello");

var exitCode = watch.exitCode().get(5, TimeUnit.SECONDS);

Expected behavior

exitCode should be set to either null or 0 instead of the call timing out.

Runtime

Kubernetes (vanilla)

Kubernetes API Server version

1.25.3@latest

Environment

macOS

Fabric8 Kubernetes Client Logs

No response

Additional context

No response

shawkins (Contributor) commented Apr 2, 2023

This likely has to do with some low-level changes to the websocket message processing, but the same behavior was possible in 6.4.1 as well, depending on the HTTP client you were using. See the Javadocs on the redirectingOutput and redirectingError methods: those streams need to be read to allow event processing to continue, because no additional buffering is performed above the websocket layer. Failing to read those streams can therefore prevent the processing of the exit code / errorChannel messages.

We can of course relax this with some buffering if needed, but it's imperative that the processing remain non-blocking - so if the buffers aren't drained, at some point we'd have to either pause processing or throw an exception.
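
For illustration, a minimal sketch of the workaround described above: drain the redirected streams before waiting on the exit code, so the websocket messages carrying it can still be processed. This is not code from the thread; the class name and the "xxx" namespace/pod/container values are placeholders, and the client is assumed to come from the default kubeconfig.

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.ExecWatch;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

public class ExecExitCodeExample {
    public static void main(String[] args) throws Exception {
        try (KubernetesClient k8s = new KubernetesClientBuilder().build();
             ExecWatch watch = k8s.pods().inNamespace("xxx")
                     .withName("xxx")
                     .inContainer("xxx")
                     .redirectingOutput()
                     .redirectingError()
                     .exec("echo", "hello")) {

            // Drain stdout and stderr first; for a short-lived command such as echo
            // this reads until EOF and lets the exit code / errorChannel messages through.
            String out = new String(watch.getOutput().readAllBytes(), StandardCharsets.UTF_8);
            String err = new String(watch.getError().readAllBytes(), StandardCharsets.UTF_8);

            Integer exitCode = watch.exitCode().get(5, TimeUnit.SECONDS);
            System.out.println("stdout: " + out + " stderr: " + err + " exit code: " + exitCode);
        }
    }
}

For commands with large or long-running output, the two streams would likely need to be consumed concurrently (or piped elsewhere) rather than read back-to-back, since reading one to EOF before touching the other could otherwise block.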
