6.5.0 version breaks some of our tests #5738
The other issue is that some of the tests that were taking milliseconds to run now take tens of seconds because of the retries that happen. For example, this is one of our logs:
Is there a way to tell the client (for testing, at least): "do not retry"?
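The slowdown described above is what retries under exponential backoff tend to produce. As a minimal, self-contained sketch (the 100 ms initial interval, factor of 2, and 10-retry limit are illustrative assumptions, not Fabric8's documented defaults), the pure waiting time adds up fast:

```java
// Illustrative sketch of why retrying with exponential backoff can turn a
// millisecond test into one that takes tens of seconds. The interval, factor,
// and retry limit below are assumed values, not Fabric8's actual defaults.
public class RetryBackoffSketch {

    // Sum the waiting time before each retry, doubling (or scaling by
    // `factor`) the interval after every attempt.
    static long totalBackoffMillis(long initialIntervalMillis, double factor, int retryLimit) {
        long total = 0;
        long interval = initialIntervalMillis;
        for (int attempt = 0; attempt < retryLimit; attempt++) {
            total += interval;                     // wait before this retry
            interval = (long) (interval * factor); // exponential growth
        }
        return total;
    }

    public static void main(String[] args) {
        // With a 100 ms initial interval, factor 2, and 10 retries:
        // 100 + 200 + 400 + ... + 51200 ms = 102300 ms of pure waiting.
        System.out.println(totalBackoffMillis(100, 2.0, 10));
    }
}
```

Setting the retry limit to 0 removes all of that waiting, which is why the test returns to failing immediately.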
The problem is not with the MockWebServer, but with the new retry approach. To fix the tests you need to either account for this or disable retries. I think I have a few examples of the latter; let me try to find some references.
Not sure if this was the one, but please try. Can we use:

```java
@EnableKubernetesMockClient(kubernetesClientBuilderCustomizer = DisableKubernetesClientRequestRetry.class)
class KubernetesClientDisableRetryTest {
    // ...
}
```
@manusa this:
did not help, unfortunately.
Do you have a working branch and a reference to one of the failing tests? I can try to take a look.
Yes, that could be used as a way to provide common initialization options for those tests. |
I have a public repository with a test: here. What I want to achieve is for the test to fail immediately, without logs like this:
If there is anything else needed from me, do not hesitate! Thank you.
I'm not sure what might be wrong with your execution.
The same test with the line `mockClient.getConfiguration().setRequestRetryBackoffLimit(0);` commented out does show the repetition (as expected).
Marc, you're right! I'm really sorry for being an idiot: I was looking at too many things at the same time. Setting:
indeed solved the 500 issue I was seeing. I really appreciate your time here. Awesome that you also exposed:
This solves the second problem of ours, with long-running tests. Fabulous!
Cool, thx 🚀
Describe the bug
Version 6.5.0 changed the way `@EnableKubernetesMockClient` works.
Fabric8 Kubernetes Client version
6.5.1
Steps to reproduce
I have a very simple test like this (I can provide a GitHub repo if needed):
and a test:
In version 6.4.1 this test fails. In version 6.5.0 this test passes. In version 6.5.0, I see such logs:
So it seems that there was a decision to retry on error code 500 in that release. Unfortunately, this breaks a lot of our tests. Fixing them is not complicated; we can change the error code from 500 to 400, for example. But I am still interested in why this breaking change happened, and whether there is a way for us to still stick to 500 without retries. Thank you.
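For context, retry policies commonly key off the status-code class, and this thread suggests a 500 response is retried while a 400 is not. A generic sketch of such a predicate (this reflects the common "retry server errors only" convention and is an assumption, not Fabric8's actual implementation):

```java
// Generic sketch of a status-code retry predicate. The thread indicates that
// since 6.5.0 a 500 response is retried while a 400 is not, which matches the
// common convention of retrying only server-side (5xx) errors.
// This is NOT Fabric8's actual code; it only illustrates the idea.
public class RetryPredicateSketch {

    // Retry only on 5xx (server error) responses; 4xx client errors are
    // treated as final, so a test stubbing 400 fails immediately.
    static boolean shouldRetry(int statusCode) {
        return statusCode >= 500 && statusCode < 600;
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(500)); // true: the mock test keeps retrying
        System.out.println(shouldRetry(400)); // false: the mock test fails at once
    }
}
```

This is consistent with the workaround above: switching the stubbed error from 500 to 400 sidesteps the retry path entirely.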
Expected behavior
The test should fail, imho, in both versions.
Runtime
Kubernetes (vanilla)
Kubernetes API Server version
1.25.3@latest
Environment
Linux, macOS
Fabric8 Kubernetes Client Logs
No response
Additional context
No response