Fail to redeploy to Kubernetes #19701
Comments
This looks like a bug in the Kubernetes Client. @manusa mind taking a look please?
Hi, this is related to fabric8io/kubernetes-client#3122 and probably caused by the change of behavior in fabric8io/kubernetes-client#3221 (comment). I haven't seen yet how this is performed in Quarkus. In JKube we switched to a patch operation when applying/reapplying Services (eclipse-jkube/jkube#756 & eclipse-jkube/jkube#757). Most likely a similar change needs to be done in the Quarkus extension (again, haven't checked yet). Related changes in Quarkus
Hm... That might be a little tricky in Quarkus, as we don't keep any state on whether it's the first deploy or not.
Keeping the state is not necessary. Worst case, the cluster can be checked for the existence of the Service.
Okay. If you need any help, let me know.
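A minimal sketch of the approach discussed above (check whether the Service already exists, then patch it instead of replacing it), assuming the fabric8 kubernetes-client 5.x API; the class and method names are illustrative, not the actual Quarkus or JKube code:

```java
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class ServiceApplySketch {

    // 'service' stands for the Service generated from the Quarkus manifests (illustrative).
    static void apply(Service service, String namespace) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            String name = service.getMetadata().getName();
            Service existing = client.services()
                    .inNamespace(namespace)
                    .withName(name)
                    .get();
            if (existing == null) {
                // First deployment: a plain create is fine.
                client.services().inNamespace(namespace).create(service);
            } else {
                // Redeployment: patch instead of replace, so server-assigned
                // fields (e.g. the allocated nodePort) are not clobbered.
                client.services().inNamespace(namespace).withName(name).patch(service);
            }
        }
    }
}
```

Patching preserves server-assigned fields such as the allocated nodePort, which a replace of the existing Service would otherwise conflict with.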
This issue is tightly related to the change of behavior of the resource deployment, which has changed intermittently in both 1.1x.x and 2.x versions before and after a regression report (#15487). So this issue is probably also reproducible in the latest 1.x (LTS) release; can anyone confirm?
I could not reproduce this issue using either a local K8s environment (kind) or OpenShift (with the k8s and container-image-jib extensions). The command I tried:
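Presumably the deploy goal from the bug description, i.e. (an assumption):

```shell
mvn clean package -Dquarkus.kubernetes.deploy=true
```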
I've also tried different service types (node-port, cluster-ip and load-balancer). Every time I redeployed the app after making some changes, everything worked fine and the changes took effect.
I've also tried with 2.2.1.Final, 2.2.0.Final and 999-SNAPSHOT. Any more hints about how to reproduce this issue?
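For reference, switching the type of the generated Service is a one-line configuration change; a minimal sketch, assuming the `quarkus.kubernetes.service-type` property of the Quarkus Kubernetes extension (values written in kebab-case as in the configuration guide):

```properties
# application.properties: type of the Service generated by the Kubernetes extension
quarkus.kubernetes.service-type=node-port
# other variants tried above:
#quarkus.kubernetes.service-type=cluster-ip
#quarkus.kubernetes.service-type=load-balancer
```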
With Minikube and Quarkus 2.2.1.Final:
Basically the issue should be reproducible by invoking:
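Assuming this refers to the deploy goal from the report, the reproduction would be running it twice against the same cluster:

```shell
# First run: creates the Deployment and the Service in the cluster.
mvn clean package -Dquarkus.kubernetes.deploy=true
# Second run against the same cluster: this is where the redeploy failure is reported.
mvn clean package -Dquarkus.kubernetes.deploy=true
```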
I tried to reproduce on the RH Developer Sandbox (using S2I) but I couldn't (I think that OpenShift takes care of the allocated nodePort issue, since the problem is not even reproducible when using
Describe the bug
When running
mvn clean package -Dquarkus.kubernetes.deploy=true
to update an existing deployment in Kubernetes, it fails with an exception. This previously worked in 1.x.
Expected behavior
The image is built and the updated deployment is rolled out to Kubernetes.
Actual behavior
Exception when running
mvn clean package -Dquarkus.kubernetes.deploy=true
if the application is already deployed.
How to Reproduce?
No response
Output of uname -a or ver
No response
Output of java -version
No response
GraalVM version (if different from Java)
No response
Quarkus version or git rev
No response
Build tool (i.e. output of mvnw --version or gradlew --version)
No response
Additional information
No response