
Quarkus 3.0.1.Final Kubernetes builds break Stork #33131

Closed
coricko opened this issue May 4, 2023 · 5 comments · Fixed by #33237
Labels
area/kubernetes area/stork env/windows Impacts Windows machines kind/bug Something isn't working
Milestone

Comments


coricko commented May 4, 2023

Describe the bug

We use the Quarkus Kubernetes extension to generate Kubernetes YAML files that are then deployed. The generated Service configuration includes http (80->8080) and https (443->8443). This part works fine.

However, when another application uses quarkus.stork.my-service.service-discovery.type=kubernetes to connect to the child service, it fails with a 500 jakarta.ws.rs.ProcessingException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.0.54.217:80 error. It's important to note that the port it tries to connect to is 80 instead of the expected 8080 for a Java application. The IP address belongs to the correct Pod; it is just trying to use a port that is not open.

I've tested adding quarkus.http.insecure-requests=disabled when building the child deployments. This prevents the generated Service from having the https 443 port open, but it also prevents the application from actually starting, since we use SSL termination at the load balancer and there is no SSL certificate for the application itself. This seems to be related to https://github.com/quarkusio/quarkus/wiki/Migration-Guide-3.0#the-https-container-port-is-added-to-generated-pod-resources
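For reference, the two settings mentioned above would sit in application.properties roughly as follows (a sketch; the service name "my-service" is illustrative):

```properties
# Parent service: resolve the child service via Kubernetes service discovery.
quarkus.stork.my-service.service-discovery.type=kubernetes

# Workaround tried on the child service: changes which ports the generated
# Service exposes, but per the report it prevented the child application from
# starting because TLS terminates at the load balancer (no local certificate).
quarkus.http.insecure-requests=disabled
```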

Expected behavior

The parent service should be able to connect to the child service without issues.

Actual behavior

The parent service fails with a 500 error like this:
ERROR [io.qua.ver.htt.run.QuarkusErrorHandler] (executor-thread-1) HTTP Request to /api failed, error id: c404dda1-543a-4a7d-992f-5453d4af5513-4: jakarta.ws.rs.ProcessingException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.0.54.217:80
	at org.jboss.resteasy.reactive.client.handlers.ClientSendRequestHandler$3.accept(ClientSendRequestHandler.java:223)
	at org.jboss.resteasy.reactive.client.handlers.ClientSendRequestHandler$3.accept(ClientSendRequestHandler.java:215)
	at io.smallrye.context.impl.wrappers.SlowContextualConsumer.accept(SlowContextualConsumer.java:21)
	at io.smallrye.mutiny.helpers.UniCallbackSubscriber.onFailure(UniCallbackSubscriber.java:62)
	at io.smallrye.mutiny.operators.uni.UniOperatorProcessor.onFailure(UniOperatorProcessor.java:55)
	at io.smallrye.mutiny.operators.uni.UniOperatorProcessor.onFailure(UniOperatorProcessor.java:55)
	at org.jboss.resteasy.reactive.client.AsyncResultUni.lambda$subscribe$1(AsyncResultUni.java:37)
	at io.vertx.core.impl.future.FutureImpl$3.onFailure(FutureImpl.java:153)
	at io.vertx.core.impl.future.FutureBase.lambda$emitFailure$1(FutureBase.java:69)
	at io.vertx.core.impl.EventLoopContext.execute(EventLoopContext.java:86)
	at io.vertx.core.impl.DuplicatedContext.execute(DuplicatedContext.java:163)
	at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:66)
	at io.vertx.core.impl.future.FutureImpl.tryFail(FutureImpl.java:230)
	at io.vertx.core.impl.future.PromiseImpl.tryFail(PromiseImpl.java:23)
	at io.vertx.core.http.impl.HttpClientImpl.lambda$doRequest$6(HttpClientImpl.java:689)
	at io.vertx.core.net.impl.pool.Endpoint.lambda$getConnection$0(Endpoint.java:52)
	at io.vertx.core.http.impl.SharedClientHttpStreamEndpoint$Request.handle(SharedClientHttpStreamEndpoint.java:162)
	at io.vertx.core.http.impl.SharedClientHttpStreamEndpoint$Request.handle(SharedClientHttpStreamEndpoint.java:123)
	at io.vertx.core.impl.EventLoopContext.emit(EventLoopContext.java:55)
	at io.vertx.core.impl.ContextBase.emit(ContextBase.java:239)
	at io.vertx.core.net.impl.pool.SimpleConnectionPool$ConnectFailed$1.run(SimpleConnectionPool.java:384)
	at io.vertx.core.net.impl.pool.CombinerExecutor.submit(CombinerExecutor.java:56)
	at io.vertx.core.net.impl.pool.SimpleConnectionPool.execute(SimpleConnectionPool.java:245)
	at io.vertx.core.net.impl.pool.SimpleConnectionPool.lambda$connect$2(SimpleConnectionPool.java:259)
	at io.vertx.core.http.impl.SharedClientHttpStreamEndpoint.lambda$connect$2(SharedClientHttpStreamEndpoint.java:104)
	at io.vertx.core.impl.future.FutureImpl$3.onFailure(FutureImpl.java:153)
	at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:75)
	at io.vertx.core.impl.future.FutureImpl.tryFail(FutureImpl.java:230)
	at io.vertx.core.impl.future.Composition$1.onFailure(Composition.java:66)
	at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:75)
	at io.vertx.core.impl.future.FailedFuture.addListener(FailedFuture.java:98)
	at io.vertx.core.impl.future.Composition.onFailure(Composition.java:55)
	at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:75)
	at io.vertx.core.impl.future.FutureImpl.tryFail(FutureImpl.java:230)
	at io.vertx.core.impl.future.PromiseImpl.tryFail(PromiseImpl.java:23)
	at io.vertx.core.impl.EventLoopContext.emit(EventLoopContext.java:55)
	at io.vertx.core.impl.ContextBase.emit(ContextBase.java:239)
	at io.vertx.core.net.impl.NetClientImpl.failed(NetClientImpl.java:339)
	at io.vertx.core.net.impl.NetClientImpl.lambda$connectInternal2$6(NetClientImpl.java:311)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
	at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
	at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:110)
	at io.vertx.core.net.impl.ChannelProvider.lambda$handleConnect$0(ChannelProvider.java:157)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
	at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.0.54.217:80
Caused by: java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)

How to Reproduce?

1. Using the quickstart at https://github.com/quarkusio/quarkus-quickstarts/tree/main/stork-kubernetes-quickstart, update kubernetes-setup.yml and add

       - name: https
         port: 443
         targetPort: 8443

   to the service's ports list.
2. Deploy everything as normal.
3. Try going to /api to test it.
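After step 1, the relevant part of the quickstart's Service would look roughly like this (a sketch; metadata and selector are illustrative). With both ports present, Stork selects neither and falls back to port 80, which nothing listens on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # illustrative name
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https     # the port added in step 1
      port: 443
      targetPort: 8443
```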

Output of uname -a or ver

Windows

Output of java -version

openjdk 17.0.2 2022-01-18

GraalVM version (if different from Java)

No response

Quarkus version or git rev

No response

Build tool (ie. output of mvnw --version or gradlew --version)

Apache Maven 3.9.0 (9b58d2bad23a66be161c4664ef21ce219c2c8584)

Additional information

Quickstart parent application was created using mvn package -Dquarkus.container-image.build=true -Dquarkus.container-image.push=true and deployed to an AWS EKS cluster running v1.24.10-eks-48e63af.

@coricko coricko added the kind/bug Something isn't working label May 4, 2023

quarkus-bot bot commented May 4, 2023

/cc @Sgitario (kubernetes,stork), @aureamunoz (stork), @cescoffier (stork), @geoand (kubernetes), @iocanel (kubernetes)

Contributor

Sgitario commented May 5, 2023

This is a Stork issue that needs to be addressed in SmallRye Stork.
The problem is that when there are multiple ports registered in the Service, it always uses port 80.

@aureamunoz is there any existing solution for this issue in Stork, or what do you think is the best solution (perhaps providing a new property to directly supply the port number)?

Sgitario added a commit to Sgitario/smallrye-stork that referenced this issue May 5, 2023
This is especially necessary when endpoints expose multiple ports. At the moment, when there are multiple ports, Stork does not select any of them and uses port 80, which is problematic when the application does not use this port.

Moreover, it improves the port-matching logic to also check the pod spec (if there is only one container with one port, it will use it).

Relates quarkusio/quarkus#33131
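The selection rule that commit message describes can be illustrated with a small standalone sketch (hypothetical code, not SmallRye Stork's actual implementation): prefer the port whose name matches the configured port-name, fall back to a lone port, and otherwise treat the lookup as ambiguous instead of silently defaulting to 80.

```java
import java.util.Map;
import java.util.Optional;

// Illustrative port-selection rule (not Stork's real code):
// 1. If a port-name is configured, pick the port registered under that name.
// 2. Otherwise, if exactly one port is registered, use it.
// 3. Otherwise the choice is ambiguous; callers should raise an error
//    rather than fall back to port 80.
public class PortSelector {

    static Optional<Integer> selectPort(Map<String, Integer> portsByName,
                                        String configuredPortName) {
        if (configuredPortName != null) {
            return Optional.ofNullable(portsByName.get(configuredPortName));
        }
        if (portsByName.size() == 1) {
            return Optional.of(portsByName.values().iterator().next());
        }
        return Optional.empty(); // ambiguous: multiple unnamed candidates
    }

    public static void main(String[] args) {
        Map<String, Integer> ports = Map.of("http", 8080, "https", 8443);
        System.out.println(selectPort(ports, "http").orElse(-1));           // named port wins
        System.out.println(selectPort(ports, null).orElse(-1));             // ambiguous -> -1
        System.out.println(selectPort(Map.of("http", 8080), null).orElse(-1)); // single port
    }
}
```

This matches the issue scenario: with both http and https registered and no configured port-name, there is no safe default, which is why the fix introduces an explicit property.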
Contributor

Sgitario commented May 5, 2023

@aureamunoz I proposed smallrye/smallrye-stork#547 to fix this issue. It's my first contribution to the Stork project, so feel free to close it if you don't consider it the right solution.

@aureamunoz
Member

> This is a Stork issue that needs to be addressed in SmallRye Stork. The problem is that when there are multiple ports registered in the Service, it always uses port 80.
>
> @aureamunoz is there any existing solution for this issue in Stork, or what do you think is the best solution (perhaps providing a new property to directly supply the port number)?

Ok, I see. In Stork we are taking just the first port from the endpoints. Looking at the fix now...

@Sgitario
Contributor

With #33237, you would need to configure the port name by adding:

quarkus.stork.my-service.service-discovery.port-name=http
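Putting that together with the discovery setting from the original report, the parent service's configuration would look something like this ("my-service" is the illustrative service name used above):

```properties
# Kubernetes service discovery via Stork, pinned to the Service port named
# "http" so a multi-port Service no longer resolves to port 80.
quarkus.stork.my-service.service-discovery.type=kubernetes
quarkus.stork.my-service.service-discovery.port-name=http
```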
