diff --git a/_versions/2.7/guides/0-glossary.adoc b/_versions/2.7/guides/0-glossary.adoc deleted file mode 100644 index 259e309503e..00000000000 --- a/_versions/2.7/guides/0-glossary.adoc +++ /dev/null @@ -1,18 +0,0 @@ -= Glossary - -include::./attributes.adoc[] - -This is a collection of preferred terms in the documentation and website. -Please stay within these terms for consistency. - -* Live coding:: for our `quarkus:dev` capability -* GraalVM native image:: preferred term for the VM that creates native executables. No space. -* Substrate VM:: non-preferred. Exclude. -* Native Executable:: the executable that is compiled to native 1s and 0s -* Docker image:: for the actual `Dockerfile` definition and when the tool chain is involved -* Container:: when we discuss Quarkus running in... containers -* Supersonic Subatomic Java:: our tagline -* Kubernetes Native Java:: our preferred tagline to say that we rock for containers -* Developer Joy:: for everything going from live reload to the opinionated layer to a single config file -* Unify Imperative and Reactive:: imperative and reactive. 'Nuff said. -* Best of breed frameworks and standards:: when we explain our stack diff --git a/_versions/2.7/guides/README.adoc b/_versions/2.7/guides/README.adoc deleted file mode 100644 index ccb24bfec4d..00000000000 --- a/_versions/2.7/guides/README.adoc +++ /dev/null @@ -1,49 +0,0 @@ -= How to Create Quarkus Documentation -include::./attributes.adoc[] - -This guide describes the AsciiDoc format and conventions that Quarkus has adopted. - -== References - -The following links provide background on the general conventions and AsciiDoc syntax.
- -* https://redhat-documentation.github.io/asciidoc-markup-conventions/[AsciiDoc Mark-up Quick Reference for Documentation] -* http://asciidoctor.org/docs/user-manual/[Asciidoctor User Manual] -* http://asciidoctor.org/docs/asciidoc-syntax-quick-reference/[AsciiDoc Syntax Quick Reference] - -== Variables for Use in Documents - -The following variables externalize key information that can change over time, and so references -to such information should be done by using the variable inside of {} curly brackets. The -complete list of externalized variables for use is given in the following table: - -.Variables -[cols="()); - request.getRequestContext().getAuthorizer().getJwt().getClaims().put("cognito:username", "Bill"); - - given() - .contentType("application/json") - .accept("application/json") - .body(request) - .when() - .post("/_lambda_") - .then() - .statusCode(200) - .body("body", equalTo("Bill")); - } ----- - -The above example simulates sending a Cognito principal with an HTTP request to your HTTP Lambda. - -If you want to hand code raw events for the AWS HTTP API, the AWS Lambda library has the request event type which is -`com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent` and the response event type -of `com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPResponse`. This corresponds -to the `quarkus-amazon-lambda-http` extension and the AWS HTTP API. - -If you want to hand code raw events for the AWS REST API, Quarkus has its own implementation: `io.quarkus.amazon.lambda.http.model.AwsProxyRequest` -and `io.quarkus.amazon.lambda.http.model.AwsProxyResponse`. This corresponds -to `quarkus-amazon-lambda-rest` extension and the AWS REST API. - -The mock event server is also started for `@NativeImageTest` unit tests so will work -with native binaries too. All this provides similar functionality to the SAM CLI local testing, without the overhead of Docker. 
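For example, a hand-coded raw event for the AWS HTTP API can be posted to the mock event server like this (a sketch: the `/hello` route is assumed to exist in your project, and the event classes come from the `aws-lambda-java-events` library; the plain-setter style shown here is an assumption):

```java
import org.junit.jupiter.api.Test;

import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent;

import io.quarkus.test.junit.QuarkusTest;

import static io.restassured.RestAssured.given;

@QuarkusTest
public class RawEventTest {

    @Test
    public void testRawHttpApiEvent() {
        // Build the raw payload v2 event by hand; values are illustrative.
        APIGatewayV2HTTPEvent request = new APIGatewayV2HTTPEvent();
        request.setRawPath("/hello");
        request.setRequestContext(new APIGatewayV2HTTPEvent.RequestContext());
        request.getRequestContext().setHttp(new APIGatewayV2HTTPEvent.RequestContext.Http());
        request.getRequestContext().getHttp().setMethod("GET");

        // Post the event to the mock event server, just like the earlier example.
        given()
            .contentType("application/json")
            .accept("application/json")
            .body(request)
            .when()
            .post("/_lambda_")
            .then()
            .statusCode(200);
    }
}
```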
- -Finally, if port 8080 or port 8081 is not available on your computer, you can modify the dev -and test mode ports in `application.properties`: - -[source, subs=attributes+] ---- -quarkus.lambda.mock-event-server.dev-port=8082 -quarkus.lambda.mock-event-server.test-port=8083 ---- - - - -== Simulate Amazon Lambda Deployment with SAM CLI - -The AWS SAM CLI allows you to run your lambdas locally on your laptop in a simulated Lambda environment. This requires Docker to be installed. -After you have built your Maven project, execute this command: - -[source,bash,subs=attributes+] ---- -sam local start-api --template target/sam.jvm.yaml ---- - -This will start a Docker container that mimics Amazon Lambda's deployment environment. Once the environment -is started, you can invoke the example lambda in your browser by going to: - -http://127.0.0.1:3000/hello - -In the console you'll see startup messages from the lambda. This particular deployment starts a JVM and loads your -lambda as pure Java. - - -== Deploy to AWS - -[source,bash,subs=attributes+] ---- -sam deploy -t target/sam.jvm.yaml -g ---- - -Answer all the questions and your lambda will be deployed and the necessary hooks to the API Gateway will be set up. If -everything deploys successfully, the root URL of your microservice will be output to the console. Something like this: - ---- -Key LambdaHttpApi -Description URL for application -Value https://234asdf234as.execute-api.us-east-1.amazonaws.com/ ---- - -The `Value` attribute is the root URL for your lambda. Copy it to your browser and add `hello` at the end. - -[NOTE] -Responses for binary types will be automatically encoded with base64. This differs from the behavior under -`quarkus:dev`, which returns the raw bytes. Amazon's API has additional restrictions requiring the base64 encoding.
In general, client code will automatically handle this encoding, but in certain custom situations you should be aware -that you may need to manage that encoding manually. - -== Deploying a native executable - -To deploy a native executable, you must build it with GraalVM. - -include::includes/devtools/build-native-container.adoc[] - -You can then test the executable locally with `sam local`: - -[source,bash,subs=attributes+] ---- -sam local start-api --template target/sam.native.yaml ---- - -To deploy to AWS Lambda: -[source,bash,subs=attributes+] ---- -sam deploy -t target/sam.native.yaml -g ---- - -== Examine the POM - -There is nothing special about the POM other than the inclusion of the `quarkus-amazon-lambda-http` extension -(if you are deploying an AWS Gateway HTTP API) or the `quarkus-amazon-lambda-rest` extension (if you are deploying an AWS Gateway REST API). -These extensions automatically generate everything you might need for your lambda deployment. - -Also, at least in the generated Maven archetype `pom.xml`, the `quarkus-resteasy`, `quarkus-reactive-routes`, and `quarkus-undertow` -dependencies are all optional. Pick which HTTP framework(s) you want to use (JAX-RS, Reactive Routes, and/or Servlet) and -remove the other dependencies to shrink your deployment. - -=== Examine sam.yaml - -The `sam.yaml` syntax is beyond the scope of this document. There are a couple of things that must be highlighted in case you are -going to craft your own custom `sam.yaml` deployment files. - -The first thing to note is that pure Java lambda deployments require a specific handler class. -Do not change the Lambda handler name. - -[source, subs=attributes+] ---- - Properties: - Handler: io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler::handleRequest - Runtime: java11 ---- - -This handler is a bridge between the lambda runtime and the Quarkus HTTP framework you are using (JAX-RS, Servlet, etc.).
- -If you want to go native, there's an environment variable that must be set for native GraalVM deployments. If you look at `sam.native.yaml` -you'll see this: - -[source, subs=attributes+] ---- - Environment: - Variables: - DISABLE_SIGNAL_HANDLERS: true ---- - -This environment variable resolves some incompatibilities between Quarkus and the Amazon Lambda Custom Runtime environment. - -Finally, there is one specific thing for AWS Gateway REST API deployments. -That API assumes that HTTP response bodies are text unless you explicitly tell it which media types are -binary through configuration. To make things easier, the Quarkus extension forces a binary (base64) encoding of all -HTTP response messages, and the `sam.yaml` file must configure the API Gateway to assume all media types are binary: - -[source, subs=attributes+] ---- - Globals: - Api: - EndpointConfiguration: REGIONAL - BinaryMediaTypes: - - "*/*" ---- - -== Injectable AWS Context Variables - -If you are using RESTEasy and JAX-RS, you can inject various AWS Context variables into your JAX-RS resource classes -using the JAX-RS `@Context` annotation. - -For the AWS HTTP API you can inject the AWS variables `com.amazonaws.services.lambda.runtime.Context` and -`com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent`. Here is an example: - -[source, java] ---- -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.Context; -import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent; - - -@Path("/myresource") -public class MyResource { - @GET - public String ctx(@Context com.amazonaws.services.lambda.runtime.Context ctx) { return ctx.getAwsRequestId(); } - - @GET - public String event(@Context APIGatewayV2HTTPEvent event) { return event.toString(); } - - @GET - public String requestContext(@Context APIGatewayV2HTTPEvent.RequestContext req) { return req.toString(); } - -} ---- - -For the AWS REST API you can inject the AWS variables `com.amazonaws.services.lambda.runtime.Context` and -`io.quarkus.amazon.lambda.http.model.AwsProxyRequestContext`.
Here is an example: - -[source, java] ---- -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.Context; -import io.quarkus.amazon.lambda.http.model.AwsProxyRequestContext; -import io.quarkus.amazon.lambda.http.model.AwsProxyRequest; - - -@Path("/myresource") -public class MyResource { - @GET - public String ctx(@Context com.amazonaws.services.lambda.runtime.Context ctx) { return ctx.getAwsRequestId(); } - - @GET - public String reqContext(@Context AwsProxyRequestContext req) { return req.toString(); } - - @GET - public String req(@Context AwsProxyRequest req) { return req.toString(); } - -} ---- - -== Tracing with AWS XRay and GraalVM - -If you are building native images, and want to use https://aws.amazon.com/xray[AWS X-Ray Tracing] with your lambda, -you will need to include `quarkus-amazon-lambda-xray` as a dependency in your POM. The AWS X-Ray -library is not fully compatible with GraalVM, so we had to do some integration work to make this work. - -== Security Integration - -When you send an HTTP request to the API Gateway, the Gateway turns that HTTP request into a JSON event document that is -forwarded to a Quarkus Lambda. The Quarkus Lambda parses this JSON and converts it into an internal representation of an HTTP -request that can be consumed by any HTTP framework Quarkus supports (JAX-RS, servlet, Reactive Routes). - -API Gateway supports many different ways to securely invoke your HTTP endpoints that are backed by Lambda and Quarkus. -If you enable it, Quarkus will automatically parse relevant parts of the https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html[event JSON document], -look for security-based metadata, and register a `java.security.Principal` internally that can be looked up in JAX-RS -by injecting a `javax.ws.rs.core.SecurityContext`, via `HttpServletRequest.getUserPrincipal()` in servlet, and via `RouteContext.user()` in Reactive Routes. -If you want more security information, the `Principal` object can be typecast to -a class that will give you more information.
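For example, with security enabled you could inject the JAX-RS `SecurityContext` and inspect the principal (a sketch; the `/whoami` resource is illustrative, and only `getName()` from `java.security.Principal` is assumed on the concrete principal classes):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.SecurityContext;

import io.quarkus.amazon.lambda.http.CognitoPrincipal;

@Path("/whoami")
public class WhoAmIResource {

    @GET
    public String whoami(@Context SecurityContext securityContext) {
        // Casting to CognitoPrincipal (or IAMPrincipal/CustomPrincipal) exposes
        // more details, depending on which authorizer produced the principal.
        if (securityContext.getUserPrincipal() instanceof CognitoPrincipal) {
            CognitoPrincipal cognito = (CognitoPrincipal) securityContext.getUserPrincipal();
            return cognito.getName();
        }
        return securityContext.getUserPrincipal().getName();
    }
}
```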
- -To enable this security feature, add this to your `application.properties` file: ---- -quarkus.lambda-http.enable-security=true ---- - - -Here's how it's mapped: - -.HTTP `quarkus-amazon-lambda-http` -[options="header"] -|======================= -|Auth Type |Principal Class |JSON path of Principal Name -|Cognito JWT |`io.quarkus.amazon.lambda.http.CognitoPrincipal`|`requestContext.authorizer.jwt.claims.cognito:username` -|IAM |`io.quarkus.amazon.lambda.http.IAMPrincipal` |`requestContext.authorizer.iam.userId` -|Custom Lambda |`io.quarkus.amazon.lambda.http.CustomPrincipal` |`requestContext.authorizer.lambda.principalId` - -|======================= - -.REST `quarkus-amazon-lambda-rest` -[options="header"] -|======================= -|Auth Type |Principal Class |JSON path of Principal Name -|Cognito |`io.quarkus.amazon.lambda.http.CognitoPrincipal`|`requestContext.authorizer.claims.cognito:username` -|IAM |`io.quarkus.amazon.lambda.http.IAMPrincipal` |`requestContext.identity.user` -|Custom Lambda |`io.quarkus.amazon.lambda.http.CustomPrincipal` |`requestContext.authorizer.principalId` - -|======================= - -== Custom Security Integration - -The default support for AWS security only maps the principal name to Quarkus security -APIs and does nothing to map claims, roles, or permissions. You have full control over -how security metadata in the lambda HTTP event is mapped to Quarkus security APIs using -implementations of the `io.quarkus.amazon.lambda.http.LambdaIdentityProvider` -interface. By implementing this interface, you can do things like define role mappings for your principal -or publish additional attributes provided by IAM, Cognito, or your custom Lambda security integration.
- -.HTTP `quarkus-amazon-lambda-http` -[source, java] ----- -package io.quarkus.amazon.lambda.http; - -/** - * Helper interface that removes some boilerplate for creating - * an IdentityProvider that processes APIGatewayV2HTTPEvent - */ -public interface LambdaIdentityProvider extends IdentityProvider { - @Override - default public Class getRequestType() { - return LambdaAuthenticationRequest.class; - } - - @Override - default Uni authenticate(LambdaAuthenticationRequest request, AuthenticationRequestContext context) { - APIGatewayV2HTTPEvent event = request.getEvent(); - SecurityIdentity identity = authenticate(event); - if (identity == null) { - return Uni.createFrom().optional(Optional.empty()); - } - return Uni.createFrom().item(identity); - } - - /** - * You must override this method unless you directly override - * IdentityProvider.authenticate - * - * @param event - * @return - */ - default SecurityIdentity authenticate(APIGatewayV2HTTPEvent event) { - throw new IllegalStateException("You must override this method or IdentityProvider.authenticate"); - } -} ----- - -For HTTP, the important method to override is `LambdaIdentityProvider.authenticate(APIGatewayV2HTTPEvent event)`. 
From this -you will allocate a SecurityIdentity based on how you want to map security data from `APIGatewayV2HTTPEvent` - -.REST `quarkus-amazon-lambda-rest` -[source, java] ----- -package io.quarkus.amazon.lambda.http; - -import java.util.Optional; - -import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent; - -import io.quarkus.amazon.lambda.http.model.AwsProxyRequest; -import io.quarkus.security.identity.AuthenticationRequestContext; -import io.quarkus.security.identity.IdentityProvider; -import io.quarkus.security.identity.SecurityIdentity; -import io.smallrye.mutiny.Uni; - -/** - * Helper interface that removes some boilerplate for creating - * an IdentityProvider that processes APIGatewayV2HTTPEvent - */ -public interface LambdaIdentityProvider extends IdentityProvider { -... - - /** - * You must override this method unless you directly override - * IdentityProvider.authenticate - * - * @param event - * @return - */ - default SecurityIdentity authenticate(AwsProxyRequest event) { - throw new IllegalStateException("You must override this method or IdentityProvider.authenticate"); - } -} ----- - -For REST, the important method to override is `LambdaIdentityProvider.authenticate(AwsProxyRequest event)`. From this -you will allocate a SecurityIdentity based on how you want to map security data from `AwsProxyRequest`. - -Your implemented provider must be a CDI bean. 
Here's an example: - -[source,java] ----- -package org.acme; - -import java.security.Principal; - -import javax.enterprise.context.ApplicationScoped; - -import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent; - -import io.quarkus.amazon.lambda.http.LambdaIdentityProvider; -import io.quarkus.security.identity.SecurityIdentity; -import io.quarkus.security.runtime.QuarkusPrincipal; -import io.quarkus.security.runtime.QuarkusSecurityIdentity; - -@ApplicationScoped -public class CustomSecurityProvider implements LambdaIdentityProvider { - @Override - public SecurityIdentity authenticate(APIGatewayV2HTTPEvent event) { - if (event.getHeaders() == null || !event.getHeaders().containsKey("x-user")) - return null; - Principal principal = new QuarkusPrincipal(event.getHeaders().get("x-user")); - QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(); - builder.setPrincipal(principal); - return builder.build(); - } -} ----- - -Here's the same example, but with the AWS Gateway REST API: - -[source,java] ----- -package org.acme; - -import java.security.Principal; - -import javax.enterprise.context.ApplicationScoped; - -import io.quarkus.amazon.lambda.http.model.AwsProxyRequest; - -import io.quarkus.amazon.lambda.http.LambdaIdentityProvider; -import io.quarkus.security.identity.SecurityIdentity; -import io.quarkus.security.runtime.QuarkusPrincipal; -import io.quarkus.security.runtime.QuarkusSecurityIdentity; - -@ApplicationScoped -public class CustomSecurityProvider implements LambdaIdentityProvider { - @Override - public SecurityIdentity authenticate(AwsProxyRequest event) { - if (event.getMultiValueHeaders() == null || !event.getMultiValueHeaders().containsKey("x-user")) - return null; - Principal principal = new QuarkusPrincipal(event.getMultiValueHeaders().getFirst("x-user")); - QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(); - builder.setPrincipal(principal); - return builder.build(); - } -} ----- - 
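A provider can also populate roles on the `SecurityIdentity`. Here is a sketch that derives roles from a hypothetical comma-separated `x-roles` header (both header names here are illustrative, not part of any AWS contract):

```java
package org.acme;

import javax.enterprise.context.ApplicationScoped;

import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent;

import io.quarkus.amazon.lambda.http.LambdaIdentityProvider;
import io.quarkus.security.identity.SecurityIdentity;
import io.quarkus.security.runtime.QuarkusPrincipal;
import io.quarkus.security.runtime.QuarkusSecurityIdentity;

@ApplicationScoped
public class RoleMappingSecurityProvider implements LambdaIdentityProvider {
    @Override
    public SecurityIdentity authenticate(APIGatewayV2HTTPEvent event) {
        if (event.getHeaders() == null || !event.getHeaders().containsKey("x-user"))
            return null;
        QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder();
        builder.setPrincipal(new QuarkusPrincipal(event.getHeaders().get("x-user")));
        // Hypothetical role header, e.g. "x-roles: admin,user"
        String roles = event.getHeaders().get("x-roles");
        if (roles != null) {
            for (String role : roles.split(",")) {
                builder.addRole(role.trim());
            }
        }
        return builder.build();
    }
}
```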
-Quarkus should automatically discover this implementation and use it instead of the default implementation -discussed earlier. - -== Simple SAM Local Principal - -If you are testing your application with `sam local`, you can -hardcode a principal name to use when your application runs by setting -the `QUARKUS_AWS_LAMBDA_FORCE_USER_NAME` environment variable. diff --git a/_versions/2.7/guides/amazon-lambda.adoc b/_versions/2.7/guides/amazon-lambda.adoc deleted file mode 100644 index 2f9d2dfde5d..00000000000 --- a/_versions/2.7/guides/amazon-lambda.adoc +++ /dev/null @@ -1,623 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Amazon Lambda - -include::./attributes.adoc[] - -The `quarkus-amazon-lambda` extension allows you to use Quarkus to build your AWS Lambdas. -Your lambdas can use injection annotations from CDI or Spring and other Quarkus facilities as you need them. - -Quarkus lambdas can be deployed using the Amazon Java Runtime, or you can build a native executable and use -Amazon's Custom Runtime if you want a smaller memory footprint and faster cold boot startup time. - -Quarkus's integration with lambdas also supports Quarkus's Live Coding development cycle. You can -bring up your Quarkus lambda project in dev or test mode and code on your project live. - -== Prerequisites - -:prerequisites-time: 30 minutes -include::includes/devtools/prerequisites.adoc[] -* https://aws.amazon.com[An Amazon AWS account] -* https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html[AWS CLI] -* https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html[AWS SAM CLI], for local testing - -NOTE: For Gradle projects please <>, or for further reference consult the guide in the xref:gradle-tooling.adoc[Gradle setup page].
- -== Getting Started - -This guide walks you through generating an example Java project via a maven archetype and deploying it to AWS. - -== Installing AWS bits - -Installing all the AWS bits is probably the most difficult thing about this guide. Make sure that you follow all the steps -for installing AWS CLI. - -== Creating the Maven Deployment Project - -Create the Quarkus AWS Lambda maven project using our Maven Archetype. - - -[source,bash,subs=attributes+] ----- -mvn archetype:generate \ - -DarchetypeGroupId=io.quarkus \ - -DarchetypeArtifactId=quarkus-amazon-lambda-archetype \ - -DarchetypeVersion={quarkus-version} ----- - -[NOTE] -==== -If you prefer to use Gradle, you can quickly and easily generate a Gradle project via https://code.quarkus.io/[code.quarkus.io] -adding the `quarkus-amazon-lambda` extension as a dependency. - -Copy the build.gradle, gradle.properties and settings.gradle into the above generated Maven archetype project, to follow along with this guide. - -Execute: gradle wrapper to setup the gradle wrapper (recommended). - -For full Gradle details <>. -==== - -[[choose]] -== Choose Your Lambda - -The `quarkus-amazon-lambda` extension scans your project for a class that directly implements the Amazon `RequestHandler` or `RequestStreamHandler` interface. -It must find a class in your project that implements this interface or it will throw a build time failure. -If it finds more than one handler class, a build time exception will also be thrown. - -Sometimes, though, you might have a few related lambdas that share code and creating multiple maven modules is just -an overhead you don't want to do. The `quarkus-amazon-lambda` extension allows you to bundle multiple lambdas in one -project and use configuration or an environment variable to pick the handler you want to deploy. - -The generated project has three lambdas within it. Two that implement the `RequestHandler` interface, and one that implements the `RequestStreamHandler` interface. 
One that is used and two that are unused. If you open up -`src/main/resources/application.properties` you'll see this: - -[source,properties,subs=attributes+] ----- -quarkus.lambda.handler=test ----- - -The `quarkus.lambda.handler` property tells Quarkus which lambda handler to deploy. This can be overridden -with an environment variable too. - -If you look at the three generated handler classes in the project, you'll see that they are `@Named` differently. - -[source,java,subs=attributes+] ----- -@Named("test") -public class TestLambda implements RequestHandler { -} - -@Named("unused") -public class UnusedLambda implements RequestHandler { -} - -@Named("stream") -public class StreamLambda implements RequestStreamHandler { -} ----- - -The CDI name of the handler class must match the value specified within the `quarkus.lambda.handler` property. - - -== Deploy to AWS Lambda Java Runtime - -There are a few steps to get your lambda running on AWS. The generated maven project contains a helpful script to -create, update, delete, and invoke your lambdas for pure Java and native deployments. - -== Build and Deploy - -Build the project: - -include::includes/devtools/build.adoc[] - -This will compile and package your code. - -== Create an Execution Role - -View the https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-awscli.html[Getting Started Guide] for deploying -a lambda with AWS CLI. Specifically, make sure you have created an `Execution Role`. You will need to define -a `LAMBDA_ROLE_ARN` environment variable in your profile or console window, Alternatively, you can edit -the `manage.sh` script that is generated by the build and put the role value directly there: - -[source,bash] ----- -LAMBDA_ROLE_ARN="arn:aws:iam::1234567890:role/lambda-role" ----- - -== Extra Build Generated Files - -After you run the build, there are a few extra files generated by the `quarkus-amazon-lambda` extension. 
These files -are in the build directory: `target/` for Maven, `build/` for Gradle. - -* `function.zip` - lambda deployment file -* `manage.sh` - wrapper around AWS Lambda CLI calls -* `bootstrap-example.sh` - example bootstrap script for native deployments -* `sam.jvm.yaml` - (optional) for use with the SAM CLI and local testing -* `sam.native.yaml` - (optional) for use with the SAM CLI and native local testing - -== Create the function - -The `target/manage.sh` script is for managing your lambda using the AWS Lambda Java runtime. This script is provided only for -your convenience. Examine the output of the `manage.sh` script if you want to learn what AWS commands are executed -to create, delete, and update your lambdas. - -`manage.sh` supports four operations: `create`, `delete`, `update`, and `invoke`. - -NOTE: To verify your setup (that you have the AWS CLI installed, have executed `aws configure` for the AWS access keys, -and have set up the `LAMBDA_ROLE_ARN` environment variable as described above), please execute `manage.sh` without any parameters. -A usage statement will be printed to guide you accordingly. - -NOTE: If using Gradle, the path to the binaries in `manage.sh` must be changed from `target` to `build`. - -To see the `usage` statement, and validate AWS configuration: -[source,bash,subs=attributes+] ---- -sh target/manage.sh ---- - -You can `create` your function using the following command: - -[source,bash,subs=attributes+] ---- -sh target/manage.sh create ---- - -or if you do not have `LAMBDA_ROLE_ARN` already defined in this shell: - -[source,bash] ---- -LAMBDA_ROLE_ARN="arn:aws:iam::1234567890:role/lambda-role" sh target/manage.sh create ---- - -WARNING: Do not change the handler switch. This must be hardcoded to `io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler::handleRequest`. This -handler bootstraps Quarkus and wraps your actual handler so that injection can be performed.
- -If there are any problems creating the function, you must delete it with the `delete` function before re-running -the `create` command. - -[source,bash,subs=attributes+] ----- -sh target/manage.sh delete ----- - -Commands may also be stacked: -[source,bash,subs=attributes+] ----- -sh target/manage.sh delete create ----- - -== Invoke the Lambda - -Use the `invoke` command to invoke your function. - -[source,bash,subs=attributes+] ----- -sh target/manage.sh invoke ----- - -The example lambda takes input passed in via the `--payload` switch which points to a json file -in the root directory of the project. - -The lambda can also be invoked locally with the SAM CLI like this: - -[source,bash] ----- -sam local invoke --template target/sam.jvm.yaml --event payload.json ----- - -If you are working with your native image build, simply replace the template name with the native version: - -[source,bash] ----- -sam local invoke --template target/sam.native.yaml --event payload.json ----- - -== Update the Lambda - -You can update the Java code as you see fit. Once you've rebuilt, you can redeploy your lambda by executing the -`update` command. - -[source,bash,subs=attributes+] ----- -sh target/manage.sh update ----- - -== Deploy to AWS Lambda Custom (native) Runtime - -If you want a lower memory footprint and faster initialization times for your lambda, you can compile your Java -code to a native executable. Just make sure to rebuild your project with the `-Pnative` switch. - -For Linux hosts, execute: - -include::includes/devtools/build-native.adoc[] - -NOTE: If you are building on a non-Linux system, you will need to also pass in a property instructing Quarkus to use a docker build as Amazon -Lambda requires linux binaries. You can do this by passing this property to your build: -`-Dquarkus.native.container-build=true`. This requires you to have Docker installed locally, however. 
- -include::includes/devtools/build-native-container.adoc[] - -Either of these commands will compile and create a native executable image. It also generates a zip file `target/function.zip`. -This zip file contains your native executable image renamed to `bootstrap`. This is a requirement of the AWS Lambda -Custom (Provided) Runtime. - -The instructions here are exactly as above with one change: you'll need to add `native` as the first parameter to the -`manage.sh` script: - -[source,bash,subs=attributes+] ----- -sh target/manage.sh native create ----- - -As above, commands can be stacked. The only requirement is that `native` be the first parameter should you wish -to work with native image builds. The script will take care of the rest of the details necessary to manage your native -image function deployments. - -Examine the output of the `manage.sh` script if you want to learn what aws commands are executed -to create, delete, and update your lambdas. - -One thing to note about the create command for native is that the `aws lambda create-function` -call must set a specific environment variable: - -[source,bash,subs=attributes+] ----- ---environment 'Variables={DISABLE_SIGNAL_HANDLERS=true}' ----- - -== Examine the POM and Gradle build - -There is nothing special about the POM other than the inclusion of the `quarkus-amazon-lambda` extension -as a dependency. The extension automatically generates everything you might need for your lambda deployment. - -NOTE: In previous versions of this extension you had to set up your pom or gradle -to zip up your executable for native deployments, but this is not the case anymore. - -[[gradle]] -== Gradle build - -Similarly for Gradle projects, you also just have to add the `quarkus-amazon-lambda` dependency. The extension automatically generates everything you might need -for your lambda deployment. 
- -Example Gradle dependencies: - -[source,groovy] ----- -dependencies { - implementation enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}") - implementation 'io.quarkus:quarkus-resteasy' - implementation 'io.quarkus:quarkus-amazon-lambda' - - testImplementation 'io.quarkus:quarkus-junit5' - testImplementation 'io.rest-assured:rest-assured' -} ----- - - -== Live Coding and Unit/Integration Testing -To mirror the AWS Lambda environment as closely as possible in a dev environment, -the Quarkus Amazon Lambda extension boots up a mock AWS Lambda event server in Quarkus Dev and Test mode. -This mock event server simulates a true AWS Lambda environment. - -While running in Quarkus Dev Mode, you can feed events to it by doing an HTTP POST to `http://localhost:8080`. -The mock event server will receive the events and your lambda will be invoked. You can perform live coding on your lambda -and changes will automatically be recompiled and available the next invocation you make. Here's an example: - -include::includes/devtools/dev.adoc[] - -[source,bash] ----- -$ curl -d "{\"name\":\"John\"}" -X POST http://localhost:8080 ----- - -For your unit tests, you can also invoke on the mock event server using any HTTP client you want. Here's an example -using rest-assured. Quarkus starts up a separate Mock Event server under port 8081. -The default port for Rest Assured is automatically set to 8081 by Quarkus so you can invoke -on this endpoint. 
- - -[source,java] ----- -import org.junit.jupiter.api.Test; - -import io.quarkus.test.junit.QuarkusTest; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.containsString; - -@QuarkusTest -public class LambdaHandlerTest { - - @Test - public void testSimpleLambdaSuccess() throws Exception { - Person in = new Person(); - in.setName("Stu"); - given() - .contentType("application/json") - .accept("application/json") - .body(in) - .when() - .post() - .then() - .statusCode(200) - .body(containsString("Hello Stu")); - } -} ----- - -The mock event server is also started for `@NativeImageTest` unit tests so will work -with native binaries too. All this provides similar functionality to the SAM CLI local testing, without the overhead of Docker. - -Finally, if port 8080 or port 8081 is not available on your computer, you can modify the dev -and test mode ports with application.properties - -[source, subs=attributes+] ----- -quarkus.lambda.mock-event-server.dev-port=8082 -quarkus.lambda.mock-event-server.test-port=8083 ----- - -== Testing with the SAM CLI -If you do not want to use the mock event server, you can test your lambdas with SAM CLI. - -The https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html[AWS SAM CLI] -allows you to run your lambdas locally on your laptop in a simulated Lambda environment. This requires -https://www.docker.com/products/docker-desktop[docker] to be installed. This is an optional approach should you choose -to take advantage of it. Otherwise, the Quarkus JUnit integration should be sufficient for most of your needs. - -A starter template has been generated for both JVM and native execution modes. - -Run the following SAM CLI command to locally test your lambda function, passing the appropriate SAM `template`. -The `event` parameter takes any JSON file, in this case the sample `payload.json`. 
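The `payload.json` file is plain JSON matching your handler's input type. For the generated `test` handler, which takes a `Person`, it might look like this (an assumption based on the archetype's input type; adjust the fields to your own handler):

```json
{
  "name": "Bill"
}
```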
NOTE: If using Gradle, the path to the binaries in the YAML templates must be changed from `target` to `build`.

[source,bash]
----
sam local invoke --template target/sam.jvm.yaml --event payload.json
----

The native image can also be locally tested using the `sam.native.yaml` template:

[source,bash]
----
sam local invoke --template target/sam.native.yaml --event payload.json
----

== Modifying `function.zip`

There are times when you may need to add extra files to the `function.zip` lambda deployment generated
by the build. To do this, create a `zip.jvm` or `zip.native` directory within `src/main`:
`zip.jvm/` if you are doing a pure Java lambda, `zip.native/` if you are doing a native deployment.

Any files and directories you create under your zip directory will be included within `function.zip`.

== Custom `bootstrap` script

There are times you may want to set specific system properties or other arguments when lambda invokes
your native Quarkus lambda deployment. If you include a `bootstrap` script file within
`zip.native`, the Quarkus extension automatically renames the executable to `runner` within
`function.zip` and sets the unix mode of the `bootstrap` script to executable.

NOTE: The native executable must be referenced as `runner` if you include a custom `bootstrap` script.

The extension generates an example script within `target/bootstrap-example.sh`.

== Tracing with AWS X-Ray and GraalVM

If you are building native images and want to use https://aws.amazon.com/xray[AWS X-Ray Tracing] with your lambda,
you will need to include `quarkus-amazon-lambda-xray` as a dependency in your pom. The AWS X-Ray
library is not fully compatible with GraalVM, so some integration work was needed to make it work.

In addition, remember to enable the AWS X-Ray tracing parameter in `manage.sh`, in the `cmd_create()` function. This can also be set in the AWS Management Console.
[source,bash]
----
 --tracing-config Mode=Active
----

For the SAM template files, add the following to the YAML function `Properties`:

[source]
----
      Tracing: Active
----

AWS X-Ray adds many classes to your distribution, so ensure you are using at least the 256MB AWS Lambda memory size.
This is explicitly set in `manage.sh` `cmd_create()`. While the native image can potentially use a lower memory setting, it is recommended to keep the setting the same, especially to help compare performance.

[[https]]
== Using HTTPS or SSL/TLS

If your code makes HTTPS calls, such as to a micro-service (or AWS service), you will need to add configuration to the native image,
as GraalVM will only include the dependencies when explicitly declared. Quarkus, by default, enables this functionality on extensions that implicitly require it.
For further information, please consult the xref:native-and-ssl.adoc[Quarkus SSL guide].

Open `src/main/resources/application.properties` and add the following line to enable SSL in your native image:

[source,properties]
----
quarkus.ssl.native=true
----

[[aws-sdk-v2]]
== Using the AWS Java SDK v2

NOTE: Quarkus now has extensions for DynamoDB, S3, SNS and SQS (more coming).
Please check link:{amazon-services-guide}[those guides] on how to use the various AWS services with Quarkus, as opposed to wiring them manually as below.

With minimal integration, it is possible to leverage the AWS Java SDK v2,
which can be used to invoke services such as SQS, SNS, S3 and DynamoDB.

For a native image, however, the URL Connection client must be preferred over the Apache HTTP Client
when using synchronous mode, due to issues in the GraalVM compilation (at present).

Add `quarkus-jaxb` as a dependency in your Maven `pom.xml`, or Gradle `build.gradle` file.
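In Maven form, that dependency looks like the following sketch (no version is given, on the assumption that it is managed by the Quarkus platform BOM):

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jaxb</artifactId>
</dependency>
----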
You must also force your AWS service client for SQS, SNS, S3 et al. to use the URL Connection client,
which connects to AWS services over HTTPS, hence the inclusion of the SSL enabled property, as described in the <<https>> section above.

[source,java]
----
// select the appropriate client, in this case SQS, and
// insert your region, instead of XXXX, which also improves startup time over the default client
client = SqsClient.builder()
        .region(Region.XXXX)
        .httpClient(software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient.builder().build())
        .build();
----

For Maven, add the following to your `pom.xml`:

[source,xml]
----
    <properties>
        <aws.sdk2.version>2.10.69</aws.sdk2.version>
    </properties>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>software.amazon.awssdk</groupId>
                <artifactId>bom</artifactId>
                <version>${aws.sdk2.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>url-connection-client</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>software.amazon.awssdk</groupId>
                    <artifactId>apache-client</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>sqs</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>software.amazon.awssdk</groupId>
                    <artifactId>apache-client</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>software.amazon.awssdk</groupId>
                    <artifactId>netty-nio-client</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.jboss.logging</groupId>
            <artifactId>commons-logging-jboss-logging</artifactId>
        </dependency>
    </dependencies>
----

NOTE: If you see `java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty` or a similar SSL error, due to the current status of GraalVM,
there is some additional work to bundle the `function.zip`, as below. For more information, please see the xref:native-and-ssl.adoc[Quarkus Native SSL Guide].

== Additional requirements for client SSL

The native executable requires some additional steps to enable the client SSL that S3 and other AWS libraries need:

1. A custom `bootstrap` script
2. `libsunec.so` must be added to `function.zip`
3. `cacerts` must be added to `function.zip`

To do this, first create a directory `src/main/zip.native/` within your build. Next create a shell script file called `bootstrap`
within `src/main/zip.native/`, like below.
An example is created automatically in your build folder (`target` or `build`), called `bootstrap-example.sh`.

[source,bash]
----
#!/usr/bin/env bash

./runner -Djava.library.path=./ -Djavax.net.ssl.trustStore=./cacerts
----

Additionally, set `-Djavax.net.ssl.trustStorePassword=changeit` if your `cacerts` file is password protected.

Next you must copy some files from your GraalVM distribution into `src/main/zip.native/`.

NOTE: GraalVM versions can have different paths for these files, depending on whether you are using the Java 8 or 11 version. Adjust accordingly.

[source,bash]
----
cp $GRAALVM_HOME/lib/libsunec.so $PROJECT_DIR/src/main/zip.native/
cp $GRAALVM_HOME/lib/security/cacerts $PROJECT_DIR/src/main/zip.native/
----

Now when you run the native build, all these files will be included within `function.zip`.

NOTE: If you are using a Docker image to build, then you must extract these files from that image.

To extract the required SSL artifacts, you must start up a Docker container in the background, and attach to that container to copy them.

First, let's start the GraalVM container, noting the container id output:

[source,bash,subs=attributes+]
----
docker run -it -d --entrypoint bash quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}

# This will output a container id, like 6304eea6179522aff69acb38eca90bedfd4b970a5475aa37ccda3585bc2abdde
# Note this value as we will need it for the commands below
----

Next, copy `libsunec.so`, the C library used for the SSL implementation:

[source,bash]
----
docker cp {container-id-from-above}:/opt/graalvm/lib/libsunec.so src/main/zip.native/
----

Second, `cacerts`, the certificate store. You may need to periodically obtain an updated copy, also.
[source,bash]
----
docker cp {container-id-from-above}:/opt/graalvm/lib/security/cacerts src/main/zip.native/
----

Your final archive will look like this:

[source,bash]
----
jar tvf target/function.zip

  bootstrap
  runner
  cacerts
  libsunec.so
----

== Amazon Alexa Integration

To use Alexa with Quarkus native, you need to use the https://github.com/quarkiverse/quarkus-amazon-alexa[Quarkus Amazon Alexa extension hosted at the Quarkiverse Hub].

[source,xml]
----
<dependency>
    <groupId>io.quarkiverse.alexa</groupId>
    <artifactId>quarkus-amazon-alexa</artifactId>
    <version>${quarkus-amazon-alexa.version}</version> <1>
</dependency>
----
<1> Define the latest version of the extension in your POM file.

Create your Alexa handler, as normal, by sub-classing the abstract `com.amazon.ask.SkillStreamHandler`, and add your request handler implementation.

That's all there is to it!
diff --git a/_versions/2.7/guides/amqp-dev-services.adoc b/_versions/2.7/guides/amqp-dev-services.adoc
deleted file mode 100644
index 72c11b83be2..00000000000
--- a/_versions/2.7/guides/amqp-dev-services.adoc
+++ /dev/null
@@ -1,53 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Dev Services for AMQP

include::./attributes.adoc[]

Dev Services for AMQP automatically starts an AMQP 1.0 broker in dev mode and when running tests.
So, you don't have to start a broker manually.
The application is configured automatically.

== Enabling / Disabling Dev Services for AMQP

Dev Services for AMQP is automatically enabled unless:

- `quarkus.amqp.devservices.enabled` is set to `false`
- the `amqp-host` or `amqp-port` is configured
- all the Reactive Messaging AMQP channels have the `host` or `port` attributes set

Dev Services for AMQP relies on Docker to start the broker.
-If your environment does not support Docker, you will need to start the broker manually, or connect to an already running broker. -You can configure the broker access using the `amqp-host`, `amqp-port`, `amqp-user` and `amqp-password` properties. - -== Shared broker - -Most of the time you need to share the broker between applications. -Dev Services for AMQP implements a _service discovery_ mechanism for your multiple Quarkus applications running in _dev_ mode to share a single broker. - -NOTE: Dev Services for AMQP starts the container with the `quarkus-dev-service-amqp` label which is used to identify the container. - -If you need multiple (shared) brokers, you can configure the `quarkus.amqp.devservices.service-name` attribute and indicate the broker name. -It looks for a container with the same value, or starts a new one if none can be found. -The default service name is `amqp`. - -Sharing is enabled by default in dev mode, but disabled in test mode. -You can disable the sharing with `quarkus.amqp.devservices.shared=false`. - -== Setting the port - -By default, Dev Services for AMQP picks a random port and configures the application. -You can set the port by configuring the `quarkus.amqp.devservices.port` property. - -== Configuring the image - -Dev Services for AMQP uses https://quay.io/repository/artemiscloud/activemq-artemis-broker[activemq-artemis-broker] images. 
You can configure the image and version using the `quarkus.amqp.devservices.image-name` property:

[source, properties]
----
quarkus.amqp.devservices.image-name=quay.io/artemiscloud/activemq-artemis-broker:latest
----
\ No newline at end of file
diff --git a/_versions/2.7/guides/amqp-reference.adoc b/_versions/2.7/guides/amqp-reference.adoc
deleted file mode 100644
index 45b692f43b1..00000000000
--- a/_versions/2.7/guides/amqp-reference.adoc
+++ /dev/null
@@ -1,802 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Reactive Messaging AMQP 1.0 Connector Reference Documentation

include::./attributes.adoc[]

This guide is the companion to the xref:amqp.adoc[Getting Started with AMQP 1.0] guide.
It explains in more detail the configuration and usage of the AMQP connector for Reactive Messaging.

TIP: This documentation does not cover all the details of the connector.
Refer to the https://smallrye.io/smallrye-reactive-messaging[SmallRye Reactive Messaging website] for further details.

The AMQP connector allows Quarkus applications to send and receive messages using the AMQP 1.0 protocol.
More details about the protocol can be found in http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-overview-v1.0-os.html[the AMQP 1.0 specification].
It's important to note that AMQP 1.0 and AMQP 0.9.1 (implemented by RabbitMQ) are incompatible.
Check the Using RabbitMQ section below to get more details.

== AMQP connector extension

To use the connector, you need to add the `quarkus-smallrye-reactive-messaging-amqp` extension.
You can add the extension to your project using:

:add-extension-extensions: quarkus-smallrye-reactive-messaging-amqp
include::includes/devtools/extension-add.adoc[]

Or just add the following dependency to your project:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-reactive-messaging-amqp</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-smallrye-reactive-messaging-amqp")
----

Once added to your project, you can map _channels_ to AMQP addresses by configuring the `connector` attribute:

[source, properties]
----
# Inbound
mp.messaging.incoming.[channel-name].connector=smallrye-amqp

# Outbound
mp.messaging.outgoing.[channel-name].connector=smallrye-amqp
----

[TIP]
.Connector auto-attachment
====
If you have a single connector on your classpath, you can omit the `connector` attribute configuration.
Quarkus automatically associates _orphan_ channels to the (unique) connector found on the classpath.
_Orphan_ channels are outgoing channels without a downstream consumer or incoming channels without an upstream producer.

This auto-attachment can be disabled using:

[source, properties]
----
quarkus.reactive-messaging.auto-connector-attachment=false
----
====

== Configuring the AMQP Broker access

The AMQP connector connects to AMQP 1.0 brokers such as Apache ActiveMQ or Artemis.
To configure the location and credentials of the broker, add the following properties in the `application.properties`:

[source, properties]
----
amqp-host=amqp # <1>
amqp-port=5672 # <2>
amqp-username=my-username # <3>
amqp-password=my-password # <4>

mp.messaging.incoming.prices.connector=smallrye-amqp # <5>
----
<1> Configures the broker/router host name.
You can do it per channel (using the `host` attribute) or globally using `amqp-host`.
<2> Configures the broker/router port. You can do it per channel (using the `port` attribute) or globally using `amqp-port`. The default is `5672`.
<3> Configures the broker/router username if required. You can do it per channel (using the `username` attribute) or globally using `amqp-username`.
<4> Configures the broker/router password if required. You can do it per channel (using the `password` attribute) or globally using `amqp-password`.
<5> Instructs the `prices` channel to be managed by the AMQP connector

In dev mode and when running tests, xref:amqp-dev-services.adoc[Dev Services for AMQP] automatically starts an AMQP broker.

== Receiving AMQP messages

Let's imagine your application receives `Message<Double>`.
You can consume the payload directly:

[source, java]
----
package inbound;

import org.eclipse.microprofile.reactive.messaging.Incoming;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class AmqpPriceConsumer {

    @Incoming("prices")
    public void consume(double price) {
        // process your price.
    }

}
----

Or, you can retrieve the `Message<Double>`:

[source, java]
----
package inbound;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Message;

import javax.enterprise.context.ApplicationScoped;
import java.util.concurrent.CompletionStage;

@ApplicationScoped
public class AmqpPriceMessageConsumer {

    @Incoming("prices")
    public CompletionStage<Void> consume(Message<Double> price) {
        // process your price.

        // Acknowledge the incoming message, marking the AMQP message as `accepted`.
        return price.ack();
    }

}
----

=== Inbound Metadata

Messages coming from AMQP contain an instance of `IncomingAmqpMetadata` in the metadata.
[source, java]
----
Optional<IncomingAmqpMetadata> metadata = incoming.getMetadata(IncomingAmqpMetadata.class);
metadata.ifPresent(meta -> {
    String address = meta.getAddress();
    String subject = meta.getSubject();
    boolean durable = meta.isDurable();
    // Use io.vertx.core.json.JsonObject
    JsonObject properties = meta.getProperties();
    // ...
});
----

=== Deserialization

The connector converts incoming AMQP Messages into Reactive Messaging `Message<T>` instances.
`T` depends on the _body_ of the received AMQP Message.

The http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html[AMQP Type System] defines the supported types.

[options="header"]
|===
| AMQP Body Type | `T`
| AMQP Value containing a http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html#section-primitive-type-definitions[AMQP Primitive Type] | the corresponding Java type
| AMQP Value using the `Binary` type | `byte[]`
| AMQP Sequence | `List`
| AMQP Data (with binary content) and the `content-type` is set to `application/json` | https://vertx.io/docs/apidocs/io/vertx/core/json/JsonObject.html[`JsonObject`]
| AMQP Data with a different `content-type` | `byte[]`
|===

If you send objects with this AMQP connector (outbound connector), they get encoded as JSON and sent as binary.
The `content-type` is set to `application/json`.
So, you can rebuild the object as follows:

[source, java]
----
import io.vertx.core.json.JsonObject;
//
@ApplicationScoped
public static class Consumer {

    List<Price> prices = new CopyOnWriteArrayList<>();

    @Incoming("from-amqp") // <1>
    public void consume(JsonObject p) { // <2>
        Price price = p.mapTo(Price.class); // <3>
        prices.add(price);
    }

    public List<Price> list() {
        return prices;
    }
}
----
<1> The `Price` instances are automatically encoded to JSON by the connector
<2> You can receive it using a `JsonObject`
<3> Then, you can reconstruct the instance using the `mapTo` method

NOTE: The `mapTo` method uses the Quarkus Jackson mapper. Check xref:rest-json.adoc#json[this guide] to learn more about the mapper configuration.

=== Acknowledgement

When a Reactive Messaging Message associated with an AMQP Message is acknowledged, it informs the broker that the message has been http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-accepted[accepted].

=== Failure Management

If a message produced from an AMQP message is _nacked_, a failure strategy is applied.
The AMQP connector supports six strategies:

* `fail` - fail the application; no more AMQP messages will be processed (default).
The AMQP message is marked as rejected.
* `accept` - this strategy marks the AMQP message as _accepted_. The processing continues, ignoring the failure.
Refer to the http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-accepted[accepted delivery state documentation].
* `release` - this strategy marks the AMQP message as _released_. The processing continues with the next message. The broker can redeliver the message.
Refer to the http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-released[released delivery state documentation].
* `reject` - this strategy marks the AMQP message as rejected. The processing continues with the next message.
Refer to the http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-rejected[rejected delivery state documentation].
* `modified-failed` - this strategy marks the AMQP message as _modified_ and indicates that it failed (with the `delivery-failed` attribute). The processing continues with the next message, but the broker may attempt to redeliver the message.
Refer to the http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-modified[modified delivery state documentation].
* `modified-failed-undeliverable-here` - this strategy marks the AMQP message as _modified_ and indicates that it failed (with the `delivery-failed` attribute). It also indicates that the application cannot process the message, meaning that the broker will not attempt to redeliver the message to this node. The processing continues with the next message.
Refer to the http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-modified[modified delivery state documentation].

== Sending AMQP messages

=== Serialization

When sending a `Message<T>`, the connector converts the message into an AMQP Message.
The payload is converted to the AMQP Message _body_.

[options=header]
|===
| `T` | AMQP Message Body
| primitive types or `String` | AMQP Value with the payload
| `Instant` or `UUID` | AMQP Value using the corresponding AMQP Type
| https://vertx.io/docs/apidocs/io/vertx/core/json/JsonObject.html[`JsonObject`] or https://vertx.io/docs/apidocs/io/vertx/core/json/JsonArray.html[`JsonArray`] | AMQP Data using a binary content. The `content-type` is set to `application/json`
| `io.vertx.mutiny.core.buffer.Buffer` | AMQP Data using a binary content. No `content-type` set
| Any other class | The payload is converted to JSON (using a Json Mapper). The result is wrapped into AMQP Data using a **binary** content.
The `content-type` is set to `application/json`
|===

If the message payload cannot be serialized to JSON, the message is _nacked_.

=== Outbound Metadata

When sending `Messages`, you can add an instance of `OutgoingAmqpMetadata` to influence how the message is going to be sent to AMQP.
For example, you can configure the subject and properties:

[source, java]
----
OutgoingAmqpMetadata metadata = OutgoingAmqpMetadata.builder()
    .withDurable(true)
    .withSubject("my-subject")
    .build();

// Create a new message from the `incoming` message
// Add `metadata` to the metadata from the `incoming` message.
return incoming.addMetadata(metadata);
----

=== Dynamic address names

Sometimes it is desirable to select the destination of a message dynamically.
In this case, you should not configure the address inside your application configuration file, but instead use the outbound metadata to set the address.

For example, you can send to a dynamic address based on the incoming message:

[source, java]
----
String addressName = selectAddressFromIncomingMessage(incoming);
OutgoingAmqpMetadata metadata = OutgoingAmqpMetadata.builder()
    .withAddress(addressName)
    .withDurable(true)
    .build();

// Create a new message from the `incoming` message
// Add `metadata` to the metadata from the `incoming` message.
return incoming.addMetadata(metadata);
----

NOTE: To be able to set the address per message, the connector uses an _anonymous sender_.

=== Acknowledgement

By default, the Reactive Messaging `Message` is acknowledged when the broker acknowledges the message.
When using routers, this acknowledgement may not be enabled.
In this case, configure the `auto-acknowledgement` attribute to acknowledge the message as soon as it has been sent to the router.

If an AMQP message is rejected/released/modified by the broker (or cannot be sent successfully), the message is nacked.
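For example, assuming an outgoing channel named `prices` that goes through a router (the channel name is illustrative), the `auto-acknowledgement` attribute described above is set like any other channel attribute:

[source, properties]
----
mp.messaging.outgoing.prices.connector=smallrye-amqp
mp.messaging.outgoing.prices.auto-acknowledgement=true
----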
- -=== Back Pressure and Credits - -The back-pressure is handled by AMQP _credits_. -The outbound connector only requests the amount of allowed credits. -When the amount of credits reaches 0, it waits (in a non-blocking fashion) until the broker grants more credits to the AMQP sender. - -== Configuring the AMQP address - -You can configure the AMQP address using the `address` attribute: - -[source, properties] ----- -mp.messaging.incoming.prices.connector=smallrye-amqp -mp.messaging.incoming.prices.address=my-queue - -mp.messaging.outgoing.orders.connector=smallrye-amqp -mp.messaging.outgoing.orders.address=my-order-queue ----- - -If the `address` attribute is not set, the connector uses the channel name. - -To use an existing queue, you need to configure the `address`, `container-id` and, optionally, the `link-name` attributes. -For example, if you have an Apache Artemis broker configured with: - -[source, xml] ----- - - -
<queue name="people">
   <durable>true</durable>
   <user>artemis</user>
</queue>
----- - -You need the following configuration: -[source, properties] ----- -mp.messaging.outgoing.people.connector=smallrye-amqp -mp.messaging.outgoing.people.durable=true -mp.messaging.outgoing.people.address=people -mp.messaging.outgoing.people.container-id=people ----- - -You may need to configure the `link-name` attribute, if the queue name is not the channel name: - -[source, properties] ----- -mp.messaging.outgoing.people-out.connector=smallrye-amqp -mp.messaging.outgoing.people-out.durable=true -mp.messaging.outgoing.people-out.address=people -mp.messaging.outgoing.people-out.container-id=people -mp.messaging.outgoing.people-out.link-name=people ----- - -To use a `MULTICAST` queue, you need to provide the _FQQN_ (fully-qualified queue name) instead of just the name of the queue: - -[source, properties] ----- -mp.messaging.outgoing.people-out.connector=smallrye-amqp -mp.messaging.outgoing.people-out.durable=true -mp.messaging.outgoing.people-out.address=foo -mp.messaging.outgoing.people-out.container-id=foo - -mp.messaging.incoming.people-out.connector=smallrye-amqp -mp.messaging.incoming.people-out.durable=true -mp.messaging.incoming.people-out.address=foo::bar # Note the syntax: address-name::queue-name -mp.messaging.incoming.people-out.container-id=bar -mp.messaging.incoming.people-out.link-name=people ----- - -More details about the AMQP Address model can be found in the https://activemq.apache.org/components/artemis/documentation/2.0.0/address-model.html[Artemis documentation]. - -[#blocking-processing] -=== Execution model and Blocking processing - -Reactive Messaging invokes your method on an I/O thread. -See the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture documentation] for further details on this topic. -But, you often need to combine Reactive Messaging with blocking processing such as database interactions. 
For this, you need to use the `@Blocking` annotation, indicating that the processing is _blocking_ and should not be run on the caller thread.

For example, the following code illustrates how you can store incoming payloads to a database using Hibernate with Panache:

[source,java]
----
import io.smallrye.reactive.messaging.annotations.Blocking;
import org.eclipse.microprofile.reactive.messaging.Incoming;

import javax.enterprise.context.ApplicationScoped;
import javax.transaction.Transactional;

@ApplicationScoped
public class PriceStorage {

    @Incoming("prices")
    @Transactional
    public void store(int priceInUsd) {
        Price price = new Price();
        price.value = priceInUsd;
        price.persist();
    }

}
----

[NOTE]
====
There are 2 `@Blocking` annotations:

1. `io.smallrye.reactive.messaging.annotations.Blocking`
2. `io.smallrye.common.annotation.Blocking`

They have the same effect.
Thus, you can use both.
The first one provides more fine-grained tuning, such as the worker pool to use and whether it preserves the order.
The second one, also used with other reactive features of Quarkus, uses the default worker pool and preserves the order.
====

[TIP]
.@Transactional
====
If your method is annotated with `@Transactional`, it will be considered _blocking_ automatically, even if the method is not annotated with `@Blocking`.
====

== Customizing the underlying AMQP client

The connector uses the Vert.x AMQP client underneath.
More details about this client can be found on the https://vertx.io/docs/vertx-amqp-client/java/[Vert.x website].
You can customize the underlying client configuration by producing an instance of `AmqpClientOptions` as follows:

[source, java]
----
@Produces
@Identifier("my-named-options")
public AmqpClientOptions getNamedOptions() {
    // You can use the produced options to configure the TLS connection
    PemKeyCertOptions keycert = new PemKeyCertOptions()
            .addCertPath("./tls/tls.crt")
            .addKeyPath("./tls/tls.key");
    PemTrustOptions trust = new PemTrustOptions().addCertPath("./tlc/ca.crt");
    return new AmqpClientOptions()
            .setSsl(true)
            .setPemKeyCertOptions(keycert)
            .setPemTrustOptions(trust)
            .addEnabledSaslMechanism("EXTERNAL")
            .setHostnameVerificationAlgorithm("")
            .setConnectTimeout(30000)
            .setReconnectInterval(5000)
            .setContainerId("my-container");
}
----

This instance is retrieved and used to configure the client used by the connector.
You need to indicate the name of the client using the `client-options-name` attribute:

[source, properties]
----
mp.messaging.incoming.prices.client-options-name=my-named-options
----

== Health reporting

If you use the AMQP connector with the `quarkus-smallrye-health` extension, it contributes to the readiness and liveness probes.
The AMQP connector reports the readiness and liveness of each channel managed by the connector.
At the moment, the AMQP connector uses the same logic for the readiness and liveness checks.

To disable health reporting, set the `health-enabled` attribute for the channel to `false`.
On the inbound side (receiving messages from AMQP), the check verifies that the receiver is attached to the broker.
On the outbound side (sending records to AMQP), the check verifies that the sender is attached to the broker.

Note that a message processing failure nacks the message, which is then handled by the `failure-strategy`.
It is the responsibility of the `failure-strategy` to report the failure and influence the outcome of the checks.
The `fail` failure strategy reports the failure, and so the check will report the fault.

== Using RabbitMQ

This connector is for AMQP 1.0. RabbitMQ implements AMQP 0.9.1.
RabbitMQ does not provide AMQP 1.0 by default, but there is a plugin for it.
To use RabbitMQ with this connector, enable and configure the AMQP 1.0 plugin.

Despite the existence of the plugin, a few AMQP 1.0 features won't work with RabbitMQ.
Thus, we recommend the following configurations.

To receive messages from RabbitMQ:

* Set `durable` to `false`

[source, properties]
----
mp.messaging.incoming.prices.connector=smallrye-amqp
mp.messaging.incoming.prices.durable=false
----

To send messages to RabbitMQ:

* set the destination address (anonymous senders are not supported)
* set `use-anonymous-sender` to `false`

[source, properties]
----
mp.messaging.outgoing.generated-price.connector=smallrye-amqp
mp.messaging.outgoing.generated-price.address=prices
mp.messaging.outgoing.generated-price.use-anonymous-sender=false
----

As a consequence, it's not possible to change the destination dynamically (using message metadata) when using RabbitMQ.

== Receiving Cloud Events

The AMQP connector supports https://cloudevents.io/[Cloud Events].
When the connector detects a _structured_ or _binary_ Cloud Event, it adds an `IncomingCloudEventMetadata` into the metadata of the `Message`.
`IncomingCloudEventMetadata` contains accessors to the mandatory and optional Cloud Event attributes.

If the connector cannot extract the Cloud Event metadata, it sends the Message without the metadata.

For more information on receiving Cloud Events, see https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.12/amqp/amqp.html#_receiving_cloud_events[Receiving Cloud Events] in the SmallRye Reactive Messaging documentation.

=== Sending Cloud Events

The AMQP connector supports https://cloudevents.io/[Cloud Events].
The connector sends the outbound record as Cloud Events if:

* the message metadata contains an `io.smallrye.reactive.messaging.ce.OutgoingCloudEventMetadata` instance,
* the channel configuration defines the `cloud-events-type` and `cloud-events-source` attributes.

For more information on sending Cloud Events, see https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.12/amqp/amqp.html#_sending_cloud_events[Sending Cloud Events] in the SmallRye Reactive Messaging documentation.

[[configuration-reference]]
== AMQP Connector Configuration Reference

=== Quarkus specific configuration

include::{generated-dir}/config/quarkus-smallrye-reactivemessaging-amqp.adoc[opts=optional, leveloffset=+1]

=== Incoming channel configuration

[cols="25, 30, 15, 20",options="header"]
|===
|Attribute (_alias_) | Description | Mandatory | Default

| [.no-hyphens]#*address*# | The AMQP address. If not set, the channel name is used

Type: _string_ | false |

| [.no-hyphens]#*auto-acknowledgement*# | Whether the received AMQP messages must be acknowledged when received

Type: _boolean_ | false | `false`

| [.no-hyphens]#*broadcast*# | Whether the received AMQP messages must be dispatched to multiple _subscribers_

Type: _boolean_ | false | `false`

| [.no-hyphens]#*client-options-name*#

[.no-hyphens]#_(amqp-client-options-name)_# | The name of the AMQP Client Option bean used to customize the AMQP client configuration

Type: _string_ | false |

| [.no-hyphens]#*cloud-events*# | Enables (default) or disables the Cloud Event support. If enabled on an _incoming_ channel, the connector analyzes the incoming records and tries to create Cloud Event metadata. If enabled on an _outgoing_ channel, the connector sends the outgoing messages as Cloud Events if the message includes Cloud Event Metadata.
-
-Type: _boolean_ | false | `true`
-
-| [.no-hyphens]#*connect-timeout*#
-
-[.no-hyphens]#_(amqp-connect-timeout)_# | The connection timeout in milliseconds
-
-Type: _int_ | false | `1000`
-
-| [.no-hyphens]#*container-id*# | The AMQP container id
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*durable*# | Whether the AMQP subscription is durable
-
-Type: _boolean_ | false | `false`
-
-| [.no-hyphens]#*failure-strategy*# | Specify the failure strategy to apply when a message produced from an AMQP message is nacked. Accepted values are `fail` (default), `accept`, `release`, `reject`, `modified-failed`, `modified-failed-undeliverable-here`
-
-Type: _string_ | false | `fail`
-
-| [.no-hyphens]#*health-timeout*# | The max number of seconds to wait to determine if the connection with the broker is still established for the readiness check. After that threshold, the check is considered failed.
-
-Type: _int_ | false | `3`
-
-| [.no-hyphens]#*host*#
-
-[.no-hyphens]#_(amqp-host)_# | The broker hostname
-
-Type: _string_ | false | `localhost`
-
-| [.no-hyphens]#*link-name*# | The name of the link. If not set, the channel name is used.
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*password*#
-
-[.no-hyphens]#_(amqp-password)_# | The password used to authenticate to the broker
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*port*#
-
-[.no-hyphens]#_(amqp-port)_# | The broker port
-
-Type: _int_ | false | `5672`
-
-| [.no-hyphens]#*reconnect-attempts*#
-
-[.no-hyphens]#_(amqp-reconnect-attempts)_# | The number of reconnection attempts
-
-Type: _int_ | false | `100`
-
-| [.no-hyphens]#*reconnect-interval*#
-
-[.no-hyphens]#_(amqp-reconnect-interval)_# | The interval in seconds between two reconnection attempts
-
-Type: _int_ | false | `10`
-
-| [.no-hyphens]#*sni-server-name*#
-
-[.no-hyphens]#_(amqp-sni-server-name)_# | If set, explicitly override the hostname to use for the TLS SNI server name
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*tracing-enabled*# | Whether tracing is enabled (default) or disabled
-
-Type: _boolean_ | false | `true`
-
-| [.no-hyphens]#*use-ssl*#
-
-[.no-hyphens]#_(amqp-use-ssl)_# | Whether the AMQP connection uses SSL/TLS
-
-Type: _boolean_ | false | `false`
-
-| [.no-hyphens]#*username*#
-
-[.no-hyphens]#_(amqp-username)_# | The username used to authenticate to the broker
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*virtual-host*#
-
-[.no-hyphens]#_(amqp-virtual-host)_# | If set, configure the hostname value used for the connection AMQP Open frame and TLS SNI server name (if TLS is in use)
-
-Type: _string_ | false |
-
-|===
-
-
-=== Outgoing channel configuration
-
-[cols="25, 30, 15, 20",options="header"]
-|===
-|Attribute (_alias_) | Description | Mandatory | Default
-
-| [.no-hyphens]#*address*# | The AMQP address. 
If not set, the channel name is used
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*client-options-name*#
-
-[.no-hyphens]#_(amqp-client-options-name)_# | The name of the AMQP Client Option bean used to customize the AMQP client configuration
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*cloud-events*# | Enables (default) or disables the Cloud Event support. If enabled on an _incoming_ channel, the connector analyzes the incoming records and tries to create Cloud Event metadata. If enabled on an _outgoing_ channel, the connector sends the outgoing messages as a Cloud Event if the message includes Cloud Event metadata.
-
-Type: _boolean_ | false | `true`
-
-| [.no-hyphens]#*cloud-events-data-content-type*#
-
-[.no-hyphens]#_(cloud-events-default-data-content-type)_# | Configure the default `datacontenttype` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `datacontenttype` attribute itself
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*cloud-events-data-schema*#
-
-[.no-hyphens]#_(cloud-events-default-data-schema)_# | Configure the default `dataschema` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `dataschema` attribute itself
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*cloud-events-insert-timestamp*#
-
-[.no-hyphens]#_(cloud-events-default-timestamp)_# | Whether the connector should automatically insert the `time` attribute into the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `time` attribute itself
-
-Type: _boolean_ | false | `true`
-
-| [.no-hyphens]#*cloud-events-mode*# | The Cloud Event mode (`structured` or `binary` (default)). 
Indicates how the Cloud Events are written in the outgoing record
-
-Type: _string_ | false | `binary`
-
-| [.no-hyphens]#*cloud-events-source*#
-
-[.no-hyphens]#_(cloud-events-default-source)_# | Configure the default `source` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `source` attribute itself
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*cloud-events-subject*#
-
-[.no-hyphens]#_(cloud-events-default-subject)_# | Configure the default `subject` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `subject` attribute itself
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*cloud-events-type*#
-
-[.no-hyphens]#_(cloud-events-default-type)_# | Configure the default `type` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `type` attribute itself
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*connect-timeout*#
-
-[.no-hyphens]#_(amqp-connect-timeout)_# | The connection timeout in milliseconds
-
-Type: _int_ | false | `1000`
-
-| [.no-hyphens]#*container-id*# | The AMQP container id
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*credit-retrieval-period*# | The period (in milliseconds) between two attempts to retrieve the credits granted by the broker. This time is used when the sender runs out of credits.
-
-Type: _int_ | false | `2000`
-
-| [.no-hyphens]#*durable*# | Whether sent AMQP messages are marked durable
-
-Type: _boolean_ | false | `false`
-
-| [.no-hyphens]#*health-timeout*# | The max number of seconds to wait to determine if the connection with the broker is still established for the readiness check. After that threshold, the check is considered failed.
-
-Type: _int_ | false | `3`
-
-| [.no-hyphens]#*host*#
-
-[.no-hyphens]#_(amqp-host)_# | The broker hostname
-
-Type: _string_ | false | `localhost`
-
-| [.no-hyphens]#*link-name*# | The name of the link. If not set, the channel name is used.
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*merge*# | Whether the connector should allow multiple upstreams
-
-Type: _boolean_ | false | `false`
-
-| [.no-hyphens]#*password*#
-
-[.no-hyphens]#_(amqp-password)_# | The password used to authenticate to the broker
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*port*#
-
-[.no-hyphens]#_(amqp-port)_# | The broker port
-
-Type: _int_ | false | `5672`
-
-| [.no-hyphens]#*reconnect-attempts*#
-
-[.no-hyphens]#_(amqp-reconnect-attempts)_# | The number of reconnection attempts
-
-Type: _int_ | false | `100`
-
-| [.no-hyphens]#*reconnect-interval*#
-
-[.no-hyphens]#_(amqp-reconnect-interval)_# | The interval in seconds between two reconnection attempts
-
-Type: _int_ | false | `10`
-
-| [.no-hyphens]#*sni-server-name*#
-
-[.no-hyphens]#_(amqp-sni-server-name)_# | If set, explicitly override the hostname to use for the TLS SNI server name
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*tracing-enabled*# | Whether tracing is enabled (default) or disabled
-
-Type: _boolean_ | false | `true`
-
-| [.no-hyphens]#*ttl*# | The time-to-live of the sent AMQP messages. 0 to disable the TTL
-
-Type: _long_ | false | `0`
-
-| [.no-hyphens]#*use-anonymous-sender*# | Whether or not the connector should use an anonymous sender. Default value is `true` if the broker supports it, `false` otherwise. If not supported, it is not possible to dynamically change the destination address.
-
-Type: _boolean_ | false |
-
-| [.no-hyphens]#*use-ssl*#
-
-[.no-hyphens]#_(amqp-use-ssl)_# | Whether the AMQP connection uses SSL/TLS
-
-Type: _boolean_ | false | `false`
-
-| [.no-hyphens]#*username*#
-
-[.no-hyphens]#_(amqp-username)_# | The username used to authenticate to the broker
-
-Type: _string_ | false |
-
-| [.no-hyphens]#*virtual-host*#
-
-[.no-hyphens]#_(amqp-virtual-host)_# | If set, configure the hostname value used for the connection AMQP Open frame and TLS SNI server name (if TLS is in use)
-
-Type: _string_ | false |
-
-|===
diff --git a/_versions/2.7/guides/amqp.adoc b/_versions/2.7/guides/amqp.adoc
deleted file mode 100644
index d8363b4839d..00000000000
--- a/_versions/2.7/guides/amqp.adoc
+++ /dev/null
@@ -1,454 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Getting Started with SmallRye Reactive Messaging and AMQP 1.0
-
-include::./attributes.adoc[]
-
-This guide demonstrates how your Quarkus application can utilize SmallRye Reactive Messaging to interact with AMQP 1.0.
-
-IMPORTANT: If you want to use RabbitMQ, you should use the xref:rabbitmq.adoc[SmallRye Reactive Messaging RabbitMQ extension].
-Alternatively, if you want to use RabbitMQ with AMQP 1.0, you need to enable the AMQP 1.0 plugin in the RabbitMQ broker;
-check the https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.9/amqp/amqp.html#amqp-rabbitmq[connecting to RabbitMQ]
-documentation.
-
-== Prerequisites
-
-:prerequisites-docker-compose:
-include::includes/devtools/prerequisites.adoc[]
-
-== Architecture
-
-In this guide, we are going to develop two applications communicating with an AMQP broker.
-We will use https://activemq.apache.org/components/artemis/[Artemis], but you can use any AMQP 1.0 broker.
-The first application sends a _quote request_ to an AMQP queue and consumes messages from the _quote_ queue.
-The second application receives the _quote request_ and sends a _quote_ back.
-
-image::amqp-qs-architecture.png[alt=Architecture, align=center,width=80%]
-
-The first application, the `producer`, will let the user request some quotes over an HTTP endpoint.
-For each quote request, a random identifier is generated and returned to the user, to mark the quote request as _pending_.
-At the same time, the generated request id is sent over the `quote-requests` queue.
-
-image::amqp-qs-app-screenshot.png[alt=Producer App UI, align=center]
-
-The second application, the `processor`, will in turn read from the `quote-requests` queue, put a random price on the quote, and send it to a queue named `quotes`.
-
-Lastly, the `producer` will read the quotes and send them to the browser using server-sent events.
-The user will therefore see the quote price updated from _pending_ to the received price in real time.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the applications step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `amqp-quickstart` {quickstarts-tree-url}/amqp-quickstart[directory].
-
-== Creating the Maven Project
-
-First, we need to create two projects: the _producer_ and the _processor_.
-
-To create the _producer_ project, in a terminal run:
-
-:create-app-artifact-id: amqp-quickstart-producer
-:create-app-extensions: resteasy-reactive-jackson,smallrye-reactive-messaging-amqp
-:create-app-post-command:
-include::includes/devtools/create-app.adoc[]
-
-This command creates the project structure and selects the two Quarkus extensions we will be using:
-
-1. RESTEasy Reactive and its Jackson support to handle JSON payloads
-2. 
The Reactive Messaging AMQP connector - -To create the _processor_ project, from the same directory, run: - -:create-app-artifact-id: amqp-quickstart-processor -:create-app-extensions: smallrye-reactive-messaging-amqp -:create-app-post-command: -include::includes/devtools/create-app.adoc[] - -At that point you should have the following structure: - -[source, text] ----- -. -├── amqp-quickstart-processor -│ ├── README.md -│ ├── mvnw -│ ├── mvnw.cmd -│ ├── pom.xml -│ └── src -│ └── main -│ ├── docker -│ ├── java -│ └── resources -│ └── application.properties -└── amqp-quickstart-producer - ├── README.md - ├── mvnw - ├── mvnw.cmd - ├── pom.xml - └── src - └── main - ├── docker - ├── java - └── resources - └── application.properties ----- - -Open the two projects in your favorite IDE. - -== The Quote object - -The `Quote` class will be used in both `producer` and `processor` projects. -For the sake of simplicity we will duplicate the class. -In both projects, create the `src/main/java/org/acme/amqp/model/Quote.java` file, with the following content: - -[source,java] ----- -package org.acme.amqp.model; - -import io.quarkus.runtime.annotations.RegisterForReflection; - -@RegisterForReflection -public class Quote { - - public String id; - public int price; - - /** - * Default constructor required for Jackson serializer - */ - public Quote() { } - - public Quote(String id, int price) { - this.id = id; - this.price = price; - } - - @Override - public String toString() { - return "Quote{" + - "id='" + id + '\'' + - ", price=" + price + - '}'; - } -} ----- - -JSON representation of `Quote` objects will be used in messages sent to the AMQP queues -and also in the server-sent events sent to browser clients. - -Quarkus has built-in capabilities to deal with JSON AMQP messages. - -[NOTE] -.@RegisterForReflection -==== -The `@RegisterForReflection` annotation instructs Quarkus to include the class (including fields and methods) when building the native executable. 
-This will be useful later when we run the applications as native executables inside containers.
-Without it, the native compilation would remove the fields and methods during the dead-code elimination phase.
-====
-
-== Sending quote request
-
-Inside the `producer` project, locate the generated `src/main/java/org/acme/amqp/producer/QuotesResource.java` file, and update the content to be:
-
-[source,java]
-----
-package org.acme.amqp.producer;
-
-import java.util.UUID;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-import org.acme.amqp.model.Quote;
-import org.eclipse.microprofile.reactive.messaging.Channel;
-import org.eclipse.microprofile.reactive.messaging.Emitter;
-
-import io.smallrye.mutiny.Multi;
-
-@Path("/quotes")
-public class QuotesResource {
-
-    @Channel("quote-requests") Emitter<String> quoteRequestEmitter; // <1>
-
-    /**
-     * Endpoint to generate a new quote request id and send it to the "quote-requests" AMQP queue using the emitter.
-     */
-    @POST
-    @Path("/request")
-    @Produces(MediaType.TEXT_PLAIN)
-    public String createRequest() {
-        UUID uuid = UUID.randomUUID();
-        quoteRequestEmitter.send(uuid.toString()); // <2>
-        return uuid.toString();
-    }
-}
-----
-<1> Inject a Reactive Messaging `Emitter` to send messages to the `quote-requests` channel.
-<2> On a POST request, generate a random UUID and send it to the AMQP queue using the emitter.
-
-The `quote-requests` channel is going to be managed as an AMQP queue, as that's the only connector on the classpath.
-If not indicated otherwise, as in this example, Quarkus uses the channel name as the AMQP queue name.
-So, in this example, the application sends messages to the `quote-requests` queue.
-
-TIP: When you have multiple connectors, you would need to indicate which connector you want to use in the application configuration.
-
-== Processing quote requests
-
-Now let's consume the quote request and give out a price.
-Inside the `processor` project, locate the `src/main/java/org/acme/amqp/processor/QuoteProcessor.java` file and add the following:
-
-[source, java]
-----
-package org.acme.amqp.processor;
-
-import java.util.Random;
-
-import javax.enterprise.context.ApplicationScoped;
-
-import org.acme.amqp.model.Quote;
-import org.eclipse.microprofile.reactive.messaging.Incoming;
-import org.eclipse.microprofile.reactive.messaging.Outgoing;
-
-import io.smallrye.reactive.messaging.annotations.Blocking;
-
-/**
- * A bean consuming data from the "requests" AMQP queue and giving out a random quote.
- * The result is pushed to the "quotes" AMQP queue.
- */
-@ApplicationScoped
-public class QuoteProcessor {
-
-    private Random random = new Random();
-
-    @Incoming("requests") // <1>
-    @Outgoing("quotes")   // <2>
-    @Blocking             // <3>
-    public Quote process(String quoteRequest) throws InterruptedException {
-        // simulate some hard-working task
-        Thread.sleep(200);
-        return new Quote(quoteRequest, random.nextInt(100));
-    }
-}
-----
-<1> Indicates that the method consumes the items from the `requests` channel
-<2> Indicates that the objects returned by the method are sent to the `quotes` channel
-<3> Indicates that the processing is _blocking_ and cannot be run on the caller thread.
-
-The `process` method is called for every AMQP message from the `quote-requests` queue, and will send a `Quote` object to the `quotes` queue.
-
-Because we want to consume messages from the `quote-requests` queue into the `requests` channel, we need to configure this association.
-Open the `src/main/resources/application.properties` file and add:
-
-[source, properties]
-----
-mp.messaging.incoming.requests.address=quote-requests
-----
-
-The configuration keys are structured as follows:
-
-`mp.messaging.[outgoing|incoming].{channel-name}.property=value`
-
-In our case, we want to configure the `address` attribute to indicate the name of the queue.
-
-== Receiving quotes
-
-Back to our `producer` project.
-Let's modify the `QuotesResource` to consume quotes and bind it to an HTTP endpoint sending events to clients:
-
-[source,java]
-----
-import io.smallrye.mutiny.Multi;
-//...
-
-@Channel("quotes") Multi<Quote> quotes; // <1>
-
-/**
- * Endpoint retrieving the "quotes" queue and sending the items to a server sent event.
- */
-@GET
-@Produces(MediaType.SERVER_SENT_EVENTS) // <2>
-public Multi<Quote> stream() {
-    return quotes; // <3>
-}
-----
-<1> Injects the `quotes` channel using the `@Channel` qualifier
-<2> Indicates that the content is sent using `Server-Sent Events`
-<3> Returns the stream (_Reactive Stream_)
-
-== The HTML page
-
-The final touch: the HTML page reading the quote prices using SSE.
-
-Create, inside the `producer` project, the `src/main/resources/META-INF/resources/quotes.html` file with the following content:
-
-[source, html]
-----
-<!DOCTYPE html>
-<html lang="en">
-<head>
-    <meta charset="UTF-8">
-    <title>Quotes</title>
-</head>
-<body>
-<div class="container">
-    <h2>Quotes</h2>
-    <button id="request-quote">Request Quote</button>
-    <div id="quotes"></div>
-</div>
-<script>
-    // Request a new quote and display it as "pending" until the price arrives
-    document.getElementById("request-quote").addEventListener("click", () => {
-        fetch("/quotes/request", {method: "POST"})
-            .then(res => res.text())
-            .then(qid => {
-                const quote = document.createElement("div");
-                quote.id = qid;
-                quote.innerText = "Quote " + qid + ": pending";
-                document.getElementById("quotes").appendChild(quote);
-            });
-    });
-
-    // Listen to the SSE stream and update the matching quote with its price
-    const source = new EventSource("/quotes");
-    source.onmessage = (event) => {
-        const quote = JSON.parse(event.data);
-        const element = document.getElementById(quote.id);
-        if (element) {
-            element.innerText = "Quote " + quote.id + ": " + quote.price;
-        }
-    };
-</script>
-</body>
-</html>
-----
-
-Nothing spectacular here.
-On each received quote, it updates the page.
-
-== Get it running
-
-You just need to run both applications using:
-
-[source,bash]
-----
-> mvn -f amqp-quickstart-producer quarkus:dev
-----
-
-And, in a separate terminal:
-
-[source, bash]
-----
-> mvn -f amqp-quickstart-processor quarkus:dev
-----
-
-Quarkus starts an AMQP broker automatically, configures the application, and shares the broker instance between the different applications.
-See xref:amqp-dev-services.adoc[Dev Services for AMQP] for more details.
-
-
-Open `http://localhost:8080/quotes.html` in your browser and request some quotes by clicking the button.
-
-== Running in JVM or Native mode
-
-When not running in dev or test mode, you will need to start your AMQP broker.
-You can follow the instructions from the https://activemq.apache.org/components/artemis/documentation/latest/using-server.html[Apache ActiveMQ Artemis website] or create a `docker-compose.yaml` file with the following content:
-
-[source, yaml]
-----
-version: '2'
-
-services:
-
-  artemis:
-    image: quay.io/artemiscloud/activemq-artemis-broker:0.1.2
-    ports:
-      - "8161:8161"
-      - "61616:61616"
-      - "5672:5672"
-    environment:
-      AMQ_USER: quarkus
-      AMQ_PASSWORD: quarkus
-    networks:
-      - amqp-quickstart-network
-
-  producer:
-    image: quarkus-quickstarts/amqp-quickstart-producer:1.0-${QUARKUS_MODE:-jvm}
-    build:
-      context: amqp-quickstart-producer
-      dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm}
-    environment:
-      AMQP_HOST: artemis
-      AMQP_PORT: 5672
-    ports:
-      - "8080:8080"
-    networks:
-      - amqp-quickstart-network
-
-  processor:
-    image: quarkus-quickstarts/amqp-quickstart-processor:1.0-${QUARKUS_MODE:-jvm}
-    build:
-      context: amqp-quickstart-processor
-      dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm}
-    environment:
-      AMQP_HOST: artemis
-      AMQP_PORT: 5672
-    networks:
-      - amqp-quickstart-network
-
-networks:
-  amqp-quickstart-network:
-    name: amqp-quickstart
-----
-
-Note how the 
AMQP broker location is configured.
-The `amqp.host` and `amqp.port` (`AMQP_HOST` and `AMQP_PORT` environment variables) properties configure the broker location.
-
-
-First, make sure you have stopped the applications, and build both applications in JVM mode with:
-
-[source, bash]
-----
-> mvn -f amqp-quickstart-producer clean package
-> mvn -f amqp-quickstart-processor clean package
-----
-
-Once packaged, run `docker compose up --build`.
-The UI is exposed on http://localhost:8080/quotes.html
-
-To run your applications as native executables, first build them:
-
-[source, bash]
-----
-> mvn -f amqp-quickstart-producer package -Pnative -Dquarkus.native.container-build=true
-> mvn -f amqp-quickstart-processor package -Pnative -Dquarkus.native.container-build=true
-----
-
-The `-Dquarkus.native.container-build=true` flag instructs Quarkus to build Linux 64-bit native executables, which can run inside containers.
-Then, run the system using:
-
-[source, bash]
-----
-> export QUARKUS_MODE=native
-> docker compose up --build
-----
-
-As before, the UI is exposed on http://localhost:8080/quotes.html
-
-== Going further
-
-This guide has shown how you can interact with AMQP 1.0 using Quarkus.
-It utilizes https://smallrye.io/smallrye-reactive-messaging[SmallRye Reactive Messaging] to build data streaming applications.
-
-If you did the Kafka quickstart, you will have realized that it's the same code.
-The only difference is the connector configuration and the JSON mapping.
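
To make that concrete, the incoming channel of the processor could be re-pointed from AMQP to Kafka purely through configuration — a sketch, assuming the `smallrye-kafka` connector is on the classpath in the second variant:

```properties
# AMQP (this guide): consume the quote-requests queue into the "requests" channel
mp.messaging.incoming.requests.connector=smallrye-amqp
mp.messaging.incoming.requests.address=quote-requests

# Kafka equivalent: same channel name, different connector and attribute
# mp.messaging.incoming.requests.connector=smallrye-kafka
# mp.messaging.incoming.requests.topic=quote-requests
```

The `@Incoming`/`@Outgoing` application code stays untouched in both cases.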
- - diff --git a/_versions/2.7/guides/apicurio-registry-dev-services.adoc b/_versions/2.7/guides/apicurio-registry-dev-services.adoc deleted file mode 100644 index beb991b3fc9..00000000000 --- a/_versions/2.7/guides/apicurio-registry-dev-services.adoc +++ /dev/null @@ -1,60 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Dev Services for Apicurio Registry - -include::./attributes.adoc[] - -If the `quarkus-apicurio-registry-avro` extension is present, Dev Services for Apicurio Registry automatically starts an Apicurio Registry instance in dev mode and when running tests. -Also, all Kafka channels in SmallRye Reactive Messaging are automatically configured to use this registry. -(This automatic configuration of course only applies to serializers and deserializers from the Apicurio Registry Avro library.) - -== Enabling / Disabling Dev Services for Apicurio Registry - -Dev Services for Apicurio Registry is automatically enabled unless: - -- `quarkus.apicurio-registry.devservices.enabled` is set to `false` -- `mp.messaging.connector.smallrye-kafka.apicurio.registry.url` is configured -- all the Reactive Messaging Kafka channels have the `apicurio.registry.url` attribute set - -Dev Services for Apicurio Registry relies on Docker to start the registry. -If your environment does not support Docker, you will need to start the registry manually, or use an already running registry. -You can configure the registry URL for all Kafka channels in SmallRye Reactive Messaging with a single property: - -[source,properties] ----- -mp.messaging.connector.smallrye-kafka.apicurio.registry.url=http://localhost:8081/apis/registry/v2 ----- - -== Shared registry - -Most of the time you need to share the registry between applications. 
-Dev Services for Apicurio Registry implements a _service discovery_ mechanism for your multiple Quarkus applications running in _dev_ mode to share a single registry. - -NOTE: Dev Services for Apicurio Registry starts the container with the `quarkus-dev-service-apicurio-registry` label which is used to identify the container. - -If you need multiple (shared) registries, you can configure the `quarkus.apicurio-registry.devservices.service-name` attribute and indicate the registry name. -It looks for a container with the same value, or starts a new one if none can be found. -The default service name is `apicurio-registry`. - -Sharing is enabled by default in dev mode, but disabled in test mode. -You can disable the sharing with `quarkus.apicurio-registry.devservices.shared=false`. - -== Setting the port - -By default, Dev Services for Apicurio Registry picks a random port and configures the application. -You can set the port by configuring the `quarkus.apicurio-registry.devservices.port` property. - -Note that the Kafka channels in SmallRye Reactive messaging are automatically configured with the chosen port. - -== Configuring the image - -Dev Services for Apicurio Registry uses `apicurio/apicurio-registry-mem` images. 
-You can select any 2.x version from https://hub.docker.com/r/apicurio/apicurio-registry-mem: - -[source, properties] ----- -quarkus.apicurio-registry.devservices.image-name=apicurio/apicurio-registry-mem:latest-snapshot ----- diff --git a/_versions/2.7/guides/attributes.adoc b/_versions/2.7/guides/attributes.adoc deleted file mode 100644 index fffff07a1da..00000000000 --- a/_versions/2.7/guides/attributes.adoc +++ /dev/null @@ -1,40 +0,0 @@ -:imagesdir: /guides/images - -:project-name: Quarkus -:quarkus-version: 2.7.5.Final - -:maven-version: 3.8.1+ -:graalvm-version: 21.3.1 -:graalvm-flavor: 21.3.1-java11 -:mandrel-flavor: 21.3-java11 -:surefire-version: 3.0.0-M5 -:restassured-version: 4.4.0 -:gradle-version: 7.3.3 -:jandex-maven-plugin-version: 1.2.2 - -:generated-dir: ../../../_generated-doc/2.7 -:quarkus-home-url: https://quarkus.io -:quarkus-site-getting-started: /get-started -:quarkus-writing-extensions-guide: /guides/writing-extensions -:quarkus-site-publications: /publications -:quarkus-org-url: https://github.com/quarkusio -:quarkus-base-url: https://github.com/quarkusio/quarkus -:quarkus-clone-url: https://github.com/quarkusio/quarkus.git -:quarkus-archive-url: https://github.com/quarkusio/quarkus/archive/master.zip -:quarkus-tree-url: https://github.com/quarkusio/quarkus/tree/main -:quarkus-issues-url: https://github.com/quarkusio/quarkus/issues -:quarkus-images-url: https://github.com/quarkusio/quarkus-images/tree -:quarkus-chat-url: https://quarkusio.zulipchat.com -:quarkus-mailing-list-subscription-email: quarkus-dev+subscribe@googlegroups.com -:quarkus-mailing-list-index: https://groups.google.com/d/forum/quarkus-dev -:quickstarts-base-url: https://github.com/quarkusio/quarkus-quickstarts -:quickstarts-clone-url: https://github.com/quarkusio/quarkus-quickstarts.git -:quickstarts-archive-url: https://github.com/quarkusio/quarkus-quickstarts/archive/main.zip -:quickstarts-blob-url: https://github.com/quarkusio/quarkus-quickstarts/blob/main 
-:quickstarts-tree-url: https://github.com/quarkusio/quarkus-quickstarts/tree/main
-
-:config-consul-guide: https://quarkiverse.github.io/quarkiverse-docs/quarkus-config-extensions/dev/consul.html
-:hibernate-search-orm-elasticsearch-aws-guide: https://quarkiverse.github.io/quarkiverse-docs/quarkus-hibernate-search-extras/2.x/index.html
-:neo4j-guide: https://quarkiverse.github.io/quarkiverse-docs/quarkus-neo4j/dev/index.html
-:vault-guide: https://quarkiverse.github.io/quarkiverse-docs/quarkus-vault/dev/index.html
-:vault-datasource-guide: https://quarkiverse.github.io/quarkiverse-docs/quarkus-vault/dev/vault-datasource.html
diff --git a/_versions/2.7/guides/azure-functions-http.adoc b/_versions/2.7/guides/azure-functions-http.adoc
deleted file mode 100644
index 2426a42c994..00000000000
--- a/_versions/2.7/guides/azure-functions-http.adoc
+++ /dev/null
@@ -1,123 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Azure Functions (Serverless) with RESTEasy, Undertow, or Reactive Routes
-:extension-status: preview
-
-include::./attributes.adoc[]
-
-The `quarkus-azure-functions-http` extension allows you to write microservices with RESTEasy (JAX-RS),
-Undertow (servlet), Reactive Routes, or xref:funqy-http.adoc[Funqy HTTP] and make these microservices deployable to the Azure Functions runtime.
-
-One Azure Functions deployment can represent any number of JAX-RS, servlet, Reactive Routes, or xref:funqy-http.adoc[Funqy HTTP] endpoints.
-
-include::./status-include.adoc[]
-
-NOTE: Only text-based media types are supported at the moment, as the Azure Functions HTTP Trigger for Java does not support a binary format.
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-* https://azure.microsoft.com[An Azure Account]. Free accounts work.
-* https://docs.microsoft.com/en-us/cli/azure/install-azure-cli[Azure CLI Installed]
-
-== Solution
-
-This guide walks you through running a Maven Archetype to generate a sample project that contains three HTTP endpoints
-written with JAX-RS APIs, Servlet APIs, Reactive Routes, or xref:funqy-http.adoc[Funqy HTTP] APIs. After building, you will then be able to deploy
-to Azure.
-
-== Creating the Maven Deployment Project
-
-Create the Azure Maven project for your Quarkus application using our Maven Archetype.
-
-
-[source,bash,subs=attributes+]
-----
-mvn archetype:generate \
-    -DarchetypeGroupId=io.quarkus \
-    -DarchetypeArtifactId=quarkus-azure-functions-http-archetype \
-    -DarchetypeVersion={quarkus-version}
-----
-
-Running this command runs Maven in interactive mode, and it will ask you to fill in some build properties:
-
-* `groupId` - The Maven groupId of this generated project. Type in `org.acme`.
-* `artifactId` - The Maven artifactId of this generated project. Type in `quarkus-demo`.
-* `version` - Version of this generated project.
-* `package` - Defaults to `groupId`.
-* `appName` - Use the default value. This is the application name in Azure. It must be a unique subdomain name under `*.azurewebsites.net`. Otherwise, deploying to Azure will fail.
-* `appRegion` - Defaults to `westus`. Dependent on your Azure region.
-* `function` - Use the default, which is `quarkus`. Name of your Azure function. Can be anything you want.
-* `resourceGroup` - Use the default value. Any value is fine though.
-
-The values above are defined as properties in the generated `pom.xml` file.
-
-== Login to Azure
-
-If you don't log in to Azure, you won't be able to deploy.
-
-[source,bash,subs=attributes+]
-----
-az login
-----
-
-== Build and Deploy to Azure
-
-The `pom.xml` you generated in the previous step pulls in the `azure-functions-maven-plugin`. Running `mvn install`
-generates the config files and staging directory required by the `azure-functions-maven-plugin`. 
Here's
-how to execute it.
-
-[source,bash,subs=attributes+]
-----
-./mvnw clean install azure-functions:deploy
-----
-
-If you haven't already created your function at Azure, this command will build an uber-jar, package it, create the function
-at Azure, and deploy it.
-
-If deployment is a success, the Azure plugin will tell you the base URL to access your function.
-
-For example:
-[source]
-----
-Successfully deployed the artifact to https://quarkus-demo-123451234.azurewebsites.net
-----
-
-The URLs to access the services would be:
-
-https://{appName}.azurewebsites.net/api/hello
-https://{appName}.azurewebsites.net/api/servlet/hello
-https://{appName}.azurewebsites.net/api/vertx/hello
-https://{appName}.azurewebsites.net/api/funqyHello
-
-== Extension maven dependencies
-
-The sample project includes the RESTEasy, Undertow, Reactive Routes, and xref:funqy-http.adoc[Funqy HTTP] extensions. If you are only using one of those
-APIs (e.g. JAX-RS only), remove the Maven dependencies you do not need among `quarkus-resteasy`, `quarkus-undertow`, `quarkus-funqy-http`, and
-`quarkus-reactive-routes`.
-
-You must include the `quarkus-azure-functions-http` extension, as this is a generic bridge between the Azure Functions
-runtime and the HTTP framework you are writing your microservices in.
-
-== Azure Deployment Descriptors
-
-Templates for the Azure Functions deployment descriptors (`host.json`, `function.json`) are within
-the `azure-config` directory. Edit them as you need to. Rerun the build when you are ready.
-
-*NOTE*: If you change the `function.json` `path` attribute or if you add a `routePrefix`,
-your JAX-RS endpoints won't route correctly. See <<config-azure-paths>> for more information.
-
-
-[#config-azure-paths]
-== Configuring Root Paths
-
-The default route prefix for an Azure Function is `/api`. All of your JAX-RS, Servlet, Reactive Routes, and xref:funqy-http.adoc[Funqy HTTP] endpoints must 
In the generated project, this is handled by the
-`quarkus.http.root-path` setting in `application.properties`.
-
-If you modify the `path` or add a `routePrefix` within the `azure-config/function.json`
-deployment descriptor, your code or configuration must also reflect any prefixes you specify for your path.
-
diff --git a/_versions/2.7/guides/blaze-persistence.adoc b/_versions/2.7/guides/blaze-persistence.adoc
deleted file mode 100644
index 219aaf7592b..00000000000
--- a/_versions/2.7/guides/blaze-persistence.adoc
+++ /dev/null
@@ -1,250 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using Blaze-Persistence
-
-include::./attributes.adoc[]
-:config-file: application.properties
-
-Blaze-Persistence offers a fluent query builder API on top of JPA with a deep Hibernate ORM integration that enables the
-use of advanced SQL features like Common Table Expressions while staying in the realm of the JPA model.
-
-On top of that, the Blaze-Persistence Entity-View module allows for DTO definitions that can be applied to business logic
-queries, which are then transformed to optimized queries that only fetch the data that is needed to construct the DTO instances.
-The same DTO definitions can further be used for applying database updates, leading to a great reduction in boilerplate
-code and removing the need for object mapping tools.
-
-include::./platform-include.adoc[]
-
-== Setting up and configuring Blaze-Persistence
-
-The extension comes with default producers for `CriteriaBuilderFactory` and `EntityViewManager` that work out of the
-box given a working Hibernate ORM configuration. For customization, overriding of the default producers is possible via the
-standard mechanism as documented in the xref:cdi-reference.adoc#default_beans[Quarkus CDI reference].
-This is necessary if you need to set custom link:https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#anchor-configuration-properties[Blaze-Persistence properties].
-
-In Quarkus, you just need to:
-
-* `@Inject` `CriteriaBuilderFactory` or `EntityViewManager` and use it
-* annotate your entity views with `@EntityView` and any other mapping annotation as usual
-
-Add the following dependencies to your project:
-
-* the Blaze-Persistence extension: `com.blazebit:blaze-persistence-integration-quarkus`
-* further Blaze-Persistence integrations as needed:
-  - `blaze-persistence-integration-jackson` for link:https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#Jackson%20integration[Jackson]
-  - `blaze-persistence-integration-jaxrs` for link:https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#jaxrs-integration[JAX-RS]
-
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.Example dependencies using Maven
-----
-<dependencies>
-    <dependency>
-        <groupId>com.blazebit</groupId>
-        <artifactId>blaze-persistence-integration-quarkus</artifactId>
-    </dependency>
-    <dependency>
-        <groupId>com.blazebit</groupId>
-        <artifactId>blaze-persistence-integration-hibernate-5.6</artifactId>
-        <scope>runtime</scope>
-    </dependency>
-</dependencies>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.Using Gradle
-----
-implementation("com.blazebit:blaze-persistence-integration-quarkus")
-runtimeOnly("com.blazebit:blaze-persistence-integration-hibernate-5.6")
-----
-
-The use in native images requires a dependency on the entity view annotation processor that may be extracted into a separate `native` profile:
-
-[source,xml]
-----
-<profiles>
-    <profile>
-        <id>native</id>
-        <dependencies>
-            <dependency>
-                <groupId>com.blazebit</groupId>
-                <artifactId>blaze-persistence-entity-view-processor</artifactId>
-                <scope>provided</scope>
-            </dependency>
-        </dependencies>
-    </profile>
-</profiles>
-----
-
-A `CriteriaBuilderFactory` and an `EntityViewManager` will be created based on the configured `EntityManagerFactory` as provided by the xref:hibernate-orm.adoc[Hibernate-ORM extension].
-
-You can then access these beans via injection:
-
-[source,java]
-.Example application bean using Hibernate
-----
-@ApplicationScoped
-public class SantaClausService {
-    @Inject
-    EntityManager em; <1>
-    @Inject
-    CriteriaBuilderFactory cbf; <2>
-    @Inject
-    EntityViewManager evm; <3>
-
-    @Transactional <4>
-    public List<GiftView> findAllGifts() {
-        CriteriaBuilder<Gift> cb = cbf.create(em, Gift.class);
-        return evm.applySetting(EntityViewSetting.create(GiftView.class), cb).getResultList();
-    }
-}
-----
-
-<1> Inject the `EntityManager`
-<2> Inject the `CriteriaBuilderFactory`
-<3> Inject the `EntityViewManager`
-<4> Mark your CDI bean method as `@Transactional` so that a transaction is started or joined.
-
-[source,java]
-.Example Entity
-----
-@Entity
-public class Gift {
-    private Long id;
-    private String name;
-    private String description;
-
-    @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator="giftSeq")
-    public Long getId() {
-        return id;
-    }
-
-    public void setId(Long id) {
-        this.id = id;
-    }
-
-    public String getName() {
-        return name;
-    }
-
-    public void setName(String name) {
-        this.name = name;
-    }
-
-    public String getDescription() {
-        return description;
-    }
-
-    public void setDescription(String description) {
-        this.description = description;
-    }
-}
-----
-
-[source,java]
-.Example Entity-View
-----
-@EntityView(Gift.class)
-public interface GiftView {
-
-    @IdMapping
-    Long getId();
-
-    String getName();
-}
-----
-
-[source,java]
-.Example updatable Entity-View
-----
-@UpdatableEntityView
-@CreatableEntityView
-@EntityView(Gift.class)
-public interface GiftUpdateView extends GiftView {
-
-    void setName(String name);
-}
-----
-
-[source,java]
-.Example JAX-RS Resource
-----
-@Path("/gifts")
-public class GiftResource {
-    @Inject
-    EntityManager entityManager;
-    @Inject
-    EntityViewManager entityViewManager;
-    @Inject
-    SantaClausService santaClausService;
-
-    @POST
-    @Transactional
-    public Response createGift(GiftUpdateView view) {
        entityViewManager.save(entityManager, view);
-        return Response.created(URI.create("/gifts/" + view.getId())).build();
-    }
-
-    @GET
-    @Produces(MediaType.APPLICATION_JSON)
-    public List<GiftView> getGifts() {
-        return santaClausService.findAllGifts();
-    }
-
-    @PUT
-    @Path("{id}")
-    @Transactional
-    public GiftView updateGift(@EntityViewId("id") GiftUpdateView view) {
-        entityViewManager.save(entityManager, view);
-        return entityViewManager.find(entityManager, GiftView.class, view.getId());
-    }
-
-    @GET
-    @Path("{id}")
-    @Produces(MediaType.APPLICATION_JSON)
-    public GiftView getGift(@PathParam("id") Long id) {
-        return entityViewManager.find(entityManager, GiftView.class, id);
-    }
-}
-----
-
-[[blaze-persistence-configuration-properties]]
-== Blaze-Persistence configuration properties
-
-There are various optional properties useful to refine your `EntityViewManager` and `CriteriaBuilderFactory` or to guide Quarkus' guesses.
-
-There are no required properties, as long as the Hibernate ORM extension is configured properly.
-
-When no property is set, the Blaze-Persistence defaults apply.
-
-The configuration properties listed here allow you to override such defaults, and customize and tune various aspects.
-
-include::quarkus-blaze-persistence.adoc[opts=optional, leveloffset=+2]
-
-Apart from these configuration options, further configuration and customization can be applied by observing `CriteriaBuilderConfiguration` or `EntityViewConfiguration` events and applying customizations on these objects. The various customization use cases can be found in the link:https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#quarkus-customization[Quarkus section of the entity-view documentation].
-
-[source,java]
-.Example CriteriaBuilderConfiguration and EntityViewConfiguration observing
-----
-@ApplicationScoped
-public class BlazePersistenceConfigurer {
-
-    public void configure(@Observes CriteriaBuilderConfiguration config) {
-        config.setProperty("...", "...");
-    }
-
-    public void configure(@Observes EntityViewConfiguration config) {
-        // Register custom BasicUserType or register type test values
-        config.registerBasicUserType(MyClass.class, MyClassBasicUserType.class);
-    }
-}
-----
-
-== Limitations
-
-Apache Derby::
-Blaze-Persistence currently does not come with support for Apache Derby.
-This limitation could be lifted in the future, if there's a compelling need for it and if someone contributes it.
diff --git a/_versions/2.7/guides/building-my-first-extension.adoc b/_versions/2.7/guides/building-my-first-extension.adoc
deleted file mode 100644
index 7ae69185458..00000000000
--- a/_versions/2.7/guides/building-my-first-extension.adoc
+++ /dev/null
@@ -1,930 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Building my first extension
-
-include::./attributes.adoc[]
-
-Quarkus extensions enhance your application just as project dependencies do.
-The role of an extension is to leverage Quarkus paradigms to seamlessly integrate a library into the Quarkus architecture - e.g. do more things at build time.
-This is how you can use your battle-tested ecosystem and take advantage of Quarkus performance and native compilation.
-Go to https://code.quarkus.io/[code.quarkus.io] to get the list of the supported extensions.
-
-In this guide we are going to develop the *Sample Greeting Extension*.
-The extension will expose a customizable HTTP endpoint which simply greets the visitor.
-
-[NOTE]
-.Disclaimer
-To be extra clear: you don't need an extension to add a Servlet to your application.
-This guide is a simplified example to explain the concepts of extension development; see the xref:writing-extensions.adoc[full documentation] if you need more information.
-Keep in mind it's not representative of the power of moving things to build time or simplifying the build of native images.
-
-== Prerequisites
-
-:prerequisites-time: 30 minutes
-:prerequisites-no-graalvm:
-include::includes/devtools/prerequisites.adoc[]
-
-== Basic Concepts
-
-First things first, we will need to start with some basic concepts.
-
-* JVM mode vs Native mode
-  ** Quarkus is first and foremost a Java framework, which means you can develop, package and run classic JAR applications; that's what we call *JVM mode*.
-  ** Thanks to https://graalvm.org/[GraalVM] you can compile your Java application into machine-specific code (like you do in Go or C++); that's what we call *Native mode*.
-
-The operation of compiling Java bytecode into native system-specific machine code is named *Ahead of Time Compilation* (aka AoT).
-
-* build time vs runtime in classic Java frameworks
-  ** The build time corresponds to all the actions you apply to your Java source files to convert them into something runnable (class files, jar/war, native images).
-  Usually this stage is composed of compilation, annotation processing, bytecode generation, etc. At this point, everything is under the developer's scope and control.
-  ** The runtime is all the actions that happen when you execute your application.
-  It's obviously focused on starting your business-oriented actions but it relies on a lot of technical actions like loading libraries and configuration files, scanning the application's classpath, configuring the dependency injection, setting up your Object-Relational Mapping, instantiating your REST controllers, etc.
-
-Usually, Java frameworks do their bootstrapping during the runtime before actually starting the application's "business-oriented layer".
During bootstrap, frameworks dynamically collect metadata by scanning the classpath to find configurations, entity definitions, dependency injection bindings, etc. in order to instantiate the proper objects through reflection. The main consequences are:
-
-* Delaying the readiness of your application: you need to wait a couple of seconds before actually serving a business request.
-* Having a peak of resource consumption at bootstrap: in a constrained environment, you will need to size the needed resources based on your technical bootstrap needs rather than your actual business needs.
-
-Quarkus' philosophy is to avoid, as much as possible, slow and memory-intensive dynamic code execution, by shifting these actions left and doing them at build time whenever possible.
-A Quarkus extension is a piece of Java code acting as an adapter layer for your favorite library or technology.
-
-== Description of a Quarkus extension
-
-A Quarkus extension consists of two parts:
-
-* The *runtime module* which represents the capabilities the extension developer exposes to the application's developer (an authentication filter, an enhanced data layer API, etc).
-Runtime dependencies are the ones users will add as their application dependencies (in Maven POMs or Gradle build scripts).
-* The *deployment module* which is used during the augmentation phase of the build; it describes how to "deploy" a library
-following the Quarkus philosophy.
-In other words, it applies all the Quarkus optimizations to your application during the build.
-The deployment module is also where we prepare things for GraalVM's native compilation.
-
-IMPORTANT: Users should not add the deployment modules of extensions as application dependencies. The deployment dependencies are resolved by
-Quarkus during the augmentation phase from the runtime dependencies of the application.
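To make the IMPORTANT note above concrete, an application consuming the extension developed in this guide would declare only the runtime artifact in its own build file. The coordinates below are taken from the project generated later in this guide; this is an illustrative sketch of a consumer's `pom.xml`, not generated output:

```xml
<!-- Application pom.xml: declare only the runtime artifact.
     The matching greeting-extension-deployment artifact is resolved
     by Quarkus itself during the augmentation phase. -->
<dependency>
    <groupId>org.acme</groupId>
    <artifactId>greeting-extension</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>
```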
-
-At this point, you should have understood that most of the magic will happen at the augmentation build time thanks to the deployment module.
-
-== Quarkus Application Bootstrap
-
-There are three distinct bootstrap phases of a Quarkus application.
-
-* *Augmentation*. During the build time, the Quarkus extensions will load and scan your application's bytecode (including the dependencies) and configuration.
-At this stage, the extension can read configuration files, scan classes for specific annotations, etc.
-Once all the metadata has been collected, the extensions can pre-process the libraries' bootstrap actions like your ORM, DI or REST controller configurations.
-The result of the bootstrap is directly recorded into bytecode and will be part of your final application package.
-* *Static Init*. At run time, Quarkus first executes a static init method which contains some extension actions/configurations.
-When you build a native package, this static method is pre-processed at build time and the objects it has generated are serialized into the final native executable, so the initialization code is not executed in native mode (imagine you execute a Fibonacci function during this phase: the result of the computation is directly recorded in the native executable).
-When running the application in JVM mode, this static init phase is executed at the start of the application.
-* *Runtime Init*. Well, nothing fancy here, we do classic run time code execution.
-So, the more code you run during the first two phases, the faster your application will start.
-
-Now that everything is explained, we can start coding!
-
-== Project setup
-
-Extensions can be built with either Maven or Gradle. Depending on your build tool, setup can be done as follows:
-
-NOTE: The Gradle extension plugin is still experimental and may be missing features available in the Maven plugin.
- -=== Maven setup - -Quarkus provides `create-extension` Maven Mojo to initialize your extension project. - -It will try to auto-detect its options: - -* from `quarkus` (Quarkus Core) or `quarkus/extensions` directory, it will use the 'Quarkus Core' extension layout and defaults. -* with `-DgroupId=io.quarkiverse.[extensionId]`, it will use the 'Quarkiverse' extension layout and defaults. -* in other cases it will use the 'Standalone' extension layout and defaults. -* we may introduce other layout types in the future. - -TIP: You may call it without any parameter to use the interactive mode: `mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create-extension -N` - -[source,shell,subs=attributes+] ----- -$ mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create-extension -N \ - -DgroupId=org.acme \ #<1> - -DextensionId=greeting-extension \ #<2> - -DwithoutTests #<3> - -[INFO] --- quarkus-maven-plugin:{quarkus-version}:create-extension (default-cli) @ standalone-pom --- - -Detected layout type is 'standalone' #<4> -Generated runtime artifactId is 'greeting-extension' #<5> - - -applying codestarts... -🔠 java -🧰 maven -🗃 quarkus-extension -🐒 extension-base - ------------ -👍 extension has been successfully generated in: ---> /Users/ia3andy/workspace/redhat/quarkus/demo/greeting-extension ------------ -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ -[INFO] Total time: 1.659 s -[INFO] Finished at: 2021-01-25T16:17:16+01:00 -[INFO] ------------------------------------------------------------------------ - ----- - -<1> The extension groupId -<2> The extension id (not namespaced). 
-<3> Indicate that we don't want to generate any tests
-<4> From a directory with no pom.xml and without any further options, the generator will automatically pick the 'standalone' extension layout
-<5> With the 'standalone' layout, the `namespaceId` is empty by default, so the computed runtime module artifactId is the `extensionId`
-
-Maven has generated a `greeting-extension` directory containing the extension project which consists of the parent `pom.xml`, the `runtime` and the `deployment` modules.
-
-==== The parent pom.xml
-
-Your extension is a multi-module project. So let's start by checking out the parent POM at `./greeting-extension/pom.xml`.
-
-[source, xml, subs=attributes+]
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-    <groupId>org.acme</groupId>
-    <artifactId>greeting-extension-parent</artifactId>
-    <version>1.0.0-SNAPSHOT</version>
-    <packaging>pom</packaging>
-    <name>Greeting Extension - Parent</name>
-    <modules> <1>
-        <module>deployment</module>
-        <module>runtime</module>
-    </modules>
-    <properties>
-        <compiler-plugin.version>3.8.1</compiler-plugin.version>
-        <failsafe-plugin.version>${surefire-plugin.version}</failsafe-plugin.version>
-        <maven.compiler.release>11</maven.compiler.release>
-        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
-        <quarkus.version>{quarkus-version}</quarkus.version>
-        <surefire-plugin.version>3.0.0-M5</surefire-plugin.version>
-        <skipITs>false</skipITs>
-    </properties>
-    <dependencyManagement>
-        <dependencies>
-            <dependency> <3>
-                <groupId>io.quarkus</groupId>
-                <artifactId>quarkus-bom</artifactId>
-                <version>${quarkus.version}</version>
-                <type>pom</type>
-                <scope>import</scope>
-            </dependency>
-        </dependencies>
-    </dependencyManagement>
-    <build>
-        <pluginManagement>
-            <plugins>
-                <plugin>
-                    <artifactId>maven-surefire-plugin</artifactId>
-                    <version>${surefire-plugin.version}</version>
-                    <configuration>
-                        <systemPropertyVariables> <4>
-                            <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
-                            <maven.home>${maven.home}</maven.home>
-                            <maven.repo>${settings.localRepository}</maven.repo>
-                        </systemPropertyVariables>
-                    </configuration>
-                </plugin>
-                <plugin>
-                    <artifactId>maven-failsafe-plugin</artifactId>
-                    <version>${failsafe-plugin.version}</version>
-                    <configuration>
-                        <systemPropertyVariables>
-                            <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
-                            <maven.home>${maven.home}</maven.home>
-                            <maven.repo>${settings.localRepository}</maven.repo>
-                        </systemPropertyVariables>
-                    </configuration>
-                </plugin>
-                <plugin>
-                    <artifactId>maven-compiler-plugin</artifactId> <2>
-                    <version>${compiler-plugin.version}</version>
-                    <configuration>
-                        <compilerArgs>
-                            <arg>-parameters</arg> <5>
-                        </compilerArgs>
-                    </configuration>
-                </plugin>
-            </plugins>
-        </pluginManagement>
-    </build>
-</project>
-----
-
-<1> Your extension declares 2 sub-modules `deployment` and `runtime`.
-<2> Quarkus requires a recent version of the Maven compiler plugin supporting the annotationProcessorPaths configuration.
-<3> The `quarkus-bom` aligns your dependencies with those used by Quarkus during the augmentation phase.
-<4> Quarkus requires these configs to run tests properly.
-<5> Setting the `-parameters` flag this way works around https://issues.apache.org/jira/browse/MCOMPILER-413[MCOMPILER-413].
-
-==== The Deployment module
-
-Let's have a look at the deployment's `./greeting-extension/deployment/pom.xml`.
-[source, xml]
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-    <parent>
-        <groupId>org.acme</groupId>
-        <artifactId>greeting-extension-parent</artifactId>
-        <version>1.0.0-SNAPSHOT</version>
-    </parent>
-
-    <artifactId>greeting-extension-deployment</artifactId> <1>
-    <name>Greeting Extension - Deployment</name>
-
-    <dependencies>
-        <dependency> <2>
-            <groupId>io.quarkus</groupId>
-            <artifactId>quarkus-arc-deployment</artifactId>
-        </dependency>
-        <dependency> <3>
-            <groupId>org.acme</groupId>
-            <artifactId>greeting-extension</artifactId>
-            <version>${project.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>io.quarkus</groupId>
-            <artifactId>quarkus-junit5-internal</artifactId>
-            <scope>test</scope>
-        </dependency>
-    </dependencies>
-
-    <build>
-        <plugins>
-            <plugin>
-                <artifactId>maven-compiler-plugin</artifactId>
-                <configuration>
-                    <annotationProcessorPaths>
-                        <path> <4>
-                            <groupId>io.quarkus</groupId>
-                            <artifactId>quarkus-extension-processor</artifactId>
-                            <version>${quarkus.version}</version>
-                        </path>
-                    </annotationProcessorPaths>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-</project>
-----
-
-The key points are:
-
-<1> By convention, the deployment module has the `-deployment` suffix (`greeting-extension-deployment`).
-<2> The deployment module depends on the `quarkus-arc-deployment` artifact.
-We will see later which dependencies are convenient to add.
-<3> The deployment module also *must* depend on the runtime module.
-<4> We add the `quarkus-extension-processor` to the compiler annotation processors.
-
-In addition to the `pom.xml`, `create-extension` also generated the `org.acme.greeting.extension.deployment.GreetingExtensionProcessor` class.
-
-[source, java]
-----
-package org.acme.greeting.extension.deployment;
-
-import io.quarkus.deployment.annotations.BuildStep;
-import io.quarkus.deployment.builditem.FeatureBuildItem;
-
-class GreetingExtensionProcessor {
-
-    private static final String FEATURE = "greeting-extension";
-
-    @BuildStep
-    FeatureBuildItem feature() {
-        return new FeatureBuildItem(FEATURE);
-    }
-
-}
-----
-
-NOTE: `FeatureBuildItem` represents a functionality provided by an extension.
-The name of the feature gets displayed in the log during application bootstrap.
-An extension should provide at most one feature.
-
-Be patient, we will explain the `Build Step Processor` concept and the whole extension deployment API later on.
-At this point, you just need to understand that this class explains to Quarkus how to deploy a feature named `greeting` which is your extension.
-In other words, you are augmenting your application to use the `greeting` extension with all the Quarkus benefits (build time optimization, native support, etc.).
-
-==== The Runtime module
-
-Finally `./greeting-extension/runtime/pom.xml`.
-
-[source, xml]
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-    <parent>
-        <groupId>org.acme</groupId>
-        <artifactId>greeting-extension-parent</artifactId>
-        <version>1.0.0-SNAPSHOT</version>
-    </parent>
-
-    <artifactId>greeting-extension</artifactId> <1>
-    <name>Greeting Extension - Runtime</name>
-
-    <dependencies>
-        <dependency> <2>
-            <groupId>io.quarkus</groupId>
-            <artifactId>quarkus-arc</artifactId>
-        </dependency>
-    </dependencies>
-
-    <build>
-        <plugins>
-            <plugin> <3>
-                <groupId>io.quarkus</groupId>
-                <artifactId>quarkus-bootstrap-maven-plugin</artifactId>
-                <version>${quarkus.version}</version>
-                <executions>
-                    <execution>
-                        <phase>compile</phase>
-                        <goals>
-                            <goal>extension-descriptor</goal>
-                        </goals>
-                        <configuration>
-                            <deployment>${project.groupId}:${project.artifactId}-deployment:${project.version}</deployment>
-                        </configuration>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-compiler-plugin</artifactId>
-                <configuration>
-                    <annotationProcessorPaths>
-                        <path> <4>
-                            <groupId>io.quarkus</groupId>
-                            <artifactId>quarkus-extension-processor</artifactId>
-                            <version>${quarkus.version}</version>
-                        </path>
-                    </annotationProcessorPaths>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-</project>
-----
-
-The key points are:
-
-<1> By convention, the runtime module has no suffix (`greeting-extension`) as it is the artifact exposed to the end user.
-<2> The runtime module depends on the `quarkus-arc` artifact.
-<3> We add the `quarkus-bootstrap-maven-plugin` to generate the Quarkus extension descriptor included into the runtime artifact, which links it with the corresponding deployment artifact.
-<4> We add the `quarkus-extension-processor` to the compiler annotation processors.
-
-=== Gradle setup
-
-Quarkus does not provide any way to initialize a Gradle project for extensions yet.
-
-As mentioned before, an extension is composed of two modules:
-
-* `runtime`
-* `deployment`
-
-We are going to create a Gradle multi-module project with those two modules.
Here is a simple `settings.gradle` example file:
-
-[source, groovy]
-----
-pluginManagement {
-    repositories {
-        mavenCentral()
-        gradlePluginPortal()
-    }
-    plugins {
-        id 'io.quarkus.extension' version "${quarkus.version}" <1>
-    }
-}
-
-include 'runtime', 'deployment' <2>
-
-rootProject.name = 'greeting-extension'
-----
-
-<1> Configure the Quarkus extension plugin version
-<2> Include both `runtime` and `deployment` modules
-
-Here is a sample of a root `build.gradle` file:
-
-[source, groovy]
-----
-subprojects {
-    apply plugin: 'java-library' <1>
-    apply plugin: 'maven-publish' <2>
-
-    group 'org.acme' <3>
-    version '1.0-SNAPSHOT'
-}
-----
-
-<1> Apply the `java-library` plugin for all sub-modules
-<2> Apply the `maven-publish` plugin used to publish our artifacts
-<3> Globally set the group id used for publication
-
-The `io.quarkus.extension` plugin will be used to help us build the extension.
-The plugin will *only* be applied to the `runtime` module.
-
-==== The deployment module
-
-The deployment module does not require any specific plugin.
-Here is an example of a minimal `build.gradle` file for the `deployment` module:
-
-[source, groovy]
-----
-name = 'greeting-extension-deployment' <1>
-
-dependencies {
-    implementation project(':runtime') <2>
-
-    implementation platform("io.quarkus:quarkus-bom:${quarkus.version}")
-
-    testImplementation 'io.quarkus:quarkus-junit5-internal'
-}
-----
-
-<1> By convention, the deployment module has the `-deployment` suffix (`greeting-extension-deployment`).
-<2> The deployment module *must* depend on the `runtime` module.
-
-==== The runtime module
-
-The runtime module applies the `io.quarkus.extension` plugin. This will:
-
-* Add `quarkus-extension-processor` as an annotation processor to both modules.
-* Generate the extension descriptor files.
-
-Here is an example of a `build.gradle` file for the `runtime` module:
-
-[source, groovy]
-----
-plugins {
-    id 'io.quarkus.extension' <1>
-}
-
-name = 'greeting-extension' <2>
-description = 'Greeting extension'
-
-dependencies {
-    implementation platform("io.quarkus:quarkus-bom:${quarkus.version}")
-}
-----
-
-<1> Apply the `io.quarkus.extension` plugin.
-<2> By convention, the runtime module doesn't have a suffix (and thus is named `greeting-extension`) as it is the artifact exposed to the end user.
-
-== Basic version of the Sample Greeting extension
-
-=== Implementing the Greeting feature
-The (killer) feature proposed by our extension is to greet the user.
-To do so, our extension will deploy, in the user application, a Servlet exposing the HTTP endpoint `/greeting` which responds to the GET verb with a plain text `Hello`.
-
-The `runtime` module is where you develop the feature you want to propose to your users, so it's time to create our Web Servlet.
-
-To use Servlets in your applications you need to have a Servlet Container such as http://undertow.io[Undertow].
-Luckily, the `quarkus-bom` imported by our parent `pom.xml` already includes the Undertow Quarkus extension.
-
-All we need to do is add `quarkus-undertow` as a dependency to our `./greeting-extension/runtime/pom.xml`:
-[source, xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-undertow</artifactId>
-</dependency>
-----
-
-For Gradle, add the dependency in the `./greeting-extension/runtime/build.gradle` file:
-
-[source, groovy]
-----
-    implementation 'io.quarkus:quarkus-undertow'
-----
-
-NOTE: The dependency on `quarkus-arc` generated by the `create-extension` mojo can now be removed since
-`quarkus-undertow` already depends on it.
-
-Now we can create our Servlet `org.acme.greeting.extension.GreetingExtensionServlet` in the `runtime` module.
-
-[source,bash]
-----
-mkdir -p ./greeting-extension/runtime/src/main/java/org/acme/greeting/extension
-----
-
-[source, java]
-----
-package org.acme.greeting.extension;
-
-import javax.servlet.annotation.WebServlet;
-import javax.servlet.http.HttpServlet;
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-import java.io.IOException;
-
-@WebServlet
-public class GreetingExtensionServlet extends HttpServlet { // <1>
-
-    @Override
-    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException { // <2>
-        resp.getWriter().write("Hello");
-    }
-}
-----
-
-<1> As usual, defining a servlet requires extending `javax.servlet.http.HttpServlet`.
-<2> Since we want to respond to the HTTP GET verb, we override the `doGet` method and write `Hello` in the Servlet response's output stream.
-
-=== Deploying the Greeting feature
-
-Quarkus' magic relies on bytecode generation at build time rather than waiting for runtime code evaluation; that's the role of your extension's `deployment` module.
-Calm down, we know, bytecode is hard and you don't want to do it manually; Quarkus proposes a high-level API to make your life easier.
-Using a few basic concepts, you describe the items to produce/consume and the corresponding steps that generate the bytecode produced at deployment time.
-
-The `io.quarkus.builder.item.BuildItem` concept represents object instances you will produce or consume (and at some point convert into bytecode) thanks to methods annotated with `@io.quarkus.deployment.annotations.BuildStep` which describe your extension's deployment tasks.
-
-NOTE: See xref:all-builditems.adoc[the complete list of BuildItem implementations in core] for more information.
-
-
-Go back to the generated `org.acme.greeting.extension.deployment.GreetingExtensionProcessor` class.
- -[source, java] ----- -package org.acme.greeting.extension.deployment; - -import io.quarkus.deployment.annotations.BuildStep; -import io.quarkus.deployment.builditem.FeatureBuildItem; - -class GreetingExtensionProcessor { - - private static final String FEATURE = "greeting-extension"; - - @BuildStep // <1> - FeatureBuildItem feature() { - return new FeatureBuildItem(FEATURE); // <2> - } - -} ----- - -<1> `feature()` method is annotated with `@BuildStep` which means it is identified as a deployment task Quarkus will have to execute during the deployment. -`BuildStep` methods are run concurrently at augmentation time to augment the application. -They use a producer/consumer model, where a step is guaranteed not to be run until all the items that it is consuming have been produced. - -<2> `io.quarkus.deployment.builditem.FeatureBuildItem` is an implementation of `BuildItem` which represents the description of an extension. -This `BuildItem` will be used by Quarkus to display information to the users when the application is starting. - -There are many `BuildItem` implementations, each one represents an aspect of the deployment process. -Here are some examples: - -* `ServletBuildItem`: describes a Servlet (name, path, etc.) we want to generate during the deployment. -* `BeanContainerBuildItem`: describes a container used to store and retrieve object instances during the deployment. - -If you don't find a `BuildItem` for what you want to achieve, you can create your own implementation. Keep in mind that a `BuildItem` should be as fine-grained as possible, representing a specific part of the deployment. -To create your `BuildItem` you can extend: - -* `io.quarkus.builder.item.SimpleBuildItem` if you need only a single instance of the item during the deployment (e.g. `BeanContainerBuildItem`, you only want one container). -* `io.quarkus.builder.item.MultiBuildItem` if you want to have multiple instances (e.g. 
`ServletBuildItem`, you can produce many Servlets during the deployment).
-
-It's now time to declare our HTTP endpoint. To do so, we need to produce a `ServletBuildItem`.
-At this point, you have surely understood that since the `quarkus-undertow` dependency provides Servlet support for our `runtime` module, we will need the `quarkus-undertow-deployment` dependency in our `deployment` module to have access to `io.quarkus.undertow.deployment.ServletBuildItem`.
-
-Let's add `quarkus-undertow-deployment` as a dependency to our `./greeting-extension/deployment/pom.xml`:
-[source, xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-undertow-deployment</artifactId>
-</dependency>
-----
-NOTE: The dependency on `quarkus-arc-deployment` generated by the `create-extension` mojo can now be removed since
-`quarkus-undertow-deployment` already depends on it.
-
-For Gradle, add the dependency in the `./greeting-extension/deployment/build.gradle` file:
-
-[source, groovy]
-----
-    implementation 'io.quarkus:quarkus-undertow-deployment'
-----
-
-We can now update `org.acme.greeting.extension.deployment.GreetingExtensionProcessor`:
-
-[source, java]
-----
-package org.acme.greeting.extension.deployment;
-
-import io.quarkus.deployment.annotations.BuildStep;
-import io.quarkus.deployment.builditem.FeatureBuildItem;
-import org.acme.greeting.extension.GreetingExtensionServlet;
-import io.quarkus.undertow.deployment.ServletBuildItem;
-
-class GreetingExtensionProcessor {
-
-    private static final String FEATURE = "greeting-extension";
-
-    @BuildStep
-    FeatureBuildItem feature() {
-        return new FeatureBuildItem(FEATURE);
-    }
-
-    @BuildStep
-    ServletBuildItem createServlet() { // <1>
-        ServletBuildItem servletBuildItem = ServletBuildItem.builder("greeting-extension", GreetingExtensionServlet.class.getName())
-                .addMapping("/greeting")
-                .build(); // <2>
-        return servletBuildItem;
-    }
-
-}
-----
-
-<1> We add a `createServlet` method which returns a `ServletBuildItem` and annotate it with `@BuildStep`.
-Now, Quarkus will process this new task, which will result in the bytecode generation of the Servlet registration at build time.
-
-<2> `ServletBuildItem` proposes a fluent API to instantiate a Servlet named `greeting-extension` of type `GreetingExtensionServlet` (the class provided by our extension's `runtime` module), and map it to the `/greeting` path.
-
-=== Testing the Greeting Extension feature
-
-When developing a Quarkus extension, you mainly want to test that your feature is properly deployed in an application and works as expected.
-That's why the tests will be hosted in the `deployment` module.
-
-Quarkus proposes facilities to test extensions via the `quarkus-junit5-internal` artifact (which should already be in the deployment pom.xml), in particular the `io.quarkus.test.QuarkusUnitTest` runner which starts an application with your extension.
-
-We will use http://rest-assured.io[RestAssured] (massively used in Quarkus) to test our HTTP endpoint.
-Let's add the `rest-assured` dependency into the `./greeting-extension/deployment/pom.xml`:
-
-[source, xml]
-----
-    ...
-    <dependency>
-        <groupId>io.rest-assured</groupId>
-        <artifactId>rest-assured</artifactId>
-        <scope>test</scope>
-    </dependency>
-----
-
-For Gradle, add the dependency in the `./greeting-extension/deployment/build.gradle` file:
-
-[source, groovy]
-----
-    ...
-    testImplementation 'io.rest-assured:rest-assured'
-----
-
-
-The `create-extension` Maven Mojo can create the test and integration-test structure (drop the `-DwithoutTests`).
Here, we'll create it ourselves:

[source,bash]
----
mkdir -p ./greeting-extension/deployment/src/test/java/org/acme/greeting/extension/deployment
----

To start testing your extension, create the following `org.acme.greeting.extension.deployment.GreetingExtensionTest` test class:

[source, java]
----
package org.acme.greeting.extension.deployment;

import io.quarkus.test.QuarkusUnitTest;
import io.restassured.RestAssured;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

import static org.hamcrest.Matchers.containsString;

public class GreetingExtensionTest {

    @RegisterExtension
    static final QuarkusUnitTest config = new QuarkusUnitTest()
            .withEmptyApplication(); // <1>

    @Test
    public void testGreeting() {
        RestAssured.when().get("/greeting").then().statusCode(200).body(containsString("Hello")); // <2>
    }

}
----

<1> We register a JUnit extension which starts a Quarkus application with the Greeting extension.
<2> We verify that the application has a `greeting` endpoint responding to an HTTP GET request with an OK status (200) and a plain text body containing `Hello`.

Time to test and install into our local Maven repository!

[source,shell,subs=attributes+]
----
$ mvn clean install
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] Greeting Extension - Parent [pom]
[INFO] Greeting Extension - Runtime [jar]
[INFO] Greeting Extension - Deployment [jar]
[INFO]
[INFO] -----------------< org.acme:greeting-extension-parent >-----------------
[INFO] Building Greeting Extension - Parent 1.0.0-SNAPSHOT [1/3]
[INFO] --------------------------------[ pom ]---------------------------------
...
-[INFO] ------------------------------------------------------- -[INFO] T E S T S -[INFO] ------------------------------------------------------- -[INFO] Running org.acme.greeting.extension.deployment.GreetingExtensionTest -2021-01-27 10:24:42,506 INFO [io.quarkus] (main) Quarkus {quarkus-version} on JVM started in 0.470s. Listening on: http://localhost:8081 -2021-01-27 10:24:42,508 INFO [io.quarkus] (main) Profile test activated. -2021-01-27 10:24:42,508 INFO [io.quarkus] (main) Installed features: [cdi, greeting-extension, servlet] -2021-01-27 10:24:43,764 INFO [io.quarkus] (main) Quarkus stopped in 0.018s -[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.799 s - in org.acme.greeting.extension.deployment.GreetingExtensionTest -[INFO] -[INFO] Results: -[INFO] -[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 -[INFO] -[INFO] -[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ greeting-extension-deployment --- -[INFO] Building jar: /Users/ia3andy/workspace/redhat/quarkus/demo/greeting-extension/deployment/target/greeting-extension-deployment-1.0.0-SNAPSHOT.jar -[INFO] -[INFO] --- maven-install-plugin:2.4:install (default-install) @ greeting-extension-deployment --- -... -[INFO] ------------------------------------------------------------------------ -[INFO] Reactor Summary for Greeting Extension - Parent 1.0.0-SNAPSHOT: -[INFO] -[INFO] Greeting Extension - Parent ........................ SUCCESS [ 0.303 s] -[INFO] Greeting Extension - Runtime ....................... SUCCESS [ 3.345 s] -[INFO] Greeting Extension - Deployment .................... SUCCESS [ 7.365 s] -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ -[INFO] Total time: 11.246 s -[INFO] Finished at: 2021-01-27T10:24:44+01:00 -[INFO] ------------------------------------------------------------------------ ----- - -Looks good! 
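While iterating on the extension you may not want to run the full `install` every time. Assuming the standard Surefire `-Dtest` filter, one possible invocation runs just this test from the extension's root (the `-am` flag also builds the `runtime` module that the `deployment` module depends on):

[source,bash]
----
cd greeting-extension
mvn test -Dtest=GreetingExtensionTest -pl deployment -am
----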
Congratulations, you just finished your first extension!

=== Debugging your extension

_If debugging is the process of removing bugs, then programming must be the process of putting them in._
Edsger W. Dijkstra


==== Debugging your application build

Since your extension's deployment happens during the application build, this process is triggered by your build tool.
That means that if you want to debug this phase, you need to launch your build tool with remote debug mode switched on.

===== Maven

You can activate Maven remote debugging by using `mvnDebug`.
You can launch your application with the following command line:

[source,bash]
----
mvnDebug clean compile quarkus:dev
----

By default, Maven will wait for a connection on `localhost:8000`.
Now, you can run your IDE `Remote` configuration to attach it to `localhost:8000`.

===== Gradle

You can activate Gradle remote debugging by using the flags `org.gradle.debug=true` or `org.gradle.daemon.debug=true` in daemon mode.
You can launch your application with the following command line:

[source,bash]
----
./gradlew quarkusDev -Dorg.gradle.daemon.debug=true
----

By default, Gradle will wait for a connection on `localhost:5005`.
Now, you can run your IDE `Remote` configuration to attach it to `localhost:5005`.


==== Debugging your extension tests

We have seen how to test your extension, but sometimes things don't go so well and you need to debug your tests.
The same principle applies here: the trick is to enable Maven Surefire remote debugging in order to attach an IDE `Remote` configuration.

[source,shell]
----
cd ./greeting-extension
mvn clean test -Dmaven.surefire.debug
----

By default, Maven will wait for a connection on `localhost:5005`.

=== Time to use your new extension

Now that you have finished building your first extension, you should be eager to use it in a Quarkus application!
- -*Classic Maven publication* - -If not already done in the previous step, you should install the `greeting-extension` in your local repository: -[source,shell] ----- -cd ./greeting-extension -mvn clean install ----- - -Then from another directory, use our tooling to create a new `greeting-app` Quarkus application with your new extension: -[source,bash, subs=attributes+] ----- -mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create \ - -DprojectGroupId=org.acme \ - -DprojectArtifactId=greeting-app \ - -Dextensions="org.acme:greeting-extension:1.0.0-SNAPSHOT" \ - -DnoCode ----- - -`cd` into `greeting-app`. - -NOTE: The `greeting-extension` extension has to be installed in the local Maven repository to be usable in the application. - - -Run the application and notice the `Installed Features` list contains the `greeting-extension` extension. - -[source,shell,subs=attributes+] ----- -$ mvn clean compile quarkus:dev -[INFO] Scanning for projects... -[INFO] -[INFO] -----------------------< org.acme:greeting-app >------------------------ -[INFO] Building greeting-app 1.0.0-SNAPSHOT -[INFO] --------------------------------[ jar ]--------------------------------- -[INFO] -[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ greeting-app --- -[INFO] -[INFO] --- quarkus-maven-plugin:{quarkus-version}:generate-code (default) @ greeting-app --- -[INFO] -[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ greeting-app --- -[INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ greeting-app ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- quarkus-maven-plugin:{quarkus-version}:dev (default-cli) @ greeting-app ---
Listening for transport dt_socket at address: 5005
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2021-01-27 10:28:07,240 INFO [io.quarkus] (Quarkus Main Thread) greeting-app 1.0.0-SNAPSHOT on JVM (powered by Quarkus {quarkus-version}) started in 0.531s. Listening on: http://localhost:8080
2021-01-27 10:28:07,242 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
2021-01-27 10:28:07,243 INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, greeting-extension, servlet]
----

From an extension developer standpoint, the Maven publication strategy is very handy and fast, but Quarkus wants to go one step further by also ensuring the reliability of the ecosystem for the people who will use the extensions.
Think about it: we have all had a poor developer experience with an unmaintained library or an incompatibility between dependencies (not to mention legal issues).
That's why there is the Quarkus Platform.

*Quarkus Platform*

The Quarkus platform is a set of extensions that target the primary use cases of Quarkus as a development framework and can safely be used in any combination in the same application without creating a dependency conflict.
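Concretely, an application consumes a platform by importing its BOM in the `<dependencyManagement>` section of `pom.xml`. A minimal sketch:

[source,xml,subs=attributes+]
----
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.quarkus.platform</groupId>
            <artifactId>quarkus-bom</artifactId>
            <version>{quarkus-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
----

Extensions declared afterwards can then omit their `<version>` element, since the imported BOM pins them to mutually compatible versions.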
From an application developer perspective, a Quarkus platform is represented as one or more Maven BOMs, for example `io.quarkus.platform:quarkus-bom:{quarkus-version}`, `io.quarkus.platform:quarkus-kogito-bom:{quarkus-version}`, etc., whose dependency version constraints were globally aligned so that these BOMs can be imported in the same application in any order without introducing a dependency conflict.

*Quarkiverse Hub*

link:https://github.com/quarkiverse[Quarkiverse Hub] is the GitHub organization that provides repository hosting (including build, CI and release publishing setup) for Quarkus extension projects contributed by the community.

If you are thinking about creating a new Quarkus extension and adding it to the Quarkus ecosystem so that the Quarkus community can discover it using the Quarkus dev tools (including the https://quarkus.io/guides/cli-tooling[Quarkus CLI] and https://code.quarkus.io[code.quarkus.io]), the https://github.com/quarkiverse[Quarkiverse Hub] GitHub organization is a good home for it.

You can get started by creating an link:https://github.com/quarkusio/quarkus/issues/new/choose[Extension Request] issue (check link:https://github.com/quarkusio/quarkus/labels/kind%2Fextension-proposal[here] first to make sure one hasn't already been submitted) and asking to lead it.

We'll take care of provisioning a new repository and setting it up to:

- Be supported by our tooling;
- Publish the documentation you produce for your extension to the Quarkiverse website;
- Configure your extension to use the link:https://github.com/quarkusio/quarkus-ecosystem-ci#quarkus-ecosystem-ci[Quarkus Ecosystem CI] to build against the latest Quarkus Core changes;
- Give you the freedom to manage the project and release to Maven Central as you like.

NOTE: Extensions hosted in the Quarkiverse Hub may or may not end up in the Quarkus platform.

For more information, check link:https://github.com/quarkiverse/quarkiverse/wiki[the Quarkiverse Wiki] and link:https://quarkus.io/blog/quarkiverse/[this blog post].

== Conclusion

Creating new extensions may appear to be an intricate task at first, but once you understand the game-changing Quarkus paradigm (build time vs. runtime), the structure of an extension makes perfect sense.

As usual, Quarkus simplifies things under the hood along the way (Maven Mojo, bytecode generation, testing) to make it pleasant to develop new features.

== Further reading

- xref:writing-extensions.adoc[Writing your own extension] for the full documentation.
- xref:dev-ui.adoc[Quarkus Dev UI] to learn how to support the Dev UI in your extension.

diff --git a/_versions/2.7/guides/building-native-image.adoc b/_versions/2.7/guides/building-native-image.adoc
deleted file mode 100644
index 649093217ff..00000000000
--- a/_versions/2.7/guides/building-native-image.adoc
+++ /dev/null
@@ -1,809 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Building a Native Executable

include::./attributes.adoc[]

This guide covers:

* Compiling the application to a native executable
* Packaging the native executable in a container
* Debugging the native executable

This guide takes as input the application developed in the xref:getting-started.adoc[Getting Started Guide].

== GraalVM

Building a native executable requires using a distribution of GraalVM.
There are three distributions:
Oracle GraalVM Community Edition (CE), Oracle GraalVM Enterprise Edition (EE) and Mandrel.
The differences between the Oracle and Mandrel distributions are as follows:

* Mandrel is a downstream distribution of the Oracle GraalVM CE.
Mandrel's main goal is to provide a way to build native executables specifically designed to support Quarkus.

* Mandrel releases are built from a code base derived from the upstream Oracle GraalVM CE code base,
with only minor changes but some significant exclusions that are not necessary for Quarkus native apps.
They support the same capabilities to build native executables as Oracle GraalVM CE,
with no significant changes to functionality.
Notably, they do not include support for polyglot programming.
The reason for these exclusions is to provide a better level of support for the majority of Quarkus users.
These exclusions also mean Mandrel offers a considerable reduction in its distribution size
when compared with Oracle GraalVM CE/EE.

* Mandrel is built slightly differently from Oracle GraalVM CE, using the standard OpenJDK project.
This means that it does not profit from a few small enhancements that Oracle has added to the version of OpenJDK used to build their own GraalVM downloads.
These enhancements are omitted because upstream OpenJDK does not manage them and cannot vouch for them.
This is particularly important when it comes to conformance and security.

* Mandrel is recommended for building native executables that target Linux containerized environments.
This means that Mandrel users are encouraged to use containers to build their native executables.
If you are building native executables for macOS,
you should consider using Oracle GraalVM instead,
because Mandrel does not currently target this platform.
Building native executables directly on bare metal Linux or Windows is possible,
with details available in the https://github.com/graalvm/mandrel/blob/default/README.md[Mandrel README]
and https://github.com/graalvm/mandrel/releases[Mandrel releases].

== Prerequisites

:prerequisites-docker:
:prerequisites-graalvm-mandatory:
include::includes/devtools/prerequisites.adoc[]
* A xref:configuring-c-development[working C development environment]
* The code of the application developed in the xref:getting-started.adoc[Getting Started Guide].

.Supporting native compilation in C
[[configuring-c-development]]
[NOTE]
====
What does having a working C development environment mean?

* On Linux, you will need GCC, and the glibc and zlib headers. Examples for common distributions:
+
[source,bash]
----
# dnf (rpm-based)
sudo dnf install gcc glibc-devel zlib-devel libstdc++-static
# Debian-based distributions:
sudo apt-get install build-essential libz-dev zlib1g-dev
----
* XCode provides the required dependencies on macOS:
+
[source,bash]
----
xcode-select --install
----
* On Windows, you will need to install the https://aka.ms/vs/15/release/vs_buildtools.exe[Visual Studio 2017 Visual C++ Build Tools].
====

[[configuring-graalvm]]
=== Configuring GraalVM

[TIP]
====
If you cannot install GraalVM, you can use a multi-stage Docker build to run Maven inside a Docker container that embeds GraalVM. There is an explanation of how to do this at the end of this guide.
====

Version {graalvm-version} is required. Using the community edition is enough.

1. Install GraalVM if you haven't already. You have a few options for this:
** Download the appropriate archive from the GraalVM or Mandrel release pages, and unpack it like you would any other JDK.
** Use platform-specific install tools like https://sdkman.io/jdks#Oracle[sdkman], https://github.com/graalvm/homebrew-tap[homebrew], or https://github.com/ScoopInstaller/Java[scoop].
2. Configure the runtime environment.
Set `GRAALVM_HOME` environment variable to the GraalVM installation directory, for example: -+ -[source,bash] ----- -export GRAALVM_HOME=$HOME/Development/graalvm/ ----- -+ -On macOS (not supported by Mandrel), point the variable to the `Home` sub-directory: -+ -[source,bash] ----- -export GRAALVM_HOME=$HOME/Development/graalvm/Contents/Home/ ----- -+ -On Windows, you will have to go through the Control Panel to set your environment variables. -+ -[TIP] -==== -Installing via scoop will do this for you. -==== -3. (Only for Oracle GraalVM CE/EE) Install the `native-image` tool using `gu install`: -+ -[source,bash] ----- -${GRAALVM_HOME}/bin/gu install native-image ----- -+ -Some previous releases of GraalVM included the `native-image` tool by default. This is no longer the case; it must be installed as a second step after GraalVM itself is installed. Note: there is an outstanding issue xref:graal-and-catalina[using GraalVM with macOS Catalina]. -4. (Optional) Set the `JAVA_HOME` environment variable to the GraalVM installation directory. -+ -[source,bash] ----- -export JAVA_HOME=${GRAALVM_HOME} ----- -5. (Optional) Add the GraalVM `bin` directory to the path -+ -[source,bash] ----- -export PATH=${GRAALVM_HOME}/bin:$PATH ----- - -[[graal-and-catalina]] -.Issues using GraalVM with macOS Catalina -[NOTE] -==== -GraalVM binaries are not (yet) notarized for macOS Catalina as reported in this https://github.com/oracle/graal/issues/1724[GraalVM issue]. This means that you may see the following error when using `gu`: - -[source,bash] ----- -“gu” cannot be opened because the developer cannot be verified ----- - -Use the following command to recursively delete the `com.apple.quarantine` extended attribute on the GraalVM install directory as a workaround: - -[source,bash] ------ -xattr -r -d com.apple.quarantine ${GRAALVM_HOME}/../.. ------ -==== - -== Solution - -We recommend that you follow the instructions in the next sections and package the application step by step. 
However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `getting-started` directory.

== Producing a native executable

The native executable for our application will contain the application code, required libraries, Java APIs, and a reduced version of a VM. The smaller VM base improves the startup time of the application and produces a minimal disk footprint.

image:native-executable-process.png[Creating a native executable]

If you have generated the application from the previous tutorial, you can find in the `pom.xml` the following _profile_:

[source,xml]
----
<profiles>
    <profile>
        <id>native</id>
        <properties>
            <quarkus.package.type>native</quarkus.package.type>
        </properties>
    </profile>
</profiles>
----

[TIP]
====
You can provide custom options for the `native-image` command using the `quarkus.native.additional-build-args` property.
Multiple options may be separated by a comma.

Another possibility is to include the `quarkus.native.additional-build-args` configuration property in your `application.properties`.

You can find more information about how to configure the native image building process in the section on configuring the native executable below.
====

We use a profile because, as you will see very soon, packaging the native executable takes a _few_ minutes. You could
just pass `-Dquarkus.package.type=native` as a property on the command line, however it is better to use a profile as
this allows native image tests to also be run.

Create a native executable using:

include::includes/devtools/build-native.adoc[]

[[graal-and-windows]]
[NOTE]
.Issues with packaging on Windows
====
The Microsoft Native Tools for Visual Studio must first be initialized before packaging. You can do this by starting
the `x64 Native Tools Command Prompt` that was installed with the Visual Studio Build Tools. From the
`x64 Native Tools Command Prompt`, you can navigate to your project folder and run `mvnw package -Pnative`.

Another solution is to write a script to do this for you:

[source,bash]
----
cmd /c 'call "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Auxiliary\Build\vcvars64.bat" && mvn package -Pnative'
----
====

In addition to the regular files, the build also produces `target/getting-started-1.0.0-SNAPSHOT-runner`.
You can run it using: `./target/getting-started-1.0.0-SNAPSHOT-runner`.

[[graal-package-preview]]
[NOTE]
.Java preview features
====
Java code that relies on preview features requires special attention.
To produce a native executable, this means that the `--enable-preview` flag needs to be passed to the underlying native image invocation.
You can do so by prepending the flag with `-J` and passing it as an additional native build argument: `-Dquarkus.native.additional-build-args=-J--enable-preview`.
====

== Testing the native executable

Producing a native executable can lead to a few issues, so it's also a good idea to run some tests against the application running as a native executable.

In the `pom.xml` file, the `native` profile contains:

[source, xml]
----
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>${surefire-plugin.version}</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
            <configuration>
                <systemPropertyVariables>
                    <native.image.path>${project.build.directory}/${project.build.finalName}-runner</native.image.path>
                    <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
                    <maven.home>${maven.home}</maven.home>
                </systemPropertyVariables>
            </configuration>
        </execution>
    </executions>
</plugin>
----

This instructs the Maven Failsafe Plugin to run the integration tests and indicates the location of the produced native executable.

Then, open `src/test/java/org/acme/quickstart/NativeGreetingResourceIT.java`. It contains:

[source,java]
----
package org.acme.quickstart;


import io.quarkus.test.junit.NativeImageTest;

@NativeImageTest // <1>
public class NativeGreetingResourceIT extends GreetingResourceTest { // <2>

    // Run the same tests

}
----
<1> Use another test runner that starts the application from the native executable before running the tests.
-The executable is retrieved using the `native.image.path` system property configured in the _Failsafe Maven Plugin_. -<2> We extend our previous tests, but you can also implement your tests - -To see the `NativeGreetingResourceIT` run against the native executable, use `./mvnw verify -Pnative`: -[source,shell] ----- -$ ./mvnw verify -Pnative -... -[getting-started-1.0.0-SNAPSHOT-runner:18820] universe: 587.26 ms -[getting-started-1.0.0-SNAPSHOT-runner:18820] (parse): 2,247.59 ms -[getting-started-1.0.0-SNAPSHOT-runner:18820] (inline): 1,985.70 ms -[getting-started-1.0.0-SNAPSHOT-runner:18820] (compile): 14,922.77 ms -[getting-started-1.0.0-SNAPSHOT-runner:18820] compile: 20,361.28 ms -[getting-started-1.0.0-SNAPSHOT-runner:18820] image: 2,228.30 ms -[getting-started-1.0.0-SNAPSHOT-runner:18820] write: 364.35 ms -[getting-started-1.0.0-SNAPSHOT-runner:18820] [total]: 52,777.76 ms -[INFO] -[INFO] --- maven-failsafe-plugin:2.22.1:integration-test (default) @ getting-started --- -[INFO] -[INFO] ------------------------------------------------------- -[INFO] T E S T S -[INFO] ------------------------------------------------------- -[INFO] Running org.acme.quickstart.NativeGreetingResourceIT -Executing [/data/home/gsmet/git/quarkus-quickstarts/getting-started/target/getting-started-1.0.0-SNAPSHOT-runner, -Dquarkus.http.port=8081, -Dtest.url=http://localhost:8081, -Dquarkus.log.file.path=build/quarkus.log] -2019-04-15 11:33:20,348 INFO [io.quarkus] (main) Quarkus 999-SNAPSHOT started in 0.002s. Listening on: http://[::]:8081 -2019-04-15 11:33:20,348 INFO [io.quarkus] (main) Installed features: [cdi, resteasy] -[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.387 s - in org.acme.quickstart.NativeGreetingResourceIT -... ----- - -[TIP] -==== -By default, Quarkus waits for 60 seconds for the native image to start before automatically failing the native tests. This -duration can be changed using the `quarkus.test.wait-time` system property. 
For example, to increase the duration
to 300 seconds, use: `./mvnw verify -Pnative -Dquarkus.test.wait-time=300`.
====

[WARNING]
====
In the future, `@NativeImageTest` will be deprecated in favor of `@QuarkusIntegrationTest`, which provides a superset of the testing
capabilities of `@NativeImageTest`. More information about `@QuarkusIntegrationTest` can be found in the xref:getting-started-testing.adoc#quarkus-integration-test[Testing Guide].
====

By default, native tests run using the `prod` profile.
This can be overridden using the `quarkus.test.native-image-profile` property.
For example, in your `application.properties` file, add: `quarkus.test.native-image-profile=test`.
Alternatively, you can run your tests with: `./mvnw verify -Pnative -Dquarkus.test.native-image-profile=test`.
However, don't forget that when the native executable is built, the `prod` profile is enabled.
So, the profile you enable this way must be compatible with the produced executable.

[[graal-test-preview]]
[NOTE]
.Java preview features
====
Java code that relies on preview features requires special attention.
To test a native executable, this means that the `--enable-preview` flag needs to be passed to the Surefire plugin.
Adding `--enable-preview` to its `configuration` section is one way to do so.
====

=== Excluding tests when running as a native executable

When running tests this way, the only things that actually run natively are your application endpoints, which
you can only test via HTTP calls. Your test code does not actually run natively, so if you are testing code
that does not call your HTTP endpoints, it's probably not a good idea to run those tests as part of the native tests.

If you share your test class between JVM and native executions as we advise above, you can mark certain tests
with the `@DisabledOnNativeImage` annotation in order to only run them on the JVM.
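A minimal sketch of such a shared test class (the endpoint and helper tests are illustrative, not part of the quickstart):

[source,java]
----
package org.acme.quickstart;

import io.quarkus.test.junit.DisabledOnNativeImage;
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;

@QuarkusTest
public class GreetingResourceTest {

    @Test
    public void testHelloEndpoint() {
        // Exercises the application over HTTP, so it also works
        // when the class is reused by NativeGreetingResourceIT
        given().when().get("/hello").then().statusCode(200);
    }

    @Test
    @DisabledOnNativeImage // runs on the JVM only
    public void testInternalHelper() {
        // Calls application code in-process; this cannot run
        // against the native binary, so it is skipped there
    }
}
----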


=== Testing an existing native executable

It is also possible to re-run the tests against a native executable that has already been built. To do this, run
`./mvnw test-compile failsafe:integration-test`. This will discover the existing native image and run the tests against it using
Failsafe.

If the process cannot find the native image for some reason, or you want to test a native image that is no longer in the
target directory, you can specify the executable with the `-Dnative.image.path=` system property.

[#container-runtime]
== Creating a Linux executable without GraalVM installed

IMPORTANT: Before going further, be sure to have a working container runtime (Docker, podman) environment. If you use Docker
on Windows, you should share your project's drive in the Docker Desktop file sharing settings and restart Docker Desktop.

Quite often one only needs to create a native Linux executable for their Quarkus application (for example in order to run in a containerized environment) and would like to avoid
the trouble of installing the proper GraalVM version in order to accomplish this task (for example, in CI environments it's common practice
to install as little software as possible).

To this end, Quarkus provides a very convenient way of creating a native Linux executable by leveraging a container runtime such as Docker or podman.
The easiest way of accomplishing this task is to execute:

include::includes/devtools/build-native-container.adoc[]

[TIP]
====
By default, Quarkus automatically detects the container runtime.
If you want to explicitly select the container runtime, you can do it with:

For Docker:

:build-additional-parameters: -Dquarkus.native.container-runtime=docker
include::includes/devtools/build-native-container-parameters.adoc[]
:!build-additional-parameters:

For podman:

:build-additional-parameters: -Dquarkus.native.container-runtime=podman
include::includes/devtools/build-native-container-parameters.adoc[]
:!build-additional-parameters:

These are normal Quarkus config properties, so if you always want to build in a container,
it is recommended that you add these to your `application.properties` in order to avoid specifying them every time.
====

[TIP]
====
If you see the following invalid path error for your application JAR when trying to create a native executable using a container build, even though your JAR was built successfully, you're most likely using a remote daemon for your container runtime.
----
Error: Invalid Path entry getting-started-1.0.0-SNAPSHOT-runner.jar
Caused by: java.nio.file.NoSuchFileException: /project/getting-started-1.0.0-SNAPSHOT-runner.jar
----
In this case, use the parameter `-Dquarkus.native.remote-container-build=true` instead of `-Dquarkus.native.container-build=true`.

The reason for this is that the local build driver invoked through `-Dquarkus.native.container-build=true` uses volume mounts to make the JAR available in the build container, but volume mounts do not work with remote daemons. The remote container build driver copies the necessary files instead of mounting them. Note that even though the remote driver also works with local daemons, the local driver should be preferred in the local case because mounting is usually more performant than copying.
-==== - -[TIP] -==== -Building with Mandrel requires a custom builder image parameter to be passed additionally: - -:build-additional-parameters: -Dquarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-mandrel:{mandrel-flavor} -include::includes/devtools/build-native-container-parameters.adoc[] -:!build-additional-parameters: - -Please note that the above command points to a floating tag. -It is highly recommended to use the floating tag, -so that your builder image remains up-to-date and secure. -If you absolutely must, you may hard-code to a specific tag -(see https://quay.io/repository/quarkus/ubi-quarkus-mandrel?tab=tags[here] for available tags), -but be aware that you won't get security updates that way and it's unsupported. -==== - -== Creating a container - -=== Using the container-image extensions - -By far the easiest way to create a container-image from your Quarkus application is to leverage one of the container-image extensions. - -If one of those extensions is present, then creating a container image for the native executable is essentially a matter of executing a single command: - -[source,bash] ----- -./mvnw package -Pnative -Dquarkus.native.container-build=true -Dquarkus.container-image.build=true ----- - -* `quarkus.native.container-build=true` allows for creating a Linux executable without GraalVM being installed (and is only necessary if you don't have GraalVM installed locally or your local operating system is not Linux) -* `quarkus.container-image.build=true` instructs Quarkus to create a container-image using the final application artifact (which is the native executable in this case) - -See the xref:container-image.adoc[Container Image guide] for more details. - -=== Manually using the micro base image - -You can run the application in a container using the JAR produced by the Quarkus Maven Plugin. -However, in this section we focus on creating a container image using the produced native executable. 

image:containerization-process.png[Containerization Process]

When using a local GraalVM installation, the native executable targets your local operating system (Linux, macOS, Windows, etc.).
However, as a container may not use the same _executable_ format as the one produced by your operating system,
we will instruct the Maven build to produce an executable by leveraging a container runtime (as described in <<#container-runtime,this section>>).

The produced executable will be a 64-bit Linux executable, so depending on your operating system it may no longer be runnable.
However, that's not an issue, as we are going to copy it to a container.
The project generation has provided a `Dockerfile.native-micro` in the `src/main/docker` directory with the following content:

[source,dockerfile]
----
FROM quay.io/quarkus/quarkus-micro-image:1.0
WORKDIR /work/
COPY target/*-runner /work/application
RUN chmod 775 /work
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
----

[NOTE]
.Quarkus Micro Image?
====
The Quarkus Micro Image is a small container image providing the right set of dependencies to run your native application.
It is based on https://catalog.redhat.com/software/containers/ubi8-micro/601a84aadd19c7786c47c8ea?container-tabs=overview[UBI Micro].
This base image has been tailored to work perfectly in containers.

You can read more about UBI images on:

* https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image[Introduction to Universal Base Image]
* https://catalog.redhat.com/software/container-stacks/detail/5ec53f50ef29fd35586d9a56[Red Hat Universal Base Image 8]

UBI images can be used without any limitations.

xref:quarkus-runtime-base-image.adoc[This page] explains how to extend the `quarkus-micro` image when your application has specific requirements.
-==== - -Then, if you didn't delete the generated native executable, you can build the Docker image with: - -[source,bash] ----- -docker build -f src/main/docker/Dockerfile.native-micro -t quarkus-quickstart/getting-started . ----- - -And finally, run it with: - -[source,bash] ----- -docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started ----- - -=== Manually using the minimal base image - -The project generation has also provided a `Dockerfile.native` in the `src/main/docker` directory with the following content: - -[source,dockerfile] ----- -FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 -WORKDIR /work/ -COPY target/*-runner /work/application -RUN chmod 775 /work -EXPOSE 8080 -CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] ----- - -The UBI minimal image is bigger than the micro one mentioned above. -It contains more utilities such as the `microdnf` package manager. - -[#multistage-docker] -=== Using a multi-stage Docker build - -The previous sections showed you how to containerize a native executable, but they require you to have created the native executable first. -In addition, this native executable must be a 64-bit Linux executable. - -You may want to build the native executable directly in a container without having a final container containing the build tools. -That approach is possible with a multi-stage Docker build: - -1. The first stage builds the native executable using Maven or Gradle -2. 
The second stage is a minimal image copying the produced native executable - -Such a multi-stage build can be achieved as follows: - -Sample Dockerfile for building with Maven: -[source,dockerfile,subs=attributes+] ----- -## Stage 1 : build with maven builder image with native capabilities -FROM quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor} AS build -COPY --chown=quarkus:quarkus mvnw /code/mvnw -COPY --chown=quarkus:quarkus .mvn /code/.mvn -COPY --chown=quarkus:quarkus pom.xml /code/ -USER quarkus -WORKDIR /code -RUN ./mvnw -B org.apache.maven.plugins:maven-dependency-plugin:3.1.2:go-offline -COPY src /code/src -RUN ./mvnw package -Pnative - -## Stage 2 : create the docker final image -FROM quay.io/quarkus/quarkus-micro-image:1.0 -WORKDIR /work/ -COPY --from=build /code/target/*-runner /work/application - -# set up permissions for user `1001` -RUN chmod 775 /work /work/application \ - && chown -R 1001 /work \ - && chmod -R "g+rwX" /work \ - && chown -R 1001:root /work - -EXPOSE 8080 -USER 1001 - -CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] ----- - -NOTE: This multi-stage Docker build copies the Maven wrapper from the host machine. -The Maven wrapper (or the Gradle wrapper) is a convenient way to provide a specific version of Maven/Gradle. -It avoids having to create a base image with Maven and Gradle. -To provision the Maven Wrapper in your project, use: `mvn -N org.apache.maven.plugins:maven-wrapper-plugin:3.1.0:wrapper`. - -Save this file in `src/main/docker/Dockerfile.multistage` as it is not included in the getting started quickstart. 
- -Sample Dockerfile for building with Gradle: -[source,dockerfile,subs=attributes+] ----- -## Stage 1 : build with gradle builder image with native capabilities -FROM quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor} AS build -COPY --chown=quarkus:quarkus gradlew /code/gradlew -COPY --chown=quarkus:quarkus gradle /code/gradle -COPY --chown=quarkus:quarkus build.gradle /code/ -COPY --chown=quarkus:quarkus settings.gradle /code/ -COPY --chown=quarkus:quarkus gradle.properties /code/ -USER quarkus -WORKDIR /code -COPY src /code/src -RUN gradle -b /code/build.gradle buildNative - -## Stage 2 : create the docker final image -FROM quay.io/quarkus/quarkus-micro-image:1.0 -WORKDIR /work/ -COPY --from=build /code/build/*-runner /work/application -RUN chmod 775 /work -EXPOSE 8080 -CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] ----- - -If you are using Gradle in your project, you can use this sample Dockerfile. Save it in `src/main/docker/Dockerfile.multistage`. - -[WARNING] -==== -Before launching our Docker build, we need to update the default `.dockerignore` file, as it filters everything except the `target` directory. As we plan to build inside a container, we need to copy the `src` directory. Thus, edit your `.dockerignore` and adjust its rules so that the `src` directory and the build files are part of the Docker build context. -==== - -[source,bash] ----- -docker build -f src/main/docker/Dockerfile.multistage -t quarkus-quickstart/getting-started . ----- - -And, finally, run it with: - -[source,bash] ----- -docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started ----- - -[TIP] -==== -If you need SSL support in your native executable, you can easily include the necessary libraries in your Docker image. - -Please see xref:native-and-ssl.adoc#working-with-containers[our Using SSL With Native Executables guide] for more information. -==== - -NOTE: To use Mandrel instead of GraalVM CE, update the `FROM` clause to: `FROM quay.io/quarkus/ubi-quarkus-mandrel:$TAG AS build`. 
-`$TAG` can be found on the https://quay.io/repository/quarkus/ubi-quarkus-mandrel?tab=tags[Quarkus Mandrel Images Tags page]. - -=== Using a Distroless base image - -IMPORTANT: Distroless image support is experimental. - -If you are looking for small container images, the https://github.com/GoogleContainerTools/distroless[distroless] approach reduces the size of the base layer. -The idea behind _distroless_ is to use a single, minimal base image containing all the requirements, and sometimes even the application itself. - -Quarkus provides a distroless base image that you can use in your `Dockerfile`. -You only need to copy your application, and you are done: - -[source, dockerfile] ----- -FROM quay.io/quarkus/quarkus-distroless-image:1.0 -COPY target/*-runner /application - -EXPOSE 8080 -USER nonroot - -CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] ----- - -Quarkus provides the `quay.io/quarkus/quarkus-distroless-image:1.0` image. -It contains the required packages to run a native executable and is only **9 MB**. -Just add your application on top of this image, and you will get a tiny container image. - -Distroless images should not be used in production without rigorous testing. - -=== Native executable compression - -Quarkus can compress the produced native executable using UPX. -More details in the xref:./upx.adoc[UPX Compression documentation]. - -=== Separating Java and native image compilation - -In certain circumstances, you may want to build the native image in a separate step. -For example, in a CI/CD pipeline, you may want to have one step to generate the sources that will be used for the native image generation and another step to use these sources to actually build the native executable. -For this use case, you can set `quarkus.package.type=native-sources`. -This executes the Java compilation as if you had started a native compilation (`-Pnative`), but stops before triggering the actual call to GraalVM's `native-image`. 
- -[source,bash] ----- -$ ./mvnw clean package -Dquarkus.package.type=native-sources ----- - -After compilation has finished, you find the build artifact in `target/native-sources`: - -[source,bash] ----- -$ cd target/native-sources -$ ls -native-image.args getting-started-1.0.0-SNAPSHOT-runner.jar lib ----- - -From the output above, one can see that, in addition to the produced JAR file and the associated lib directory, a text file named `native-image.args` was created. -This file holds all parameters (including the name of the JAR to compile) to pass along to GraalVM's `native-image` command. -If you have GraalVM installed, you can start the native compilation by executing: - -[source,bash] ----- -$ cd target/native-sources -$ native-image $(cat native-image.args) -... -$ ls -native-image.args -getting-started-1.0.0-SNAPSHOT-runner -getting-started-1.0.0-SNAPSHOT-runner.build_artifacts.txt -getting-started-1.0.0-SNAPSHOT-runner.jar ----- - -The process for Gradle is analogous. - -Running the build process in a container is also possible: - -[source,bash,subs=attributes+] ----- -cd target/native-sources -docker run \ - -it \ - --rm \ - -v $(pwd):/work \ <1> - -w /work \ <2> - --entrypoint bin/sh \ - quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor} \ <3> - -c "native-image $(cat native-image.args) -J-Xmx4g" <4> ----- - -<1> Mount the host's directory `target/native-sources` to the container's `/work`. Thus, the generated binary will also be written to this directory. -<2> Switch the working directory to `/work`, which we have mounted in <1>. -<3> Use the `quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}` docker image introduced in <<#multistage-docker,Using a multi-stage Docker build>> to build the native image. -<4> Call `native-image` with the content of file `native-image.args` as arguments. 
We also supply an additional argument to limit the process's maximum memory to 4 GB (this may vary depending on the project being built and the machine building it). - -[WARNING] -==== -If you are running on a Windows machine, please keep in mind that the binary was created within a Linux Docker container. -Hence, the binary will not be executable on the host Windows machine. -==== - -A high-level overview of what the various steps of a CI/CD pipeline would look like is the following: - -1. Register the output of the step executing the `./mvnw ...` command (i.e. the `target/native-sources` directory) as a build artifact, -2. Require this artifact in the step executing the `native-image ...` command, and -3. Register the output of the step executing the `native-image ...` command (i.e. files matching `target/*runner`) as a build artifact. - -The environment executing step `1` only needs Java and Maven (or Gradle) installed, while the environment executing step `3` only needs a GraalVM installation (including the `native-image` feature). - -Depending on what the final desired output of the CI/CD pipeline is, the generated binary might then be used to create a container image. - -== Debugging native executable - -Starting with Oracle GraalVM 20.2 or Mandrel 20.1, -debug symbols for native executables can be generated for Linux environments -(Windows support is still under development, macOS is not supported). -These symbols can be used to debug native executables with tools such as `gdb`. - -To generate debug symbols, -add the `-Dquarkus.native.debug.enabled=true` flag when generating the native executable. -You will find the debug symbols for the native executable in a `.debug` file next to the native executable. - -[NOTE] -==== -The generation of the `.debug` file depends on `objcopy`. 
-On common Linux distributions, you will need to install the `binutils` package: - -[source,bash] ----- -# dnf (rpm-based) -sudo dnf install binutils -# Debian-based distributions -sudo apt-get install binutils ----- - -When `objcopy` is not available, debug symbols are embedded in the executable. -==== - -Aside from debug symbols, -setting the `-Dquarkus.native.debug.enabled=true` flag generates a cache of source files -for any JDK runtime classes, GraalVM classes and application classes resolved during native executable generation. -This source cache is useful for native debugging tools, -to establish the link between the symbols and matching source code. -It provides a convenient way of making just the necessary sources available to the debugger/IDE when debugging a native executable. - -Sources for third-party JAR dependencies, including Quarkus source code, -are not added to the source cache by default. -To include those, make sure you invoke `mvn dependency:sources` first. -This step is required in order to pull the sources for these dependencies, -and get them included in the source cache. - -The source cache is located in the `target/sources` folder. - -[TIP] -==== -If running `gdb` from a different directory than `target`, then the sources can be loaded by running: - -[source,bash] ----- -directory path/to/target ----- - -in the `gdb` prompt. - -Or start `gdb` with: - -[source,bash] ----- -gdb -ex 'directory path/to/target' path/to/target/{project.name}-{project.version}-runner ----- - -e.g., -[source,bash] ----- -gdb -ex 'directory ./target' ./target/getting-started-1.0.0-SNAPSHOT-runner ----- -==== - -For a more detailed guide about debugging native images, please refer to the xref:native-reference.adoc[Native Reference Guide]. - -[[configuration-reference]] -== Configuring the Native Executable - -There are a lot of different configuration options that can affect how the native executable is generated. 
-These are provided in `application.properties`, the same way as any other configuration property. - -The properties are shown below: - -include::{generated-dir}/config/quarkus-native-pkg-native-config.adoc[opts=optional] - -== What's next? - -This guide covered the creation of a native (binary) executable for your application. -It provides an application exhibiting a swift startup time and consuming less memory. -However, there is much more. - -We recommend continuing the journey with the xref:deploying-to-kubernetes.adoc[deployment to Kubernetes and OpenShift]. diff --git a/_versions/2.7/guides/cache.adoc b/_versions/2.7/guides/cache.adoc deleted file mode 100644 index 5c0c42ce648..00000000000 --- a/_versions/2.7/guides/cache.adoc +++ /dev/null @@ -1,833 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Application Data Caching -:extension-status: preview - -include::./attributes.adoc[] - -In this guide, you will learn how to enable application data caching in any CDI managed bean of your Quarkus application. - -include::./status-include.adoc[] - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Scenario - -Let's imagine you want your Quarkus application to expose a REST API that allows users to retrieve the weather forecast for the next three days. -The problem is that you have to rely on an external meteorological service which only accepts requests for one day at a time and takes forever to answer. -Since the weather forecast is updated once every twelve hours, caching the service responses would definitely improve your API performance. - -We'll do that using a single Quarkus annotation. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. 
- -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `cache-quickstart` {quickstarts-tree-url}/cache-quickstart[directory]. - -== Creating the Maven project - -First, we need to create a new Quarkus project with the following command: - -:create-app-artifact-id: cache-quickstart -:create-app-extensions: resteasy,cache,resteasy-jackson -include::includes/devtools/create-app.adoc[] - -This command generates the project and imports the `cache` and `resteasy-jackson` extensions. - -If you already have your Quarkus project configured, you can add the `cache` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: cache -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-cache</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-cache") ----- - -== Creating the REST API - -Let's start by creating a service that will simulate an extremely slow call to the external meteorological service. 
-Create `src/main/java/org/acme/cache/WeatherForecastService.java` with the following content: - -[source,java] ----- -package org.acme.cache; - -import java.time.LocalDate; - -import javax.enterprise.context.ApplicationScoped; - -@ApplicationScoped -public class WeatherForecastService { - - public String getDailyForecast(LocalDate date, String city) { - try { - Thread.sleep(2000L); <1> - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - } - return date.getDayOfWeek() + " will be " + getDailyResult(date.getDayOfMonth() % 4) + " in " + city; - } - - private String getDailyResult(int dayOfMonthModuloFour) { - switch (dayOfMonthModuloFour) { - case 0: - return "sunny"; - case 1: - return "cloudy"; - case 2: - return "chilly"; - case 3: - return "rainy"; - default: - throw new IllegalArgumentException(); - } - } -} ----- -<1> This is where the slowness comes from. - -We also need a class that will contain the response sent to the users when they ask for the next three days' weather forecast. -Create `src/main/java/org/acme/cache/WeatherForecast.java` this way: - -[source,java] ----- -package org.acme.cache; - -import java.util.List; - -public class WeatherForecast { - - private List<String> dailyForecasts; - - private long executionTimeInMs; - - public WeatherForecast(List<String> dailyForecasts, long executionTimeInMs) { - this.dailyForecasts = dailyForecasts; - this.executionTimeInMs = executionTimeInMs; - } - - public List<String> getDailyForecasts() { - return dailyForecasts; - } - - public long getExecutionTimeInMs() { - return executionTimeInMs; - } -} ----- - -Now, we just need to create the REST resource. 
-Create the `src/main/java/org/acme/cache/WeatherForecastResource.java` file with this content: - -[source,java] ----- -package org.acme.cache; - -import java.time.LocalDate; -import java.util.Arrays; -import java.util.List; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.MediaType; - -import org.jboss.resteasy.annotations.jaxrs.QueryParam; - -@Path("/weather") -public class WeatherForecastResource { - - @Inject - WeatherForecastService service; - - @GET - public WeatherForecast getForecast(@QueryParam String city, @QueryParam long daysInFuture) { <1> - long executionStart = System.currentTimeMillis(); - List<String> dailyForecasts = Arrays.asList( - service.getDailyForecast(LocalDate.now().plusDays(daysInFuture), city), - service.getDailyForecast(LocalDate.now().plusDays(daysInFuture + 1L), city), - service.getDailyForecast(LocalDate.now().plusDays(daysInFuture + 2L), city) - ); - long executionEnd = System.currentTimeMillis(); - return new WeatherForecast(dailyForecasts, executionEnd - executionStart); - } -} ----- -<1> If the `daysInFuture` query parameter is omitted, the three-day weather forecast will start from the current day. -Otherwise, it will start from the current day plus the `daysInFuture` value. - -We're all done! Let's check if everything's working. - -First, run the application using dev mode from the project directory: - -include::includes/devtools/dev.adoc[] - -Then, call `http://localhost:8080/weather?city=Raleigh` from a browser. -After six long seconds, the application will answer something like this: - -[source,json] ----- -{"dailyForecasts":["MONDAY will be cloudy in Raleigh","TUESDAY will be chilly in Raleigh","WEDNESDAY will be rainy in Raleigh"],"executionTimeInMs":6001} ----- - -[TIP] -==== -The response content may vary depending on the day you run the code. -==== - -You can try calling the same URL again and again; it will always take six seconds to answer. 
- -== Enabling the cache - -Now that your Quarkus application is up and running, let's tremendously improve its response time by caching the external meteorological service responses. -Update the `WeatherForecastService` class like this: - -[source,java] ----- -package org.acme.cache; - -import java.time.LocalDate; - -import javax.enterprise.context.ApplicationScoped; - -import io.quarkus.cache.CacheResult; - -@ApplicationScoped -public class WeatherForecastService { - - @CacheResult(cacheName = "weather-cache") <1> - public String getDailyForecast(LocalDate date, String city) { - try { - Thread.sleep(2000L); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - } - return date.getDayOfWeek() + " will be " + getDailyResult(date.getDayOfMonth() % 4) + " in " + city; - } - - private String getDailyResult(int dayOfMonthModuloFour) { - switch (dayOfMonthModuloFour) { - case 0: - return "sunny"; - case 1: - return "cloudy"; - case 2: - return "chilly"; - case 3: - return "rainy"; - default: - throw new IllegalArgumentException(); - } - } -} ----- -<1> We only added this annotation (and the associated import, of course). - -Let's try to call `http://localhost:8080/weather?city=Raleigh` again. -You're still waiting a long time before receiving an answer. -This is normal since the server just restarted and the cache was empty. - -Wait a second! The server restarted by itself after the `WeatherForecastService` update? -Yes, this is one of Quarkus' amazing features for developers, called `live coding`. - -Now that the cache was loaded during the previous call, try calling the same URL. -This time, you should get a super-fast answer with an `executionTimeInMs` value close to 0. - -Let's see what happens if we start from one day in the future using the `http://localhost:8080/weather?city=Raleigh&daysInFuture=1` URL. -You should get an answer two seconds later since two of the requested days were already loaded in the cache. 
- -You can also try calling the same URL with a different city and see the cache in action again. -The first call will take six seconds and the following ones will be answered immediately. - -Congratulations! You just added application data caching to your Quarkus application with a single line of code! - -Do you want to learn more about the Quarkus application data caching abilities? -The following sections will show you everything there is to know about it. - -[#annotations-api] -== Caching using annotations - -Quarkus offers a set of annotations that can be used in a CDI managed bean to enable caching abilities. - -[WARNING] -==== -Caching annotations are not allowed on private methods. -They will work fine with any other access modifier including package-private (no explicit modifier). -==== - -=== @CacheResult - -Loads a method result from the cache without executing the method body whenever possible. - -When a method annotated with `@CacheResult` is invoked, Quarkus will compute a cache key and use it to check in the cache whether the method has already been invoked. -If the method has one or more arguments, the key computation is done from all the method arguments if none of them is annotated with `@CacheKey`, or all the arguments annotated with `@CacheKey` otherwise. -Each non-primitive method argument that is part of the key must implement `equals()` and `hashCode()` correctly for the cache to work as expected. -This annotation can also be used on a method with no arguments; in that case, a default key derived from the cache name is used. -If a value is found in the cache, it is returned and the annotated method is never actually executed. -If no value is found, the annotated method is invoked and the returned value is stored in the cache using the computed key. - -A method annotated with `@CacheResult` is protected by a _lock on cache miss_ mechanism. 
-If several concurrent invocations try to retrieve a cache value from the same missing key, the method will only be invoked once. -The first concurrent invocation will trigger the method invocation while the subsequent concurrent invocations will wait for the end of the method invocation to get the cached result. -The `lockTimeout` parameter can be used to interrupt the lock after a given delay. -The lock timeout is disabled by default, meaning the lock is never interrupted. -See the parameter Javadoc for more details. - -This annotation cannot be used on a method returning `void`. - -[NOTE] -==== -Unlike the underlying Caffeine provider, Quarkus is also able to cache `null` values. -See <>. -==== - -=== @CacheInvalidate - -Removes an entry from the cache. - -When a method annotated with `@CacheInvalidate` is invoked, Quarkus will compute a cache key and use it to try to remove an existing entry from the cache. -If the method has one or more arguments, the key computation is done from all the method arguments if none of them is annotated with `@CacheKey`, or all the arguments annotated with `@CacheKey` otherwise. -This annotation can also be used on a method with no arguments; in that case, a default key derived from the cache name is used. -If the key does not identify any cache entry, nothing will happen. - -=== @CacheInvalidateAll - -When a method annotated with `@CacheInvalidateAll` is invoked, Quarkus will remove all entries from the cache. - -=== @CacheKey - -When a method argument is annotated with `@CacheKey`, it is identified as a part of the cache key during an invocation of a -method annotated with `@CacheResult` or `@CacheInvalidate`. - -This annotation is optional and should only be used when some of the method arguments are NOT part of the cache key. 
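The `@CacheResult` section above notes that each non-primitive key element must implement `equals()` and `hashCode()` correctly, otherwise every lookup becomes a cache miss. A minimal sketch of a well-behaved key type (the `CityDateKey` class is a hypothetical example, not part of the Quarkus API):

```java
import java.util.Objects;

// Hypothetical key type for use as a @CacheKey argument.
// Two instances describing the same city and date must compare equal,
// otherwise the cache could never find previously stored entries.
class CityDateKey {
    private final String city;
    private final String date;

    CityDateKey(String city, String date) {
        this.city = city;
        this.date = date;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CityDateKey)) return false;
        CityDateKey other = (CityDateKey) o;
        return city.equals(other.city) && date.equals(other.date);
    }

    @Override
    public int hashCode() {
        return Objects.hash(city, date);
    }
}
```

A Java `record` generates both methods automatically and is a convenient way to satisfy this requirement.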
- -=== Composite cache key building logic - -When a cache key is built from several method arguments, whether they are explicitly identified with `@CacheKey` or not, the building logic depends on the order of these arguments in the method signature. On the other hand, the argument names are not used at all and do not have any effect on the cache key. - -[source,java] ----- -package org.acme.cache; - -import javax.enterprise.context.ApplicationScoped; - -import io.quarkus.cache.CacheInvalidate; -import io.quarkus.cache.CacheKey; -import io.quarkus.cache.CacheResult; - -@ApplicationScoped -public class CachedService { - - @CacheResult(cacheName = "foo") - public Object load(String keyElement1, Integer keyElement2) { - // Call expensive service here. - } - - @CacheInvalidate(cacheName = "foo") - public void invalidate1(String keyElement2, Integer keyElement1) { <1> - } - - @CacheInvalidate(cacheName = "foo") - public void invalidate2(Integer keyElement2, String keyElement1) { <2> - } - - @CacheInvalidate(cacheName = "foo") - public void invalidate3(Object notPartOfTheKey, @CacheKey String keyElement1, @CacheKey Integer keyElement2) { <3> - } - - @CacheInvalidate(cacheName = "foo") - public void invalidate4(Object notPartOfTheKey, @CacheKey Integer keyElement2, @CacheKey String keyElement1) { <4> - } -} ----- -<1> Calling this method WILL invalidate values cached by the `load` method even if the key element names have been swapped. -<2> Calling this method WILL NOT invalidate values cached by the `load` method because the key elements order is different. -<3> Calling this method WILL invalidate values cached by the `load` method because the key elements order is the same. -<4> Calling this method WILL NOT invalidate values cached by the `load` method because the key elements order is different. - -[#programmatic-api] -== Caching using the programmatic API - -Quarkus also offers a programmatic API which can be used to store, retrieve or delete values from any cache declared using the annotations API. 
-All operations from the programmatic API are non-blocking and rely on https://smallrye.io/smallrye-mutiny/[Mutiny] under the hood. - -Before programmatically accessing the cached data, you need to retrieve an `io.quarkus.cache.Cache` instance. -The following sections will show you how to do that. - -=== Injecting a `Cache` with the `@CacheName` annotation - -`io.quarkus.cache.CacheName` can be used on a field, a constructor parameter or a method parameter to inject a `Cache`: - -[source,java] ----- -package org.acme.cache; - -import javax.enterprise.context.ApplicationScoped; -import javax.inject.Inject; - -import io.quarkus.cache.Cache; -import io.quarkus.cache.CacheName; -import io.smallrye.mutiny.Uni; - -@ApplicationScoped -public class CachedExpensiveService { - - @Inject //<1> - @CacheName("my-cache") - Cache cache; - - public Uni<String> getNonBlockingExpensiveValue(Object key) { //<2> - return cache.get(key, k -> { //<3> - /* - * Put an expensive call here. - * It will be executed only if the key is not already associated with a value in the cache. - */ - }); - } - - public String getBlockingExpensiveValue(Object key) { - return cache.get(key, k -> { - // Put an expensive call here. - }).await().indefinitely(); //<4> - } -} ----- -<1> This is optional. -<2> This method returns the `Uni` type, which is non-blocking. -<3> The `k` argument contains the cache key value. -<4> If you don't need the call to be non-blocking, this is how you can retrieve the cache value in a blocking way. 
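Conceptually, `Cache.get(key, valueLoader)` in the example above behaves like a compute-if-absent operation: the value loader runs only when the key has no associated value yet. A plain-JDK sketch of those semantics (an illustration only, not the actual Quarkus implementation):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

class ComputeIfAbsentSketch {
    static final ConcurrentHashMap<Object, String> CACHE = new ConcurrentHashMap<>();
    static final AtomicInteger LOADER_CALLS = new AtomicInteger();

    // Mirrors the semantics of Cache.get(key, valueLoader):
    // the expensive loader runs only when the key is absent.
    static String get(Object key, Function<Object, String> valueLoader) {
        return CACHE.computeIfAbsent(key, k -> {
            LOADER_CALLS.incrementAndGet(); // the "expensive call" happens here
            return valueLoader.apply(k);
        });
    }
}
```

`ConcurrentHashMap.computeIfAbsent` also runs the loader at most once per key under concurrent access, which parallels the lock-on-miss behavior described for `@CacheResult`.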
- -=== Retrieving a `Cache` from the `CacheManager` - -Another way to retrieve a `Cache` instance is to inject the `io.quarkus.cache.CacheManager` first and then retrieve the desired `Cache` from its name: - -[source,java] ----- -package org.acme.cache; - -import javax.enterprise.context.ApplicationScoped; -import javax.inject.Inject; - -import io.quarkus.cache.Cache; -import io.quarkus.cache.CacheManager; - -import java.util.Optional; - -@ApplicationScoped -public class CacheClearer { - - @Inject - CacheManager cacheManager; - - public void clearCache(String cacheName) { - Optional<Cache> cache = cacheManager.getCache(cacheName); - if (cache.isPresent()) { - cache.get().invalidateAll().await().indefinitely(); - } - } -} ----- - -=== Building a programmatic cache key - -Before building a programmatic cache key, you need to know how cache keys are built by the annotations API when an annotated method is invoked: - -* If the method has no arguments, then the cache key is an instance of `io.quarkus.cache.DefaultCacheKey` built from the cache name. -* If the method has exactly one argument, then this argument is the cache key. -* If the method has multiple arguments but only one annotated with `@CacheKey`, then this annotated argument is the cache key. -* In all other cases, the cache key is an instance of `io.quarkus.cache.CompositeCacheKey` built from multiple method arguments (annotated with `@CacheKey` or not). - -Now, if you want to retrieve or delete, using the programmatic API, a cache value that was stored using the annotations API, you just need to make sure the same key is used with both APIs. - -=== Retrieving all keys from a `CaffeineCache` - -The cache keys from a specific `CaffeineCache` can be retrieved as an unmodifiable `Set` as shown below. -If the cache entries are modified while an iteration over the set is in progress, the set will remain unchanged. 
- -[source,java] ----- -package org.acme.cache; - -import javax.enterprise.context.ApplicationScoped; - -import io.quarkus.cache.Cache; -import io.quarkus.cache.CacheName; -import io.quarkus.cache.CaffeineCache; - -import java.util.Set; - -@ApplicationScoped -public class CacheKeysService { - - @CacheName("my-cache") - Cache cache; - - public Set<Object> getAllCacheKeys() { - return cache.as(CaffeineCache.class).keySet(); - } -} ----- - -== Configuring the underlying caching provider - -This extension uses https://github.com/ben-manes/caffeine[Caffeine] as its underlying caching provider. -Caffeine is a high-performance, near-optimal caching library. - -=== Caffeine configuration properties - -Each of the Caffeine caches backing the Quarkus application data caching extension can be configured using the following -properties in the `application.properties` file. By default, caches do not perform any type of eviction unless configured to do so. - -[TIP] -==== -You need to replace `cache-name` in all of the following properties with the real name of the cache you want to configure. -==== - -include::{generated-dir}/config/quarkus-cache-config-group-cache-config-caffeine-config.adoc[opts=optional, leveloffset=+1] - -Here's what your cache configuration could look like: - -[source,properties] ----- -quarkus.cache.caffeine."foo".initial-capacity=10 <1> -quarkus.cache.caffeine."foo".maximum-size=20 -quarkus.cache.caffeine."foo".expire-after-write=60S -quarkus.cache.caffeine."bar".maximum-size=1000 <2> ----- -<1> The `foo` cache is being configured. -<2> The `bar` cache is being configured. - -== Enabling Micrometer metrics - -Each cache declared using the <<#annotations-api,annotations caching API>> can be monitored using Micrometer metrics. - -[NOTE] -==== -The cache metrics collection will only work if your application depends on a `quarkus-micrometer-registry-*` extension. -See the xref:micrometer.adoc[Micrometer metrics guide] to learn how to use Micrometer in Quarkus. 
-====
-
-The cache metrics collection is disabled by default.
-It can be enabled from the `application.properties` file:
-
-[source,properties]
-----
-quarkus.cache.caffeine."foo".metrics-enabled=true
-----
-
-[WARNING]
-====
-Like all instrumentation methods, collecting metrics comes with a small overhead that can impact application performance.
-====
-
-The collected metrics contain cache statistics such as:
-
-- the approximate current number of entries in the cache
-- the number of entries that were added to the cache
-- the number of times a cache lookup has been performed, including information about hits and misses
-- the number of evictions and the weight of the evicted entries
-
-Here is an example of cache metrics available for an application that depends on the `quarkus-micrometer-registry-prometheus` extension:
-
-[source]
-----
-# HELP cache_size The number of entries in this cache. This may be an approximation, depending on the type of cache.
-# TYPE cache_size gauge
-cache_size{cache="foo",} 8.0
-# HELP cache_puts_total The number of entries added to the cache
-# TYPE cache_puts_total counter
-cache_puts_total{cache="foo",} 12.0
-# HELP cache_gets_total The number of times cache lookup methods have returned a cached value.
-# TYPE cache_gets_total counter
-cache_gets_total{cache="foo",result="hit",} 53.0
-cache_gets_total{cache="foo",result="miss",} 12.0
-# HELP cache_evictions_total cache evictions
-# TYPE cache_evictions_total counter
-cache_evictions_total{cache="foo",} 4.0
-# HELP cache_eviction_weight_total The sum of weights of evicted entries. This total does not include manual invalidations.
-# TYPE cache_eviction_weight_total counter -cache_eviction_weight_total{cache="foo",} 540.0 ----- - -== Annotated beans examples - -=== Implicit simple cache key - -[source,java] ----- -package org.acme.cache; - -import javax.enterprise.context.ApplicationScoped; - -import io.quarkus.cache.CacheInvalidate; -import io.quarkus.cache.CacheInvalidateAll; -import io.quarkus.cache.CacheResult; - -@ApplicationScoped -public class CachedService { - - @CacheResult(cacheName = "foo") - public Object load(Object key) { <1> - // Call expensive service here. - } - - @CacheInvalidate(cacheName = "foo") - public void invalidate(Object key) { <1> - } - - @CacheInvalidateAll(cacheName = "foo") - public void invalidateAll() { - } -} ----- -<1> The cache key is implicit since there's no `@CacheKey` annotation. - -=== Explicit composite cache key - -[source,java] ----- -package org.acme.cache; - -import javax.enterprise.context.Dependent; - -import io.quarkus.cache.CacheInvalidate; -import io.quarkus.cache.CacheInvalidateAll; -import io.quarkus.cache.CacheKey; -import io.quarkus.cache.CacheResult; - -@Dependent -public class CachedService { - - @CacheResult(cacheName = "foo") - public String load(@CacheKey Object keyElement1, @CacheKey Object keyElement2, Object notPartOfTheKey) { <1> - // Call expensive service here. - } - - @CacheInvalidate(cacheName = "foo") - public void invalidate(@CacheKey Object keyElement1, @CacheKey Object keyElement2, Object notPartOfTheKey) { <1> - } - - @CacheInvalidateAll(cacheName = "foo") - public void invalidateAll() { - } -} ----- -<1> The cache key is explicitly composed of two elements. The method signature also contains a third argument which is not part of the key. 
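The composite-key behaviour described above can be illustrated in plain Java. The class below is a hypothetical stand-in for `io.quarkus.cache.CompositeCacheKey`, not the real implementation: the essential property is value-based `equals`/`hashCode` over all key elements, so two invocations with the same `@CacheKey` arguments map to the same cache entry.

```java
import java.util.List;
import java.util.Objects;

// Hypothetical sketch of a composite cache key: equality is based on the
// ordered list of key elements, so ("a", 1) and ("a", 1) hit the same entry
// while ("a", 2) does not.
final class CompositeKeySketch {
    private final List<Object> elements;

    CompositeKeySketch(Object... elements) {
        this.elements = List.of(elements);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof CompositeKeySketch)) {
            return false;
        }
        return elements.equals(((CompositeKeySketch) o).elements);
    }

    @Override
    public int hashCode() {
        return Objects.hash(elements);
    }
}
```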
- -=== Default cache key - -[source,java] ----- -package org.acme.cache; - -import javax.enterprise.context.Dependent; - -import io.quarkus.cache.CacheInvalidate; -import io.quarkus.cache.CacheInvalidateAll; -import io.quarkus.cache.CacheResult; - -@Dependent -public class CachedService { - - @CacheResult(cacheName = "foo") - public String load() { <1> - // Call expensive service here. - } - - @CacheInvalidate(cacheName = "foo") - public void invalidate() { <1> - } - - @CacheInvalidateAll(cacheName = "foo") - public void invalidateAll() { - } -} ----- -<1> A unique default cache key derived from the cache name is used because the method has no arguments. - -=== Multiple annotations on a single method - -[source,java] ----- -package org.acme.cache; - -import javax.inject.Singleton; - -import io.quarkus.cache.CacheInvalidate; -import io.quarkus.cache.CacheInvalidateAll; -import io.quarkus.cache.CacheResult; - -@Singleton -public class CachedService { - - @CacheInvalidate(cacheName = "foo") - @CacheResult(cacheName = "foo") - public String forceCacheEntryRefresh(Object key) { <1> - // Call expensive service here. - } - - @CacheInvalidateAll(cacheName = "foo") - @CacheInvalidateAll(cacheName = "bar") - public void multipleInvalidateAll(Object key) { <2> - } -} ----- -<1> This method can be used to force a refresh of the cache entry corresponding to the given key. -<2> This method will invalidate all entries from the `foo` and `bar` caches with a single call. 
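Taken together, the annotations above behave like a memoizing map keyed by the cache key. The following plain-Java sketch is a hypothetical model of that behaviour, not how Quarkus actually implements caching:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Plain-Java model of the caching annotations:
// - load(...)        mirrors @CacheResult: compute on miss, then serve from cache
// - invalidate(...)  mirrors @CacheInvalidate: drop one entry
// - invalidateAll()  mirrors @CacheInvalidateAll: drop every entry
class CacheResultSketch {
    private final Map<Object, Object> cache = new HashMap<>();

    Object load(Object key, Function<Object, Object> expensiveService) {
        return cache.computeIfAbsent(key, expensiveService);
    }

    void invalidate(Object key) {
        cache.remove(key);
    }

    void invalidateAll() {
        cache.clear();
    }
}
```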
-
-=== Clear all application caches
-
-[source,java]
-----
-package org.acme.cache;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.inject.Inject;
-
-import io.quarkus.cache.CacheManager;
-
-@ApplicationScoped
-public class CacheClearer {
-
-    @Inject
-    CacheManager cacheManager;
-
-    public void clearAllCaches() {
-        for (String cacheName : cacheManager.getCacheNames()) {
-            cacheManager.getCache(cacheName).get().invalidateAll().await().indefinitely();
-        }
-    }
-}
-----
-
-[#negative-cache]
-== Negative caching and nulls
-
-Sometimes one wants to cache the result of an (expensive) remote call.
-If the remote call fails, one may not want to cache the result or exception,
-but rather re-try the remote call on the next invocation.
-
-A simple approach could be to catch the exception and return `null`, so that the caller can
-act accordingly:
-
-.Sample code
-[source,java]
-----
-    public void caller(int val) {
-
-        Integer result = callRemote(val); //<1>
-        if (result != null) {
-            System.out.println("Result is " + result);
-        } else {
-            System.out.println("Got an exception");
-        }
-    }
-
-    @CacheResult(cacheName = "foo")
-    public Integer callRemote(int val) {
-
-        try {
-            return remoteWebServer.getResult(val); //<2>
-        } catch (Exception e) {
-            return null; // <3>
-        }
-    }
-----
-<1> Call the method that performs the remote call
-<2> Perform the remote call and return its result
-<3> Return `null` in case of an exception
-
-This approach has an unfortunate side effect: as we said before, Quarkus can also cache
-`null` values. This means that the next call to `callRemote()` with the same parameter value
-will be answered out of the cache, returning `null`, and no remote call will be done.
-This may be desired in some scenarios, but usually one wants to retry the remote call until it returns a result.
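The pitfall described above is easy to reproduce with a plain-Java stand-in for the cache (all names here are hypothetical): once the `null` failure marker is stored, the remote call is never attempted again for that key.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (no Quarkus involved) of why caching a null "failure marker"
// suppresses retries: containsKey() reports a hit even when the stored value
// is null, so the remote call is skipped on every subsequent invocation.
class NullCachingSketch {
    private final Map<Integer, Integer> cache = new HashMap<>();
    int remoteCalls = 0;

    Integer callRemote(int val) {
        if (cache.containsKey(val)) {
            return cache.get(val); // cache hit, even when the stored value is null
        }
        remoteCalls++;
        Integer result = null; // pretend the remote call failed and null was cached
        cache.put(val, result);
        return result;
    }
}
```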
-
-=== Let exceptions bubble up
-
-To prevent the cache from storing a marker result for a failed remote call, we need to let
-the exception bubble out of the called method and catch it on the caller side:
-
-.With Exception bubbling up
-[source,java]
-----
-    public void caller(int val) {
-        try {
-            Integer result = callRemote(val); //<1>
-            System.out.println("Result is " + result);
-        } catch (Exception e) {
-            System.out.println("Got an exception");
-        }
-    }
-
-    @CacheResult(cacheName = "foo")
-    public Integer callRemote(int val) throws Exception { // <2>
-        return remoteWebServer.getResult(val); //<3>
-    }
-----
-<1> Call the method that performs the remote call
-<2> Exceptions may bubble up
-<3> This can throw all kinds of remote exceptions
-
-When the call to the remote service throws an exception, the cache does not store the result,
-so a subsequent call to `callRemote()` with the same parameter value will not be
-answered out of the cache.
-It will instead result in another attempt to call the remote service.
-
-== Going native
-
-The Cache extension supports building native executables.
-
-However, to optimize runtime speed, Caffeine ships with many cache implementation classes that are selected depending on the cache configuration.
-We are not registering all of them for reflection
-(and the ones not registered are not included in the native executable), as registering all of them would be very costly.
-
-We are registering the most common implementations but, depending on your cache configuration, you might encounter errors like:
-
-[source]
-----
-2021-12-08 02:32:02,108 ERROR [io.qua.run.Application] (main) Failed to start application (with profile prod): java.lang.ClassNotFoundException: com.github.benmanes.caffeine.cache.PSAMS <1>
-    at java.lang.Class.forName(DynamicHub.java:1433)
-    at java.lang.Class.forName(DynamicHub.java:1408)
-    at com.github.benmanes.caffeine.cache.NodeFactory.newFactory(NodeFactory.java:111)
-    at com.github.benmanes.caffeine.cache.BoundedLocalCache.<init>(BoundedLocalCache.java:240)
-    at com.github.benmanes.caffeine.cache.SS.<init>(SS.java:31)
-    at com.github.benmanes.caffeine.cache.SSMS.<init>(SSMS.java:64)
-    at com.github.benmanes.caffeine.cache.SSMSA.<init>(SSMSA.java:43)
-----
-<1> `PSAMS` is one of the many cache implementation classes of Caffeine, so this part may vary.
-
-When you encounter this error, you can easily fix it by adding the following annotation to any of your application classes
-(or you can create a new class such as `Reflections` just to host this annotation if you prefer):
-
-[source,java]
-----
-@RegisterForReflection(classNames = { "com.github.benmanes.caffeine.cache.PSAMS" }) <1>
-----
-<1> It is an array, so you can register several cache implementations in one go if your configuration requires several of them.
-
-This annotation will register the cache implementation classes for reflection, which will include them in the native executable.
diff --git a/_versions/2.7/guides/camel.adoc b/_versions/2.7/guides/camel.adoc deleted file mode 100644 index 6a9f60131a8..00000000000 --- a/_versions/2.7/guides/camel.adoc +++ /dev/null @@ -1,15 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Apache Camel on Quarkus - -include::./attributes.adoc[] - -http://camel.apache.org/[Apache Camel] is the Swiss knife of integrating heterogeneous systems with more than a decade -of history and a lively community of users and developers. - -The support for Apache Camel on top of Quarkus is provided by the -https://github.com/apache/camel-quarkus[Apache Camel Quarkus project]. Please refer to -https://camel.apache.org/camel-quarkus/latest/[their documentation] for more information. diff --git a/_versions/2.7/guides/capabilities.adoc b/_versions/2.7/guides/capabilities.adoc deleted file mode 100644 index c44596c1a02..00000000000 --- a/_versions/2.7/guides/capabilities.adoc +++ /dev/null @@ -1,124 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Extension Capabilities - -include::./attributes.adoc[] - -Quarkus extensions may declare in their descriptors (`META-INF/quarkus-extension.properties` found in the runtime extension JAR artifact) that they provide certain capabilities. A capability represents a technical aspect, e.g. an implementation of some functionality or a specification. Each capability has a name which should follow the Java package naming convention, e.g. `io.quarkus.rest`. - -IMPORTANT: Only a single provider of any given capability is allowed in an application. If more than one provider of a capability is detected, the application build will fail with the corresponding error message. 
-
-At build time, all the capabilities found in the application will be aggregated in an instance of the `io.quarkus.deployment.Capabilities` build item that extension build steps can inject to check whether a given capability is available or not.
-
-== Declaring capabilities
-
-The `quarkus-bootstrap-maven-plugin:extension-descriptor` Maven goal, which generates the extension descriptor, allows declaring provided capabilities in the following way:
-[source,xml]
-----
-<plugin>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-bootstrap-maven-plugin</artifactId>
-    <configuration>
-        <capabilities>
-            <provides>io.quarkus.rest</provides>
-            <provides>io.quarkus.resteasy</provides>
-        </capabilities>
-    </configuration>
-</plugin>
-----
-
-In this case, the extension is declaring two capabilities.
-
-=== Declaring conditional capabilities
-
-A capability may be provided only if a certain condition is satisfied, e.g. if a certain configuration option is enabled or based on some other condition. This could be expressed in the following way:
-[source,xml]
-----
-<plugin>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-bootstrap-maven-plugin</artifactId>
-    <configuration>
-        <capabilities>
-            <providesIf> <1>
-                <positive>io.quarkus.container.image.openshift.deployment.OpenshiftBuild</positive> <2>
-                <provides>io.quarkus.container.image.openshift</provides> <3>
-            </providesIf>
-        </capabilities>
-    </configuration>
-</plugin>
-----
-<1> declaration of a conditional capability
-<2> condition that must be resolved to `true` by a class implementing `java.util.function.BooleanSupplier`
-<3> provided capability name
-
-NOTE: `providesIf` allows listing multiple `<positive>` as well as `<negative>` elements.
-
-In this case, `io.quarkus.container.image.openshift.deployment.OpenshiftBuild` should be included in one of the extension deployment dependencies and implement `java.util.function.BooleanSupplier`. At build time, the Quarkus bootstrap will create an instance of it and register the `io.quarkus.container.image.openshift` capability only if its `getAsBoolean()` method returns `true`.
-
-An implementation of `OpenshiftBuild` could look like this:
-[source,java]
-----
-import java.util.function.BooleanSupplier;
-
-import io.quarkus.container.image.deployment.ContainerImageConfig;
-
-public class OpenshiftBuild implements BooleanSupplier {
-
-    private ContainerImageConfig containerImageConfig;
-
-    OpenshiftBuild(ContainerImageConfig containerImageConfig) {
-        this.containerImageConfig = containerImageConfig;
-    }
-
-    @Override
-    public boolean getAsBoolean() {
-        return containerImageConfig.builder.map(b -> b.equals(OpenshiftProcessor.OPENSHIFT)).orElse(true);
-    }
-}
-----
-
-== CapabilityBuildItem
-
-Each provided capability will be represented with an instance of `io.quarkus.deployment.builditem.CapabilityBuildItem` at build time. Theoretically, `CapabilityBuildItem` instances could be produced by extension build steps directly, bypassing the corresponding declaration in the extension descriptors. However, this way of providing capabilities should be avoided, unless there is a very good reason not to declare a capability in the descriptor.
-
-IMPORTANT: Capabilities produced from extension build steps aren't available to the Quarkus dev tools. As a consequence, such capabilities cannot be taken into account when analyzing extension compatibility during project creation or when adding new extensions to a project.
-
-== Querying capabilities
-
-All the capabilities found in an application will be aggregated during the build in an instance of the `io.quarkus.deployment.Capabilities` build item, which can be injected by extension build steps to check whether a certain capability is present or not. E.g.
- -[source,java] ----- - @BuildStep - HealthBuildItem addHealthCheck(Capabilities capabilities, DataSourcesBuildTimeConfig dataSourcesBuildTimeConfig) { - if (capabilities.isPresent(Capability.SMALLRYE_HEALTH)) { - return new HealthBuildItem("io.quarkus.agroal.runtime.health.DataSourceHealthCheck", - dataSourcesBuildTimeConfig.healthEnabled); - } else { - return null; - } - } ----- - -=== Capability prefixes - -Like a capability name, a capability prefix is a dot-separated string that is composed of either the first capability name element or a dot-separated sequence of the capability name elements starting from the first one. E.g. for capability `io.quarkus.resteasy.json.jackson` the following prefixes will be registered: - -* `io` -* `io.quarkus` -* `io.quarkus.resteasy` -* `io.quarkus.resteasy.json` - -`Capabilities.isCapabilityWithPrefixPresent(prefix)` could be used to check whether a capability with a given prefix is present. - -Given that only a single provider of a given capability is allowed in an application, capability prefixes allow expressing a certain common aspect among different but somewhat related capabilities. E.g. there could be extensions providing the following capabilities: - -* `io.quarkus.resteasy.json.jackson` -* `io.quarkus.resteasy.json.jackson.client` -* `io.quarkus.resteasy.json.jsonb` -* `io.quarkus.resteasy.json.jsonb.client` - -Including any one of those extensions in an application will enable the RESTEasy JSON serializer. In case a build step needs to check whether the RESTEasy JSON serializer is already enabled in an application, instead of checking whether any of those capabilities is present, it could simply check whether an extension with prefix `io.quarkus.resteasy.json` is present. 
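The prefix derivation described above is mechanical: every dot-separated leading subsequence of the capability name is a prefix. The helper below is a hypothetical illustration, not part of the Quarkus API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of capability prefix derivation: for a dot-separated capability name,
// collect every proper leading subsequence ending just before a dot.
class CapabilityPrefixes {
    static List<String> prefixesOf(String capability) {
        List<String> prefixes = new ArrayList<>();
        int idx = -1;
        while ((idx = capability.indexOf('.', idx + 1)) != -1) {
            prefixes.add(capability.substring(0, idx));
        }
        return prefixes;
    }
}
```

For `io.quarkus.resteasy.json.jackson` this yields exactly the four prefixes listed above.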
diff --git a/_versions/2.7/guides/cassandra.adoc b/_versions/2.7/guides/cassandra.adoc deleted file mode 100644 index 649eac9dfb4..00000000000 --- a/_versions/2.7/guides/cassandra.adoc +++ /dev/null @@ -1,775 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using the Cassandra Client - -include::./attributes.adoc[] - -Apache Cassandra® is a free and open-source, distributed, wide column store, NoSQL database -management system designed to handle large amounts of data across many commodity servers, providing -high availability with no single point of failure. - -In this guide, we will see how you can get your REST services to use a Cassandra database. - -include::./platform-include.adoc[] - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] -* A running link:https://cassandra.apache.org[Apache Cassandra], -link:https://www.datastax.fr/products/datastax-enterprise[DataStax Enterprise] (DSE) or -link:https://astra.datastax.com[DataStax Astra] database; or alternatively, a fresh Docker -installation. - -== Architecture - -This quickstart guide shows how to build a REST application using the -link:https://github.com/datastax/cassandra-quarkus[Cassandra Quarkus extension], which allows you to -connect to an Apache Cassandra, DataStax Enterprise (DSE) or DataStax Astra database, using the -link:https://docs.datastax.com/en/developer/java-driver/latest[DataStax Java driver]. - -This guide will also use the -link:https://docs.datastax.com/en/developer/java-driver/latest/manual/mapper[DataStax Object Mapper] -– a powerful Java-to-CQL mapping framework that greatly simplifies your application's data access -layer code by sparing you the hassle of writing your CQL queries by hand. 
- -The application built in this quickstart guide is quite simple: the user can add elements in a list -using a form, and the items list is updated. All the information between the browser and the server -is formatted as JSON, and the elements are stored in the Cassandra database. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step -by step. However, you can go right to the completed example. - -The solution is located in the -link:https://github.com/datastax/cassandra-quarkus/tree/main/quickstart[quickstart directory] of -the Cassandra Quarkus extension GitHub repository. - -== Creating a Blank Maven Project - -First, create a new Maven project and copy the `pom.xml` file that is present in the `quickstart` -directory. - -The `pom.xml` is importing all the Quarkus extensions and dependencies you need. - -== Creating the Data Model and Data Access Objects - -In this example, we will create an application to manage a list of fruits. - -First, let's create our data model – represented by the `Fruit` class – as follows: - -[source,java] ----- -@Entity -@PropertyStrategy(mutable = false) -public class Fruit { - - @PartitionKey - private final String name; - - private final String description; - - public Fruit(String name, String description) { - this.name = name; - this.description = description; - } - // getters, hashCode, equals, toString methods omitted for brevity -} ----- - -As stated above, we are using the DataStax Object Mapper. In other words, we are not going to write -our CQL queries manually; instead, we will annotate our data model with a few annotations, and the -mapper will generate proper CQL queries underneath. - -This is why the `Fruit` class is annotated with `@Entity`: this annotation marks it as an _entity -class_ that is mapped to a Cassandra table. Its instances are meant to be automatically persisted -into, and retrieved from, the Cassandra database. 
Here, the table name will be inferred from the
-class name: `fruit`.
-
-Also, the `name` field represents a Cassandra partition key, and so we are annotating it with
-`@PartitionKey` – another annotation from the Object Mapper library.
-
-IMPORTANT: Entity classes are normally required to have a default no-arg constructor, unless they
-are annotated with `@PropertyStrategy(mutable = false)`, which is the case here.
-
-The next step is to create a DAO (Data Access Object) interface that will manage instances of
-`Fruit` entities:
-
-[source,java]
-----
-@Dao
-public interface FruitDao {
-    @Update
-    void update(Fruit fruit);
-
-    @Select
-    PagingIterable<Fruit> findAll();
-}
-----
-
-This interface exposes operations that will be used in our REST service. Again, the annotation
-`@Dao` comes from the DataStax Object Mapper, which will also automatically generate an
-implementation of this interface for you.
-
-Note also the special return type of the `findAll` method,
-link:https://docs.datastax.com/en/drivers/java/latest/com/datastax/oss/driver/api/core/PagingIterable.html[`PagingIterable`]:
-it's the base type of result sets returned by the driver.
-
-Finally, let's create a Mapper interface:
-
-[source,java]
-----
-@Mapper
-public interface FruitMapper {
-    @DaoFactory
-    FruitDao fruitDao();
-}
-----
-
-The `@Mapper` annotation is yet another annotation recognized by the DataStax Object Mapper. A
-mapper is responsible for constructing instances of DAOs – in this case, our mapper constructs
-an instance of our only DAO, `FruitDao`.
-
-== Creating a Service & JSON REST Endpoint
-
-Now let's create a `FruitService` that will be the business layer of our application and store/load
-the fruits from the Cassandra database.
-
-[source,java]
-----
-@ApplicationScoped
-public class FruitService {
-
-    @Inject FruitDao dao;
-
-    public void save(Fruit fruit) {
-        dao.update(fruit);
-    }
-
-    public List<Fruit> getAll() {
-        return dao.findAll().all();
-    }
-}
-----
-
-Note how a `FruitDao` instance is injected into the service automatically.
-
-The Cassandra Quarkus extension allows you to inject any of the following beans in your own
-components:
-
-- All `@Mapper`-annotated interfaces in your project.
-- All `@Dao`-annotated interfaces in your project, as long as they are produced by a corresponding
-`@DaoFactory`-annotated method declared in a mapper interface from your project.
-- The
-link:https://javadoc.io/doc/com.datastax.oss.quarkus/cassandra-quarkus-client/latest/com/datastax/oss/quarkus/runtime/api/session/QuarkusCqlSession.html[`QuarkusCqlSession`]
-bean: this application-scoped, singleton bean is your main entry point to the Cassandra client; it
-is a specialized Cassandra driver session instance with a few methods tailored especially for
-Quarkus. Read its javadocs carefully!
-
-In our example, both `FruitMapper` and `FruitDao` could be injected anywhere. We chose to inject
-`FruitDao` in `FruitService`.
-
-The last missing piece is the REST API that will expose GET and POST methods:
-
-[source,java]
-----
-@Path("/fruits")
-@Produces(MediaType.APPLICATION_JSON)
-@Consumes(MediaType.APPLICATION_JSON)
-public class FruitResource {
-
-    @Inject FruitService fruitService;
-
-    @GET
-    public List<FruitDto> getAll() {
-        return fruitService.getAll().stream().map(this::convertToDto).collect(Collectors.toList());
-    }
-
-    @POST
-    public void add(FruitDto fruit) {
-        fruitService.save(convertFromDto(fruit));
-    }
-
-    private FruitDto convertToDto(Fruit fruit) {
-        return new FruitDto(fruit.getName(), fruit.getDescription());
-    }
-
-    private Fruit convertFromDto(FruitDto fruitDto) {
-        return new Fruit(fruitDto.getName(), fruitDto.getDescription());
-    }
-}
-----
-
-Notice how a `FruitService` instance is automatically injected into `FruitResource`.
-
-It is generally not recommended to use the same entity object in both the REST API and the data
-access layer. These layers should indeed be decoupled and use distinct APIs in order to allow each
-API to evolve independently of the other. This is the reason why our REST API is using a different
-object: the `FruitDto` class – the word DTO stands for "Data Transfer Object". This DTO object will
-be automatically converted to and from JSON in HTTP messages:
-
-[source,java]
-----
-public class FruitDto {
-
-    private String name;
-    private String description;
-
-    public FruitDto() {}
-
-    public FruitDto(String name, String description) {
-        this.name = name;
-        this.description = description;
-    }
-    // getters and setters omitted for brevity
-}
-----
-
-The translation to and from JSON is done automatically by the Quarkus RestEasy extension, which is
-included in this guide's pom.xml file.
If you want to add it manually to your application, add the
-below snippet to your application's pom.xml file:
-
-[source,xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-jsonb</artifactId>
-</dependency>
-----
-
-IMPORTANT: DTO classes used by the JSON serialization layer are required to have a default no-arg
-constructor.
-
-The conversion from DTO to JSON is handled automatically for us, but we still must convert from
-`Fruit` to `FruitDto` and vice versa. This must be done manually, which is why we have two
-conversion methods declared in `FruitResource`: `convertToDto` and `convertFromDto`.
-
-TIP: In our example, `Fruit` and `FruitDto` are very similar, so you might wonder why not use
-`Fruit` everywhere. In real life cases though, it's not uncommon to see DTOs and entities having
-very different structures.
-
-== Connecting to the Cassandra Database
-
-=== Connecting to Apache Cassandra or DataStax Enterprise (DSE)
-
-The main properties to configure are: `contact-points`, to access the Cassandra database;
-`local-datacenter`, which is required by the driver; and – optionally – the keyspace to bind to.
-
-A sample configuration should look like this:
-
-[source,properties]
-----
-quarkus.cassandra.contact-points={cassandra_ip}:9042
-quarkus.cassandra.local-datacenter={dc_name}
-quarkus.cassandra.keyspace={keyspace}
-----
-
-In this example, we are using a single instance running on localhost, and the keyspace containing
-our data is `k1`:
-
-[source,properties]
-----
-quarkus.cassandra.contact-points=127.0.0.1:9042
-quarkus.cassandra.local-datacenter=datacenter1
-quarkus.cassandra.keyspace=k1
-----
-
-If your cluster requires plain text authentication, you must also provide two more settings:
-`username` and `password`.
- -[source,properties] ----- -quarkus.cassandra.auth.username=john -quarkus.cassandra.auth.password=s3cr3t ----- - -=== Connecting to a DataStax Astra Cloud Database - -When connecting to link:https://astra.datastax.com[DataStax Astra], instead of providing a contact -point and a datacenter, you should provide a so-called _secure connect bundle_, which should point -to a valid path to an Astra secure connect bundle file. You can download your secure connect bundle -from the Astra web console. - -You will also need to provide a username and password, since authentication is always required on -Astra clusters. - -A sample configuration for DataStax Astra should look like this: - -[source,properties] ----- -quarkus.cassandra.cloud.secure-connect-bundle=/path/to/secure-connect-bundle.zip -quarkus.cassandra.auth.username=john -quarkus.cassandra.auth.password=s3cr3t -quarkus.cassandra.keyspace=k1 ----- - -=== Advanced Driver Configuration - -You can configure other Java driver settings using `application.conf` or `application.json` files. -They need to be located in the classpath of your application. All settings will be passed -automatically to the underlying driver configuration mechanism. Settings defined in -`application.properties` with the `quarkus.cassandra` prefix will have priority over settings -defined in `application.conf` or `application.json`. - -To see the full list of settings, please refer to the -link:https://docs.datastax.com/en/developer/java-driver/latest/manual/core/configuration/reference/[driver -settings reference]. - -== Running a Local Cassandra Database - -By default, the Cassandra client is configured to access a local Cassandra database on port 9042 -(the default Cassandra port). - -IMPORTANT: Make sure that the setting `quarkus.cassandra.local-datacenter` matches the datacenter of -your Cassandra cluster. 
- -TIP: If you don't know the name of your local datacenter, this value can be found by running the -following CQL query: `SELECT data_center FROM system.local`. - -If you want to use Docker to run a Cassandra database, you can use the following command to launch -one in the background: - -[source,shell] ----- -docker run --name local-cassandra-instance -p 9042:9042 -d cassandra ----- - -Next you need to create the keyspace and table that will be used by your application. If you are -using Docker, run the following commands: - -[source,shell] ----- -docker exec -it local-cassandra-instance cqlsh -e "CREATE KEYSPACE IF NOT EXISTS k1 WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}" -docker exec -it local-cassandra-instance cqlsh -e "CREATE TABLE IF NOT EXISTS k1.fruit(name text PRIMARY KEY, description text)" ----- - -You can also use the CQLSH utility to interactively interrogate your database: - -[source,shell] ----- -docker exec -it local-cassandra-instance cqlsh ----- - -== Testing the REST API - -In the project root directory: - -- Run `mvn clean package` and then `java -jar ./target/cassandra-quarkus-quickstart-*-runner.jar` to start the application; -- Or better yet, run the application in dev mode: `mvn clean quarkus:dev`. - -Now you can use curl commands to interact with the underlying REST API. - -To create a fruit: - -[source,shell] ----- -curl --header "Content-Type: application/json" \ - --request POST \ - --data '{"name":"apple","description":"red and tasty"}' \ - http://localhost:8080/fruits ----- - -To retrieve fruits: - -[source,shell] ----- -curl -X GET http://localhost:8080/fruits ----- - -== Creating a Frontend - -Now let's add a simple web page to interact with our `FruitResource`. - -Quarkus automatically serves static resources located under the `META-INF/resources` directory. 
In
-the `src/main/resources/META-INF/resources` directory, add a `fruits.html` file with the contents
-from link:src/main/resources/META-INF/resources/fruits.html[this file] in it.
-
-You can now interact with your REST service:
-
-* If you haven't done so yet, start your application with `mvn clean quarkus:dev`;
-* Point your browser to `http://localhost:8080/fruits.html`;
-* Add new fruits to the list via the form.
-
-[[reactive]]
-== Reactive Programming with the Cassandra Client
-
-The
-link:https://javadoc.io/doc/com.datastax.oss.quarkus/cassandra-quarkus-client/latest/com/datastax/oss/quarkus/runtime/api/session/QuarkusCqlSession.html[`QuarkusCqlSession`
-interface] gives you access to a series of reactive methods that integrate seamlessly with Quarkus
-and its reactive framework, Mutiny.
-
-TIP: If you are not familiar with Mutiny, please check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
-
-Let's rewrite our application using reactive programming with Mutiny.
-
-First, let's declare another DAO interface that works in a reactive way:
-
-[source,java]
-----
-@Dao
-public interface ReactiveFruitDao {
-
-    @Update
-    Uni<Void> updateAsync(Fruit fruit);
-
-    @Select
-    MutinyMappedReactiveResultSet<Fruit> findAll();
-}
-----
-
-Note the usage of `MutinyMappedReactiveResultSet` – it is a specialized Mutiny type converted from
-the original `Publisher` returned by the driver, which also exposes a few extra methods, e.g. to
-obtain the query execution info. If you don't need anything in that interface, you can also simply
-declare your method to return `Multi`: `Multi<Fruit> findAll()`.
-
-Similarly, the method `updateAsync` returns a `Uni<Void>` – it is automatically converted from the
-original result set returned by the driver.
-
-NOTE: The Cassandra driver uses the Reactive Streams `Publisher` API for reactive calls. The Quarkus
-framework however uses Mutiny.
Because of that, the `QuarkusCqlSession` interface transparently
converts the `Publisher` instances returned by the driver into the reactive type `Multi`.
`QuarkusCqlSession` is also capable of converting a `Publisher` into a `Uni` - in this case, the
publisher is expected to emit at most one row, then complete. This is suitable for write queries
(they return no rows), or for read queries guaranteed to return one row at most (count queries, for
example).

Next, we need to adapt the `FruitMapper` to construct a `ReactiveFruitDao` instance:

[source,java]
----
@Mapper
public interface FruitMapper {
    // the existing method omitted

    @DaoFactory
    ReactiveFruitDao reactiveFruitDao();
}
----

Now, we can create a `ReactiveFruitService` that leverages our reactive DAO:

[source,java]
----
@ApplicationScoped
public class ReactiveFruitService {

    @Inject
    ReactiveFruitDao fruitDao;

    public Uni<Void> add(Fruit fruit) {
        return fruitDao.updateAsync(fruit);
    }

    public Multi<Fruit> getAll() {
        return fruitDao.findAll();
    }
}
----

Finally, we can create a `ReactiveFruitResource`:

[source,java]
----
@Path("/reactive-fruits")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class ReactiveFruitResource {

    @Inject
    ReactiveFruitService service;

    @GET
    public Multi<FruitDto> getAll() {
        return service.getAll().map(this::convertToDto);
    }

    @POST
    public Uni<Void> add(FruitDto fruitDto) {
        return service.add(convertFromDto(fruitDto));
    }

    private FruitDto convertToDto(Fruit fruit) {
        return new FruitDto(fruit.getName(), fruit.getDescription());
    }

    private Fruit convertFromDto(FruitDto fruitDto) {
        return new Fruit(fruitDto.getName(), fruitDto.getDescription());
    }
}
----

The above resource exposes a new endpoint, `reactive-fruits`. Its capabilities are identical to
the ones that we created before with `FruitResource`, but everything is handled in a reactive
fashion, without any blocking operation.
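The "publisher emitting at most one item" contract mentioned above can be illustrated with a small JDK-only sketch. This uses `java.util.concurrent.Flow` and `CompletableFuture` rather than the actual driver or Mutiny APIs, so `SingleItemAdapter` and `toSingle` are hypothetical names for illustration only:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Illustrates narrowing a publisher expected to emit at most one item into a
// single-valued future; the real Publisher-to-Uni conversion is more involved.
public class SingleItemAdapter {

    static <T> CompletableFuture<T> toSingle(Flow.Publisher<T> publisher) {
        CompletableFuture<T> future = new CompletableFuture<>();
        publisher.subscribe(new Flow.Subscriber<T>() {
            @Override
            public void onSubscribe(Flow.Subscription subscription) {
                subscription.request(Long.MAX_VALUE);
            }

            @Override
            public void onNext(T item) {
                future.complete(item); // first (and only expected) item wins
            }

            @Override
            public void onError(Throwable error) {
                future.completeExceptionally(error);
            }

            @Override
            public void onComplete() {
                future.complete(null); // no item emitted: completes empty, like a write query
            }
        });
        return future;
    }

    public static void main(String[] args) throws Exception {
        SubmissionPublisher<String> rows = new SubmissionPublisher<>();
        CompletableFuture<String> single = toSingle(rows);
        rows.submit("row-1");
        rows.close();
        System.out.println(single.get());
    }
}
```

A publisher that completes without emitting anything (as for an `INSERT` or `UPDATE`) simply yields an empty result, which is why the same conversion works for both write queries and single-row reads.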

NOTE: The `getAll()` method above returns `Multi`, and the `add()` method returns `Uni`. These types
are the same Mutiny types that we met before; they are automatically recognized by the Quarkus
reactive REST API, so we don't need to convert them into JSON ourselves.

To effectively integrate the reactive logic with the REST API, your application needs to declare a
dependency on the Quarkus RESTEasy Mutiny extension:

[source,xml]
----
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-resteasy-mutiny</artifactId>
</dependency>
----

This dependency is already included in this guide's pom.xml, but if you are starting a new project
from scratch, make sure to include it.

== Testing the Reactive REST API

Run the application in dev mode as explained above, then you can use curl commands to interact with
the underlying REST API.

To create a fruit using the reactive REST endpoint:

[source,shell]
----
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"name":"banana","description":"yellow and sweet"}' \
  http://localhost:8080/reactive-fruits
----

To retrieve fruits with the reactive REST endpoint:

[source,shell]
----
curl -X GET http://localhost:8080/reactive-fruits
----

== Creating a Reactive Frontend

Now let's add a simple web page to interact with our `ReactiveFruitResource`. In the
`src/main/resources/META-INF/resources` directory, add a `reactive-fruits.html` file with the
contents from link:src/main/resources/META-INF/resources/reactive-fruits.html[this file] in it.

You can now interact with your reactive REST service:

* If you haven't done so yet, start your application with `mvn clean quarkus:dev`;
* Point your browser to `http://localhost:8080/reactive-fruits.html`;
* Add new fruits to the list via the form.

== Health Checks

If you are using the Quarkus SmallRye Health extension, then the Cassandra client will automatically
add a readiness health check to validate the connection to the Cassandra cluster.
This extension is
already included in this guide's pom.xml, but if you need to include it manually in your
application, add the following:

[source,xml]
----
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-health</artifactId>
</dependency>
----

When health checks are available, you can access the `/health/ready` endpoint of your application
and have information about the connection validation status.

Running in dev mode with `mvn clean quarkus:dev`, if you point your browser to
http://localhost:8080/health/ready you should see an output similar to the following one:

[source,text]
----
{
    "status": "UP",
    "checks": [
        {
            "name": "DataStax Apache Cassandra Driver health check",
            "status": "UP",
            "data": {
                "cqlVersion": "3.4.4",
                "releaseVersion": "3.11.7",
                "clusterName": "Test Cluster",
                "datacenter": "datacenter1",
                "numberOfNodes": 1
            }
        }
    ]
}
----

TIP: If you need health checks globally enabled in your application, but don't want to activate
Cassandra health checks, you can disable Cassandra health checks by setting the
`quarkus.cassandra.health.enabled` property to `false` in your `application.properties`.

== Metrics

The Cassandra Quarkus client can provide metrics about the Cassandra session and about individual
Cassandra nodes. It supports both Micrometer and MicroProfile Metrics.

The first step to enable metrics is to add a few additional dependencies depending on the metrics
framework you plan to use.

=== Enabling Metrics with Micrometer

Micrometer is the recommended metrics framework in Quarkus applications.

To enable Micrometer metrics in your application, you need to add the following to your pom.xml:

[source,xml]
----
<dependency>
  <groupId>com.datastax.oss</groupId>
  <artifactId>java-driver-metrics-micrometer</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-micrometer-registry-prometheus</artifactId>
</dependency>
----

This guide uses Micrometer, so the above dependencies are already included in this guide's pom.xml.

=== Enabling Metrics with MicroProfile Metrics

Remove any dependency on Micrometer from your pom.xml, then add the following ones instead:

[source,xml]
----
<dependency>
  <groupId>com.datastax.oss</groupId>
  <artifactId>java-driver-metrics-microprofile</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-metrics</artifactId>
</dependency>
----

=== Enabling Cassandra Metrics

Even when metrics are enabled in your application, the Cassandra client will not report any metrics
unless you opt in to this feature. So your next step is to enable Cassandra metrics in your
`application.properties` file:

[source,properties]
----
quarkus.cassandra.metrics.enabled=true
----

That's it!

The final (and optional) step is to customize which specific Cassandra metrics you would like the
Cassandra client to track. Several metrics can be tracked; if you skip this step, a default set of
useful metrics will be automatically tracked.

TIP: For the full list of available metric names, please refer to the
link:https://docs.datastax.com/en/developer/java-driver/latest/manual/core/configuration/reference/[driver
settings reference] page; search for the `advanced.metrics` section.
Also, Cassandra driver metrics are covered in detail in the
https://docs.datastax.com/en/developer/java-driver/latest/manual/core/metrics/[driver manual].

If you do wish to customize which metrics to track, you should use the following properties:

* `quarkus.cassandra.metrics.session.enabled` should contain the session-level metrics to enable
(metrics that are global to the session).
* `quarkus.cassandra.metrics.node.enabled` should contain the node-level metrics to enable (metrics
for which each node contacted by the Cassandra client gets its own metric value).

Both properties accept a comma-separated list of valid metric names.

For example, let's assume that you wish to enable the following three Cassandra metrics:

* Session-level: `session.connected-nodes` and `session.bytes-sent`;
* Node-level: `node.pool.open-connections`.

Then you should add the following settings to your `application.properties`:

[source,properties]
----
quarkus.cassandra.metrics.enabled=true
quarkus.cassandra.metrics.session.enabled=connected-nodes,bytes-sent
quarkus.cassandra.metrics.node.enabled=pool.open-connections
----

This guide's `application.properties` file already has many metrics enabled; you can use its metrics
list as a good starting point for exposing useful Cassandra metrics in your application.

When metrics are properly enabled, metric reports for all enabled metrics are available at the
`/metrics` REST endpoint of your application.

Running in dev mode with `mvn clean quarkus:dev`, if you point your browser to
`http://localhost:8080/metrics` you should see a list of metrics; search for metrics whose names
contain `cassandra`.

IMPORTANT: For Cassandra metrics to show up, the Cassandra client needs to be initialized and
connected; if you are using lazy initialization (see below), you won't see any Cassandra metrics
until your application actually connects and hits the database for the first time.

== Running in native mode

If you installed GraalVM, you can link:https://quarkus.io/guides/building-native-image[build a
native image] using:

[source,shell]
----
mvn clean package -Dnative
----

Beware that native compilation can take a significant amount of time! Once the compilation is done,
you can run the native executable as follows:

[source,shell]
----
./target/cassandra-quarkus-quickstart-*-runner
----

You can then point your browser to `http://localhost:8080/fruits.html` and use your application.

== Eager vs Lazy Initialization

This extension allows you to inject either:

- a `QuarkusCqlSession` bean;
- or the asynchronous version of this bean, that is, `CompletionStage<QuarkusCqlSession>`;
- or the reactive version of this bean, that is, `Uni<QuarkusCqlSession>`.

The most straightforward approach is obviously to inject `QuarkusCqlSession` directly.
This should
work just fine for most applications; however, the `QuarkusCqlSession` bean needs to be initialized
before it can be used, and this process is blocking.

Fortunately, it is possible to control when the initialization should happen: the
`quarkus.cassandra.init.eager-init` parameter determines if the `QuarkusCqlSession` bean should be
initialized on its first access (lazy) or when the application is starting (eager). The default
value of this parameter is `false`, meaning the init process is lazy: the `QuarkusCqlSession` bean
will be initialized lazily on its first access - for example, when the first REST request
that needs to interact with the Cassandra database comes in.

Using lazy initialization speeds up your application startup time, and avoids startup failures if
the Cassandra database is not available. However, it could also prove dangerous if your code is
fully asynchronous, e.g. if you are using https://quarkus.io/guides/reactive-routes[reactive
routes]: indeed, the lazy initialization could accidentally happen on a thread that is not allowed
to block, such as a Vert.x event loop thread. Therefore, setting `quarkus.cassandra.init.eager-init`
to `false` and injecting `QuarkusCqlSession` directly should be avoided in these contexts.

If you want to use Vert.x (or any other reactive framework) and keep the lazy initialization
behavior, you should instead inject only `CompletionStage<QuarkusCqlSession>` or
`Uni<QuarkusCqlSession>`. When injecting these beans, the initialization process will be triggered
lazily, but it will happen in the background, in a non-blocking way, leveraging the Vert.x event
loop. This way you don't risk blocking the Vert.x thread.

Alternatively, you can set `quarkus.cassandra.init.eager-init` to `true`: in this case the session
bean will be initialized eagerly during application startup, on the Quarkus main thread. This would
eliminate any risk of blocking a Vert.x thread, at the cost of making your startup time (much)
longer.
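The background, non-blocking flavor of lazy initialization described above can be sketched with plain JDK types. The `LazySessionHolder` class below is a hypothetical illustration of the pattern (memoizing a single asynchronous initialization), not the extension's actual implementation:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical holder illustrating lazy, non-blocking initialization: the
// expensive resource is created at most once, on a worker thread, and every
// caller immediately receives a CompletionStage instead of blocking.
public class LazySessionHolder<T> {

    private final Supplier<T> factory;
    private final AtomicReference<CompletableFuture<T>> ref = new AtomicReference<>();

    public LazySessionHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    public CompletableFuture<T> get() {
        CompletableFuture<T> existing = ref.get();
        if (existing != null) {
            return existing; // already initialized (or in progress)
        }
        CompletableFuture<T> created = new CompletableFuture<>();
        if (ref.compareAndSet(null, created)) {
            // First caller wins: initialization runs on a ForkJoinPool worker,
            // never on the caller's (potentially event-loop) thread.
            CompletableFuture.supplyAsync(factory).whenComplete((value, error) -> {
                if (error != null) {
                    created.completeExceptionally(error);
                } else {
                    created.complete(value);
                }
            });
            return created;
        }
        return ref.get(); // another caller started initialization concurrently
    }

    public static void main(String[] args) throws Exception {
        LazySessionHolder<String> holder =
                new LazySessionHolder<>(() -> "connected-session");
        // Nothing has been initialized yet; the first get() triggers it.
        System.out.println(holder.get().get());
    }
}
```

This mirrors why injecting the `CompletionStage` or `Uni` wrapper is safe on an event loop: the caller only ever receives a future, while the blocking connection work happens elsewhere.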

== Conclusion

Accessing a Cassandra database from a client application is easy with Quarkus and the Cassandra
extension, which provides configuration and native support for the DataStax Java driver for Apache
Cassandra.

diff --git a/_versions/2.7/guides/cdi-integration.adoc b/_versions/2.7/guides/cdi-integration.adoc
deleted file mode 100644
index 338fef2856c..00000000000
--- a/_versions/2.7/guides/cdi-integration.adoc
+++ /dev/null
@@ -1,557 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= CDI Integration Guide

include::./attributes.adoc[]
:numbered:
:toc:
:toclevels: 2

ArC, the CDI container, is bootstrapped at build time.
The downside of this approach is that CDI Portable Extensions cannot be supported.
Nevertheless, the functionality can be achieved using the Quarkus-specific extensions API.

The container is bootstrapped in multiple phases.
From a high-level perspective, these phases go as follows:

1. Initialization
2. Bean discovery
3. Registration of synthetic components
4. Validation

In the _initialization_ phase the preparatory work is carried out and custom contexts are registered.
_Bean discovery_ is then the process where the container analyzes all application classes, identifies beans and wires them all together based on the provided metadata.
Subsequently, the extensions can register _synthetic components_.
Attributes of these components are fully controlled by the extensions, i.e. they are not derived from an existing class.
Finally, the _deployment is validated_.
For example, the container validates every injection point in the application and fails the build if there is no bean that satisfies the given required type and qualifiers.

TIP: You can see more information about the bootstrap by enabling additional logging.
Simply run the Maven build with `-X` or `--debug` and grep the lines that contain `io.quarkus.arc`. In the <>, you can use `quarkus.log.category."io.quarkus.arc.processor".level=DEBUG`; two special endpoints are also registered automatically to provide some basic debug info in the JSON format.

Quarkus build steps can produce and consume various build items and hook into each phase.
In the following sections we will describe all the relevant build items and common scenarios.

== Metadata Sources

Classes and annotations are the primary source of bean-level metadata.
The initial metadata are read from the _bean archive index_, an immutable https://github.com/wildfly/jandex[Jandex index, window="_blank"] which is built from various sources during <>.
However, extensions can add, remove or transform the metadata at certain stages of the bootstrap.
Moreover, extensions can also register <>.
This is an important aspect to realize when integrating CDI components in Quarkus.

This way, extensions can turn classes that would otherwise be ignored into beans, and vice versa.
For example, a class that declares a `@Scheduled` method is always registered as a bean even if it is not annotated with a bean defining annotation and would normally be ignored.

:sectnums:
:sectnumlevels: 4

== Use Case - My Class Is Not Recognized as a Bean

An `UnsatisfiedResolutionException` indicates a problem during <>.
Sometimes an injection point cannot be satisfied even if there is a class on the classpath that appears to be eligible for injection.
There are several reasons why a class is not recognized, and also several ways to fix it.
In the first step we should identify the _reason_.

[[additional_bean_build_item]]
=== _Reason 1_: Class Is Not Discovered

Quarkus has a <>.
It might happen that the class is not part of the application index.
For example, classes from the _runtime module_ of a Quarkus extension are not indexed automatically.

_Solution_: Use the `AdditionalBeanBuildItem`.
This build item can be used to specify one or more additional classes to be analyzed during the discovery.
Additional bean classes are transparently added to the application index processed by the container.

IMPORTANT: It is not possible to conditionally enable/disable additional beans via the `@IfBuildProfile`, `@UnlessBuildProfile`, `@IfBuildProperty` and `@UnlessBuildProperty` annotations as described in <> and <>. Extensions should inspect the configuration or the current profile and only produce an `AdditionalBeanBuildItem` if really needed.

.`AdditionalBeanBuildItem` Example
[source,java]
----
@BuildStep
AdditionalBeanBuildItem additionalBeans() {
    return new AdditionalBeanBuildItem(SmallRyeHealthReporter.class, HealthServlet.class); <1>
}
----
<1> `AdditionalBeanBuildItem.Builder` can be used for more complex use cases.

Bean classes added via `AdditionalBeanBuildItem` are _removable_ by default.
If the container considers them <>, they are just ignored.
However, you can use the `AdditionalBeanBuildItem.Builder.setUnremovable()` method to instruct the container to never remove bean classes registered via this build item.
See also <> and <> for more details.

It is also possible to set the default scope via `AdditionalBeanBuildItem.Builder#setDefaultScope()`.
The default scope is only used if there is no scope declared on the bean class.

NOTE: If no default scope is specified, the `@Dependent` pseudo-scope is used.

=== _Reason 2_: Class Is Discovered but Has No Bean Defining Annotation

In Quarkus, the application is represented by a single bean archive with the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#default_bean_discovery[bean discovery mode `annotated`, window="_blank"].
Therefore, bean classes that don't have a https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#bean_defining_annotations[bean defining annotation, window="_blank"] are ignored.
Bean defining annotations are declared on the class level and include scopes, stereotypes and `@Interceptor`.

_Solution 1_: Use the `AutoAddScopeBuildItem`. This build item can be used to add a scope to a class that meets certain conditions.

.`AutoAddScopeBuildItem` Example
[source,java]
----
@BuildStep
AutoAddScopeBuildItem autoAddScope() {
    return AutoAddScopeBuildItem.builder().containsAnnotations(SCHEDULED_NAME, SCHEDULES_NAME) <1>
            .defaultScope(BuiltinScope.SINGLETON) <2>
            .build();
}
----
<1> Find all classes annotated with `@Scheduled`.
<2> Add `@Singleton` as the default scope. Classes already annotated with a scope are skipped automatically.

_Solution 2_: If you need to process classes annotated with a specific annotation then it's possible to extend the set of bean defining annotations via the `BeanDefiningAnnotationBuildItem`.

.`BeanDefiningAnnotationBuildItem` Example
[source,java]
----
@BuildStep
BeanDefiningAnnotationBuildItem additionalBeanDefiningAnnotation() {
    return new BeanDefiningAnnotationBuildItem(Annotations.GRAPHQL_API); <1>
}
----
<1> Add `org.eclipse.microprofile.graphql.GraphQLApi` to the set of bean defining annotations.

Bean classes added via `BeanDefiningAnnotationBuildItem` are _not removable_ by default, i.e. the resulting beans must not be removed even if they are considered unused.
However, you can change the default behavior.
See also <> and <> for more details.

It is also possible to specify the default scope.
The default scope is only used if there is no scope declared on the bean class.

NOTE: If no default scope is specified, the `@Dependent` pseudo-scope is used.

[[unremovable_builditem]]
=== _Reason 3_: Class Was Discovered and Has a Bean Defining Annotation but Was Removed

The container attempts to <> during the build by default.
This optimization allows for _framework-level dead code elimination_.
In a few special cases, it's not possible to correctly identify an unused bean.
In particular, Quarkus is not able to detect the usage of the `CDI.current()` static method yet.
Extensions can eliminate possible false positives by producing an `UnremovableBeanBuildItem`.

.`UnremovableBeanBuildItem` Example
[source,java]
----
@BuildStep
UnremovableBeanBuildItem unremovableBeans() {
    return UnremovableBeanBuildItem.targetWithAnnotation(STARTUP_NAME); <1>
}
----
<1> Make all classes annotated with `@Startup` unremovable.

== Use Case - My Annotation Is Not Recognized as a Qualifier or an Interceptor Binding

It is likely that the annotation class is not part of the application index.
For example, classes from the _runtime module_ of a Quarkus extension are not indexed automatically.

_Solution_: Use the `AdditionalBeanBuildItem` as described in <>.

[[annotations_transformer_build_item]]
== Use Case - I Need To Transform Metadata

In some cases, it's useful to be able to modify the metadata.
Quarkus provides a powerful alternative to https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#process_annotated_type[`javax.enterprise.inject.spi.ProcessAnnotatedType`, window="_blank"].
With an `AnnotationsTransformerBuildItem` it's possible to override the annotations that exist on bean classes.

For example, you might want to add an interceptor binding to a specific bean class.
Here is how to do it:

.`AnnotationsTransformerBuildItem` Example
[source,java]
----
@BuildStep
AnnotationsTransformerBuildItem transform() {
    return new AnnotationsTransformerBuildItem(new AnnotationsTransformer() {

        public boolean appliesTo(org.jboss.jandex.AnnotationTarget.Kind kind) {
            return kind == org.jboss.jandex.AnnotationTarget.Kind.CLASS; <1>
        }

        public void transform(TransformationContext context) {
            if (context.getTarget().asClass().name().toString().equals("org.acme.Bar")) {
                context.transform().add(MyInterceptorBinding.class).done(); <2>
            }
        }
    });
}
----
<1> The transformer is only applied to classes.
<2> If the class name equals `org.acme.Bar`, then add `@MyInterceptorBinding`. Don't forget to invoke `Transformation#done()`.

NOTE: Keep in mind that annotation transformers must be produced _before_ the bean discovery starts.

Build steps can query the transformed annotations for a given annotation target via the `TransformedAnnotationsBuildItem`.

.`TransformedAnnotationsBuildItem` Example
[source,java]
----
@BuildStep
void queryAnnotations(TransformedAnnotationsBuildItem transformedAnnotations,
        BuildProducer<MyBuildItem> myBuildItem) {
    ClassInfo myClazz = ...;
    if (transformedAnnotations.getAnnotations(myClazz).isEmpty()) { <1>
        myBuildItem.produce(new MyBuildItem());
    }
}
----
<1> `TransformedAnnotationsBuildItem.getAnnotations()` will return a possibly transformed set of annotations.

NOTE: There are other build items specialized in transformation: <> and <>.

[[inspect_beans]]
== Use Case - Inspect Beans, Observers and Injection Points

=== _Solution 1_: `BeanDiscoveryFinishedBuildItem`

Consumers of `BeanDiscoveryFinishedBuildItem` can easily inspect all class-based beans, observers and injection points registered in the application.
However, synthetic beans and observers are _not included_ because this build item is produced _before_ the synthetic components are registered.

Additionally, the bean resolver returned from `BeanDiscoveryFinishedBuildItem#getBeanResolver()` can be used to apply the type-safe resolution rules, e.g. to find out whether there is a bean that would satisfy a certain combination of required type and qualifiers.

.`BeanDiscoveryFinishedBuildItem` Example
[source,java]
----
@BuildStep
void doSomethingWithNamedBeans(BeanDiscoveryFinishedBuildItem beanDiscovery,
        BuildProducer<NamedBeansBuildItem> namedBeans) {
    List<BeanInfo> beans = beanDiscovery.beanStream().withName().collect(toList()); <1>
    namedBeans.produce(new NamedBeansBuildItem(beans));
}
----
<1> The resulting list will not contain `@Named` synthetic beans.

=== _Solution 2_: `SynthesisFinishedBuildItem`

Consumers of `SynthesisFinishedBuildItem` can easily inspect all beans, observers and injection points registered in the application. Synthetic beans and observers are included because this build item is produced _after_ the synthetic components are registered.

Additionally, the bean resolver returned from `SynthesisFinishedBuildItem#getBeanResolver()` can be used to apply the type-safe resolution rules, e.g. to find out whether there is a bean that would satisfy a certain combination of required type and qualifiers.

.`SynthesisFinishedBuildItem` Example
[source,java]
----
@BuildStep
void doSomethingWithNamedBeans(SynthesisFinishedBuildItem synthesisFinished,
        BuildProducer<NamedBeansBuildItem> namedBeans) {
    List<BeanInfo> beans = synthesisFinished.beanStream().withName().collect(toList()); <1>
    namedBeans.produce(new NamedBeansBuildItem(beans));
}
----
<1> The resulting list will contain `@Named` synthetic beans.

[[synthetic_beans]]
== Use Case - The Need for Synthetic Beans

Sometimes it is practical to be able to register a _synthetic bean_.
Bean attributes of a synthetic bean are not derived from a Java class, method or field.
Instead, all the attributes are defined by an extension.
In regular CDI, this could be achieved using the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#after_bean_discovery[`AfterBeanDiscovery.addBean()`, window="_blank"] methods.

_Solution_: If you need to register a synthetic bean then use the `SyntheticBeanBuildItem`.

.`SyntheticBeanBuildItem` Example 1
[source,java]
----
@BuildStep
SyntheticBeanBuildItem syntheticBean() {
    return SyntheticBeanBuildItem.configure(String.class)
            .qualifiers(new MyQualifierLiteral())
            .creator(mc -> mc.returnValue(mc.load("foo"))) <1>
            .done();
}
----
<1> Generate the bytecode of the `javax.enterprise.context.spi.Contextual#create(CreationalContext)` implementation.

The output of a bean configurator is recorded as bytecode.
Therefore, there are some limitations in how a synthetic bean instance is created at runtime.
You can:

1. Generate the bytecode of the `Contextual#create(CreationalContext)` method directly via `ExtendedBeanConfigurator.creator(Consumer)`.
2. Pass an `io.quarkus.arc.BeanCreator` implementation class via `ExtendedBeanConfigurator#creator(Class)`, and possibly specify some parameters via `ExtendedBeanConfigurator#param()`.
3. Produce the runtime instance through a proxy returned from a <> and set it via `ExtendedBeanConfigurator#runtimeValue(RuntimeValue)` or `ExtendedBeanConfigurator#supplier(Supplier)`.

.`SyntheticBeanBuildItem` Example 2
[source,java]
----
@BuildStep
@Record(STATIC_INIT) <1>
SyntheticBeanBuildItem syntheticBean(TestRecorder recorder) {
    return SyntheticBeanBuildItem.configure(Foo.class).scope(Singleton.class)
            .runtimeValue(recorder.createFoo()) <2>
            .done();
}
----
<1> By default, a synthetic bean is initialized during `STATIC_INIT`.
<2> The bean instance is supplied by a value returned from a recorder method.

It is possible to mark a synthetic bean to be initialized during `RUNTIME_INIT`.
See the <> for more information about the difference between `STATIC_INIT` and `RUNTIME_INIT`.

.`RUNTIME_INIT` `SyntheticBeanBuildItem` Example
[source,java]
----
@BuildStep
@Record(RUNTIME_INIT) <1>
SyntheticBeanBuildItem syntheticBean(TestRecorder recorder) {
    return SyntheticBeanBuildItem.configure(Foo.class).scope(Singleton.class)
            .setRuntimeInit() <2>
            .runtimeValue(recorder.createFoo())
            .done();
}
----
<1> The recorder must be executed in the `ExecutionTime.RUNTIME_INIT` phase.
<2> The bean instance is initialized during `RUNTIME_INIT`.

[IMPORTANT]
====
Synthetic beans initialized during `RUNTIME_INIT` must not be accessed during `STATIC_INIT`.
`RUNTIME_INIT` build steps that access a runtime-init synthetic bean should consume the `SyntheticBeansRuntimeInitBuildItem`:

[source,java]
----
@BuildStep
@Record(RUNTIME_INIT)
@Consume(SyntheticBeansRuntimeInitBuildItem.class) <1>
void accessFoo(TestRecorder recorder) {
    recorder.foo(); <2>
}
----
<1> This build step must be executed after `syntheticBean()` completes.
<2> This recorder method results in an invocation upon the `Foo` bean instance and thus we need to make sure that the build step is executed after all synthetic beans are initialized.
====

NOTE: It is also possible to use the `BeanRegistrationPhaseBuildItem` to register a synthetic bean. However, we recommend extension authors to stick with `SyntheticBeanBuildItem`, which is more idiomatic for Quarkus.

[[synthetic_observers]]
== Use Case - Synthetic Observers

Similar to <>, the attributes of a synthetic observer method are not derived from a Java method. Instead, all the attributes are defined by an extension.

_Solution_: If you need to register a synthetic observer, use the `ObserverRegistrationPhaseBuildItem`.

IMPORTANT: A build step that consumes the `ObserverRegistrationPhaseBuildItem` should always produce an `ObserverConfiguratorBuildItem` or at least inject a `BuildProducer` for this build item, otherwise it could be ignored or processed at the wrong time (e.g. after the correct CDI bootstrap phase).

.`ObserverRegistrationPhaseBuildItem` Example
[source,java]
----
@BuildStep
void syntheticObserver(ObserverRegistrationPhaseBuildItem observerRegistrationPhase,
        BuildProducer<MyBuildItem> myBuildItem,
        BuildProducer<ObserverConfiguratorBuildItem> observerConfigurationRegistry) {
    observerConfigurationRegistry.produce(new ObserverConfiguratorBuildItem(observerRegistrationPhase.getContext()
            .configure()
            .beanClass(DotName.createSimple(MyBuildStep.class.getName()))
            .observedType(String.class)
            .notify(mc -> {
                // do some gizmo bytecode generation...
            })));
    myBuildItem.produce(new MyBuildItem());
}
----

The output of an `ObserverConfigurator` is recorded as bytecode.
Therefore, there are some limitations in how a synthetic observer is invoked at runtime.
Currently, you must generate the bytecode of the method body directly.

[[generated_beans]]
== Use Case - I Have a Generated Bean Class

No problem.
You can generate the bytecode of a bean class manually and then all you need to do is to produce a `GeneratedBeanBuildItem` instead of a `GeneratedClassBuildItem`.

.`GeneratedBeanBuildItem` Example
[source,java]
----
@BuildStep
void generatedBean(BuildProducer<GeneratedBeanBuildItem> generatedBeans) {
    ClassOutput beansClassOutput = new GeneratedBeanGizmoAdaptor(generatedBeans); <1>
    ClassCreator beanClassCreator = ClassCreator.builder().classOutput(beansClassOutput)
            .className("org.acme.MyBean")
            .build();
    beanClassCreator.addAnnotation(Singleton.class);
    beanClassCreator.close(); <2>
}
----
<1> `io.quarkus.arc.deployment.GeneratedBeanGizmoAdaptor` makes it easy to produce ``GeneratedBeanBuildItem``s from Gizmo constructs.
<2> The resulting bean class is something like `@Singleton public class MyBean { }`.

== Use Case - I Need to Validate the Deployment

Sometimes extensions need to inspect the beans, observers and injection points, then perform additional validations and fail the build if something is wrong.

_Solution_: If an extension needs to validate the deployment it should use the `ValidationPhaseBuildItem`.

IMPORTANT: A build step that consumes the `ValidationPhaseBuildItem` should always produce a `ValidationErrorBuildItem` or at least inject a `BuildProducer` for this build item, otherwise it could be ignored or processed at the wrong time (e.g. after the correct CDI bootstrap phase).

[source,java]
----
@BuildStep
void validate(ValidationPhaseBuildItem validationPhase,
        BuildProducer<MyBuildItem> myBuildItem,
        BuildProducer<ValidationErrorBuildItem> errors) {
    if (someCondition) {
        errors.produce(new ValidationErrorBuildItem(new IllegalStateException()));
        myBuildItem.produce(new MyBuildItem());
    }
}
----

TIP: You can easily filter all registered beans via the convenient `BeanStream` returned from the `ValidationPhaseBuildItem.getContext().beans()` method.

[[custom_context]]
== Use Case - Register a Custom CDI Context

Sometimes extensions need to extend the set of built-in CDI contexts.

_Solution_: If you need to register a custom context, use the `ContextRegistrationPhaseBuildItem`.

IMPORTANT: A build step that consumes the `ContextRegistrationPhaseBuildItem` should always produce a `ContextConfiguratorBuildItem` or at least inject a `BuildProducer` for this build item, otherwise it could be ignored or processed at the wrong time (e.g. after the correct CDI bootstrap phase).

.`ContextRegistrationPhaseBuildItem` Example
[source,java]
----
@BuildStep
ContextConfiguratorBuildItem registerContext(ContextRegistrationPhaseBuildItem phase) {
    return new ContextConfiguratorBuildItem(phase.getContext().configure(TransactionScoped.class).normal().contextClass(TransactionContext.class));
}
----

Additionally, each extension that registers a custom CDI context via `ContextRegistrationPhaseBuildItem` should also produce the `CustomScopeBuildItem` in order to contribute the custom scope annotation name to the set of bean defining annotations.

.`CustomScopeBuildItem` Example
[source,java]
----
@BuildStep
CustomScopeBuildItem customScope() {
    return new CustomScopeBuildItem(DotName.createSimple(TransactionScoped.class.getName()));
}
----

=== What if I Need to Know All the Scopes Used in the Application?

_Solution_: You can inject the `CustomScopeAnnotationsBuildItem` in a build step and use the convenient methods such as `CustomScopeAnnotationsBuildItem.isScopeDeclaredOn()`.

[[additional_interceptor_bindings]]
== Use Case - Additional Interceptor Bindings

In rare cases it might be handy to programmatically register an existing annotation that is not annotated with `@javax.interceptor.InterceptorBinding` as an interceptor binding.
This is similar to what CDI achieves through `BeforeBeanDiscovery#addInterceptorBinding()`.
We are going to use `InterceptorBindingRegistrarBuildItem` to get it done.

.`InterceptorBindingRegistrarBuildItem` Example
[source,java]
----
@BuildStep
InterceptorBindingRegistrarBuildItem addInterceptorBindings() {
    return new InterceptorBindingRegistrarBuildItem(new InterceptorBindingRegistrar() {
        @Override
        public List<InterceptorBinding> getAdditionalBindings() {
            return List.of(InterceptorBinding.of(NotAnInterceptorBinding.class));
        }
    });
}
----

== Use Case - Additional Qualifiers

Sometimes it might be useful to register an existing annotation that is not annotated with `@javax.inject.Qualifier` as a CDI qualifier.
This is similar to what CDI achieves through `BeforeBeanDiscovery#addQualifier()`.
We are going to use `QualifierRegistrarBuildItem` to get it done.

.`QualifierRegistrarBuildItem` Example
[source,java]
----
@BuildStep
QualifierRegistrarBuildItem addQualifiers() {
    return new QualifierRegistrarBuildItem(new QualifierRegistrar() {
        @Override
        public Map<DotName, Set<String>> getAdditionalQualifiers() {
            return Collections.singletonMap(DotName.createSimple(NotAQualifier.class.getName()),
                    Collections.emptySet());
        }
    });
}
----

[[injection_point_transformation]]
== Use Case - Injection Point Transformation

Every now and then it is handy to be able to change the qualifiers of an injection point programmatically.
You can do just that with `InjectionPointTransformerBuildItem`.
The following sample shows how to apply transformation to injection points with type `Foo` that contain the qualifier `MyQualifier`:

.`InjectionPointTransformerBuildItem` Example
[source,java]
----
@BuildStep
InjectionPointTransformerBuildItem transformer() {
    return new InjectionPointTransformerBuildItem(new InjectionPointsTransformer() {

        public boolean appliesTo(Type requiredType) {
            return requiredType.name().equals(DotName.createSimple(Foo.class.getName()));
        }

        public void transform(TransformationContext context) {
            if (context.getQualifiers().stream()
                    .anyMatch(a -> a.name().equals(DotName.createSimple(MyQualifier.class.getName())))) {
                context.transform()
                        .removeAll()
                        .add(DotName.createSimple(MyOtherQualifier.class.getName()))
                        .done();
            }
        }
    });
}
----

NOTE: In theory, you can use an annotation transformer to achieve the same goal. However, there are a few differences that make `InjectionPointsTransformer` more suitable for this particular task: (1) annotation transformers are applied to all classes during bean discovery, whereas `InjectionPointsTransformer` is only applied to discovered injection points after bean discovery; (2) with `InjectionPointsTransformer` you don't need to handle various types of injection points (fields, parameters of initializer methods, etc.).

== Use Case - Resource Annotations and Injection

The `ResourceAnnotationBuildItem` can be used to specify resource annotations that make it possible to resolve non-CDI injection points, such as Jakarta EE resources.
An integrator must also provide a corresponding `io.quarkus.arc.ResourceReferenceProvider` service provider implementation.

.`ResourceAnnotationBuildItem` Example
[source,java]
----
@BuildStep
void setupResourceInjection(BuildProducer<ResourceAnnotationBuildItem> resourceAnnotations,
        BuildProducer<GeneratedResourceBuildItem> resources) {
    resources.produce(new GeneratedResourceBuildItem("META-INF/services/io.quarkus.arc.ResourceReferenceProvider",
            MyResourceReferenceProvider.class.getName().getBytes()));
    resourceAnnotations.produce(new ResourceAnnotationBuildItem(DotName.createSimple(MyAnnotation.class.getName())));
}
----

[[build_metadata]]
== Available Build Time Metadata

Any of the above extensions that operate with `BuildExtension.BuildContext` can leverage certain build time metadata that are generated during the build.
The built-in keys located in `io.quarkus.arc.processor.BuildExtension.Key` are:

ANNOTATION_STORE:: Contains an `AnnotationStore` that keeps information about all `AnnotationTarget` annotations after the application of annotation transformers
INJECTION_POINTS:: `Collection<InjectionPointInfo>` containing all injection points
BEANS:: `Collection<BeanInfo>` containing all beans
REMOVED_BEANS:: `Collection<BeanInfo>` containing all the removed beans; see the section on removing unused beans for more information
OBSERVERS:: `Collection<ObserverInfo>` containing all observers
SCOPES:: `Collection<ScopeInfo>` containing all scopes, including custom ones
QUALIFIERS:: `Map<DotName, ClassInfo>` containing all qualifiers
INTERCEPTOR_BINDINGS:: `Map<DotName, ClassInfo>` containing all interceptor bindings
STEREOTYPES:: `Map<DotName, ClassInfo>` containing all stereotypes

To get hold of these, simply query the extension context object for a given key.
Note that these metadata are made available as the build proceeds, which means that extensions can only leverage metadata that were built before the extensions are invoked.
If your extension attempts to retrieve metadata that wasn't yet produced, `null` will be returned.
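The null-on-missing contract described above can be sketched with a minimal typed-key context in plain Java. This is a simplification for illustration only: the class and key names below are hypothetical, not part of the Quarkus `BuildExtension` API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a typed-key build context: get() simply returns null
// for metadata that has not been produced yet, mirroring the behavior
// described above. All names here are hypothetical, not Quarkus API.
class SketchContext {

    // A key carries the expected value type, similar in spirit to BuildExtension.Key.
    static final class Key<T> {
        final String name;
        Key(String name) { this.name = name; }
    }

    private final Map<Key<?>, Object> values = new HashMap<>();

    <T> void put(Key<T> key, T value) {
        values.put(key, value);
    }

    @SuppressWarnings("unchecked")
    <T> T get(Key<T> key) {
        // No exception for missing metadata - the caller just gets null.
        return (T) values.get(key);
    }
}
```

The point of the sketch is the ordering constraint: a consumer that runs before a producer simply observes `null`, so extension code must be prepared for that case.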
-Here is a summary of which extensions can access which metadata: - -AnnotationsTransformer:: Shouldn't rely on any metadata as it could be used at any time in any phase of the bootstrap -ContextRegistrar:: Has access to `ANNOTATION_STORE`, `QUALIFIERS`, `INTERCEPTOR_BINDINGS`, `STEREOTYPES` -InjectionPointsTransformer:: Has access to `ANNOTATION_STORE`, `QUALIFIERS`, `INTERCEPTOR_BINDINGS`, `STEREOTYPES` -ObserverTransformer:: Has access to `ANNOTATION_STORE`, `QUALIFIERS`, `INTERCEPTOR_BINDINGS`, `STEREOTYPES` -BeanRegistrar:: Has access to `ANNOTATION_STORE`, `QUALIFIERS`, `INTERCEPTOR_BINDINGS`, `STEREOTYPES`, `BEANS` (class-based beans only), `OBSERVERS` (class-based observers only), `INJECTION_POINTS` -ObserverRegistrar:: Has access to `ANNOTATION_STORE`, `QUALIFIERS`, `INTERCEPTOR_BINDINGS`, `STEREOTYPES`, `BEANS`, `OBSERVERS` (class-based observers only), `INJECTION_POINTS` -BeanDeploymentValidator:: Has access to all build metadata diff --git a/_versions/2.7/guides/cdi-reference.adoc b/_versions/2.7/guides/cdi-reference.adoc deleted file mode 100644 index f7c94becfae..00000000000 --- a/_versions/2.7/guides/cdi-reference.adoc +++ /dev/null @@ -1,1030 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Contexts and Dependency Injection - -include::./attributes.adoc[] -:numbered: -:sectnums: -:sectnumlevels: 4 -:toc: - -Quarkus DI solution (also called ArC) is based on the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html[Contexts and Dependency Injection for Java 2.0, window="_blank"] specification. -However, it is not a full CDI implementation verified by the TCK. -Only a subset of the CDI features is implemented - see also <> and <>. - -TIP: If you're new to CDI then we recommend you to read the xref:cdi.adoc[Introduction to CDI] first. 

NOTE: Most of the existing CDI code should work just fine but there are some small differences which follow from the Quarkus architecture and goals.

[[bean_discovery]]
== Bean Discovery

Bean discovery in CDI is a complex process which involves legacy deployment structures and accessibility requirements of the underlying module architecture.
However, Quarkus is using a *simplified bean discovery*.
There is only a single bean archive with the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#default_bean_discovery[bean discovery mode `annotated`, window="_blank"] and no visibility boundaries.

The bean archive is synthesized from:

* the application classes,
* dependencies that contain a `beans.xml` descriptor (content is ignored),
* dependencies that contain a Jandex index - `META-INF/jandex.idx`,
* dependencies referenced by `quarkus.index-dependency` in `application.properties`,
* and Quarkus integration code.

Bean classes that don't have a https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#bean_defining_annotations[bean defining annotation, window="_blank"] are not discovered.
This behavior is defined by CDI.
But producer methods and fields and observer methods are discovered even if the declaring class is not annotated with a bean defining annotation (this behavior is different from what is defined in CDI).
In fact, the declaring bean classes are considered annotated with `@Dependent`.

NOTE: Quarkus extensions may declare additional discovery rules. For example, `@Scheduled` business methods are registered even if the declaring class is not annotated with a bean defining annotation.

=== How to Generate a Jandex Index

A dependency with a Jandex index is automatically scanned for beans.
To generate the index just add the following to your `pom.xml`:

[source,xml,subs="attributes+"]
----
<build>
    <plugins>
        <plugin>
            <groupId>org.jboss.jandex</groupId>
            <artifactId>jandex-maven-plugin</artifactId>
            <version>{jandex-maven-plugin-version}</version>
            <executions>
                <execution>
                    <id>make-index</id>
                    <goals>
                        <goal>jandex</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
----

If you are using Gradle, you can apply the following plugin to your `build.gradle`:

[source,groovy]
----
plugins {
    id 'org.kordamp.gradle.jandex' version '0.11.0'
}
----

If you can't modify the dependency, you can still index it by adding `quarkus.index-dependency` entries to your `application.properties`:

[source,properties]
----
quarkus.index-dependency.<name>.group-id=
quarkus.index-dependency.<name>.artifact-id=
quarkus.index-dependency.<name>.classifier=(this one is optional)
----

For example, the following entries ensure that the `org.acme:acme-api` dependency is indexed:

.Example application.properties
[source,properties]
----
quarkus.index-dependency.acme.group-id=org.acme <1>
quarkus.index-dependency.acme.artifact-id=acme-api <2>
----
<1> Value is a group id for a dependency identified by name `acme`.
<2> Value is an artifact id for a dependency identified by name `acme`.

=== How To Exclude Types and Dependencies from Discovery

It may happen that some beans from third-party libraries do not work correctly in Quarkus.
A typical example is a bean injecting a portable extension.
In such a case, it's possible to exclude types and dependencies from the bean discovery.
The `quarkus.arc.exclude-types` property accepts a list of string values that are used to match classes that should be excluded.
- -.Value Examples -|=== -|Value|Description -|`org.acme.Foo`| Match the fully qualified name of the class -|`org.acme.*`| Match classes with package `org.acme` -|`org.acme.**`| Match classes where the package starts with `org.acme` -|`Bar`| Match the simple name of the class -|=== - -.Example application.properties -[source,properties] ----- -quarkus.arc.exclude-types=org.acme.Foo,org.acme.*,Bar <1><2><3> ----- -<1> Exclude the type `org.acme.Foo`. -<2> Exclude all types from the `org.acme` package. -<3> Exclude all types whose simple name is `Bar` - -It is also possible to exclude a dependency artifact that would be otherwise scanned for beans. -For example, because it contains a `beans.xml` descriptor. - -.Example application.properties -[source,properties] ----- -quarkus.arc.exclude-dependency.acme.group-id=org.acme <1> -quarkus.arc.exclude-dependency.acme.artifact-id=acme-services <2> ----- -<1> Value is a group id for a dependency identified by name `acme`. -<2> Value is an artifact id for a dependency identified by name `acme`. - -== Native Executables and Private Members - -Quarkus is using GraalVM to build a native executable. -One of the limitations of GraalVM is the usage of https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/Limitations.md#reflection[Reflection, window="_blank"]. -Reflective operations are supported but all relevant members must be registered for reflection explicitly. -Those registrations result in a bigger native executable. - -And if Quarkus DI needs to access a private member it *has to use reflection*. -That's why Quarkus users are encouraged __not to use private members__ in their beans. -This involves injection fields, constructors and initializers, observer methods, producer methods and fields, disposers and interceptor methods. - -How to avoid using private members? 
-You can use package-private modifiers: - -[source,java] ----- -@ApplicationScoped -public class CounterBean { - - @Inject - CounterService counterService; <1> - - void onMessage(@Observes Event msg) { <2> - } -} ----- -<1> A package-private injection field. -<2> A package-private observer method. - -Or constructor injection: - -[source,java] ----- -@ApplicationScoped -public class CounterBean { - - private CounterService service; - - CounterBean(CounterService service) { <1> - this.service = service; - } -} ----- -<1> A package-private constructor injection. `@Inject` is optional in this particular case. - -[[supported_features]] -== Supported Features - -* Programming model -** Managed beans implemented by a Java class -*** `@PostConstruct` and `@PreDestroy` lifecycle callbacks -** Producer methods and fields, disposers -** Qualifiers -** Alternatives -** Stereotypes -* Dependency injection and lookup -** Field, constructor and initializer/setter injection -** Type-safe resolution -** Programmatic lookup via `javax.enterprise.inject.Instance` -** Client proxies -** Injection point metadata -* Scopes and contexts -** `@Dependent`, `@ApplicationScoped`, `@Singleton`, `@RequestScoped` and `@SessionScoped` -** Custom scopes and contexts -* Interceptors -** Business method interceptors: `@AroundInvoke` -** Interceptors for lifecycle event callbacks: `@PostConstruct`, `@PreDestroy`, `@AroundConstruct` -* Decorators -* Events and observer methods, including asynchronous events and transactional observer methods - -[[limitations]] -== Limitations - -* `@ConversationScoped` is not supported -* Portable Extensions are not supported -* `BeanManager` - only the following methods are implemented: `getBeans()`, `createCreationalContext()`, `getReference()`, `getInjectableReference()` , `resolve()`, `getContext()`, `fireEvent()`, `getEvent()` and `createInstance()` -* Specialization is not supported -* `beans.xml` descriptor content is ignored -* Passivation and passivating 
scopes are not supported -* Interceptor methods on superclasses are not implemented yet -* `@Interceptors` is not supported - -[[nonstandard_features]] -== Non-standard Features - -=== Eager Instantiation of Beans - -[[lazy_by_default]] -==== Lazy By Default - -By default, CDI beans are created lazily, when needed. -What exactly "needed" means depends on the scope of a bean. - -* A *normal scoped bean* (`@ApplicationScoped`, `@RequestScoped`, etc.) is needed when a method is invoked upon an injected instance (contextual reference per the specification). -+ -In other words, injecting a normal scoped bean will not suffice because a _client proxy_ is injected instead of a contextual instance of the bean. - -* A *bean with a pseudo-scope* (`@Dependent` and `@Singleton` ) is created when injected. - -.Lazy Instantiation Example -[source,java] ----- -@Singleton // => pseudo-scope -class AmazingService { - String ping() { - return "amazing"; - } -} - -@ApplicationScoped // => normal scope -class CoolService { - String ping() { - return "cool"; - } -} - -@Path("/ping") -public class PingResource { - - @Inject - AmazingService s1; <1> - - @Inject - CoolService s2; <2> - - @GET - public String ping() { - return s1.ping() + s2.ping(); <3> - } -} ----- -<1> Injection triggers the instantiation of `AmazingService`. -<2> Injection itself does not result in the instantiation of `CoolService`. A client proxy is injected. -<3> The first invocation upon the injected proxy triggers the instantiation of `CoolService`. - -[[startup_event]] -==== Startup Event - -However, if you really need to instantiate a bean eagerly you can: - -* Declare an observer of the `StartupEvent` - the scope of the bean does not matter in this case: -+ -[source,java] ----- -@ApplicationScoped -class CoolService { - void startup(@Observes StartupEvent event) { <1> - } -} ----- -<1> A `CoolService` is created during startup to service the observer method invocation. 
- -* Use the bean in an observer of the `StartupEvent` - normal scoped beans must be used as described in <>: -+ -[source,java] ----- -@Dependent -class MyBeanStarter { - - void startup(@Observes StartupEvent event, AmazingService amazing, CoolService cool) { <1> - cool.toString(); <2> - } -} ----- -<1> The `AmazingService` is created during injection. -<2> The `CoolService` is a normal scoped bean so we have to invoke a method upon the injected proxy to force the instantiation. - -* Annotate the bean with `@io.quarkus.runtime.Startup` as described in xref:lifecycle.adoc#startup_annotation[Startup annotation]: -+ -[source,java] ----- -@Startup // <1> -@ApplicationScoped -public class EagerAppBean { - - private final String name; - - EagerAppBean(NameGenerator generator) { // <2> - this.name = generator.createName(); - } -} ----- -<1> For each bean annotated with `@Startup` a synthetic observer of `StartupEvent` is generated. The default priority is used. -<2> The bean constructor is called when the application starts and the resulting contextual instance is stored in the application context. - -NOTE: Quarkus users are encouraged to always prefer the `@Observes StartupEvent` to `@Initialized(ApplicationScoped.class)` as explained in the xref:lifecycle.adoc[Application Initialization and Termination] guide. - -=== Request Context Lifecycle - -The request context is also active: - -* during notification of a synchronous observer method. - -The request context is destroyed: - -* after the observer notification completes for an event, if it was not already active when the notification started. - -NOTE: An event with qualifier `@Initialized(RequestScoped.class)` is fired when the request context is initialized for an observer notification. Moreover, the events with qualifiers `@BeforeDestroyed(RequestScoped.class)` and `@Destroyed(RequestScoped.class)` are fired when the request context is destroyed. 
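Paraphrased in plain Java, the request-context rule above is: activate the context for the notification only if it is not already active, and destroy it afterwards only if this notification activated it. The following is a rough sketch of that rule, with hypothetical names, not the actual ArC implementation:

```java
// Sketch of the "activate if needed, destroy only what you activated" rule
// used for the request context around synchronous observer notification.
// Hypothetical names; this is not the actual ArC implementation.
class RequestContextSketch {

    private boolean active;

    boolean isActive() {
        return active;
    }

    void notifyObservers(Runnable observers) {
        boolean activatedHere = !active; // remember whether this notification activated the context
        if (activatedHere) {
            active = true;               // @Initialized(RequestScoped.class) would fire here
        }
        try {
            observers.run();
        } finally {
            if (activatedHere) {
                active = false;          // @BeforeDestroyed/@Destroyed would fire here
            }
        }
    }
}
```

A notification that starts with the context already active leaves it untouched, which is exactly why the context is only destroyed "if it was not already active when the notification started".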
- -=== Qualified Injected Fields - -In CDI, if you declare a field injection point you need to use `@Inject` and optionally a set of qualifiers. - -[source,java] ----- - @Inject - @ConfigProperty(name = "cool") - String coolProperty; ----- - -In Quarkus, you can skip the `@Inject` annotation completely if the injected field declares at least one qualifier. - -[source,java] ----- - @ConfigProperty(name = "cool") - String coolProperty; ----- - -NOTE: With the notable exception of one special case discussed below, `@Inject` is still required for constructor and method injection. - -=== Simplified Constructor Injection - -In CDI, a normal scoped bean must always declare a no-args constructor (this constructor is normally generated by the compiler unless you declare any other constructor). -However, this requirement complicates constructor injection - you need to provide a dummy no-args constructor to make things work in CDI. - -[source,java] ----- -@ApplicationScoped -public class MyCoolService { - - private SimpleProcessor processor; - - MyCoolService() { // dummy constructor needed - } - - @Inject // constructor injection - MyCoolService(SimpleProcessor processor) { - this.processor = processor; - } -} ----- - -There is no need to declare dummy constructors for normal scoped bean in Quarkus - they are generated automatically. -Also if there's only one constructor there is no need for `@Inject`. - -[source,java] ----- -@ApplicationScoped -public class MyCoolService { - - private SimpleProcessor processor; - - MyCoolService(SimpleProcessor processor) { - this.processor = processor; - } -} ----- - -NOTE: We don't generate a no-args constructor automatically if a bean class extends a class that does not declare a no-args constructor. - -[[remove_unused_beans]] -=== Removing Unused Beans - -The container attempts to remove all unused beans, interceptors and decorators during build by default. 
-This optimization helps to minimize the amount of generated classes, thus conserving memory. -However, Quarkus can't detect the programmatic lookup performed via the `CDI.current()` static method. -Therefore, it is possible that a removal results in a false positive error, i.e. a bean is removed although it's actually used. -In such cases, you'll notice a big warning in the log. -Users and extension authors have several options <>. - -The optimization can be disabled by setting `quarkus.arc.remove-unused-beans` to `none` or `false`. -Quarkus also provides a middle ground where application beans are never removed whether or not they are unused, while the optimization proceeds normally for non application classes. -To use this mode, set `quarkus.arc.remove-unused-beans` to `fwk` or `framework`. - -==== What's Removed? - -Quarkus first identifies so-called _unremovable_ beans that form the roots in the dependency tree. -A good example is a JAX-RS resource class or a bean which declares a `@Scheduled` method. - -An _unremovable_ bean: - -* is excluded from removal by an extension, or -* has a name designated via `@Named`, or -* declares an observer method. - -An _unused_ bean: - -* is not _unremovable_, and -* is not eligible for injection to any injection point in the dependency tree, and -* does not declare any producer which is eligible for injection to any injection point in the dependency tree, and -* is not eligible for injection into any `javax.enterprise.inject.Instance` or `javax.inject.Provider` injection point. - -Unused interceptors and decorators are not associated with any bean. - -[TIP] -==== -When using the dev mode (running `./mvnw clean compile quarkus:dev`), you can see more information about which beans are being removed: - -1. In the console - just enable the DEBUG level in your `application.properties`, i.e. `quarkus.log.category."io.quarkus.arc.processor".level=DEBUG` -2. 
In the relevant Dev UI page -==== - -[[eliminate_false_positives]] -==== How To Eliminate False Positives - -Users can instruct the container to not remove any of their specific beans (even if they satisfy all the rules specified above) by annotating them with `@io.quarkus.arc.Unremovable`. -This annotation can be declared on a class, a producer method or field. - -Since this is not always possible, there is an option to achieve the same via `application.properties`. -The `quarkus.arc.unremovable-types` property accepts a list of string values that are used to match beans based on their name or package. - -.Value Examples -|=== -|Value|Description -|`org.acme.Foo`| Match the fully qualified name of the bean class -|`org.acme.*`| Match beans where the package of the bean class is `org.acme` -|`org.acme.**`| Match beans where the package of the bean class starts with `org.acme` -|`Bar`| Match the simple name of the bean class -|=== - -.Example application.properties -[source,properties] ----- -quarkus.arc.unremovable-types=org.acme.Foo,org.acme.*,Bar ----- - -Furthermore, extensions can eliminate false positives by producing an `UnremovableBeanBuildItem`. - -[[default_beans]] -=== Default Beans - -Quarkus adds a capability that CDI currently does not support which is to conditionally declare a bean if no other bean with equal types and qualifiers was declared by any available means (bean class, producer, synthetic bean, ...) -This is done using the `@io.quarkus.arc.DefaultBean` annotation and is best explained with an example. 

Say there is a Quarkus extension that among other things declares a few CDI beans like the following code does:

[source,java]
----
@Dependent
public class TracerConfiguration {

    @Produces
    public Tracer tracer(Reporter reporter, Configuration configuration) {
        return new Tracer(reporter, configuration);
    }

    @Produces
    @DefaultBean
    public Configuration configuration() {
        // create a Configuration
    }

    @Produces
    @DefaultBean
    public Reporter reporter(){
        // create a Reporter
    }
}
----

The idea is that the extension auto-configures things for the user, eliminating a lot of boilerplate - we can just `@Inject` a `Tracer` wherever it is needed.
Now imagine that in our application we would like to utilize the configured `Tracer`, but we need to customize it a little, for example by providing a custom `Reporter`.
The only thing that would be needed in our application would be something like the following:

[source,java]
----
@Dependent
public class CustomTracerConfiguration {

    @Produces
    public Reporter reporter(){
        // create a custom Reporter
    }
}
----

`@DefaultBean` allows extensions (or any other code for that matter) to provide defaults while backing off if beans of that type are supplied in any way Quarkus supports.

[[enable_build_profile]]
=== Enabling Beans for Quarkus Build Profile

Quarkus adds a capability that CDI currently does not support: conditionally enabling a bean when a Quarkus build time profile is enabled, via the `@io.quarkus.arc.profile.IfBuildProfile` and `@io.quarkus.arc.profile.UnlessBuildProfile` annotations.
When used in conjunction with `@io.quarkus.arc.DefaultBean`, these annotations allow for the creation of different bean configurations for different build profiles.

Imagine for instance that an application contains a bean named `Tracer`, which needs to do nothing in tests or dev mode, but works in its normal capacity for the production artifact.

An elegant way to create such beans is the following:

[source,java]
----
@Dependent
public class TracerConfiguration {

    @Produces
    @IfBuildProfile("prod")
    public Tracer realTracer(Reporter reporter, Configuration configuration) {
        return new RealTracer(reporter, configuration);
    }

    @Produces
    @DefaultBean
    public Tracer noopTracer() {
        return new NoopTracer();
    }
}
----

If instead, it is required that the `Tracer` bean also works in dev mode and only defaults to doing nothing for tests, then `@UnlessBuildProfile` would be ideal. The code would look like:

[source,java]
----
@Dependent
public class TracerConfiguration {

    @Produces
    @UnlessBuildProfile("test") // this will be enabled for both prod and dev build time profiles
    public Tracer realTracer(Reporter reporter, Configuration configuration) {
        return new RealTracer(reporter, configuration);
    }

    @Produces
    @DefaultBean
    public Tracer noopTracer() {
        return new NoopTracer();
    }
}
----

NOTE: The runtime profile has absolutely no effect on the bean resolution using `@IfBuildProfile` and `@UnlessBuildProfile`.

[[enable_build_properties]]
=== Enabling Beans for Quarkus Build Properties

Quarkus adds a capability that CDI currently does not support: conditionally enabling a bean when a Quarkus build time property has (or does not have) a specific value, via the `@io.quarkus.arc.properties.IfBuildProperty` and `@io.quarkus.arc.properties.UnlessBuildProperty` annotations.
When used in conjunction with `@io.quarkus.arc.DefaultBean`, these annotations allow for the creation of different bean configurations for different build properties.
- -The scenario we mentioned above with `Tracer` could also be implemented in the following way: - -[source,java] ----- -@Dependent -public class TracerConfiguration { - - @Produces - @IfBuildProperty(name = "some.tracer.enabled", stringValue = "true") - public Tracer realTracer(Reporter reporter, Configuration configuration) { - return new RealTracer(reporter, configuration); - } - - @Produces - @DefaultBean - public Tracer noopTracer() { - return new NoopTracer(); - } -} ----- - -TIP: `@IfBuildProperty` and `@UnlessBuildProperty` are repeatable annotations, i.e. a bean will only be enabled if **all** of the conditions defined by these annotations are satisfied. - -If instead, it is required that the `RealTracer` bean is only used if the `some.tracer.enabled` property is not `false`, then `@UnlessBuildProperty` would be ideal. The code would look like: - -[source,java] ----- -@Dependent -public class TracerConfiguration { - - @Produces - @UnlessBuildProperty(name = "some.tracer.enabled", stringValue = "false") - public Tracer realTracer(Reporter reporter, Configuration configuration) { - return new RealTracer(reporter, configuration); - } - - @Produces - @DefaultBean - public Tracer noopTracer() { - return new NoopTracer(); - } -} ----- - -NOTE: Properties set at runtime have absolutely no effect on the bean resolution using `@IfBuildProperty`. - -=== Declaring Selected Alternatives - -In CDI, an alternative bean may be selected either globally for an application by means of `@Priority`, or for a bean archive using a `beans.xml` descriptor. -Quarkus has a simplified bean discovery and the content of `beans.xml` is ignored. - -The disadvantage of `@javax.annotation.Priority` is that it has `@Target({ TYPE, PARAMETER })` and so it cannot be used for producer methods and fields. -This problem should be fixed in Common Annotations 2.1. -Users are encouraged to use `@io.quarkus.arc.Priority` instead, until Quarkus upgrades to this version of `jakarta.annotation-api`. 

However, it is also possible to select alternatives for an application using the unified configuration.
The `quarkus.arc.selected-alternatives` property accepts a list of string values that are used to match alternative beans.
If any value matches then the priority of `Integer#MAX_VALUE` is used for the relevant bean.
The priority declared via `@Priority` or `@AlternativePriority` is overridden.

.Value Examples
|===
|Value|Description
|`org.acme.Foo`| Match the fully qualified name of the bean class or the bean class of the bean that declares the producer
|`org.acme.*`| Match beans where the package of the bean class is `org.acme`
|`org.acme.**`| Match beans where the package of the bean class starts with `org.acme`
|`Bar`| Match the simple name of the bean class or the bean class of the bean that declares the producer
|===

.Example application.properties
[source,properties]
----
quarkus.arc.selected-alternatives=org.acme.Foo,org.acme.*,Bar
----

=== Simplified Producer Method Declaration

In CDI, a producer method must always be annotated with `@Produces`.

[source,java]
----
class Producers {

    @Inject
    @ConfigProperty(name = "cool")
    String coolProperty;

    @Produces
    @ApplicationScoped
    MyService produceService() {
        return new MyService(coolProperty);
    }
}
----

In Quarkus, you can skip the `@Produces` annotation completely if the producer method is annotated with a scope annotation, a stereotype or a qualifier.

[source,java]
----
class Producers {

    @ConfigProperty(name = "cool")
    String coolProperty;

    @ApplicationScoped
    MyService produceService() {
        return new MyService(coolProperty);
    }
}
----

=== Interception of Static Methods

The Interceptors specification is clear that _around-invoke_ methods must not be declared static.
However, this restriction was driven mostly by technical limitations.
-And since Quarkus is a build-time oriented stack that allows for additional class transformations, those limitations don't apply anymore. -It's possible to annotate a non-private static method with an interceptor binding: - -[source,java] ----- -class Services { - - @Logged <1> - static BigDecimal computePrice(long amount) { <2> - BigDecimal price; - // Perform computations... - return price; - } -} ----- -<1> `Logged` is an interceptor binding. -<2> Each method invocation is intercepted if there is an interceptor associated with `Logged`. - -==== Limitations - -* Only *method-level bindings* are considered for backward compatibility reasons (otherwise static methods of bean classes that declare class-level bindings would be suddenly intercepted) -* Private static methods are never intercepted -* `InvocationContext#getTarget()` returns `null` for obvious reasons; therefore not all existing interceptors may behave correctly when intercepting static methods -+ -NOTE: Interceptors can use `InvocationContext.getMethod()` to detect static methods and adjust the behavior accordingly. - -=== Ability to handle 'final' classes and methods - -In normal CDI, classes that are marked as `final` and / or have `final` methods are not eligible for proxy creation, -which in turn means that interceptors and normal scoped beans don't work properly. -This situation is very common when trying to use CDI with alternative JVM languages like Kotlin where classes and methods are `final` by default. - -Quarkus however, can overcome these limitations when `quarkus.arc.transform-unproxyable-classes` is set to `true` (which is the default value). - -=== Container-managed Concurrency - -There is no standard concurrency control mechanism for CDI beans. -Nevertheless, a bean instance can be shared and accessed concurrently from multiple threads. -In that case it should be thread-safe. -You can use standard Java constructs (`volatile`, `synchronized`, `ReadWriteLock`, etc.) 
or let the container control the concurrent access. -Quarkus provides `@io.quarkus.arc.Lock` and a built-in interceptor for this interceptor binding. -Each interceptor instance associated with a contextual instance of an intercepted bean holds a separate `ReadWriteLock` with non-fair ordering policy. - -TIP: `io.quarkus.arc.Lock` is a regular interceptor binding and as such can be used for any bean with any scope. However, it is especially useful for "shared" scopes, e.g. `@Singleton` and `@ApplicationScoped`. - -.Container-managed Concurrency Example -[source,java] ----- -import io.quarkus.arc.Lock; - -@Lock <1> -@ApplicationScoped -class SharedService { - - void addAmount(BigDecimal amount) { - // ...changes some internal state of the bean - } - - @Lock(value = Lock.Type.READ, time = 1, unit = TimeUnit.SECONDS) <2> <3> - BigDecimal getAmount() { - // ...it is safe to read the value concurrently - } -} ----- -<1> `@Lock` (which maps to `@Lock(Lock.Type.WRITE)`) declared on the class instructs the container to lock the bean instance for any invocation of any business method, i.e. the client has "exclusive access" and no concurrent invocations will be allowed. -<2> `@Lock(Lock.Type.READ)` overrides the value specified at class level. It means that any number of clients can invoke the method concurrently, unless the bean instance is locked by `@Lock(Lock.Type.WRITE)`. -<3> You can also specify the "wait time". If it's not possible to acquire the lock in the given time a `LockException` is thrown. - -=== Repeatable interceptor bindings - -Quarkus has limited support for `@Repeatable` interceptor binding annotations. - -When binding an interceptor to a component, you can declare multiple `@Repeatable` annotations on methods. -Repeatable interceptor bindings declared on classes and stereotypes are not supported, because there are some open questions around interactions with the Interceptors specification. -This might be added in the future. 
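Under the hood, repeated bindings rely on Java's standard `@Repeatable` annotation mechanism, which is what lets the container observe each occurrence separately. The following plain-Java sketch uses a hypothetical `@Audited` binding and no CDI dependencies (a real interceptor binding would additionally be meta-annotated with `@javax.interceptor.InterceptorBinding`):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Repeatable;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class RepeatableBindingDemo {

    // Hypothetical repeatable binding, for illustration only; a real one
    // would also carry @javax.interceptor.InterceptorBinding.
    @Repeatable(Audited.List.class)
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Audited {
        String target();

        // Container annotation required by @Repeatable
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.METHOD)
        @interface List {
            Audited[] value();
        }
    }

    @Audited(target = "console")
    @Audited(target = "file")
    static void heavyComputation() {
    }

    public static void main(String[] args) throws Exception {
        Method m = RepeatableBindingDemo.class.getDeclaredMethod("heavyComputation");
        // getAnnotationsByType unwraps the container annotation transparently,
        // so each repeated binding is visible individually, in declaration order.
        for (Audited audited : m.getAnnotationsByType(Audited.class)) {
            System.out.println(audited.target()); // prints "console", then "file"
        }
    }
}
```

Note that `getAnnotationsByType` reports the repeated annotations in declaration order, which is why each occurrence can be resolved deterministically.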
- -As an example, suppose we have an interceptor that clears a cache. -The corresponding interceptor binding would be called `@CacheInvalidateAll` and would be declared as `@Repeatable`. -If we wanted to clear two caches at the same time, we would add `@CacheInvalidateAll` twice: - -[source,java] ----- -@ApplicationScoped -class CachingService { - @CacheInvalidateAll(cacheName = "foo") - @CacheInvalidateAll(cacheName = "bar") - void heavyComputation() { - // ... - // some computation that updates a lot of data - // and requires 2 caches to be invalidated - // ... - } -} ----- - -This is how interceptors are used. -What about creating an interceptor? - -When declaring interceptor bindings of an interceptor, you can add multiple `@Repeatable` annotations to the interceptor class as usual. -This is useless when the annotation members are `@Nonbinding`, as would be the case for the `@Cached` annotation, but is important otherwise. - -For example, suppose we have an interceptor that can automatically log method invocations to certain targets. -The interceptor binding annotation `@Logged` would have a member called `target`, which specifies where to store the log. -Our implementation could be restricted to console logging and file logging: - -[source,java] ----- -@Interceptor -@Logged(target = "console") -@Logged(target = "file") -class NaiveLoggingInterceptor { - // ... -} ----- - -Other interceptors could be provided to log method invocations to different targets. - -=== Caching the Result of Programmatic Lookup - -In certain situations, it is practical to obtain a bean instance programmatically via an injected `javax.enterprise.inject.Instance` and `Instance.get()`. -However, according to the specification the `get()` method must identify the matching bean and obtain a contextual reference. -As a consequence, a new instance of a `@Dependent` bean is returned from each invocation of `get()`. -Moreover, this instance is a dependent object of the injected `Instance`. 
-This behavior is well-defined but it may lead to unexpected errors and memory leaks. -Therefore, Quarkus comes with the `io.quarkus.arc.WithCaching` annotation. -An injected `Instance` annotated with this annotation will cache the result of the `Instance#get()` operation. -The result is computed on the first call and the same value is returned for all subsequent calls, even for `@Dependent` beans. - -[source,java] ----- -class Producer { - - AtomicLong nextLong = new AtomicLong(); - AtomicInteger nextInt = new AtomicInteger(); - - @Dependent - @Produces - Integer produceInt() { - return nextInt.incrementAndGet(); - } - - @Dependent - @Produces - Long produceLong() { - return nextLong.incrementAndGet(); - } -} - -class Consumer { - - @Inject - Instance<Long> longInstance; - - @Inject - @WithCaching - Instance<Integer> intInstance; - - // this method should always return true - // Producer#produceInt() is only called once - boolean pingInt() { - return intInstance.get().equals(intInstance.get()); - } - - // this method should always return false - // Producer#produceLong() is called twice per each pingLong() invocation - boolean pingLong() { - return longInstance.get().equals(longInstance.get()); - } -} ----- - -TIP: It is also possible to clear the cached value via `io.quarkus.arc.InjectableInstance.clearCache()`. In this case, you'll need to inject the Quarkus-specific `io.quarkus.arc.InjectableInstance` instead of `javax.enterprise.inject.Instance`. - -=== Declaratively Choose Beans That Can Be Obtained by Programmatic Lookup - -It is sometimes useful to narrow down the set of beans that can be obtained by programmatic lookup via `javax.enterprise.inject.Instance`. -Typically, a user needs to choose the appropriate implementation of an interface based on a runtime configuration property. - -Imagine that we have two beans implementing the interface `org.acme.Service`. -You can't inject the `org.acme.Service` directly unless your implementations declare a CDI qualifier.
-However, you can inject the `Instance` instead, then iterate over all implementations and choose the correct one manually. -Alternatively, you can use the `@LookupIfProperty` and `@LookupUnlessProperty` annotations. -`@LookupIfProperty` indicates that a bean should only be obtained if a runtime configuration property matches the provided value. -`@LookupUnlessProperty`, on the other hand, indicates that a bean should only be obtained if a runtime configuration property does not match the provided value. - -.`@LookupIfProperty` Example -[source,java] ----- - interface Service { - String name(); - } - - @LookupIfProperty(name = "service.foo.enabled", stringValue = "true") - @ApplicationScoped - class ServiceFoo implements Service { - - public String name() { - return "foo"; - } - } - - @ApplicationScoped - class ServiceBar implements Service { - - public String name() { - return "bar"; - } - } - - @ApplicationScoped - class Client { - - @Inject - Instance<Service> service; - - void printServiceName() { - // This will print "bar" if the property "service.foo.enabled" is NOT set to "true" - // If "service.foo.enabled" is set to "true" then service.get() would result in an AmbiguousResolutionException - System.out.println(service.get().name()); - } - } ----- - -=== Injecting Multiple Bean Instances Intuitively - -In CDI, it's possible to inject multiple bean instances (aka contextual references) via the `javax.enterprise.inject.Instance` which implements `java.lang.Iterable`. -However, it's not exactly intuitive. -Therefore, a new way was introduced in Quarkus - you can inject a `java.util.List` annotated with the `io.quarkus.arc.All` qualifier. -The type of elements in the list is used as the required type when performing the lookup. - -[source,java] ----- -@ApplicationScoped -public class Processor { - - @Inject - @All - List<Service> services; <1> <2> -} ----- -<1> The injected instance is an _immutable list_ of the contextual references of the _disambiguated_ beans.
-<2> For this injection point the required type is `Service` and no additional qualifiers are declared. - -TIP: By default, the list of beans is sorted by priority as defined by `io.quarkus.arc.InjectableBean#getPriority()`. Higher priority goes first. In general, the `@javax.annotation.Priority` and `@io.quarkus.arc.Priority` annotations can be used to assign the priority to a class bean, producer method or producer field. - -If an injection point declares no other qualifier than `@All` then `@Any` is used, i.e. the behavior is equivalent to `@Inject @Any Instance`. - -You can also inject a list of bean instances wrapped in `io.quarkus.arc.InstanceHandle`. -This can be useful if you need to inspect the related bean metadata. - -[source,java] ----- -@ApplicationScoped -public class Processor { - - @Inject - @All - List<InstanceHandle<Service>> services; - - public void doSomething() { - for (InstanceHandle<Service> handle : services) { - if (handle.getBean().getScope().equals(Dependent.class)) { - handle.get().process(); - break; - } - } - } -} ----- - -=== Ignoring Class-Level Interceptor Bindings for Methods and Constructors - -If a managed bean declares interceptor binding annotations on the class level, the corresponding `@AroundInvoke` interceptors will apply to all business methods. -Similarly, the corresponding `@AroundConstruct` interceptors will apply to the bean constructor. - -For example, suppose we have a logging interceptor with the `@Logged` binding annotation and a tracing interceptor with the `@Traced` binding annotation: - -[source, java] ----- -@ApplicationScoped -@Logged -public class MyService { - public void doSomething() { - ... - } - - @Traced - public void doSomethingElse() { - ... - } -} ----- - -In this example, both `doSomething` and `doSomethingElse` will be intercepted by the hypothetical logging interceptor. -Additionally, the `doSomethingElse` method will be intercepted by the hypothetical tracing interceptor.
- -Now, if that `@Traced` interceptor also performed all the necessary logging, we'd like to skip the `@Logged` interceptor for this method, but keep it for all other methods. -To achieve that, you can annotate the method with `@NoClassInterceptors`: - -[source, java] ----- -@Traced -@NoClassInterceptors -public void doSomethingElse() { - ... -} ----- - -The `@NoClassInterceptors` annotation may be put on methods and constructors and means that all class-level interceptors are ignored for these methods and constructors. -In other words, if a method/constructor is annotated `@NoClassInterceptors`, then the only interceptors that will apply to this method/constructor are interceptors declared directly on the method/constructor. - -This annotation affects only business method interceptors (`@AroundInvoke`) and constructor lifecycle callback interceptors (`@AroundConstruct`). - -[[build_time_apis]] -== Build Time Extensions - -Quarkus incorporates build-time optimizations in order to provide instant startup and low memory footprint. -The downside of this approach is that CDI Portable Extensions cannot be supported. -Nevertheless, most of the functionality can be achieved using Quarkus xref:writing-extensions.adoc[extensions]. -See the xref:cdi-integration.adoc[integration guide] for more information. - -== Development Mode - -In development mode, two special endpoints are registered automatically to provide some basic debug info in the JSON format: - -* HTTP GET `/q/arc` - returns the summary; number of beans, config properties, etc. -* HTTP GET `/q/arc/beans` - returns the list of all beans -** You can use query params to filter the output: -*** `scope` - include beans with scope that ends with the given value, e.g. `http://localhost:8080/q/arc/beans?scope=ApplicationScoped` -*** `beanClass` - include beans with bean class that starts with the given value, e.g.
`http://localhost:8080/q/arc/beans?beanClass=org.acme.Foo` -*** `kind` - include beans of the specified kind (`CLASS`, `PRODUCER_FIELD`, `PRODUCER_METHOD`, `INTERCEPTOR` or `SYNTHETIC`), e.g. `http://localhost:8080/q/arc/beans?kind=PRODUCER_METHOD` -* HTTP GET `/q/arc/removed-beans` - returns the list of unused beans removed during build -* HTTP GET `/q/arc/observers` - returns the list of all observer methods - -NOTE: These endpoints are only available in development mode, i.e. when you run your application via `mvn quarkus:dev` (or `./gradlew quarkusDev`). - - -[[arc-configuration-reference]] -== ArC Configuration Reference - -include::{generated-dir}/config/quarkus-arc.adoc[leveloffset=+1, opts=optional] diff --git a/_versions/2.7/guides/cdi.adoc b/_versions/2.7/guides/cdi.adoc deleted file mode 100644 index 42ff0c39fa1..00000000000 --- a/_versions/2.7/guides/cdi.adoc +++ /dev/null @@ -1,481 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Introduction to Contexts and Dependency Injection - -include::./attributes.adoc[] -:numbered: -:sectnums: -:sectnumlevels: 4 -:toc: - -In this guide we're going to describe the basic principles of the Quarkus programming model that is based on the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html[Contexts and Dependency Injection for Java 2.0, window="_blank"] specification. - -== OK. Let's start simple. What is a bean? - -Well, a bean is a _container-managed_ object that supports a set of basic services, such as injection of dependencies, lifecycle callbacks and interceptors. - -== Wait a minute. What does "container-managed" mean? - -Simply put, you don't control the lifecycle of the object instance directly. -Instead, you can affect the lifecycle through declarative means, such as annotations, configuration, etc.
-The container is the _environment_ where your application runs. -It creates and destroys the instances of beans, associates the instances with a designated context, and injects them into other beans. - -== What is it good for? - -An application developer can focus on the business logic rather than finding out "where and how" to obtain a fully initialized component with all of its dependencies. - -NOTE: You've probably heard of the _inversion of control_ (IoC) programming principle. Dependency injection is one of the implementation techniques of IoC. - -== What does a bean look like? - -There are several kinds of beans. -The most common ones are class-based beans: - -.Simple Bean Example -[source,java] ----- -import javax.inject.Inject; -import javax.enterprise.context.ApplicationScoped; -import org.eclipse.microprofile.metrics.annotation.Counted; - -@ApplicationScoped <1> -public class Translator { - - @Inject - Dictionary dictionary; <2> - - @Counted <3> - String translate(String sentence) { - // ... - } -} ----- -<1> This is a scope annotation. It tells the container which context to associate the bean instance with. In this particular case, a *single bean instance* is created for the application and used by all other beans that inject `Translator`. -<2> This is a field injection point. It tells the container that `Translator` depends on the `Dictionary` bean. If there is no matching bean, the build fails. -<3> This is an interceptor binding annotation. In this case, the annotation comes from MicroProfile Metrics. The relevant interceptor intercepts the invocation and updates the relevant metrics. We will talk about <<interceptors,interceptors>> later. - -[[typesafe_resolution]] -== Nice. How does the dependency resolution work? I see no names or identifiers. - -That's a good question. -In CDI the process of matching a bean to an injection point is *type-safe*. -Each bean declares a set of bean types.
-In our example above, the `Translator` bean has two bean types: `Translator` and `java.lang.Object`. -Subsequently, a bean is assignable to an injection point if the bean has a bean type that matches the _required type_ and has all the _required qualifiers_. -We'll talk about qualifiers later. -For now, it's enough to know that the bean above is assignable to an injection point of type `Translator` and `java.lang.Object`. - -== Hm, wait a minute. What happens if multiple beans declare the same type? - -There is a simple rule: *exactly one bean must be assignable to an injection point, otherwise the build fails*. -If none is assignable, the build fails with `UnsatisfiedResolutionException`. -If multiple are assignable, the build fails with `AmbiguousResolutionException`. -This is very useful because your application fails fast whenever the container is not able to find an unambiguous dependency for any injection point. - -[TIP] -==== -You can use programmatic lookup via `javax.enterprise.inject.Instance` to resolve ambiguities at runtime and even iterate over all beans implementing a given type: - -[source,java] ----- -public class Translator { - - @Inject - Instance<Dictionary> dictionaries; <1> - - String translate(String sentence) { - for (Dictionary dict : dictionaries) { <2> - // ... - } - } -} ----- -<1> This injection point will not result in an ambiguous dependency even if there are multiple beans that implement the `Dictionary` type. -<2> `javax.enterprise.inject.Instance` extends `Iterable`. -==== - -== Can I use setter and constructor injection? - -Yes, you can. -In fact, in CDI the "setter injection" is superseded by more powerful https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#initializer_methods[initializer methods, window="_blank"]. -Initializers may accept multiple parameters and don't have to follow the JavaBean naming conventions.
- -.Initializer and Constructor Injection Example -[source,java] ----- -@ApplicationScoped -public class Translator { - - private final TranslatorHelper helper; - - Translator(TranslatorHelper helper) { <1> - this.helper = helper; - } - - @Inject <2> - void setDeps(Dictionary dic, LocalizationService locService) { <3> - // ... - } -} ----- -<1> This is constructor injection. -In fact, this code would not work in regular CDI implementations where a bean with a normal scope must always declare a no-args constructor and the bean constructor must be annotated with `@Inject`. -However, in Quarkus we detect the absence of a no-args constructor and "add" it directly in the bytecode. -It's also not necessary to add `@Inject` if there is only one constructor present. -<2> An initializer method must be annotated with `@Inject`. -<3> An initializer may accept multiple parameters - each one is an injection point. - -== You talked about some qualifiers? - -https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#qualifiers[Qualifiers, window="_blank"] are annotations that help the container to distinguish beans that implement the same type. -As we already said, a bean is assignable to an injection point if it has all the required qualifiers. -If you declare no qualifier at an injection point, the `@Default` qualifier is assumed. - -A qualifier type is a Java annotation defined as `@Retention(RUNTIME)` and annotated with the `@javax.inject.Qualifier` meta-annotation: - -.Qualifier Example -[source,java] ----- -@Qualifier -@Retention(RUNTIME) -@Target({METHOD, FIELD, PARAMETER, TYPE}) -public @interface Superior {} ----- - -The qualifiers of a bean are declared by annotating the bean class or producer method or field with the qualifier types: - -.Bean With Custom Qualifier Example -[source,java] ----- -@Superior <1> -@ApplicationScoped -public class SuperiorTranslator extends Translator { - - String translate(String sentence) { - // ...
- } -} ----- -<1> `@Superior` is a https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#defining_qualifier_types[qualifier annotation, window="_blank"]. - -This bean would be assignable to `@Inject @Superior Translator` and `@Inject @Superior SuperiorTranslator` but not to `@Inject Translator`. -The reason is that `@Inject Translator` is automatically transformed to `@Inject @Default Translator` during typesafe resolution. -And since our `SuperiorTranslator` does not declare `@Default`, only the original `Translator` bean is assignable. - -== Looks good. What is the bean scope? - -The scope of a bean determines the lifecycle of its instances, i.e. when and where an instance should be created and destroyed. - -NOTE: Every bean has exactly one scope. - -== What scopes can I actually use in my Quarkus application? - -You can use all the built-in scopes mentioned by the specification except for `javax.enterprise.context.ConversationScoped`. - -[options="header",cols="1,1"] -|=== -|Annotation |Description -//---------------------- -|`@javax.enterprise.context.ApplicationScoped` | A single bean instance is used for the application and shared among all injection points. The instance is created lazily, i.e. once a method is invoked upon the <<client_proxies,client proxy>>. -|`@javax.inject.Singleton` | Just like `@ApplicationScoped` except that no client proxy is used. The instance is created when an injection point that resolves to a `@Singleton` bean is being injected. -|`@javax.enterprise.context.RequestScoped` | The bean instance is associated with the current _request_ (usually an HTTP request). -|`@javax.enterprise.context.Dependent` | This is a pseudo-scope. The instances are not shared and every injection point spawns a new instance of the dependent bean. The lifecycle of a dependent bean is bound to the bean injecting it - it will be created and destroyed along with the bean injecting it.
-|`@javax.enterprise.context.SessionScoped` | This scope is backed by a `javax.servlet.http.HttpSession` object. It's only available if the `quarkus-undertow` extension is used. -|=== - -NOTE: There can be other custom scopes provided by Quarkus extensions. For example, `quarkus-narayana-jta` provides `javax.transaction.TransactionScoped`. - -== `@ApplicationScoped` and `@Singleton` look very similar. Which one should I choose for my Quarkus application? - -It depends ;-). - -A `@Singleton` bean has no <<client_proxies,client proxy>> and hence an instance is _created eagerly_ when the bean is injected. By contrast, an instance of an `@ApplicationScoped` bean is _created lazily_, i.e. -when a method is invoked upon an injected instance for the first time. - -Furthermore, client proxies only delegate method invocations and thus you should never read/write fields of an injected `@ApplicationScoped` bean directly. -You can read/write fields of an injected `@Singleton` safely. - -`@Singleton` should have a slightly better performance because there is no indirection (no proxy that delegates to the current instance from the context). - -On the other hand, you cannot mock `@Singleton` beans using `QuarkusMock`. -`@ApplicationScoped` beans can also be destroyed and recreated at runtime. -Existing injection points just work because the injected proxy delegates to the current instance. - -Therefore, we recommend sticking with `@ApplicationScoped` by default unless there's a good reason to use `@Singleton`. - -[[client_proxies]] -== I don't understand the concept of client proxies. - -Indeed, the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#client_proxies[client proxies, window="_blank"] could be hard to grasp but they provide some useful functionality. -A client proxy is basically an object that delegates all method invocations to a target bean instance. -It's a container construct that implements `io.quarkus.arc.ClientProxy` and extends the bean class.
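The delegation-only behavior is easy to demonstrate without a container. In the following plain-Java sketch (hand-written stand-in classes, assumed purely for illustration), method calls reach the contextual instance, but field reads see only the proxy's own, never-updated state:

```java
public class ProxyFieldPitfall {

    static class Translator {
        String lang = "en";              // bean state
        String lang() { return lang; }
    }

    // Hand-written stand-in for a generated client proxy: it extends the bean
    // class, so it inherits the fields, but only *methods* are delegated.
    static class TranslatorProxy extends Translator {
        final Translator contextual = new Translator(); // the real contextual instance
        { contextual.lang = "fr"; }                      // state changes happen here

        @Override
        String lang() { return contextual.lang(); }      // method calls are forwarded
    }

    public static void main(String[] args) {
        Translator proxy = new TranslatorProxy();
        System.out.println(proxy.lang()); // "fr" - delegated to the contextual instance
        System.out.println(proxy.lang);   // "en" - the proxy's own stale field
    }
}
```

This is exactly why fields of a normal scoped bean must only be accessed through methods.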
- -IMPORTANT: Client proxies only delegate method invocations. So never read or write a field of a normal scoped bean, otherwise you will work with non-contextual or stale data. - -.Generated Client Proxy Example -[source,java] ----- -@ApplicationScoped -class Translator { - - String translate(String sentence) { - // ... - } -} - -// The client proxy class is generated and looks like... -class Translator_ClientProxy extends Translator { <1> - - String translate(String sentence) { - // Find the correct translator instance... - Translator translator = getTranslatorInstanceFromTheApplicationContext(); - // And delegate the method invocation... - return translator.translate(sentence); - } -} ----- -<1> The `Translator_ClientProxy` instance is always injected instead of a direct reference to a https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#contextual_instance[contextual instance, window="_blank"] of the `Translator` bean. - -Client proxies allow for: - -* Lazy instantiation - the instance is created once a method is invoked upon the proxy. -* Ability to inject a bean with "narrower" scope into a bean with "wider" scope; i.e. you can inject a `@RequestScoped` bean into an `@ApplicationScoped` bean. -* Circular dependencies in the dependency graph. Having circular dependencies is often an indication that a redesign should be considered, but sometimes it's inevitable. -* In rare cases it's practical to destroy the beans manually. A directly injected reference would lead to a stale bean instance. - - -== OK. You said that there are several kinds of beans? - -Yes. In general, we distinguish: - -1. Class beans -2. Producer methods -3. Producer fields -4. Synthetic beans - -NOTE: Synthetic beans are usually provided by extensions. Therefore, we are not going to cover them in this guide. - -Producer methods and fields are useful if you need additional control over instantiation of a bean.
-They are also useful when integrating third-party libraries where you don't control the class source and may not add additional annotations etc. - -.Producers Example -[source,java] ----- -@ApplicationScoped -public class Producers { - - @Produces <1> - double pi = Math.PI; <2> - - @Produces <3> - List<String> names() { - List<String> names = new ArrayList<>(); - names.add("Andy"); - names.add("Adalbert"); - names.add("Joachim"); - return names; <4> - } -} - -@ApplicationScoped -public class Consumer { - - @Inject - double pi; - - @Inject - List<String> names; - - // ... -} ----- -<1> The container analyses the field annotations to build the bean metadata. -The _type_ is used to build the set of bean types. -In this case, it will be `double` and `java.lang.Object`. -No scope annotation is declared and so it's defaulted to `@Dependent`. -<2> The container will read this field when creating the bean instance. -<3> The container analyses the method annotations to build the bean metadata. -The _return type_ is used to build the set of bean types. -In this case, it will be `List<String>`, `Collection<String>`, `Iterable<String>` and `java.lang.Object`. -No scope annotation is declared and so it's defaulted to `@Dependent`. -<4> The container will call this method when creating the bean instance. - -There's more about producers. -You can declare qualifiers, inject dependencies into the producer method parameters, etc. -You can read more about producers for example in the https://docs.jboss.org/weld/reference/latest/en-US/html/producermethods.html[Weld docs, window="_blank"]. - -== OK, injection looks cool. What other services are provided? - -=== Lifecycle Callbacks - -A bean class may declare lifecycle `@PostConstruct` and `@PreDestroy` callbacks: - -.Lifecycle Callbacks Example -[source,java] ----- -import javax.annotation.PostConstruct; -import javax.annotation.PreDestroy; - -@ApplicationScoped -public class Translator { - - @PostConstruct <1> - void init() { - // ...
- } - - @PreDestroy <2> - void destroy() { - // ... - } -} ----- -<1> This callback is invoked before the bean instance is put into service. It is safe to perform some initialization here. -<2> This callback is invoked before the bean instance is destroyed. It is safe to perform some cleanup tasks here. - -TIP: It's a good practice to keep the logic in the callbacks "without side effects", i.e. you should avoid calling other beans inside the callbacks. - -[[interceptors]] -=== Interceptors - -Interceptors are used to separate cross-cutting concerns from business logic. -There is a separate specification - Java Interceptors - that defines the basic programming model and semantics. - -.Simple Interceptor Example -[source,java] ----- -import javax.inject.Inject; -import javax.interceptor.AroundInvoke; -import javax.interceptor.Interceptor; -import javax.interceptor.InvocationContext; -import javax.annotation.Priority; - -@Logged <1> -@Priority(2020) <2> -@Interceptor <3> -public class LoggingInterceptor { - - @Inject <4> - Logger logger; - - @AroundInvoke <5> - Object logInvocation(InvocationContext context) { - // ...log before - Object ret = context.proceed(); <6> - // ...log after - return ret; - } - -} ----- -<1> This is an interceptor binding annotation that is used to bind our interceptor to a bean. Simply annotate a bean class with `@Logged`. -<2> `Priority` enables the interceptor and affects the interceptor ordering. Interceptors with smaller priority values are called first. -<3> Marks an interceptor component. -<4> An interceptor instance may be the target of dependency injection. -<5> `AroundInvoke` denotes a method that interposes on business methods. -<6> Proceed to the next interceptor in the interceptor chain or invoke the intercepted business method. - -NOTE: Instances of interceptors are dependent objects of the bean instance they intercept, i.e. a new interceptor instance is created for each intercepted bean.
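The `proceed()` mechanics can also be illustrated without a container: each interceptor wraps the next element of the chain, and the innermost `proceed()` finally invokes the business method. A plain-Java sketch (hypothetical names; the container generates equivalent plumbing) showing that the interceptor with the smaller priority value wraps, and therefore runs, first:

```java
import java.util.ArrayList;
import java.util.List;

public class InterceptorChainDemo {

    // Stand-in for InvocationContext#proceed()
    interface Invocation {
        String proceed();
    }

    // Builds an "interceptor" that logs around the next element of the chain,
    // like an @AroundInvoke method would.
    static Invocation around(String name, List<String> log, Invocation next) {
        return () -> {
            log.add(name + ":before");
            String result = next.proceed(); // context.proceed()
            log.add(name + ":after");
            return result;
        };
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        // The business method sits at the end of the chain
        Invocation business = () -> {
            log.add("translate");
            return "bonjour";
        };
        // "logging" has the smaller priority, so it wraps "tracing"
        Invocation chain = around("logging", log, around("tracing", log, business));

        System.out.println(chain.proceed()); // bonjour
        System.out.println(log);
        // [logging:before, tracing:before, translate, tracing:after, logging:after]
    }
}
```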
- -[[decorators]] -=== Decorators - -Decorators are similar to interceptors, but because they implement interfaces with business semantics, they are able to implement business logic. - -.Simple Decorator Example -[source,java] ----- -import java.math.BigDecimal; - -import javax.decorator.Decorator; -import javax.decorator.Delegate; -import javax.annotation.Priority; -import javax.inject.Inject; -import javax.enterprise.inject.Any; - -public interface Account { - void withdraw(BigDecimal amount); -} - -@Priority(10) <1> -@Decorator <2> -public class LargeTxAccount implements Account { <3> - - @Inject - @Any - @Delegate - Account delegate; <4> - - @Inject - LogService logService; <5> - - public void withdraw(BigDecimal amount) { - delegate.withdraw(amount); <6> - if (amount.compareTo(new BigDecimal(1000)) > 0) { - logService.logWithdrawal(delegate, amount); - } - } - -} ----- -<1> `@Priority` enables the decorator. Decorators with smaller priority values are called first. -<2> `@Decorator` marks a decorator component. -<3> The set of decorated types includes all bean types which are Java interfaces, except for `java.io.Serializable`. -<4> Each decorator must declare exactly one _delegate injection point_. The decorator applies to beans that are assignable to this delegate injection point. -<5> Decorators can inject other beans. -<6> The decorator may invoke any method of the delegate object. And the container invokes either the next decorator in the chain or the business method of the intercepted instance. - -NOTE: Instances of decorators are dependent objects of the bean instance they intercept, i.e. a new decorator instance is created for each intercepted bean. - -=== Events and Observers - -Beans may also produce and consume events to interact in a completely decoupled fashion. -Any Java object can serve as an event payload. -The optional qualifiers act as topic selectors. - -.Simple Event Example -[source,java] ----- - -class TaskCompleted { - // ...
-} - -@ApplicationScoped -class ComplicatedService { - - @Inject - Event<TaskCompleted> event; <1> - - void doSomething() { - // ... - event.fire(new TaskCompleted()); <2> - } - -} - -@ApplicationScoped -class Logger { - - void onTaskCompleted(@Observes TaskCompleted task) { <3> - // ...log the task - } - -} ----- -<1> `javax.enterprise.event.Event` is used to fire events. -<2> Fire the event synchronously. -<3> This method is notified when a `TaskCompleted` event is fired. - -TIP: For more info about events/observers visit https://docs.jboss.org/weld/reference/latest/en-US/html/events.html[Weld docs, window="_blank"]. - -== Conclusion - -In this guide, we've covered some of the basic topics of the Quarkus programming model that is based on the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html[Contexts and Dependency Injection for Java 2.0, window="_blank"] specification. -However, a full CDI implementation is not used under the hood. -Quarkus only implements a subset of the CDI features - see also the supported features and limitations. -On the other hand, there are quite a few non-standard features and Quarkus-specific APIs. -We believe that our efforts will drive the innovation of the CDI specification towards the build-time oriented developer stacks in the future. - -TIP: If you wish to learn more about Quarkus-specific features and limitations there is a Quarkus xref:cdi-reference.adoc[CDI Reference Guide]. -We also recommend reading the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html[CDI specification] and the https://docs.jboss.org/weld/reference/latest/en-US/html/[Weld documentation] (Weld is the CDI Reference Implementation) to get acquainted with more complex topics.
diff --git a/_versions/2.7/guides/centralized-log-management.adoc b/_versions/2.7/guides/centralized-log-management.adoc
deleted file mode 100644
index 9c299942ba1..00000000000
--- a/_versions/2.7/guides/centralized-log-management.adoc
+++ /dev/null
@@ -1,442 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Centralized log management (Graylog, Logstash, Fluentd)

include::./attributes.adoc[]
:es-version: 6.8.2

This guide explains how you can send your logs to a centralized log management system like Graylog, Logstash (inside the Elastic Stack or ELK - Elasticsearch, Logstash, Kibana) or Fluentd (inside EFK - Elasticsearch, Fluentd, Kibana).

There are many different ways to centralize your logs (if you are using Kubernetes, the simplest is to log to the console and ask your cluster administrator to integrate a central log manager inside your cluster).
In this guide, we will show how to send them to an external tool using the `quarkus-logging-gelf` extension, which can use TCP or UDP to send logs in the Graylog Extended Log Format (GELF).

The `quarkus-logging-gelf` extension adds a GELF log handler to the underlying logging backend that Quarkus uses (jboss-logmanager).
By default, it is disabled; if you enable it but still use another handler (the console handler is enabled by default), your logs will be sent to both handlers.

== Prerequisites

:prerequisites-docker-compose:
include::includes/devtools/prerequisites.adoc[]

== Example application

The following examples are all based on the same example application, which you can create with the following steps.

Create an application with the `quarkus-logging-gelf` extension.
You can use the following command to create it:

:create-app-artifact-id: gelf-logging
:create-app-extensions: resteasy,logging-gelf
include::includes/devtools/create-app.adoc[]

If you already have your Quarkus project configured, you can add the `logging-gelf` extension
to your project by running the following command in your project base directory:

:add-extension-extensions: logging-gelf
include::includes/devtools/extension-add.adoc[]

This will add the following dependency to your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-logging-gelf</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-logging-gelf")
----

For demonstration purposes, we create an endpoint that does nothing but log a sentence. You don't need to do this inside your application.

[source,java]
----
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.jboss.logging.Logger;

@Path("/gelf-logging")
@ApplicationScoped
public class GelfLoggingResource {
    private static final Logger LOG = Logger.getLogger(GelfLoggingResource.class);

    @GET
    public void log() {
        LOG.info("Some useful log message");
    }

}
----

Configure the GELF log handler to send logs to an external UDP endpoint on port 12201:

[source,properties]
----
quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=localhost
quarkus.log.handler.gelf.port=12201
----

== Send logs to Graylog

To send logs to Graylog, you first need to launch the components that compose the Graylog stack:

- MongoDB
- Elasticsearch
- Graylog

You can do this via the following `docker-compose.yml` file that you can launch via `docker-compose up -d`:

[source,yaml,subs="attributes"]
----
version: '3.2'

services:
  elasticsearch:
    image:
docker.elastic.co/elasticsearch/elasticsearch-oss:{es-version}
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    networks:
      - graylog

  mongo:
    image: mongo:4.0
    networks:
      - graylog

  graylog:
    image: graylog/graylog:3.1
    ports:
      - "9000:9000"
      - "12201:12201/udp"
      - "1514:1514"
    environment:
      GRAYLOG_HTTP_EXTERNAL_URI: "http://127.0.0.1:9000/"
    networks:
      - graylog
    depends_on:
      - elasticsearch
      - mongo

networks:
  graylog:
    driver: bridge
----

Then, you need to create a UDP input in Graylog.
You can do it from the Graylog web console (System -> Input -> Select GELF UDP), available at http://localhost:9000, or via the API.

The following curl example creates a new Input of type GELF UDP; it uses the default login from Graylog (admin/admin).

[source,bash]
----
curl -H "Content-Type: application/json" -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "X-Requested-By: curl" -X POST -v -d \
'{"title":"udp input","configuration":{"recv_buffer_size":262144,"bind_address":"0.0.0.0","port":12201,"decompress_size_limit":8388608},"type":"org.graylog2.inputs.gelf.udp.GELFUDPInput","global":true}' \
http://localhost:9000/api/system/inputs
----

Launch your application and you should see your logs arriving in Graylog.

== Send logs to Logstash / the Elastic Stack (ELK)

Logstash ships with an Input plugin that understands the GELF format; we will first create a pipeline that enables this plugin.
- -Create the following file in `$HOME/pipelines/gelf.conf`: - -[source] ----- -input { - gelf { - port => 12201 - } -} -output { - stdout {} - elasticsearch { - hosts => ["http://elasticsearch:9200"] - } -} ----- - -Finally, launch the components that compose the Elastic Stack: - -- Elasticsearch -- Logstash -- Kibana - -You can do this via the following `docker-compose.yml` file that you can launch via `docker-compose up -d`: - -[source,yaml,subs="attributes"] ----- -# Launch Elasticsearch -version: '3.2' - -services: - elasticsearch: - image: docker.elastic.co/elasticsearch/elasticsearch-oss:{es-version} - ports: - - "9200:9200" - - "9300:9300" - environment: - ES_JAVA_OPTS: "-Xms512m -Xmx512m" - networks: - - elk - - logstash: - image: docker.elastic.co/logstash/logstash-oss:{es-version} - volumes: - - source: $HOME/pipelines - target: /usr/share/logstash/pipeline - type: bind - ports: - - "12201:12201/udp" - - "5000:5000" - - "9600:9600" - networks: - - elk - depends_on: - - elasticsearch - - kibana: - image: docker.elastic.co/kibana/kibana-oss:{es-version} - ports: - - "5601:5601" - networks: - - elk - depends_on: - - elasticsearch - -networks: - elk: - driver: bridge - ----- - -Launch your application, you should see your logs arriving inside the Elastic Stack; you can use Kibana available at http://localhost:5601/ to access them. - -== Send logs to Fluentd (EFK) - -First, you need to create a Fluentd image with the needed plugins: elasticsearch and input-gelf. -You can use the following Dockerfile that should be created inside a `fluentd` directory. - -[source,dockerfile] ----- -FROM fluent/fluentd:v1.3-debian -RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"] -RUN ["gem", "install", "fluent-plugin-input-gelf", "--version", "0.3.1"] ----- - -You can build the image or let docker-compose build it for you. 
Then, you need to create a Fluentd configuration file inside `$HOME/fluentd/fluent.conf`:

[source]
----
<source>
  type gelf
  tag example.gelf
  bind 0.0.0.0
  port 12201
</source>

<match example.gelf>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>
----

Finally, launch the components that compose the EFK Stack:

- Elasticsearch
- Fluentd
- Kibana

You can do this via the following `docker-compose.yml` file that you can launch via `docker-compose up -d`:

[source,yaml,subs="attributes"]
----
version: '3.2'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:{es-version}
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    networks:
      - efk

  fluentd:
    build: fluentd
    ports:
      - "12201:12201/udp"
    volumes:
      - source: $HOME/fluentd
        target: /fluentd/etc
        type: bind
    networks:
      - efk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana-oss:{es-version}
    ports:
      - "5601:5601"
    networks:
      - efk
    depends_on:
      - elasticsearch

networks:
  efk:
    driver: bridge
----

Launch your application and you should see your logs arriving in EFK; you can use Kibana, available at http://localhost:5601/, to access them.

== GELF alternative: use Syslog

You can also send your logs to Fluentd using a Syslog input.
As opposed to the GELF input, the Syslog input will not render multiline logs in one event; that's why we advise using the GELF input that we implement in Quarkus.

First, you need to create a Fluentd image with the elasticsearch plugin.
You can use the following Dockerfile, which should be created inside a `fluentd` directory.
[source,dockerfile]
----
FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]
----

Then, you need to create a Fluentd configuration file inside `$HOME/fluentd/fluent.conf`:

[source]
----
<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  message_format rfc5424
  tag system
</source>

<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>
----

Then, launch the components that compose the EFK Stack:

- Elasticsearch
- Fluentd
- Kibana

You can do this via the following `docker-compose.yml` file that you can launch via `docker-compose up -d`:

[source,yaml,subs="attributes"]
----
version: '3.2'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:{es-version}
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    networks:
      - efk

  fluentd:
    build: fluentd
    ports:
      - "5140:5140/udp"
    volumes:
      - source: $HOME/fluentd
        target: /fluentd/etc
        type: bind
    networks:
      - efk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana-oss:{es-version}
    ports:
      - "5601:5601"
    networks:
      - efk
    depends_on:
      - elasticsearch

networks:
  efk:
    driver: bridge
----

Finally, configure your application to send logs to EFK using Syslog:

[source,properties]
----
quarkus.log.syslog.enable=true
quarkus.log.syslog.endpoint=localhost:5140
quarkus.log.syslog.protocol=udp
quarkus.log.syslog.app-name=quarkus
quarkus.log.syslog.hostname=quarkus-test
----

Launch your application and you should see your logs arriving in EFK; you can use Kibana, available at http://localhost:5601/, to access them.

== Elasticsearch indexing consideration

Be careful: by default, Elasticsearch will automatically map unknown fields (if not disabled in the index settings) by detecting their type.
This can become tricky if you use log parameters (which are included by default), or if you enable MDC inclusion (disabled by default),
as the first log will define the type of the message parameter (or MDC parameter) field inside the index.

Imagine the following case:

[source, java]
----
LOG.info("some {} message {} with {} param", 1, 2, 3);
LOG.info("other {} message {} with {} param", true, true, true);
----

With log message parameters enabled, the first log message sent to Elasticsearch will have a `MessageParam0` parameter with an `int` type;
this will configure the index with a field of type `integer`.
When the second message arrives at Elasticsearch, it will have a `MessageParam0` parameter with the boolean value `true`, and this will generate an indexing error.

To work around this limitation, you can disable sending log message parameters via `logging-gelf` by configuring `quarkus.log.handler.gelf.include-log-message-parameters=false`,
or you can configure your Elasticsearch index to store those fields as text or keyword; Elasticsearch will then automatically make the translation from int/boolean to a String.

See the following documentation for Graylog (but the same issue exists for the other central logging stacks): link:https://docs.graylog.org/en/3.2/pages/configuration/elasticsearch.html#custom-index-mappings[Custom Index Mappings].

[[configuration-reference]]
== Configuration Reference

Configuration is done through the usual `application.properties` file.

include::{generated-dir}/config/quarkus-logging-gelf.adoc[opts=optional, leveloffset=+1]

This extension uses the `logstash-gelf` library, which allows more configuration options via system properties;
you can access its documentation here: https://logging.paluch.biz/ .
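For reference, a GELF message is just a JSON document with a few mandatory fields (`version`, `host`, `short_message`) plus optional fields such as `level`, and any number of user-defined fields prefixed with an underscore. The following hand-rolled sketch (not the handler's actual implementation; the `_app` field is an illustrative custom field) shows the shape of what the extension puts on the wire:

```java
// Hand-rolled sketch of a GELF 1.1 payload; the quarkus-logging-gelf handler
// builds (and optionally compresses/chunks) this for you over UDP or TCP.
public class GelfPayloadSketch {

    static String gelfMessage(String host, String shortMessage, int level, String appName) {
        // User-defined (additional) fields must be prefixed with '_'
        return String.format(
            "{\"version\":\"1.1\",\"host\":\"%s\",\"short_message\":\"%s\","
                + "\"level\":%d,\"_app\":\"%s\"}",
            host, shortMessage, level, appName);
    }

    public static void main(String[] args) {
        // level 6 = informational, in syslog severity terms
        System.out.println(gelfMessage("localhost", "Some useful log message", 6, "quarkus"));
    }
}
```

Seeing the payload shape explains the indexing consideration above: each message parameter becomes its own JSON field, so its first observed JSON type fixes the Elasticsearch mapping.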
diff --git a/_versions/2.7/guides/class-loading-reference.adoc b/_versions/2.7/guides/class-loading-reference.adoc deleted file mode 100644 index f471d7e76ee..00000000000 --- a/_versions/2.7/guides/class-loading-reference.adoc +++ /dev/null @@ -1,215 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Class Loading Reference - -include::./attributes.adoc[] - -This document explains the Quarkus class loading architecture. It is intended for extension -authors and advanced users who want to understand exactly how Quarkus works. - -The Quarkus class loading architecture is slightly different depending on the mode that -the application is run in. When running a production application everything is loaded -in the system ClassLoader, so it is a completely flat class path. This also applies to -native image mode which does not really support multiple ClassLoaders, and is based on -a normal production Quarkus application. - -For all other use cases (e.g. tests, dev mode, and building the application) Quarkus -uses the class loading architecture outlined here. - - -== Bootstrapping Quarkus - -All Quarkus applications are created by the QuarkusBootstrap class in the `independent-projects/bootstrap` module. This -class is used to resolve all the relevant dependencies (both deployment and runtime) that are needed for the Quarkus -application. The end result of this process is a `CuratedApplication`, which contains all the class loading information -for the application. - -The `CuratedApplication` can then be used to create an `AugmentAction` instance, which can create production application -and start/restart runtime ones. This application instance exists within an isolated ClassLoader, it is not necessary -to have any of the Quarkus deployment classes on the class path as the curate process will resolve them for you. 
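Whichever mode you are in, you can observe the ClassLoader hierarchy your code actually sees by walking the parent chain. This is a plain-JDK sketch (not a Quarkus API): in a production application the chain is essentially flat (system ClassLoader), while in dev and test mode the Quarkus runtime ClassLoaders described below appear in it.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-JDK sketch: walk the ClassLoader parent-delegation chain
// from a starting loader up to the JVM bootstrap loader.
public class ClassLoaderChain {

    static List<String> chain(ClassLoader start) {
        List<String> names = new ArrayList<>();
        for (ClassLoader cl = start; cl != null; cl = cl.getParent()) {
            // getName() (Java 9+) may be null for unnamed loaders
            names.add(cl.getName() == null ? cl.getClass().getName() : cl.getName());
        }
        names.add("bootstrap"); // the null parent is the JVM bootstrap loader
        return names;
    }

    public static void main(String[] args) {
        chain(Thread.currentThread().getContextClassLoader())
            .forEach(System.out::println);
    }
}
```

Running this inside a build step versus inside application code is a quick way to confirm which of the loaders described in this document is in effect.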
- -This bootstrap process should be the same no matter how Quarkus is launched, just with different parameters passed in. - -=== Current Run Modes - -At the moment we have the following use cases for bootstrapping Quarkus: - -- Maven creating production application -- Maven dev mode -- Gradle creating a production application -- Gradle dev mode -- QuarkusTest (Maven, Gradle and IDE) -- QuarkusUnitTest (Maven, Gradle and IDE) -- QuarkusDevModeTest (Maven, Gradle and IDE) -- Arquillian Adaptor - -One of the goals of this refactor is to have all these different run modes boot Quarkus in fundamentally the same way. - -=== Notes on Transformer Safety - -A ClassLoader is said to be 'transformer safe' if it is safe to load classes in the class loader before the transformers -are ready. Once a class has been loaded it cannot be changed, so if a class is loaded before the transformers have been -prepared this will prevent the transformation from working. Loading classes in a transformer safe ClassLoader will not -prevent the transformation, as the loaded class is not used at runtime. - -== ClassLoader Implementations - -Quarkus has the following ClassLoaders: - -Base ClassLoader:: - -This is usually the normal JVM System ClassLoader. In some environments such as Maven it may be different. This ClassLoader -is used to load the bootstrap classes, and other ClassLoader instances will delegate the loading of JDK classes to it. - -Augment ClassLoader:: - -This loads all the `-deployment` artifacts and their dependencies, as well as other user dependencies. It does not load the -application root or any hot deployed code. This ClassLoader is persistent, even if the application restarts it will remain -(which is why it cannot load application classes that may be hot deployed). Its parent is the base ClassLoader, and it is -transformer safe. 
- -At present this can be configured to delegate to the Base ClassLoader, however the plan is for this option to go away and -always have this as an isolated ClassLoader. Making this an isolated ClassLoader is complicated as it means that all -the builder classes are isolated, which means that use cases that want to customise the build chains are slightly more complex. - -Deployment ClassLoader:: - -This can load all application classes, its parent is the Augment ClassLoader so it can also load all deployment classes. - -This ClassLoader is non-persistent, it will be re-created when the application is started, and is isolated. This ClassLoader -is the context ClassLoader that is used when running the build steps. It is also transformer safe. - -Base Runtime ClassLoader:: - -This loads all the runtime extension dependencies, as well as other user dependencies (note that this may include duplicate -copies of classes also loaded by the Augment ClassLoader). It does not load the application root or any hot deployed -code. This ClassLoader is persistent, even if the application restarts it will remain (which is why it cannot load -application classes that may be hot deployed). Its parent is the base ClassLoader. - -This loads code that is not hot-reloadable, but it does support transformation (although once the class is loaded this -transformation is no longer possible). This means that only transformers registered in the first application start -will take effect, however as these transformers are expected to be idempotent this should not cause problems. An example -of the sort of transformation that might be required here is a Panache entity packaged in an external jar. This class -needs to be transformed to have its static methods implemented, however this transformation only happens once, so -restarts use the copy of the class that was created on the first start. - -This ClassLoader is isolated from the Augment and Deployment ClassLoaders. 
This means that it is not possible to set -values in a static field in the deployment side, and expect to read it at runtime. This allows dev and test applications -to behave more like a production application (production applications are isolated in that they run in a whole new JVM). - -This also means that the runtime version can be linked against a different set of dependencies, e.g. the hibernate -version used at deployment time might want to include ByteBuddy, while the version used at runtime does not. - -Runtime Class Loader:: - -This ClassLoader is used to load the application classes and other hot deployable resources. Its parent is the base runtime -ClassLoader, and it is recreated when the application is restarted. - - -== Isolated ClassLoaders - -The runtime ClassLoader is always isolated. This means that it will have its own copies of almost every class from the -resolved dependency list. The exception to this are: - -- JDK classes -- Classes from artifacts that extensions have marked as parent first (more on this later). - -=== Parent First Dependencies - -There are some classes that should not be loaded in an isolated manner, but that should always be loaded by the system -ClassLoader (or whatever ClassLoader is responsible for bootstrapping Quarkus). Most extensions do not need to worry about -this, however there are a few cases where this is necessary: - -- Some logging related classes, as logging must be loaded by the system ClassLoader -- Quarkus bootstrap itself - -If this is required it can be configured in the `quarkus-bootstrap-maven-plugin`. Note that if you -mark a dependency as parent first then all of its dependencies must also be parent first, -or a `LinkageError` can occur. 
[source,xml]
----
<plugin>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-bootstrap-maven-plugin</artifactId>
    <configuration>
        <parentFirstArtifacts>
            <parentFirstArtifact>io.quarkus:quarkus-bootstrap-core</parentFirstArtifact>
            <parentFirstArtifact>io.quarkus:quarkus-development-mode-spi</parentFirstArtifact>
            <parentFirstArtifact>org.jboss.logmanager:jboss-logmanager-embedded</parentFirstArtifact>
            <parentFirstArtifact>org.jboss.logging:jboss-logging</parentFirstArtifact>
            <parentFirstArtifact>org.ow2.asm:asm</parentFirstArtifact>
        </parentFirstArtifacts>
    </configuration>
</plugin>
----

=== Banned Dependencies

There are some dependencies that we can be sure we do not want. This generally happens when a dependency has had a name
change (e.g. smallrye-config changing groups from `org.smallrye` to `org.smallrye.config`, the `javax` -> `jakarta` rename).
This can cause problems, because if these artifacts end up in the dependency tree, out-of-date classes can be loaded that are
not compatible with Quarkus. To deal with this, extensions can specify artifacts that should never be loaded. This is
done by modifying the `quarkus-bootstrap-maven-plugin` config in the pom (which generates the `quarkus-extension.properties`
file). Simply add an `excludedArtifacts` section as shown below:

[source,xml]
----
<plugin>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-bootstrap-maven-plugin</artifactId>
    <configuration>
        <excludedArtifacts>
            <excludedArtifact>io.smallrye:smallrye-config</excludedArtifact>
            <excludedArtifact>javax.enterprise:cdi-api</excludedArtifact>
        </excludedArtifacts>
    </configuration>
</plugin>
----

This should only be done if the extension depends on a newer version of these artifacts. If the extension does not bring
in a replacement artifact as a dependency, then classes the application needs might end up missing.

== Configuring Class Loading

It is possible to configure some aspects of class loading in dev and test mode. This can be done using `application.properties`.
Note that class loading config is different from normal config, in that it does not use the standard Quarkus config mechanisms
(as it is needed too early), so it only supports `application.properties`. The following options are supported.
- - -include::{generated-dir}/config/quarkus-class-loading-configuration-class-loading-config.adoc[opts=optional, leveloffset=+1] - -== Hiding/Removing classes and resources from dependencies - -It is possible to hide/remove classes and resources from dependencies. This is an advanced option, but it can be useful -at times. This is done via the `quarkus.class-loading.removed-resources` config key, for example: - -`quarkus.class-loading.removed-resources."io.quarkus\:quarkus-integration-test-shared-library"=io/quarkus/it/shared/RemovedResource.class` - -This will remove the `RemovedResource.class` file from the `io.quarkus:quarkus-integration-test-shared-library` artifact. - -Even though this option is a class loading option it will also affect the generated application, so when the application -is created removed resources will not be accessible. - -== Reading Class Bytecode - -It is important to use the correct `ClassLoader`. The recommended approach is to get it by calling the -`Thread.currentThread().getContextClassLoader()` method. - -Example: - - -[source,java,subs=attributes+] ----- -@BuildStep -GeneratedClassBuildItem instrument(final CombinedIndexBuildItem index) { - final String classname = "com.example.SomeClass"; - final ClassLoader cl = Thread.currentThread().getContextClassLoader(); - final byte[] originalBytecode = IoUtil.readClassAsBytes(cl, classname); - final byte[] enhancedBytecode = ... 
// class instrumentation from originalBytecode
    return new GeneratedClassBuildItem(true, classname, enhancedBytecode);
}
----
\ No newline at end of file
diff --git a/_versions/2.7/guides/cli-tooling.adoc b/_versions/2.7/guides/cli-tooling.adoc
deleted file mode 100644
index 225f1ece12f..00000000000
--- a/_versions/2.7/guides/cli-tooling.adoc
+++ /dev/null
@@ -1,526 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Building Quarkus apps with Quarkus Command Line Interface (CLI)
:extension-status: preview

include::./attributes.adoc[]

The `quarkus` command lets you create projects, manage extensions and
run essential build and dev commands using the underlying project build tool.

include::./status-include.adoc[]

== Installing the CLI

The Quarkus CLI is available in several developer-oriented package managers such as:

* https://sdkman.io[SDKMAN!]
* https://brew.sh[Homebrew]
* https://community.chocolatey.org/packages/quarkus[Chocolatey]

If you already use (or want to use) one of these tools, it is the simplest way to install the Quarkus CLI and keep it updated.

In addition to these package managers, the Quarkus CLI is also installable via https://www.jbang.dev[JBang].
Choose the alternative that is the most practical for you:

* JBang - for Linux, macOS and Windows
* SDKMAN! - for Linux and macOS
* Homebrew - for Linux and macOS
* Chocolatey - for Windows

[role="primary asciidoc-tabs-sync-jbang"]
.JBang
****
The Quarkus CLI is available as a jar installable using https://jbang.dev[JBang].

JBang will use your existing Java installation, or install one for you if needed.
- -On Linux, macOS, and Windows (using WSL or bash compatible shell like Cygwin or MinGW) -[source,bash] ----- -curl -Ls https://sh.jbang.dev | bash -s - trust add https://repo1.maven.org/maven2/io/quarkus/quarkus-cli/ -curl -Ls https://sh.jbang.dev | bash -s - app install --fresh --force quarkus@quarkusio ----- - -On Windows using Powershell: -[source,powershell] ----- -iex "& { $(iwr https://ps.jbang.dev) } trust add https://repo1.maven.org/maven2/io/quarkus/quarkus-cli/" -iex "& { $(iwr https://ps.jbang.dev) } app install --fresh --force quarkus@quarkusio" ----- - -If JBang has already been installed, you can directly use it: -[source,bash] ----- -# This can also be used to update to the latest version -jbang app install --fresh --force quarkus@quarkusio ----- - -If you want to use a specific version, you can directly target a version: -[source,bash] ----- -# Create an alias in order to use a specific version -jbang app install --name qs2.2.5 io.quarkus:quarkus-cli:2.2.5.Final:runner ----- - -If you have built Quarkus locally, you can use that version: -[source,bash] ----- -# Use the latest (or locally built) snapshot (with qss as an alias) -jbang app install --force --name qss ~/.m2/repository/io/quarkus/quarkus-cli/999-SNAPSHOT/quarkus-cli-999-SNAPSHOT-runner.jar ----- - -Once installed `quarkus` will be in your `$PATH` and if you run `quarkus --version` it will print the installed version: - -[source,shell,subs=attributes+] ----- -quarkus --version -{quarkus-version} ----- - -[CAUTION] -.Use a recent JBang version -==== -If you get an error about `app` not being readable then you probably -have a JBang version older than v0.56.0 installed. Please remove or upgrade it to a recent version. - -If you are installing JBang for the first time, start a new session to update your `PATH`. -==== -**** - -[role="secondary asciidoc-tabs-sync-sdkman"] -.SDKMAN! -**** -https://sdkman.io/[SDKMAN!] can be used to manage development environments. 
-It can manage parallel versions of multiple Software Development Kits on most Unix based systems, -making it a very good alternative to keep multiple JDK versions handy. - -With SDKMAN!, you can also install popular Java tools, including the Quarkus CLI. - -[NOTE] -==== -Make sure you have https://sdkman.io/jdks[a JDK installed] before installing the Quarkus CLI. - -To list the available versions of Java, use `sdk list java`. -You can then install the version of OpenJDK you want with `sdk install java x.y.z-open` -(or the JDK of another vendor if it is your preference). -==== - -To install the Quarkus CLI using SDKMAN!, run the following command: - -[source,shell] ----- -sdk install quarkus ----- - -It will install the latest version of the Quarkus CLI. - -Once installed `quarkus` will be in your `$PATH` and if you run `quarkus --version` it will print the installed version: - -[source,shell,subs=attributes+] ----- -quarkus --version -{quarkus-version} ----- - -SDKMAN! will let you know when new versions are available so updates will be straightforward: - -[source,shell] ----- -sdk upgrade quarkus ----- -**** - -[role="secondary asciidoc-tabs-sync-homebrew"] -.Homebrew -**** -https://brew.sh[Homebrew] is a package manager for macOS (and Linux). - -You can use Homebrew to install (and update) the Quarkus CLI. - -[NOTE] -==== -Make sure you have a JDK installed before installing the Quarkus CLI. -We haven't added an explicit dependency as we wanted to make sure you could use your preferred JDK version. - -You can install a JDK with `brew install openjdk` for Java 17 or `brew install openjdk@11` for Java 11. -==== - -To install the Quarkus CLI using Homebrew, run the following command: - -[source,shell] ----- -brew install quarkusio/tap/quarkus ----- - -It will install the latest version of the Quarkus CLI. -This command can also be used to update the Quarkus CLI. 
- -Once installed `quarkus` will be in your `$PATH` and if you run `quarkus --version` it will print the installed version: - -[source,shell,subs=attributes+] ----- -quarkus --version -{quarkus-version} ----- - -You can upgrade the Quarkus CLI with: - -[source,shell] ----- -brew update <1> -brew upgrade quarkus <2> ----- -<1> Update all package definitions and Homebrew itself -<2> Upgrade Quarkus CLI to the latest version -**** - -[role="secondary asciidoc-tabs-sync-chocolatey"] -.Chocolatey -**** -https://chocolatey.org[Chocolatey] is a package manager for Windows. - -You can use Chocolatey to install (and update) the Quarkus CLI. - -[NOTE] -==== -Make sure you have a JDK installed before installing the Quarkus CLI. - -You can install a JDK with `choco install ojdkbuild17` for Java 17 or `choco install ojdkbuild11` for Java 11. -==== - -To install the Quarkus CLI using Chocolatey, run the following command: - -[source,shell] ----- -choco install quarkus ----- - -It will install the latest version of the Quarkus CLI. - -Once installed `quarkus` will be in your `$PATH` and if you run `quarkus --version` it will print the installed version: - -[source,shell,subs=attributes+] ----- -quarkus --version -{quarkus-version} ----- - -You can upgrade the Quarkus CLI with: - -[source,shell] ----- -choco upgrade quarkus ----- -**** - -== Using the CLI - -Use `--help` to display help information with all the available commands: - -[source,shell] ----- -quarkus --help -Usage: quarkus [-ehv] [--verbose] [-D=]... [COMMAND] - -Options: - -D= Java properties - -e, --errors Display error messages. - -h, --help Show this help message and exit. - -v, --version Print version information and exit. - --verbose Verbose mode. - -Commands: - create Create a new project. - app Create a Quarkus application project. - cli Create a Quarkus command-line project. - extension Create a Quarkus extension project - build Build the current project. 
- dev Run the current project in dev (live coding) mode. - extension, ext Configure extensions of an existing project. - list, ls List platforms and extensions. - categories, cat List extension categories. - add Add extension(s) to this project. - remove, rm Remove extension(s) from this project. - registry Configure Quarkus registry client - list List enabled Quarkus registries - add Add a Quarkus extension registry - remove Remove a Quarkus extension registry - version Display version information. ----- - -[TIP] -==== -While this document is a useful reference, the client help is the definitive source. - -If you don't see the output you expect, use `--help` to verify that you are invoking the command with the right arguments. -==== - -[[project-creation]] -=== Creating a new project - -To create a new project we use the `create` command -(the `app` subcommand is implied when not specified): - -[source,shell] ----- -quarkus create ------------ - -applying codestarts... -📚 java -🔨 maven -📦 quarkus -📝 config-properties -🔧 dockerfiles -🔧 maven-wrapper -🚀 resteasy-codestart - ------------ -[SUCCESS] ✅ quarkus project has been successfully generated in: ---> //code-with-quarkus ----- - -This will create a folder called 'code-with-quarkus' in your current working directory using default groupId, artifactId and version values -(groupId='org.acme', artifactId='code-with-quarkus' and version='1.0.0-SNAPSHOT'). - -Note: the emoji shown above may not match precisely. The appearance of emoji can vary by font, and terminal/environment. IntelliJ IDEA, for example, has several long-running issues open regarding the behavior/rendering of emoji in the terminal. - -As of 2.0.2.Final, you should specify the groupId, artifactId and version using group:artifactId:version coordinate syntax directly on the command line. 
You can selectively omit segments to use default values:
-
-[source,shell]
-----
-# Create a project with groupId=org.acme, artifactId=bar, and version=1.0.0-SNAPSHOT
-quarkus create app bar
-
-# Create a project with groupId=com.foo, artifactId=bar, and version=1.0.0-SNAPSHOT
-quarkus create app com.foo:bar
-
-# Create a project with groupId=com.foo, artifactId=bar, and version=1.0
-quarkus create app com.foo:bar:1.0
-----
-
-The output will show your project being created:
-
-[source,shell]
-----
------------
-
-applying codestarts...
-📚 java
-🔨 maven
-📦 quarkus
-📝 config-properties
-🔧 dockerfiles
-🔧 maven-wrapper
-🚀 resteasy-codestart
-
------------
-[SUCCESS] ✅ quarkus project has been successfully generated in:
---> //bar
------------
-----
-
-Use the help option to display options for creating projects:
-
-[source,shell]
-----
-quarkus create app --help
-quarkus create cli --help
-----
-
-[WARNING]
-====
-Previous versions of the CLI used options `--group-id` (`-g`), `--artifact-id` (`-a`) and `--version` (`-v`) to specify the groupId, artifactId, and version.
-If the output isn't what you expect, double-check your client version (`quarkus version`) and help (`quarkus create app --help`).
-====
-
-[[specifying-quarkus-version]]
-=== Specifying the Quarkus version
-
-Both `quarkus create` and `quarkus extension list` allow you to explicitly specify a version of Quarkus in one of two ways:
-
-1. Specify a specific Platform Release BOM
-+
-A https://quarkus.io/guides/platform#quarkus-platform-bom[Quarkus Platform release BOM] is identified by `groupId:artifactId:version` (GAV) coordinates. When specifying a platform release BOM, you may use empty segments to fall back to default values (shown with `quarkus create app --help`). If you specify only one segment (no `:`), it is assumed to be a version.
-+
-For example:
-+
-- With the `2.0.0.Final` version of the CLI, specifying `-P :quarkus-bom:` is equivalent to `-P io.quarkus:quarkus-bom:2.0.0.Final`.
Specifying `-P 999-SNAPSHOT` is equivalent to `-P io.quarkus:quarkus-bom:999-SNAPSHOT`.
-- With the `2.1.0.Final` version of the CLI, `io.quarkus.platform` is the default group id. Specifying `-P :quarkus-bom:` is equivalent to `-P io.quarkus.platform:quarkus-bom:2.1.0.Final`. Note that you need to specify the group id to work with a snapshot, e.g. `-P io.quarkus::999-SNAPSHOT` is equivalent to `-P io.quarkus:quarkus-bom:999-SNAPSHOT`.
-+
-Note: default values are subject to change. Using the `--dry-run` option will show you the computed value.
-
-2. Specify a Platform Stream
-+
-A platform stream operates against a remote registry. Each registry defines one or more platform streams, and each stream defines one or more platform release BOMs that define how projects using that stream should be configured.
-+
-Streams are identified using `platformKey:streamId` syntax. A specific stream can be specified using `-S platformKey:streamId`. When specifying a stream, empty segments will be replaced with _discovered_ defaults, based on stream resource resolution rules.
-
-=== Working with extensions
-
-[source,shell]
-----
-quarkus ext --help
-----
-
-==== Listing extensions
-
-The Quarkus CLI can be used to list Quarkus extensions.
-
-[source,shell]
-----
-quarkus ext ls
-----
-
-The format of the result can be controlled with one of four options:
-
-- `--name` Display the name (artifactId) only
-- `--concise` Display the name (artifactId) and description
-- `--full` Display the concise format plus version/status-related columns
-- `--origins` Display the concise information along with the Quarkus platform release origin of the extension
-
-The behavior of `quarkus ext ls` will vary depending on context.
-
-===== Listing Extensions for a Quarkus release
-
-If you invoke the Quarkus CLI from outside of a project, Quarkus will list all of the extensions available for the Quarkus release used by the CLI itself.
-
-You can also list extensions for a specific release of Quarkus using `-P` or `-S`, as described in <<specifying-quarkus-version>>.
-
-This mode uses the `--origins` format by default.
-
-===== Listing Extensions for a Quarkus project
-
-When working with a Quarkus project, the CLI will list the extensions the current project has installed, using the `--name` format by default.
-
-Use the `--installable` or `-i` option to list extensions that can be installed from the Quarkus platform the project is using.
-
-You can narrow or filter the list using search (`--search` or `-s`).
-
-[source,shell]
-----
-quarkus ext list --concise -i -s jdbc
-JDBC Driver - DB2 quarkus-jdbc-db2
-JDBC Driver - PostgreSQL quarkus-jdbc-postgresql
-JDBC Driver - H2 quarkus-jdbc-h2
-JDBC Driver - MariaDB quarkus-jdbc-mariadb
-JDBC Driver - Microsoft SQL Server quarkus-jdbc-mssql
-JDBC Driver - MySQL quarkus-jdbc-mysql
-JDBC Driver - Oracle quarkus-jdbc-oracle
-JDBC Driver - Derby quarkus-jdbc-derby
-Elytron Security JDBC quarkus-elytron-security-jdbc
-Agroal - Database connection pool quarkus-agroal
-----
-
-
-==== Adding extension(s)
-
-The Quarkus CLI can add one or more extensions to your project with the `add`
-command:
-
-[source,shell]
-----
-quarkus ext add kubernetes health
-[SUCCESS] ✅ Extension io.quarkus:quarkus-kubernetes has been installed
-[SUCCESS] ✅ Extension io.quarkus:quarkus-smallrye-health has been installed
-----
-
-==== Removing extension(s)
-
-The Quarkus CLI can remove one or more extensions from your project with the `remove`
-command:
-
-[source,shell]
-----
-quarkus ext rm kubernetes
-[SUCCESS] ✅ Extension io.quarkus:quarkus-kubernetes has been uninstalled
-----
-
-=== Build your project
-
-To build your project using the Quarkus CLI (using the default configuration in this example):
-
-[source,shell]
-----
-quarkus build
-[INFO] Scanning for projects...
-[INFO] -[INFO] ---------------------< org.acme:code-with-quarkus >--------------------- -[INFO] Building code-with-quarkus 1.0.0-SNAPSHOT -[INFO] --------------------------------[ jar ]--------------------------------- -[INFO] -... -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ -[INFO] Total time: 8.331 s -[INFO] Finished at: 2021-05-27T10:13:28-04:00 -[INFO] ------------------------------------------------------------------------ ----- - -NOTE: Output will vary depending on the build tool your project is using (Maven, Gradle, or JBang). - -=== Development mode - -[source,shell] ----- -quarkus dev --help ----- - -To start dev mode from the Quarkus CLI do: - -[source,shell] ----- -quarkus dev -[INFO] Scanning for projects... -[INFO] -[INFO] ---------------------< org.acme:code-with-quarkus >--------------------- -[INFO] Building code-with-quarkus 1.0.0-SNAPSHOT -[INFO] --------------------------------[ jar ]--------------------------------- -[INFO] -... -Listening for transport dt_socket at address: 5005 -__ ____ __ _____ ___ __ ____ ______ ---/ __ \/ / / / _ | / _ \/ //_/ / / / __/ --/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ ---\___\_\____/_/ |_/_/|_/_/|_|\____/___/ -2021-05-27 10:15:56,032 INFO [io.quarkus] (Quarkus Main Thread) code-with-quarkus 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.387s. Listening on: http://localhost:8080 -2021-05-27 10:15:56,035 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. -2021-05-27 10:15:56,035 INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, resteasy, smallrye-context-propagation] - --- -Tests paused, press [r] to resume ----- - -NOTE: Output will vary depending on the build tool your project is using (Maven, Gradle, or JBang). 
-
-
-[[quarkus-version-compatibility]]
-[WARNING]
-.Compatibility with Quarkus 1.x
-====
-The version 2 Quarkus CLI cannot be used with 1.x Quarkus projects or releases. Use the Maven/Gradle plugins when working with Quarkus 1.x projects.
-====
-
-== Shell autocomplete and aliases
-
-Automatic command completion is available for Bash and Zsh:
-
-[source,shell]
-----
-# Setup autocompletion in the current shell
-source <(quarkus completion)
-----
-
-If you choose to use an alias for the quarkus command, adjust command completion with the following commands:
-
-[source,shell]
-----
-# Add an alias for the quarkus command
-alias q=quarkus
-# Add q to list of commands included in quarkus autocompletion
-complete -F _complete_quarkus q
-----
diff --git a/_versions/2.7/guides/command-mode-reference.adoc b/_versions/2.7/guides/command-mode-reference.adoc
deleted file mode 100644
index 7234f114451..00000000000
--- a/_versions/2.7/guides/command-mode-reference.adoc
+++ /dev/null
@@ -1,275 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Command Mode Applications
-
-include::./attributes.adoc[]
-
-This reference covers how to write applications that run and then exit.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `getting-started-command-mode` {quickstarts-tree-url}/getting-started-command-mode[directory].
-
-== Writing Command Mode Applications
-
-There are two different approaches that can be used to implement applications
-that exit.
-
-. Implement `QuarkusApplication` and have Quarkus run this method automatically
-. 
Implement `QuarkusApplication` and a Java main method, and use the Java main method to launch Quarkus
-
-In this document the `QuarkusApplication` instance is referred to as the application main,
-and a class with a Java main method is the Java main.
-
-The simplest possible command mode application with access to the Quarkus APIs might appear as follows:
-
-[source,java]
-----
-import io.quarkus.runtime.QuarkusApplication;
-import io.quarkus.runtime.annotations.QuarkusMain;
-
-@QuarkusMain // <.>
-public class HelloWorldMain implements QuarkusApplication {
-    @Override
-    public int run(String... args) throws Exception { // <.>
-        System.out.println("Hello " + args[0]);
-        return 0;
-    }
-}
-----
-<.> The `@QuarkusMain` annotation tells Quarkus that this is the main entry point.
-<.> The `run` method is invoked once Quarkus starts, and the application stops when it finishes.
-
-=== Contexts
-
-[sidebar]
-.Got a `ContextNotActiveException`?
---
-A command mode application (like a CLI) is a bit different from, say, an HTTP service: there is a single call from the command line.
-So the notion of a _request_, let alone multiple requests, does not exist per se.
-Therefore the request scope is not active by default.
-
-To get access to your application beans and services, be aware that a `@QuarkusMain` instance is an application scoped bean by default.
-It has access to singleton, application and dependent scoped beans.
-
-If you want to interact with beans that require the request scope, simply add the `@ActivateRequestContext` annotation on your `run()` method.
-This lets `run()` have access to methods like `listAll()` and `query*` methods on a Panache entity.
-Without it you will eventually get a `ContextNotActiveException` when accessing such classes/beans.
---
-
-=== Main method
-If we want to use a Java main to run the application main, it would look like:
-
-[source,java]
-----
-import io.quarkus.runtime.Quarkus;
-import io.quarkus.runtime.annotations.QuarkusMain;
-
-@QuarkusMain
-public class JavaMain {
-
-    public static void main(String... args) {
-        Quarkus.run(HelloWorldMain.class, args);
-    }
-}
-----
-
-This is effectively the same as running the `HelloWorldMain` application main directly, but has the advantage that it can
-be run from the IDE.
-
-NOTE: If a class implements `QuarkusApplication` and also has a Java main method, then the Java main method will be run.
-
-WARNING: It is recommended that a Java main perform very little logic, and just
-launch the application main. In development mode the Java main will run in a
-different ClassLoader to the main application, so it may not behave as you would
-expect.
-
-==== Multiple Main Methods
-
-It is possible to have multiple main methods in an application, and select between them at build time.
-The `@QuarkusMain` annotation takes an optional `name` parameter, and this can be used to select the
-main to run using the `quarkus.package.main-class` build time configuration option. If you don't want
-to use annotations, this can also be used to specify the fully qualified name of a main class.
-
-By default, the `@QuarkusMain` with no name (i.e. the empty string) will be used, and if it is not present
-and `quarkus.package.main-class` is not specified, then Quarkus will automatically generate a main class
-that just runs the application.
-
-NOTE: The `name` of `@QuarkusMain` must be unique (including the default of the empty string). If you
-have multiple `@QuarkusMain` annotations in your application, the build will fail if the names are not
-unique.
-
-=== The command mode lifecycle
-
-When running a command mode application the basic lifecycle is as follows:
-
-. Start Quarkus
-. Run the `QuarkusApplication` main method
-. 
Shut down Quarkus and exit the JVM after the main method returns
-
-Shutdown is always initiated by the application main thread returning. If you want to run some logic on startup,
-and then run like a normal application (i.e. not exit), then you should call `Quarkus.waitForExit` from the main
-thread (a non-command mode application is essentially just an application that calls `waitForExit`).
-
-If you want to shut down a running application and you are not in the main thread, then you should call `Quarkus.asyncExit`
-in order to unblock the main thread and initiate the shutdown process.
-
-=== Development Mode
-
-Dev mode is also supported for command mode applications.
-When you start your application in dev mode, the command mode application is executed:
-
-include::includes/devtools/dev.adoc[]
-
-As command mode applications will often require arguments to be passed on the command line, this is also possible in dev mode:
-
-[source, bash, subs=attributes+, role="primary asciidoc-tabs-sync-cli"]
-.CLI
-----
-quarkus dev '--help'
-----
-
-[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"]
-.Maven
-----
-./mvnw quarkus:dev -Dquarkus.args='--help'
-----
-
-[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"]
-.Gradle
-----
-./gradlew quarkusDev --quarkus-args='--help'
-----
-
-You should see the following at the bottom of the screen after the application is stopped:
-
-[source]
-----
---
-Press [space] to restart, [e] to edit command line args (currently '-w --tags 1.0.1.Final'), [r] to resume testing, [o] Toggle test output, [h] for more options>
-----
-
-You can press the space bar and the application will be started again.
-You can also use the `e` hotkey to edit the command line arguments and restart your application.
-
-== Testing Command Mode Applications
-
-Command Mode applications can be tested using the `@QuarkusMainTest` and `@QuarkusMainIntegrationTest` annotations.
These
-work in a similar way to `@QuarkusTest` and `@QuarkusIntegrationTest`, where `@QuarkusMainTest` will run the CLI tests
-within the current JVM, while `@QuarkusMainIntegrationTest` is used to run the generated executable (both jars and native executables).
-
-We can write a simple test for our CLI application above as follows:
-
-[source,java]
-----
-import io.quarkus.test.junit.main.Launch;
-import io.quarkus.test.junit.main.LaunchResult;
-import io.quarkus.test.junit.main.QuarkusMainLauncher;
-import io.quarkus.test.junit.main.QuarkusMainTest;
-import org.junit.jupiter.api.Assertions;
-import org.junit.jupiter.api.Test;
-
-@QuarkusMainTest
-public class HelloTest {
-
-    @Test
-    @Launch("World")
-    public void testLaunchCommand(LaunchResult result) {
-        Assertions.assertEquals("Hello World", result.getOutput());
-    }
-
-    @Test
-    @Launch(value = {}, exitCode = 1)
-    public void testLaunchCommandFailed() {
-    }
-
-    @Test
-    public void testManualLaunch(QuarkusMainLauncher launcher) {
-        LaunchResult result = launcher.launch("Everyone");
-        Assertions.assertEquals(0, result.exitCode());
-        Assertions.assertEquals("Hello Everyone", result.getOutput());
-    }
-}
-----
-
-We can then extend this with an integration test that can be used to test the native executable or runnable jar:
-
-[source,java]
-----
-import io.quarkus.test.junit.main.QuarkusMainIntegrationTest;
-
-@QuarkusMainIntegrationTest
-public class HelloIT extends HelloTest {
-}
-----
-
-=== Mocking
-
-CDI injection is not supported in the `@QuarkusMainTest` tests.
-Consequently, mocking CDI beans with `QuarkusMock` or `@InjectMock` is not supported either.
-
-It is possible to mock CDI beans by leveraging xref:getting-started-testing.adoc#testing_different_profiles[test profiles] though.
-
-For instance, in the following test, the singleton `CdiBean1` will be mocked by `MockedCdiBean1`:
-
-[source,java]
-----
-package org.acme.commandmode.test;
-
-import java.util.Set;
-
-import javax.enterprise.inject.Alternative;
-import javax.inject.Singleton;
-
-import org.junit.jupiter.api.Test;
-import org.acme.commandmode.test.MyCommandModeTest.MyTestProfile;
-
-import io.quarkus.test.junit.QuarkusTestProfile;
-import io.quarkus.test.junit.TestProfile;
-import io.quarkus.test.junit.main.Launch;
-import io.quarkus.test.junit.main.LaunchResult;
-import io.quarkus.test.junit.main.QuarkusMainTest;
-
-@QuarkusMainTest
-@TestProfile(MyTestProfile.class)
-public class MyCommandModeTest {
-
-    @Test
-    @Launch(value = {})
-    public void testLaunchCommand(LaunchResult result) {
-        // ... assertions ...
-    }
-
-    public static class MyTestProfile implements QuarkusTestProfile {
-
-        @Override
-        public Set<Class<?>> getEnabledAlternatives() {
-            return Set.of(MockedCdiBean1.class); <1>
-        }
-    }
-
-    @Alternative <2>
-    @Singleton <3>
-    public static class MockedCdiBean1 implements CdiBean1 {
-
-        @Override
-        public String myMethod() {
-            return "mocked value";
-        }
-    }
-}
-----
-<1> List all the CDI beans for which you want to enable an alternative mocked bean.
-<2> Use `@Alternative` without a `@Priority`. Make sure you don't use `@Mock`.
-<3> The scope of the mocked bean should be consistent with the original one.
-
-Using this pattern, you can enable specific alternatives for any given test.
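The selection rule behind this pattern can be sketched outside of CDI in a few lines of plain Java. This is a toy model for illustration only — the real resolution is performed by the CDI container, and the `RealCdiBean1` class name is invented here:

```java
import java.util.Set;

// Toy model (NOT CDI/Quarkus internals) of how a test profile's enabled
// alternatives replace default beans: resolution prefers an enabled
// @Alternative implementation of the requested type over the default one.
public class AlternativeResolutionSketch {

    interface CdiBean1 {
        String myMethod();
    }

    static class RealCdiBean1 implements CdiBean1 {
        public String myMethod() { return "real value"; }
    }

    static class MockedCdiBean1 implements CdiBean1 {
        public String myMethod() { return "mocked value"; }
    }

    // Resolution rule sketched: the alternative wins only when the profile enables it.
    static CdiBean1 resolveCdiBean1(Set<Class<?>> enabledAlternatives) {
        if (enabledAlternatives.contains(MockedCdiBean1.class)) {
            return new MockedCdiBean1();
        }
        return new RealCdiBean1();
    }

    public static void main(String[] args) {
        // Without a test profile: the real bean is used
        System.out.println(resolveCdiBean1(Set.of()).myMethod());
        // With a profile enabling MockedCdiBean1: the mock is used
        System.out.println(resolveCdiBean1(Set.of(MockedCdiBean1.class)).myMethod());
    }
}
```

This is why the scope of the mock should match the original bean: the substitution is per-type, not per-call site.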
diff --git a/_versions/2.7/guides/conditional-extension-dependencies.adoc b/_versions/2.7/guides/conditional-extension-dependencies.adoc
deleted file mode 100644
index b8e655d6a6c..00000000000
--- a/_versions/2.7/guides/conditional-extension-dependencies.adoc
+++ /dev/null
@@ -1,178 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Conditional Extension Dependencies

-include::./attributes.adoc[]
-
-Quarkus extension dependencies are usually configured in the same way as any other project dependencies in the project's build file, e.g. the Maven `pom.xml` or the Gradle build scripts. However, there are dependency types that aren't yet supported out-of-the-box by Maven and Gradle. What we refer to here as "conditional dependencies" is one example.
-
-== Conditional Dependencies
-
-The idea behind the notion of a conditional dependency is that such a dependency must be activated only if a certain condition is satisfied. If the condition is not satisfied then the dependency **must not** be activated. In that regard, conditional dependencies can be categorized as optional, i.e. they may or may not appear in the resulting set of project dependencies.
-
-In which cases could conditional dependencies be useful? A typical example would be a component that should be activated **only** in case all of its required dependencies are available. If one or more of the component's required dependencies aren't available, instead of failing, the component should simply not be activated.
-
-== Quarkus Conditional Extension Dependencies
-
-Quarkus supports conditional extension dependencies. That is, one Quarkus extension may declare one or more conditional dependencies on other Quarkus extensions. Conditional dependencies on and from non-extension artifacts aren't supported.
-
-Let's take the following scenario as an example: `quarkus-extension-a` has an optional dependency on `quarkus-extension-b` which should be included in a Quarkus application only if `quarkus-extension-c` is found among its dependencies (direct or transitive). In other words, the presence of `quarkus-extension-c` is the condition which, if satisfied, enables `quarkus-extension-b` during the build of a Quarkus application.
-
-The condition which triggers activation of an extension is configured in the extension's descriptor, which is included into the runtime artifact of the extension as `META-INF/quarkus-extension.properties`. Given that the extension descriptor is generated by the Quarkus plugin at extension build time, extension developers can add the following configuration to express the condition which would have to be satisfied for the extension to be activated:
-
-[source,xml]
-----
-<project>
-
-    <artifactId>quarkus-extension-b</artifactId> <1>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>io.quarkus</groupId>
-                <artifactId>quarkus-bootstrap-maven-plugin</artifactId>
-                <version>${quarkus.version}</version>
-                <executions>
-                    <execution>
-                        <phase>process-resources</phase>
-                        <goals>
-                            <goal>extension-descriptor</goal> <2>
-                        </goals>
-                        <configuration>
-                            <dependencyCondition> <3>
-                                <artifact>org.acme:quarkus-extension-c</artifact> <4>
-                            </dependencyCondition>
-                        </configuration>
-                    </execution>
-                </executions>
-            </plugin>
-        </plugins>
-    </build>
-</project>
-----
-
-<1> runtime Quarkus extension artifact ID, in our example `quarkus-extension-b`;
-<2> the goal that generates the extension descriptor which every Quarkus runtime extension project should be configured with;
-<3> configuration of the condition which will have to be satisfied for this extension to be included into a Quarkus application expressed as a list of artifacts that must be present among the application dependencies;
-<4> an artifact key (in the format `groupId:artifactId[:<classifier>:<type>]` but typically simply `<groupId>:<artifactId>`) of the artifact that must be present among the application dependencies for the condition to be satisfied.
-
-NOTE: In the example above the `artifact` used in the condition configuration happens to be a runtime Quarkus extension artifact but it could as well be any other artifact.
There could also be more than one `artifact` element in the body of `dependencyCondition`.
-
-Now, having a dependency activating condition in the descriptor of `quarkus-extension-b`, other extensions may declare a conditional dependency on it.
-
-A conditional dependency is configured in the runtime artifact of a Quarkus extension. In our example, it's the `quarkus-extension-a` that has a conditional dependency on `quarkus-extension-b`, which can be expressed in two ways.
-
-=== Declaring a dependency as `optional`
-
-If an extension was configured with a dependency condition in its descriptor, other extensions may configure a conditional dependency on it by simply adding `<optional>true</optional>` to the dependency configuration. In our example it would look like this:
-
-[source,xml]
-----
-<project>
-
-    <artifactId>quarkus-extension-a</artifactId> <1>
-
-    <dependencies>
-        <dependency>
-            <groupId>org.acme</groupId>
-            <artifactId>quarkus-extension-b</artifactId> <2>
-            <optional>true</optional>
-        </dependency>
-    </dependencies>
-</project>
-----
-
-<1> the runtime extension artifact `quarkus-extension-a`
-<2> declares an optional Maven dependency on the runtime extension artifact `quarkus-extension-b`
-
-IMPORTANT: In general, for every runtime extension artifact dependency on another runtime extension artifact there must be a corresponding deployment extension artifact dependency on the other deployment extension artifact. And if the runtime dependency is declared as optional then the corresponding deployment dependency **must** also be configured as optional.
-
-[source,xml]
-----
-<project>
-
-    <artifactId>quarkus-extension-a-deployment</artifactId> <1>
-
-    <dependencies>
-        <dependency>
-            <groupId>org.acme</groupId>
-            <artifactId>quarkus-extension-b-deployment</artifactId> <2>
-            <optional>true</optional>
-        </dependency>
-    </dependencies>
-</project>
-----
-
-<1> the deployment extension artifact `quarkus-extension-a-deployment`
-<2> declares an optional Maven dependency on the deployment extension artifact `quarkus-extension-b-deployment`
-
-Normally, optional Maven extension dependencies are ignored by the Quarkus dependency resolver at build time.
In this case though, the optional dependency `quarkus-extension-b` includes a dependency condition in its extension descriptor, which turns this optional Maven dependency into a Quarkus conditional extension dependency.
-
-IMPORTANT: If the dependency on `quarkus-extension-b` wasn't declared as `<optional>true</optional>`, `quarkus-extension-b` would become a required dependency of `quarkus-extension-a` and its dependency condition would be ignored.
-
-=== Declaring a conditional dependency in the Quarkus extension descriptor
-
-Conditional dependencies can also be configured in the Quarkus extension descriptor. The conditional dependency configured above could be expressed in the extension descriptor of `quarkus-extension-a` as:
-
-[source,xml]
-----
-<project>
-
-    <artifactId>quarkus-extension-a</artifactId> <1>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>io.quarkus</groupId>
-                <artifactId>quarkus-bootstrap-maven-plugin</artifactId>
-                <version>${quarkus.version}</version>
-                <executions>
-                    <execution>
-                        <phase>process-resources</phase>
-                        <goals>
-                            <goal>extension-descriptor</goal> <2>
-                        </goals>
-                        <configuration>
-                            <conditionalDependencies> <3>
-                                <extension>org.acme:quarkus-extension-b:${b.version}</extension> <4>
-                            </conditionalDependencies>
-                        </configuration>
-                    </execution>
-                </executions>
-            </plugin>
-        </plugins>
-    </build>
-</project>
-----
-
-<1> runtime Quarkus extension artifact ID, in our example `quarkus-extension-a`
-<2> the goal that generates the extension descriptor which every Quarkus runtime extension project should be configured with
-<3> conditional dependency configuration element
-<4> artifact coordinates of conditional dependencies on other extensions.
-
-In this case, the Maven dependency is not at all required in the `pom.xml`.
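The activation rule itself is easy to model. The following toy sketch in plain Java is not the actual Quarkus bootstrap resolver, but it captures the behavior described in this guide: a conditional extension is activated only once every artifact named in its dependency condition is present, and since activating one extension can satisfy another extension's condition, resolution repeats until a fixed point is reached:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model (NOT the real Quarkus resolver): `conditional` maps a conditional
// extension to the set of artifacts that must be present for it to activate.
public class ConditionalDepsSketch {

    static Set<String> resolve(Set<String> deps, Map<String, Set<String>> conditional) {
        Set<String> result = new HashSet<>(deps);
        boolean changed = true;
        while (changed) { // iterate: one activation may satisfy another condition
            changed = false;
            for (Map.Entry<String, Set<String>> e : conditional.entrySet()) {
                if (!result.contains(e.getKey()) && result.containsAll(e.getValue())) {
                    result.add(e.getKey());
                    changed = true;
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // quarkus-extension-b is conditional on quarkus-extension-c being present
        Map<String, Set<String>> conditional =
                Map.of("quarkus-extension-b", Set.of("quarkus-extension-c"));

        // Without extension-c, extension-b stays inactive
        System.out.println(resolve(Set.of("quarkus-extension-a"), conditional));
        // With extension-c, extension-b is activated
        System.out.println(resolve(Set.of("quarkus-extension-a", "quarkus-extension-c"), conditional));
    }
}
```

The fixed-point iteration is the important part: conditions are re-evaluated as the dependency set grows, never against the original set alone.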
diff --git a/_versions/2.7/guides/config-extending-support.adoc b/_versions/2.7/guides/config-extending-support.adoc deleted file mode 100644 index f1c843711f5..00000000000 --- a/_versions/2.7/guides/config-extending-support.adoc +++ /dev/null @@ -1,403 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Extending Configuration Support - -include::./attributes.adoc[] - -:numbered: -:sectnums: -:sectnumlevels: 4 -:toc: - -[[custom-config-source]] -== Custom `ConfigSource` - -It's possible to create a custom `ConfigSource` as specified in -link:https://github.com/eclipse/microprofile-config/blob/master/spec/src/main/asciidoc/configsources.asciidoc#custom-configsources-via-configsourceprovider[MicroProfile Config]. - -With a Custom `ConfigSource` it is possible to read additional configuration values and add them to the `Config` -instance in a defined ordinal. This allows overriding values from other sources or falling back to other values. - -image::config-sources.png[align=center,width=90%] - -A custom `ConfigSource` requires an implementation of `org.eclipse.microprofile.config.spi.ConfigSource` or -`org.eclipse.microprofile.config.spi.ConfigSourceProvider`. Each implementation requires registration via -the https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html[ServiceLoader] mechanism, either in -`META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource` or -`META-INF/services/org.eclipse.microprofile.config.spi.ConfigSourceProvider` files. 
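The ordinal-driven overriding described above can be sketched in plain Java. This is a toy model of the lookup only — not SmallRye Config's actual implementation — with invented class names:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Toy model of ordinal-based resolution: each source has an ordinal and a
// property map; a lookup walks the sources from highest ordinal to lowest
// and returns the first value it finds.
public class OrdinalLookupSketch {

    static class Source {
        final String name;
        final int ordinal;
        final Map<String, String> props;

        Source(String name, int ordinal, Map<String, String> props) {
            this.name = name;
            this.ordinal = ordinal;
            this.props = props;
        }
    }

    static String lookup(List<Source> sources, String key) {
        List<Source> ordered = new ArrayList<>(sources);
        ordered.sort(Comparator.comparingInt((Source s) -> s.ordinal).reversed());
        for (Source source : ordered) {
            String value = source.props.get(key);
            if (value != null) {
                return value; // highest-ordinal source wins
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Source> sources = List.of(
                new Source("system-properties", 400, Map.of()),
                new Source("in-memory", 275, Map.of("my.prop", "1234")),
                new Source("application.properties", 250, Map.of("my.prop", "5678")));

        // The ordinal-275 source shadows application.properties (250) for my.prop
        System.out.println(lookup(sources, "my.prop")); // prints 1234
    }
}
```

A real `ConfigSource` contributes its ordinal through `getOrdinal()`, as the example below shows.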
-
-=== Example
-
-Consider a simple in-memory `ConfigSource`:
-
-.org.acme.config.InMemoryConfigSource
-[source,java]
-----
-package org.acme.config;
-
-import org.eclipse.microprofile.config.spi.ConfigSource;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Set;
-
-public class InMemoryConfigSource implements ConfigSource {
-    private static final Map<String, String> configuration = new HashMap<>();
-
-    static {
-        configuration.put("my.prop", "1234");
-    }
-
-    @Override
-    public int getOrdinal() {
-        return 275;
-    }
-
-    @Override
-    public Set<String> getPropertyNames() {
-        return configuration.keySet();
-    }
-
-    @Override
-    public String getValue(final String propertyName) {
-        return configuration.get(propertyName);
-    }
-
-    @Override
-    public String getName() {
-        return InMemoryConfigSource.class.getSimpleName();
-    }
-}
-----
-
-And registration in:
-
-.META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource
-[source,properties]
-----
-org.acme.config.InMemoryConfigSource
-----
-
-The `InMemoryConfigSource` will be ordered between the `.env` source and the `application.properties` source due to
-the `275` ordinal:
-
-|===
-|ConfigSource |Ordinal
-
-|System Properties
-|400
-
-|Environment Variables from System
-|300
-
-|Environment Variables from `.env` file
-|295
-
-|InMemoryConfigSource
-|275
-
-|`application.properties` from `/config`
-|260
-
-|`application.properties` from application
-|250
-
-|`microprofile-config.properties` from application
-|100
-|===
-
-In this case, `my.prop` from `InMemoryConfigSource` will only be used if the config engine is unable to find a value
-in xref:config-reference.adoc#system-properties[System Properties],
-xref:config-reference.adoc#environment-variables[Environment Variables from System] or
-xref:config-reference.adoc#env-file[Environment Variables from .env file], in this order.
-
-=== ConfigSource Init
-
-When a Quarkus application starts, a `ConfigSource` can be initialized twice.
One time for _STATIC INIT_ and a second
-time for _RUNTIME INIT_:
-
-==== STATIC INIT
-
-Quarkus starts some of its services during static initialization, and `Config` is usually one of the first things that
-is created. In certain situations it may not be possible to add a custom `ConfigSource`. For instance, if the
-`ConfigSource` requires other services, like database access, these will not be available at this stage, causing a
-chicken-and-egg problem. For this reason, any custom `ConfigSource` requires the annotation
-`@io.quarkus.runtime.annotations.StaticInitSafe` to mark the source as safe to be used at this stage.
-
-===== Example
-
-Consider:
-
-.org.acme.config.InMemoryConfigSource
-[source,java]
-----
-package org.acme.config;
-
-import org.eclipse.microprofile.config.spi.ConfigSource;
-import io.quarkus.runtime.annotations.StaticInitSafe;
-
-@StaticInitSafe
-public class InMemoryConfigSource implements ConfigSource {
-
-}
-----
-
-And registration in:
-
-.META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource
-[source,properties]
-----
-org.acme.config.InMemoryConfigSource
-----
-
-The `InMemoryConfigSource` will be available during _STATIC INIT_.
-
-IMPORTANT: A custom `ConfigSource` is not automatically added during Quarkus _STATIC INIT_. It must be marked with
-the `@io.quarkus.runtime.annotations.StaticInitSafe` annotation.
-
-==== RUNTIME INIT
-
-The _RUNTIME INIT_ stage happens after _STATIC INIT_. In this stage, a `ConfigSource` can be initialized again. There
-are no restrictions at this stage, and a custom source is added to the `Config` instance as expected.
-
-[[config-source-factory]]
-== `ConfigSourceFactory`
-
-Another way to create a `ConfigSource` is via the https://github.com/smallrye/smallrye-config[SmallRye Config]
-`io.smallrye.config.ConfigSourceFactory` API.
The difference between the
-https://github.com/smallrye/smallrye-config[SmallRye Config] factory and the standard way to create a `ConfigSource` as
-specified in
-link:https://github.com/eclipse/microprofile-config/blob/master/spec/src/main/asciidoc/configsources.asciidoc#custom-configsources-via-configsourceprovider[MicroProfile Config],
-is the factory's ability to provide a context with access to the available configuration.
-
-Each implementation of `io.smallrye.config.ConfigSourceFactory` requires registration via
-the https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html[ServiceLoader]
-mechanism in the `META-INF/services/io.smallrye.config.ConfigSourceFactory` file.
-
-=== Example
-
-Consider:
-
-.org.acme.config.URLConfigSourceFactory
-[source,java]
-----
-package org.acme.config;
-
-import java.io.IOException;
-import java.net.URL;
-import java.util.Collections;
-import java.util.OptionalInt;
-
-import org.eclipse.microprofile.config.spi.ConfigSource;
-
-import io.smallrye.config.ConfigSourceContext;
-import io.smallrye.config.ConfigSourceFactory;
-import io.smallrye.config.ConfigValue;
-import io.smallrye.config.PropertiesConfigSource;
-
-public class URLConfigSourceFactory implements ConfigSourceFactory {
-    @Override
-    public Iterable<ConfigSource> getConfigSources(final ConfigSourceContext context) {
-        final ConfigValue value = context.getValue("config.url");
-        if (value == null || value.getValue() == null) {
-            return Collections.emptyList();
-        }
-
-        try {
-            return Collections.singletonList(new PropertiesConfigSource(new URL(value.getValue())));
-        } catch (IOException e) {
-            throw new RuntimeException(e);
-        }
-    }
-
-    @Override
-    public OptionalInt getPriority() {
-        return OptionalInt.of(290);
-    }
-}
-----
-
-And registration in:
-
-.META-INF/services/io.smallrye.config.ConfigSourceFactory
-[source,properties]
-----
-org.acme.config.URLConfigSourceFactory
-----
-
-By implementing `io.smallrye.config.ConfigSourceFactory`, a list of `ConfigSource` may be provided via the
-`Iterable<ConfigSource>
getConfigSources(ConfigSourceContext context)` method. The `ConfigSourceFactory` may also
assign a priority by overriding the method `OptionalInt getPriority()`. The priority value is used to sort
multiple `io.smallrye.config.ConfigSourceFactory` implementations (if found).

IMPORTANT: `io.smallrye.config.ConfigSourceFactory` priority does not affect the `ConfigSource` ordinal. These are
sorted independently.

When the factory is initializing, the provided `ConfigSourceContext` may call the method
`ConfigValue getValue(String name)`. This method looks up configuration names in all ``ConfigSource``s that were already
initialized by the `Config` instance, including sources with lower ordinals than the ones defined in the
`ConfigSourceFactory`. The `ConfigSource` list provided by a `ConfigSourceFactory` is not taken into consideration when
configuring other sources produced by a lower priority `ConfigSourceFactory`.

[[custom-converter]]
== Custom `Converter`

It is possible to create a custom `Converter` type as specified by
link:https://github.com/eclipse/microprofile-config/blob/master/spec/src/main/asciidoc/converters.asciidoc#adding-custom-converters[MicroProfile Config].

A custom `Converter` requires an implementation of `org.eclipse.microprofile.config.spi.Converter`. Each implementation
requires registration via the https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html[ServiceLoader]
mechanism in the `META-INF/services/org.eclipse.microprofile.config.spi.Converter` file. Consider:

[source,java]
----
package org.acme.config;

public class MicroProfileCustomValue {

    private final int number;

    public MicroProfileCustomValue(int number) {
        this.number = number;
    }

    public int getNumber() {
        return number;
    }
}
----

The corresponding converter will look similar to the one below.
[source,java]
----
package org.acme.config;

import org.eclipse.microprofile.config.spi.Converter;

public class MicroProfileCustomValueConverter implements Converter<MicroProfileCustomValue> {

    @Override
    public MicroProfileCustomValue convert(String value) {
        return new MicroProfileCustomValue(Integer.parseInt(value));
    }
}
----

NOTE: The custom converter class must be `public`, must have a `public` constructor with no arguments, and must not be
`abstract`.

The custom configuration type converts the configuration value automatically:

[source,java]
----
@ConfigProperty(name = "configuration.value.name")
MicroProfileCustomValue value;
----

=== Converter priority

The `javax.annotation.Priority` annotation overrides the `Converter` priority and changes converter precedence to
fine-tune the execution order. By default, if no `@Priority` is specified by the `Converter`, the converter is
registered with a priority of `100`. Consider:

[source,java]
----
package org.acme.config;

import javax.annotation.Priority;
import org.eclipse.microprofile.config.spi.Converter;

@Priority(150)
public class MyCustomConverter implements Converter<MicroProfileCustomValue> {

    @Override
    public MicroProfileCustomValue convert(String value) {

        final int secretNumber;
        if (value.startsWith("OBF:")) {
            secretNumber = Integer.parseInt(SecretDecoder.decode(value));
        } else {
            secretNumber = Integer.parseInt(value);
        }

        return new MicroProfileCustomValue(secretNumber);
    }
}
----

Since it converts the same value type (`MicroProfileCustomValue`) and has a priority of `150`, it will be used
instead of a `MicroProfileCustomValueConverter` which has a default priority of `100`.

NOTE: All Quarkus core converters use the priority value of `200`. To override any Quarkus-specific converter, the
priority value should be higher than `200`.
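The priority-based selection described above can be sketched in plain Java. This is an illustrative model only (the `PrioritizedConverter` type is hypothetical), not SmallRye Config's actual implementation: among the converters registered for the same type, the one with the highest priority value wins.

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;

public class ConverterPrioritySketch {
    // Hypothetical pair of a converter function and its priority.
    record PrioritizedConverter(int priority, Function<String, Integer> converter) {}

    static PrioritizedConverter highest(List<PrioritizedConverter> converters) {
        // The converter with the highest priority value takes precedence.
        return converters.stream()
                .max(Comparator.comparingInt(PrioritizedConverter::priority))
                .orElseThrow();
    }

    public static void main(String[] args) {
        PrioritizedConverter plain = new PrioritizedConverter(100, Integer::parseInt);
        PrioritizedConverter obfuscating =
                new PrioritizedConverter(150, v -> Integer.parseInt(v.replace("OBF:", "")));

        PrioritizedConverter chosen = highest(List.of(plain, obfuscating));
        System.out.println(chosen.priority());               // the 150-priority converter wins
        System.out.println(chosen.converter().apply("OBF:42"));
    }
}
```

The real mechanism additionally matches converters by their target type before comparing priorities; the sketch only models the tie-break between two candidates for the same type.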
[[config-interceptors]]
== Config Interceptors

https://github.com/smallrye/smallrye-config[SmallRye Config] provides an interceptor chain that hooks into the
configuration value resolution. This is useful to implement features like
xref:config-reference.adoc#profiles[Profiles],
xref:config-reference.adoc#property-expressions[Property Expressions],
or just logging to find out where the config value was loaded from.

An interceptor requires an implementation of `io.smallrye.config.ConfigSourceInterceptor`. Each implementation
requires registration via the https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html[ServiceLoader]
mechanism in the `META-INF/services/io.smallrye.config.ConfigSourceInterceptor` file.

The `io.smallrye.config.ConfigSourceInterceptor` is able to intercept the resolution of a configuration name with the
method `ConfigValue getValue(ConfigSourceInterceptorContext context, String name)`. The `ConfigSourceInterceptorContext`
is used to proceed with the interceptor chain. The chain can be short-circuited by returning an instance of
`io.smallrye.config.ConfigValue`. The `ConfigValue` objects hold information about the key name, value, config source
origin and ordinal.

NOTE: The interceptor chain is applied before any conversion is performed on the configuration value.

Interceptors may also be created with an implementation of `io.smallrye.config.ConfigSourceInterceptorFactory`. Each
implementation requires registration via the https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html[ServiceLoader]
mechanism in the `META-INF/services/io.smallrye.config.ConfigSourceInterceptorFactory` file.

The `ConfigSourceInterceptorFactory` may initialize an interceptor with access to the current chain
(so it can be used to configure the interceptor and retrieve configuration values) and set the priority.
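The proceed-style chain can be made concrete with a minimal plain-Java sketch. All names here (`Interceptor`, `Context`, `resolve`) are assumptions for illustration and do not use the SmallRye API: each interceptor receives a context it can use to proceed to the next element of the chain, or it can short-circuit by returning a value directly.

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

public class InterceptorChainSketch {
    // Hypothetical interceptor: receives the chain context and the name being resolved.
    interface Interceptor extends BiFunction<Context, String, String> {}

    // The context lets an interceptor proceed to the next element of the chain;
    // when the chain is exhausted, the backing source is consulted.
    record Context(Iterator<Interceptor> chain, Map<String, String> source) {
        String proceed(String name) {
            return chain.hasNext() ? chain.next().apply(this, name) : source.get(name);
        }
    }

    static String resolve(List<Interceptor> interceptors, Map<String, String> source, String name) {
        return new Context(interceptors.iterator(), source).proceed(name);
    }

    public static void main(String[] args) {
        // An interceptor that rewrites the value after the rest of the chain has run.
        Interceptor upperCase = (ctx, name) -> {
            String value = ctx.proceed(name);
            return value == null ? null : value.toUpperCase();
        };
        System.out.println(resolve(List.of(upperCase), Map.of("greeting.message", "hello"), "greeting.message"));
    }
}
```

A profile- or expression-resolving interceptor follows the same shape: transform the name before calling `proceed`, or transform the value after it returns.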
=== Example

.org.acme.config.LoggingConfigSourceInterceptor
[source,java]
----
package org.acme.config;

import javax.annotation.Priority;

import io.smallrye.config.ConfigLogging;
import io.smallrye.config.ConfigSourceInterceptor;
import io.smallrye.config.ConfigSourceInterceptorContext;
import io.smallrye.config.ConfigValue;
import io.smallrye.config.Priorities;

@Priority(Priorities.LIBRARY + 200)
public class LoggingConfigSourceInterceptor implements ConfigSourceInterceptor {
    private static final long serialVersionUID = 367246512037404779L;

    @Override
    public ConfigValue getValue(final ConfigSourceInterceptorContext context, final String name) {
        // proceed with the rest of the interceptor chain, then log the result
        ConfigValue configValue = context.proceed(name);
        if (configValue != null) {
            ConfigLogging.log.lookup(configValue.getName(), configValue.getLocation(), configValue.getValue());
        } else {
            ConfigLogging.log.notFound(name);
        }
        return configValue;
    }
}
----

And registration in:

.META-INF/services/io.smallrye.config.ConfigSourceInterceptor
[source,properties]
----
org.acme.config.LoggingConfigSourceInterceptor
----

The `LoggingConfigSourceInterceptor` logs configuration name lookups to the provided logging platform. The log
information includes the config name and value, and the config source origin and location if it exists.

diff --git a/_versions/2.7/guides/config-mappings.adoc b/_versions/2.7/guides/config-mappings.adoc
deleted file mode 100644
index 010efd66702..00000000000
--- a/_versions/2.7/guides/config-mappings.adoc
+++ /dev/null
@@ -1,772 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Mapping configuration to objects

include::./attributes.adoc[]

:numbered:
:sectnums:
:sectnumlevels: 4
:toc:

With config mappings it is possible to group multiple configuration properties that share the same prefix in a single
interface.
[[config-mappings]]
== `@ConfigMapping`

A config mapping requires an interface with minimal metadata configuration, annotated with the
`@io.smallrye.config.ConfigMapping` annotation.

[source,java]
----
@ConfigMapping(prefix = "server")
interface Server {
    String host();

    int port();
}
----

The `Server` interface is able to map configuration properties with the name `server.host` into the `Server.host()`
method and `server.port` into the `Server.port()` method. The configuration property name to look up is built from the
prefix and the method name, with `.` (dot) as the separator.

NOTE: If a mapping fails to match a configuration property a `NoSuchElementException` is thrown, unless the mapped
element is an `Optional`.

=== Registration

When a Quarkus application starts, a config mapping can be registered twice. One time for _STATIC INIT_ and a second
time for _RUNTIME INIT_:

==== STATIC INIT

Quarkus starts some of its services during static initialization, and `Config` is usually one of the first things that
is created. In certain situations it may not be possible to correctly initialize a config mapping, for instance, if the
mapping requires values from a custom `ConfigSource`. For this reason, any config mapping requires the annotation
`@io.quarkus.runtime.annotations.StaticInitSafe` to mark the mapping as safe to be used at this stage. Learn more
about xref:config-extending-support.adoc#custom-config-source[registration] of a custom `ConfigSource`.

===== Example

[source,java]
----
@StaticInitSafe
@ConfigMapping(prefix = "server")
interface Server {
    String host();

    int port();
}
----

==== RUNTIME INIT

The _RUNTIME INIT_ stage happens after _STATIC INIT_. There are no restrictions at this stage, and any config mapping
is added to the `Config` instance as expected.
=== Retrieval

A config mapping interface can be injected into any CDI-aware bean:

[source,java]
----
class BusinessBean {
    @Inject
    Server server;

    public void businessMethod() {
        String host = server.host();
    }
}
----

In non-CDI contexts, use the API `io.smallrye.config.SmallRyeConfig#getConfigMapping` to retrieve the config mapping
instance:

[source,java]
----
SmallRyeConfig config = ConfigProvider.getConfig().unwrap(SmallRyeConfig.class);
Server server = config.getConfigMapping(Server.class);
----

=== Nested groups

A nested mapping provides a way to subgroup other config properties:

[source,java]
----
@ConfigMapping(prefix = "server")
public interface Server {
    String host();

    int port();

    Log log();

    interface Log {
        boolean enabled();

        String suffix();

        boolean rotate();
    }
}
----

.application.properties
[source,properties]
----
server.host=localhost
server.port=8080
server.log.enabled=true
server.log.suffix=.log
server.log.rotate=false
----

The method name of a mapping group acts as a sub-namespace for the configuration properties.
=== Overriding property names

==== `@WithName`

If a method name and a property name do not match, the `@WithName` annotation can override the method
name mapping and use the name supplied in the annotation:

[source,java]
----
@ConfigMapping(prefix = "server")
interface Server {
    @WithName("name")
    String host();

    int port();
}
----

.application.properties
[source,properties]
----
server.name=localhost
server.port=8080
----

==== `@WithParentName`

The `@WithParentName` annotation allows a configuration mapping to inherit its container's name, simplifying the
configuration property name required to match the mapping:

[source,java]
----
interface Server {
    @WithParentName
    ServerHostAndPort hostAndPort();

    @WithParentName
    ServerInfo info();
}

interface ServerHostAndPort {
    String host();

    int port();
}

interface ServerInfo {
    String name();
}
----

.application.properties
[source,properties]
----
server.host=localhost
server.port=8080
server.name=konoha
----

Without `@WithParentName` the method `name()` requires the configuration property `server.info.name`. Because we use
`@WithParentName`, the `info()` mapping inherits the parent name from `Server`, and `name()` maps to `server.name`
instead.
==== NamingStrategy

Method names in camelCase map to kebab-case property names:

[source,java]
----
@ConfigMapping(prefix = "server")
interface Server {
    String theHost();

    int thePort();
}
----

.application.properties
[source,properties]
----
server.the-host=localhost
server.the-port=8080
----

The mapping strategy can be adjusted by setting the `namingStrategy` value in the `@ConfigMapping` annotation:

[source,java]
----
@ConfigMapping(prefix = "server", namingStrategy = ConfigMapping.NamingStrategy.VERBATIM)
public interface ServerVerbatimNamingStrategy {
    String theHost();

    int thePort();
}
----

.application.properties
[source,properties]
----
server.theHost=localhost
server.thePort=8080
----

The `@ConfigMapping` annotation supports the following naming strategies:

- `KEBAB_CASE` (default) - The method name is derived by replacing case changes with a dash to map the configuration property.
- `VERBATIM` - The method name is used as-is to map the configuration property.
- `SNAKE_CASE` - The method name is derived by replacing case changes with an underscore to map the configuration property.
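The name derivation behind these strategies can be sketched as follows. This is a simplified illustration assuming the rule "insert a separator at each case change and lower-case the result"; SmallRye Config's actual implementation handles additional edge cases such as digit runs.

```java
public class NamingStrategySketch {
    enum NamingStrategy { KEBAB_CASE, VERBATIM, SNAKE_CASE }

    static String apply(NamingStrategy strategy, String methodName) {
        if (strategy == NamingStrategy.VERBATIM) {
            return methodName; // used as-is
        }
        char separator = strategy == NamingStrategy.KEBAB_CASE ? '-' : '_';
        StringBuilder name = new StringBuilder();
        for (char c : methodName.toCharArray()) {
            if (Character.isUpperCase(c)) {
                // A case change marks a word boundary.
                name.append(separator).append(Character.toLowerCase(c));
            } else {
                name.append(c);
            }
        }
        return name.toString();
    }

    public static void main(String[] args) {
        System.out.println(apply(NamingStrategy.KEBAB_CASE, "theHost"));  // the-host
        System.out.println(apply(NamingStrategy.SNAKE_CASE, "thePort")); // the_port
        System.out.println(apply(NamingStrategy.VERBATIM, "theHost"));   // theHost
    }
}
```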
=== Conversions

A config mapping class supports automatic conversion of all types available for conversion in `Config`:

[source,java]
----
@ConfigMapping
public interface SomeTypes {
    @WithName("int")
    int intPrimitive();

    @WithName("int")
    Integer intWrapper();

    @WithName("long")
    long longPrimitive();

    @WithName("long")
    Long longWrapper();

    @WithName("float")
    float floatPrimitive();

    @WithName("float")
    Float floatWrapper();

    @WithName("double")
    double doublePrimitive();

    @WithName("double")
    Double doubleWrapper();

    @WithName("char")
    char charPrimitive();

    @WithName("char")
    Character charWrapper();

    @WithName("boolean")
    boolean booleanPrimitive();

    @WithName("boolean")
    Boolean booleanWrapper();
}
----

.application.properties
[source,properties]
----
int=9
long=9999999999
float=99.9
double=99.99
char=c
boolean=true
----

This is also valid for `Optional` and friends:

[source,java]
----
@ConfigMapping
public interface Optionals {
    Optional<Server> server();

    Optional<String> optional();

    @WithName("optional.int")
    OptionalInt optionalInt();

    interface Server {
        String host();

        int port();
    }
}
----

In this case, the mapping won't fail if there are no configuration properties to match the mapping.

==== `@WithConverter`

The `@WithConverter` annotation provides a way to set a `Converter` to use in a specific mapping:

[source,java]
----
@ConfigMapping
public interface Converters {
    @WithConverter(FooBarConverter.class)
    String foo();
}

public static class FooBarConverter implements Converter<String> {
    @Override
    public String convert(final String value) {
        return "bar";
    }
}
----

.application.properties
[source,properties]
----
foo=foo
----

A call to `Converters.foo()` results in the value `bar`.
==== Collections ====

A config mapping is also able to map the collection types `List` and `Set`:

[source,java]
----
@ConfigMapping(prefix = "server")
public interface ServerCollections {
    Set<Environment> environments();

    interface Environment {
        String name();

        List<App> apps();

        interface App {
            String name();

            List<String> services();

            Optional<List<String>> databases();
        }
    }
}
----

.application.properties
[source,properties]
----
server.environments[0].name=dev
server.environments[0].apps[0].name=rest
server.environments[0].apps[0].services=bookstore,registration
server.environments[0].apps[0].databases=pg,h2
server.environments[0].apps[1].name=batch
server.environments[0].apps[1].services=stock,warehouse
----

The `List` or `Set` mappings can use xref:config-reference.adoc#indexed-properties[indexed properties] to map
configuration values in mapping groups. For collections with simple element types like `String`, their configuration
value is a comma-separated string.

==== Maps ====

A config mapping is also able to map a `Map`:

[source,java]
----
@ConfigMapping(prefix = "server")
public interface Server {
    String host();

    int port();

    Map<String, String> form();
}
----

.application.properties
[source,properties]
----
server.host=localhost
server.port=8080
server.form.login-page=login.html
server.form.error-page=error.html
server.form.landing-page=index.html
----

The configuration property needs to specify an additional name to act as the key. In this case the `form()` `Map` will
contain three elements with the keys `login-page`, `error-page` and `landing-page`.

=== Defaults

The `@WithDefault` annotation allows setting a default value for a mapping (and prevents an error if the
configuration value is not available in any `ConfigSource`):

[source,java]
----
public interface Defaults {
    @WithDefault("foo")
    String foo();

    @WithDefault("bar")
    String bar();
}
----

No configuration properties are required.
The `Defaults.foo()` method will return the value `foo` and `Defaults.bar()` will return
the value `bar`.

=== Validation

A config mapping may combine annotations from https://beanvalidation.org[Bean Validation] to validate configuration
values:

[source,java]
----
@ConfigMapping(prefix = "server")
interface Server {
    @Size(min = 2, max = 20)
    String host();

    @Max(10000)
    int port();
}
----

WARNING: For validation to work, the `quarkus-hibernate-validator` extension is required; validation is then performed
automatically.

=== Mocking

A mapping interface implementation is not a proxy, so it cannot be mocked directly with `@InjectMock` like other CDI
beans. One trick is to make it proxyable with a producer method:

[source,java]
----
public class ServerMockProducer {
    @Inject
    Config config;

    @Produces
    @ApplicationScoped
    @io.quarkus.test.Mock
    Server server() {
        return config.unwrap(SmallRyeConfig.class).getConfigMapping(Server.class);
    }
}
----

The `Server` can be injected as a mock into a Quarkus test class with `@InjectMock`:

[source,java]
----
@QuarkusTest
class ServerMockTest {
    @InjectMock
    Server server;

    @Test
    void localhost() {
        Mockito.when(server.host()).thenReturn("localhost");
        assertEquals("localhost", server.host());
    }
}
----

NOTE: The mock is just an empty shell without any actual configuration values.
If the goal is to only mock certain configuration values and retain the original configuration, the mocking instance
requires a spy:

[source,java]
----
@ConfigMapping(prefix = "app")
public interface AppConfig {
    @WithDefault("app")
    String name();

    Info info();

    interface Info {
        @WithDefault("alias")
        String alias();

        @WithDefault("10")
        Integer count();
    }
}

public static class AppConfigProducer {
    @Inject
    Config config;

    @Produces
    @ApplicationScoped
    @io.quarkus.test.Mock
    AppConfig appConfig() {
        AppConfig appConfig = config.unwrap(SmallRyeConfig.class).getConfigMapping(AppConfig.class);
        AppConfig appConfigSpy = Mockito.spy(appConfig);
        AppConfig.Info infoSpy = Mockito.spy(appConfig.info());
        Mockito.when(appConfigSpy.info()).thenReturn(infoSpy);
        return appConfigSpy;
    }
}
----

The `AppConfig` can be injected as a mock into a Quarkus test class with `@Inject`:

[source,java]
----
@QuarkusTest
class AppConfigTest {
    @Inject
    AppConfig appConfig;

    @Test
    void localhost() {
        Mockito.when(appConfig.name()).thenReturn("mocked-app");
        assertEquals("mocked-app", appConfig.name());

        Mockito.when(appConfig.info().alias()).thenReturn("mocked-alias");
        assertEquals("mocked-alias", appConfig.info().alias());
    }
}
----

NOTE: Nested elements need to be spied individually by Mockito.

[[config-properties]]
== [.line-through]#`@ConfigProperties`# (Deprecated)

IMPORTANT: This feature will be removed soon; please update your code base and use `@ConfigMapping` instead.
The `@io.quarkus.arc.config.ConfigProperties` annotation is able to group multiple related configuration values in its
own class:

[source,java]
----
package org.acme.config;

import io.quarkus.arc.config.ConfigProperties;
import java.util.Optional;

@ConfigProperties(prefix = "greeting") <1>
public class GreetingConfiguration {

    private String message;
    private String suffix = "!"; <2>
    private Optional<String> name;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    public String getSuffix() {
        return suffix;
    }

    public void setSuffix(String suffix) {
        this.suffix = suffix;
    }

    public Optional<String> getName() {
        return name;
    }

    public void setName(Optional<String> name) {
        this.name = name;
    }
}
----
<1> `prefix` is optional. If not set then the prefix to be used will be determined by the class name. In this case it
would still be `greeting` (since the `Configuration` suffix is removed). If the class were named
`GreetingExtraConfiguration` then the resulting default prefix would be `greeting-extra`.
<2> `!` will be the default value if `greeting.suffix` is not set.
Inject the `GreetingConfiguration` with CDI `@Inject`:

[source,java]
----
@Inject
GreetingConfiguration greetingConfiguration;
----

Another alternative style provided by Quarkus is to create `GreetingConfiguration` as an interface:

[source,java]
----
package org.acme.config;

import io.quarkus.arc.config.ConfigProperties;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import java.util.Optional;

@ConfigProperties(prefix = "greeting")
public interface GreetingConfiguration {

    @ConfigProperty(name = "message") <1>
    String message();

    @ConfigProperty(defaultValue = "!")
    String getSuffix(); <2>

    Optional<String> getName(); <3>
}
----
<1> The `@ConfigProperty` annotation is needed because the name of the configuration property that the method
corresponds to does not follow the getter method naming conventions.
<2> In this case, since `name` is not set, the corresponding property will be `greeting.suffix`.
<3> It is unnecessary to specify the `@ConfigProperty` annotation because the method name follows the getter method
naming conventions (`greeting.name` being the corresponding property) and no default value is required.

When using `@ConfigProperties` on a class or an interface, if the value of one of its fields is not provided, the
application startup will fail, and a `javax.enterprise.inject.spi.DeploymentException` will be thrown. This does not
apply to `Optional` fields and fields with a default value.

=== Additional notes on @ConfigProperties

When using a regular class annotated with `@ConfigProperties`, the class doesn't necessarily have to declare getters
and setters. Having simple public non-final fields is valid as well.

Furthermore, the configuration classes support nested object configuration.
Suppose there was a need to have an extra
layer of greeting configuration named `content` that would contain a few fields:

[source,java]
----
@ConfigProperties(prefix = "greeting")
public class GreetingConfiguration {

    public String message;
    public String suffix = "!";
    public Optional<String> name;
    public ContentConfig content; <1>

    public static class ContentConfig {
        public Integer prizeAmount;
        public List<String> recipients;
    }
}
----
<1> The name of the field (not the class name) will determine the name of the properties that are bound to the object.

Setting the properties would occur in the normal manner:

.application.properties
[source,properties]
----
greeting.message = hello
greeting.name = quarkus
greeting.content.prize-amount=10
greeting.content.recipients=Jane,John
----

Furthermore, classes annotated with `@ConfigProperties` can be annotated with Bean Validation annotations:

[source,java]
----
@ConfigProperties(prefix = "greeting")
public class GreetingConfiguration {

    @Size(min = 20)
    public String message;
    public String suffix = "!";

}
----

WARNING: For validation to work, the `quarkus-hibernate-validator` extension is required.

=== Using same ConfigProperties with different prefixes

Quarkus also supports the use of the same `@ConfigProperties` object with different prefixes for each injection point,
using the `@io.quarkus.arc.config.ConfigPrefix` annotation.
If `GreetingConfiguration` from above needs to be used for
both the `greeting` prefix and the `other` prefix:

[source,java]
----
@ConfigProperties(prefix = "greeting")
public class GreetingConfiguration {

    @Size(min = 20)
    public String message;
    public String suffix = "!";

}
----

[source,java]
----
@ApplicationScoped
public class SomeBean {

    @Inject <1>
    GreetingConfiguration greetingConfiguration;

    @ConfigPrefix("other") <2>
    GreetingConfiguration otherConfiguration;

}
----
<1> At this injection point `greetingConfiguration` will use the `greeting` prefix since that is what has been defined
on `@ConfigProperties`.
<2> At this injection point `otherConfiguration` will use the `other` prefix from `@ConfigPrefix` instead of the
`greeting` prefix. Notice that in this case `@Inject` is not required.

=== Combining ConfigProperties with build time conditions

Quarkus allows you to define conditions evaluated at build time (`@IfBuildProfile`, `@UnlessBuildProfile`,
`@IfBuildProperty` and `@UnlessBuildProperty`) to enable or disable the annotations `@ConfigProperties` and
`@ConfigPrefix`, which gives you a very flexible way to map your configuration.

Let's assume that the configuration of a service is mapped with `@ConfigProperties`, and that you don't need this part
of the configuration for your tests because it will be mocked. In that case, you can define a build time condition, as
in the next example:

`ServiceConfiguration.java`
[source,java]
----
@UnlessBuildProfile("test") <1>
@ConfigProperties
public class ServiceConfiguration {
    public String user;
    public String password;
}
----
<1> The annotation `@ConfigProperties` is considered if and only if the active profile is not `test`.
`SomeBean.java`
[source,java]
----
@ApplicationScoped
public class SomeBean {

    @Inject
    Instance<ServiceConfiguration> serviceConfiguration; <1>

}
----
<1> As the configuration of the service could be missing, we need to use `Instance<ServiceConfiguration>` as the type
at the injection point.

diff --git a/_versions/2.7/guides/config-reference.adoc b/_versions/2.7/guides/config-reference.adoc
deleted file mode 100644
index 521382b5559..00000000000
--- a/_versions/2.7/guides/config-reference.adoc
+++ /dev/null
@@ -1,557 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Configuration Reference Guide

include::./attributes.adoc[]

:numbered:
:sectnums:
:sectnumlevels: 4
:toc:

IMPORTANT: The content of this guide has been revised and split into additional topics. Please check the <> section.

In this reference guide we're going to describe various aspects of Quarkus configuration. A Quarkus application and
Quarkus itself (core and extensions) are both configured via the same mechanism that leverages
the https://github.com/smallrye/smallrye-config[SmallRye Config] API, an implementation of the
https://microprofile.io/project/eclipse/microprofile-config[MicroProfile Config] specification.

TIP: If you're looking for information on how to make a Quarkus extension configurable, see the
<> guide.

[[configuration-sources]]
== Config Sources

By default, Quarkus reads configuration properties from multiple sources (by descending ordinal):

1. (400) <<system-properties,System properties>>
2. (300) <<environment-variables,Environment variables>>
3. (295) <<env-file,`.env`>> file in the current working directory
4. (260) <<application-properties-file,Quarkus Application configuration file>> in `$PWD/config/application.properties`
5. (250) <<application-properties-file,Quarkus Application configuration file>> `application.properties` in classpath
6. (100) <<microprofile-config-properties-file,MicroProfile Config configuration file>>
`META-INF/microprofile-config.properties` in classpath

The final configuration is the aggregation of the properties defined by all these sources.
A configuration property
lookup starts at the highest ordinal configuration source available and works its way down to other sources until a
match is found. This means that any configuration property may override a value just by setting a different value in a
higher ordinal config source. For example, a property configured using an environment variable overrides the value
provided using the `application.properties` file.

image::config-sources.png[align=center,width=90%]

[[system-properties]]
=== System properties

System properties can be handed to the application through the `-D` flag during startup. The following examples assign
the value `youshallnotpass` to the attribute `quarkus.datasource.password`.

* For Quarkus dev mode: `./mvnw quarkus:dev -Dquarkus.datasource.password=youshallnotpass`
* For a runner jar: `java -Dquarkus.datasource.password=youshallnotpass -jar target/quarkus-app/quarkus-run.jar`
* For a native executable: `./target/myapp-runner -Dquarkus.datasource.password=youshallnotpass`

[[environment-variables]]
=== Environment variables

* For a runner jar: `export QUARKUS_DATASOURCE_PASSWORD=youshallnotpass ; java -jar target/quarkus-app/quarkus-run.jar`
* For a native executable: `export QUARKUS_DATASOURCE_PASSWORD=youshallnotpass ; ./target/myapp-runner`

NOTE: Environment variable names follow the conversion rules specified by
link:https://github.com/eclipse/microprofile-config/blob/master/spec/src/main/asciidoc/configsources.asciidoc#default-configsources[MicroProfile Config].

[[env-file]]
=== `.env` file in the current working directory

..env
[source,properties]
----
QUARKUS_DATASOURCE_PASSWORD=youshallnotpass <1>
----
<1> The name `QUARKUS_DATASOURCE_PASSWORD` follows the same conversion rules used for <<environment-variables,environment variables>>.

For `dev` mode, this file can be placed in the root of the project, but it is advised to **not** check it in to version
control.
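The naming conversion used for environment variables (and for `.env` entries) can be sketched in a few lines: the MicroProfile Config rule replaces each character of the property name that is not alphanumeric with `_` and upper-cases the result. The helper name below is an assumption for illustration:

```java
public class EnvNameSketch {
    // Sketch of the MicroProfile Config property-name to environment-variable-name rule:
    // replace each non-alphanumeric character with '_' and upper-case everything.
    static String toEnvName(String propertyName) {
        return propertyName.replaceAll("[^a-zA-Z0-9]", "_").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toEnvName("quarkus.datasource.password")); // QUARKUS_DATASOURCE_PASSWORD
    }
}
```

Note that at lookup time the specification actually tries several candidate names (the exact name, the converted name, and the converted upper-cased name); the sketch only shows the conversion itself.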
IMPORTANT: Environment variables in the `.env` file are not available via the `System.getenv(String)` API.

[[application-properties-file]]
=== Quarkus Application configuration file

The Quarkus Application configuration file is loaded from the classpath resources, for instance
`src/main/resources/application.properties`, `src/test/resources/application.properties` or from a `jar` dependency
that contains an `application.properties` entry. Each `application.properties` found is treated as a separate
`ConfigSource` and follows the same rules as every other source (override per property). Additionally, the
configuration file may also reside in `$PWD/config/application.properties`. The loading starts from the config folder
and then proceeds in classpath order (`application.properties` files in the application sources have priority in the
classloader loading order).

.`application.properties`
[source,properties]
----
greeting.message=hello <1>
quarkus.http.port=9090 <2>
----
<1> This is a user-defined configuration property.
<2> This is a configuration property consumed by the `quarkus-vertx-http` extension.

NOTE: The `config/application.properties` is also available in `dev` mode. The file needs to be placed inside the build
tool output directory (`target` for Maven and `build/classes/java/main` for Gradle). Keep in mind however that any
cleaning operation from the build tool like `mvn clean` or `gradle clean` will remove the `config` directory as well.

[[microprofile-config-properties-file]]
=== MicroProfile Config configuration file

The MicroProfile Config configuration file is located in `src/main/resources/META-INF/microprofile-config.properties`.

.`microprofile-config.properties`
[source,properties]
----
greeting.message=hello <1>
quarkus.http.port=9090 <2>
----
<1> This is a user-defined configuration property.
<2> This is a configuration property consumed by the `quarkus-vertx-http` extension.
TIP: It works in exactly the same way as the Quarkus Application configuration file `application.properties`. The
recommendation is to use the Quarkus `application.properties` file.

=== Additional Config Sources

Quarkus provides additional extensions which cover other configuration formats and stores:

* xref:config-yaml.adoc[YAML]
* xref:vault.adoc[HashiCorp Vault]
* xref:consul-config.adoc[Consul]
* xref:spring-cloud-config-client.adoc[Spring Cloud]

TIP: It is also possible to create a xref:config-extending-support.adoc#custom-config-source[Custom Config Source].

== Inject

Quarkus uses https://microprofile.io/project/eclipse/microprofile-config[MicroProfile Config] annotations to inject
configuration properties into the application.

[source,java]
----
@ConfigProperty(name = "greeting.message") <1>
String message;
----
<1> You can use `@Inject @ConfigProperty` or just `@ConfigProperty`. The `@Inject` annotation is not necessary for
members annotated with `@ConfigProperty`.

NOTE: If the application attempts to inject a configuration property that is not set, an error is thrown.

[source,java]
----
@ConfigProperty(name = "greeting.message") <1>
String message;

@ConfigProperty(name = "greeting.suffix", defaultValue="!") <2>
String suffix;

@ConfigProperty(name = "greeting.name")
Optional<String> name; <3>
----
<1> If you do not provide a value for this property, the application startup fails with `javax.enterprise.inject.spi.DeploymentException: No config value of type [class java.lang.String] exists for: greeting.message`.
<2> The default value is injected if the configuration does not provide a value for `greeting.suffix`.
<3> This property is optional - an empty `Optional` is injected if the configuration does not provide a value for `greeting.name`.

TIP: Use xref:config-mappings.adoc#config-mappings[Config Mappings] to group similar configuration properties.

=== Default Values

If a property is associated with a default value (by way of the `defaultValue` attribute), and no configuration value
is supplied for the property, then rather than throwing a `javax.enterprise.inject.spi.DeploymentException`, the
default value will be used. The `defaultValue` value is expressed as a `String`, and uses the same conversion mechanism
used to process configuration values. Several built-in converters already exist for primitives, boxed primitives, and
other classes; for example:

* Primitives: `boolean`, `byte`, `short`, etc.
* Boxed primitives: `java.lang.Boolean`, `java.lang.Byte`, `java.lang.Short`, etc.
* Optional containers: `java.util.Optional`, `java.util.OptionalInt`, `java.util.OptionalLong`, and `java.util.OptionalDouble`
* Java `enum` types
* JSR 310 `java.time.Duration`
* JDK networking `java.net.SocketAddress`, `java.net.InetAddress`, etc.

As you might expect, these converters are `org.eclipse.microprofile.config.spi.Converter` implementations. Therefore,
these converters follow the MicroProfile conversion rules (or those of a custom implementation provider), for example:

* Boolean values are interpreted as `true` for the values "true", "1", "YES", "Y" and "ON"; any other value is interpreted as `false`.
* For `float` and `double` values, the fractional digits must be separated by a dot `.`.

Note that when a combination of `Optional*` types and the `defaultValue` attribute are used, the defined `defaultValue`
will still be used and if no value is given for the property, the `Optional*` will be present and populated with the
converted default value. However, when the property is explicitly empty, the default value is not used and the
`Optional` will be empty.
Consider this example:

[source,properties]
----
# missing value, optional property
greeting.name =
----

In this case, since `greeting.name` was configured to be `Optional*` up above, the corresponding property value will
be an empty `Optional` and execution will continue normally. This would be the case even if there was a default value
configured: the default value is *not* used if the property is explicitly cleared in the configuration.

On the other hand, this example:

[source,properties]
----
# missing value, non-optional
greeting.suffix =
----

will result in a `java.util.NoSuchElementException: SRCFG02004: Required property greeting.suffix not found` on
startup and the default value will not be assigned.

Below is an example of a Quarkus-supplied converter:

[source,java]
----
@ConfigProperty(name = "server.address", defaultValue = "192.168.1.1")
InetAddress serverAddress;
----

== Programmatically access

The `org.eclipse.microprofile.config.ConfigProvider.getConfig()` API allows you to access the Config API
programmatically. This API is mostly useful in situations where CDI injection is not available.

[source,java]
----
String databaseName = ConfigProvider.getConfig().getValue("database.name", String.class);
Optional<String> maybeDatabaseName = ConfigProvider.getConfig().getOptionalValue("database.name", String.class);
----

IMPORTANT: Do not use `System.getProperty(String)` or `System.getenv(String)` to retrieve configuration values. These
APIs are not configuration aware and do not support the features described in this guide.

[[profiles]]
== Profiles

We often need to configure our application differently depending on the target _environment_. For example, the local
development environment may be different from the production environment.

Configuration Profiles allow for multiple configurations in the same file or separate files and select between them via
a profile name.

=== Profile in the property name
To be able to set properties with the same name, each property needs to be prefixed with a percentage sign `%` followed
by the profile name and a dot `.` in the syntax `%{profile-name}.config.name`:

.application.properties
[source,properties]
----
quarkus.http.port=9090
%dev.quarkus.http.port=8181
----

The Quarkus HTTP port will be 9090. If the `dev` profile is active it will be 8181.

Profiles in the `.env` file follow the syntax `_{PROFILE}_CONFIG_KEY=value`:

..env
[source,properties]
----
QUARKUS_HTTP_PORT=9090
_DEV_QUARKUS_HTTP_PORT=8181
----

If a profile does not define a value for a specific attribute, the _default_ (no profile) value is used:

.application.properties
[source,properties]
----
bar=hello
baz=bonjour
%dev.bar=hallo
----

With the `dev` profile enabled, the property `bar` has the value `hallo`, but the property `baz` has the value
`bonjour`. If the `prod` profile is enabled, `bar` has the value `hello` (as there is no specific value for the `prod`
profile), and `baz` the value `bonjour`.

=== Default Profiles

By default, Quarkus provides three profiles that activate automatically in certain conditions:

* *dev* - Activated when in development mode (i.e. `quarkus:dev`)
* *test* - Activated when running tests
* *prod* - The default profile when not running in development or test mode

=== Custom Profiles

It is also possible to create additional profiles and activate them with the `quarkus.profile` configuration property. A
single config property with the new profile name is the only requirement:

.application.properties
[source,properties]
----
quarkus.http.port=9090
%staging.quarkus.http.port=9999
----

Setting `quarkus.profile` to `staging` will activate the `staging` profile.

IMPORTANT: Only a single profile may be active at a time.
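The override rule above can be sketched in plain Java (a hypothetical illustration, not the Quarkus implementation): the lookup first checks the profile-prefixed name, then falls back to the plain property name.

```java
import java.util.Map;

// Hypothetical sketch of profile-aware lookup: check "%{profile}.name" first,
// then fall back to the unprefixed property name.
public class ProfileLookupSketch {
    static String lookup(Map<String, String> config, String profile, String name) {
        String profiled = "%" + profile + "." + name;
        return config.getOrDefault(profiled, config.get(name));
    }

    public static void main(String[] args) {
        Map<String, String> config = Map.of(
                "quarkus.http.port", "9090",
                "%dev.quarkus.http.port", "8181");
        // With the dev profile active the profiled value wins; otherwise the default applies.
        System.out.println(lookup(config, "dev", "quarkus.http.port"));  // 8181
        System.out.println(lookup(config, "prod", "quarkus.http.port")); // 9090
    }
}
```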

[NOTE]
====
The `io.quarkus.runtime.configuration.ProfileManager#getActiveProfile` API provides a way to retrieve the active profile
programmatically.

Using `@ConfigProperty("quarkus.profile")` will *not* work properly.
====

=== Profile aware files

Properties for a specific profile may also reside in an `application-{profile}.properties` named file. The previous
example may be expressed as:

.application.properties
[source,properties]
----
quarkus.http.port=9090
%staging.quarkus.http.test-port=9091
----

.application-staging.properties
[source,properties]
----
quarkus.http.port=9190
quarkus.http.test-port=9191
----

[NOTE]
====
In this style, the configuration names in the profile aware file do not need to be prefixed with the profile name.

Properties in the profile aware file have priority over profile aware properties defined in the main file.
====

=== Parent Profile

A Parent Profile adds one level of hierarchy to the current profile. The configuration `quarkus.config.profile.parent`
accepts a single profile name.

When the Parent Profile is active, if a property cannot be found in the current active Profile, the config lookup
falls back to the Parent Profile. Consider:

[source,properties]
----
quarkus.profile=dev
quarkus.config.profile.parent=common

%common.quarkus.http.port=9090
%dev.quarkus.http.ssl-port=9443

quarkus.http.port=8080
quarkus.http.ssl-port=8443
----

Then

* The active profile is `dev`
* The parent profile is `common`
* `quarkus.http.port` is 9090
* `quarkus.http.ssl-port` is 9443

=== Default Runtime Profile

The default Quarkus runtime profile is set to the profile used to build the application:

[source,bash]
----
./mvnw package -Pnative -Dquarkus.profile=prod-aws
./target/my-app-1.0-runner // <1>
----
<1> The command will run with the `prod-aws` profile. This can be overridden using the `quarkus.profile` configuration.
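The parent-profile fallback chain described earlier (active profile, then parent profile, then the unprefixed default) can be sketched in plain Java, using the values from the Parent Profile example above (a hypothetical illustration, not the Quarkus implementation):

```java
import java.util.Map;

// Hypothetical sketch: try "%{active}.name", then "%{parent}.name",
// then the unprefixed property name.
public class ParentProfileSketch {
    static String lookup(Map<String, String> config, String active, String parent, String name) {
        for (String profile : new String[] { active, parent }) {
            String value = config.get("%" + profile + "." + name);
            if (value != null) {
                return value;
            }
        }
        return config.get(name);
    }

    public static void main(String[] args) {
        Map<String, String> config = Map.of(
                "%common.quarkus.http.port", "9090",
                "%dev.quarkus.http.ssl-port", "9443",
                "quarkus.http.port", "8080",
                "quarkus.http.ssl-port", "8443");
        // Matches the results listed above: port falls back to the parent profile,
        // ssl-port resolves in the active profile.
        System.out.println(lookup(config, "dev", "common", "quarkus.http.port"));     // 9090
        System.out.println(lookup(config, "dev", "common", "quarkus.http.ssl-port")); // 9443
    }
}
```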

[[property-expressions]]
== Property Expressions

Quarkus provides property expression expansion on configuration values. An expression string is
a mix of plain strings and expression segments, which are wrapped by the sequence `${ ... }`.

These expressions are resolved when the property is read. If the configuration property is fixed at build time, the
expression is resolved at build time; if the configuration property is overridable at runtime, it is resolved at
runtime.

Consider:

.application.properties
[source,properties]
----
remote.host=quarkus.io
callable.url=https://${remote.host}/
----

The resolved value of the `callable.url` property is `https://quarkus.io/`.

Another example would be defining different database servers by profile:

.application.properties
[source,properties]
----
%dev.quarkus.datasource.jdbc.url=jdbc:mysql://localhost:3306/mydatabase?useSSL=false
quarkus.datasource.jdbc.url=jdbc:mysql://remotehost:3306/mydatabase?useSSL=false
----

can be simplified to:

.application.properties
[source,properties]
----
%dev.application.server=localhost
application.server=remotehost

quarkus.datasource.jdbc.url=jdbc:mysql://${application.server}:3306/mydatabase?useSSL=false
----

Additionally, the Expression Expansion engine supports the following segments:

* `${expression:value}` - Provides a default value after the `:` if the expansion doesn't find a value.
* `${my.prop${compose}}` - Composed expressions. Inner expressions are resolved first.
* `${my.prop}${my.prop}` - Multiple expressions.

If an expression cannot be expanded and no default is supplied, a `NoSuchElementException` is thrown.

NOTE: Expression lookups are performed in all config sources. The expression values and expansion values may reside in
different config sources.

=== With Environment Variables

Property Expressions also work with Environment Variables.

.application.properties
[source,properties]
----
remote.host=quarkus.io
application.host=${HOST:${remote.host}}
----

This will expand the `HOST` environment variable and use the value of the property `remote.host` as the default value
if `HOST` is not set.

== Accessing a generated UUID

The default config source from Quarkus provides a random UUID value.
It generates the UUID at startup time.
So, the value changes between startups, including reloads in dev mode.

You can access the generated value using the `quarkus.uuid` property.
Use <<property-expressions,Property Expressions>> to access it: `${quarkus.uuid}`.
For example, it can be useful to configure a Kafka client with a unique consumer group:

[source,properties]
----
mp.messaging.incoming.prices.group.id=${quarkus.uuid}
----

== Clearing properties

Run time properties which are optional, and which have had values set at build time or which have a default value,
may be explicitly cleared by assigning an empty string to the property. Note that this will _only_ affect
runtime properties, and will _only_ work with properties whose values are not required.

.application.properties
[source,properties]
----
remote.host=quarkus.io
----

A lookup to `remote.host` with `-Dremote.host=` will throw an exception, because the system property cleared the value.

[[indexed-properties]]
== Indexed Properties

A config value which contains unescaped commas may be converted to `Collection`. This works for simple cases, but it
becomes cumbersome and limited for more advanced cases.

Indexed Properties provide a way to use indexes in config property names to map specific elements in a `Collection`
type. Since the indexed element is part of the property name and not contained in the value, this can also be used to
map complex object types as `Collection` elements.
Consider:

.application.properties
[source,properties]
----
my.collection=dog,cat,turtle

my.indexed.collection[0]=dog
my.indexed.collection[1]=cat
my.indexed.collection[2]=turtle
----

The indexed property syntax uses the property name and square brackets `[ ]` with an index in between.

A call to `Config#getValues("my.collection", String.class)` will automatically create and convert a `List<String>`
that contains the values `dog`, `cat` and `turtle`. A call to `Config#getValues("my.indexed.collection", String.class)`
returns the exact same result. If the same property name exists in both forms (regular and indexed), the regular
value has priority.

The indexed properties are sorted by their index before being added to the target `Collection`. Any gaps contained in
the indexes do not resolve to the target `Collection`, which means that the `Collection` result will store all values
without any gaps.

IMPORTANT: Indexed Properties are not supported in Environment Variables.

[[configuring_quarkus]]
== Configuring Quarkus

Quarkus itself is configured via the same mechanism as your application. Quarkus reserves the `quarkus.` namespace
for its own configuration. For example, to configure the HTTP server port you can set `quarkus.http.port` in
`application.properties`. All the Quarkus configuration properties are xref:all-config.adoc[documented and searchable].

[IMPORTANT]
====
As mentioned above, properties prefixed with `quarkus.` are effectively reserved for configuring Quarkus itself and its
extensions. Therefore, the `quarkus.` prefix should **never** be used for application specific properties.
====

=== Build Time configuration

Some Quarkus configurations only take effect during build time, meaning it is not possible to change them at runtime.
These configurations are still available at runtime, but as read-only, and have no effect on Quarkus behaviour.
A change to any
of these configurations requires a rebuild of the application itself to reflect changes of such properties.

TIP: The properties fixed at build time are marked with a lock icon (icon:lock[]) in the xref:all-config.adoc[list of all configuration options].

However, some extensions do define properties _overridable at runtime_. A simple example is the database URL, username
and password, which are only known in your target environment, so they can be set and influence the
application behaviour at runtime.

== Change build time properties after your application has been published

If you are in the rare situation that you need to change the build time configuration after your application is built, then check out how xref:reaugmentation.adoc[re-augmentation] can be used to rebuild the augmentation output for a different build time configuration.

[[additional-information]]
== Additional Information

* xref:config-yaml.adoc[YAML ConfigSource Extension]
* xref:vault.adoc[HashiCorp Vault ConfigSource Extension]
* xref:consul-config.adoc[Consul ConfigSource Extension]
* xref:spring-cloud-config-client.adoc[Spring Cloud ConfigSource Extension]
* xref:config-mappings.adoc[Mapping configuration to objects]
* xref:config-extending-support.adoc[Extending configuration support]

Quarkus relies on link:https://github.com/smallrye/smallrye-config/[SmallRye Config] and inherits its features:

* Additional ``ConfigSource``s
* Additional ``Converter``s
* Indexed properties
* Parent profile
* Interceptors for configuration value resolution
* Relocate configuration properties
* Fallback configuration properties
* Logging
* Hide secrets

For more information, please check the
link:https://smallrye.io/docs/smallrye-config/index.html[SmallRye Config documentation].

== Configuration Reference

include::{generated-dir}/config/quarkus-config-config.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/config-yaml.adoc b/_versions/2.7/guides/config-yaml.adoc
deleted file mode 100644
index dd358371c6f..00000000000
--- a/_versions/2.7/guides/config-yaml.adoc
+++ /dev/null
@@ -1,210 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= YAML Configuration

include::./attributes.adoc[]

:toc:

https://en.wikipedia.org/wiki/YAML[YAML] is a very popular format. Kubernetes relies heavily on the YAML format to
write the various resource descriptors.

Quarkus offers the possibility to use YAML in addition to the standard Java Properties file.

== Enabling YAML Configuration

To enable YAML configuration, add the `quarkus-config-yaml` extension:

:add-extension-extensions: quarkus-config-yaml
include::includes/devtools/extension-add.adoc[]

You can also simply add the following dependency to your project:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-config-yaml</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-config-yaml")
----

Remove the `src/main/resources/application.properties` file and create a `src/main/resources/application.yaml` file.

NOTE: If both are present, Quarkus prioritizes configuration properties from the YAML file first and then from the
Properties file. However, to avoid confusion, we recommend removing the Properties file.

TIP: Quarkus supports both the `yml` and `yaml` file extensions.

=== Example

The following snippets provide examples of YAML configuration:

[source,yaml]
----
# YAML supports comments
quarkus:
  datasource:
    db-kind: postgresql
    jdbc:
      url: jdbc:postgresql://localhost:5432/some-database
----

[source,yaml]
----
# REST Client configuration property
quarkus:
  rest-client:
    org.acme.rest.client.ExtensionsService:
      url: https://stage.code.quarkus.io/api
----

[source,yaml]
----
# For configuration property names that use quotes, do not split the string inside the quotes
quarkus:
  log:
    category:
      "io.quarkus.category":
        level: INFO
----

[source,yaml]
----
quarkus:
  datasource:
    url: jdbc:postgresql://localhost:5432/quarkus_test

  hibernate-orm:
    database:
      generation: drop-and-create

  oidc:
    enabled: true
    auth-server-url: http://localhost:8180/auth/realms/quarkus
    client-id: app


app:
  frontend:
    oidc-realm: quarkus
    oidc-app: app
    oidc-server: http://localhost:8180/auth

# With profiles
"%test":
  quarkus:
    oidc:
      enabled: false
    security:
      users:
        file:
          enabled: true
          realm-name: quarkus
          plain-text: true
----

== Profiles

As you can see in the previous snippet, you can use xref:config-reference.adoc#profiles[profiles] in YAML. The profile
key requires double quotes: `"%test"`. This is because YAML does not support keys starting with `%`.

Everything under the `"%test"` key is only enabled when the `test` profile is active. For example, in the previous
snippet it disables OIDC (`quarkus.oidc.enabled: false`), whereas without the `test` profile, it would be enabled.

As with the Java Properties format, you can define your own profile:

[source,yaml]
----
quarkus:
  http:
    port: 8081

"%staging":
  quarkus:
    http:
      port: 8082
----

If you enable the `staging` profile, the HTTP port will be 8082, whereas it would be 8081 otherwise.

The YAML configuration also supports profile aware files.
In this case, properties for a specific profile may reside in
an `application-{profile}.yaml` named file. The previous example may be expressed as:

.application.yaml
[source,yaml]
----
quarkus:
  http:
    port: 8081
----

.application-staging.yaml
[source,yaml]
----
quarkus:
  http:
    port: 8082
----

== Expressions

The YAML format also supports xref:config-reference.adoc#expressions[expressions], using the same format as Java
Properties:

[source,yaml]
----
mach: 3
x:
  factor: 2.23694

display:
  mach: ${mach}
  unit:
    name: "mph"
    factor: ${x.factor}
----

Note that you can reference nested properties using the `.` (dot) separator, as in `${x.factor}`.

== External application.yaml file

The `application.yaml` file may also be placed in `config/application.yaml` to specialize the runtime configuration. The
file has to be present in the root of the working directory relative to the Quarkus application runner:

[source,text]
----
.
├── config
│    └── application.yaml
├── my-app-runner
----

The values from this file override any values from the regular `application.yaml` file if it exists.

== Configuration key conflicts

The MicroProfile Config specification defines configuration keys as an arbitrary `.`-delimited string. However,
structured formats like YAML only support a subset of the possible configuration namespace. For example, consider the
two configuration properties `quarkus.http.cors` and `quarkus.http.cors.methods`. One property is the prefix of another,
so it may not be immediately evident how to specify both keys in your YAML configuration.

This is solved by using a `null` key (represented by `~`) for any YAML property which is a prefix of another one:

[source,yaml]
----
quarkus:
  http:
    cors:
      ~: true
      methods: GET,PUT,POST
----

YAML `null` keys are not included in the assembly of the configuration property name, allowing them to be used
at any level for disambiguating configuration keys.
diff --git a/_versions/2.7/guides/config.adoc b/_versions/2.7/guides/config.adoc
deleted file mode 100644
index 3c09b95795b..00000000000
--- a/_versions/2.7/guides/config.adoc
+++ /dev/null
@@ -1,243 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Configuring Your Application

include::./attributes.adoc[]

IMPORTANT: The content of this guide has been revised and split into additional topics. Please check the
<<additional-information,Additional Information>> section.

:toc:

Hardcoded values in your code are a _no go_ (even if we all did it at some point ;-)).
In this guide, we will learn how to configure a Quarkus application.

== Prerequisites

include::includes/devtools/prerequisites.adoc[]

== Solution

We recommend that you follow the instructions in the next sections and create the application step by step.
However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `config-quickstart` {quickstarts-tree-url}/config-quickstart[directory].

== Create the Maven project

First, we need a new project.
Create a new project with the following command:

:create-app-artifact-id: config-quickstart
:create-app-extensions: resteasy
include::includes/devtools/create-app.adoc[]

It generates:

* the Maven structure
* a landing page accessible on `http://localhost:8080`
* example `Dockerfile` files for both `native` and `jvm` modes
* the application configuration file

== Create the configuration

A Quarkus application uses the https://github.com/smallrye/smallrye-config[SmallRye Config] API to provide all
mechanisms related to configuration.

By default, Quarkus reads configuration properties from multiple sources.
For the purpose of this guide, we will use an application configuration file located in `src/main/resources/application.properties`.
Edit the file with the following content:

.application.properties
[source,properties]
----
# Your configuration properties
greeting.message = hello
greeting.name = quarkus
----

== Create a REST resource

Create the `org.acme.config.GreetingResource` REST resource with the following content:

[source,java]
----
package org.acme.config;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/greeting")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello RESTEasy";
    }
}
----

== Inject the configuration

Quarkus uses https://microprofile.io/project/eclipse/microprofile-config[MicroProfile Config] annotations to inject the
configuration properties into the application.

[source,java]
----
@ConfigProperty(name = "greeting.message") <1>
String message;
----
<1> You can use `@Inject @ConfigProperty` or just `@ConfigProperty`. The `@Inject` annotation is not necessary for
members annotated with `@ConfigProperty`.

NOTE: If the application attempts to inject a configuration property that is not set, an error is thrown.

Edit the `org.acme.config.GreetingResource`, and introduce the following configuration properties:

[source,java]
----
@ConfigProperty(name = "greeting.message") <1>
String message;

@ConfigProperty(name = "greeting.suffix", defaultValue="!") <2>
String suffix;

@ConfigProperty(name = "greeting.name")
Optional<String> name; <3>
----
<1> If you do not provide a value for this property, the application startup fails with `javax.enterprise.inject.spi.DeploymentException: No config value of type [class java.lang.String] exists for: greeting.message`.
<2> The default value is injected if the configuration does not provide a value for `greeting.suffix`.
<3> This property is optional - an empty `Optional` is injected if the configuration does not provide a value for `greeting.name`.

Now, modify the `hello` method to use the injected properties:

[source,java]
----
@GET
@Produces(MediaType.TEXT_PLAIN)
public String hello() {
    return message + " " + name.orElse("world") + suffix;
}
----

TIP: Use the `@io.smallrye.config.ConfigMapping` annotation to group multiple configurations in a single interface.
Please check the https://smallrye.io/docs/smallrye-config/main/mapping/mapping.html[Config Mappings] documentation.

== Update the test

We also need to update the functional test to reflect the changes made to the endpoint.
Create the `src/test/java/org/acme/config/GreetingResourceTest.java` file with the following content:

[source,java]
----
package org.acme.config;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

@QuarkusTest
public class GreetingResourceTest {

    @Test
    public void testHelloEndpoint() {
        given()
            .when().get("/greeting")
            .then()
                .statusCode(200)
                .body(is("hello quarkus!")); // Modified line
    }

}
----

== Package and run the application

Run the application with:

include::includes/devtools/dev.adoc[]

Open your browser to http://localhost:8080/greeting.

Changing the configuration file is immediately reflected.
You can add the `greeting.suffix`, remove the other properties, change the values, etc.

As usual, the application can be packaged using:

include::includes/devtools/build.adoc[]

and executed using `java -jar target/quarkus-app/quarkus-run.jar`.

You can also generate the native executable with:

include::includes/devtools/build-native.adoc[]

== Programmatically access the configuration

The `org.eclipse.microprofile.config.ConfigProvider.getConfig()` API allows you to access the Config API
programmatically. This API is mostly useful in situations where CDI injection is not available.

[source,java]
----
String databaseName = ConfigProvider.getConfig().getValue("database.name", String.class);
Optional<String> maybeDatabaseName = ConfigProvider.getConfig().getOptionalValue("database.name", String.class);
----

== Configuring Quarkus

Quarkus itself is configured via the same mechanism as your application. Quarkus reserves the `quarkus.` namespace
for its own configuration. For example, to configure the HTTP server port you can set `quarkus.http.port` in
`application.properties`. All the Quarkus configuration properties are xref:all-config.adoc[documented and searchable].

[IMPORTANT]
====
As mentioned above, properties prefixed with `quarkus.` are effectively reserved for configuring Quarkus itself and its
extensions. Therefore, the `quarkus.` prefix should **never** be used for application specific properties.
====

=== Build Time configuration

Some Quarkus configurations only take effect during build time, meaning it is not possible to change them at runtime.
These configurations are still available at runtime, but as read-only, and have no effect on Quarkus behaviour. A change
to any of these configurations requires a rebuild of the application itself to reflect changes of such properties.

TIP: The properties fixed at build time are marked with a lock icon (icon:lock[]) in the xref:all-config.adoc[list of all configuration options].

However, some extensions do define properties _overridable at runtime_. A simple example is the database URL, username
and password, which are only known in your target environment, so they can be set and influence the
application behaviour at runtime.
- -[[additional-information]] -== Additional Information - -* xref:config-reference.adoc[Configuration Reference Guide] -* xref:config-yaml.adoc[YAML ConfigSource Extension] -* xref:vault.adoc[HashiCorp Vault ConfigSource Extension] -* xref:consul-config.adoc[Consul ConfigSource Extension] -* xref:spring-cloud-config-client.adoc[Spring Cloud ConfigSource Extension] -* xref:config-mappings.adoc[Mapping configuration to objects] -* xref:config-extending-support.adoc[Extending configuration support] - -Quarkus relies on link:https://github.com/smallrye/smallrye-config/[SmallRye Config] and inherits its features: - -* Additional ``ConfigSource``s -* Additional ``Converter``s -* Indexed properties -* Parent profile -* Interceptors for configuration value resolution -* Relocate configuration properties -* Fallback configuration properties -* Logging -* Hide secrets - -For more information, please check the -link:https://smallrye.io/docs/smallrye-config/index.html[SmallRye Config documentation]. diff --git a/_versions/2.7/guides/container-image.adoc b/_versions/2.7/guides/container-image.adoc deleted file mode 100644 index faa860b46e7..00000000000 --- a/_versions/2.7/guides/container-image.adoc +++ /dev/null @@ -1,187 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Container Images - -include::./attributes.adoc[] - -Quarkus provides extensions for building (and pushing) container images. Currently it supports: - -- <<#jib,Jib>> -- <<#docker,Docker>> -- <<#s2i,S2I>> -- <<#buildpack,Buildpack>> - -== Container Image extensions - -[#jib] -=== Jib - -The extension `quarkus-container-image-jib` is powered by https://github.com/GoogleContainerTools/jib[Jib] for performing container image builds. 

The major benefit of using Jib with Quarkus is that all the dependencies (everything found under `target/lib`) are cached in a different layer than the actual application, making rebuilds really fast and small (when it comes to pushing).
Another important benefit of using this extension is that it provides the ability to create a container image without any dedicated client side tooling (like Docker) or running daemon processes (like the Docker daemon),
when all that is needed is the ability to push to a container image registry.

To use this feature, add the following extension to your project:

:add-extension-extensions: container-image-jib
include::includes/devtools/extension-add.adoc[]

WARNING: In situations where all that is needed is to build a container image and no push to a registry is necessary (essentially by having set `quarkus.container-image.build=true` and left `quarkus.container-image.push` unset - it defaults to `false`), then this extension creates a container image and registers
it with the Docker daemon. This means that although Docker isn't used to build the image, it is nevertheless necessary. Also note that when using this mode, the built container image *will*
show up when executing `docker images`.

==== Including extra files

There are cases when additional files (other than ones produced by the Quarkus build) need to be added to a container image.
To support these cases, Quarkus copies any file under `src/main/jib` into the built container image (which is essentially the same
idea that the Jib Maven and Gradle plugins support).
For example, the presence of `src/main/jib/foo/bar` would result in `/foo/bar` being added into the container filesystem.

==== JVM Debugging

There are cases where the built container image may need to have Java debugging conditionally enabled at runtime.
-
-When the base image has not been changed (and therefore `ubi8/openjdk-11-runtime` or `ubi8/openjdk-17-runtime` is used), the `quarkus.jib.jvm-arguments` configuration property can be used to
-make the JVM listen on the debug port at startup.
-
-The exact configuration is:
-
-[source,properties]
-----
-quarkus.jib.jvm-arguments=-agentlib:jdwp=transport=dt_socket\\,server=y\\,suspend=n\\,address=*:5005
-----
-
-Other base images might provide launch scripts that enable debugging when an environment variable is set, in which case you would set that environment variable when launching the container.
-
-Finally, the `quarkus.jib.jvm-entrypoint` configuration property can be used to completely override the container entry point, and can thus be used to either hard-code the JVM debug configuration or point to a script that handles the details.
-
-[#docker]
-=== Docker
-
-The extension `quarkus-container-image-docker` uses the Docker binary and the generated Dockerfiles under `src/main/docker` to perform Docker builds.
-
-To use this feature, add the following extension to your project.
-
-:add-extension-extensions: container-image-docker
-include::includes/devtools/extension-add.adoc[]
-
-[#s2i]
-=== S2I
-
-The extension `quarkus-container-image-s2i` uses S2I binary builds to perform container builds inside the OpenShift cluster.
-The idea behind a binary build is that you just upload the artifact and its dependencies to the cluster, and during the build they are merged into a builder image (defaults to `fabric8/s2i-java`).
-
-The benefit of this approach is that it can be combined with OpenShift's `DeploymentConfig`, which makes it easy to roll out changes to the cluster.
-
-To use this feature, add the following extension to your project.
-
-:add-extension-extensions: container-image-s2i
-include::includes/devtools/extension-add.adoc[]
-
-S2I builds require creating a `BuildConfig` and two `ImageStream` resources, one for the builder image and one for the output image.
-The creation of these objects is taken care of by the Quarkus Kubernetes extension.
-
-
-[#buildpack]
-=== Buildpack
-
-The extension `quarkus-container-image-buildpack` uses buildpacks to perform container image builds.
-Under the hood, buildpacks use a Docker daemon for the actual build.
-While buildpacks support alternatives to Docker, this extension only works with Docker.
-
-Additionally, the user has to configure which builder image to use (no default image is provided). For example:
-
-[source,properties]
-----
-quarkus.buildpack.jvm-builder-image=<jvm builder image>
-----
-
-or for native:
-
-[source,properties]
-----
-quarkus.buildpack.native-builder-image=<native builder image>
-----
-
-To use this feature, add the following extension to your project.
-
-:add-extension-extensions: container-image-buildpack
-include::includes/devtools/extension-add.adoc[]
-
-NOTE: When using the buildpack container image extension, it is strongly advised to avoid adding `quarkus.container-image.build=true` to your properties configuration, as it might trigger nested builds within builds. It is preferable to pass it as an option to the build command instead.
-
-== Building
-
-To build a container image for your project, `quarkus.container-image.build=true` needs to be set using any of the ways that Quarkus supports.
-
-:build-additional-parameters: -Dquarkus.container-image.build=true
-include::includes/devtools/build.adoc[]
-:!build-additional-parameters:
-
-NOTE: If you want to build a native container image and already have an existing native executable, you can set `-Dquarkus.native.reuse-existing=true` and the native image build will not be re-run.
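Rather than passing `-Dquarkus.container-image.build=true` on every invocation, the flag can also be set in `application.properties`. The group, name and tag below are optional, purely illustrative overrides; when omitted, Quarkus derives defaults from the user name, the application name and the application version:

```properties
# Always build a container image as part of the package phase
quarkus.container-image.build=true
# Optional overrides for the image coordinates (placeholder values, adjust to your project)
quarkus.container-image.group=mygroup
quarkus.container-image.name=my-app
quarkus.container-image.tag=1.0.0-SNAPSHOT
```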
-
-== Pushing
-
-To push a container image for your project, `quarkus.container-image.push=true` needs to be set using any of the ways that Quarkus supports.
-
-:build-additional-parameters: -Dquarkus.container-image.push=true
-include::includes/devtools/build.adoc[]
-:!build-additional-parameters:
-
-NOTE: If no registry is set (using `quarkus.container-image.registry`), then `docker.io` will be used as the default.
-
-== Selecting among multiple extensions
-
-It does not make sense to use multiple container image extensions as part of the same build. When multiple container image extensions are present, an error is raised to inform the user. The user can either remove the unneeded extensions or select one using `application.properties`.
-
-For example, if both `container-image-docker` and `container-image-s2i` are present and the user needs to use `container-image-docker`:
-
-[source,properties]
-----
-quarkus.container-image.builder=docker
-----
-
-== Customizing
-
-The following properties can be used to customize the container image build process.
-
-=== Container Image Options
-
-include::{generated-dir}/config/quarkus-container-image.adoc[opts=optional, leveloffset=+1]
-
-==== Using CI Environments
-
-Various CI environments provide a ready-to-use container image registry, which can be combined with the container image Quarkus extensions in order to
-effortlessly create and push a Quarkus application to said registry.
-
-For example, https://gitlab.com/[GitLab] provides such a registry and, in the provided CI environment,
-makes available the `CI_REGISTRY_IMAGE` environment variable
-(see GitLab's https://docs.gitlab.com/ee/ci/variables/[documentation] for more information), which can be used in Quarkus like so:
-
-[source,properties]
-----
-quarkus.container-image.image=${CI_REGISTRY_IMAGE}
-----
-
-NOTE: See xref:config.adoc#combine-property-env-var[this] for more information on how to combine properties with environment variables.
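Combining the build, push and image settings from the sections above, a minimal CI-oriented configuration might look like this (illustrative; `CI_REGISTRY_IMAGE` is the GitLab-provided variable mentioned earlier):

```properties
# Build and push the image during the CI build,
# targeting the registry-provided image coordinates.
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.image=${CI_REGISTRY_IMAGE}
```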
-
-=== Jib Options
-
-In addition to the generic container image options, the `container-image-jib` extension also provides the following options:
-
-include::{generated-dir}/config/quarkus-container-image-jib.adoc[opts=optional, leveloffset=+1]
-
-=== Docker Options
-
-In addition to the generic container image options, the `container-image-docker` extension also provides the following options:
-
-include::{generated-dir}/config/quarkus-container-image-docker.adoc[opts=optional, leveloffset=+1]
-
-=== S2I Options
-
-In addition to the generic container image options, the `container-image-s2i` extension also provides the following options:
-
-include::{generated-dir}/config/quarkus-container-image-s2i.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/context-propagation.adoc b/_versions/2.7/guides/context-propagation.adoc
deleted file mode 100644
index fcfa1129b53..00000000000
--- a/_versions/2.7/guides/context-propagation.adoc
+++ /dev/null
@@ -1,293 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Context Propagation in Quarkus
-
-include::./attributes.adoc[]
-
-Traditional blocking code uses link:https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ThreadLocal.html[`ThreadLocal`] variables to store contextual objects in order to avoid
-passing them as parameters everywhere. Many Quarkus extensions require those contextual objects to operate
-properly: xref:rest-json.adoc[RESTEasy], xref:cdi-reference.adoc[ArC] and xref:transaction.adoc[Transaction],
-for example.
-
-If you write reactive/async code, you have to cut your work into a pipeline of code blocks that get executed
-"later", and in practice after the method you defined them in has returned.
As such, `try/finally` blocks
-as well as `ThreadLocal` variables stop working, because your reactive code gets executed in another thread,
-after the caller ran its `finally` block.
-
-link:https://github.com/smallrye/smallrye-context-propagation[SmallRye Context Propagation], an implementation of
-link:https://github.com/eclipse/microprofile-context-propagation[MicroProfile Context Propagation], was made to
-make those Quarkus extensions work properly in reactive/async settings. It works by capturing those contextual
-values that used to be in thread-locals, and restoring them when your code is called.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `context-propagation-quickstart` {quickstarts-tree-url}/context-propagation-quickstart[directory].
-
-== Setting it up
-
-If you are using link:http://smallrye.io/smallrye-mutiny[Mutiny] (the `quarkus-mutiny` extension), you just need to add
-the `quarkus-smallrye-context-propagation` extension to enable context propagation.
-
-In other words, add the following dependencies to your `pom.xml`:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<!-- RESTEasy Mutiny support extension if not already included -->
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-mutiny</artifactId>
-</dependency>
-
-<!-- Context Propagation extension -->
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-smallrye-context-propagation</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-// RESTEasy support extensions if not already included
-implementation("io.quarkus:quarkus-resteasy-mutiny")
-// Context Propagation extension
-implementation("io.quarkus:quarkus-smallrye-context-propagation")
-----
-
-With this, you will get context propagation for ArC, RESTEasy and transactions, if you are using them.
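Before moving on to the usage examples, the context-loss problem described in the introduction can be demonstrated with plain Java alone (no Quarkus APIs; the class and variable names below are made up for the illustration): a `ThreadLocal` set by the caller is simply invisible to the pool thread that runs the async stage.

```java
import java.util.concurrent.CompletableFuture;

class ThreadLocalLoss {
    // Contextual state stored the "traditional blocking" way
    static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    public static void main(String[] args) throws Exception {
        CURRENT_USER.set("alice");

        // The supplier runs on another thread, which has its own
        // (empty) ThreadLocal map: the value set above is lost there.
        String seenAsync = CompletableFuture
                .supplyAsync(() -> String.valueOf(CURRENT_USER.get()))
                .get();

        System.out.println("caller sees: " + CURRENT_USER.get()); // alice
        System.out.println("async sees: " + seenAsync);           // null
    }
}
```

Context propagation libraries address exactly this: they capture such values on the caller thread and restore them around the async block before your code runs.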
-
-== Usage example with Mutiny
-
-[TIP]
-.Mutiny
-====
-This section uses Mutiny reactive types.
-If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
-====
-
-If you want to write a REST endpoint that reads the next 3 items from a xref:kafka.adoc[Kafka topic], stores them in a database using
-xref:hibernate-orm-panache.adoc[Hibernate ORM with Panache] (all in the same transaction) before returning
-them to the client, you can do it like this:
-
-[source,java]
-----
-    // Get the prices stream
-    @Inject
-    @Channel("prices") Publisher<Double> prices;
-
-    @Transactional
-    @GET
-    @Path("/prices")
-    @Produces(MediaType.SERVER_SENT_EVENTS)
-    @SseElementType(MediaType.TEXT_PLAIN)
-    public Publisher<Double> prices() {
-        // get the next three prices from the price stream
-        return Multi.createFrom().publisher(prices)
-                .select().first(3)
-                .map(price -> {
-                    // store each price before we send them
-                    Price priceEntity = new Price();
-                    priceEntity.value = price;
-                    // here we are all in the same transaction
-                    // thanks to context propagation
-                    priceEntity.persist();
-                    return price;
-                    // the transaction is committed once the stream completes
-                });
-    }
-----
-
-Notice that thanks to Mutiny's support for context propagation, this works out of the box.
-The 3 items are persisted in the same transaction, and this transaction is committed when the stream completes.
-
-== Usage example for `CompletionStage`
-
-If you are using link:https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/CompletionStage.html[`CompletionStage`],
-you need manual context propagation. You can do that by injecting a `ThreadContext`
-or `ManagedExecutor` that will propagate every context.
For example, here we use the xref:vertx.adoc[Vert.x Web Client]
-to get the list of Star Wars people, then store them in the database using
-xref:hibernate-orm-panache.adoc[Hibernate ORM with Panache] (all in the same transaction) before returning
-them to the client as JSON using xref:rest-json.adoc[Jackson or JSON-B]:
-
-[source,java]
-----
-    @Inject ThreadContext threadContext;
-    @Inject ManagedExecutor managedExecutor;
-    @Inject Vertx vertx;
-
-    @Transactional
-    @GET
-    @Path("/people")
-    public CompletionStage<List<Person>> people() throws SystemException {
-        // Create a REST client to the Star Wars API
-        WebClient client = WebClient.create(vertx,
-                new WebClientOptions()
-                        .setDefaultHost("swapi.dev")
-                        .setDefaultPort(443)
-                        .setSsl(true));
-        // get the list of Star Wars people, with context capture
-        return threadContext.withContextCapture(client.get("/api/people/").send())
-                .thenApplyAsync(response -> {
-                    JsonObject json = response.bodyAsJsonObject();
-                    List<Person> persons = new ArrayList<>(json.getInteger("count"));
-                    // Store them in the DB
-                    // Note that we're still in the same transaction as the outer method
-                    for (Object element : json.getJsonArray("results")) {
-                        Person person = new Person();
-                        person.name = ((JsonObject) element).getString("name");
-                        person.persist();
-                        persons.add(person);
-                    }
-                    return persons;
-                }, managedExecutor);
-    }
-----
-
-Using `ThreadContext` or `ManagedExecutor` you can wrap most useful functional types and `CompletionStage`
-in order to get the context propagated.
-
-[NOTE]
-====
-The injected `ManagedExecutor` uses the Quarkus thread pool.
-====
-
-== Overriding which contexts are propagated
-
-By default, all available contexts are propagated. However, you can override this behaviour in several ways.
-
-=== Using configuration
-
-The following configuration properties allow you to specify the default sets of propagated contexts:
-
-[cols="1,1,1"]
-|===
-|Configuration Key|Description|Default Value
-
-|`mp.context.ThreadContext.propagated`
-|The comma-separated set of propagated contexts
-|`Remaining` (all non-explicitly listed contexts)
-
-|`mp.context.ThreadContext.cleared`
-|The comma-separated set of cleared contexts
-|`None` (no context), unless neither the propagated nor cleared sets contain `Remaining`, in which case the default is `Remaining` (all non-explicitly listed contexts)
-
-|`mp.context.ThreadContext.unchanged`
-|The comma-separated set of unchanged contexts
-|`None` (no context)
-|===
-
-The following contexts are available in Quarkus either out of the box, or depending on whether you include
-their extensions:
-
-[cols="1,1,1"]
-|===
-|Context Name|Name Constant|Description
-
-|`None`
-|https://javadoc.io/static/org.eclipse.microprofile.context-propagation/microprofile-context-propagation-api/1.2/org/eclipse/microprofile/context/ThreadContext.html#NONE[`ThreadContext.NONE`]
-|Can be used to specify an empty set of contexts, but setting the value to empty works too
-
-|`Remaining`
-|https://javadoc.io/static/org.eclipse.microprofile.context-propagation/microprofile-context-propagation-api/1.2/org/eclipse/microprofile/context/ThreadContext.html#ALL_REMAINING[`ThreadContext.ALL_REMAINING`]
-|All the contexts that are not explicitly listed in other sets
-
-|`Transaction`
-|https://javadoc.io/static/org.eclipse.microprofile.context-propagation/microprofile-context-propagation-api/1.2/org/eclipse/microprofile/context/ThreadContext.html#TRANSACTION[`ThreadContext.TRANSACTION`]
-|The JTA transaction context
-
-|`CDI`
-|https://javadoc.io/static/org.eclipse.microprofile.context-propagation/microprofile-context-propagation-api/1.2/org/eclipse/microprofile/context/ThreadContext.html#CDI[`ThreadContext.CDI`]
-|The CDI (ArC) context
-
-|`Servlet`
-|N/A
-|The
servlet context
-
-|`JAX-RS`
-|N/A
-|The RESTEasy Classic context
-
-|`Application`
-|https://javadoc.io/static/org.eclipse.microprofile.context-propagation/microprofile-context-propagation-api/1.2/org/eclipse/microprofile/context/ThreadContext.html#APPLICATION[`ThreadContext.APPLICATION`]
-|The current `ThreadContextClassLoader`
-|===
-
-=== Overriding the propagated contexts using annotations
-
-To override the automatic context propagation (such as the one Mutiny uses) in specific methods,
-you can use the https://javadoc.io/doc/io.smallrye/smallrye-context-propagation-api/latest/io/smallrye/context/api/CurrentThreadContext.html[`@CurrentThreadContext`]
-annotation:
-
-[source,java]
-----
-    // Get the prices stream
-    @Inject
-    @Channel("prices") Publisher<Double> prices;
-
-    @GET
-    @Path("/prices")
-    @Produces(MediaType.SERVER_SENT_EVENTS)
-    @SseElementType(MediaType.TEXT_PLAIN)
-    // Get rid of all context propagation, since we don't need it here
-    @CurrentThreadContext(unchanged = ThreadContext.ALL_REMAINING)
-    public Publisher<Double> prices() {
-        // get the next three prices from the price stream
-        return Multi.createFrom().publisher(prices)
-                .select().first(3);
-    }
-----
-
-=== Overriding the propagated contexts using CDI injection
-
-You can also inject a custom-built `ThreadContext` using the https://javadoc.io/doc/io.smallrye/smallrye-context-propagation-api/latest/io/smallrye/context/api/ThreadContextConfig.html[`@ThreadContextConfig`] annotation on your injection point:
-
-[source,java]
-----
-    // Get the prices stream
-    @Inject
-    @Channel("prices") Publisher<Double> prices;
-    // Get a ThreadContext that doesn't propagate context
-    @Inject
-    @ThreadContextConfig(unchanged = ThreadContext.ALL_REMAINING)
-    SmallRyeThreadContext threadContext;
-
-    @GET
-    @Path("/prices")
-    @Produces(MediaType.SERVER_SENT_EVENTS)
-    @SseElementType(MediaType.TEXT_PLAIN)
-    public Publisher<Double> prices() {
-        // Get rid of all context propagation, since we don't need it here
-        try(CleanAutoCloseable
ac = SmallRyeThreadContext.withThreadContext(threadContext)){
-            // get the next three prices from the price stream
-            return Multi.createFrom().publisher(prices)
-                    .select().first(3);
-        }
-    }
-----
-
-== Context Propagation for CDI
-
-In terms of CDI, `@RequestScoped`, `@ApplicationScoped` and `@Singleton` beans get propagated and are available in other threads.
-`@Dependent` beans as well as any custom scoped beans cannot be automatically propagated via CDI Context Propagation.
-
-
-`@ApplicationScoped` and `@Singleton` beans belong to scopes that are always active and as such are easy to deal with - context propagation tasks can work with those beans so long as the CDI container is running.
-However, `@RequestScoped` beans are a different story. They are only active for a short period of time, which can be bound either to an HTTP request or to some other request/task when manually activated/deactivated.
-In this case, the user must be aware that once the original thread reaches the end of the request, it will terminate the context, calling `@PreDestroy` on those beans and then clearing them from the context.
-Subsequent attempts to access those beans from other threads can result in unexpected behaviour.
-It is therefore recommended to make sure all tasks using request scoped beans via context propagation are performed in such a manner that they don't outlive the original request duration.
-
-
-[NOTE]
-====
-Due to the behavior described above, it is recommended to avoid using `@PreDestroy` on `@RequestScoped` beans when working with Context Propagation in CDI.
-====
diff --git a/_versions/2.7/guides/continuous-testing.adoc b/_versions/2.7/guides/continuous-testing.adoc
deleted file mode 100644
index 1e0c1cffd4f..00000000000
--- a/_versions/2.7/guides/continuous-testing.adoc
+++ /dev/null
@@ -1,164 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Continuous Testing
-
-include::./attributes.adoc[]
-
-:toc: macro
-:toclevels: 4
-:doctype: book
-:icons: font
-:docinfo1:
-
-:numbered:
-:sectnums:
-:sectnumlevels: 4
-
-Learn how to use continuous testing in your Quarkus application.
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-* The completed greeter application from the xref:getting-started.adoc[Getting Started Guide]
-
-== Introduction
-
-Quarkus supports continuous testing, where tests run immediately after code changes have been saved. This allows you to
-get instant feedback on your code changes. Quarkus detects which tests cover which code, and uses this information to
-only run the relevant tests when code is changed.
-
-== Solution
-
-Start the xref:getting-started.adoc[Getting Started] application (or any other application) using:
-
-include::includes/devtools/dev.adoc[]
-
-Quarkus will start in development mode as normal, but at the bottom of the screen you should see the following:
-
-[source]
-----
---
-Tests paused, press [r] to resume, [h] for more options>
-----
-
-Press `r` and the tests will start running. You should see the status change at the bottom of the screen as they
-are running, and it should finish with:
-
-[source]
-----
---
-Tests all passed, 2 tests were run, 0 were skipped. Tests took 1470ms.
-Press [r] to re-run, [v] to view full results, [p] to pause, [h] for more options> ----- - - -NOTE: If you want continuous testing to start automatically you can set `quarkus.test.continuous-testing=enabled` in -`application.properties`. If you don't want it at all you can change this to `disabled`. - - -Now you can start making changes to your application. Go into the `GreetingResource` and change the hello endpoint to -return `"hello world"`, and save the file. Quarkus should immediately re-run the test, and you should get output similar -to the following: - -[source] ----- -2021-05-11 14:21:34,338 ERROR [io.qua.test] (Test runner thread) Test GreetingResourceTest#testHelloEndpoint() failed -: java.lang.AssertionError: 1 expectation failed. -Response body doesn't match expectation. -Expected: is "hello" - Actual: hello world - - at io.restassured.internal.ValidatableResponseImpl.body(ValidatableResponseImpl.groovy) - at org.acme.getting.started.GreetingResourceTest.testHelloEndpoint(GreetingResourceTest.java:21) - - --- -Test run failed, 2 tests were run, 1 failed, 0 were skipped. Tests took 295ms -Press [r] to re-run, [v] to view full results, [p] to pause, [h] for more options> ----- - -Change it back and the tests will run again. - -== Controlling Continuous Testing - -There are various hotkeys you can use to control continuous testing. 
Pressing `h` will display the following list
-of commands:
-
-[source]
-----
-The following commands are available:
-[r] - Re-run all tests
-[f] - Re-run failed tests
-[b] - Toggle 'broken only' mode, where only failing tests are run (disabled)
-[v] - Print failures from the last test run
-[p] - Pause tests
-[o] - Toggle test output (disabled)
-[i] - Toggle instrumentation based reload (disabled)
-[l] - Toggle live reload (enabled)
-[s] - Force restart
-[h] - Display this help
-[q] - Quit
-----
-
-These are explained below:
-
-[r] - Re-run all tests::
-This will re-run every test.
-
-[f] - Re-run failed tests::
-This will re-run every failing test.
-
-[b] - Toggle 'broken only' mode, where only failing tests are run::
-Broken only mode will only run tests that have previously failed, even if other tests would normally be affected by a code
-change. This can be useful if you are modifying code that is used by lots of tests, but you only want to focus on debugging
-the failing one.
-
-[v] - Print failures from the last test run::
-Prints the failures to the console again; this can be useful if there has been lots of console output since the last run.
-
-[p] - Pause tests::
-Temporarily stops running tests. This can be useful if you are making lots of changes, and don't want feedback until they
-are all done.
-
-[o] - Toggle test output::
-By default test output is filtered and not displayed on the console, so that test output and dev mode output are not
-interleaved. Enabling test output will print output to the console when tests are run. Even when output is disabled,
-the filtered output is saved and can be viewed in the Dev UI.
-
-[i] - Toggle instrumentation based reload::
-This is not directly related to testing, but allows you to toggle instrumentation based reload. This allows live reload
-to avoid a restart if a change does not affect the structure of a class, which gives a faster reload and allows you to keep
-state.
-
-[l] - Toggle live reload::
-This is not directly related to testing, but allows you to turn live reload on and off.
-
-[s] - Force restart::
-This will force a scan for changed files, and will perform a live reload with any changes. Note that even if there are no
-changes the application will still restart. This will work even if live reload is disabled.
-
-== Continuous Testing Without Dev Mode
-
-It is possible to run continuous testing without starting dev mode. This can be useful if dev mode will interfere with
-your tests (e.g. running WireMock on the same port), or if you only want to develop using tests. To start continuous testing
-mode run `mvn quarkus:test`.
-
-NOTE: The Dev UI is not available when running in continuous testing mode, as it is provided by dev mode.
-
-== Multi Module Projects
-
-Note that continuous testing supports multi-module projects, so tests in modules other than the application can still
-be run when files are changed. The modules that are run can be controlled using the config listed below.
-
-This is enabled by default, and can be disabled via `quarkus.test.only-test-application-module=true`.
-
-== Configuring Continuous Testing
-
-Continuous testing supports multiple configuration options that can be used to limit the tests that are run, and
-to control the output.
The configuration properties are shown below: - -include::{generated-dir}/config/quarkus-test-dev-testing-test-config.adoc[opts=optional, leveloffset=+2] - diff --git a/_versions/2.7/guides/credentials-provider.adoc b/_versions/2.7/guides/credentials-provider.adoc deleted file mode 100644 index 7ee769c83ef..00000000000 --- a/_versions/2.7/guides/credentials-provider.adoc +++ /dev/null @@ -1,191 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using a Credentials Provider - -include::./attributes.adoc[] -:extension-status: preview - -Interacting with a datastore typically implies first connecting using credentials. -Those credentials will allow the client to be identified, authenticated and eventually authorized. -Username/password based authentication is very common, but that is not by any means the only one. -Such credentials information may appear in the application configuration, -but it is becoming increasingly popular to store this type of sensitive information in secure stores, -such as HashiCorp Vault, Azure Key Vault or the AWS Secrets Manager to name just a few. - -To bridge datastores that consume credentials, which can take different forms, -and secure stores that provide those credentials, -Quarkus introduces an intermediate abstraction called `Credentials Provider`, -that some extensions may support to consume credentials (e.g. `agroal`), -and some others may implement to produce credentials (e.g. `vault`). - -This Service Programming Interface (SPI) may also be used by implementers that want to support custom providers -not yet implemented in Quarkus (e.g. Azure Key Vault). 
-
-Currently, the `Credentials Provider` interface is implemented by the `vault` extension, and is supported
-by the following credentials consumer extensions:
-
-* `agroal`
-* `reactive-db2-client`
-* `reactive-mysql-client`
-* `reactive-mssql-client`
-* `reactive-oracle-client`
-* `reactive-pg-client`
-* `oidc`
-* `oidc-client`
-* `smallrye-reactive-messaging-rabbitmq`
-
-All extensions that rely on username/password authentication also allow setting those configuration
-properties in `application.properties` as an alternative. But the `Credentials Provider` is the only option
-if credentials are generated (e.g. `Vault Dynamic DB Credentials`) or if a custom credentials provider is required.
-
-This guide will show how to use the `Credentials Provider` provided by the `vault` extension,
-then we will look at implementing a custom `Credentials Provider`, and finally we will talk about additional
-considerations regarding implementing a `Credentials Provider` in a new extension.
-
-include::./status-include.adoc[]
-
-== Vault Credentials Provider
-
-To configure a `Vault Credentials Provider` you need to provide the following properties:
-
-[source, properties]
-----
-quarkus.vault.credentials-provider.<name>.<property>=<value>
-----
-
-The `<name>` will be used in the consumer to refer to this provider. The `<property>` and `<value>` fields are specific to the `Vault Credentials Provider`. For complete details, please refer to the {vault-datasource-guide}.
-
-For instance:
-
-[source, properties]
-----
-quarkus.vault.credentials-provider.mydatabase.kv-path=myapps/vault-quickstart/db
-----
-
-Once defined, the `mydatabase` provider can be used in any extension that supports the `Credentials Provider` interface.
For instance in `agroal`:
-
-[source, properties]
-----
-# configure your datasource
-quarkus.datasource.db-kind = postgresql
-quarkus.datasource.username = sarah
-quarkus.datasource.credentials-provider = mydatabase
-quarkus.datasource.jdbc.url = jdbc:postgresql://localhost:5432/mydatabase
-----
-
-Note that `quarkus.datasource.username` is the original `agroal` property, whereas the `password` property
-is not included because the value will come from the `mydatabase` credentials provider we just defined.
-An alternative is to define both the username and the password in Vault and drop the `quarkus.datasource.username`
-property from the configuration. All consuming extensions support the ability to fetch both the username
-and password from the provider, or just the password.
-
-== Custom Credentials Provider
-
-Implementing a custom credentials provider is the only option when a vault product is not yet supported in Quarkus, or if credentials need to be retrieved from a custom store.
-
-The only interface to implement is:
-
-[source, java]
-----
-public interface CredentialsProvider {
-
-    String USER_PROPERTY_NAME = "user";
-    String PASSWORD_PROPERTY_NAME = "password";
-
-    Map<String, String> getCredentials(String credentialsProviderName);
-
-}
-----
-
-`USER_PROPERTY_NAME` and `PASSWORD_PROPERTY_NAME` are standard properties that should be recognized by any consuming extension that supports username/password based authentication.
-
-It is required that implementations be valid `@ApplicationScoped` CDI beans.
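If you want to experiment with the `getCredentials` contract outside a Quarkus application, the interface is small enough to mirror in plain Java. Everything in the sketch below is invented for the illustration - the `EnvCredentialsProvider` name, the environment-variable naming scheme and the `dev` fallbacks - and the local interface is only a stand-in so the file compiles without Quarkus on the classpath; in a real application you implement `io.quarkus.credentials.CredentialsProvider` directly, as shown next.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical provider resolving credentials from environment variables,
// e.g. MYDB_USER / MYDB_PASSWORD for a provider named "mydb".
class EnvCredentialsProvider implements CredentialsProvider {

    @Override
    public Map<String, String> getCredentials(String credentialsProviderName) {
        String prefix = credentialsProviderName.toUpperCase().replace('-', '_');
        Map<String, String> props = new HashMap<>();
        props.put(USER_PROPERTY_NAME, envOrDefault(prefix + "_USER", "dev"));
        props.put(PASSWORD_PROPERTY_NAME, envOrDefault(prefix + "_PASSWORD", "dev"));
        return props;
    }

    private static String envOrDefault(String name, String fallback) {
        String value = System.getenv(name);
        return value != null ? value : fallback;
    }

    public static void main(String[] args) {
        Map<String, String> creds = new EnvCredentialsProvider().getCredentials("mydb");
        System.out.println("user set: " + (creds.get(CredentialsProvider.USER_PROPERTY_NAME) != null));
        System.out.println("password set: " + (creds.get(CredentialsProvider.PASSWORD_PROPERTY_NAME) != null));
    }
}

// Local stand-in mirroring io.quarkus.credentials.CredentialsProvider,
// so this sketch runs without Quarkus on the classpath.
interface CredentialsProvider {
    String USER_PROPERTY_NAME = "user";
    String PASSWORD_PROPERTY_NAME = "password";
    Map<String, String> getCredentials(String credentialsProviderName);
}
```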
-
-Here is a simple example:
-
-[source, java]
-----
-@ApplicationScoped
-@Unremovable
-public class MyCredentialsProvider implements CredentialsProvider {
-
-    @Override
-    public Map<String, String> getCredentials(String credentialsProviderName) {
-
-        Map<String, String> properties = new HashMap<>();
-        properties.put(USER_PROPERTY_NAME, "hibernate_orm_test");
-        properties.put(PASSWORD_PROPERTY_NAME, "hibernate_orm_test");
-        return properties;
-    }
-
-}
-----
-
-Note that we decided here to return both the username and the password.
-
-This provider may be used in a datasource definition like this:
-
-[source, properties]
-----
-quarkus.datasource.db-kind=postgresql
-quarkus.datasource.credentials-provider=custom
-quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5431/hibernate_orm_test
-----
-
-It is also possible to pass configuration properties to the provider using standard MicroProfile Config injection:
-
-[source, properties]
-----
-custom.foo=bar
-----
-
-And in the provider implementation:
-
-[source, java]
-----
-@Inject
-Config config;
-
-@Override
-public Map<String, String> getCredentials(String credentialsProviderName) {
-
-    System.out.println("MyCredentialsProvider called with foo=" + config.getValue(credentialsProviderName + ".foo", String.class));
-    ...
-----
-
-== New Credentials Provider extension
-
-When creating a custom credentials provider in a new extension, there are a few additional considerations.
-
-First, you need to name it to avoid collisions in case multiple credentials providers are available in the project:
-
-[source, java]
-----
-@ApplicationScoped
-@Unremovable
-@Named("my-credentials-provider")
-public class MyCredentialsProvider implements CredentialsProvider {
-----
-
-It is the responsibility of the consumer to allow a `credentials-provider-name` property:
-
-[source, properties]
-----
-quarkus.datasource.credentials-provider = custom
-quarkus.datasource.credentials-provider-name = my-credentials-provider
-----
-
-The extension should allow runtime config, such as the `CredentialsProviderConfig` from the `vault` extension,
-to configure any custom property in the provider. For an AWS Secrets Manager extension, this could be:
-
-* `region`
-* `credentials-type`
-* `secrets-id`
-
-Note also that some consumers such as `agroal` will add to their connection configuration any properties returned
-by the credentials provider, not just the username and password. So when you design a new credentials provider,
-limit the properties to what would be understood by consumers, or provide appropriate configuration options to
-support different modes.
diff --git a/_versions/2.7/guides/datasource.adoc b/_versions/2.7/guides/datasource.adoc
deleted file mode 100644
index 30a42ec5027..00000000000
--- a/_versions/2.7/guides/datasource.adoc
+++ /dev/null
@@ -1,706 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Datasources
-
-include::./attributes.adoc[]
-
-Many projects that use data require connections to a relational database.
-
-The usual way of obtaining connections to a database is to use a datasource and configure a JDBC driver.
-But you might also prefer using a reactive driver to connect to your database in a reactive way.
- -Quarkus has you covered either way: - -* For JDBC, the preferred datasource and connection pooling implementation is https://agroal.github.io/[Agroal]. -* For reactive, we use the https://vertx.io/[Vert.x] reactive drivers. - -Both are configured via unified and flexible configuration. - -[NOTE] -==== -Agroal is a modern, lightweight connection pool implementation designed for very high performance and scalability, -and features first-class integration with the other components in Quarkus, such as security, transaction management components, health, and metrics. -==== - -This guide will explain: - -* how to configure a datasource, or multiple datasources -* how to obtain a reference to those datasources in code -* which pool tuning configuration properties are available - -This guide is mainly about datasource configuration. -If you want more details about how to consume and make use of a reactive datasource, -please refer to the xref:reactive-sql-clients.adoc[Reactive SQL clients guide]. - -== TL;DR - -This is a quick introduction to datasource configuration. -If you want a better understanding of how all this works, this guide has a lot more information in the subsequent paragraphs. - -[[dev-services]] -=== Zero Config Setup (Dev Services) - -When testing or running in dev mode, Quarkus can even provide you with a zero config database out of the box, a feature -we refer to as Dev Services. Depending on your database type, you may need Docker installed in order to use this feature.
-Dev Services is supported for the following databases: - -* DB2 (container) (requires license acceptance) -* Derby (in-process) -* H2 (in-process) -* MariaDB (container) -* Microsoft SQL Server (container) (requires license acceptance) -* MySQL (container) -* Oracle Express Edition (container) -* PostgreSQL (container) - -If you want to use Dev Services, all you need to do is include the relevant extension for the type of database you want (either reactive or -JDBC, or both) and not configure a database URL, username, or password. Quarkus will provide the database, and you can just start -coding without worrying about config. - -If you are using a proprietary database such as DB2 or MSSQL, you will need to accept the license agreement. To do this, -create a `src/main/resources/container-license-acceptance.txt` file in your project and add a line with the image -name and tag of the database. By default, Quarkus uses the default image for the current version of Testcontainers; if -you attempt to start Quarkus, the resulting failure will tell you the exact image name in use for you to add to the -file. - -An example file is shown below: - -.src/main/resources/container-license-acceptance.txt ---- -ibmcom/db2:11.5.0.0a -mcr.microsoft.com/mssql/server:2017-CU12 ---- - -[NOTE] -==== -All services based on containers are run using Testcontainers, but Quarkus is not using the Testcontainers JDBC driver. - -Thus, even though extra JDBC URL properties can be set in your `application.properties` file, -specific properties supported by the Testcontainers JDBC driver such as `TC_INITSCRIPT`, `TC_INITFUNCTION`, `TC_DAEMON`, `TC_TMPFS` are not supported. - -Quarkus can support specific properties sent to the container itself though and, -typically, this is the case for `TC_MY_CNF` which allows overriding the MariaDB/MySQL configuration file.
- -Overriding the MariaDB/MySQL configuration would be done as follows: - -[source,properties] ---- -quarkus.datasource.devservices.container-properties.TC_MY_CNF=testcontainers/mysql-conf ---- - -This support is database-specific and needs to be implemented in each dev service individually. -==== - -=== JDBC datasource - -Add the `agroal` extension plus one of `jdbc-db2`, `jdbc-derby`, `jdbc-h2`, `jdbc-mariadb`, `jdbc-mssql`, `jdbc-mysql`, `jdbc-oracle`, or `jdbc-postgresql`. - -Then configure your datasource: - -[source, properties] ---- -quarkus.datasource.db-kind=postgresql <1> -quarkus.datasource.username= -quarkus.datasource.password= - -quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test -quarkus.datasource.jdbc.max-size=16 ---- -<1> If you only have a single JDBC extension, or you are running tests and only have a single test-scoped JDBC extension installed, then this is -optional. If there is only one possible extension, we assume it is the correct one; if a driver has been added with test scope, then -we assume it should be used in testing. - -=== Reactive datasource - -Add the correct reactive extension for the database you are using: -`reactive-db2-client`, `reactive-mssql-client`, `reactive-mysql-client`, `reactive-oracle-client`, or `reactive-pg-client`. - -Then configure your reactive datasource: - -[source, properties] ---- -quarkus.datasource.db-kind=postgresql <1> -quarkus.datasource.username= -quarkus.datasource.password= - -quarkus.datasource.reactive.url=postgresql:///your_database -quarkus.datasource.reactive.max-size=20 ---- -<1> As specified above, this is optional. - -== Default datasource - -A datasource can be either a JDBC datasource, a reactive one, or both. -It all depends on how you configure it and which extensions you added to your project.
- -To define a datasource, start with the following (note that this is only required if you have more than one -database type installed): - -[source, properties] ---- -quarkus.datasource.db-kind=h2 ---- - -The database kind defines which type of database you will connect to. - -We currently include these built-in database kinds: - -* DB2: `db2` -* Derby: `derby` -* H2: `h2` -* MariaDB: `mariadb` -* Microsoft SQL Server: `mssql` -* MySQL: `mysql` -* Oracle: `oracle` -* PostgreSQL: `postgresql`, `pgsql` or `pg` - -Giving Quarkus the database kind you are targeting will facilitate configuration. -By using a JDBC driver extension and setting the kind in the configuration, -Quarkus resolves the JDBC driver automatically, -so you don't need to configure it yourself. -If you want to use a database kind that is not part of the built-in ones, use `other` and define the JDBC driver explicitly. - -[NOTE] -==== -You can use any JDBC driver in a Quarkus app in JVM mode (see <<other-databases>>). -It is unlikely that it will work when compiling your application to a native executable though. - -If you plan to make a native executable, we recommend you use the existing JDBC Quarkus extensions (or contribute one for your driver). -==== - -There is a good chance you will need to define some credentials to access your database. - -This is done by configuring the following properties: - -[source, properties] ---- -quarkus.datasource.username= -quarkus.datasource.password= ---- - -You can also retrieve the password from Vault by link:{vault-datasource-guide}[using a credential provider] for your datasource. - -Once you have defined the database kind and the credentials, you are ready to configure either a JDBC datasource, a reactive one, or both. - -=== JDBC datasource - -JDBC is the most common database connection pattern. -You typically need a JDBC datasource when using Hibernate ORM.
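The automatic driver resolution described above is essentially a lookup from database kind to driver class. The following sketch illustrates the idea; it is a hypothetical helper, not the actual Quarkus implementation, and it only covers a subset of the kinds listed in the driver table later in this guide:

```java
import java.util.Map;

public class DriverResolutionSketch {

    // Simplified db-kind -> JDBC driver mapping (subset, for illustration).
    static final Map<String, String> DRIVERS = Map.of(
            "h2", "org.h2.Driver",
            "mariadb", "org.mariadb.jdbc.Driver",
            "mysql", "com.mysql.cj.jdbc.Driver",
            "postgresql", "org.postgresql.Driver");

    // Built-in kinds resolve automatically; anything else would require
    // db-kind=other plus an explicit quarkus.datasource.jdbc.driver setting.
    static String resolveDriver(String dbKind) {
        String driver = DRIVERS.get(dbKind);
        if (driver == null) {
            throw new IllegalArgumentException(
                    "No built-in driver for kind '" + dbKind
                            + "'; use db-kind=other and set quarkus.datasource.jdbc.driver");
        }
        return driver;
    }
}
```

This is why setting `db-kind` alongside a driver extension is usually all the driver configuration you need.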
- -==== Install the Maven dependencies - -First, you will need to add the `quarkus-agroal` dependency to your project. - -You can add it using a simple Maven command: - -[source,bash] ---- -./mvnw quarkus:add-extension -Dextensions="agroal" ---- - -[TIP] -==== -Agroal comes as a transitive dependency of the Hibernate ORM extension, so if you are using Hibernate ORM, -you don't need to add the Agroal extension dependency explicitly. -==== - -You will also need to choose, and add, the Quarkus extension for your relational database driver. - -Quarkus provides driver extensions for: - -* DB2 - `jdbc-db2` -* Derby - `jdbc-derby` -* H2 - `jdbc-h2` -* MariaDB - `jdbc-mariadb` -* Microsoft SQL Server - `jdbc-mssql` -* MySQL - `jdbc-mysql` -* Oracle - `jdbc-oracle` -* PostgreSQL - `jdbc-postgresql` - -See <<other-databases>> if you want to use a JDBC driver for another database. - -[NOTE] -==== -The H2 and Derby databases can normally be configured to run in "embedded mode"; -the extension does not support compiling the embedded database engine into native executables. - -Read <<in-memory-databases>> (below) for suggestions regarding integration testing. -==== - -As usual, you can install the extension using `add-extension`. - -To install the PostgreSQL driver dependency for instance, run the following command: - -[source,bash] ---- -./mvnw quarkus:add-extension -Dextensions="jdbc-postgresql" ---- - -==== Configure the JDBC connection - -Configuring your JDBC connection is easy; the only mandatory property is the JDBC URL. - -[source, properties] ---- -quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test ---- - -[NOTE] -==== -Note the `jdbc` prefix in the property name. -All the configuration properties specific to JDBC have this prefix. -==== - -[TIP] -==== -For more information about the JDBC URL format, please refer to the <<jdbc-url>>.
-==== - -When using one of the built-in datasource kinds, the JDBC driver is resolved automatically to the following values: - -.Database kind to JDBC driver mapping -|=== -|Database kind |JDBC driver |XA driver - -|`db2` -|`com.ibm.db2.jcc.DB2Driver` -|`com.ibm.db2.jcc.DB2XADataSource` - -|`derby` -|`org.apache.derby.jdbc.ClientDriver` -|`org.apache.derby.jdbc.ClientXADataSource` - -|`h2` -|`org.h2.Driver` -|`org.h2.jdbcx.JdbcDataSource` - -|`mariadb` -|`org.mariadb.jdbc.Driver` -|`org.mariadb.jdbc.MySQLDataSource` - -|`mssql` -|`com.microsoft.sqlserver.jdbc.SQLServerDriver` -|`com.microsoft.sqlserver.jdbc.SQLServerXADataSource` - -|`mysql` -|`com.mysql.cj.jdbc.Driver` -|`com.mysql.cj.jdbc.MysqlXADataSource` - -|`oracle` -|`oracle.jdbc.driver.OracleDriver` -|`oracle.jdbc.xa.client.OracleXADataSource` - -|`postgresql` -|`org.postgresql.Driver` -|`org.postgresql.xa.PGXADataSource` -|=== - -[TIP] -==== -As previously stated, most of the time, this automatic resolution will suit you and -you won't need to configure the driver. -==== - -[[other-databases]] -==== Use a database with no built-in extension or with a different driver - -You can use a specific driver if you need to (for instance for using the OpenTracing driver) -or if you want to use a database for which Quarkus does not have a built-in JDBC driver extension. - -Without an extension, the driver will work fine in any Quarkus app running in JVM mode. -It is unlikely that it will work when compiling your application to a native executable though. -If you plan to make a native executable, we recommend you use the existing JDBC Quarkus extensions (or contribute one for your driver).
- -Here is how you would use the OpenTracing driver: - -[source, properties] ---- -quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver ---- - -Here is how you would define access to a database with no built-in support (in JVM mode): - -[source, properties] ---- -quarkus.datasource.db-kind=other -quarkus.datasource.jdbc.driver=oracle.jdbc.driver.OracleDriver -quarkus.datasource.jdbc.url=jdbc:oracle:thin:@192.168.1.12:1521/ORCL_SVC -quarkus.datasource.username=scott -quarkus.datasource.password=tiger ---- - -==== More configuration - -You can configure a lot more things, for instance, the size of the connection pool. - -Please refer to the <<jdbc-configuration>> for all the details about the JDBC configuration knobs. - -==== Consuming the datasource - -If you are using Hibernate ORM, the datasource will be consumed automatically. - -If, for whatever reason, access to the datasource is needed in code, it can be obtained as any other bean in the following manner: - -[source,java] ---- -@Inject -AgroalDataSource defaultDataSource; ---- - -In the above example, the type is `AgroalDataSource`, which is a subtype of `javax.sql.DataSource`. -Because of this, you can also use `javax.sql.DataSource` as the injected type. - -=== Reactive datasource - -If you prefer using a reactive datasource, Quarkus offers DB2, MariaDB/MySQL, Microsoft SQL Server, Oracle and PostgreSQL reactive clients. - -==== Install the Maven dependencies - -Depending on which database you want to use, add the corresponding extension: - -* DB2: `quarkus-reactive-db2-client` -* MariaDB/MySQL: `quarkus-reactive-mysql-client` -* Microsoft SQL Server: `quarkus-reactive-mssql-client` -* Oracle: `quarkus-reactive-oracle-client` -* PostgreSQL: `quarkus-reactive-pg-client` - -The installed extension must be consistent with the `quarkus.datasource.db-kind` you define in your datasource configuration.
- -==== Configure the reactive datasource - -Once the driver is there, you just need to configure the connection URL. - -You should also define a proper size for your connection pool; this is optional but highly recommended. - -[source,properties] ---- -quarkus.datasource.reactive.url=postgresql:///your_database -quarkus.datasource.reactive.max-size=20 ---- - -=== JDBC and reactive datasources simultaneously - -By default, if you include both a JDBC extension (+ Agroal) and a reactive datasource extension handling the given database kind, -both will be created. - -If you want to disable the JDBC datasource explicitly, use: - -[source, properties] ---- -quarkus.datasource.jdbc=false ---- - -If you want to disable the reactive datasource explicitly, use: - -[source, properties] ---- -quarkus.datasource.reactive=false ---- - -[TIP] -==== -Most of the time, the configuration above won't be necessary, as usually either a JDBC driver or a reactive datasource extension will be present, not both. -==== - -== Multiple Datasources - -=== Configuring Multiple Datasources - -For now, multiple datasources are only supported for JDBC and the Agroal extension. -So it is not currently possible to create multiple reactive datasources. - -[NOTE] -==== -The Hibernate ORM extension xref:hibernate-orm.adoc#multiple-persistence-units[supports defining several persistence units using configuration properties]. -For each persistence unit, you can point to the datasource of your choice. -==== - -Defining multiple datasources works exactly the same way as defining a single datasource, with one important change: -you define a name. - -In the following example, you have 3 different datasources: - -* The default one, -* A datasource named `users`, -* A datasource named `inventory`, - -each with its own configuration.
- -[source,properties] ---- -quarkus.datasource.db-kind=h2 -quarkus.datasource.username=username-default -quarkus.datasource.jdbc.url=jdbc:h2:mem:default -quarkus.datasource.jdbc.max-size=13 - -quarkus.datasource.users.db-kind=h2 -quarkus.datasource.users.username=username1 -quarkus.datasource.users.jdbc.url=jdbc:h2:mem:users -quarkus.datasource.users.jdbc.max-size=11 - -quarkus.datasource.inventory.db-kind=h2 -quarkus.datasource.inventory.username=username2 -quarkus.datasource.inventory.jdbc.url=jdbc:h2:mem:inventory -quarkus.datasource.inventory.jdbc.max-size=12 ---- - -Notice there is an extra bit in the key. -The syntax is as follows: `quarkus.datasource.[optional name.][datasource property]`. - -NOTE: Even when only one database extension is installed, named databases need to specify at least one build-time -property so that Quarkus knows they exist. Generally, this will be the `db-kind` property, although you can also -specify Dev Services properties to create named datasources (covered later in this guide). - -=== Named Datasource Injection - -When using multiple datasources, each `DataSource` also has the `io.quarkus.agroal.DataSource` qualifier with the name of the datasource as the value. -Using the above properties to configure three different datasources, you can also inject each one as follows: - -[source,java,indent=0] ---- -@Inject -AgroalDataSource defaultDataSource; - -@Inject -@DataSource("users") -AgroalDataSource usersDataSource; - -@Inject -@DataSource("inventory") -AgroalDataSource inventoryDataSource; ---- - -== Datasource Health Check - -If you are using the `quarkus-smallrye-health` extension, the `quarkus-agroal` and reactive client extensions will automatically add a readiness health check -to validate the datasource. - -When you access the `/q/health/ready` endpoint of your application, you will have information about the datasource validation status.
-If you have multiple datasources, all datasources will be checked and the status will be `DOWN` as soon as there is one datasource validation failure. - -This behavior can be disabled via the property `quarkus.datasource.health.enabled`. - -== Datasource Metrics - -If you are using the `quarkus-micrometer` or `quarkus-smallrye-metrics` extension, `quarkus-agroal` can expose some data source metrics on the -`/q/metrics` endpoint. This can be turned on by setting the property `quarkus.datasource.metrics.enabled` to `true`. - -For the exposed metrics to contain any actual values, it is necessary that metric collection is enabled internally -by Agroal mechanisms. By default, this metric collection mechanism gets turned on for all data sources if a metrics extension -is present and metrics for the Agroal extension are enabled. If you want to disable metrics for a particular data source, -this can be done by setting `quarkus.datasource.jdbc.enable-metrics` to `false` (or `quarkus.datasource.<datasource-name>.jdbc.enable-metrics` for a named datasource). -This disables collecting the metrics as well as exposing them in the `/q/metrics` endpoint, -because it does not make sense to expose metrics if the mechanism to collect them is disabled. - -Conversely, setting `quarkus.datasource.jdbc.enable-metrics` to `true` (or `quarkus.datasource.<datasource-name>.jdbc.enable-metrics` for a named datasource) explicitly can be used to enable collection of metrics even if -a metrics extension is not in use. -This can be useful if you need to access the collected metrics programmatically. -They are available after calling `dataSource.getMetrics()` on an injected `AgroalDataSource` instance. If collection of metrics is disabled -for this data source, all values will be zero. - -== Narayana Transaction Manager integration - -If the Narayana JTA extension is also available, integration is automatic. - -You can override this by setting the `transactions` configuration property - see the <<jdbc-configuration>> below.
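The `quarkus.datasource.[optional name.][datasource property]` key syntax described earlier can be illustrated with a small sketch. This is a hypothetical helper for illustration only, not Quarkus code; the set of recognized top-level properties below is a subset chosen for the example:

```java
import java.util.Optional;
import java.util.Set;

public class DatasourceKeySketch {

    // Top-level segments that belong to the default datasource rather than
    // being a datasource name (illustrative subset).
    static final Set<String> KNOWN_PROPS = Set.of(
            "db-kind", "username", "password", "credentials-provider",
            "jdbc", "reactive", "health", "metrics", "devservices");

    // "quarkus.datasource.users.jdbc.url" -> Optional.of("users")
    // "quarkus.datasource.jdbc.url"       -> Optional.empty() (default datasource)
    static Optional<String> datasourceName(String key) {
        String rest = key.substring("quarkus.datasource.".length());
        String first = rest.split("\\.", 2)[0];
        return KNOWN_PROPS.contains(first) ? Optional.empty() : Optional.of(first);
    }
}
```

This is the shape of the distinction between `quarkus.datasource.jdbc.max-size` (default datasource) and `quarkus.datasource.users.jdbc.max-size` (named datasource `users`).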
- -== Dev Services (Configuration Free Databases) - -As mentioned above, Quarkus supports a feature called Dev Services that allows you to create datasources without any config. If -you have a database extension that supports it and no config is provided, Quarkus will automatically start a database (either -using Testcontainers, or by starting a Java DB in process), and automatically configure a connection to this database. - -Production databases need to be configured as normal, so if you want to include a production database config in your -`application.properties` and continue to use Dev Services, we recommend that you use the `%prod.` profile to define your -database settings. - -=== Configuring Dev Services - -Dev Services supports the following config options: - -include::{generated-dir}/config/quarkus-datasource-config-group-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1] - -=== Named Datasources - -When using Dev Services, the default datasource will always be created, but to specify a named datasource you need to have -at least one build-time property so Quarkus knows to create the datasource. In general, you will either specify -the `db-kind` property, or explicitly enable DevDb: `quarkus.datasource."name".devservices.enabled=true`. - -[[in-memory-databases]] -== Testing with in-memory databases - -Some databases like H2 and Derby are commonly used in "embedded mode" as a facility to run quick integration tests. - -Our suggestion is to use the real database you intend to use in production; container technologies made this simple enough so you no longer have an excuse. Still, there are sometimes -good reasons to also want the ability to run quick integration tests using the JVM-powered databases, -so this is possible as well.
- -It is important to remember that when configuring H2 (or Derby) to use the embedded engine, -this will work as usual in JVM mode, but such an application will not compile into a native executable, as the Quarkus extensions only cover making the JDBC client code compatible with the native compilation step: embedding the whole database engine into a native executable is currently not implemented. - -If you plan to run such integration tests in the JVM exclusively, it will of course work as usual. - -If you want the ability to run such integration tests in both JVM mode and native executables, we have some cool helpers for you: just add either `@QuarkusTestResource(H2DatabaseTestResource.class)` or `@QuarkusTestResource(DerbyDatabaseTestResource.class)` on any class in your integration tests; this will make sure the test suite starts (and stops) the embedded database in a separate process as necessary to run your tests. - -These additional helpers are provided by the artifacts with Maven coordinates `io.quarkus:quarkus-test-h2` and `io.quarkus:quarkus-test-derby`, respectively for H2 and Derby. - -Here is an example for H2: - -[source,java] ---- -package my.app.integrationtests.db; - -import io.quarkus.test.common.QuarkusTestResource; -import io.quarkus.test.h2.H2DatabaseTestResource; - -@QuarkusTestResource(H2DatabaseTestResource.class) -public class TestResources { -} ---- - -This will allow you to test your application even when it's compiled into a native executable, -while the database will run in the JVM as usual.
- -Connect to it using: - -[source,properties] ---- -quarkus.datasource.db-kind=h2 -quarkus.datasource.jdbc.url=jdbc:h2:mem:test ---- - -[[configuration-reference]] -== Common Datasource Configuration Reference - -include::{generated-dir}/config/quarkus-datasource.adoc[opts=optional, leveloffset=+1] - -[[jdbc-configuration]] -== JDBC Configuration Reference - -include::{generated-dir}/config/quarkus-agroal.adoc[opts=optional, leveloffset=+1] - -[[jdbc-url]] -== JDBC URL Reference - -Each of the supported databases has different JDBC URL configuration options. -Going into each of those options is beyond the scope of this document, -but the following section gives an overview of each database URL and a link to the official documentation. - -=== DB2 - -`jdbc:db2://<serverName>[:<portNumber>]/<databaseName>[:<key>=<value>;[<key>=<value>;]]` - -Example:: `jdbc:db2://localhost:50000/MYDB:user=dbadm;password=dbadm;` - -See the https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.java.doc/src/tpc/imjcc_r0052342.html[official documentation] for more detail on URL syntax and additional supported options. - -=== Derby - -`jdbc:derby:[//serverName[:portNumber]/][memory:]databaseName[;property=value[;property=value]]` - -Example:: `jdbc:derby://localhost:1527/myDB`, `jdbc:derby:memory:myDB;create=true` - -Derby is an embedded database. -It can run as a server, based on a file, or live completely in memory. -All of these options are available as listed above. -You can find more information at the https://db.apache.org/derby/docs/10.8/devguide/cdevdvlp17453.html#cdevdvlp17453[official documentation]. - -=== H2 - -`jdbc:h2:{ {.|mem:}[name] | [file:]fileName | {tcp|ssl}:[//]server[:port][,server2[:port]]/name }[;key=value...]` - -Example:: `jdbc:h2:tcp://localhost/~/test`, `jdbc:h2:mem:myDB` - -H2 is an embedded database. -It can run as a server, based on a file, or live completely in memory. -All of these options are available as listed above.
-You can find more information at the https://h2database.com/html/features.html?highlight=url&search=url#database_url[official documentation]. - -=== MariaDB - -`jdbc:mariadb:[replication:|failover:|sequential:|aurora:]//<hostDescription>[,<hostDescription>...]/[database][?<key1>=<value1>[&<key2>=<value2>]]` -hostDescription:: `<host>[:<portnumber>] or address=(host=<host>)[(port=<portnumber>)][(type=(master|slave))]` - -Example:: `jdbc:mariadb://localhost:3306/test` - -You can find more information about this feature and others detailed in the https://mariadb.com/kb/en/library/about-mariadb-connector-j/[official documentation]. - -=== Microsoft SQL Server - -`jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value[;property=value]]` - -Example:: `jdbc:sqlserver://localhost:1433;databaseName=AdventureWorks` - -The Microsoft SQL Server JDBC driver works essentially the same as the others. -More details can be found in the https://docs.microsoft.com/en-us/sql/connect/jdbc/connecting-to-sql-server-with-the-jdbc-driver?view=sql-server-2017[official documentation]. - -=== MySQL - -`jdbc:mysql:[replication:|failover:|sequential:|aurora:]//<hostDescription>[,<hostDescription>...]/[database][?<key1>=<value1>[&<key2>=<value2>]]` -hostDescription:: `<host>[:<portnumber>] or address=(host=<host>)[(port=<portnumber>)][(type=(master|slave))]` - -Example:: `jdbc:mysql://localhost:3306/test` - -You can find more information about this feature and others detailed in the https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference.html[official documentation]. - -==== MySQL Limitations - -When compiling a Quarkus application to native-image, the MySQL support for JMX and Oracle Cloud Infrastructure (OCI) integrations is disabled, as these are not compatible -with GraalVM native images. -The lack of JMX support is a natural consequence of running in native and is unlikely to be resolved. -The integration with OCI could be resolved; if you need it, we suggest opening a support request with the MySQL Connector/J maintainers.
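As a rough illustration of the URL shapes in this section, here is a hypothetical helper that composes a minimal single-host MySQL URL. It is for illustration only; real URLs additionally support host lists, failover prefixes, and key/value parameters, as shown above:

```java
public class JdbcUrlSketch {

    // Composes a minimal single-host jdbc:mysql:// URL.
    // Hypothetical helper; real-world URLs support many more options.
    static String mysqlUrl(String host, int port, String database) {
        return "jdbc:mysql://" + host + ":" + port + "/" + database;
    }
}
```

For instance, `mysqlUrl("localhost", 3306, "test")` yields the example URL shown above.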
- -=== Oracle - -`jdbc:oracle:driver_type:@database_specifier` - -Example:: `jdbc:oracle:thin:@localhost:1521/ORCL_SVC` - -More details can be found in the https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdbc/data-sources-and-URLs.html#GUID-AEA8E228-1B21-4111-AF4C-B1F33744CA08[official documentation]. - -=== PostgreSQL - -PostgreSQL only runs as a server, as do the rest of the databases below. -As such, you must specify connection details, or use the defaults. - -`jdbc:postgresql:[//][host][:port][/database][?key=value...]` - -Example:: `jdbc:postgresql://localhost/test` - -Defaults for the different parts are as follows: - -`host`:: localhost -`port`:: 5432 -`database`:: same name as the username - -The https://jdbc.postgresql.org/documentation/head/connect.html[official documentation] goes into more detail and lists optional parameters as well. - -:no-duration-note: true - -[[reactive-configuration]] -== Reactive Datasource Configuration Reference - -include::{generated-dir}/config/quarkus-reactive-datasource.adoc[opts=optional, leveloffset=+1] - -=== Reactive DB2 Configuration - -include::{generated-dir}/config/quarkus-reactive-db2-client.adoc[opts=optional, leveloffset=+1] - -=== Reactive MariaDB/MySQL Specific Configuration - -include::{generated-dir}/config/quarkus-reactive-mysql-client.adoc[opts=optional, leveloffset=+1] - -=== Reactive Microsoft SQL Server Specific Configuration - -include::{generated-dir}/config/quarkus-reactive-mssql-client.adoc[opts=optional, leveloffset=+1] - -=== Reactive Oracle Specific Configuration - -include::{generated-dir}/config/quarkus-reactive-oracle-client.adoc[opts=optional, leveloffset=+1] - -=== Reactive PostgreSQL Specific Configuration - -include::{generated-dir}/config/quarkus-reactive-pg-client.adoc[opts=optional, leveloffset=+1] diff --git a/_versions/2.7/guides/deploying-to-azure-cloud.adoc b/_versions/2.7/guides/deploying-to-azure-cloud.adoc deleted file mode 100644 index 174fe868afd..00000000000 ---
a/_versions/2.7/guides/deploying-to-azure-cloud.adoc +++ /dev/null @@ -1,175 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Deploying to Microsoft Azure Cloud - -include::./attributes.adoc[] - -This guide covers: - -* Update Quarkus HTTP Port -* Install the Azure CLI -* Create an Azure Registry Service instance and upload the Docker image -* Deploy the Docker image to Azure Container Instances -* Deploy the Docker image to Azure Kubernetes Service -* Deploy the Docker image to Azure App Service for Linux Containers - -== Prerequisites - -:prerequisites-time: 2 hours for all modalities -:prerequisites-no-graalvm: -include::includes/devtools/prerequisites.adoc[] -* Having access to an Azure subscription. https://azure.microsoft.com/free/?WT.mc_id=opensource-quarkus-brborges[Get a free one here] - -This guide will take as input a native application developed in the xref:building-native-image.adoc[building native image guide]. - -Make sure you have the getting-started application at hand, or clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. The solution is located in the `getting-started` directory. - -== Change Quarkus HTTP Port - -If you correctly followed the xref:building-native-image.adoc[building native image guide], you should have a local container image named `quarkus-quickstart/getting-started`. - -While Quarkus by default runs on port 8080, most Azure services expect web applications to be running on port 80. Before we continue, go back to your quickstart code and open the file `src/main/docker/Dockerfile.native`. 
- -Change the last two commands in the `Dockerfile.native` file and make it read like this: - -[source,docker] ---- -EXPOSE 80 -CMD ["./application", "-Dquarkus.http.host=0.0.0.0", "-Dquarkus.http.port=80"] ---- - -Now you can rebuild the Docker image: - -[source,shell] ---- -$ docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . ---- - -To test, run it by mapping container port 80 to port 8080 on your host: - -[source,shell] ---- -$ docker run -i --rm -p 8080:80 quarkus-quickstart/getting-started ---- - -Your container image is now ready to run on Azure. Remember, the Quarkus application is mapped to run on port 80. - -== Install the Azure CLI - -To ease the user experience throughout this guide, it is better to have the Azure CLI installed and authenticated. - -Visit the https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest?WT.mc_id=opensource-quarkus-brborges[Azure CLI] installation page for instructions specific to your operating system. - -Once installed, ensure you are authenticated: - -[source,shell] ---- -$ az login ---- - -== Create an Azure Container Registry instance - -It is possible to deploy images hosted on Docker Hub, but this location by default leaves images accessible to anyone. To better protect your container images, this guide shows how to host your images on a private instance of the Azure Container Registry service. - -First, create an Azure Resource Group: - -[source,shell] ---- -$ az group create --name <resource-group> --location eastus ---- - -Then you can create the ACR: - -[source,shell] ---- -$ az acr create --resource-group <resource-group> --name <registry-name> --sku Basic --admin-enabled true ---- - -Finally, authenticate your local Docker installation with this container registry by running: - -[source,shell] ---- -$ az acr login --name <registry-name> ---- - -== Upload Container Image on Azure - -If you've followed the build native image guide, you should have a local container image named `quarkus-quickstart/getting-started`.
- -To upload this image to your ACR, you must tag and push the image under the ACR login server. To find the login server of the Azure Container Registry, run this command: - -[source,shell] ---- -$ az acr show -n <registry-name> --query loginServer ---- - -To upload, now do: - -[source,shell] ---- -$ docker tag quarkus-quickstart/getting-started <login-server>/quarkus-quickstart/getting-started -$ docker push <login-server>/quarkus-quickstart/getting-started ---- - -At this point, you should have your Quarkus container image on your Azure Container Registry. To verify, run the following command: - -[source,shell] ---- -$ az acr repository list -n <registry-name> ---- - -== Deploy to Azure Container Instances - -The simplest way to start this container in the cloud is with the Azure Container Instances service. It simply creates a container on Azure infrastructure. - -There are different approaches for using ACI. Check the documentation for details. The quickest way to get a container up and running goes as follows. - -The first step is to find the username and password for the admin, so that ACI can authenticate into ACR and pull the Docker image: - -[source,shell] ---- -$ az acr credential show --name <registry-name> ---- - -Now create the Docker instance on ACI pointing to your image on ACR: - -[source,shell] ---- -$ az container create \ - --name quarkus-hello \ - --resource-group <resource-group> \ - --image <login-server>/quarkus-quickstart/getting-started \ - --registry-login-server <login-server> \ - --registry-username <username> \ - --registry-password <password> \ - --dns-name-label quarkus-hello-<unique-suffix> \ - --query ipAddress.fqdn ---- - -The command above, if run successfully, will give you the address of your container in the Cloud. Access your Quarkus application at the address displayed as output. - -For more information and details on ACR authentication and the use of service principals, follow the guide below and remember the Azure Container Registry `loginServer` and the image name of your Quarkus application now hosted on the ACR.
-
-https://docs.microsoft.com/en-us/azure/container-instances/container-instances-using-azure-container-registry?WT.mc_id=opensource-quarkus-brborges[Deploy to Azure Container Instances from Azure Container Registry]
-
-Keep in mind that this service does not provide scalability. A container instance is unique and does not scale.
-
-== Deploy to Azure Kubernetes Service
-
-You can also deploy the container image as a microservice in a Kubernetes cluster on Azure. To do that, follow this tutorial:
-
-https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster?WT.mc_id=opensource-quarkus-brborges[Tutorial: Deploy an Azure Kubernetes Service (AKS) cluster]
-
-Once deployed, the application will be running on whatever port is used to expose the service. By default, Quarkus apps run on port 8080 internally.
-
-== Deploy to Azure App Service on Linux Containers
-
-This service provides out-of-the-box scalability for web applications. If more instances are required, it automatically provides load balancing, plus monitoring, metrics, logging and so on.
- -To deploy your Quarkus Native container image to this service, follow this tutorial: - -https://docs.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image?WT.mc_id=opensource-quarkus-brborges[Tutorial: Build a custom image and run in App Service from a private registry] - diff --git a/_versions/2.7/guides/deploying-to-google-cloud.adoc b/_versions/2.7/guides/deploying-to-google-cloud.adoc deleted file mode 100644 index fd0a9529451..00000000000 --- a/_versions/2.7/guides/deploying-to-google-cloud.adoc +++ /dev/null @@ -1,294 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Deploying to Google Cloud Platform (GCP) - -include::./attributes.adoc[] - -This guide covers: - -* Login to Google Cloud -* Deploying a function to Google Cloud Functions -* Deploying a JAR to Google App Engine Standard -* Deploying a Docker image to Google App Engine Flexible Custom Runtimes -* Deploying a Docker image to Google Cloud Run -* Using Cloud SQL - -== Prerequisites - -:prerequisites-time: 1 hour for all modalities -:prerequisites-no-graalvm: -include::includes/devtools/prerequisites.adoc[] -* https://cloud.google.com/[A Google Cloud Account]. Free accounts work. -* https://cloud.google.com/sdk[Cloud SDK CLI Installed] - -This guide will take as input an application developed in the xref:getting-started.adoc[Getting Started guide]. - -Make sure you have the getting-started application at hand, or clone the Git repository: `git clone {quickstarts-clone-url}`, -or download an {quickstarts-archive-url}[archive]. The solution is located in the `getting-started` directory. 
-
-== Login to Google Cloud
-
-Logging in to Google Cloud is necessary for deploying the application, and it can be done as follows:
-
-[source, subs=attributes+]
-----
-gcloud auth login
-----
-
-== Deploying to Google Cloud Functions
-
-Quarkus supports deploying your application to Google Cloud Functions via the following extensions:
-
-- xref:gcp-functions.adoc[Google Cloud Functions]: Build functions using the Google Cloud Functions API.
-- xref:gcp-functions-http.adoc[Google Cloud Functions HTTP binding]: Build functions using Quarkus HTTP APIs: RESTEasy (JAX-RS),
-Undertow (Servlet), Vert.x Web, or xref:funqy-http.adoc[Funqy HTTP].
-- xref:funqy-gcp-functions.adoc[Funqy Google Cloud Functions]: Build functions using Funqy.
-
-Each extension supports a specific kind of application development;
-follow the specific guides for more information on how to develop, package and deploy your applications using them.
-
-== Deploying to Google App Engine Standard
-
-We will only cover the Java 11 runtime, as the Java 8 runtime uses its own Servlet engine which is not compatible with Quarkus.
-
-First of all, make sure to have an App Engine environment initialized for your Google Cloud project; if not, initialize one via `gcloud app create --project=[YOUR_PROJECT_ID]`.
-
-Then, you will need to create a `src/main/appengine/app.yaml` file; let's keep it minimalistic with only the selected engine:
-
-[source, yaml]
-----
-runtime: java11
-----
-
-This will create a default service for your App Engine application.
-
-App Engine Standard does not support the default Quarkus-specific packaging layout, therefore you must set up your application to be packaged as an uber-jar via your `application.properties` file:
-
-[source, properties]
-----
-quarkus.package.type=uber-jar
-----
-
-Then, you can choose to build the application manually or delegate that responsibility to `gcloud` or the Google Cloud Maven plugin.
-
-=== Building the application manually
-
-Use Maven to build the application with `mvn clean package`; it will generate a single JAR that contains all the classes of your application including its dependencies.
-
-Finally, use `gcloud` to deploy your application as an App Engine service.
-
-[source, shell script]
-----
-gcloud app deploy target/getting-started-1.0.0-SNAPSHOT-runner.jar
-----
-
-This command will upload your application jar and launch it on App Engine.
-
-When it's done, the output will display the URL of your application (target url); you can use it with curl or directly open it in your browser using `gcloud app browse`.
-
-=== Building the application via gcloud
-
-You can choose to let `gcloud` build your application for you; this is the simplest way to deploy to App Engine.
-
-You can then just launch `gcloud app deploy` in the root of your project; it will upload all your project files (the list can be reduced via the `.gcloudignore` file),
-package your JAR via Maven (or Gradle) and launch it on App Engine.
-
-When it's done, the output will display the URL of your application (target url); you can use it with curl or directly open it in your browser using `gcloud app browse`.
-
-=== Building the application via the Google Cloud Maven plugin
-
-You can also let Maven control the deployment of your application using the App Engine Maven plugin.
-
-First, add the plugin to your `pom.xml`:
-
-[source,xml]
-----
-<plugin>
-  <groupId>com.google.cloud.tools</groupId>
-  <artifactId>appengine-maven-plugin</artifactId>
-  <version>2.4.0</version>
-  <configuration>
-    <projectId>GCLOUD_CONFIG</projectId> <1>
-    <version>gettingstarted</version>
-    <artifact>${project.build.directory}/${project.artifactId}-${project.version}-runner.jar</artifact> <2>
-  </configuration>
-</plugin>
-----
-<1> Use the default `gcloud` configuration
-<2> Override the default JAR name to the one generated by the Quarkus Maven plugin
-
-Then you can use Maven to build and deploy your application to App Engine via `mvn clean package appengine:deploy`.
-
-When it's done, the output will display the URL of your application (target URL); you can use it with curl or directly open it in your browser using `gcloud app browse`.
-
-== Deploying to Google App Engine Flexible Custom Runtimes
-
-First of all, make sure to have an App Engine environment initialized for your Google Cloud project; if not, initialize one via `gcloud app create --project=[YOUR_PROJECT_ID]`.
-
-App Engine Flexible Custom Runtimes uses a Docker image to run your application.
-
-First, create an `app.yaml` file at the root of your project with the following content:
-
-[source, yaml]
-----
-runtime: custom
-env: flex
-----
-
-App Engine Flexible Custom Runtimes deploys your application as a Docker container; you can choose to deploy one of the Dockerfiles provided inside your application.
-
-Both JVM and native executable versions will work.
-
-To deploy a JVM application:
-
-- Copy the JVM Dockerfile to the root directory of your project: `cp src/main/docker/Dockerfile.jvm Dockerfile`.
-- Build your application using `mvn clean package`.
-
-To deploy a native application:
-
-- Copy the native Dockerfile to the root directory of your project: `cp src/main/docker/Dockerfile.native Dockerfile`.
-- Build your application as a native executable using `mvn clean package -Dnative`.
-
-Finally, launch `gcloud app deploy` in the root of your project; it will upload all your project files (the list can be reduced via the `.gcloudignore` file),
-build your Dockerfile and launch it on the App Engine Flexible custom runtime.
-
-It uses Cloud Build to build your Docker image and deploy it to Google Container Registry (GCR).
-
-When done, the output will display the URL of your application (target url); you can use it with curl or directly open it in your browser using `gcloud app browse`.
-
-NOTE: App Engine Flexible custom runtimes support link:https://cloud.google.com/appengine/docs/flexible/custom-runtimes/configuring-your-app-with-app-yaml#updated_health_checks[health checks];
-it is strongly advised to provide them, thanks to the Quarkus xref:microprofile-health.adoc[MicroProfile Health] support.
-
-== Deploying to Google Cloud Run
-
-Google Cloud Run allows you to run your Docker containers inside Google Cloud Platform in a managed way.
-
-NOTE: By default, Quarkus listens on port 8080, which is also the Cloud Run default port.
-There is no need to use the `PORT` environment variable defined in Cloud Run to customize the Quarkus HTTP port.
-
-Cloud Run will use Cloud Build to build your Docker image and deploy it to Google Container Registry (GCR).
-
-Both JVM and native executable versions will work.
-
-To deploy a JVM application:
-
-- Copy the JVM Dockerfile to the root directory of your project: `cp src/main/docker/Dockerfile.jvm Dockerfile`.
-- Build your application using `mvn clean package`.
-
-To deploy a native application:
-
-- Copy the native Dockerfile to the root directory of your project: `cp src/main/docker/Dockerfile.native Dockerfile`.
-- Build your application as a native executable using `mvn clean package -Dnative`.
-
-Then, create a `.gcloudignore` file to tell gcloud which files should not be uploaded for Cloud Build;
-without it, it defaults to `.gitignore`, which usually excludes the `target` directory where your packaged application has been created.
-
-In this example, we only exclude the `src` directory:
-
-[source]
-----
-src/
-----
-
-Then, use Cloud Build to build your image; it will upload all the files of your application (except the ones ignored by the `.gcloudignore` file) to a Google Cloud Storage bucket,
-build your Docker image and push it to Google Container Registry (GCR).
-
-[source, shell script]
-----
-gcloud builds submit --tag gcr.io/PROJECT-ID/helloworld
-----
-
-NOTE: You can also build your image locally and push it to a publicly accessible Docker registry, then use this image in the next step.
-
-Finally, use Cloud Run to launch your application.
-
-[source, shell script]
-----
-gcloud run deploy --image gcr.io/PROJECT-ID/helloworld --platform managed
-----
-
-Cloud Run will ask you for the service name, the region, and whether or not unauthenticated calls are allowed.
-After you answer these questions, it will deploy your application.
-
-When the deployment is done, the output will display the URL to access your application.
-
-== Using Cloud SQL
-
-Google Cloud SQL provides managed instances for MySQL, PostgreSQL and Microsoft SQL Server.
-Quarkus has support for all three databases.
-
-=== Using Cloud SQL with a JDBC driver
-
-To make your applications work with Cloud SQL, you first need to use the corresponding JDBC extension; for example, for PostgreSQL,
-add the `quarkus-jdbc-postgresql` extension.
-
-Then you need to add to your `pom.xml` the Cloud SQL JDBC library that provides the additional connectivity to Cloud SQL.
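In both commands above, `PROJECT-ID` is a placeholder for your GCP project id. A small shell sketch of the image reference shape (the project id below is made up):

```shell
# Hypothetical project id; substitute your own GCP project.
PROJECT_ID="my-sample-project"

# Shape of the image reference used by `gcloud builds submit` and `gcloud run deploy`:
IMAGE="gcr.io/${PROJECT_ID}/helloworld"
echo "${IMAGE}"
```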
-For PostgreSQL you will need to include the following dependency:
-
-[source, xml]
-----
-<dependency>
-  <groupId>com.google.cloud.sql</groupId>
-  <artifactId>postgres-socket-factory</artifactId>
-  <version>${postgres-socket-factory.version}</version>
-</dependency>
-----
-
-Finally, you need to configure your datasource specifically to use the socket factory:
-
-[source, properties]
-----
-quarkus.datasource.db-kind=other <1>
-quarkus.datasource.jdbc.url=jdbc:postgresql:///mydatabase <2>
-quarkus.datasource.jdbc.driver=org.postgresql.Driver
-quarkus.datasource.username=quarkus
-quarkus.datasource.password=quarkus
-quarkus.datasource.jdbc.additional-jdbc-properties.cloudSqlInstance=project-id:gcp-region:instance <3>
-quarkus.datasource.jdbc.additional-jdbc-properties.socketFactory=com.google.cloud.sql.postgres.SocketFactory <4>
-----
-<1> Database kind must be 'other' as we need to skip Quarkus auto-configuration.
-<2> The JDBC URL should not include the hostname / IP of the database.
-<3> We add the `cloudSqlInstance` additional JDBC property to configure the instance id.
-<4> We add the `socketFactory` additional JDBC property to configure the socket factory used to connect to Cloud SQL;
-this one comes from the `postgres-socket-factory` dependency.
-
-NOTE: If you use Hibernate ORM, you also need to configure `quarkus.hibernate-orm.dialect=org.hibernate.dialect.PostgreSQL10Dialect`
-as Hibernate ORM would not be able to automatically detect the dialect of your database.
-
-WARNING: Using a PostgreSQL socket factory is not possible in dev mode at the moment
-due to issue link:https://github.com/quarkusio/quarkus/issues/15782[#15782].
-
-=== Using Cloud SQL with a reactive SQL client
-
-You can also use one of our reactive SQL clients instead of the JDBC client.
-To do so with Cloud SQL, add the following dependency
-(adjust the classifier depending on your platform):
-
-[source, xml]
-----
-<dependency>
-  <groupId>io.netty</groupId>
-  <artifactId>netty-transport-native-epoll</artifactId>
-  <classifier>linux-x86_64</classifier>
-</dependency>
-----
-
-Then configure your reactive datasource with no hostname and with the Netty native transport:
-
-[source, properties]
-----
-quarkus.datasource.reactive.url=postgresql://:5432/db-name?host=/cloudsql/project-id:zone:db-name
-quarkus.vertx.prefer-native-transport=true
-----
-
-WARNING: This only works when your application is running inside a Google Cloud managed runtime like App Engine.
-
-== Going further
-
-You can find a set of extensions to access various Google Cloud services in the Quarkiverse (a GitHub organization for Quarkus extensions maintained by the community),
-including PubSub, BigQuery, Storage, Spanner, Firestore and Secret Manager (visit the repository for an accurate list of supported services).
-
-You can find some documentation about them in the link:https://github.com/quarkiverse/quarkiverse-google-cloud-services[Quarkiverse Google Cloud Services repository].
diff --git a/_versions/2.7/guides/deploying-to-heroku.adoc b/_versions/2.7/guides/deploying-to-heroku.adoc
deleted file mode 100644
index d9fb6868da9..00000000000
--- a/_versions/2.7/guides/deploying-to-heroku.adoc
+++ /dev/null
@@ -1,209 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Deploying to Heroku
-include::./attributes.adoc[]
-
-In this guide you will learn how to deploy a Quarkus-based web application as a web dyno to Heroku.
-
-This guide covers:
-
-* Update Quarkus HTTP Port
-* Install the Heroku CLI
-* Deploy the application to Heroku
-* Deploy the application as Docker image to Heroku
-* Deploy the native application as Docker image to Heroku
-
-== Prerequisites
-
-:prerequisites-time: 1 hour for all modalities
-:prerequisites-no-graalvm:
-include::includes/devtools/prerequisites.adoc[]
-* https://www.heroku.com/[A Heroku Account]. Free accounts work.
-* https://devcenter.heroku.com/articles/heroku-cli[Heroku CLI installed]
-
-== Introduction
-
-Heroku is a platform as a service (PaaS) that enables developers to build, run, and operate applications entirely in the cloud.
-It supports several languages like Java, Ruby, Node.js, Scala, Clojure, Python, PHP, and Go.
-In addition, it offers a container registry that can be used to deploy prebuilt container images.
-
-Heroku can be used in different ways to run a Quarkus application:
-
-* As a plain Java program running in a container defined by Heroku's environment
-* As a containerized Java program running in a container defined by the Quarkus build process
-* As a containerized native program running in a container defined by the Quarkus build process
-
-All three approaches need to be aware of the port that Heroku assigns to the application to handle traffic.
-Luckily, there's a dynamic configuration property for it.
-
-The guide assumes that you have the https://devcenter.heroku.com/articles/heroku-cli[Heroku CLI] installed.
-
-== Common project setup
-
-This guide will take as input an application developed in the xref:getting-started.adoc[Getting Started guide].
-
-Make sure you have the getting-started application at hand, or clone the Git repository: `git clone {quickstarts-clone-url}`,
-or download an {quickstarts-archive-url}[archive]. The solution is located in the `getting-started` directory.
-
-Heroku can react to changes in your repository, run CI and redeploy your application when your code changes.
-Therefore, we start from a valid Git repository.
-
-Also, make sure your Heroku CLI is working:
-
-[source,bash]
-----
-heroku --version
-heroku login
-----
-
-== Prepare the Quarkus HTTP Port
-
-Heroku picks a random port and assigns it to the container that is eventually running your Quarkus application.
-That port is available as an environment variable under `$PORT`.
-The easiest way to make Quarkus aware of it in all deployment scenarios is to use the following configuration:
-
-[source,properties]
-----
-quarkus.http.port=${PORT:8080}
-----
-
-This reads as: "Listen on `$PORT` if this is a defined variable, otherwise listen on 8080 as usual."
-Run the following to add this to your `application.properties`:
-
-[source,bash]
-----
-echo "quarkus.http.port=\${PORT:8080}" >> src/main/resources/application.properties
-git commit -am "Configure the HTTP Port."
-----
-
-== Deploy the repository and build on Heroku
-
-The first variant uses the Quarkus Maven build inside Heroku's build infrastructure to create the _quarkus-app_ application structure, containing the runnable "fast-jar" as well as all the libraries needed, and then deploys that result; the other approach uses a local build process to create an optimized container.
-
-Two additional files are needed in your application's root directory:
-
-* `system.properties` to configure the Java version
-* `Procfile` to configure how Heroku starts your application
-
-Quarkus needs JDK 11, so we specify that first:
-
-[source,bash]
-----
-echo "java.runtime.version=11" >> system.properties
-git add system.properties
-git commit -am "Configure the Java version for Heroku."
-----
-
-We will deploy a web application, so we need to configure the type `web` in the Heroku `Procfile` like this:
-
-[source,bash]
-----
-echo "web: java \$JAVA_OPTS -jar target/quarkus-app/quarkus-run.jar" >> Procfile
-git add Procfile
-git commit -am "Add a Procfile."
-----
-
-Your application should already be runnable via `heroku local web`.
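The `${PORT:8080}` expression in Quarkus behaves like the shell's `${PORT:-8080}` parameter expansion: use the variable when it is set, otherwise fall back to the default. A shell sketch of the same semantics:

```shell
# Fallback when PORT is not set:
unset PORT
echo "${PORT:-8080}"     # prints 8080

# The assigned value wins when Heroku sets PORT:
PORT=7070
echo "${PORT:-8080}"     # prints 7070
```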
-
-Let's create an application in your account and deploy that repository to it:
-
-[source,bash]
-----
-heroku create
-git push heroku master
-heroku open
-----
-
-The application will have a generated name, which the terminal will output. `heroku open` opens your default browser to access your new application.
-
-To access the REST endpoint via curl, run:
-
-[source,bash]
-----
-APP_NAME=`heroku info | grep "=== .*" | sed "s/=== //"`
-curl $APP_NAME.herokuapp.com/hello
-----
-
-Of course, you can use the Heroku CLI to connect this repo to your GitHub account, too, but this is out of scope for this guide.
-
-== Deploy as container
-
-The advantage of pushing a whole container is that we are in complete control over its content and may even choose to deploy a container with a native executable running on GraalVM.
-
-First, log in to Heroku's container registry:
-
-[source,bash]
-----
-heroku container:login
-----
-
-We need to add an extension to our project to build container images via the Quarkus Maven plugin:
-
-[source,bash]
-----
-mvn quarkus:add-extension -Dextensions="container-image-docker"
-git add pom.xml
-git commit -am "Add container-image-docker extension."
-----
-
-The image we are going to build needs to be named appropriately to work with Heroku's registry and deployment.
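The `APP_NAME` pipeline above strips the `=== ` prefix from the first line of `heroku info` output. A sketch against a mocked line (the app name is made up):

```shell
# Mocked first line of `heroku info`; real output starts with "=== <app-name>".
INFO_LINE="=== quarkus-hello-12345"

# Same grep/sed pipeline as in the guide:
APP_NAME=$(echo "$INFO_LINE" | grep "=== .*" | sed "s/=== //")
echo "$APP_NAME"     # prints quarkus-hello-12345
```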
-We get the generated name via `heroku info` and pass it on to the (local) build:
-
-[source,bash]
-----
-APP_NAME=`heroku info | grep "=== .*" | sed "s/=== //"`
-mvn clean package\
- -Dquarkus.container-image.build=true\
- -Dquarkus.container-image.group=registry.heroku.com/$APP_NAME\
- -Dquarkus.container-image.name=web\
- -Dquarkus.container-image.tag=latest
-----
-
-With Docker installed, you can now push the image and release it:
-
-[source,bash]
-----
-docker push registry.heroku.com/$APP_NAME/web
-heroku container:release web --app $APP_NAME
-----
-
-You can and should check the logs to see if your application is now indeed running from the container:
-
-[source,bash]
-----
-heroku logs --app $APP_NAME --tail
-----
-
-The initial push is rather big, as all layers of the image need to be transferred.
-The following pushes will be smaller.
-
-The biggest advantage of deploying our app as a container is that we can deploy a natively compiled application.
-Why? Because Heroku will stop or sleep the application when there's no incoming traffic.
-A native application will wake up much faster from its sleep.
-
-The process is pretty much the same.
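The three `-Dquarkus.container-image.*` properties compose into exactly the image reference Heroku's registry expects, `registry.heroku.com/<app>/web:latest`. A shell sketch with a made-up app name:

```shell
APP_NAME="quarkus-hello-12345"   # hypothetical; yours comes from `heroku info`
GROUP="registry.heroku.com/${APP_NAME}"
NAME="web"
TAG="latest"

# group/name:tag is the image reference produced by the Maven build:
echo "${GROUP}/${NAME}:${TAG}"
```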
-We opt in to compiling a native image inside a local container, so that we don't have to deal with installing GraalVM locally:
-
-[source,bash]
-----
-APP_NAME=`heroku info | grep "=== .*" | sed "s/=== //"`
-mvn clean package\
- -Dquarkus.container-image.build=true\
- -Dquarkus.container-image.group=registry.heroku.com/$APP_NAME\
- -Dquarkus.container-image.name=web\
- -Dquarkus.container-image.tag=latest\
- -Pnative\
- -Dquarkus.native.container-build=true
-----
-
-After that, push and release again:
-
-[source,bash]
-----
-docker push registry.heroku.com/$APP_NAME/web
-heroku container:release web --app $APP_NAME
-----
diff --git a/_versions/2.7/guides/deploying-to-kubernetes.adoc b/_versions/2.7/guides/deploying-to-kubernetes.adoc
deleted file mode 100644
index b6317dead25..00000000000
--- a/_versions/2.7/guides/deploying-to-kubernetes.adoc
+++ /dev/null
@@ -1,1555 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Kubernetes extension
-
-include::./attributes.adoc[]
-
-Quarkus offers the ability to automatically generate Kubernetes resources based on sane defaults and user-supplied configuration using https://github.com/dekorateio/dekorate/[dekorate].
-It currently supports generating resources for vanilla <<#kubernetes,Kubernetes>>, <<#openshift,OpenShift>> and <<#knative,Knative>>.
-Furthermore, Quarkus can deploy the application to a target Kubernetes cluster by applying the generated manifests to the target cluster's API Server.
-Finally, when either one of the container image extensions is present (see the xref:container-image.adoc[container image guide] for more details), Quarkus can create a container image and push it to a registry *before* deploying the application to the target platform.
-
-== Prerequisites
-
-:prerequisites-no-graalvm:
-include::includes/devtools/prerequisites.adoc[]
-* Access to a Kubernetes cluster (Minikube is a viable option)
-
-[#kubernetes]
-== Kubernetes
-
-Let's create a new project that contains both the Kubernetes and Jib extensions:
-
-:create-app-artifact-id: kubernetes-quickstart
-:create-app-extensions: resteasy,kubernetes,jib
-include::includes/devtools/create-app.adoc[]
-
-This added the following dependencies to the build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-  <groupId>io.quarkus</groupId>
-  <artifactId>quarkus-resteasy</artifactId>
-</dependency>
-<dependency>
-  <groupId>io.quarkus</groupId>
-  <artifactId>quarkus-kubernetes</artifactId>
-</dependency>
-<dependency>
-  <groupId>io.quarkus</groupId>
-  <artifactId>quarkus-container-image-jib</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-resteasy")
-implementation("io.quarkus:quarkus-kubernetes")
-implementation("io.quarkus:quarkus-container-image-jib")
-----
-
-By adding these dependencies, we enable the generation of Kubernetes manifests each time we perform a build while also enabling the build of a container image using Jib.
-For example, following the execution of:
-
-include::includes/devtools/build.adoc[]
-
-you will notice amongst the other files that are created, two files named
-`kubernetes.json` and `kubernetes.yml` in the `target/kubernetes/` directory.
-
-If you look at either file you will see that it contains both a Kubernetes `Deployment` and a `Service`.
-
-The full source of the `kubernetes.json` file looks something like this:
-
-[source,json]
-----
-[
-  {
-    "apiVersion" : "apps/v1",
-    "kind" : "Deployment",
-    "metadata" : {
-      "annotations" : {
-        "app.quarkus.io/vcs-url" : "<some url>",
-        "app.quarkus.io/commit-id" : "<some git SHA>"
-      },
-      "labels" : {
-        "app.kubernetes.io/name" : "test-quarkus-app",
-        "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
-      },
-      "name" : "test-quarkus-app"
-    },
-    "spec" : {
-      "replicas" : 1,
-      "selector" : {
-        "matchLabels" : {
-          "app.kubernetes.io/name" : "test-quarkus-app",
-          "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
-        }
-      },
-      "template" : {
-        "metadata" : {
-          "labels" : {
-            "app.kubernetes.io/name" : "test-quarkus-app",
-            "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
-          }
-        },
-        "spec" : {
-          "containers" : [ {
-            "env" : [ {
-              "name" : "KUBERNETES_NAMESPACE",
-              "valueFrom" : {
-                "fieldRef" : {
-                  "fieldPath" : "metadata.namespace"
-                }
-              }
-            } ],
-            "image" : "yourDockerUsername/test-quarkus-app:1.0.0-SNAPSHOT",
-            "imagePullPolicy" : "Always",
-            "name" : "test-quarkus-app"
-          } ]
-        }
-      }
-    }
-  },
-  {
-    "apiVersion" : "v1",
-    "kind" : "Service",
-    "metadata" : {
-      "annotations" : {
-        "app.quarkus.io/vcs-url" : "<some url>",
-        "app.quarkus.io/commit-id" : "<some git SHA>"
-      },
-      "labels" : {
-        "app.kubernetes.io/name" : "test-quarkus-app",
-        "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
-      },
-      "name" : "test-quarkus-app"
-    },
-    "spec" : {
-      "ports" : [ {
-        "name" : "http",
-        "port" : 8080,
-        "targetPort" : 8080
-      } ],
-      "selector" : {
-        "app.kubernetes.io/name" : "test-quarkus-app",
-        "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
-      },
-      "type" : "ClusterIP"
-    }
-  }
-]
-----
-
-Besides generating a `Deployment` resource, you can also choose to get a `StatefulSet` instead via `application.properties`:
-
-[source,properties]
-----
-quarkus.kubernetes.deployment-kind=StatefulSet
-----
-
-The generated manifest can be applied to the cluster from the project root using `kubectl`:
-
-[source,bash]
-----
-kubectl apply -f target/kubernetes/kubernetes.json
-----
-
-An important thing to note about the `Deployment` (or `StatefulSet`) is that it uses `yourDockerUsername/test-quarkus-app:1.0.0-SNAPSHOT` as the container image of the `Pod`.
-The name of the image is controlled by the Jib extension and can be customized using the usual `application.properties`.
-
-For example, with a configuration like:
-
-[source,properties]
-----
-quarkus.container-image.group=quarkus #optional, defaults to the system user name
-quarkus.container-image.name=demo-app #optional, defaults to the application name
-quarkus.container-image.tag=1.0 #optional, defaults to the application version
-----
-
-the image that will be used in the generated manifests will be `quarkus/demo-app:1.0`.
-
-=== Namespace
-
-By default, Quarkus omits the namespace in the generated manifests rather than enforcing the `default` namespace. That means that you can apply the manifest to your chosen namespace when using `kubectl`, which in the example below is `test`:
-
-[source,bash]
-----
-kubectl apply -f target/kubernetes/kubernetes.json -n=test
-----
-
-To specify the namespace in your manifest, set the following property in your `application.properties`:
-
-[source,properties]
-----
-quarkus.kubernetes.namespace=mynamespace
-----
-
-=== Defining a Docker registry
-
-The Docker registry can be specified with the following property:
-
-[source,properties]
-----
-quarkus.container-image.registry=my.docker-registry.net
-----
-
-By adding this property along with the rest of the container image properties of the previous section, the generated manifests will use the image `my.docker-registry.net/quarkus/demo-app:1.0`.
-The image is not the only thing that can be customized in the generated manifests, as will become evident in the following sections.
-
-=== Labels and Annotations
-
-==== Labels
-
-The generated manifests use the Kubernetes link:https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels[recommended labels].
-These labels can be customized using `quarkus.kubernetes.name`, `quarkus.kubernetes.version` and `quarkus.kubernetes.part-of`.
-For example, add the following configuration to your `application.properties`:
-
-[source,properties]
-----
-quarkus.kubernetes.part-of=todo-app
-quarkus.kubernetes.name=todo-rest
-quarkus.kubernetes.version=1.0-rc.1
-----
-
-[NOTE]
-====
-As is described in detail in the <<#openshift, OpenShift>> section, customizing OpenShift (or Knative) properties is done in the same way, but replacing
-`kubernetes` with `openshift` (or `knative`). The previous example for OpenShift would look like this:
-
-[source,properties]
-----
-quarkus.openshift.part-of=todo-app
-quarkus.openshift.name=todo-rest
-quarkus.openshift.version=1.0-rc.1
-----
-====
-
-The labels in the generated resources will look like:
-
-[source, json]
-----
-  "labels" : {
-    "app.kubernetes.io/part-of" : "todo-app",
-    "app.kubernetes.io/name" : "todo-rest",
-    "app.kubernetes.io/version" : "1.0-rc.1"
-  }
-----
-
-[NOTE]
-====
-You can also remove the `app.kubernetes.io/version` label by applying the following configuration:
-
-[source,properties]
-----
-quarkus.kubernetes.add-version-to-label-selectors=false
-----
-====
-
-==== Custom Labels
-
-To add additional custom labels, for example `foo=bar`, just apply the following configuration:
-
-[source,properties]
-----
-quarkus.kubernetes.labels.foo=bar
-----
-
-NOTE: When using the `quarkus-container-image-jib` extension to build a container image, any label added via the aforementioned property will also be added to the generated container image.
-
-==== Annotations
-
-Out of the box, the generated resources will be annotated with version control related information that can be used either by tooling, or by the user for troubleshooting purposes.
-
-[source,json]
-----
-  "annotations" : {
-    "app.quarkus.io/vcs-url" : "<some url>",
-    "app.quarkus.io/commit-id" : "<some git SHA>"
-  }
-----
-
-==== Custom Annotations
-
-Custom annotations can be added in a way similar to labels. For example, to add the annotations `foo=bar` and `app.quarkus/id=42`, just apply the following configuration:
-
-[source,properties]
-----
-quarkus.kubernetes.annotations.foo=bar
-quarkus.kubernetes.annotations."app.quarkus/id"=42
-----
-
-[#env-vars]
-==== Environment variables
-
-Kubernetes provides multiple ways of defining environment variables:
-
-- key/value pairs
-- import all values from a Secret or ConfigMap
-- interpolate a single value identified by a given field in a Secret or ConfigMap
-- interpolate a value from a field within the same resource
-
-===== Environment variables from key/value pairs
-
-To add a key/value pair as an environment variable in the generated resources:
-
-[source,properties]
-----
-quarkus.kubernetes.env.vars.my-env-var=foobar
-----
-
-The configuration above will add `MY_ENV_VAR=foobar` as an environment variable.
-Please note that the key `my-env-var` will be converted to uppercase and dashes will be replaced by underscores, resulting in `MY_ENV_VAR`.
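The key-to-variable-name conversion (uppercase the letters, turn dashes into underscores) can be sketched with `tr`:

```shell
# my-env-var -> MY_ENV_VAR: lowercase letters are uppercased, '-' becomes '_'.
echo "my-env-var" | tr 'a-z-' 'A-Z_'     # prints MY_ENV_VAR
```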
- -[[secret-mapping]] -===== Environment variables from Secret - -To add all key/value pairs of `Secret` as environment variables just apply the following configuration, separating each `Secret` -to be used as source by a comma (`,`): - -[source,properties] ----- -quarkus.kubernetes.env.secrets=my-secret,my-other-secret ----- - -which would generate the following in the container definition: - -[source,yaml] ----- -envFrom: - - secretRef: - name: my-secret - optional: false - - secretRef: - name: my-other-secret - optional: false ----- - -The following extracts a value identified by the `keyName` field from the `my-secret` Secret into a `foo` environment variable: - -[source,properties] ----- -quarkus.kubernetes.env.mapping.foo.from-secret=my-secret -quarkus.kubernetes.env.mapping.foo.with-key=keyName ----- - -This would generate the following in the `env` section of your container: - -[source,yaml] ----- -- env: - - name: FOO - valueFrom: - secretKeyRef: - key: keyName - name: my-secret - optional: false ----- - -===== Environment variables from ConfigMap - -To add all key/value pairs from `ConfigMap` as environment variables just apply the following configuration, separating each -`ConfigMap` to be used as source by a comma (`,`): - -[source,properties] ----- -quarkus.kubernetes.env.configmaps=my-config-map,another-config-map ----- - -which would generate the following in the container definition: - -[source,yaml] ----- -envFrom: - - configMapRef: - name: my-config-map - optional: false - - configMapRef: - name: another-config-map - optional: false ----- - -The following extracts a value identified by the `keyName` field from the `my-config-map` ConfigMap into a `foo` -environment variable: - -[source,properties] ----- -quarkus.kubernetes.env.mapping.foo.from-configmap=my-configmap -quarkus.kubernetes.env.mapping.foo.with-key=keyName ----- - -This would generate the following in the `env` section of your container: - -[source,yaml] ----- -- env: - - name: FOO - 
valueFrom:
-      configMapKeyRef:
-        key: keyName
-        name: my-configmap
-        optional: false
-----
-
-===== Environment variables from fields
-
-It's also possible to use the value from another field to add a new environment variable by specifying the path of the field to be used as a source, as follows:
-
-[source,properties]
-----
-quarkus.kubernetes.env.fields.foo=metadata.name
-----
-
-[NOTE]
-====
-As is described in detail in the <<#openshift, OpenShift>> section, customizing OpenShift properties is done in the same way, but replacing
-`kubernetes` with `openshift`. The previous example for OpenShift would look like this:
-
-[source,properties]
-----
-quarkus.openshift.env.fields.foo=metadata.name
-----
-====
-
-===== Validation
-
-A conflict between two definitions, e.g. mistakenly assigning both a value and specifying that a variable is derived from a field, will result in an error being thrown at build time, giving you the opportunity to fix the issue before you deploy your application to your cluster, where it might be more difficult to diagnose the source of the issue.
-
-Similarly, two redundant definitions, e.g. defining an injection from the same secret twice, will not cause an error but will report a warning to let you know that you might not have intended to duplicate that definition.
-
-[#env-vars-backwards]
-===== Backwards compatibility
-
-Previous versions of the Kubernetes extension supported a different syntax to add environment variables. The older syntax is still supported but is deprecated, and it's advised that you migrate to the new syntax.
-
-.Old vs. new syntax
-|====
-| | Old | New
-| Plain variable | `quarkus.kubernetes.env-vars.my-env-var.value=foobar` | `quarkus.kubernetes.env.vars.my-env-var=foobar`
-| From field | `quarkus.kubernetes.env-vars.my-env-var.field=foobar` | `quarkus.kubernetes.env.fields.my-env-var=foobar`
-| All from `ConfigMap` | `quarkus.kubernetes.env-vars.xxx.configmap=foobar` | `quarkus.kubernetes.env.configmaps=foobar`
-| All from `Secret` | `quarkus.kubernetes.env-vars.xxx.secret=foobar` | `quarkus.kubernetes.env.secrets=foobar`
-| From one `Secret` field | `quarkus.kubernetes.env-vars.foo.secret=foobar` | `quarkus.kubernetes.env.mapping.foo.from-secret=foobar`
-| | `quarkus.kubernetes.env-vars.foo.value=field` | `quarkus.kubernetes.env.mapping.foo.with-key=field`
-| From one `ConfigMap` field | `quarkus.kubernetes.env-vars.foo.configmap=foobar` | `quarkus.kubernetes.env.mapping.foo.from-configmap=foobar`
-| | `quarkus.kubernetes.env-vars.foo.value=field` | `quarkus.kubernetes.env.mapping.foo.with-key=field`
-|====
-
-NOTE: If you redefine the same variable using the new syntax while keeping the old syntax, **ONLY** the new version will be kept
-and a warning will be issued to alert you of the problem. For example, if you define both
-`quarkus.kubernetes.env-vars.my-env-var.value=foobar` and `quarkus.kubernetes.env.vars.my-env-var=newValue`, the extension will
-only generate an environment variable `MY_ENV_VAR=newValue` and issue a warning.
-
-==== Mounting volumes
-
-The Kubernetes extension allows the user to configure both volumes and mounts for the application.
-Any volume can be mounted with a simple configuration:
-
-[source,properties]
-----
-quarkus.kubernetes.mounts.my-volume.path=/where/to/mount
-----
-
-This will add a mount to the pod for volume `my-volume` to path `/where/to/mount`.
-The volumes themselves can be configured as shown in the sections below.
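-As a rough sketch of the output (the exact rendering may differ slightly by version), the mount configuration above translates into a `volumeMounts` entry on the container, paired with a volume defined as described in the following sections:
-
-[source,yaml]
-----
-volumeMounts:
-  - name: my-volume
-    mountPath: /where/to/mount
-----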
-
-===== Secret volumes
-
-[source,properties]
-----
-quarkus.kubernetes.secret-volumes.my-volume.secret-name=my-secret
-----
-
-===== ConfigMap volumes
-
-[source,properties]
-----
-quarkus.kubernetes.config-map-volumes.my-volume.config-map-name=my-config-map
-----
-
-==== Passing application configuration
-
-Quarkus supports passing configuration from external locations (via SmallRye Config). This usually requires setting an additional environment variable or system property.
-When you need to use a secret or a config map for the purpose of application configuration, you need to:
-
-- define a volume
-- mount the volume
-- create an environment variable for `SMALLRYE_CONFIG_LOCATIONS`
-
-To simplify things, Quarkus provides a single-step alternative:
-
-[source,properties]
-----
-quarkus.kubernetes.app-secret=
-----
-
-or
-
-[source,properties]
-----
-quarkus.kubernetes.app-config-map=
-----
-
-When these properties are used, the generated manifests will contain everything required.
-The application config volumes will be created using the paths `/mnt/app-secret` and `/mnt/app-config-map` for secrets and config maps respectively.
-
-Note: Users may use both properties at the same time.
-
-=== Changing the number of replicas
-
-To change the number of replicas from 1 to 3:
-
-[source,properties]
-----
-quarkus.kubernetes.replicas=3
-----
-
-=== Add readiness and liveness probes
-
-By default, the Kubernetes resources do not contain readiness and liveness probes in the generated `Deployment`.
Adding them, however, is just a matter of adding the SmallRye Health extension like so:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-smallrye-health</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-smallrye-health")
-----
-
-The values of the generated probes will be determined by the configured health properties: `quarkus.smallrye-health.root-path`, `quarkus.smallrye-health.liveness-path` and `quarkus.smallrye-health.readiness-path`.
-More information about the health extension can be found in the relevant xref:microprofile-health.adoc[guide].
-
-=== Customizing the readiness probe
-
-To set the initial delay of the probe to 20 seconds and the period to 45:
-
-[source,properties]
-----
-quarkus.kubernetes.readiness-probe.initial-delay=20s
-quarkus.kubernetes.readiness-probe.period=45s
-----
-
-=== Add hostAliases
-
-To add entries to a Pod's `/etc/hosts` file (more information can be found in https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/[Kubernetes documentation]), just apply the following configuration:
-
-[source,properties]
-----
-quarkus.kubernetes.hostaliases."10.0.0.0".hostnames=foo.com,bar.org
-----
-
-This would generate the following `hostAliases` section in the `deployment` definition:
-
-[source,yaml]
-----
-kind: Deployment
-spec:
-  template:
-    spec:
-      hostAliases:
-        - hostnames:
-            - foo.com
-            - bar.org
-          ip: 10.0.0.0
-----
-
-=== Container Resources Management
-
-CPU & memory limits and requests can be applied to a `Container` (more info in https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/[Kubernetes documentation]) using the following configuration:
-
-[source]
-----
-quarkus.kubernetes.resources.requests.memory=64Mi
-quarkus.kubernetes.resources.requests.cpu=250m
-quarkus.kubernetes.resources.limits.memory=512Mi
-quarkus.kubernetes.resources.limits.cpu=1000m
-----
-
-This would generate the following entry in the `container` section:
-
-[source,yaml]
-----
-containers:
-  resources:
-    limits:
-      cpu: 1000m
-      memory: 512Mi
-    requests:
-      cpu: 250m
-      memory: 64Mi
-----
-
-=== Using the Kubernetes client
-
-Applications that are deployed to Kubernetes and need to access the API server will usually make use of the `kubernetes-client` extension:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-kubernetes-client</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-kubernetes-client")
-----
-
-To access the API server from within a Kubernetes cluster, some RBAC-related resources are required (e.g. a ServiceAccount, a RoleBinding etc.).
-So, when the `kubernetes-client` extension is present, the `kubernetes` extension is going to create those resources automatically, so that the application will be granted the `view` role.
-If more roles are required, they will have to be added manually.
-
-=== Deploying to Minikube
-
-https://github.com/kubernetes/minikube[Minikube] is quite popular when a Kubernetes cluster is needed for development purposes. To make the deployment to Minikube
-experience as frictionless as possible, Quarkus provides the `quarkus-minikube` extension. This extension can be added to a project like so:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-minikube</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-minikube")
-----
-
-The purpose of this extension is to generate Kubernetes manifests (`minikube.yaml` and `minikube.json`) that are tailored to Minikube.
-This extension assumes a couple of things:
-
-* Users won't be using an image registry and will instead make their container image accessible to the Kubernetes cluster by building it directly
-into Minikube's Docker daemon. To use Minikube's Docker daemon, you must first execute:
-+
-[source,bash]
-----
-eval $(minikube -p minikube docker-env)
-----
-
-* Applications deployed to Kubernetes won't be accessed via a Kubernetes `Ingress`, but rather as a `NodePort` `Service`.
-The advantage of doing this is that the URL of an application can be retrieved trivially by executing:
-+
-[source,bash]
-----
-minikube service list
-----
-
-To control the https://kubernetes.io/docs/concepts/services-networking/service/#nodeport[nodePort] that is used in this case, users can set `quarkus.kubernetes.node-port`.
-Note however that this configuration is entirely optional, because Quarkus will automatically use a proper (and non-changing) value if none is set.
-
-WARNING: It is highly discouraged to use the manifests generated by the Minikube extension when deploying to production, as these manifests are intended for development purposes
-only. When deploying to production, consider using the vanilla Kubernetes manifests (or the OpenShift ones when targeting OpenShift).
-
-NOTE: If the assumptions the Minikube extension makes don't fit your workflow, nothing prevents you from using the regular Kubernetes extension to generate Kubernetes manifests
-and apply those to your Minikube cluster.
-
-== Tuning the generated resources using application.properties
-
-The Kubernetes extension allows tuning the generated manifest, using the `application.properties` file.
-Here are some examples:
-
-=== Configuration options
-
-The table below describes all the available configuration options.
- -.Kubernetes -|==== -| Property | Type | Description | Default Value -| quarkus.kubernetes.name | String | | ${quarkus.container-image.name} -| quarkus.kubernetes.version | String | | ${quarkus.container-image.tag} -| quarkus.kubernetes.deployment-kind | String | | Deployment -| quarkus.kubernetes.part-of | String | | -| quarkus.kubernetes.init-containers | Map | | -| quarkus.kubernetes.namespace | String | | -| quarkus.kubernetes.labels | Map | | -| quarkus.kubernetes.annotations | Map | | -| quarkus.kubernetes.app-secret | String | | -| quarkus.kubernetes.app-config-map | String | | -| quarkus.kubernetes.env-vars | Map | | -| quarkus.kubernetes.working-dir | String | | -| quarkus.kubernetes.command | String[] | | -| quarkus.kubernetes.arguments | String[] | | -| quarkus.kubernetes.replicas | int | | 1 -| quarkus.kubernetes.service-account | String | | -| quarkus.kubernetes.ports | Map | | -| quarkus.kubernetes.service-type | ServiceType | | ClusterIP -| quarkus.kubernetes.pvc-volumes | Map | | -| quarkus.kubernetes.secret-volumes | Map | | -| quarkus.kubernetes.config-map-volumes | Map | | -| quarkus.kubernetes.git-repo-volumes | Map | | -| quarkus.kubernetes.aws-elastic-block-store-volumes | Map | | -| quarkus.kubernetes.azure-disk-volumes | Map | | -| quarkus.kubernetes.azure-file-volumes | Map | | -| quarkus.kubernetes.mounts | Map | | -| quarkus.kubernetes.image-pull-policy | ImagePullPolicy | | Always -| quarkus.kubernetes.image-pull-secrets | String[] | | -| quarkus.kubernetes.liveness-probe | Probe | | ( see Probe ) -| quarkus.kubernetes.readiness-probe | Probe | | ( see Probe ) -| quarkus.kubernetes.sidecars | Map | | -| quarkus.kubernetes.ingress.expose | boolean | | false -| quarkus.kubernetes.ingress.host | String | | -| quarkus.kubernetes.ingress.annotations | Map | | -| quarkus.kubernetes.headless | boolean | | false -| quarkus.kubernetes.hostaliases | Map | | -| quarkus.kubernetes.resources.requests.cpu | String | | -| 
quarkus.kubernetes.resources.requests.memory | String | |
-| quarkus.kubernetes.resources.limits.cpu | String | |
-| quarkus.kubernetes.resources.limits.memory | String | |
-|====
-
-Properties that use non-standard types can be referenced by expanding the property.
-For example, to define a `kubernetes-readiness-probe` which is of type `Probe`:
-
-[source,properties]
-----
-quarkus.kubernetes.readiness-probe.initial-delay=20s
-quarkus.kubernetes.readiness-probe.period=45s
-----
-
-In this example `initial-delay` and `period` are fields of the type `Probe`.
-Below you will find tables describing all available types.
-
-==== Client Connection Configuration
-
-You may need to configure the connection to your Kubernetes cluster.
-By default, it automatically uses the active _context_ used by `kubectl`.
-
-For instance, if your cluster API endpoint uses a self-signed SSL certificate, you need to explicitly configure the client to trust it. You can achieve this by defining the following property:
-
-[source,properties]
-----
-quarkus.kubernetes-client.trust-certs=true
-----
-
-The full list of the Kubernetes client configuration properties is provided below.
- -include::{generated-dir}/config/quarkus-kubernetes-client.adoc[opts=optional, leveloffset=+1] - -==== Basic Types - -.ServiceType -Allowed values: `cluster-ip`, `node-port`, `load-balancer`, `external-name` - -.Env -|==== -| Property | Type | Description | Default Value -| value | String | | -| secret | String | | -| configmap | String | | -| field | String | | -|==== - -.Probe -|==== -| Property | Type | Description | Default Value -| http-action-path | String | | -| exec-action | String | | -| tcp-socket-action | String | | -| initial-delay | Duration | | 0 -| period | Duration | | 30s -| timeout | Duration | | 10s -|==== - -.Port -|==== -| Property | Type | Description | Default Value -| container-port | int | | -| host-port | int | | 0 -| path | String | | / -| protocol | Protocol | | TCP -|==== - -.Container -|==== -| Property | Type | Description | Default Value -| image | String | | -| env-vars | Env[] | | -| working-dir | String | | -| command | String[] | | -| arguments | String[] | | -| ports | Port[] | | -| mounts | Mount[] | | -| image-pull-policy | ImagePullPolicy | | Always -| liveness-probe | Probe | | -| readiness-probe | Probe | | -|==== - -.HostAlias -|==== -| Property | Type | Description | Default Value -| hostnames | String[] | list of hostnames | -|==== - -==== Mounts and Volumes - -.Mount -|==== -| Property | Type | Description | Default Value -| path | String | | -| sub-path | String | | -| read-only | boolean | | false -|==== - -.ConfigMapVolume -|==== -| Property | Type | Description | Default Value -| config-map-name | String | | -| default-mode | int | | 0600 -| optional | boolean | | false -|==== - -.SecretVolume -|==== -| Property | Type | Description | Default Value -| secret-name | String | | -| default-mode | int | | 0600 -| optional | boolean | | false -|==== - - -.AzureDiskVolume -|==== -| Property | Type | Description | Default Value -| disk-name | String | | -| disk-uri | String | | -| kind | String | | Managed -| 
caching-mode | String | | ReadWrite -| fs-type | String | | ext4 -| read-only | boolean | | false -|==== - -.AwsElasticBlockStoreVolume -|==== -| Property | Type | Description | Default Value -| volume-id | String | | -| partition | int | | -| fs-type | String | | ext4 -| read-only | boolean | | false -|==== - -.GitRepoVolume -|==== -| Property | Type | Description | Default Value -| repository | String | | -| directory | String | | -| revision | String | | -|==== - -.PersistentVolumeClaimVolume -|==== -| Property | Type | Description | Default Value -| claim-name | String | | -| read-only | boolean | | false -|==== - -.AzureFileVolume -|==== -| Property | Type | Description | Default Value -| share-name | String | | -| secret-name | String | | -| read-only | boolean | | false -|==== - -[#openshift] -=== OpenShift - -One way to deploy an application to OpenShift is to use s2i (source to image) to create an image stream from the source and then deploy the image stream: - -[source,bash,role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -quarkus extension remove kubernetes,jib -quarkus extension add openshift - -oc new-project quarkus-project -quarkus build -Dquarkus.container-image.build=true - -oc new-app --name=greeting quarkus-project/kubernetes-quickstart:1.0.0-SNAPSHOT -oc expose svc/greeting -oc get route -curl /greeting ----- - -[source,bash,role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -./mvnw quarkus:remove-extension -Dextensions="kubernetes, jib" -./mvnw quarkus:add-extension -Dextensions="openshift" - -oc new-project quarkus-project -./mvnw clean package -Dquarkus.container-image.build=true - -oc new-app --name=greeting quarkus-project/kubernetes-quickstart:1.0.0-SNAPSHOT -oc expose svc/greeting -oc get route -curl /greeting ----- - -[source,bash,role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -./gradlew removeExtension --extensions="kubernetes, jib" -./gradlew addExtension --extensions="openshift" - -oc new-project quarkus-project 
-
-./gradlew build -Dquarkus.container-image.build=true
-
-oc new-app --name=greeting quarkus-project/kubernetes-quickstart:1.0.0-SNAPSHOT
-oc expose svc/greeting
-oc get route
-curl /greeting
-----
-
-See further information in xref:deploying-to-openshift.adoc[Deploying to OpenShift].
-
-A description of OpenShift resources and customisable properties is given below alongside Kubernetes resources to show similarities where applicable. This includes an alternative to `oc new-app ...` above, i.e. `oc apply -f target/kubernetes/openshift.json`.
-
-To enable the generation of OpenShift resources, you need to include OpenShift in the target platforms:
-
-[source,properties]
-----
-quarkus.kubernetes.deployment-target=openshift
-----
-
-If you need to generate resources for both platforms (vanilla Kubernetes and OpenShift), then you need to include both (comma separated):
-
-[source,properties]
-----
-quarkus.kubernetes.deployment-target=kubernetes,openshift
-----
-
-Following the execution of `./mvnw package -Dquarkus.container-image.build=true` you will notice, amongst the other files that are created, two files named
-`openshift.json` and `openshift.yml` in the `target/kubernetes/` directory.
-
-These manifests can be deployed as is to a running cluster, using `kubectl`:
-
-[source,bash]
-----
-kubectl apply -f target/kubernetes/openshift.json
-----
-
-OpenShift users might want to use `oc` rather than `kubectl`:
-
-[source,bash]
-----
-oc apply -f target/kubernetes/openshift.json
-----
-
-NOTE: Quarkus also provides the xref:deploying-to-openshift.adoc[OpenShift] extension. This extension is basically a wrapper around the Kubernetes extension and
-relieves OpenShift users of the necessity of setting the `deployment-target` property to `openshift`.
-
-The OpenShift resources can be customized in a similar way to the Kubernetes ones.
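-For example, exposing the application through an OpenShift `Route` follows the same property pattern as the Kubernetes configuration shown earlier; the host value below is purely illustrative:
-
-[source,properties]
-----
-# Expose the application via a Route (hypothetical host)
-quarkus.openshift.route.expose=true
-quarkus.openshift.route.host=greeting.example.com
-# Scale to three replicas
-quarkus.openshift.replicas=3
-----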
- -.OpenShift -|==== -| Property | Type | Description | Default Value -| quarkus.openshift.name | String | | ${quarkus.container-image.name} -| quarkus.openshift.version | String | | ${quarkus.container-image.tag} -| quarkus.openshift.deployment-kind | String | | DeploymentConfig -| quarkus.openshift.part-of | String | | -| quarkus.openshift.init-containers | Map | | -| quarkus.openshift.labels | Map | | -| quarkus.openshift.annotations | Map | | -| quarkus.openshift.app-secret | String | | -| quarkus.openshift.app-config-map | String | | -| quarkus.openshift.env-vars | Map | | -| quarkus.openshift.working-dir | String | | -| quarkus.openshift.command | String[] | | -| quarkus.openshift.arguments | String[] | | -| quarkus.openshift.replicas | int | | 1 -| quarkus.openshift.service-account | String | | -| quarkus.openshift.ports | Map | | -| quarkus.openshift.service-type | ServiceType | | ClusterIP -| quarkus.openshift.pvc-volumes | Map | | -| quarkus.openshift.secret-volumes | Map | | -| quarkus.openshift.config-map-volumes | Map | | -| quarkus.openshift.git-repo-volumes | Map | | -| quarkus.openshift.aws-elastic-block-store-volumes | Map | | -| quarkus.openshift.azure-disk-volumes | Map | | -| quarkus.openshift.azure-file-volumes | Map | | -| quarkus.openshift.mounts | Map | | -| quarkus.openshift.image-pull-policy | ImagePullPolicy | | Always -| quarkus.openshift.image-pull-secrets | String[] | | -| quarkus.openshift.liveness-probe | Probe | | ( see Probe ) -| quarkus.openshift.readiness-probe | Probe | | ( see Probe ) -| quarkus.openshift.sidecars | Map | | -| quarkus.openshift.route.expose | boolean | | false -| quarkus.openshift.route.host | String | | -| quarkus.openshift.route.annotations | Map | | -| quarkus.openshift.headless | boolean | | false -|==== - -[#knative] -=== Knative - -To enable the generation of Knative resources, you need to include Knative in the target platforms: - -[source,properties] ----- -quarkus.kubernetes.deployment-target=knative 
-----
-
-Following the execution of `./mvnw package` you will notice, amongst the other files that are created, two files named
-`knative.json` and `knative.yml` in the `target/kubernetes/` directory.
-
-If you look at either file, you will see that it contains a Knative `Service`.
-
-The full source of the `knative.json` file looks something like this:
-
-[source,json]
-----
-{
-  "apiVersion" : "serving.quarkus.knative.dev/v1alpha1",
-  "kind" : "Service",
-  "metadata" : {
-    "annotations": {
-      "app.quarkus.io/vcs-url" : "",
-      "app.quarkus.io/commit-id" : ""
-    },
-    "labels" : {
-      "app.kubernetes.io/name" : "test-quarkus-app",
-      "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
-    },
-    "name" : "knative"
-  },
-  "spec" : {
-    "runLatest" : {
-      "configuration" : {
-        "revisionTemplate" : {
-          "spec" : {
-            "container" : {
-              "image" : "dev.local/yourDockerUsername/test-quarkus-app:1.0.0-SNAPSHOT",
-              "imagePullPolicy" : "Always"
-            }
-          }
-        }
-      }
-    }
-  }
-}
-----
-
-The generated manifest can be deployed as is to a running cluster, using `kubectl`:
-
-[source,bash]
-----
-kubectl apply -f target/kubernetes/knative.json
-----
-
-The generated service can be customized using the following properties:
-
-.Knative
-|====
-| Property | Type | Description | Default Value
-| quarkus.knative.name | String | | ${quarkus.container-image.name}
-| quarkus.knative.version | String | | ${quarkus.container-image.tag}
-| quarkus.knative.part-of | String | |
-| quarkus.knative.init-containers | Map | |
-| quarkus.knative.labels | Map | |
-| quarkus.knative.annotations | Map | |
-| quarkus.knative.app-secret | String | |
-| quarkus.knative.app-config-map | String | |
-| quarkus.knative.env-vars | Map | |
-| quarkus.knative.working-dir | String | |
-| quarkus.knative.command | String[] | |
-| quarkus.knative.arguments | String[] | |
-| quarkus.knative.replicas | int | | 1
-| quarkus.knative.service-account | String | |
-| quarkus.knative.host | String | |
-| quarkus.knative.ports | Map | |
-|
quarkus.knative.service-type | ServiceType | | ClusterIP
-| quarkus.knative.pvc-volumes | Map | |
-| quarkus.knative.secret-volumes | Map | |
-| quarkus.knative.config-map-volumes | Map | |
-| quarkus.knative.git-repo-volumes | Map | |
-| quarkus.knative.aws-elastic-block-store-volumes | Map | |
-| quarkus.knative.azure-disk-volumes | Map | |
-| quarkus.knative.azure-file-volumes | Map | |
-| quarkus.knative.mounts | Map | |
-| quarkus.knative.image-pull-policy | ImagePullPolicy | | Always
-| quarkus.knative.image-pull-secrets | String[] | |
-| quarkus.knative.liveness-probe | Probe | | ( see Probe )
-| quarkus.knative.readiness-probe | Probe | | ( see Probe )
-| quarkus.knative.sidecars | Map | |
-| quarkus.knative.revision-name | String | |
-| quarkus.knative.traffic | Traffic[] | | ( see Traffic )
-| quarkus.knative.min-scale | int | See link:https://knative.dev/docs/serving/autoscaling/scale-bounds/#lower-bound[link] |
-| quarkus.knative.max-scale | int | See link:https://knative.dev/docs/serving/autoscaling/scale-bounds/#upper-bound[link] |
-| quarkus.knative.scale-to-zero-enabled | boolean | See link:https://knative.dev/docs/serving/autoscaling/scale-to-zero/#enable-scale-to-zero[link] | true
-| quarkus.knative.revision-auto-scaling | AutoScalingConfig | | ( see AutoScalingConfig )
-| quarkus.knative.global-auto-scaling | GlobalAutoScalingConfig | | ( see GlobalAutoScalingConfig )
-|====
-
-.Traffic
-|====
-| Property | Type | Description | Default Value
-| revision-name | String | A specific revision to which to send this portion of traffic |
-| tag | String | Expose a dedicated URL for referencing this target |
-| latest-revision | Boolean | Optionally provided to indicate that the latest revision should be used for this traffic target | false
-| percent | Long | Indicates the percent of traffic that is routed to this revision | 100
-|====
-
-.AutoScalingConfig
-|====
-| Property | Type | Description | Default Value
-| auto-scaler-class | String | The
auto-scaler class. Possible values: `kpa` for Knative Pod Autoscaler, `hpa` for Horizontal Pod Autoscaler | kpa
-| metric | String | The autoscaling metric to use. Possible values: `concurrency`, `rps`, `cpu` |
-| target | int | This value specifies the autoscaling target |
-| container-concurrency | int | The exact number of requests allowed to the replica at a time |
-| target-utilization-percentage | int | This value specifies a percentage of the target to actually be targeted by the autoscaler |
-|====
-
-.GlobalAutoScalingConfig
-|====
-| Property | Type | Description | Default Value
-| auto-scaler-class | String | The auto-scaler class. Possible values: `kpa` for Knative Pod Autoscaler, `hpa` for Horizontal Pod Autoscaler | kpa
-| container-concurrency | int | The exact number of requests allowed to the replica at a time |
-| target-utilization-percentage | int | This value specifies a percentage of the target to actually be targeted by the autoscaler |
-| requests-per-second | Long | The requests per second per replica |
-|====
-
-=== Deployment targets
-
-The previous sections mentioned the concept of a `deployment-target`. This concept allows users to control which Kubernetes manifests will be generated
-and deployed to a cluster (if `quarkus.kubernetes.deploy` has been set to `true`).
-
-By default, when no `deployment-target` is set, then only vanilla Kubernetes resources are generated and deployed. When multiple values are set (for example
-`quarkus.kubernetes.deployment-target=kubernetes,openshift`) then the resources for all targets are generated, but only the resources
-that correspond to the *first* target are applied to the cluster (if deployment is enabled).
-
-In the case of wrapper extensions like OpenShift and Minikube, when these extensions have been explicitly added to the project, the default `deployment-target`
-is set by those extensions.
For example, if `quarkus-minikube` has been added to a project, then `minikube` becomes the default deployment target and its
-resources will be applied to the Kubernetes cluster when deployment via `quarkus.kubernetes.deploy` has been set.
-Users can still override the deployment targets manually using `quarkus.kubernetes.deployment-target`.
-
-=== Deprecated configuration
-
-The following categories of configuration properties have been deprecated.
-
-==== Properties without the quarkus prefix
-
-In earlier versions of the extension, the `quarkus.` prefix was missing from those properties. These properties are now deprecated.
-
-==== Docker and S2i properties
-
-The properties for configuring `docker` and `s2i` are also deprecated in favor of the new container-image extensions.
-
-==== Config group arrays
-
-Properties referring to config group arrays (e.g. `kubernetes.labels[0]`, `kubernetes.env-vars[0]` etc.) have been converted to maps, to align with the rest of the Quarkus ecosystem.
-
-The code below demonstrates the change in `labels` config:
-
-[source,properties]
-----
-# Old labels config:
-kubernetes.labels[0].name=foo
-kubernetes.labels[0].value=bar
-
-# New labels
-quarkus.kubernetes.labels.foo=bar
-----
-
-The code below demonstrates the change in `env-vars` config:
-
-[source,properties]
-----
-# Old env-vars config:
-kubernetes.env-vars[0].name=foo
-kubernetes.env-vars[0].configmap=my-configmap
-
-# New env-vars
-quarkus.kubernetes.env-vars.foo.configmap=my-configmap
-----
-
-==== `env-vars` properties
-
-`quarkus.kubernetes.env-vars` properties are deprecated (though still currently supported as of this writing) and the new declaration style should be used instead.
-See <<#env-vars>> and more specifically <<#env-vars-backwards>> for more details.
-
-== Deployment
-
-To trigger building and deploying a container image, you need to enable the `quarkus.kubernetes.deploy` flag (the flag is disabled by default - furthermore, it has no effect during test runs or dev mode).
-This can be easily done from the command line:
-
-[source,bash,subs=attributes+]
-----
-./mvnw clean package -Dquarkus.kubernetes.deploy=true
-----
-
-=== Building a container image
-
-Building a container image is possible using any of the 3 available `container-image` extensions:
-
-- xref:container-image.adoc#docker[Docker]
-- xref:container-image.adoc#jib[Jib]
-- xref:container-image.adoc#s2i[s2i]
-
-Each time deployment is requested, a container image build will be implicitly triggered (no additional properties are required when the Kubernetes deployment has been enabled).
-
-=== Deploying
-
-When deployment is enabled, the Kubernetes extension will select the resources specified by `quarkus.kubernetes.deployment-target` and deploy them.
-This assumes that a `.kube/config` file is available in your user directory that points to the target Kubernetes cluster.
-In other words, the extension will use whatever cluster `kubectl` uses. The same applies to credentials.
-
-At the moment no additional options are provided for further customization.
-
-== Using existing resources
-
-Sometimes it's desirable to either provide additional resources (e.g. a ConfigMap, a Secret, a Deployment for a database etc.) or provide custom ones that will be used as a `base` for the generation process.
-Those resources can be added under the `src/main/kubernetes` directory and can be named after the target environment (e.g. kubernetes.json, openshift.json, knative.json, or the yml equivalents). The correlation between provided and generated files is done by file name.
-So, a `kubernetes.json`/`kubernetes.yml` file added in `src/main/kubernetes` will only affect the generated `kubernetes.json`/`kubernetes.yml`. An `openshift.json`/`openshift.yml` file added in `src/main/kubernetes` will only affect the generated `openshift.json`/`openshift.yml`.
-A `knative.json`/`knative.yml` file added in `src/main/kubernetes` will only affect the generated `knative.json`/`knative.yml` and so on.
The provided file may be either in JSON or YAML format and may contain one or more resources. These resources will end up in both generated formats (JSON and YAML). For example, a secret added in `src/main/kubernetes/kubernetes.yml` will be added to both the generated `kubernetes.yml` and `kubernetes.json`.
-
-Note: At the time of writing, there is no mechanism in place that allows a one-to-many relationship between provided and generated files. Minikube is not an exception to the rule above, so if you want to customize the generated Minikube manifests, the file placed under `src/main/kubernetes` will have to be named `minikube.json` or `minikube.yml` (naming it `kubernetes.yml` or `kubernetes.json` will result in having only the generated `kubernetes.yml` and `kubernetes.json` affected).
-
-Any resource found will be added in the generated manifests. Global modifications (e.g. labels, annotations etc.) will also be applied to those resources.
-If one of the provided resources has the same name as one of the generated ones, then the generated resource will be created on top of the provided resource, respecting existing content when possible (e.g. existing labels, annotations, environment variables, mounts, replicas etc.).
-
-The name of the resource is determined by the application name and may be overridden by `quarkus.kubernetes.name`, `quarkus.openshift.name` and `quarkus.knative.name`.
-
-For example, in the `kubernetes-quickstart` application, we can add a `kubernetes.yml` file in the `src/main/kubernetes` directory that looks like:
-
-[source,yaml]
-----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: kubernetes-quickstart
-  labels:
-    app: quickstart
-spec:
-  replicas: 3
-  selector:
-    matchLabels:
-      app: quickstart
-  template:
-    metadata:
-      labels:
-        app: quickstart
-    spec:
-      containers:
-      - name: kubernetes-quickstart
-        image: someimage:latest
-        ports:
-        - containerPort: 80
-        env:
-        - name: FOO
-          value: BAR
-----
-
-The generated `kubernetes.yml` will look like:
-
-[source,yaml]
-----
-apiVersion: "apps/v1"
-kind: "Deployment"
-metadata:
-  annotations:
-    app.quarkus.io/build-timestamp: "2020-04-10 - 12:54:37 +0000"
-  labels:
-    app: "quickstart"
-  name: "kubernetes-quickstart"
-spec:
-  replicas: 3 <1>
-  selector:
-    matchLabels:
-      app.kubernetes.io/name: "kubernetes-quickstart"
-      app.kubernetes.io/version: "1.0.0-SNAPSHOT"
-  template:
-    metadata:
-      annotations:
-        app.quarkus.io/build-timestamp: "2020-04-10 - 12:54:37 +0000"
-      labels:
-        app: "quickstart" <2>
-    spec:
-      containers:
-      - env:
-        - name: "FOO" <3>
-          value: "BAR"
-        image: "<>/kubernetes-quickstart:1.0.0-SNAPSHOT" <4>
-        imagePullPolicy: "Always"
-        name: "kubernetes-quickstart"
-        ports:
-        - containerPort: 8080 <5>
-          name: "http"
-          protocol: "TCP"
-      serviceAccount: "kubernetes-quickstart"
-----
-
-The provided replicas <1>, labels <2>, and environment variables <3> were retained. However, the image <4> and container port <5> were modified. Moreover, the default annotations have been added.
-
-[NOTE]
-====
-* When the resource name does not match the application name (or the overridden name), rather than reusing the resource, a new one will be added. The same goes for the container.
-
-* When the name of the container does not match the application name (or the overridden name), container specific configuration will be ignored. 
-==== - -== Service Binding [[service_binding]] - -Quarkus supports the link:https://github.com/k8s-service-bindings/spec[Service Binding Specification for Kubernetes] to bind services to applications. - -Specifically, Quarkus implements the link:https://github.com/k8s-service-bindings/spec#workload-projection[Workload Projection] part of the specification, therefore allowing applications to bind to services, such as a Database or a Broker, without the need for user configuration. - -To enable Service Binding for supported extensions, add the `quarkus-kubernetes-service-binding` extension to the application dependencies. - -* The following extensions can be used with Service Binding and are supported for Workload Projection: -+ -==== -* `quarkus-jdbc-mariadb` -* `quarkus-jdbc-mssql` -* `quarkus-jdbc-mysql` -* `quarkus-jdbc-postgresql` -* `quarkus-mongo-client` - -* `quarkus-kafka-client` -* `quarkus-smallrye-reactive-messaging-kafka` -==== - - -=== Workload Projection - -Workload Projection is a process of obtaining the configuration for services from the Kubernetes cluster. This configuration takes the form of directory structures that follow certain conventions and is attached to an application or to a service as a mounted volume. The `kubernetes-service-binding` extension uses this directory structure to create configuration sources, which allows you to configure additional modules, such as databases or message brokers. - -During application development, users can use workload projection to connect their application to a development database, or other locally-run services, without changing the actual application code or configuration. 
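The projected directory structure described above can be sketched with a quick shell example. This is a hypothetical layout following the spec's conventions (a `k8s-sb` root, one subdirectory per bound service); the binding name `fruit-db` and all file values are illustrative assumptions, not output of any particular Operator:

```shell
# A hypothetical projected binding directory, following the Service Binding
# spec conventions: a k8s-sb root with one subdirectory per bound service.
# The binding name "fruit-db" and the credential values are illustrative.
ROOT="$(mktemp -d)/k8s-sb"
mkdir -p "$ROOT/fruit-db"
printf 'postgresql' > "$ROOT/fruit-db/type"      # the service type
printf 'localhost'  > "$ROOT/fruit-db/host"      # connection details follow
printf 'quarkus'    > "$ROOT/fruit-db/username"
printf 'changeit'   > "$ROOT/fruit-db/password"

# An application pointed at this root via SERVICE_BINDING_ROOT reads each
# file name as a configuration key and the file content as its value.
export SERVICE_BINDING_ROOT="$ROOT"
cat "$SERVICE_BINDING_ROOT/fruit-db/type"
```

Each file name becomes a configuration key, so adding a new file (for example `port`) is all it takes to project an additional property.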
-
-For an example of a workload projection where the directory structure is included in the test resources and passed to the integration test, see the link:https://github.com/quarkusio/quarkus/tree/e7efe6b3efba91b9c4ae26f9318f8397e23e7505/integration-tests/kubernetes-service-binding-jdbc/src/test/resources/k8s-sb[Kubernetes Service Binding datasource] GitHub repository.
-
-[NOTE]
-====
-* The `k8s-sb` directory is the root of all service bindings. In this example, only one database called `fruit-db` is intended to be bound. This bound database has a `type` file, which indicates `postgresql` as the database type, while the other files in the directory provide the information necessary to establish the connection.
-
-* After your Quarkus project obtains information from the `SERVICE_BINDING_ROOT` environment variable that is set by OpenShift, you can locate the generated configuration files that are present in the file system and use them to map the configuration-file values to properties of certain extensions.
-====
-
-
-== Introduction to the Service Binding Operator
-
-The link:https://github.com/redhat-developer/service-binding-operator[Service Binding Operator] is an Operator that implements the link:https://github.com/k8s-service-bindings/spec[Service Binding Specification for Kubernetes] and is meant to simplify the binding of services to an application. Containerized applications that support link:https://github.com/k8s-service-bindings/spec#workload-projection[Workload Projection] obtain service binding information in the form of volume mounts. The Service Binding Operator reads binding service information and mounts it to the application containers that need it.
-
-The correlation between application and bound services is expressed through `ServiceBinding` resources, which declare the intent of what services are meant to be bound to what application. 
-
-The Service Binding Operator watches for `ServiceBinding` resources, which inform the Operator what applications are meant to be bound with what services. When a listed application is deployed, the Service Binding Operator collects all the binding information that must be passed to the application, then upgrades the application container by attaching a volume mount with the binding information.
-
-The Service Binding Operator completes the following actions:
-
-* Observes `ServiceBinding` resources for workloads intended to be bound to a particular service
-* Applies the binding information to the workload using volume mounts
-
-The following chapter describes the automatic and semi-automatic service binding approaches and their use cases. With either approach, the `kubernetes-service-binding` extension generates a `ServiceBinding` resource. With the semi-automatic approach, users must provide the configuration for target services manually. With the automatic approach, no additional configuration is needed for the limited set of services for which the `ServiceBinding` resource can be generated.
-
-
-=== Semi-automatic service binding
-
-A service binding process starts with a user specification of the required services that will be bound to a certain application. This expression is summarized in the `ServiceBinding` resource that is generated by the `kubernetes-service-binding` extension. Using the `kubernetes-service-binding` extension helps users generate `ServiceBinding` resources with minimal configuration, therefore simplifying the process overall.
-
-The Service Binding Operator responsible for the binding process then reads the information from the `ServiceBinding` resource and mounts the required files to a container accordingly. 
-
-
-* An example of the `ServiceBinding` resource:
-+
-[source,yaml]
-----
-apiVersion: binding.operators.coreos.com/v1beta1
-kind: ServiceBinding
-metadata:
-  name: binding-request
-  namespace: service-binding-demo
-spec:
-  application:
-    name: java-app
-    group: apps
-    version: v1
-    resource: deployments
-  services:
-  - group: postgres-operator.crunchydata.com
-    version: v1beta1
-    kind: Database
-    name: db-demo
-    id: postgresDB
-----
-+
-[NOTE]
-====
-* The `quarkus-kubernetes-service-binding` extension provides a more compact way of expressing the same information. For example:
-+
-[source,properties]
-----
-quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1
-quarkus.kubernetes-service-binding.services.db-demo.kind=Database
-----
-====
-
-After adding the earlier configuration properties to your `application.properties`, the `quarkus-kubernetes` extension, in combination with the `quarkus-kubernetes-service-binding` extension, automatically generates the `ServiceBinding` resource.
-
-The earlier mentioned `db-demo` property-configuration identifier now has a double role and also completes the following actions:
-
-* Correlates and groups the `api-version` and `kind` properties together
-* Defines the `name` property for the custom resource with a possibility for a later edit. 
For example:
-+
-[source,properties]
-----
-quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1
-quarkus.kubernetes-service-binding.services.db-demo.kind=Database
-quarkus.kubernetes-service-binding.services.db-demo.name=my-db
-----
-
-.Additional resources
-
-* For a semi-automatic service binding demonstration, see link:https://developers.redhat.com/articles/2021/12/22/how-use-quarkus-service-binding-operator#create_the_quarkus_application[How to use Quarkus with the Service Binding Operator]
-
-* link:https://github.com/redhat-developer/service-binding-operator#known-bindable-operators[List of bindable Operators]
-
-
-=== Automatic service binding
-
-The `quarkus-kubernetes-service-binding` extension can generate the `ServiceBinding` resource automatically after detecting that an application requires access to the external services that are provided by available bindable Operators.
-
-NOTE: Automatic service binding can be generated for a limited number of service types. To be consistent with established terminology for Kubernetes and Quarkus services, this chapter refers to these service types as kinds. 
-
-.Operators that support the service auto-binding
-[%autowidth,%noheader,stripes=even]
-|====
-| | Operator | API Version | Kind
-| `postgresql` | link:https://operatorhub.io/operator/postgresql[CrunchyData Postgres] | postgres-operator.crunchydata.com/v1beta1 | PostgresCluster
-| `mysql` | link:https://operatorhub.io/operator/percona-xtradb-cluster-operator[Percona XtraDB Cluster] | pxc.percona.com/v1-9-0 | PerconaXtraDBCluster
-| `mongo` | link:https://operatorhub.io/operator/percona-server-mongodb-operator[Percona Mongo] | psmdb.percona.com/v1-9-0 | PerconaServerMongoDB
-|====
-
-
-=== Automatic datasource binding
-
-For traditional databases, automatic binding is initiated whenever a datasource is configured as follows:
-
-[source,properties]
-----
-quarkus.datasource.db-kind=postgresql
-----
-
-The previous configuration, combined with the presence of the `quarkus-datasource`, `quarkus-jdbc-postgresql`, `quarkus-kubernetes`, and `quarkus-kubernetes-service-binding` extensions in the application, results in the generation of the `ServiceBinding` resource for the `postgresql` database type.
-
-By using the `apiVersion` and `kind` properties of the Operator resource, which match the `postgresql` Operator in use, the generated `ServiceBinding` resource binds the service or resource to the application.
-
-When you do not specify a name for your database service, the value of the `db-kind` property is used as the default name. 
-
-[source,yaml]
-----
-  services:
-  - apiVersion: postgres-operator.crunchydata.com/v1beta1
-    kind: PostgresCluster
-    name: postgresql
-----
-
-If you specify the name of the datasource as follows:
-
-[source,properties]
-----
-quarkus.datasource.fruits-db.db-kind=postgresql
-----
-
-the `service` in the generated `ServiceBinding` then displays as follows:
-
-[source,yaml]
-----
-  services:
-  - apiVersion: postgres-operator.crunchydata.com/v1beta1
-    kind: PostgresCluster
-    name: fruits-db
-----
-
-Similarly, if you use `mysql`, the name of the datasource can be specified as follows:
-
-[source,properties]
-----
-quarkus.datasource.fruits-db.db-kind=mysql
-----
-
-The generated `service` contains the following:
-
-[source,yaml]
-----
-  services:
-  - apiVersion: pxc.percona.com/v1-9-0
-    kind: PerconaXtraDBCluster
-    name: fruits-db
-----
-
-
-==== Customizing Automatic Service Binding
-
-Even though automatic binding was developed to eliminate as much manual configuration as possible, there are cases where modifying the generated `ServiceBinding` resource might still be needed. The generation process exclusively relies on information extracted from the application and the knowledge of the supported Operators, which may not reflect what is deployed in the cluster. The generated resource is based purely on the knowledge of the supported bindable Operators for popular service kinds and a set of conventions that were developed to prevent possible mismatches, such as:
-
-* The target resource name does not match the datasource name
-* A specific Operator needs to be used rather than the default Operator for that service kind
-* Version conflicts that occur when a user needs to use any version other than the default or latest
-
-.Conventions
-
-* The target resource coordinates are determined based on the type of Operator and the kind of service.
-* The target resource name is set by default to match the service kind, such as `postgresql`, `mysql`, `mongo`. 
-* For named datasources, the name of the datasource is used. -* For named `mongo` clients, the name of the client is used. - -==== -.Example 1 - Name mismatch - -For cases in which you need to modify the generated `ServiceBinding` to fix a name mismatch, use the `quarkus.kubernetes-service-binding.services` properties and specify the service's name as the service key. - -The `service key` is usually the name of the service, for example the name of the datasource, or the name of the `mongo` client. When this value is not available, the datasource type, such as `postgresql`, `mysql`, `mongo`, is used instead. - -To avoid naming conflicts between different types of services, prefix the `service key` with a specific datasource type, such as `postgresql-____`. - -The following example shows how to customize the `apiVersion` property of the `PostgresCluster` resource: - -[source,properties] ----- -quarkus.datasource.db-kind=postgresql -quarkus.kubernetes-service-binding.services.postgresql.api-version=postgres-operator.crunchydata.com/v1beta2 ----- -==== - -==== -.Example 2: Application of a custom name for a datasource - -In Example 1, the `db-kind`(`postgresql`) was used as a service key. In this example, because the datasource is named, according to convention, the datasource name (`fruits-db`) is used instead. 
-
-The following example shows that for a named datasource, the datasource name is used as the name of the target resource:
-
-[source,properties]
-----
-quarkus.datasource.fruits-db.db-kind=postgresql
-----
-
-This has the same effect as the following configuration:
-
-[source,properties]
-----
-quarkus.kubernetes-service-binding.services.fruits-db.api-version=postgres-operator.crunchydata.com/v1beta1
-quarkus.kubernetes-service-binding.services.fruits-db.kind=PostgresCluster
-quarkus.kubernetes-service-binding.services.fruits-db.name=fruits-db
-----
-====
-
-.Additional resources
-* For more details about the available properties and how they work, see the link:https://github.com/k8s-service-bindings/spec#workload-projection[Workload Projection] part of the Service Binding specification.
diff --git a/_versions/2.7/guides/deploying-to-openshift.adoc b/_versions/2.7/guides/deploying-to-openshift.adoc
deleted file mode 100644
index dda09d202b3..00000000000
--- a/_versions/2.7/guides/deploying-to-openshift.adoc
+++ /dev/null
@@ -1,460 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Deploying on OpenShift
-
-include::./attributes.adoc[]
-
-This guide covers generating and deploying OpenShift resources based on sane defaults and user-supplied configuration.
-
-
-== Prerequisites
-
-:prerequisites-no-graalvm:
-include::includes/devtools/prerequisites.adoc[]
-* Access to an OpenShift cluster (Minishift is a viable option)
-* OpenShift CLI (Optional, only required for manual deployment)
-
-== Bootstrapping the project
-
-First, we need a new project that contains the OpenShift extension. 
This can be done using the following command: - -:create-app-artifact-id: openshift-quickstart -:create-app-extensions: resteasy,openshift -:create-app-code: -include::includes/devtools/create-app.adoc[] - -Quarkus offers the ability to automatically generate OpenShift resources based on sane defaults and user supplied configuration. -The OpenShift extension is actually a wrapper extension that brings together the xref:deploying-to-kubernetes.adoc[kubernetes] and xref:container-image.adoc#s2i[container-image-s2i] -extensions with sensible defaults so that it's easier for the user to get started with Quarkus on OpenShift. - -When we added the OpenShift extension to the command line invocation above, the following dependency was added to the `pom.xml` - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - io.quarkus - quarkus-openshift - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-openshift") ----- - -== Log Into the OpenShift Cluster - -Before we build and deploy our application we need to log into an OpenShift cluster. -You can log in via the https://docs.openshift.com/container-platform/4.9/cli_reference/openshift_cli/getting-started-cli.html[OpenShift CLI]: - -.Log In - OpenShift CLI Example -[source,bash] ----- -oc login -u myUsername <1> ----- -<1> You'll be prompted for the required information such as server URL, password, etc. - -Alternatively, you may log in using the API token: - -.Log In - OpenShift CLI With API Token Example -[source,bash] ----- -oc login --token=myToken --server=myServerUrl ----- - -TIP: You can request the token via the _Copy Login Command_ link in the OpenShift web console. - -Finally, you don't need to use the OpenShift CLI at all. 
-Instead, set the `quarkus.kubernetes-client.master-url` config property and authenticate with the `quarkus.kubernetes-client.token`, or `quarkus.kubernetes-client.username` and `quarkus.kubernetes-client.password` respectively: - -:build-additional-parameters: -Dquarkus.kubernetes-client.master-url=myServerUrl -Dquarkus.kubernetes-client.token=myToken -include::includes/devtools/build.adoc[] -:!build-additional-parameters: - -== Build and Deployment - -You can trigger a build and deployment in a single step or build the container image first and then configure the OpenShift application manually if you need <>. - -To trigger a build and deployment in a single step: - -:build-additional-parameters: -Dquarkus.kubernetes.deploy=true -include::includes/devtools/build.adoc[] -:!build-additional-parameters: - -TIP: If you want to test your application immediately then set the `quarkus.openshift.route.expose` config property to `true` to <>, e.g. add `-Dquarkus.openshift.route.expose=true` to the command above. - -This command will build your application locally, then trigger a container image build and finally apply the generated OpenShift resources automatically. -The generated resources use OpenShift's `DeploymentConfig` that is configured to automatically trigger a redeployment when a change in the `ImageStream` is noticed. -In other words, any container image build after the initial deployment will automatically trigger redeployment, without the need to delete, update or re-apply the generated resources. - -You can use the OpenShift web console to verify that the above command has created an image stream, a service resource and has deployed the application. -Alternatively, you can run the following OpenShift CLI commands: -[source,bash,subs=attributes+] ----- -oc get is <1> -oc get pods <2> -oc get svc <3> ----- -<1> Lists the image streams created. -<2> Get the list of pods. -<3> Get the list of Kubernetes services. 
- -Note that the service is not exposed to the outside world by default. -So unless you've used the `quarkus.openshift.route.expose` config property to expose the created service automatically you'll need to expose the service manually. - -.Expose The Service - OpenShift CLI Example -[source,bash,subs=attributes+] ----- -oc expose svc/greeting <1> -oc get routes <2> -curl http:///greeting <3> ----- -<1> Expose the service. -<2> Get the list of exposed routes. -<3> Access your application. - -[[control_application_config]] -=== Configure the OpenShift Application Manually - -If you need more control over the deployment configuration you can build the container image first and then configure the OpenShift application manually. - -To trigger a container image build: - -[source,bash,subs=attributes+] ----- -./mvnw clean package -Dquarkus.container-image.build=true ----- - -The build that will be performed is an _s2i binary_ build. -The input of the build is the jar that has been built locally and the output of the build is an `ImageStream` that is configured to automatically trigger a deployment. - -[NOTE] -==== -During the build you may find the `Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed` exception due to self-signed certificate. To solve this, just add the following line to your `application.properties`: -```properties -quarkus.kubernetes-client.trust-certs=true -``` -For more information, see link:https://quarkus.io/guides/deploying-to-kubernetes#client-connection-configuration[deploying to kubernetes]. -==== - -Once the build is done we can create a new application from the relevant `ImageStream`. - -[source,bash,subs=attributes+] ----- -oc get is <1> -oc new-app --name=greeting /openshift-quickstart:1.0.0-SNAPSHOT <2> -oc get svc -oc expose svc/greeting <3> -oc get routes <4> -curl http:///greeting <5> ----- -<1> Lists the image streams created. 
The image stream of our application should be tagged as /openshift-quickstart:1.0.0-SNAPSHOT. -<2> Create a new application from the image source. -<3> Expose the service to the outside world. -<4> Get the list of exposed routes. -<5> Access your application. - -After this setup the next time the container image is built a deployment to OpenShift is triggered automatically. -In other words, you don't need to repeat the above steps. - -=== Non-S2I Builds - -Out of the box the OpenShift extension is configured to use xref:container-image.adoc#s2i[container-image-s2i]. However, it's still possible to use other container image extensions like: - -- xref:container-image.adoc#docker[container-image-docker] -- xref:container-image.adoc#jib[container-image-jib] - -When a non-s2i container image extension is used, an `ImageStream` is created that is pointing to an external `dockerImageRepository`. The image is built and pushed to the registry and the `ImageStream` populates the tags that are available in the `dockerImageRepository`. - -To select which extension will be used for building the image: - -[source,properties] ----- -quarkus.container-image.builder=docker ----- - -or - -[source,properties] ----- -quarkus.container-image.builder=jib ----- - -== Customizing - -All available customization options are available in the xref:deploying-to-kubernetes.adoc#openshift[OpenShift configuration options]. - -Some examples are provided in the sections below: - -[[exposing_routes]] -=== Exposing Routes - -To expose a `Route` for the Quarkus application: - -[source,properties] ----- -quarkus.openshift.route.expose=true ----- - -[TIP] -==== -You don't necessarily need to add this property in the `application.properties`. You can pass it as a command line argument: - -[source,bash,subs=attributes+] ----- -./mvnw clean package -Dquarkus.openshift.route.expose=true ----- - -The same applies to all properties listed below. 
-==== - -=== Labels - -To add a label in the generated resources: - -[source,properties] ----- -quarkus.openshift.labels.foo=bar ----- - -=== Annotations - -To add an annotation in the generated resources: - -[source,properties] ----- -quarkus.openshift.annotations.foo=bar ----- - -[#env-vars] -=== Environment variables - -OpenShift provides multiple ways of defining environment variables: - -- key/value pairs -- import all values from a Secret or ConfigMap -- interpolate a single value identified by a given field in a Secret or ConfigMap -- interpolate a value from a field within the same resource - -==== Environment variables from key/value pairs - -To add a key/value pair as an environment variable in the generated resources: - -[source,properties] ----- -quarkus.openshift.env.vars.my-env-var=foobar ----- - -The command above will add `MY_ENV_VAR=foobar` as an environment variable. -Please note that the key `my-env-var` will be converted to uppercase and dashes will be replaced by underscores resulting in `MY_ENV_VAR`. 
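The key-name conversion described above can be approximated with a short shell sketch. This is an illustration of the described rule only (uppercase plus dash-to-underscore), not Quarkus's actual implementation:

```shell
# Approximation of the env var key conversion described above:
# lowercase letters become uppercase, dashes become underscores.
to_env_name() {
  printf '%s' "$1" | tr 'a-z-' 'A-Z_'
}

to_env_name "my-env-var"   # MY_ENV_VAR
```

So `quarkus.openshift.env.vars.my-env-var=foobar` ends up as the container environment variable `MY_ENV_VAR`.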
-
-==== Environment variables from Secret
-
-To add all key/value pairs of a `Secret` as environment variables, just apply the following configuration, separating each `Secret`
-to be used as a source by a comma (`,`):
-
-[source,properties]
-----
-quarkus.openshift.env.secrets=my-secret,my-other-secret
-----
-
-which would generate the following in the container definition:
-
-[source,yaml]
-----
-envFrom:
-  - secretRef:
-      name: my-secret
-      optional: false
-  - secretRef:
-      name: my-other-secret
-      optional: false
-----
-
-The following extracts a value identified by the `keyName` field from the `my-secret` Secret into a `foo` environment variable:
-
-[source,properties]
-----
-quarkus.openshift.env.mapping.foo.from-secret=my-secret
-quarkus.openshift.env.mapping.foo.with-key=keyName
-----
-
-This would generate the following in the `env` section of your container:
-
-[source,yaml]
-----
-- env:
-  - name: FOO
-    valueFrom:
-      secretKeyRef:
-        key: keyName
-        name: my-secret
-        optional: false
-----
-
-==== Environment variables from ConfigMap
-
-To add all key/value pairs from a `ConfigMap` as environment variables, just apply the following configuration, separating each
-`ConfigMap` to be used as a source by a comma (`,`):
-
-[source,properties]
-----
-quarkus.openshift.env.configmaps=my-config-map,another-config-map
-----
-
-which would generate the following in the container definition:
-
-[source,yaml]
-----
-envFrom:
-  - configMapRef:
-      name: my-config-map
-      optional: false
-  - configMapRef:
-      name: another-config-map
-      optional: false
-----
-
-The following extracts a value identified by the `keyName` field from the `my-config-map` ConfigMap into a `foo`
-environment variable:
-
-[source,properties]
-----
-quarkus.openshift.env.mapping.foo.from-configmap=my-configmap
-quarkus.openshift.env.mapping.foo.with-key=keyName
-----
-
-This would generate the following in the `env` section of your container:
-
-[source,yaml]
-----
-- env:
-  - name: FOO
-    valueFrom:
-      configMapKeyRef: 
-        key: keyName
-        name: my-configmap
-        optional: false
-----
-
-==== Environment variables from fields
-
-It's also possible to use the value from another field to add a new environment variable by specifying the path of the field to be used as a source, as follows:
-
-[source,properties]
-----
-quarkus.openshift.env.fields.foo=metadata.name
-----
-
-==== Using Deployment instead of DeploymentConfig
-Out of the box the extension will generate a `DeploymentConfig` resource. Often, users prefer to use `Deployment` as the main deployment resource, but still make use of OpenShift specific resources like `Route`, `BuildConfig`, etc.
-This feature is enabled by setting `quarkus.openshift.deployment-kind` to `Deployment`.
-
-[source,properties]
-----
-quarkus.openshift.deployment-kind=Deployment
-----
-
-Since `Deployment` is a Kubernetes resource and not OpenShift specific, it can't leverage `ImageStream` resources, as is the case with `DeploymentConfig`. This means that the image references need to include the container image registry that hosts the image.
-When the image is built using OpenShift builds (s2i binary and docker strategy), the OpenShift internal image registry `image-registry.openshift-image-registry.svc:5000` will be used, unless another registry has been explicitly specified by the user. Please note that in the internal registry the project/namespace name is added as part of the image repository: `image-registry.openshift-image-registry.svc:5000//:`, so users will need to make sure that the target project/namespace name is aligned with the `quarkus.container-image.group`.
-
-[source,properties]
-----
-quarkus.container-image.group=
-----
-
-==== Validation
-
-A conflict between two definitions, e.g. 
mistakenly assigning both a value and specifying that a variable is derived from a field, will result in an error being thrown at build time so that you get the opportunity to fix the issue before you deploy your application to your cluster where it might be more difficult to diagnose the source of the issue. - -Similarly, two redundant definitions, e.g. defining an injection from the same secret twice, will not cause an issue but will indeed report a warning to let you know that you might not have intended to duplicate that definition. - -[#env-vars-backwards] -===== Backwards compatibility - -Previous versions of the OpenShift extension supported a different syntax to add environment variables. The older syntax is still supported but is deprecated, and it's advised that you migrate to the new syntax. - -.Old vs. new syntax -|==== -| |Old | New | -| Plain variable |`quarkus.openshift.env-vars.my-env-var.value=foobar` | `quarkus.openshift.env.vars.my-env-var=foobar` | -| From field |`quarkus.openshift.env-vars.my-env-var.field=foobar` | `quarkus.openshift.env.fields.my-env-var=foobar` | -| All from `ConfigMap` |`quarkus.openshift.env-vars.xxx.configmap=foobar` | `quarkus.openshift.env.configmaps=foobar` | -| All from `Secret` |`quarkus.openshift.env-vars.xxx.secret=foobar` | `quarkus.openshift.env.secrets=foobar` | -| From one `Secret` field |`quarkus.openshift.env-vars.foo.secret=foobar` | `quarkus.openshift.env.mapping.foo.from-secret=foobar` | -| |`quarkus.openshift.env-vars.foo.value=field` | `quarkus.openshift.env.mapping.foo.with-key=field` | -| From one `ConfigMap` field |`quarkus.openshift.env-vars.foo.configmap=foobar` | `quarkus.openshift.env.mapping.foo.from-configmap=foobar` | -| |`quarkus.openshift.env-vars.foo.value=field` | `quarkus.openshift.env.mapping.foo.with-key=field` | -|==== - -NOTE: If you redefine the same variable using the new syntax while keeping the old syntax, **ONLY** the new version will be kept, and a warning will be issued to alert 
you of the problem. For example, if you define both
-`quarkus.openshift.env-vars.my-env-var.value=foobar` and `quarkus.openshift.env.vars.my-env-var=newValue`, the extension will only generate an environment variable `MY_ENV_VAR=newValue` and issue a warning.
-
-=== Mounting volumes
-
-The OpenShift extension allows the user to configure both volumes and mounts for the application.
-
-Any volume can be mounted with a simple configuration:
-
-[source,properties]
-----
-quarkus.openshift.mounts.my-volume.path=/where/to/mount
-----
-
-This will add a mount to your pod for volume `my-volume` to path `/where/to/mount`.
-
-The volumes themselves can be configured as shown in the sections below:
-
-==== Secret volumes
-
-[source,properties]
-----
-quarkus.openshift.secret-volumes.my-volume.secret-name=my-secret
-----
-
-==== ConfigMap volumes
-
-[source,properties]
-----
-quarkus.openshift.config-map-volumes.my-volume.config-map-name=my-config-map
-----
-
-==== Persistent Volume Claims
-
-[source,properties]
-----
-quarkus.openshift.pvc-volumes.my-pvc.claim-name=my-pvc
-----
-
-== Knative - OpenShift Serverless
-
-OpenShift also provides the ability to use Knative via the link:https://www.openshift.com/learn/topics/serverless[OpenShift Serverless] functionality. 
The first order of business is to instruct Quarkus to generate Knative resources by setting:

[source,properties]
----
quarkus.kubernetes.deployment-target=knative
----

In order to leverage OpenShift S2I to build the container image on the cluster and use the resulting container image for the Knative application,
we need to set a couple of configuration properties:

[source,properties]
----
# set the Kubernetes namespace which will be used to run the application
quarkus.container-image.group=geoand
# set the container image registry - this is the standard URL used to refer to the internal OpenShift registry
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
----

The application can then be deployed to OpenShift Serverless by enabling the standard `quarkus.kubernetes.deploy=true` property.

== Configuration Reference

include::{generated-dir}/config/quarkus-openshift-openshift-config.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/dev-mode-differences.adoc b/_versions/2.7/guides/dev-mode-differences.adoc
deleted file mode 100644
index 244edb3c453..00000000000
--- a/_versions/2.7/guides/dev-mode-differences.adoc
+++ /dev/null
@@ -1,106 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= How dev-mode differs from a production application

include::./attributes.adoc[]

This document explains how the dev-mode in Quarkus differs from a production application.

== Intro

Quarkus provides a dev-mode (explained in more detail xref:maven-tooling.adoc#dev-mode[here] and xref:gradle-tooling.adoc#dev-mode[here]) which greatly aids
during development but should *NEVER* be used in production.
[[architectural-differences]]
== Architectural differences

Feature sets aside, the Quarkus application that is run under dev-mode differs architecturally from the production application (i.e. the one that is run using `java -jar ...`).

In dev-mode, Quarkus uses a ClassLoader hierarchy (explained in detail xref:class-loading-reference.adoc[here]) that enables the live reload of user code
without requiring a rebuild and restart of the application.

In a production application, the aforementioned class loading infrastructure is entirely absent - there is a single, purpose-built ClassLoader that loads (almost) all classes and dependencies.

== Dev-mode features

In keeping with the mantra of providing developer joy, Quarkus provides a host of features when dev-mode is enabled. The most important features are:

=== Live reload

This vitally important feature needs no introduction and has already been mentioned in the <<architectural-differences,Architectural differences>> section.

=== Dev UI

Quarkus provides a very useful xref:dev-ui.adoc[UI] accessible from the browser at `/q/dev`. This UI allows a developer to see the state of the application, but
also provides access to various actions that can change that state (depending on the extensions that are present).
Examples of such operations are:

* Changing configuration values
* Running database migration scripts
* Clearing caches
* Running scheduled operations
* Building a container

=== Error pages

In an effort to make development errors very easy to diagnose, Quarkus provides various detailed error pages when running in dev-mode.

=== Database import scripts

The `quarkus-hibernate-orm` extension will run the `import.sql` script in `src/main/resources` when Quarkus is running in dev-mode. More details can be found xref:hibernate-orm.adoc#dev-mode[here].

=== Dev Services

When testing or running in dev-mode, Quarkus can even provide you with a zero-config database out of the box, a feature we refer to as Dev Services.
More information can be found xref:datasource.adoc#dev-services[here].

=== Swagger UI

The `quarkus-smallrye-openapi` extension will expose the Swagger UI when Quarkus is running in dev-mode. Additional information can be found xref:openapi-swaggerui.adoc#dev-mode[here].

=== GraphQL UI

The `quarkus-smallrye-graphql` extension will expose the GraphiQL UI when Quarkus is running in dev-mode. More details can be found xref:smallrye-graphql.adoc#ui[here].

=== Health UI

The `quarkus-smallrye-health` extension will expose the Health UI when Quarkus is running in dev-mode. xref:smallrye-health.adoc#ui[This] section provides additional information.

=== Mock mailer

The `quarkus-mailer` extension will enable an in-memory mock mail server when Quarkus is running in dev-mode. See xref:mailer-reference.adoc#testing[this] for more details.


=== gRPC

* The gRPC Reflection Service is enabled in dev mode by default. That lets you use tools such as `grpcurl`. In production mode, the reflection service is disabled. You can enable it explicitly using `quarkus.grpc.server.enable-reflection-service=true`.

* In dev-mode, `quarkus.grpc.server.instances` has no effect.

=== Others

There might be other configuration properties (depending on the extensions added to the application) that have no effect in dev-mode.


== Performance implications

In dev-mode, minimizing the runtime footprint of the application is not the primary objective (although Quarkus still starts plenty fast and consumes little memory even in dev-mode) - the primary objective
is enabling developer joy.
Therefore, many more classes are loaded, and build time operations also take place every time a live-reload is performed.

In contrast, in a production application the main objective for Quarkus is to consume the least amount of memory and start up in the shortest amount of time.
Thus, when running the production application, build time operations are not performed (by definition) and various infrastructure classes needed at build time are not present at all at runtime.
Furthermore, the purpose-built ClassLoader that comes with the xref:maven-tooling.adoc#fast-jar[fast-jar] package type ensures that class lookup is done as fast as possible while also keeping
the minimum number of jars in memory.

== Security implications

Perhaps the most important reason why dev-mode applications should not be run in production is that dev-mode allows reading information that could be confidential (via the Dev UI),
while also giving access to operations that could be destructive (either by exposing endpoints that should not be available in a production application, or via the Dev UI).

== Native executable

When a native executable is created (explained in detail xref:building-native-image.adoc[here]), it is *always* built from a production application.
diff --git a/_versions/2.7/guides/dev-services.adoc b/_versions/2.7/guides/dev-services.adoc
deleted file mode 100644
index 5c485a93631..00000000000
--- a/_versions/2.7/guides/dev-services.adoc
+++ /dev/null
@@ -1,119 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Dev Services Overview

include::./attributes.adoc[]

Quarkus supports the automatic provisioning of unconfigured services in development and test mode. We refer to this capability
as Dev Services. From a developer's perspective this means that if you include an extension and don't configure it, then
Quarkus will automatically start the relevant service (usually using https://www.testcontainers.org/[Testcontainers] behind the scenes) and wire up your
application to use this service.
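As a concrete sketch of what "unconfigured" means here, consider a datasource (the extension choice and URL below are illustrative, not taken from this guide):

[source,properties]
----
# With a database extension (e.g. quarkus-jdbc-postgresql) on the classpath
# and no URL configured, a Dev Service provisions the database automatically
# in dev and test mode:
quarkus.datasource.db-kind=postgresql

# Setting an explicit URL would disable the Dev Service for this datasource:
# quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/mydb
----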
All this functionality is part of the Quarkus `deployment` modules, so it does not affect the production application in any
way. If you want to disable all Dev Services, you can use the `quarkus.devservices.enabled=false` config property, although
in most cases this is not necessary, as simply configuring the service will result in the Dev Service being disabled automatically.

Note that the default startup timeout is 60s; if this is not enough, you can increase it with the `quarkus.devservices.timeout` property.


This page lists all the Dev Services that Quarkus supports.

NOTE: In order to use Dev Services you will generally need a working Docker environment (remote environments are supported).
If you don't have Docker installed, you will need to configure your services normally.

== AMQP

The AMQP Dev Service will be enabled when the `quarkus-smallrye-reactive-messaging-amqp` extension is present in your application, and
the broker address has not been explicitly configured. More information can be found at the
xref:amqp-dev-services.adoc[AMQP Dev Services Guide].

include::{generated-dir}/config/quarkus-smallrye-reactivemessaging-amqp-config-group-amqp-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1]

== Apicurio Registry

The Apicurio Dev Service will be enabled when the `quarkus-apicurio-registry-avro` extension is present in your application, and its
address has not been explicitly configured. More information can be found at the
xref:apicurio-registry-dev-services.adoc[Apicurio Registry Dev Services Guide].

include::{generated-dir}/config/quarkus-apicurio-registry-devservices-apicurio-registry-avro-apicurio-registry-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1]

== Databases

The database Dev Services will be enabled when a reactive or JDBC datasource extension is present in the application,
and the database URL has not been configured.
More information can be found at the
xref:datasource.adoc#dev-services[Datasource Guide].

Quarkus provides Dev Services for all databases it supports. Most of these are run in a container, with the
exception of H2 and Derby, which are run in-process. Dev Services are supported for both JDBC and reactive drivers.

include::{generated-dir}/config/quarkus-datasource-config-group-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1]

== Kafka

The Kafka Dev Service will be enabled when the `quarkus-kafka-client` extension is present in your application, and
the broker address has not been explicitly configured. More information can be found at the
xref:kafka-dev-services.adoc[Kafka Dev Services Guide].

include::{generated-dir}/config/quarkus-kafka-client-config-group-kafka-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1]

== Keycloak

The Keycloak Dev Service will be enabled when the `quarkus-oidc` extension is present in your application, and
the server address has not been explicitly configured. More information can be found at the
xref:security-openid-connect-dev-services.adoc[OIDC Dev Services Guide].

:no-duration-note: true
include::{generated-dir}/config/quarkus-keycloak-devservices-keycloak-keycloak-build-time-config.adoc[opts=optional, leveloffset=+1]

== Kogito

The Kogito Dev Service will be enabled when either the `kogito-quarkus` or the `kogito-quarkus-processes` extension is present in your application. More information can be found at the xref:kogito-dev-services.adoc[Kogito Dev Services Guide].

include::kogito-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1]

== MongoDB

The MongoDB Dev Service will be enabled when the `quarkus-mongodb-client` extension is present in your application, and
the server address has not been explicitly configured. More information can be found at the
xref:mongodb.adoc#dev-services[MongoDB Guide].
- -include::{generated-dir}/config/quarkus-mongodb-config-group-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1] - -== RabbitMQ - -The RabbitMQ Dev Service will be enabled when the `quarkus-smallrye-reactive-messaging-rabbitmq` extension is present in your application, and -the broker address has not been explicitly configured. More information can be found at the -xref:rabbitmq-dev-services.adoc[RabbitMQ Dev Services Guide]. - -include::{generated-dir}/config/quarkus-smallrye-reactivemessaging-rabbitmq-config-group-rabbit-mq-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1] - -== Redis - -The Redis Dev Service will be enabled when the `quarkus-redis-client` extension is present in your application, and -the server address has not been explicitly configured. More information can be found at the -xref:redis-dev-services.adoc[Redis Dev Services Guide]. - -include::{generated-dir}/config/quarkus-redis-client-config-group-dev-services-config.adoc[opts=optional, leveloffset=+1] - -== Vault - -The Vault Dev Service will be enabled when the `quarkus-vault` extension is present in your application, and -the server address has not been explicitly configured. More information can be found at the -link:{vault-guide}#dev-services[Vault Guide]. - -== Neo4j - -The Neo4j Dev Service will be enabled when the `quarkus-neo4j` extension is present in your application, and -the server address has not been explicitly configured. More information can be found at the -link:{neo4j-guide}#dev-services[Neo4j Guide]. - -== Infinispan - -The Infinispan Dev Service will be enabled when the `quarkus-infinispan-client` extension is present in your application, and -the server address has not been explicitly configured. More information can be found at the -xref:infinispan-client.adoc#dev-services[Infinispan Guide]. 
include::{generated-dir}/config/quarkus-infinispan-client-infinispan-client-dev-service-build-time-config.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/dev-ui.adoc b/_versions/2.7/guides/dev-ui.adoc
deleted file mode 100644
index 9751ef4ffec..00000000000
--- a/_versions/2.7/guides/dev-ui.adoc
+++ /dev/null
@@ -1,402 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Dev UI

include::./attributes.adoc[]

This guide covers the Quarkus Dev UI for xref:building-my-first-extension.adoc[extension authors].

Quarkus now ships with a new experimental Dev UI, which is available in dev mode (when you start
Quarkus with `mvn quarkus:dev`) at http://localhost:8080/q/dev[/q/dev] by default. It will show you something like
this:

image::dev-ui-overview.png[alt=Dev UI overview,role="center",width=90%]

It allows you to quickly visualize all the extensions currently loaded, see their status and go directly
to their documentation.

On top of that, each extension can add:

- links and badges on its card
- custom full pages
- interactive actions

== How can I make my extension support the Dev UI?

To have your extension listed in the Dev UI, you don't need to do anything!

So you can always start with that :)

If you want to contribute badges or links in your extension card on the Dev UI overview
page, like this:

image::dev-ui-embedded.png[alt=Dev UI embedded,role="center"]

You have to add a file named `dev-templates/embedded.html` in your
xref:building-my-first-extension.adoc#description-of-a-quarkus-extension[`deployment`]
extension module's resources:

image::dev-ui-embedded-file.png[alt=Dev UI embedded.html,align=center]

The contents of this file will be included in your extension card, so for example we can place
two links with some styling and icons:

[source,html]
----
<a href="{config:http-path('quarkus.smallrye-openapi.path')}"
   class="badge badge-light">
  <i class="fa fa-map-signs fa-fw"></i>
  OpenAPI</a>
<a href="{config:http-path('quarkus.swagger-ui.path')}"
   class="badge badge-light">
  <i class="fa fa-map-signs fa-fw"></i>
  Swagger UI</a>
----

TIP: We use the Font Awesome Free icon set.

Note how the paths are specified: `{config:http-path('quarkus.smallrye-openapi.path')}`. This is a special
directive that the Quarkus dev console understands: it will replace that value with the resolved route
named 'quarkus.smallrye-openapi.path'.

The corresponding non-application endpoint is declared using `.routeConfigKey` to associate the route with a name:

[source,java]
----
    nonApplicationRootPathBuildItem.routeBuilder()
            .route(openApiConfig.path) // <1>
            .routeConfigKey("quarkus.smallrye-openapi.path") // <2>
            ...
            .build();
----
<1> The configured path is resolved into a valid route.
<2> The resolved route path is then associated with the key `quarkus.smallrye-openapi.path`.

== Path considerations

Paths are tricky business. Keep the following in mind:

* Assume your UI will be nested under the dev endpoint. Do not provide a way to customize this without a strong reason.
* Never construct your own absolute paths. Adding a suffix to a known, normalized and resolved path is fine.

Configured paths, like the `dev` endpoint used by the console or the SmallRye OpenAPI path shown in the example above,
need to be properly resolved against both `quarkus.http.root-path` and `quarkus.http.non-application-root-path`.
Use `NonApplicationRootPathBuildItem` or `HttpRootPathBuildItem` to construct endpoint routes and identify resolved
path values that can then be used in templates.

The `{devRootAppend}` variable can also be used in templates to construct URLs for static dev console resources, for example:

[source,html]
----
<img src="{devRootAppend}/resources/images/quarkus_icon_rgb_reverse.svg" alt="Quarkus">
----

Refer to the xref:all-config.adoc#quarkus-vertx-http_quarkus.http.non-application-root-path[Quarkus Vertx HTTP configuration reference]
for details on how the non-application root path is configured.
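To make the path resolution concrete, here is a sketch with illustrative (non-default) values; by default the root path is `/` and the non-application root path is `q`, which is why the Dev UI is at `/q/dev`:

[source,properties]
----
# illustrative values - not the defaults
quarkus.http.root-path=/app
quarkus.http.non-application-root-path=q
----

Because the non-application root path is relative here, it resolves against the HTTP root path, so with these values the dev console would be served under `/app/q/dev`.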
== Template and styling support

Both the `embedded.html` files and any full page you add in `/dev-templates` will be interpreted by
xref:qute.adoc[the Qute template engine].

This also means that you can xref:qute-reference.adoc#user_tags[add custom Qute tags] in
`/dev-templates/tags` for your templates to use.

The style system currently in use is https://getbootstrap.com/docs/4.6/getting-started/introduction/[Bootstrap V4 (4.6.0)],
but note that this might change in the future.

The main template also includes https://jquery.com/[jQuery 3.5.1], but here again this might change.

=== Accessing Config Properties

A `config:property(name)` expression can be used to output the config value for the given property name.
The property name can be either a string literal or obtained dynamically by another expression.
For example `{config:property('quarkus.lambda.handler')}` and `{config:property(foo.propertyName)}`.

Reminder: do not use this to retrieve raw configured path values. As shown above, use `{config:http-path(...)}` with
a known route configuration key when working with resource paths.

== Adding full pages

To add full pages for your Dev UI extension such as this one:

image::dev-ui-page.png[alt=Dev UI custom page,align=center,width=90%]

You need to place them in your extension's
xref:building-my-first-extension.adoc#description-of-a-quarkus-extension[`deployment`] module's
`/dev-templates` resource folder, like this page for the xref:cache.adoc[`quarkus-cache` extension]:

[[action-example]]
[source,html]
----
{#include main}// <1>
    {#style}// <2>
        .custom {
            color: gray;
        }
    {/style}
    {#script} // <3>
        $(document).ready(function(){
            $(function () {
                $('[data-toggle="tooltip"]').tooltip()
            });
        });
    {/script}
    {#title}Cache{/title}// <4>
    {#body}// <5>
        <table class="table table-striped">
            <thead>
            <tr>
                <th scope="col">Name</th>
                <th scope="col">Size</th>
            </tr>
            </thead>
            <tbody>
                {#for cacheInfo in info:cacheInfos}// <6>
                <tr>
                    <td>
                        {cacheInfo.name}
                    </td>
                    <td>
                        <form method="post" action="{urlbase}/caches"// <7>
                              enctype="application/x-www-form-urlencoded">
                            <input type="hidden" name="name" value="{cacheInfo.name}">
                            <input type="submit" class="btn btn-primary btn-sm" value="Clear">
                        </form>
                    </td>
                </tr>
                {/for}
            </tbody>
        </table>
    {/body}
{/include}
----
<1> In order to benefit from the same style as other Dev UI pages, extend the `main` template
<2> You can pass extra CSS for your page in the `style` template parameter
<3> You can pass extra JavaScript for your page in the `script` template parameter. This will be added inline after the jQuery script, so you can safely use jQuery in your script.
<4> Don't forget to set your page title in the `title` template parameter
<5> The `body` template parameter will contain your content
<6> In order for your template to read custom information from your Quarkus extension, you can use
    the `info` xref:qute-reference.adoc#namespace_extension_methods[namespace].
<7> This shows an action, which is explained in the advanced usage section below

== Linking to your full-page templates

Full-page templates for extensions live under a pre-defined `{devRootAppend}/{groupId}.{artifactId}/` directory
that is referenced using the `urlbase` template parameter. Using configuration defaults, that would resolve to
`/q/dev/io.quarkus.quarkus-cache/`, as an example.

Use the `{urlbase}` template parameter to reference this folder in `embedded.html`:

[source,html]
----
<a href="{urlbase}/caches" class="badge badge-light">// <1>
  Caches <span class="badge badge-light">{info:cacheInfos.size()}</span></a>
----
<1> Use the `urlbase` template parameter to reference full-page templates for your extension

== Passing information to your templates

In `embedded.html` or in full-page templates, you will likely want to display information that is
available from your extension.

There are two ways to make that information available, depending on whether it is available at
build time or at run time.

In both cases we advise that you add support for the Dev UI in your `{pkg}.deployment.devconsole`
package in a `DevConsoleProcessor` class (in your extension's
xref:building-my-first-extension.adoc#description-of-a-quarkus-extension[`deployment`] module).
=== Passing run-time information

[source,java]
----
package io.quarkus.cache.deployment.devconsole;

import io.quarkus.cache.runtime.CaffeineCacheSupplier;
import io.quarkus.deployment.IsDevelopment;
import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.devconsole.spi.DevConsoleRuntimeTemplateInfoBuildItem;

public class DevConsoleProcessor {

    @BuildStep(onlyIf = IsDevelopment.class)// <1>
    public DevConsoleRuntimeTemplateInfoBuildItem collectBeanInfo() {
        return new DevConsoleRuntimeTemplateInfoBuildItem("cacheInfos",
                new CaffeineCacheSupplier());// <2>
    }
}
----
<1> Don't forget to make this xref:building-my-first-extension.adoc#deploying-the-greeting-feature[build step]
    conditional on being in dev mode
<2> Declare a run-time dev `info:cacheInfos` template value

This will map the `info:cacheInfos` value to this supplier in your extension's
xref:building-my-first-extension.adoc#description-of-a-quarkus-extension[`runtime module`]:

[source,java]
----
package io.quarkus.cache.runtime;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.function.Supplier;

import io.quarkus.arc.Arc;
import io.quarkus.cache.CaffeineCache;

public class CaffeineCacheSupplier implements Supplier<List<CaffeineCache>> {

    @Override
    public List<CaffeineCache> get() {
        List<CaffeineCache> allCaches = new ArrayList<>(allCaches());
        allCaches.sort(Comparator.comparing(CaffeineCache::getName));
        return allCaches;
    }

    public static Collection<CaffeineCache> allCaches() {
        // Get it from ArC at run-time
        return (Collection<CaffeineCache>) (Collection<?>)
                Arc.container().instance(CacheManagerImpl.class).get().getAllCaches();
    }
}
----

=== Passing build-time information

Sometimes you only need build-time information to be passed to your template, so you can do it like this:

[source,java]
----
package io.quarkus.qute.deployment.devconsole;

import java.util.List;

import io.quarkus.deployment.IsDevelopment;
import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.devconsole.spi.DevConsoleTemplateInfoBuildItem;
import io.quarkus.qute.deployment.CheckedTemplateBuildItem;
import io.quarkus.qute.deployment.TemplateVariantsBuildItem;

public class DevConsoleProcessor {

    @BuildStep(onlyIf = IsDevelopment.class)
    public DevConsoleTemplateInfoBuildItem collectBeanInfo(
            List<CheckedTemplateBuildItem> checkedTemplates,// <1>
            TemplateVariantsBuildItem variants) {
        DevQuteInfos quteInfos = new DevQuteInfos();
        for (CheckedTemplateBuildItem checkedTemplate : checkedTemplates) {
            DevQuteTemplateInfo templateInfo =
                    new DevQuteTemplateInfo(checkedTemplate.templateId,
                            variants.getVariants().get(checkedTemplate.templateId),
                            checkedTemplate.bindings);
            quteInfos.addQuteTemplateInfo(templateInfo);
        }
        return new DevConsoleTemplateInfoBuildItem("devQuteInfos", quteInfos);// <2>
    }

}
----
<1> Use whatever dependencies you need as input
<2> Declare a build-time `info:devQuteInfos` DEV template value

== Advanced usage: adding actions

You can also add actions to your Dev UI templates:

image::dev-ui-interactive.png[alt=Dev UI interactive page,align=center,width=90%]

This can be done by adding another xref:building-my-first-extension.adoc#deploying-the-greeting-feature[build step] to
declare the action in your extension's
xref:building-my-first-extension.adoc#description-of-a-quarkus-extension[`deployment`] module:


[source,java]
----
package io.quarkus.cache.deployment.devconsole;

import static io.quarkus.deployment.annotations.ExecutionTime.STATIC_INIT;

import io.quarkus.cache.runtime.devconsole.CacheDevConsoleRecorder;
import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.deployment.annotations.Record;
import io.quarkus.devconsole.spi.DevConsoleRouteBuildItem;

public class DevConsoleProcessor {

    @BuildStep
    @Record(value = STATIC_INIT, optional = true)// <1>
    DevConsoleRouteBuildItem
invokeEndpoint(CacheDevConsoleRecorder recorder) {
        return new DevConsoleRouteBuildItem("caches", "POST",
                recorder.clearCacheHandler());// <2>
    }
}
----
<1> Mark the recorder as optional, so it will only be invoked when in dev mode
<2> Declare a `POST {urlbase}/caches` route handled by the given handler


Note: you can see how this route is invoked by the form in the <<action-example,action example>> above.

Now all you have to do is implement the recorder in your extension's
xref:building-my-first-extension.adoc#description-of-a-quarkus-extension[`runtime module`]:


[source,java]
----
package io.quarkus.cache.runtime.devconsole;

import io.quarkus.cache.CaffeineCache;
import io.quarkus.cache.runtime.CaffeineCacheSupplier;
import io.quarkus.runtime.annotations.Recorder;
import io.quarkus.devconsole.runtime.spi.DevConsolePostHandler;
import io.quarkus.vertx.http.runtime.devmode.devconsole.FlashScopeUtil.FlashMessageStatus;
import io.vertx.core.Handler;
import io.vertx.core.MultiMap;
import io.vertx.ext.web.RoutingContext;

@Recorder
public class CacheDevConsoleRecorder {

    public Handler<RoutingContext> clearCacheHandler() {
        return new DevConsolePostHandler() {// <1>
            @Override
            protected void handlePost(RoutingContext event, MultiMap form) // <2>
                    throws Exception {
                String cacheName = form.get("name");
                for (CaffeineCache cache : CaffeineCacheSupplier.allCaches()) {
                    if (cache.getName().equals(cacheName)) {
                        cache.invalidateAll();
                        flashMessage(event, "Cache for " + cacheName + " cleared");// <3>
                        return;
                    }
                }
                flashMessage(event, "Cache for " + cacheName + " not found",
                        FlashMessageStatus.ERROR);// <4>
            }
        };
    }
}
----
<1> While you can use https://vertx.io/docs/vertx-web/java/#_routing_by_http_method[any Vert.x handler],
    the `DevConsolePostHandler` superclass will handle your POST actions
    nicely, and auto-redirect to the `GET` URI right after your `POST` for optimal behavior.
<2> You can get the Vert.x `RoutingContext` as well as the `form` contents
<3> Don't forget to add a message for the user to let them know everything went fine
<4> You can also add error messages


NOTE: Flash messages are handled by the `main` DEV template and will result in nice notifications for your
users:

image::dev-ui-message.png[alt=Dev UI message,align=center,width=90%]

diff --git a/_versions/2.7/guides/docinfo.html b/_versions/2.7/guides/docinfo.html
deleted file mode 100644
index cae91e61d33..00000000000
--- a/_versions/2.7/guides/docinfo.html
+++ /dev/null
@@ -1,45 +0,0 @@
diff --git a/_versions/2.7/guides/duration-format-note.adoc b/_versions/2.7/guides/duration-format-note.adoc
deleted file mode 100644
index 823c283a5e3..00000000000
--- a/_versions/2.7/guides/duration-format-note.adoc
+++ /dev/null
@@ -1,9 +0,0 @@
[NOTE]
====
The format for durations uses the standard `java.time.Duration` format.
You can learn more about it in the link:https://docs.oracle.com/javase/8/docs/api/java/time/Duration.html#parse-java.lang.CharSequence-[Duration#parse() javadoc].

You can also provide duration values starting with a number.
In this case, if the value consists only of a number, the converter treats the value as seconds.
Otherwise, `PT` is implicitly prepended to the value to obtain a standard `java.time.Duration` format.
====
diff --git a/_versions/2.7/guides/elasticsearch.adoc b/_versions/2.7/guides/elasticsearch.adoc
deleted file mode 100644
index 17a8b02b73c..00000000000
--- a/_versions/2.7/guides/elasticsearch.adoc
+++ /dev/null
@@ -1,498 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Connecting to an Elasticsearch cluster
include::./attributes.adoc[]

Elasticsearch is a well-known full-text search engine and NoSQL datastore.
In this guide, we will see how you can get your REST services to use an Elasticsearch cluster.

Quarkus provides two ways of accessing Elasticsearch: via the lower level `RestClient`, or via the `RestHighLevelClient`; we will call them
the low level and the high level clients.

== Prerequisites

include::includes/devtools/prerequisites.adoc[]
* Elasticsearch installed or Docker installed

== Architecture

The application built in this guide is quite simple: the user can add elements in a list using a form, and the list is updated.

All the information between the browser and the server is formatted as JSON.

The elements are stored in Elasticsearch.

== Creating the Maven project

First, we need a new project. Create a new project with the following command:

:create-app-artifact-id: elasticsearch-quickstart
:create-app-extensions: resteasy,resteasy-jackson,elasticsearch-rest-client
include::includes/devtools/create-app.adoc[]

This command generates a Maven structure importing the RESTEasy/JAX-RS, Jackson, and Elasticsearch low level client extensions.
After this, the `quarkus-elasticsearch-rest-client` extension has been added to your build file.

If you want to use the high level client instead, replace the `elasticsearch-rest-client` extension with the `elasticsearch-rest-high-level-client` extension.

[NOTE]
====
We use the `resteasy-jackson` extension here and not the JSON-B variant because we will use the Vert.x `JsonObject` helper
to serialize/deserialize our objects to/from Elasticsearch, and it uses Jackson under the hood.
====

If you don't want to generate a new project, add the following dependencies to your build file.
For the Elasticsearch low level client, add:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-elasticsearch-rest-client</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-elasticsearch-rest-client")
----

For the Elasticsearch high level client, add:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-elasticsearch-rest-high-level-client</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-elasticsearch-rest-high-level-client")
----

== Creating your first JSON REST service

In this example, we will create an application to manage a list of fruits.

First, let's create the `Fruit` bean as follows:

[source,java]
----
package org.acme.elasticsearch;

public class Fruit {
    public String id;
    public String name;
    public String color;
}
----

Nothing fancy. One important thing to note is that having a default constructor is required by the JSON serialization layer.

Now create a `org.acme.elasticsearch.FruitService` that will be the business layer of our application and store/load the fruits from the Elasticsearch instance.
Here we use the low level client; if you want to use the high level client, follow the instructions in the high level client section below instead.
[source,java]
----
package org.acme.elasticsearch;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;

@ApplicationScoped
public class FruitService {
    @Inject
    RestClient restClient; //<1>

    public void index(Fruit fruit) throws IOException {
        Request request = new Request(
                "PUT",
                "/fruits/_doc/" + fruit.id); //<2>
        request.setJsonEntity(JsonObject.mapFrom(fruit).toString()); //<3>
        restClient.performRequest(request); //<4>
    }

    public Fruit get(String id) throws IOException {
        Request request = new Request(
                "GET",
                "/fruits/_doc/" + id);
        Response response = restClient.performRequest(request);
        String responseBody = EntityUtils.toString(response.getEntity());
        JsonObject json = new JsonObject(responseBody); //<5>
        return json.getJsonObject("_source").mapTo(Fruit.class);
    }

    public List<Fruit> searchByColor(String color) throws IOException {
        return search("color", color);
    }

    public List<Fruit> searchByName(String name) throws IOException {
        return search("name", name);
    }

    private List<Fruit> search(String term, String match) throws IOException {
        Request request = new Request(
                "GET",
                "/fruits/_search");
        //construct a JSON query like {"query": {"match": {"<term>": "<match>"}}}
        JsonObject termJson = new JsonObject().put(term, match);
        JsonObject matchJson = new JsonObject().put("match", termJson);
        JsonObject queryJson = new JsonObject().put("query", matchJson);
        request.setJsonEntity(queryJson.encode());
        Response response = restClient.performRequest(request);
        String responseBody = EntityUtils.toString(response.getEntity());
        JsonObject json = new JsonObject(responseBody);
        JsonArray hits = json.getJsonObject("hits").getJsonArray("hits");
        List<Fruit> results = new ArrayList<>(hits.size());
        for (int i = 0; i < hits.size(); i++) {
            JsonObject hit = hits.getJsonObject(i);
            Fruit fruit = hit.getJsonObject("_source").mapTo(Fruit.class);
            results.add(fruit);
        }
        return results;
    }
}
----

In this example you can note the following:

1. We inject an Elasticsearch low level `RestClient` into our service.
2. We create an Elasticsearch request.
3.
We use Vert.x `JsonObject` to serialize the object before sending it to Elasticsearch; you can use any JSON library you prefer.
4. We send the request (an indexing request here) to Elasticsearch.
5. In order to deserialize the object from Elasticsearch, we again use Vert.x `JsonObject`.

Now, create the `org.acme.elasticsearch.FruitResource` class as follows:

[source,java]
----
package org.acme.elasticsearch;

import javax.inject.Inject;
import javax.ws.rs.BadRequestException;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;
import java.io.IOException;
import java.net.URI;
import java.util.List;
import java.util.UUID;

@Path("/fruits")
public class FruitResource {

    @Inject
    FruitService fruitService;

    @POST
    public Response index(Fruit fruit) throws IOException {
        if (fruit.id == null) {
            fruit.id = UUID.randomUUID().toString();
        }
        fruitService.index(fruit);
        return Response.created(URI.create("/fruits/" + fruit.id)).build();
    }

    @GET
    @Path("/{id}")
    public Fruit get(@PathParam("id") String id) throws IOException {
        return fruitService.get(id);
    }

    @GET
    @Path("/search")
    public List<Fruit> search(@QueryParam("name") String name, @QueryParam("color") String color) throws IOException {
        if (name != null) {
            return fruitService.searchByName(name);
        } else if (color != null) {
            return fruitService.searchByColor(color);
        } else {
            throw new BadRequestException("Should provide name or color query parameter");
        }
    }
}
----

The implementation is straightforward: define your endpoints using the JAX-RS annotations and use the `FruitService` to list/add new fruits.

== Configuring Elasticsearch
The main property to configure is the URL to connect to the Elasticsearch cluster.
A sample configuration for a cluster of two nodes looks like this:

[source,properties]
----
# configure the Elasticsearch client for a cluster of two nodes
quarkus.elasticsearch.hosts = elasticsearch1:9200,elasticsearch2:9200
----

In this guide, we use a single instance running on localhost:

[source,properties]
----
# configure the Elasticsearch client for a single instance on localhost
quarkus.elasticsearch.hosts = localhost:9200
----

If you need a more advanced configuration, you can find the comprehensive list of supported configuration properties at the end of this guide.

== Programmatically Configuring Elasticsearch
On top of the declarative configuration above, you can also programmatically apply additional configuration to the client by implementing a `RestClientBuilder.HttpClientConfigCallback` and annotating it with `@ElasticsearchClientConfig`. You may provide multiple implementations; the configuration provided by each implementation will be applied in a randomly ordered cascading manner.

For example, when accessing an Elasticsearch cluster that is set up for TLS on the HTTP layer, the client needs to trust the certificate that Elasticsearch is using. The following is an example of setting up the client to trust the CA that has signed the certificate that Elasticsearch is using, when that CA certificate is available in a PKCS#12 keystore.
- -[source,java] ----- -import io.quarkus.elasticsearch.restclient.lowlevel.ElasticsearchClientConfig; -import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; -import org.apache.http.ssl.SSLContextBuilder; -import org.apache.http.ssl.SSLContexts; -import org.elasticsearch.client.RestClientBuilder; - -import javax.enterprise.context.Dependent; -import javax.net.ssl.SSLContext; -import java.io.InputStream; -import java.nio.file.Files; -import java.nio.file.Path; -import java.nio.file.Paths; -import java.security.KeyStore; - -@ElasticsearchClientConfig -public class SSLContextConfigurator implements RestClientBuilder.HttpClientConfigCallback { - @Override - public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { - try { - String keyStorePass = "password-for-keystore"; - Path trustStorePath = Paths.get("/path/to/truststore.p12"); - KeyStore truststore = KeyStore.getInstance("pkcs12"); - try (InputStream is = Files.newInputStream(trustStorePath)) { - truststore.load(is, keyStorePass.toCharArray()); - } - SSLContextBuilder sslBuilder = SSLContexts.custom() - .loadTrustMaterial(truststore, null); - SSLContext sslContext = sslBuilder.build(); - httpClientBuilder.setSSLContext(sslContext); - } catch (Exception e) { - throw new RuntimeException(e); - } - - return httpClientBuilder; - } -} ----- -See https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/_encrypted_communication.html[Elasticsearch documentation] for more details on this particular example. - -[NOTE] -==== -Classes marked with `@ElasticsearchClientConfig` are made application scoped CDI beans by default. -You can override the scope at the class level if you prefer a different scope. 
====

== Running an Elasticsearch cluster

By default, the Elasticsearch client is configured to access a local Elasticsearch cluster on port 9200 (the default Elasticsearch port),
so if you have an instance running locally on this port, there is nothing more to do before testing it!

If you want to use Docker to run an Elasticsearch instance, you can use the following command to launch one:

[source,bash,subs=attributes+]
----
docker run --name elasticsearch -e "discovery.type=single-node" -e "ES_JAVA_OPTS=-Xms512m -Xmx512m"\
       --rm -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch-oss:{elasticsearch-version}
----

== Running the application

Now let's run our application via Quarkus dev mode:

:devtools-wrapped:
+
include::includes/devtools/dev.adoc[]
:!devtools-wrapped:

You can add new fruits to the list via the following curl command:

[source,bash,subs=attributes+]
----
curl localhost:8080/fruits -d '{"name": "bananas", "color": "yellow"}' -H "Content-Type: application/json"
----

And search for fruits by name or color via the following curl command:

[source,bash,subs=attributes+]
----
curl localhost:8080/fruits/search?color=yellow
----

== Using the High Level REST Client

Quarkus provides support for the Elasticsearch High Level REST Client, but keep in mind that it comes with some caveats:

- It pulls in a lot of dependencies (especially Lucene), which doesn't fit well with the Quarkus philosophy. The Elasticsearch team is aware of this issue and it might improve sometime in the future.
- It is tied to a certain version of the Elasticsearch server: you cannot use a High Level REST Client version 7 to access a server version 6.

[WARNING]
====
Due to the license change made by Elastic for the Elasticsearch High Level REST Client,
we are keeping in Quarkus the last Open Source version of this particular client, namely 7.10,
and it won't be upgraded to newer versions.
Given that this client was deprecated by Elastic and replaced by a new Open Source Java client,
the Elasticsearch High Level REST Client extension is considered deprecated and will be removed from the Quarkus codebase at some point in the future.

Note that, contrary to the High Level REST Client, we are using the latest version of the Low Level REST Client (which is still Open Source),
and, while we believe it should work, the situation is less than ideal and might cause some issues.
Feel free to override the versions of the clients in your applications depending on your requirements,
but be aware of https://www.elastic.co/blog/elastic-license-v2[the new license of the High Level REST Client] for versions 7.11+:
it is not Open Source and has several usage restrictions.

We will eventually provide an extension for the new Open Source Java client, but it will require changes in your applications
as it is an entirely new client.
====

Here is a version of the `FruitService` using the high level client instead of the low level one:

[source,java]
----
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.builder.SearchSourceBuilder;

import io.vertx.core.json.JsonObject;

@ApplicationScoped
public class FruitService {
    @Inject
    RestHighLevelClient
restHighLevelClient; // <1>

    public void index(Fruit fruit) throws IOException {
        IndexRequest request = new IndexRequest("fruits"); // <2>
        request.id(fruit.id);
        request.source(JsonObject.mapFrom(fruit).toString(), XContentType.JSON); // <3>
        restHighLevelClient.index(request, RequestOptions.DEFAULT); // <4>
    }

    public Fruit get(String id) throws IOException {
        GetRequest getRequest = new GetRequest("fruits", id);
        GetResponse getResponse = restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);
        if (getResponse.isExists()) {
            String sourceAsString = getResponse.getSourceAsString();
            JsonObject json = new JsonObject(sourceAsString); // <5>
            return json.mapTo(Fruit.class);
        }
        return null;
    }

    public List<Fruit> searchByColor(String color) throws IOException {
        return search("color", color);
    }

    public List<Fruit> searchByName(String name) throws IOException {
        return search("name", name);
    }

    private List<Fruit> search(String term, String match) throws IOException {
        SearchRequest searchRequest = new SearchRequest("fruits");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder.query(QueryBuilders.matchQuery(term, match));
        searchRequest.source(searchSourceBuilder);

        SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        SearchHits hits = searchResponse.getHits();
        List<Fruit> results = new ArrayList<>(hits.getHits().length);
        for (SearchHit hit : hits.getHits()) {
            String sourceAsString = hit.getSourceAsString();
            JsonObject json = new JsonObject(sourceAsString);
            results.add(json.mapTo(Fruit.class));
        }
        return results;
    }
}
----

In this example you can note the following:

1. We inject an Elasticsearch `RestHighLevelClient` inside the service.
2. We create an Elasticsearch index request.
3. We use Vert.x `JsonObject` to serialize the object before sending it to Elasticsearch; you can use any JSON library you prefer.
4.
We send the request to Elasticsearch.
5. In order to deserialize the object from Elasticsearch, we again use Vert.x `JsonObject`.

== Hibernate Search Elasticsearch

Quarkus supports Hibernate Search with Elasticsearch via the `hibernate-search-orm-elasticsearch` extension.

Hibernate Search Elasticsearch allows you to synchronize your JPA entities to an Elasticsearch cluster and offers a way to query your Elasticsearch cluster using the Hibernate Search API.

If you're interested in it, you can read the xref:hibernate-search-orm-elasticsearch.adoc[Hibernate Search with Elasticsearch guide].

== Cluster Health Check

If you are using the `quarkus-smallrye-health` extension, both Elasticsearch client extensions will automatically add a readiness health check
to validate the health of the cluster.

So when you access the `/q/health/ready` endpoint of your application, you will have information about the cluster status.
The check uses the cluster health endpoint; it will be down if the status of the cluster is **red** or the cluster is not available.

This behavior can be disabled by setting the `quarkus.elasticsearch.health.enabled` property to `false` in your `application.properties`.

== Building a native executable

You can use both clients in a native executable.

You can build a native executable with the usual command:

include::includes/devtools/build-native.adoc[]

Running it is as simple as executing `./target/elasticsearch-low-level-client-quickstart-1.0.0-SNAPSHOT-runner`.

You can then point your browser to `http://localhost:8080/fruits.html` and use your application.

== Conclusion

Accessing an Elasticsearch cluster from the low level or the high level client is easy with Quarkus, as it provides easy configuration, CDI integration, and native support out of the box.
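As an aside, both `search()` variants shown in this guide ultimately send the same Elasticsearch match query body. Here is a minimal, dependency-free sketch of that JSON shape (the `MatchQueryDemo` class and its `matchQuery` helper are hypothetical, for illustration only):

```java
public class MatchQueryDemo {

    // Builds the match query body used by both search() implementations:
    // {"query": {"match": {"<term>": "<match>"}}}
    static String matchQuery(String term, String match) {
        return String.format("{\"query\": {\"match\": {\"%s\": \"%s\"}}}", term, match);
    }

    public static void main(String[] args) {
        // The body sent by search("color", "yellow")
        System.out.println(matchQuery("color", "yellow"));
    }
}
```

With the low level client this string would be passed to `Request.setJsonEntity(...)`, while the high level client builds the equivalent structure through `QueryBuilders.matchQuery(...)`.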
== Configuration Reference

include::{generated-dir}/config/quarkus-elasticsearch-restclient-lowlevel.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/extension-codestart.adoc b/_versions/2.7/guides/extension-codestart.adoc
deleted file mode 100644
index 7ae0eec08bd..00000000000
--- a/_versions/2.7/guides/extension-codestart.adoc
+++ /dev/null
@@ -1,286 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Extension codestart
include::./attributes.adoc[]

This guide explains how to create and configure a Quarkus Codestart for an extension.

== Description

"Extension Codestarts" is the name we gave to our Quarkus extension quickstart code generation system. It aims to provide a personalized getting started experience with Quarkus.
A Quarkus extension can provide one or more well-defined codestarts containing the resources and code required to start using that particular extension.

You can apply extension codestarts in the Quarkus tooling:

* https://code.quarkus.io[code.quarkus.io, window="_blank"] (find the extensions tagged with [code])
* The Quarkus Maven plugin:
+
[source,bash]
----
mvn io.quarkus.platform:quarkus-maven-plugin:create
----

* The Quarkus CLI:
+
[source,bash]
----
quarkus create app
----

== How it works

When starting a project, you choose the language, the build tool, and the framework; then you add dockerfiles, CI, dependencies, and code.

Codestarts work the same way when contributing to the generation of a project; they are split into two categories:

**The "Base" codestarts (you choose a combination of those):**

* project: The project skeleton (e.g. a Quarkus project)
* buildtool: The build tool (e.g. Maven, Gradle, Gradle with Kotlin DSL)
* language: The coding language (e.g.
Java, Kotlin, Scala)
* config: The config type (e.g. yaml, properties)

**Extra codestarts (as many as wanted, to put on top):**

* tooling: Anything that can be added to improve the project (e.g. dockerfiles, github-action)
* code: Any Quarkus extension can provide starter code. The user can decide to activate it or not.

Each codestart consists of:

. A unique codestart name, e.g. `my-codestart`
. A directory for the codestart files, e.g. `my-codestart/`
. A `codestart.yml` file
. Optionally, some templates that follow a common structure and naming conventions

== Where are the Quarkus Extension Codestarts located

- In the Quarkus core repository, the extension codestarts are all in the same https://github.com/quarkusio/quarkus/tree/main/devtools/project-core-extension-codestarts/src/main/resources/codestarts/quarkus/extension-codestarts[module, window="_blank"].

- The RESTEasy, RESTEasy Reactive and Spring Web extension codestarts are part of https://github.com/quarkusio/quarkus/tree/main/independent-projects/tools/base-codestarts/src/main/resources/codestarts/quarkus/extension-codestarts[the base codestarts, window="_blank"].

- For other extensions, the codestart will typically be located in the runtime module (with special instructions in the `pom.xml` to generate a separate codestart artifact).

== Base codestarts

The https://github.com/quarkusio/quarkus/tree/main/independent-projects/tools/base-codestarts/src/main/resources/codestarts/quarkus[base codestarts, window="_blank"] contain the templates to create the project, buildtool, language, config and tooling files.

== Writing an Extension Codestart

As mentioned previously, the base project files (pom.xml, dockerfiles, ...) are already generated by the base codestarts provided by the Quarkus core. Thanks to this, we can focus only on what matters: the starter code for the extension.
The codestart should not include any business logic; instead, it should contain some stub data/hello world that compiles. The idea is to provide code that is a starting point for everyone using the extension.

== Writing an Extension Codestart in Quarkus Core

- Copy one of the existing https://github.com/quarkusio/quarkus/tree/main/devtools/project-core-extension-codestarts/src/main/resources/codestarts/quarkus/extension-codestarts[Quarkus core extension codestarts, window="_blank"]. If the code needs to expose a web resource, `resteasy-qute-codestart` could be a good base. Otherwise, `config-yaml-codestart` could be a better starting point. More info on the <<directory-structure,directory structure>>.

- Edit the <<codestart-yml,`codestart.yml`>>.

- Create the extension binding in the extension metadata (https://github.com/quarkusio/quarkus/blob/main/extensions/config-yaml/runtime/src/main/resources/META-INF/quarkus-extension.yaml#L12-L17[example, window="_blank"]). *Thanks to this, the codestart is added when the user selects the extension.*

- Add the readme <<readme-md,`README.md`>> section template.

- Add the code in the language folder (it is recommended to at least provide java and kotlin). *You have to use `org.acme` as the package name: <<org-acme-package,dynamic package name generation>>*. It is possible to use <<qute-templates,Qute templates>> if needed.

- Optionally, add the `index.html` section template (<<index-html,index.html and web extension codestarts>>).

- Optionally, add some resources (in the `./base` directory if they are not language specific).

- Optionally, add the <<app-config,application config>>.

- Create an <<integration-test,integration test>>.

- <<generating,Generate your extension codestart>>.

== Writing an Extension Codestart in the Quarkiverse or standalone

For extensions hosted outside of the https://github.com/quarkusio/quarkus[Quarkus core] repository, codestarts will typically be located in the runtime module (with special instructions in the `pom.xml` to generate a separate codestart artifact). https://github.com/ia3andy/aloha-code/[Here, window="_blank"] is an example extension with a codestart and its tests.
[#generating]
== Generating your Extension Codestart

**You need to build your codestart with Maven to make it available in the tooling:**

- First add the codestart, update the relevant extension's metadata yml file, and build it all (the codestart, and the extension if in core).

- In Quarkus core, you also have to rebuild the `devtools/bom-descriptor-json` module to bind the codestart with the extension in the platform descriptor.

=== With the tests

You can use the <<integration-test,integration test>> to help develop your codestart with `buildAllProjects` (in Quarkus core we added `@EnabledIfSystemProperty(named = "build-projects", matches = "true")` because codestarts are already built together in another test from `QuarkusCodestartBuildIT`).

Use `-Dbuild-projects=true` when running this test to generate the real project with your codestart. Open it with your IDE, then change the code and copy it back to the codestart (and iterate until you are happy with the result).

=== With the Quarkus tooling

NOTE: Using the tooling to generate your local extension codestart during dev is not yet available for Quarkiverse/standalone extensions (until then, you may use the tests and follow https://github.com/quarkusio/quarkus/issues/21165[#21165, window="_blank"] for updates).

Using the CLI or Maven plugin to generate a project with your codestart:

- If using the CLI, you'll probably need to add `-P=io.quarkus:quarkus-bom:999-SNAPSHOT` to the CLI's arguments to use your snapshot of the platform.

- Example CLI command: `quarkus create app -x smallrye-health --code --java -P=io.quarkus:quarkus-bom:999-SNAPSHOT`

- Equivalent for the Maven plugin: `mvn io.quarkus:quarkus-maven-plugin:2.3.0.Final:create -Dextensions=smallrye-health -DplatformVersion=999-SNAPSHOT`


== Specific topics

[#org-acme-package]
=== Dynamic package name generation from org.acme

You have to use `org.acme` as the package name in your extension codestart sources.
In the generated project, the user-specified package will be used (auto-replacing `org.acme`).

It will be auto-replaced in all the source files (.java, .kt, .scala). The package directory will also be automatically adjusted. If, for some reason, another type of file needs the user package name, then you should use a <<qute-templates,Qute template>> for it with the `{project.package-name}` data placeholder (https://github.com/quarkusio/quarkus/blob/main/devtools/project-core-extension-codestarts/src/main/resources/codestarts/quarkus/extension-codestarts/grpc-codestart/base/src/main/proto/hello.tpl.qute.proto#L4[find an example in the grpc proto file, window="_blank"]).

[#codestart-yml]
=== codestart.yml

[source,yaml]
----
# the codestart unique name
name: resteasy-example
# the codestart reference (the name is used if not set)
ref: resteasy
# the type of codestart (other types are used for other project files)
type: code
# public metadata for this example (they will also be accessible from this codestart's qute templates by using the key, e.g. {title})
metadata:
  title: RESTEasy JAX-RS example
  description: Rest is easy peasy with this Hello World RESTEasy resource.
  related-guide-section: https://quarkus.io/guides/getting-started#the-jax-rs-resources
  # the path is optional and used by the generated index.html if present
  path: /some-path
language:
  base:
    # Specify the extension and possibly other required dependencies
    dependencies:
      - io.quarkus:quarkus-resteasy
    # And maybe test dependencies?
    test-dependencies:
      - io.rest-assured:rest-assured
----

[#directory-structure]
=== Directory Structure

NOTE: `codestart.yml` is the only required file.
* `codestart.yml` must be at the root of the codestart
* `./base` contains all the files that will be processed independently of the specified language
* `./[java/kotlin/scala]` contains all the files that will be processed only if the corresponding language has been selected (overriding base)

=== Naming Convention for files

* `.tpl.qute` files will be processed with Qute and can use data (`.tpl.qute` is removed from the output file name).
* Certain common files, such as `readme.md`, `src/main/resources/application.yml`, `src/main/resources/META-INF/resources/index.html`, are generated from the collected fragments found in the selected codestarts for the project.
* Other files are copied as-is.

[#qute-templates]
=== Templates (Qute)

Codestarts may use Qute templates (e.g. `MyClass.tpl.qute.java`) for dynamic rendering.

Those templates are able to use data which contains:

* The `data` (and public `metadata`) of the codestart to generate (specified in the `codestart.yml`)
* A merge of the `shared-data` from all the codestarts used to generate the project
* The user input
* Some dynamically generated data (e.g. `dependencies` and `test-dependencies`)

[#readme-md]
=== README.md

You may add a `README.md` or `README.tpl.qute.md` in the `base` directory; it will be appended to the others.
So just add the info relevant to your extension codestart.

base/readme.tpl.qute.md
[source,html]
----
{#include readme-header /}

[Optionally, here you may add information about how to use the example, settings, ...]
----

NOTE: The `{#include readme-header /}` will use a template located in the Quarkus project codestart which displays standard info from the `codestart.yml` metadata.

[#app-config]
=== Application config `application.yml`

As a convention, you should always provide the Quarkus configuration as a yaml file (`base/src/main/resources/application.yml`).
It is going to be:

* merged with the other extension codestarts' configs
* automatically converted to the selected config type (yaml or properties) at generation time, depending on the selected extensions

[#index-html]
=== index.html and web extension codestarts

Extension codestarts may provide a snippet for the generated index.html by adding this file:

base/src/main/resources/META-INF/resources/index.entry.qute.html:
[source,html]
----
{#include index-entry /}
----

NOTE: The `{#include index-entry /}` will use a template located in the Quarkus project codestart which displays standard info from the `codestart.yml` metadata.

[#integration-test]
=== Integration test

A test extension, `QuarkusCodestartTest`, is available to help test extension codestarts. It provides a way to test:

- the generated project content (with immutable mocked data) using snapshot testing
- the generated project build/run (with real data), with helpers to run the build

NOTE: Before all the tests, the extension will generate Quarkus projects in the specified languages with the given codestart, using mocked data and real data. You can find those generated projects in the `target/quarkus-codestart-test` directory. You can open the `real-data` ones in your IDE or play with them using the terminal. *The real data is the easiest way to iterate on your extension codestart development.*


The extension provides helpers to test that the projects build: `buildAllProjects`, or just a specific language project: `buildProject(Language language)`. It also provides helpers to test the content with <<snapshot-testing,snapshot testing>>.

The https://github.com/quarkusio/quarkus/blob/main/integration-tests/devtools/src/test/java/io/quarkus/devtools/codestarts/quarkus/ConfigYamlCodestartTest.java[ConfigYamlCodestartTest, window="_blank"] is a good example in Quarkus core.
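The snapshot mechanism that `QuarkusCodestartTest` relies on can be illustrated with a small, self-contained sketch. This is the general idea only, not the Quarkus API: generated content is compared against a committed reference file, which is only (re)written on the first run or when updates are explicitly requested (akin to `-Dsnap`):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class SnapshotCheckDemo {

    // Compares freshly generated content against a stored snapshot file.
    // The snapshot is (re)written only on the first run or when updates are
    // explicitly requested (akin to running the tests with -Dsnap).
    static boolean matchesSnapshot(Path snapshot, String generated, boolean updateSnapshots) throws Exception {
        if (Files.notExists(snapshot) || updateSnapshots) {
            Files.createDirectories(snapshot.getParent());
            Files.writeString(snapshot, generated); // record the reference output
            return true;
        }
        return Files.readString(snapshot).equals(generated); // must match exactly
    }

    public static void main(String[] args) throws Exception {
        Path snap = Files.createTempDirectory("snapshots").resolve("GreetingResource.java.snap");
        System.out.println(matchesSnapshot(snap, "class A {}", false)); // first run records the snapshot
        System.out.println(matchesSnapshot(snap, "class A {}", false)); // unchanged output passes
        System.out.println(matchesSnapshot(snap, "class B {}", false)); // drifted output fails
    }
}
```

In the real extension the snapshots live in the repository and are compared by the `checkGeneratedSource`/`assertThat...` helpers; the sketch only shows why deterministic, mocked data is required.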
[#snapshot-testing]
==== Snapshot testing

Snapshot testing is a way to make sure the content generated by a test doesn't change from one revision to another, i.e. between commits. That means the generated content for each commit needs to be immutable and deterministic (this is the reason for using mocked data). To be able to perform such checks, we auto-generate snapshots of the generated content and commit them as the references of the expected output for subsequent test runs. When the templates change, we also commit the induced snapshot changes. This way, during the review, we can make sure the applied code changes have the expected effects on the generated output.

The extension provides helpers to check the content:

- `checkGeneratedSource()` validates a class against the snapshots for all languages (or a specific one).
- `checkGeneratedTestSource()` validates a test class against the snapshots for all languages (or a specific one).
- `assertThatGeneratedFileMatchSnapshot()` checks a project file against the snapshot.
- You can use `AbstractPathAssert.satisfies(checkContains("some content"))` or any Path assert on the return of the methods above to also check that the file contains specific content.
- `assertThatGeneratedTreeMatchSnapshots()` lets you compare the project file structure (tree) for a specific language against its snapshot.

NOTE: In order to first generate or update existing snapshot files on your local filesystem, you need to add `-Dsnap` when running the tests locally while developing the codestart. They need to be added as part of the commit, else the tests will not pass on the CI.

=== Writing tips

* Your extension codestart must/should be independent of buildtool and dockerfiles.
* Extension codestarts should be able to work alongside each other without interference (in combination).
* Make sure your class names are unique across all extension codestarts.
* Only use `org.acme` as the package name.
* Use a unique path `/[unique]` for your REST paths.
* Write the config in yaml (`src/main/resources/application.yml`).
+
It is going to be merged with the other codestarts' config and automatically converted to the selected config type (yaml or properties).
* You can start with java and add kotlin later in another PR (create an issue so you don't forget).
* If you have a question, ping me @ia3andy on https://quarkusio.zulipchat.com/.

== The generator sources

* https://github.com/quarkusio/quarkus/tree/main/independent-projects/tools/codestarts[Codestart generator, window="_blank"]
* https://github.com/quarkusio/quarkus/tree/main/independent-projects/tools/devtools-common/src/main/java/io/quarkus/devtools/codestarts/quarkus[Quarkus implementation of the Codestart generator, window="_blank"]

== Issues and Feature requests

https://github.com/quarkusio/quarkus/labels/area%2Fcodestarts


diff --git a/_versions/2.7/guides/extension-registry-user.adoc b/_versions/2.7/guides/extension-registry-user.adoc
deleted file mode 100644
index 89d1e6c7bde..00000000000
--- a/_versions/2.7/guides/extension-registry-user.adoc
+++ /dev/null
@@ -1,193 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Quarkus Extension Registry

include::./attributes.adoc[]

The Quarkus dev tools, such as the https://quarkus.io/guides/cli-tooling[Quarkus CLI], the https://quarkus.io/guides/maven-tooling[Maven] and the https://quarkus.io/guides/gradle-tooling[Gradle] plugins, or https://code.quarkus.io[code.quarkus.io], can be used to list and search the Quarkus ecosystem for extensions that match certain criteria. That includes the https://quarkus.io/guides/platform[Quarkus platform] extensions and various other extensions contributed by the community, many of which are hosted on the https://github.com/quarkiverse[Quarkiverse Hub].
The information about all the available Quarkus extensions is provided to the dev tools by __Quarkus extension registries__.

A Quarkus extension registry is a database providing information about:

* available Quarkus platforms, indicating which of those are currently recommended for new projects and/or as updates;
* available non-platform extensions, indicating which Quarkus versions they are compatible with.

[[registry.quarkus.io]]
== registry.quarkus.io

The registry hosted at https://registry.quarkus.io[registry.quarkus.io] is the default Quarkus community extension registry. It is updated on every release of the https://github.com/quarkusio/quarkus-platform[Quarkus community platform] and includes extensions hosted on the https://github.com/quarkiverse[Quarkiverse Hub].

=== Maven repository

The registry hosted at https://registry.quarkus.io[registry.quarkus.io] is a Maven __snapshot__ repository that provides platform and extension catalogs to the dev tools as Maven JSON artifacts. Once downloaded, the extension catalogs are cached in the user's local Maven repository and remain available to the dev tools even if the Internet connection (or the registry itself) isn't available.

The extension catalog artifacts cached locally are regularly checked for updates and updated if newer versions of those catalogs are available in the registry. The default interval to check for updates is *daily*, which matches the default Maven repository `updatePolicy` for SNAPSHOT artifacts.

[[registry.quarkus.io.maven.repo]]
==== Maven repository configuration

IMPORTANT: The repository configuration below is shown only for illustrative purposes and does *NOT* have to be added to the user `settings.xml` or the application's `pom.xml`. The Quarkus dev tools come with this repository pre-configured.
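The *daily* `updatePolicy` mentioned above amounts to a freshness check on the locally cached catalogs: a re-check against the remote registry happens only when the cached copy is older than the policy interval. A conceptual, stdlib-only sketch (not the actual Maven resolver logic):

```java
import java.time.Duration;
import java.time.Instant;

public class UpdatePolicyDemo {

    // Under a "daily" update policy, a cached catalog is considered fresh
    // for 24 hours from its last update; only stale copies trigger a
    // re-check against the remote registry.
    static boolean needsUpdate(Instant lastUpdated, Instant now, Duration policyInterval) {
        return Duration.between(lastUpdated, now).compareTo(policyInterval) > 0;
    }

    public static void main(String[] args) {
        Instant lastUpdated = Instant.parse("2022-03-01T10:00:00Z");
        Duration daily = Duration.ofDays(1);
        // checked the same day: cache is still fresh
        System.out.println(needsUpdate(lastUpdated, Instant.parse("2022-03-01T18:00:00Z"), daily));
        // checked over a day later: time to look for newer catalogs
        System.out.println(needsUpdate(lastUpdated, Instant.parse("2022-03-02T10:00:01Z"), daily));
    }
}
```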
- -The complete https://maven.apache.org/settings.html#repositories[Maven repository configuration] of <<registry.quarkus.io>> is - -[source,xml] ----- -<repository> -    <id>registry.quarkus.io</id> -    <name>Quarkus community extension registry</name> -    <url>https://registry.quarkus.io/maven</url> -    <snapshots> -        <enabled>true</enabled> -        <updatePolicy>daily</updatePolicy> -        <checksumPolicy>warn</checksumPolicy> -    </snapshots> -</repository> ----- - -When the Quarkus dev tools are initialized, this repository configuration is automatically added to the Maven resolver which will be used to resolve the platform and extension catalogs from <<registry.quarkus.io>>. - -==== Maven repository mirrors and proxies - -When the Quarkus dev tools Maven resolver is initialized, the relevant Maven mirrors and proxies found in the user `settings.xml` are automatically applied to the <<registry.quarkus.io>> Maven repository configuration, as if the <<registry.quarkus.io>> Maven repository was configured in the user `settings.xml`. - -That means, if, for example, a matching Maven repository mirror was applied to the <<registry.quarkus.io>> Maven repository, the <<registry.quarkus.io>> Maven repository would have to be added to the mirror repository group in the corresponding Maven server instance (e.g. Nexus). - -==== Overriding the default registry Maven repository configuration - -The default registry Maven repository configuration can be overridden in the user `settings.xml` by simply adding the desired `<repository>` configuration with `registry.quarkus.io` as its `<id>` value. If such a repository configuration is found in the user `settings.xml`, the dev tools will use it in place of the default <<registry.quarkus.io>> Maven repository configuration shown above. - -== Quarkus Extension Registry Client Configuration - -Typically, Quarkus community users will not need to have any registry-related configuration in their environment. The registry hosted at <<registry.quarkus.io>> is enabled in all the Quarkus dev tools by default. However, there could be a few situations where a custom registry client configuration could help, for example, to change the local registry cache update policy or to configure additional (non-default) Quarkus extension registries.
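For illustration, the override described in the "Overriding the default registry Maven repository configuration" section above could look as follows inside an active profile of the user `settings.xml`. The internal mirror URL here is a hypothetical example:

```xml
<repository>
    <id>registry.quarkus.io</id>
    <name>Quarkus extension registry (internal mirror)</name>
    <!-- hypothetical URL of an internal mirror of the registry -->
    <url>https://nexus.internal.example.org/repository/quarkus-registry/</url>
    <snapshots>
        <enabled>true</enabled>
        <updatePolicy>daily</updatePolicy>
    </snapshots>
</repository>
```

Because the `<id>` matches `registry.quarkus.io`, the dev tools would use this entry in place of the built-in repository configuration.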
- -=== Registry client configuration location - -When the Quarkus dev tools are launched, a search for the registry client configuration file is performed following these steps: - -. `quarkus.tools.config` system property is checked, if it exists, its value will be used as a path to the registry client configuration file; -. the current directory is checked to contain the `.quarkus/config.yaml` file, if the file exists, it will be used to configure the registry client; -. the user home directory is checked to contain the `~/.quarkus/config.yaml` file, if the file exists, it will be used to configure the registry client; -. if none of the above steps located a configuration file, the default <> configuration will be used. - -=== Configuring multiple registries - -The <> is the default Quarkus community extension registry but it is not meant to be always the only registry. Other organizations may find it useful to create their own Quarkus extension registries to provide their own https://quarkus.io/guides/platform[Quarkus platforms] and/or individual (non-platform) Quarkus extensions. Users wishing to enable custom Quarkus extension registries in their environment would need to add them to the registry client configuration file. - -The registry client configuration file is a simple YAML file which contains a list of registries, for example: - -[source,yaml] ----- -registries: -- registry.acme.org -- registry.quarkus.io ----- - -The configuration above enables two registries: `registry.acme.org` and `registry.quarkus.io`. The order of the registries is actually significant. When the Quarkus dev tools are looking for extensions on user's request, the registries will be searched in the order they are configured, i.e. from the top to the bottom of the list. Extensions and platforms found first will appear as the preferred ones to the user. 
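To make the ordering concrete, here is a small, self-contained Java sketch (purely illustrative, not the actual dev tools code) of the "first registry wins" lookup described above:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// Illustrative sketch only: registries are consulted in configuration order,
// and the first registry that provides a matching extension becomes the
// preferred origin shown to the user.
public class RegistryOrder {

    // Each entry pairs a registry id with the extension ids it serves.
    static Optional<String> preferredOrigin(List<Map.Entry<String, Set<String>>> registries,
                                            String extensionId) {
        for (Map.Entry<String, Set<String>> registry : registries) {
            if (registry.getValue().contains(extensionId)) {
                return Optional.of(registry.getKey()); // first match wins
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        var registries = List.of(
                Map.entry("registry.acme.org", Set.of("acme-magic")),
                Map.entry("registry.quarkus.io", Set.of("quarkus-resteasy", "acme-magic")));

        // Both registries serve "acme-magic"; the first one configured wins.
        System.out.println(preferredOrigin(registries, "acme-magic"));       // Optional[registry.acme.org]
        System.out.println(preferredOrigin(registries, "quarkus-resteasy")); // Optional[registry.quarkus.io]
    }
}
```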
- -IMPORTANT: <<registry.quarkus.io>> is the default registry which normally does not have to be configured explicitly; however, if a user provides a custom registry list and `registry.quarkus.io` is not in it, <<registry.quarkus.io>> will *not* be enabled. - -For example, here is a registry client configuration that replaces the default <<registry.quarkus.io>> registry with a custom one: - -[source,yaml] ----- -registries: -- registry.acme.org ----- - -=== Adjusting the registry cache update policy - -Usually, a Quarkus extension registry will be implemented as a Maven snapshot repository. The platform and extension catalogs resolved from the registry as Maven artifacts will be cached in the user's local Maven repository. The platform and extension catalogs are actually `SNAPSHOT` artifacts that are periodically checked for updates by the registry client. The default registry interval to check for updates matches the default value of Maven's `updatePolicy` for https://maven.apache.org/settings.html#repositories[snapshot repositories] and is `daily`. This default can be overridden in the registry configuration, for example: - -[source,yaml] ----- -registries: -- registry.acme.org: - update-policy: "always" -- registry.quarkus.io ----- - -In the example above, the `registry.acme.org` registry will be checked for catalog updates on every catalog request, while the `registry.quarkus.io` registry will be checked for catalog updates once a day (on the first catalog request of the day). - -Here is the complete list of choices for a registry's `update-policy` value: - -* _always_ - check for updates on every catalog request; -* _daily_ (default) - check for catalog updates once a day on the first catalog request; -* _interval:X_ (where X is an integer in minutes) - custom interval in minutes; -* _never_ - resolve the catalogs once and never check for updates. - -=== Disabling a registry in the configuration - -All the registries listed in the configuration file are enabled by default.
A registry can, however, be disabled by adding `enabled: false` to its configuration. For example: - -[source,yaml] ----- -registries: -- registry.acme.org -- registry.quarkus.io: - enabled: false ----- - -In the configuration above, only `registry.acme.org` is enabled. The configuration above is equivalent to: - -[source,yaml] ----- -registries: -- registry.acme.org ----- - -=== Enabling the debug mode - -The registry client does not log much information by default. However, it does resolve various artifacts from Maven repositories behind the scenes. If you would like to see artifact transfer and other debugging-related messages in the logs, you can enable the debug mode in the configuration. For example: - -[source,yaml] ----- -debug: true -registries: -- registry.acme.org -- registry.quarkus.io ----- -=== Overriding a registry URL - -There may be situations where the URL of the registry changes while its ID needs to remain the same (because the Maven coordinates are queried). To override the registry URL, add the following: - -[source,yaml] ----- -registries: -- registry.acme.org -- registry.quarkus.io: - maven: - repository: - url: https://internal.registry.acme.org/maven ----- - - - -=== [[how-to-register-as-nexus-repository]] How to register as a Nexus Repository proxy - -You can register a Quarkus extension registry as a Nexus repository proxy. You need to be an administrator to perform these operations. - -==== [[how-to-register-as-nexus-2-repository]] Nexus 2.x -Some options need to be set: - -- Set the `Repository Policy` to `Snapshot`; -- Disable `Download Remote Indexes`; -- Disable `Allow File Browsing`; -- Disable `Include in Search`.
- -Here is an example of how it should look: - -[#img-nexus] -.Nexus Repository Manager: Add Proxy Repository -image:registry-nexus-repository.png[Nexus Repository Proxy] - -==== [[how-to-register-as-nexus-3-repository]] Nexus 3.x - -- Create a `maven2(proxy)` repository -- Set the `Version Policy` to `Snapshot` -- Set the `Remote Storage` URL to `https://registry.quarkus.io/maven` - -image:registry-nexus3-repository.png[Nexus Repository Proxy] diff --git a/_versions/2.7/guides/faq.adoc b/_versions/2.7/guides/faq.adoc deleted file mode 100644 index a7c9bbe4fe4..00000000000 --- a/_versions/2.7/guides/faq.adoc +++ /dev/null @@ -1,29 +0,0 @@ -= Frequently Asked Questions - -include::./attributes.adoc[] - -:toc: macro -:toclevels: 4 -:doctype: book -:icons: font -:docinfo1: - -:numbered: -:sectnums: -:sectnumlevels: 4 - -== Native compilation - -Native executable fails on macOS with `error: unknown type name 'uint8_t'`:: -Your macOS installation has `*.h` header files that do not match the OS, so no gcc compilation will work. -This can happen when you upgrade from one version of the OS to another. -See https://stackoverflow.com/questions/48029309/cannot-compile-any-c-programs-error-unknown-type-name-uint8-t -+ -The solution is to: - -* `sudo mv /usr/local/include /usr/local/include.old` -* Reinstall XCode for good measure -* (optional?) `brew install llvm` -* generally reinstall your brew dependencies that require native compilation - -The executable should work now.
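The bullet points above can be condensed into the following shell sketch. The commands are destructive and machine-specific; `xcode-select --install` is one way to reinstall the command-line tools, and the last step assumes Homebrew is your package manager:

```shell
# Move the stale headers out of the way
sudo mv /usr/local/include /usr/local/include.old

# Reinstall the Xcode command-line tools
xcode-select --install

# Optional: a fresh LLVM toolchain
brew install llvm

# Rebuild Homebrew packages that compiled against the old headers
brew list --formula | xargs brew reinstall
```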
diff --git a/_versions/2.7/guides/flyway.adoc b/_versions/2.7/guides/flyway.adoc deleted file mode 100644 index 44b03457449..00000000000 --- a/_versions/2.7/guides/flyway.adoc +++ /dev/null @@ -1,223 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Flyway - -include::./attributes.adoc[] -:migrations-path: src/main/resources/db/migration -:config-file: application.properties - -https://flywaydb.org/[Flyway] is a popular database migration tool that is commonly used in JVM environments. - -Quarkus provides first-class support for using Flyway, as will be explained in this guide. - -== Setting up support for Flyway - -To start using Flyway with your project, you just need to: - -* add your migrations to the `{migrations-path}` folder as you usually do with Flyway -* activate the `migrate-at-start` option to migrate the schema automatically, or inject the `Flyway` object and run your migration as you normally do - -In your build file, add the following dependencies: - -* the Flyway extension -* your JDBC driver extension (`quarkus-jdbc-postgresql`, `quarkus-jdbc-h2`, `quarkus-jdbc-mariadb`, ...) -* the MariaDB/MySQL support is now in a separate dependency: MariaDB/MySQL users need to add the `flyway-mysql` dependency from now on -* the Microsoft SQL Server support is now in a separate dependency: Microsoft SQL Server users need to add the `flyway-sqlserver` dependency from now on.
- -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<!-- Flyway specific dependencies --> -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-flyway</artifactId> -</dependency> -<!-- Flyway SQL Server specific dependencies --> -<dependency> -    <groupId>org.flywaydb</groupId> -    <artifactId>flyway-sqlserver</artifactId> -</dependency> -<!-- Flyway MariaDB/MySQL specific dependencies --> -<dependency> -    <groupId>org.flywaydb</groupId> -    <artifactId>flyway-mysql</artifactId> -</dependency> -<!-- JDBC driver dependencies --> -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-jdbc-postgresql</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -// Flyway specific dependencies -implementation("io.quarkus:quarkus-flyway") -// Flyway SQL Server specific dependencies -implementation("org.flywaydb:flyway-sqlserver") -// Flyway MariaDB/MySQL specific dependencies -implementation("org.flywaydb:flyway-mysql") -// JDBC driver dependencies -implementation("io.quarkus:quarkus-jdbc-postgresql") ----- - -Flyway support relies on the Quarkus datasource config. -It can be customized for the default datasource as well as for every named datasource. -First, you need to add the datasource config to the `{config-file}` file -in order to allow Flyway to manage the schema. -Also, you can customize the Flyway behaviour by using the following properties: - -include::{generated-dir}/config/quarkus-flyway.adoc[opts=optional, leveloffset=+1] - - -The following is an example for the `{config-file}` file: - -[source,properties] ----- -# configure your datasource -quarkus.datasource.db-kind=postgresql -quarkus.datasource.username=sarah -quarkus.datasource.password=connor -quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/mydatabase - -# Flyway minimal config properties -quarkus.flyway.migrate-at-start=true - -# Flyway optional config properties -# quarkus.flyway.baseline-on-migrate=true -# quarkus.flyway.baseline-version=1.0.0 -# quarkus.flyway.baseline-description=Initial version -# quarkus.flyway.connect-retries=10 -# quarkus.flyway.schemas=TEST_SCHEMA -# quarkus.flyway.table=flyway_quarkus_history -# quarkus.flyway.locations=db/location1,db/location2 -# quarkus.flyway.sql-migration-prefix=X -# quarkus.flyway.repeatable-sql-migration-prefix=K ----- - -Add a SQL migration
to the default folder following the Flyway naming conventions: `{migrations-path}/V1.0.0__Quarkus.sql` - -[source,sql] ----- -CREATE TABLE quarkus -( - id INT, - name VARCHAR(20) -); -INSERT INTO quarkus(id, name) -VALUES (1, 'QUARKED'); ----- - -Now you can start your application and Quarkus will run Flyway's migrate method according to your config: - -[source,java] ----- -@ApplicationScoped -public class MigrationService { - // You can inject the object if you want to use it manually - @Inject - Flyway flyway; <1> - - public void checkMigration() { - // This will print 1.0.0 - System.out.println(flyway.info().current().getVersion().toString()); - } -} ----- - -<1> Inject the Flyway object if you want to use it directly - -== Multiple datasources - -Flyway can be configured for multiple datasources. -The Flyway properties are prefixed exactly the same way as the named datasources, for example: - -[source,properties] ----- -quarkus.datasource.db-kind=h2 -quarkus.datasource.username=username-default -quarkus.datasource.jdbc.url=jdbc:h2:tcp://localhost/mem:default -quarkus.datasource.jdbc.max-size=13 - -quarkus.datasource.users.db-kind=h2 -quarkus.datasource.users.username=username1 -quarkus.datasource.users.jdbc.url=jdbc:h2:tcp://localhost/mem:users -quarkus.datasource.users.jdbc.max-size=11 - -quarkus.datasource.inventory.db-kind=h2 -quarkus.datasource.inventory.username=username2 -quarkus.datasource.inventory.jdbc.url=jdbc:h2:tcp://localhost/mem:inventory -quarkus.datasource.inventory.jdbc.max-size=12 - -# Flyway configuration for the default datasource -quarkus.flyway.schemas=DEFAULT_TEST_SCHEMA -quarkus.flyway.locations=db/default/location1,db/default/location2 -quarkus.flyway.migrate-at-start=true - -# Flyway configuration for the "users" datasource -quarkus.flyway.users.schemas=USERS_TEST_SCHEMA -quarkus.flyway.users.locations=db/users/location1,db/users/location2 -quarkus.flyway.users.migrate-at-start=true - -# Flyway configuration for the "inventory"
datasource -quarkus.flyway.inventory.schemas=INVENTORY_TEST_SCHEMA -quarkus.flyway.inventory.locations=db/inventory/location1,db/inventory/location2 -quarkus.flyway.inventory.migrate-at-start=true ----- - -Notice there's an extra bit in the key. -The syntax is as follows: `quarkus.flyway.[optional name.][datasource property]`. - -NOTE: Without configuration, Flyway is set up for every datasource using the default settings. - -== Using the Flyway object - -In case you are interested in using the `Flyway` object directly, you can inject it as follows: - -NOTE: If you enabled the `quarkus.flyway.migrate-at-start` property, by the time you use the Flyway instance, -Quarkus will already have run the migrate operation - -[source,java] ----- -@ApplicationScoped -public class MigrationService { - // You can Inject the object if you want to use it manually - @Inject - Flyway flyway; <1> - - @Inject - @FlywayDataSource("inventory") <2> - Flyway flywayForInventory; - - @Inject - @Named("flyway_users") <3> - Flyway flywayForUsers; - - public void checkMigration() { - // Use the flyway instance manually - flyway.clean(); <4> - flyway.migrate(); - // This will print 1.0.0 - System.out.println(flyway.info().current().getVersion().toString()); - } -} ----- - -<1> Inject the Flyway object if you want to use it directly -<2> Inject Flyway for named datasources using the Quarkus `FlywayDataSource` qualifier -<3> Inject Flyway for named datasources -<4> Use the Flyway instance directly - -== Flyway and Hibernate ORM - -When using Flyway together with Hibernate ORM, you can use the Dev UI to generate the initial schema creation script. - -You can find more information about this feature in the xref:hibernate-orm.adoc#flyway[Hibernate ORM guide]. 
diff --git a/_versions/2.7/guides/funqy-amazon-lambda-http.adoc b/_versions/2.7/guides/funqy-amazon-lambda-http.adoc deleted file mode 100644 index e1932b113b9..00000000000 --- a/_versions/2.7/guides/funqy-amazon-lambda-http.adoc +++ /dev/null @@ -1,63 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Funqy HTTP Binding with Amazon Lambda  -:extension-status: preview - -include::./attributes.adoc[] - -If you want to allow HTTP clients to invoke on your Funqy functions on AWS Lambda, Quarkus allows you to expose multiple -Funqy functions through HTTP deployed as one AWS Lambda. This approach does add overhead over the -regular Funqy AWS Lambda integration and also requires you to use AWS API Gateway. - -include::./status-include.adoc[] - -Follow the xref:amazon-lambda-http.adoc[Amazon Lambda Http Guide]. It walks through using a variety of HTTP -frameworks on Amazon Lambda, including Funqy. - -WARNING: The Funqy HTTP + AWS Lambda binding is not a replacement for REST over HTTP. Because Funqy -needs to be portable across a lot of different protocols and function providers its HTTP binding -is very minimalistic and you will lose REST features like linking and the ability to leverage -HTTP features like cache-control and conditional GETs. You may want to consider using Quarkus's -JAX-RS, Spring MVC, or Vert.x Web Reactive Route xref:amazon-lambda-http.adoc[support] instead. They also work with Quarkus and AWS Lambda. - -== An additional Quickstart - -Beyond generating an AWS project that is covered in the xref:amazon-lambda-http.adoc[Amazon Lambda Http Guide], -there's also a quickstart for running Funqy HTTP on AWS Lambda. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. 
- -The solution is located in the `funqy-amazon-lambda-http-quickstart` {quickstarts-tree-url}/funqy-quickstarts/funqy-amazon-lambda-http-quickstart[directory]. - -== The Code - -There is nothing special about the code and, more importantly, nothing AWS specific. Funqy functions can be deployed to many different -environments and AWS Lambda is one of them. The Java code is actually the same exact code as the {quickstarts-tree-url}/funqy-quickstarts/funqy-http-quickstart[funqy-http-quickstart]. - -== Getting Started - -The steps to get this quickstart running are exactly the same as defined in the xref:amazon-lambda-http.adoc[Amazon Lambda HTTP Guide]. -The differences are that you are running from a quickstart and the Maven dependencies are slightly different. - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-funqy-http</artifactId> -</dependency> -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-amazon-lambda-http</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-funqy-http") -implementation("io.quarkus:quarkus-amazon-lambda-http") ----- diff --git a/_versions/2.7/guides/funqy-amazon-lambda.adoc b/_versions/2.7/guides/funqy-amazon-lambda.adoc deleted file mode 100644 index 12ce63d1a76..00000000000 --- a/_versions/2.7/guides/funqy-amazon-lambda.adoc +++ /dev/null @@ -1,300 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Funqy Amazon Lambda Binding -:extension-status: preview -:devtools-no-gradle: - -include::./attributes.adoc[] - -The guide walks through quickstart code to show you how you can deploy Funqy functions to Amazon Lambda.
- -Funqy functions can be deployed using the AWS Lambda Java Runtime, or you can build a native executable and use -Lambda Custom Runtime if you want a smaller memory footprint and faster cold boot startup time. - -include::./status-include.adoc[] - -== Prerequisites - -:prerequisites-time: 30 minutes -include::includes/devtools/prerequisites.adoc[] -* Read about xref:funqy.adoc[Funqy Basics]. This is a short read! -* https://aws.amazon.com[An Amazon AWS account] -* https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html[AWS CLI] -* https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html[AWS SAM CLI], for local testing - -NOTE: Funqy Amazon Lambdas build off of our xref:amazon-lambda.adoc[Quarkus Amazon Lambda support]. - -== Installing AWS bits - -Installing all the AWS bits is probably the most difficult thing about this guide. Make sure that you follow all the steps -for installing AWS CLI. - -== The Quickstart - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `funqy-amazon-lambda-quickstart` {quickstarts-tree-url}/funqy-quickstarts/funqy-amazon-lambda-quickstart[directory]. - -== The Code - -There is nothing special about the code and more importantly nothing AWS specific. Funqy functions can be deployed to many different -environments and AWS Lambda is one of them. The Java code is actually the same exact code as the {quickstarts-tree-url}/funqy-quickstarts/funqy-http-quickstart[funqy-http-quickstart]. - -[[choose]] -== Choose Your Function - -Only one Funqy function can be exported per Amazon Lambda deployment. 
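For reference, a Funqy function in that shared code base looks roughly like this. This is a condensed sketch modeled on the quickstart's `greet` function; the real quickstart splits `Friend` and `Greeting` into their own files and gives them constructors:

```java
package org.acme.funqy;

import io.quarkus.funqy.Funq;

public class GreetingFunction {

    @Funq // marks a plain method as a Funqy function, exported under the name "greet"
    public Greeting greet(Friend friend) {
        Greeting greeting = new Greeting();
        greeting.setMessage("Hello " + friend.getName());
        return greeting;
    }
}

// Plain POJOs used as input and output (each in its own file in the quickstart)
class Friend {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class Greeting {
    private String message;
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
}
```

Nothing in this class references AWS; the Lambda wiring is supplied entirely by the `quarkus-funqy-amazon-lambda` extension.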
If you have multiple functions defined -within your project, then you will need to choose the function within your Quarkus `application.properties`: - -[source,properties,subs=attributes+] ----- -quarkus.funqy.export=greet ----- - -You can see how the quickstart has done it within its own {quickstarts-tree-url}/funqy-quickstarts/funqy-amazon-lambda-quickstart/src/main/resources/application.properties[application.properties]. - -Alternatively, you can set the `QUARKUS_FUNQY_EXPORT` environment variable when you create the Amazon Lambda using the `aws` CLI. - -== Deploy to AWS Lambda Java Runtime - -There are a few steps to get your Funqy function running on AWS Lambda. The quickstart Maven project generates a helpful script to -create, update, delete, and invoke your functions for pure Java and native deployments. This script is generated -at build time. - -== Build and Deploy - -Build the project using Maven: - -include::includes/devtools/build.adoc[] - -This will compile and package your code. - -== Create an Execution Role - -View the https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-awscli.html[Getting Started Guide] for deploying -a lambda with AWS CLI. Specifically, make sure you have created an `Execution Role`. You will need to define -a `LAMBDA_ROLE_ARN` environment variable in your profile or console window. Alternatively, you can edit -the `manage.sh` script that is generated by the build and put the role value directly there: - -[source,bash] ----- -LAMBDA_ROLE_ARN="arn:aws:iam::1234567890:role/lambda-role" ----- - -== Extra Build Generated Files - -After you run the build, there are a few extra files generated by the `quarkus-funqy-amazon-lambda` extension. These files -are in the build directory: `target/` for Maven, `build/` for Gradle.
- -* `function.zip` - lambda deployment file -* `manage.sh` - wrapper around `aws` lambda CLI calls -* `bootstrap-example.sh` - example bootstrap script for native deployments -* `sam.jvm.yaml` - (optional) for use with the SAM CLI and local testing -* `sam.native.yaml` - (optional) for use with the SAM CLI and native local testing - -== Create the function - -The `target/manage.sh` script is for managing your Funqy function using the AWS Lambda Java runtime. This script is provided only for -your convenience. Examine the output of the `manage.sh` script if you want to learn what `aws` commands are executed -to create, delete, and update your functions. - -`manage.sh` supports four operations: `create`, `delete`, `update`, and `invoke`. - -NOTE: To verify your setup (that you have the AWS CLI installed, have run `aws configure` for the AWS access keys, -and have set up the `LAMBDA_ROLE_ARN` environment variable as described above), please execute `manage.sh` without any parameters. -A usage statement will be printed to guide you accordingly. - -To see the `usage` statement and validate the AWS configuration: -[source,bash,subs=attributes+] ----- -sh target/manage.sh ----- - -You can `create` your function using the following command: - -[source,bash,subs=attributes+] ----- -sh target/manage.sh create ----- - -or if you do not have `LAMBDA_ROLE_ARN` already defined in this shell: - -[source,bash] ----- -LAMBDA_ROLE_ARN="arn:aws:iam::1234567890:role/lambda-role" sh target/manage.sh create ----- - -WARNING: Do not change the handler switch. This must be hardcoded to `io.quarkus.funqy.lambda.FunqyStreamHandler::handleRequest`. -This special handler is Funqy's integration point with AWS Lambda. - -If there are any problems creating the function, you must delete it with the `delete` command before re-running -the `create` command.
- -[source,bash,subs=attributes+] ----- -sh target/manage.sh delete ----- - -Commands may also be stacked: -[source,bash,subs=attributes+] ----- -sh target/manage.sh delete create ----- - -== Invoke the function - -Use the `invoke` command to invoke your function. - -[source,bash,subs=attributes+] ----- -sh target/manage.sh invoke ----- - -The example function takes input passed in via the `--payload` switch, which points to a JSON file -in the root directory of the project. - -The function can also be invoked locally with the SAM CLI like this: - -[source,bash] ----- -sam local invoke --template target/sam.jvm.yaml --event payload.json ----- - -If you are working with your native image build, simply replace the template name with the native version: - -[source,bash] ----- -sam local invoke --template target/sam.native.yaml --event payload.json ----- - -== Update the function - -You can update the Java code as you see fit. Once you've rebuilt, you can redeploy your function by executing the -`update` command. - -[source,bash,subs=attributes+] ----- -sh target/manage.sh update ----- - -== Deploy to AWS Lambda Custom (native) Runtime - -If you want a lower memory footprint and faster initialization times for your Funqy function, you can compile your Java -code to a native executable. Just make sure to rebuild your project with the `-Pnative` switch. - -For Linux hosts execute: - -include::includes/devtools/build-native.adoc[] - -NOTE: If you are building on a non-Linux system, you will need to also pass in a property instructing Quarkus to use a Docker build as Amazon -Lambda requires Linux binaries. You can do this by passing this property to your build: -`-Dnative-image.docker-build=true`. This requires you to have Docker installed locally, however. - -include::includes/devtools/build-native-container.adoc[] - -Either of these commands will compile and create a native executable. It also generates a zip file `target/function.zip`.
-This zip file contains your native executable image renamed to `bootstrap`. This is a requirement of the AWS Lambda -Custom (Provided) Runtime. - -The instructions here are exactly as above with one change: you'll need to add `native` as the first parameter to the -`manage.sh` script: - -[source,bash,subs=attributes+] ----- -sh target/manage.sh native create ----- - -As above, commands can be stacked. The only requirement is that `native` be the first parameter should you wish -to work with native image builds. The script will take care of the rest of the details necessary to manage your native -image function deployments. - -Examine the output of the `manage.sh` script if you want to learn what `aws` commands are executed -to create, delete, and update your functions. - -One thing to note about the create command for native is that the `aws lambda create-function` -call must set a specific environment variable: - -[source,bash,subs=attributes+] ----- ---environment 'Variables={DISABLE_SIGNAL_HANDLERS=true}' ----- - -== Examine the POM - -There is nothing special about the POM other than the inclusion of the `quarkus-funqy-amazon-lambda` extension -as a dependency. The extension automatically generates everything you might need for your lambda deployment. - -== Integration Testing - -Funqy Amazon Lambda support leverages the Quarkus AWS Lambda test framework so that you can unit test your Funqy functions. -This is true for both JVM and native modes. -This test framework provides similar functionality to the SAM CLI, without the overhead of Docker. - -If you open up {quickstarts-tree-url}/funqy-quickstarts/funqy-amazon-lambda-quickstart/src/test/java/org/acme/funqy/FunqyTest.java[FunqyTest.java] -you'll see that the test replicates the AWS execution environment.
- -[source,java] ----- -package org.acme.funqy; - -import io.quarkus.amazon.lambda.test.LambdaClient; -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Assertions; -import org.junit.jupiter.api.Test; - -@QuarkusTest -public class FunqyTest { - @Test - public void testSimpleLambdaSuccess() throws Exception { - Friend friend = new Friend("Bill"); - Greeting out = LambdaClient.invoke(Greeting.class, friend); - Assertions.assertEquals("Hello Bill", out.getMessage()); - } -} ----- - -== Testing with the SAM CLI - -The https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html[AWS SAM CLI] -allows you to run your functions locally on your laptop in a simulated Lambda environment. This requires -https://www.docker.com/products/docker-desktop[docker] to be installed. This is an optional approach should you choose -to take advantage of it. Otherwise, the Quarkus JUnit integration should be sufficient for most of your needs. - -A starter template has been generated for both JVM and native execution modes. - -Run the following SAM CLI command to locally test your function, passing the appropriate SAM `template`. -The `event` parameter takes any JSON file, in this case the sample `payload.json`. - -[source,bash] ----- -sam local invoke --template target/sam.jvm.yaml --event payload.json ----- - -The native image can also be locally tested using the `sam.native.yaml` template: - -[source,bash] ----- -sam local invoke --template target/sam.native.yaml --event payload.json ----- - -== Modifying `function.zip` - -There are times where you may have to add additional entries to the `function.zip` lambda deployment that is generated -by the build. To do this, create a `zip.jvm` or `zip.native` directory within `src/main`. -Create `zip.jvm/` if you are doing a pure Java deployment, `zip.native/` if you are doing a native deployment.
- -Any files and directories you create under your zip directory will be included within `function.zip`. - -== Custom `bootstrap` script - -There are times you may want to set specific system properties or other arguments when Lambda invokes -your native Funqy deployment. If you include a `bootstrap` script file within -`zip.native`, the Funqy extension will automatically rename the executable to `runner` within -`function.zip` and set the Unix mode of the `bootstrap` script to executable. - -NOTE: The native executable must be referenced as `runner` if you include a custom `bootstrap` script. - -The extension generates an example script within `target/bootstrap-example.sh`. diff --git a/_versions/2.7/guides/funqy-azure-functions-http.adoc b/_versions/2.7/guides/funqy-azure-functions-http.adoc deleted file mode 100644 index 7142acfc4ae..00000000000 --- a/_versions/2.7/guides/funqy-azure-functions-http.adoc +++ /dev/null @@ -1,25 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Funqy HTTP Binding with Azure Functions -:extension-status: preview - -include::./attributes.adoc[] - -You can use xref:funqy-http.adoc[Funqy HTTP] on Azure Functions. This allows you to invoke on multiple Funqy functions -using HTTP deployed as one Azure Function. - -WARNING: The Funqy HTTP + Azure Functions binding is not a replacement for REST over HTTP. Because Funqy -needs to be portable across a lot of different protocols and function providers, its HTTP binding -is very minimalistic and you will lose REST features like linking and the ability to leverage -HTTP features like cache-control and conditional GETs. You may want to consider using Quarkus's -JAX-RS, Spring MVC, or Vert.x Web Reactive Route xref:azure-functions-http.adoc[support] instead. They also work with Quarkus and Azure Functions.
- - -include::./status-include.adoc[] - -Follow the xref:azure-functions-http.adoc[Azure Functions HTTP Guide]. It walks through using a variety of HTTP -frameworks on Azure Functions, including Funqy. - diff --git a/_versions/2.7/guides/funqy-gcp-functions-http.adoc b/_versions/2.7/guides/funqy-gcp-functions-http.adoc deleted file mode 100644 index 1c425a7c8a4..00000000000 --- a/_versions/2.7/guides/funqy-gcp-functions-http.adoc +++ /dev/null @@ -1,63 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Funqy HTTP Binding with Google Cloud Functions -:extension-status: experimental - -include::./attributes.adoc[] - -If you want to allow HTTP clients to invoke your Funqy functions on Google Cloud Functions, Quarkus allows you to expose multiple -Funqy functions through HTTP, deployed as one Google Cloud Function. This approach does add overhead over the -regular Funqy Google Cloud Function integration. - -include::./status-include.adoc[] - -Follow the xref:gcp-functions-http.adoc[Google Cloud Functions HTTP Guide]. It walks through using a variety of HTTP -frameworks on Google Cloud Functions, including Funqy. - -WARNING: The Funqy HTTP + Google Cloud Functions binding is not a replacement for REST over HTTP. Because Funqy -needs to be portable across a lot of different protocols and function providers, its HTTP binding -is very minimalistic and you will lose REST features like linking and the ability to leverage -HTTP features like cache-control and conditional GETs. You may want to consider using Quarkus's -JAX-RS, Spring MVC, or Vert.x Web Reactive Route xref:gcp-functions-http.adoc[support] instead. They also work with Quarkus and Google Cloud Functions.
- -== An Additional Quickstart - -Beyond the Google Cloud Functions project generation covered in the xref:gcp-functions-http.adoc[Google Cloud Functions HTTP Guide], -there's also a quickstart for running Funqy HTTP on Google Cloud Functions. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `funqy-google-cloud-functions-http-quickstart` {quickstarts-tree-url}/funqy-quickstarts/funqy-google-cloud-functions-http-quickstart[directory]. - -== The Code - -There is nothing special about the code and, more importantly, nothing Google Cloud specific. Funqy functions can be deployed to many different -environments and Google Cloud Functions is one of them. The Java code is actually exactly the same code as the {quickstarts-tree-url}/funqy-quickstarts/funqy-http-quickstart[funqy-http-quickstart]. - -== Getting Started - -The steps to get this quickstart running are exactly the same as defined in the xref:gcp-functions-http.adoc[Google Cloud Functions HTTP Guide]. -The differences are that you are running from a quickstart and the Maven dependencies are slightly different.
- -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-funqy-http</artifactId> -</dependency> -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-google-cloud-functions-http</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.quarkus:quarkus-funqy-http") -implementation("io.quarkus:quarkus-google-cloud-functions-http") ---- diff --git a/_versions/2.7/guides/funqy-gcp-functions.adoc b/_versions/2.7/guides/funqy-gcp-functions.adoc deleted file mode 100644 index 3144f3a98a1..00000000000 --- a/_versions/2.7/guides/funqy-gcp-functions.adoc +++ /dev/null @@ -1,271 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Funqy Google Cloud Functions -:extension-status: experimental - -include::./attributes.adoc[] - -The guide walks through quickstart code to show you how you can deploy Funqy functions to Google Cloud Functions. - -include::./status-include.adoc[] - -== Prerequisites - -:prerequisites-time: 30 minutes -:prerequisites-no-graalvm: -include::includes/devtools/prerequisites.adoc[] -* https://cloud.google.com/[A Google Cloud Account]. Free accounts work. -* https://cloud.google.com/sdk[Cloud SDK CLI Installed] - -== Login to Google Cloud - -Logging in to Google Cloud is necessary for deploying the application, and it can be done as follows: - -[source,bash,subs=attributes+] ---- -gcloud auth login ---- - -== The Quickstart - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `funqy-google-cloud-functions-quickstart` {quickstarts-tree-url}/funqy-quickstarts/funqy-google-cloud-functions-quickstart[directory]. - -== Creating the Maven Deployment Project - -Create an application with the `quarkus-funqy-google-cloud-functions` extension.
-You can use the following Maven command to create it: - -:create-app-artifact-id: funqy-google-cloud-functions -:create-app-extensions: funqy-google-cloud-functions -include::includes/devtools/create-app.adoc[] - -== The Code - -There is nothing special about the code and, more importantly, nothing Google Cloud specific. Funqy functions can be deployed to many different -environments and Google Cloud Functions is one of them. - -[[choose]] -== Choose Your Function - -Only one Funqy function can be exported per Google Cloud Functions deployment. If you only have one method -annotated with `@Funq` in your project, then there is nothing to worry about. If you have multiple functions defined -within your project, then you will need to choose the function within your Quarkus `application.properties`: - -[source,properties,subs=attributes+] ---- -quarkus.funqy.export=greet ---- - -Alternatively, you can set the `QUARKUS_FUNQY_EXPORT` environment variable when you create the Google Cloud Function using the `gcloud` CLI. - -== Build and Deploy - -Build the project: - -include::includes/devtools/build.adoc[] - -This will compile and package your code. - - -== Create the Function - -In this example, we will create two background functions. Background functions allow you to -react to Google Cloud events such as PubSub messages, Cloud Storage events, Firestore events, and more.
- -[source,java] ---- -import javax.inject.Inject; - -import io.quarkus.funqy.Funq; -import io.quarkus.funqy.gcp.functions.event.PubsubMessage; -import io.quarkus.funqy.gcp.functions.event.StorageEvent; - -public class GreetingFunctions { - - @Inject GreetingService service; // <1> - - @Funq // <2> - public void helloPubSubWorld(PubsubMessage pubSubEvent) { - String message = service.hello(pubSubEvent.data); - System.out.println(pubSubEvent.messageId + " - " + message); - } - - @Funq // <3> - public void helloGCSWorld(StorageEvent storageEvent) { - String message = service.hello("world"); - System.out.println(storageEvent.name + " - " + message); - } - -} ---- - -NOTE: Function return types can also be Mutiny reactive types. - -1. Injection works inside your function. -2. This is a background function that takes a `io.quarkus.funqy.gcp.functions.event.PubsubMessage` as parameter; -this is a convenient class to deserialize a PubSub message. -3. This is a background function that takes a `io.quarkus.funqy.gcp.functions.event.StorageEvent` as parameter; -this is a convenient class to deserialize a Google Storage event. - -NOTE: We provide convenience classes to deserialize common Google Cloud events inside the `io.quarkus.funqy.gcp.functions.event` package. -They are not mandatory to use; you can use any object you want. - -As our project contains multiple functions, we need to specify which function needs to be deployed via the following property inside our `application.properties`: - -[source,properties] ---- -quarkus.funqy.export=helloPubSubWorld ---- - -== Build and Deploy to Google Cloud - -To build your application, you can package it via `mvn clean package`. -You will have a single JAR inside the `target/deployment` directory that contains your classes and all your dependencies.
- -Then you will be able to use `gcloud` to deploy your function to Google Cloud; the `gcloud` command will be different depending on which event you want to be triggered by. - -[WARNING] -==== -The first time you launch `gcloud functions deploy`, you may get the following error message: - -[source] ---- -ERROR: (gcloud.functions.deploy) OperationError: code=7, message=Build Failed: Cloud Build has not been used in project before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudbuild.googleapis.com/overview?project= then retry. ---- -This means that Cloud Build is not activated yet. To overcome this error, open the URL shown in the error, follow the instructions and then wait a few minutes before retrying the command. -==== - -=== Background Functions - PubSub - -Use this command to deploy to Google Cloud Functions: - -[source,bash] ---- -gcloud functions deploy quarkus-example-funky-pubsub \ - --entry-point=io.quarkus.funqy.gcp.functions.FunqyBackgroundFunction \ - --runtime=java11 --trigger-resource hello_topic --trigger-event google.pubsub.topic.publish \ - --source=target/deployment ---- - -The entry point always needs to be `io.quarkus.funqy.gcp.functions.FunqyBackgroundFunction` as it will be this class -that will bootstrap Quarkus. - -The `--trigger-resource` option defines the name of the PubSub topic, and the `--trigger-event google.pubsub.topic.publish` option -defines that this function will be triggered by every message published to the topic. - -To trigger an event to this function, you can use the `gcloud functions call` command: - -[source,bash] ---- -gcloud functions call quarkus-example-funky-pubsub --data '{"data":"Pub/Sub"}' ---- - -The `--data '{"data":"Pub/Sub"}'` option allows you to specify the message to be sent to PubSub. - -=== Background Functions - Cloud Storage - -Before deploying your function, you need to create a bucket.
- -[source,bash] ---- -gsutil mb gs://quarkus-hello ---- - -Then, use this command to deploy to Google Cloud Functions: - -[source,bash] ---- -gcloud functions deploy quarkus-example-funky-storage \ - --entry-point=io.quarkus.funqy.gcp.functions.FunqyBackgroundFunction \ - --runtime=java11 --trigger-resource quarkus-hello --trigger-event google.storage.object.finalize \ - --source=target/deployment ---- - -The entry point always needs to be `io.quarkus.funqy.gcp.functions.FunqyBackgroundFunction` as it will be this class -that will bootstrap Quarkus. - -The `--trigger-resource` option defines the name of the Cloud Storage bucket, and the `--trigger-event google.storage.object.finalize` option -defines that this function will be triggered by every new file in this bucket. - -To trigger an event to this function, you can use the `gcloud functions call` command: - -[source,bash] ---- -gcloud functions call quarkus-example-funky-storage --data '{"name":"test.txt"}' ---- - -The `--data '{"name":"test.txt"}'` option allows you to specify a fake file name; a fake Cloud Storage event will be created for this name. - -You can also simply add a file to Cloud Storage using the command line or the web console. - -== Testing Locally - -The easiest way to locally test your function is using the Cloud Function invoker JAR. - -You can download it via Maven using the following command: - -[source,bash] ---- -mvn dependency:copy \ - -Dartifact='com.google.cloud.functions.invoker:java-function-invoker:1.0.2' \ - -DoutputDirectory=. ---- - -Before using the invoker, you first need to build your function via: - -include::includes/devtools/build.adoc[] - -Then you can use it to launch your function locally; again, the command depends on the type of function and the type of event. - -=== Background Functions - PubSub - -For background functions, you launch the invoker with a target class of `io.quarkus.funqy.gcp.functions.FunqyBackgroundFunction`.
- -[source,bash] ---- -java -jar java-function-invoker-1.0.2.jar \ - --classpath target/funqy-google-cloud-functions-1.0.0-SNAPSHOT-runner.jar \ - --target io.quarkus.funqy.gcp.functions.FunqyBackgroundFunction ---- - -IMPORTANT: The `--classpath` parameter needs to be set to the previously packaged JAR that contains your function class and all Quarkus related classes. - -Then you can call your background function via an HTTP call with a payload containing the event: - -[source,bash] ---- -curl localhost:8080 -d '{"data":{"data":"world"}}' ---- - -This will call your PubSub background function with a `PubsubMessage` of `{"data":"world"}`. - -=== Background Functions - Cloud Storage - -For background functions, you launch the invoker with a target class of `io.quarkus.funqy.gcp.functions.FunqyBackgroundFunction`. - -[source,bash] ---- -java -jar java-function-invoker-1.0.2.jar \ - --classpath target/funqy-google-cloud-functions-1.0.0-SNAPSHOT-runner.jar \ - --target io.quarkus.funqy.gcp.functions.FunqyBackgroundFunction ---- - -IMPORTANT: The `--classpath` parameter needs to be set to the previously packaged JAR that contains your function class and all Quarkus related classes. - -Then you can call your background function via an HTTP call with a payload containing the event: - -[source,bash] ---- -curl localhost:8080 -d '{"data":{"name":"text"}}' ---- - -This will call your Cloud Storage background function with a Cloud Storage event of `{"name":"text"}`, that is, an event on the `text` file. - -== What's next? - -If you are looking for JAX-RS, Servlet or Vert.x support for Google Cloud Functions, we have it thanks to our xref:gcp-functions-http.adoc[Google Cloud Functions HTTP binding].
diff --git a/_versions/2.7/guides/funqy-http.adoc b/_versions/2.7/guides/funqy-http.adoc deleted file mode 100644 index e7ab5bd2ffa..00000000000 --- a/_versions/2.7/guides/funqy-http.adoc +++ /dev/null @@ -1,300 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Funqy HTTP Binding (Standalone) - -include::./attributes.adoc[] -:extension-status: preview - -The guide walks through quickstart code to show you how you can deploy Funqy as a -standalone service and invoke on Funqy functions using HTTP. - -WARNING: The Funqy HTTP binding is not a replacement for REST over HTTP. Because Funqy -needs to be portable across a lot of different protocols and function providers, its HTTP binding -is very minimalistic and you will lose REST features like linking and the ability to leverage -HTTP features like cache-control and conditional GETs. You may want to consider using Quarkus's -JAX-RS, Spring MVC, or Vert.x Web Reactive Routes support instead, although Funqy will have less overhead -than these alternatives (except Vert.x, which is still super fast). - -== Prerequisites - -:prerequisites-no-graalvm: -include::includes/devtools/prerequisites.adoc[] -* Read about xref:funqy.adoc[Funqy Basics]. This is a short read! - -== The Quickstart - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `funqy-http-quickstart` {quickstarts-tree-url}/funqy-quickstarts/funqy-http-quickstart[directory]. - -== The Code - -If you look at the Java code, you'll see that there is no HTTP-specific API. It's just simple Java methods -annotated with `@Funq`. Simple, easy, straightforward.
- -== Maven Dependencies - -To write Funqy HTTP functions, simply include the `quarkus-funqy-http` dependency into your Quarkus `pom.xml` file: - -[source, xml] ---- -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-funqy-http</artifactId> -</dependency> ---- - -== Build Project - -[source,bash] ---- -mvn clean quarkus:dev ---- - -This starts your functions in Quarkus dev mode. - -== Execute Funqy HTTP Functions - -The URL path to execute a function is the function name. For example, if your function name is `foo`, then the URL path -to execute the function would be `/foo`. - -The HTTP POST or GET methods can be used to invoke on a function. The return value of the function -is marshalled to JSON using the Jackson JSON library. Jackson annotations can be used. If your function -has an input parameter, a POST invocation must use JSON as the input type. Jackson is also used here for unmarshalling. - -You can invoke the `hello` function defined in {quickstarts-tree-url}/funqy-quickstarts/funqy-http-quickstart/src/main/java/org/acme/funqy/PrimitiveFunctions.java[PrimitiveFunctions.java] -by pointing your browser to http://localhost:8080/hello - -Invoking the other functions in the quickstart requires an HTTP POST. -To execute the `greet` function defined in {quickstarts-tree-url}/funqy-quickstarts/funqy-http-quickstart/src/main/java/org/acme/funqy/GreetingFunction.java[GreetingFunction.java] -invoke this curl command: - -[source,bash] ---- -curl "http://localhost:8080/greet" \ --X POST \ --H "Content-Type: application/json" \ --d '{"name":"Bill"}' ---- - -Primitive types can also be passed as input using the standard JSON mapping for them.
To execute the `toLowerCase` function defined in {quickstarts-tree-url}/funqy-quickstarts/funqy-http-quickstart/src/main/java/org/acme/funqy/PrimitiveFunctions.java[PrimitiveFunctions.java] -invoke this curl command: - -[source,bash] ---- -curl "http://localhost:8080/toLowerCase" \ --X POST \ --H "Content-Type: application/json" \ --d '"HELLO WORLD"' ---- - -To execute the `double` function defined in {quickstarts-tree-url}/funqy-quickstarts/funqy-http-quickstart/src/main/java/org/acme/funqy/PrimitiveFunctions.java[PrimitiveFunctions.java] -invoke this curl command: - -[source,bash] ---- -curl "http://localhost:8080/double" \ --X POST \ --H "Content-Type: application/json" \ --d '2' ---- - -== GET Query Parameter Mapping - -For GET requests, the Funqy HTTP Binding also has a query parameter mapping for function input parameters. -Only bean style classes and `java.util.Map` can be used for your input parameter. For bean style -classes, query parameter names are mapped to properties on the bean class. Here's an example of a simple -`Map`: - -[source, java] ---- -@Funq -public String hello(Map<String, Integer> map) { -... -} ---- - -Map keys must be a primitive type (except `char`) or `String`. Values can be primitives (except `char`), `String`, `OffsetDateTime` or a complex -bean style class. For the above example, here's the corresponding curl request: - -[source,bash] ---- -curl "http://localhost:8080/hello?a=1&b=2" ---- - -The `map` input parameter of the `hello` function would have the key/value pairs: `a`->1, `b`->2. - -Bean style classes can also be used as the input parameter type.
Here's an example: - -[source, java] ---- -public class Person { - String first; - String last; - - public String getFirst() { return first; } - public void setFirst(String first) { this.first = first; } - public String getLast() { return last; } - public void setLast(String last) { this.last = last; } -} - -public class MyFunctions { - @Funq - public String greet(Person p) { - return "Hello " + p.getFirst() + " " + p.getLast(); - } -} ---- - -Property values can be any primitive type except `char`; they can also be `String` or `OffsetDateTime`. -`OffsetDateTime` query param values must be in ISO-8601 format. - -You can invoke on this using an HTTP GET and query parameters: - -[source,bash] ---- -curl "http://localhost:8080/greet?first=Bill&last=Burke" ---- - -In the above request, the query parameter names are mapped to corresponding properties in the input class. - -The input class can also have nested bean classes. Expanding on the previous example: - -[source, java] ---- -public class Family { - private Person dad; - private Person mom; - - public Person getDad() { return dad; } - public void setDad(Person dad) { this.dad = dad; } - public Person getMom() { return mom; } - public void setMom(Person mom) { this.mom = mom; } -} - -public class MyFunctions { - @Funq - public String greet(Family family) { - ... - } -} - ---- - -In this case, query parameters for nested values use the `.` notation. For example: - -[source,bash] ---- -curl "http://localhost:8080/greet?dad.first=John&dad.last=Smith&mom.first=Martha&mom.last=Smith" ---- - -`java.util.List` and `Set` are also supported as property values. For example: - -[source, java] ---- -public class Family { - ... - - List<String> pets; -} - -public class MyFunctions { - @Funq - public String greet(Family family) { - ... - } -} - ---- - -To invoke a GET request, just list the `pets` query parameter multiple times.
- -[source,bash] ---- -curl "http://localhost:8080/greet?pets=itchy&pets=scratchy" ---- - -For more complex types, `List` and `Set` members must have an identifier in the -query parameter. For example: - -[source, java] ---- -public class Family { - ... - - List<Person> kids; -} - -public class MyFunctions { - @Funq - public String greet(Family family) { - ... - } -} - ---- - -Each `kids` query parameter must identify the kid they are referencing so that -the runtime can figure out which -property values go to which members in the list. Here's the curl request: - -[source,bash] ---- -curl "http://localhost:8080/greet?kids.1.first=Buffy&kids.2.first=Charlie" ---- - -The above URL uses the values `1` and `2` to identify the target member of the list, but any unique -string can be used. - -A property can also be a `java.util.Map`. The key of the map can be any primitive type or `String`. -For example: - -[source, java] ---- -public class Family { - ... - - Map<String, String> address; -} - -public class MyFunctions { - @Funq - public String greet(Family family) { - ... - } -} ---- - -The corresponding call would look like this: - -[source,bash] ---- -curl "http://localhost:8080/greet?address.state=MA&address.city=Boston" ---- - -If your `Map` value is a complex type, then just continue the notation by adding the property to set at the end. - -[source, java] ---- -public class Family { - ... - - Map<String, Address> addresses; -} - -public class MyFunctions { - @Funq - public String greet(Family family) { - ...
- } -} ----- - -[source,bash] ----- -curl "http://localhost:8080/greet?addresses.home.state=MA&addresses.home.city=Boston" ----- diff --git a/_versions/2.7/guides/funqy-knative-events.adoc b/_versions/2.7/guides/funqy-knative-events.adoc deleted file mode 100644 index 1476aec3b1a..00000000000 --- a/_versions/2.7/guides/funqy-knative-events.adoc +++ /dev/null @@ -1,380 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Funqy Knative Events Binding - -include::./attributes.adoc[] -:extension-status: preview -:devtools-no-gradle: - -Quarkus Funqy link:https://knative.dev/docs/eventing[Knative Events] builds off of the xref:funqy-http.adoc[Funqy HTTP] extension to allow you to -route and process Knative Events within a Funqy function. - -The guide walks through quickstart code to show you how you can deploy and invoke on Funqy functions -with Knative Events. - -== Prerequisites - -:prerequisites-time: 1 hour -:prerequisites-no-graalvm: -include::includes/devtools/prerequisites.adoc[] -* Read about xref:funqy.adoc[Funqy Basics]. This is a short read! -* Have gone through the link:https://redhat-developer-demos.github.io/knative-tutorial/knative-tutorial/index.html[Knative Tutorial], specifically link:https://redhat-developer-demos.github.io/knative-tutorial/knative-tutorial/eventing/eventing-trigger-broker.html[Brokers and Triggers] - -== Setting up Knative - -Setting up Knative locally in a Minikube environment is beyond the scope of this guide. It is advised -to follow https://redhat-developer-demos.github.io/knative-tutorial/knative-tutorial/index.html[this] Knative Tutorial -put together by Red Hat. It walks through how to set up Knative on Minikube or OpenShift in a local environment. 
- -NOTE: Specifically, you should run the link:https://redhat-developer-demos.github.io/knative-tutorial/knative-tutorial/eventing/eventing-trigger-broker.html[Brokers and Triggers] -tutorial as this guide requires that you can invoke on a Broker to trigger the quickstart code. - -== Read About Cloud Events - -The Cloud Event link:https://cloudevents.io/[specification] is a good read to give you an even greater understanding of Knative Events. - -== The Quickstart - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `funqy-knative-events-quickstart` {quickstarts-tree-url}/funqy-quickstarts/funqy-knative-events-quickstart[directory]. - -== The Quickstart Flow - -The quickstart works by manually sending an HTTP request containing a Cloud Event to the Knative Broker using `curl`. -The Knative Broker receives the request and triggers the startup of the Funqy container built by the quickstart. -The event triggers the invocation of a chain of Funqy functions. The output of one function triggers the -invocation of another Funqy function. - -== Funqy and Cloud Events - -When living within a Knative Events environment, Funqy functions are triggered by a specific -Cloud Event type. You can have multiple Funqy functions within a single application/deployment, -but each must be triggered by a specific Cloud Event type. The exception to this rule is if there is -only one Funqy function in the application. In that case, the event is pushed to that function regardless -of the Cloud Event type. - -Currently, Funqy can only consume JSON-based data. It supports both Binary and Structured modes of execution, -but the data component of the Cloud Event message must be JSON. This JSON must also be marshallable to and from the -Java parameters and return types of your functions.
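To make the Structured mode concrete, here is a hedged sketch of what a Structured-mode event body can look like. The envelope attributes (`specversion`, `id`, `type`, `source`) follow the CloudEvents specification; the values used here, such as the `myFunction` type, are hypothetical:

```shell
# Write a hypothetical structured-mode Cloud Event to a file and verify it
# is well-formed JSON. The "data" member is the JSON payload that Funqy
# marshals to/from the Java parameter and return types.
cat > /tmp/structured-event.json <<'EOF'
{
  "specversion": "1.0",
  "id": "1234",
  "type": "myFunction",
  "source": "curl",
  "datacontenttype": "application/json",
  "data": "Hello"
}
EOF
python3 -m json.tool /tmp/structured-event.json
```

In Binary mode, the same attributes would instead travel as HTTP headers (`Ce-Type`, `Ce-Source`, and so on) with only the `data` value in the request body.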
- -== The Code - -Let's start looking at our quickstart code so that you can understand how Knative Events map to Funqy. -Open up {quickstarts-tree-url}/funqy-quickstarts/funqy-knative-events-quickstart/src/main/java/org/acme/funqy/SimpleFunctionChain.java[SimpleFunctionChain.java] - -The first function we'll look at is `defaultChain`. - -[source, java] ---- -import io.quarkus.funqy.Funq; - -public class SimpleFunctionChain { - @Funq - public String defaultChain(String input) { - log.info("*** defaultChain ***"); - return input + "::" + "defaultChain"; - } -} ---- - -As is, a Funqy function has a default Cloud Event mapping. By default, the Cloud Event type must match -the function name for the function to trigger. If the function returns output, -the response is converted into a Cloud Event and returned to the Broker to be routed to other triggers. -The default Cloud Event type for this response is the function name + `.output`. The default Cloud Event source is the function name. - -So, for the `defaultChain` function, the Cloud Event type that triggers the function is `defaultChain`. It generates -a response that triggers a new Cloud Event whose type is `defaultChain.output` and the event source is `defaultChain`. - -While the default mapping is simple, it might not always be feasible. You can change this default mapping -through configuration. Let's look at the next function: - -[source, java] ---- -import io.quarkus.funqy.Funq; - -public class SimpleFunctionChain { - @Funq - public String configChain(String input) { - log.info("*** configChain ***"); - return input + "::" + "configChain"; - } -} ---- - -The `configChain` function has its Cloud Event mapping changed by configuration within {quickstarts-tree-url}/funqy-quickstarts/funqy-knative-events-quickstart/src/main/resources/application.properties[application.properties].
- - -[source,properties,subs=attributes+] ---- -quarkus.funqy.knative-events.mapping.configChain.trigger=defaultChain.output -quarkus.funqy.knative-events.mapping.configChain.response-type=annotated -quarkus.funqy.knative-events.mapping.configChain.response-source=configChain ---- - -In this case, the configuration maps the incoming Cloud Event type `defaultChain.output` to the `configChain` function. -The `configChain` function maps its response to the `annotated` Cloud Event type, and the Cloud Event source `configChain`. - -* `quarkus.funqy.knative-events.mapping.{function name}.trigger` sets the Cloud Event type that triggers a specific function. It is possible to use the special value `*` as a catch-all value. In this case, the function will be used for all event types. -* `quarkus.funqy.knative-events.mapping.{function name}.response-type` sets the Cloud Event type of the response -* `quarkus.funqy.knative-events.mapping.{function name}.response-source` sets the Cloud Event source of the response - -The Funqy Knative Events extension also has annotations to do this Cloud Event mapping to your functions. Take a look at the -`annotatedChain` method: - -[source, java] ---- -import io.quarkus.funqy.Funq; -import io.quarkus.funqy.knative.events.CloudEventMapping; - -public class SimpleFunctionChain { - @Funq - @CloudEventMapping(trigger = "annotated", responseSource = "annotated", responseType = "lastChainLink") - public String annotatedChain(String input) { - log.info("*** annotatedChain ***"); - return input + "::" + "annotatedChain"; - } -} ---- - -If you use the `@CloudEventMapping` annotation on your function, you can map the Cloud Event type trigger -and the Cloud Event response. In this example, the `annotatedChain` function will be triggered -by the `annotated` Cloud Event type and the response will be mapped to a `lastChainLink` type -and `annotated` Cloud Event source.
- -So, if you look at all the functions defined within `SimpleFunctionChain`, you'll notice that one function triggers the next. -The last function that is triggered is `lastChainLink`. - -[source, java] ---- -import io.quarkus.funqy.Context; -import io.quarkus.funqy.Funq; -import io.quarkus.funqy.knative.events.CloudEvent; - -public class SimpleFunctionChain { - @Funq - public void lastChainLink(String input, @Context CloudEvent event) { - log.info("*** lastChainLink ***"); - log.info(input + "::" + "lastChainLink"); - } -} ---- - -There are two things to notice about this function. First, it has no output. Your functions are not -required to return output. Second, there is an additional `event` parameter to the function. - -If you want to know additional information about the incoming Cloud Event, you can inject the -`io.quarkus.funqy.knative.events.CloudEvent` interface using the Funqy `@Context` annotation. The `CloudEvent` interface exposes information -about the triggering event. - -[source, java] ---- -public interface CloudEvent { - String id(); - String specVersion(); - String source(); - String subject(); - OffsetDateTime time(); -} ---- - -== Maven - -If you look at the {quickstarts-tree-url}/funqy-quickstarts/funqy-knative-events-quickstart/pom.xml[POM], -you'll see that it is a typical Quarkus POM that pulls in one Funqy dependency: - -[source,xml] ---- -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-funqy-knative-events</artifactId> -</dependency> ---- - -== Dev Mode and Testing - -Funqy Knative Events supports dev mode and unit testing using RestAssured. You can invoke on Funqy Knative Events functions -using the same invocation model as -xref:funqy-http.adoc[Funqy HTTP]: normal HTTP requests, Cloud Event Binary mode, or Structured mode. All -invocation modes are supported at the same time.
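For illustration, those three invocation shapes can be sketched as follows. This assumes the app is running in dev mode on `localhost:8080` and uses the quickstart's `defaultChain` function; the attribute values (`Ce-Id`, `Ce-Source`, and so on) are hypothetical. The commands are printed rather than executed so the shapes can be inspected without a running application:

```shell
# Print the three request shapes accepted by Funqy Knative Events:
# plain HTTP, Cloud Event binary mode (Ce-* headers), and structured
# mode (one JSON body). Nothing is sent; this only records the shapes.
tee /tmp/ce-shapes.txt <<'EOF'
# Plain HTTP, Funqy HTTP style: the path is the function name
curl http://localhost:8080/defaultChain -H "Content-Type: application/json" -d '"Hello"'

# Cloud Event, binary mode: attributes travel as Ce-* headers
curl http://localhost:8080 \
  -H "Ce-Id: 1234" -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: defaultChain" -H "Ce-Source: curl" \
  -H "Content-Type: application/json" -d '"Hello"'

# Cloud Event, structured mode: the whole event is one JSON body
curl http://localhost:8080 \
  -H "Content-Type: application/cloudevents+json" \
  -d '{"specversion":"1.0","id":"1234","type":"defaultChain","source":"curl","data":"Hello"}'
EOF
```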
- -So, if you open up the unit test code in {quickstarts-tree-url}/funqy-quickstarts/funqy-knative-events-quickstart/src/test/java/org/acme/funqy/FunqyTest.java[FunqyTest.java] -you'll see that it's simply using RestAssured to make HTTP invocations to test the functions. - -Funqy also works with Quarkus Dev mode! - -== Build the Project - -First build the Java artifacts: - -include::includes/devtools/build.adoc[] - -Next, a Docker image is required by Knative, so you'll need to build one: - -[source,bash] ---- -docker build -f src/main/docker/Dockerfile.jvm -t yourAccountName/funqy-knative-events-quickstart . ---- - -Make sure to replace `yourAccountName` with your Docker or Quay account name when you run `docker build`. The -Dockerfile is a standard Quarkus Dockerfile. No special Knative magic. - -Push your image to Docker Hub or Quay: - -[source,bash] ---- -docker push yourAccountName/funqy-knative-events-quickstart ---- - -Again, make sure to replace `yourAccountName` with your Docker or Quay account name when you run `docker push`. - -== Deploy to Kubernetes/OpenShift - -The first step is to set up the broker in our namespace. -Following is an example command from the Knative CLI: - -[source,bash] ---- -kn broker create default \ - --namespace knativetutorial ---- - -The broker we have created is called `default`; this broker will receive the Cloud Events. -The broker is also referenced in the function YAML files. - -The second step is to define a Kubernetes/OpenShift service to point to the Docker image you created and pushed -during build.
Take a look at {quickstarts-tree-url}/funqy-quickstarts/funqy-knative-events-quickstart/src/main/k8s/funqy-service.yaml[funqy-service.yaml] - -[source, yaml] ----- -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: - name: funqy-knative-events-quickstart -spec: - template: - metadata: - name: funqy-knative-events-quickstart-v1 - annotations: - autoscaling.knative.dev/target: "1" - spec: - containers: - - image: docker.io/yourAccountName/funqy-knative-events-quickstart ----- - -This is a standard Kubernetes service definition YAML file. - -NOTE: Make sure you change the image URL to point to the image you built and pushed earlier! - -For our quickstart, one Kubernetes service will contain all functions. There's no reason you couldn't break up this -quickstart into multiple different projects and deploy a service for each function. For simplicity, and to show that you -don't have to have a deployment per function, the quickstart combines everything into one project, image, and service. - -Deploy the service: - -[source,bash] ----- -kubectl apply -n knativetutorial -f src/main/k8s/funqy-service.yaml ----- - -The next step is to deploy Knative Event triggers for each of the event types. As noted in the code section, each -Funqy function is mapped to a specific Cloud Event type. You must create Knative Event triggers that map a Cloud Event -and route it to a specific Kubernetes service. We have 4 different triggers. 
{quickstarts-tree-url}/funqy-quickstarts/funqy-knative-events-quickstart/src/main/k8s/defaultChain-trigger.yaml[defaultChain-trigger.yaml]
[source, yaml]
----
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: defaultchain
spec:
  broker: default
  filter:
    attributes:
      type: defaultChain
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: funqy-knative-events-quickstart
----

The `spec:filter:attributes:type` maps a Cloud Event type to the Kubernetes service defined in `spec:subscriber:ref`.
When a Cloud Event is pushed to the Broker, it will trigger the spin-up of the service mapped to that event.

There's a trigger YAML file for each of our 4 Funqy functions. Deploy them all:

[source,bash]
----
kubectl apply -n knativetutorial -f src/main/k8s/defaultChain-trigger.yaml
kubectl apply -n knativetutorial -f src/main/k8s/configChain-trigger.yaml
kubectl apply -n knativetutorial -f src/main/k8s/annotatedChain-trigger.yaml
kubectl apply -n knativetutorial -f src/main/k8s/lastChainLink-trigger.yaml
----

== Run the demo

You'll need two different terminal windows: one to do a curl request to the Broker, the other to watch the pod log
files so you can see the messages flowing through the Funqy function event chain.

Make sure you have the `stern` tool installed. See the Knative Tutorial setup for information on that. Run `stern`
to watch the logs output by our Funqy deployment:

[source,bash]
----
stern funq user-container
----

Open a separate terminal. You'll first need to learn the URL of the broker. Execute this command to find it:

[source,bash]
----
kubectl get broker default -o jsonpath='{.status.address.url}'
----

This will output a URL similar to `http://broker-ingress.knative-eventing.svc.cluster.local/knativetutorial/default`. Remember this URL.
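The `src/main/k8s/curler.yaml` file used in the next step is not shown in this guide; a minimal pod definition along these lines would work (the image and command are illustrative assumptions, not the quickstart's exact contents):

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: curler
spec:
  containers:
  - name: curler
    # any image that ships curl and a shell will do
    image: fedora:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
----

The long `sleep` simply keeps the pod alive so you can `exec` into it and run curl by hand.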
Next, we need to get a shell inside our Kubernetes cluster so that we can send a curl request to our broker.
The following command creates a simple pod from which we can curl our functions.

[source,bash]
----
kubectl -n knativetutorial apply -f src/main/k8s/curler.yaml
----

You might need to wait a couple of seconds until the curler pod comes up. Run the following to get bash access to the curler pod:

[source,bash]
----
kubectl -n knativetutorial exec -it curler -- /bin/bash
----

You will now be in a shell within the Kubernetes cluster. Within the shell, execute this curl command; the broker address is an example and might differ based on your project or broker name.

[source,bash]
----
curl -v "http://default-broker.knativetutorial.svc.cluster.local" \
-X POST \
-H "Ce-Id: 1234" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: defaultChain" \
-H "Ce-Source: curl" \
-H "Content-Type: application/json" \
-d '"Start"'
----

This posts a Knative Event to the broker, which triggers the `defaultChain` function. As discussed earlier, the output
of `defaultChain` triggers an event that is posted to `configChain`, which triggers an event posted to `annotatedChain`, then
finally to the `lastChainLink` function. You can see this flow in your `stern` window. Something like this should
be output:
- -[source, subs=attributes+] ----- -funqy-knative-events-quickstart-v1-deployment-59bb88bcf4-9jwdx user-container 2020-05-12 13:44:02,256 INFO [org.acm.fun.SimpleFunctionChain] (executor-thread-1) *** defaultChain *** -funqy-knative-events-quickstart-v1-deployment-59bb88bcf4-9jwdx user-container 2020-05-12 13:44:02,365 INFO [org.acm.fun.SimpleFunctionChain] (executor-thread-2) *** configChain *** -funqy-knative-events-quickstart-v1-deployment-59bb88bcf4-9jwdx user-container 2020-05-12 13:44:02,394 INFO [org.acm.fun.SimpleFunctionChain] (executor-thread-1) *** annotatedChain *** -funqy-knative-events-quickstart-v1-deployment-59bb88bcf4-9jwdx user-container 2020-05-12 13:44:02,466 INFO [org.acm.fun.SimpleFunctionChain] (executor-thread-2) *** lastChainLink *** -funqy-knative-events-quickstart-v1-deployment-59bb88bcf4-9jwdx user-container 2020-05-12 13:44:02,467 INFO [org.acm.fun.SimpleFunctionChain] (executor-thread-2) Start::defaultChain::configChain::annotatedChain::lastChainLink ----- diff --git a/_versions/2.7/guides/funqy.adoc b/_versions/2.7/guides/funqy.adoc deleted file mode 100644 index 03373bdec8d..00000000000 --- a/_versions/2.7/guides/funqy.adoc +++ /dev/null @@ -1,189 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Funqy - -include::./attributes.adoc[] -:extension-status: preview - -Quarkus Funqy is part of Quarkus's serverless strategy and aims to provide a portable Java API to write functions -deployable to various FaaS environments like AWS Lambda, Azure Functions, Google Cloud Functions, Knative, and Knative Events (Cloud Events). -It is also usable as a standalone service. - -Because Funqy is an abstraction that spans multiple different cloud/function providers -and protocols it has to be a very simple API and thus, might not have all the features you are used -to in other remoting abstractions. 
A nice side effect, though, is that Funqy is as optimized and
as small as possible. This means that because Funqy sacrifices a little bit on flexibility, you'll
get a framework that has little to no overhead.

== Funqy Basics

The Funqy API is simple. Annotate a method with `@Funq`. This method may only have one optional input parameter
and may or may not return a response.

[source, java]
----
import io.quarkus.funqy.Funq;

public class GreetingFunction {
    @Funq
    public String greet(String name) {
        return "Hello " + name;
    }
}
----

Java classes can also be used as input and output and must follow the Java bean convention and have
a default constructor. The Java type that is declared as the parameter or return type is the type that will be
expected by the Funqy runtime. Funqy does type introspection at build time to speed up boot time, so any derived types
will not be noticed by the Funqy marshalling layer at runtime.

Here's an example of using a POJO as input and output types.

[source, java]
----
public class GreetingFunction {
    public static class Friend {
        String name;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static class Greeting {
        String msg;

        public Greeting() {}
        public Greeting(String msg) { this.msg = msg; }

        public String getMessage() { return msg; }
        public void setMessage(String msg) { this.msg = msg; }
    }

    @Funq
    public Greeting greet(Friend friend) {
        return new Greeting("Hello " + friend.getName());
    }
}
----

== Async Reactive Types

Funqy supports the https://smallrye.io/smallrye-mutiny[SmallRye Mutiny] `Uni` reactive type as a return type. The only requirement is that
the `Uni` must fill out the generic type.

[source, java]
----
import io.quarkus.funqy.Funq;
import io.smallrye.mutiny.Uni;

public class GreetingFunction {

    @Funq
    public Uni<String> reactiveGreeting(String name) {
        ...
    }
}
----

== Function Names

The function name defaults to the method name and is case sensitive. If you want your function referenced by a different name,
parameterize the `@Funq` annotation as follows:

[source, java]
----
import io.quarkus.funqy.Funq;

public class GreetingFunction {

    @Funq("HelloWorld")
    public String greet(String name) {
        return "Hello " + name;
    }
}
----

== Funqy DI

Each Funqy Java class is a Quarkus ArC component and supports dependency injection through
CDI or Spring DI. Spring DI requires including the `quarkus-spring-di` dependency in your build.

The default object lifecycle for a Funqy class is `@Dependent`.

[source, java]
----
import io.quarkus.funqy.Funq;

import javax.inject.Inject;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class GreetingFunction {

    @Inject
    GreetingService service;

    @Funq
    public Greeting greet(Friend friend) {
        Greeting greeting = new Greeting();
        greeting.setMessage(service.greet(friend.getName()));
        return greeting;
    }
}
----

== Context injection

The Funqy API will usually not allow you to inject or use abstractions that
are specific to a protocol (e.g. HTTP) or function API (e.g. AWS Lambda). There are
exceptions to this rule, though, and you may be able to inject
contextual information that is specific to the environment you are deploying in.

NOTE: We do not recommend injecting contextual information specific to a runtime. Keep your functions portable.

Contextual information is injected via the `@Context` annotation, which can be used on a function parameter
or a class field.
A good example is the `io.quarkus.funqy.knative.events.CloudEvent` interface that comes with our Funqy
Knative Cloud Events integration:

[source, java]
----
import io.quarkus.funqy.Funq;
import io.quarkus.funqy.Context;
import io.quarkus.funqy.knative.events.CloudEvent;

public class GreetingFunction {

    @Funq
    public Greeting greet(Friend friend, @Context CloudEvent eventInfo) {
        System.out.println("Received greeting request from: " + eventInfo.source());

        Greeting greeting = new Greeting();
        greeting.setMessage("Hello " + friend.getName());
        return greeting;
    }
}
----

== Should I Use Funqy?

REST over HTTP has become a very common way to write services over the past decade. While Funqy
has an HTTP binding, it is not a replacement for REST. Because Funqy has to work across a variety
of protocols and function cloud platforms, it is very minimalistic and constrained. For example, if you
use Funqy you lose the ability to link (think URIs) to the data your functions spit out. You also
lose the ability to leverage cool HTTP features like `cache-control` and conditional GETs. Many
developers will be ok with that, as many won't be using these REST/HTTP features or styles. You'll
have to make the decision on what camp you are in. Quarkus does support REST integration (through JAX-RS,
Spring MVC, Vert.x Web, and Servlet) with
various cloud/function providers, but there are some disadvantages of using that approach as well. For example,
if you want to do xref:amazon-lambda-http.adoc[HTTP with AWS Lambda], this requires you to use the AWS API Gateway, which may
slow down deployment and cold start time or even cost you more.

The purpose of Funqy is to allow you to write cross-provider functions so that you can move
off of your current function provider if, for instance, they start charging you a lot more for their
service.
Another reason you might not want to use Funqy is if you need access to specific APIs of the
target function environment. For example, developers often want access to the AWS Context on
Lambda. In this case, we tell them they may be better off using the xref:amazon-lambda.adoc[Quarkus Amazon Lambda] integration instead.

diff --git a/_versions/2.7/guides/gcp-functions-http.adoc b/_versions/2.7/guides/gcp-functions-http.adoc
deleted file mode 100644
index 8781f484084..00000000000
--- a/_versions/2.7/guides/gcp-functions-http.adoc
+++ /dev/null
@@ -1,223 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Google Cloud Functions (Serverless) with RESTEasy, Undertow, or Reactive Routes
:extension-status: preview

include::./attributes.adoc[]

The `quarkus-google-cloud-functions-http` extension allows you to write microservices with RESTEasy (JAX-RS),
Undertow (Servlet), Reactive Routes, or xref:funqy-http.adoc[Funqy HTTP], and make these microservices deployable to the Google Cloud Functions runtime.

One Google Cloud Functions deployment can represent any number of JAX-RS, Servlet, Reactive Routes, or xref:funqy-http.adoc[Funqy HTTP] endpoints.

include::./status-include.adoc[]

== Prerequisites

:prerequisites-no-graalvm:
include::includes/devtools/prerequisites.adoc[]
* https://cloud.google.com/[A Google Cloud Account]. Free accounts work.
* https://cloud.google.com/sdk[Cloud SDK CLI Installed]

== Solution

This guide walks you through generating a sample project followed by creating four HTTP endpoints
written with JAX-RS APIs, Servlet APIs, Reactive Routes, or xref:funqy-http.adoc[Funqy HTTP] APIs. Once built, you will be able to deploy
the project to Google Cloud.

If you don't want to follow all these steps, you can go right to the completed example.
Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `google-cloud-functions-http-quickstart` {quickstarts-tree-url}/google-cloud-functions-http-quickstart[directory].

== Creating the Maven Deployment Project

Create an application with the `quarkus-google-cloud-functions-http` extension.
You can use the following Maven command to create it:

:create-app-artifact-id: google-cloud-functions-http
:create-app-extensions: resteasy,google-cloud-functions-http,resteasy-jackson,undertow,reactive-routes,funqy-http
include::includes/devtools/create-app.adoc[]

== Login to Google Cloud

Logging in to Google Cloud is necessary for deploying the application. It can be done as follows:

[source,bash,subs=attributes+]
----
gcloud auth login
----

== Creating the endpoints

For this example project, we will create four endpoints: one for RESTEasy (JAX-RS), one for Undertow (Servlet),
one for Reactive Routes, and one for xref:funqy-http.adoc[Funqy HTTP].

[NOTE]
====
These various endpoints are for demonstration purposes.
For real-life applications, you should choose one of these technologies and stick to it.
====

If you don't need endpoints of each type, you can remove the corresponding extensions from your `pom.xml`.
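For example, if you decided to keep only the JAX-RS endpoint, the extension dependencies in your `pom.xml` could be trimmed down to something like the following sketch (the exact list depends on which extensions you selected at creation time):

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-google-cloud-functions-http</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
----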
- -=== The JAX-RS endpoint - -[source,java] ----- -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/hello") -public class GreetingResource { - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return "hello"; - } -} ----- - -=== The Servlet endpoint - -[source,java] ----- -import java.io.IOException; - -import javax.servlet.ServletException; -import javax.servlet.annotation.WebServlet; -import javax.servlet.http.HttpServlet; -import javax.servlet.http.HttpServletRequest; -import javax.servlet.http.HttpServletResponse; - -@WebServlet(name = "ServletGreeting", urlPatterns = "/servlet/hello") -public class GreetingServlet extends HttpServlet { - @Override - protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { - resp.setStatus(200); - resp.addHeader("Content-Type", "text/plain"); - resp.getWriter().write("hello"); - } - - @Override - protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { - String name = req.getReader().readLine(); - resp.setStatus(200); - resp.addHeader("Content-Type", "text/plain"); - resp.getWriter().write("hello " + name); - } -} ----- - -=== The Reactive Routes endpoint - -[source,java] ----- -import static io.quarkus.vertx.web.Route.HttpMethod.GET; - -import io.quarkus.vertx.web.Route; -import io.vertx.ext.web.RoutingContext; - -public class GreetingRoutes { - @Route(path = "/vertx/hello", methods = GET) - void hello(RoutingContext context) { - context.response().headers().set("Content-Type", "text/plain"); - context.response().setStatusCode(200).end("hello"); - } -} ----- - -=== The Funqy HTTP endpoint - -[source,java] ----- -import io.quarkus.funqy.Funq; - -public class GreetingFunqy { - @Funq - public String funqy() { - return "Make it funqy"; - } -} ----- - -== Build and Deploy to Google Cloud - -NOTE: Quarkus forces a packaging of type 
`uber-jar` for your function, as Google Cloud Functions deployment requires a single JAR.

Package your application using the standard `mvn clean package` command.
The result of the previous command is a single JAR file inside the `target/deployment` directory that contains the classes and the dependencies of the project.

Then you will be able to use `gcloud` to deploy your function to Google Cloud.

[source,bash]
----
gcloud functions deploy quarkus-example-http \
  --entry-point=io.quarkus.gcp.functions.http.QuarkusHttpFunction \
  --runtime=java11 --trigger-http --source=target/deployment
----

[IMPORTANT]
====
The entry point must always be set to `io.quarkus.gcp.functions.http.QuarkusHttpFunction` as this is the class that integrates Cloud Functions with Quarkus.
====

[WARNING]
====
The first time you launch this command, you may get the following error message:
[source]
----
ERROR: (gcloud.functions.deploy) OperationError: code=7, message=Build Failed: Cloud Build has not been used in project before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudbuild.googleapis.com/overview?project= then retry.
----
This means that Cloud Build is not activated yet. To overcome this error, open the URL shown in the error, follow the instructions, and then wait a few minutes before retrying the command.
====


This command will output a `httpsTrigger.url` that points to your function.

You can then call your endpoints via:

- For JAX-RS: {httpsTrigger.url}/hello
- For servlet: {httpsTrigger.url}/servlet/hello
- For Reactive Routes: {httpsTrigger.url}/vertx/hello
- For Funqy: {httpsTrigger.url}/funqy

== Testing locally

The easiest way to test your function locally is using the Cloud Function invoker JAR.
You can download it via Maven using the following command:

[source,bash]
----
mvn dependency:copy \
  -Dartifact='com.google.cloud.functions.invoker:java-function-invoker:1.0.2' \
  -DoutputDirectory=.
----

Before using the invoker, you first need to build your function via `mvn package`.

Then you can use it to launch your function locally.

[source,bash]
----
java -jar java-function-invoker-1.0.2.jar \
  --classpath target/deployment/google-cloud-functions-http-1.0.0-SNAPSHOT-runner.jar \
  --target io.quarkus.gcp.functions.http.QuarkusHttpFunction
----

IMPORTANT: The `--classpath` parameter needs to be set to the previously packaged JAR that contains your function class and all Quarkus related classes.

Your endpoints will be available on http://localhost:8080.

== What's next?

You can use our xref:funqy-gcp-functions.adoc[Google Cloud Functions Funqy binding] to use Funqy,
a provider-agnostic function-as-a-service framework that allows you to deploy HTTP functions or background functions to Google Cloud.

diff --git a/_versions/2.7/guides/gcp-functions.adoc b/_versions/2.7/guides/gcp-functions.adoc
deleted file mode 100644
index 2c409011555..00000000000
--- a/_versions/2.7/guides/gcp-functions.adoc
+++ /dev/null
@@ -1,364 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Google Cloud Functions (Serverless)
:extension-status: preview

include::./attributes.adoc[]

The `quarkus-google-cloud-functions` extension allows you to use Quarkus to build your Google Cloud Functions.
Your functions can use injection annotations from CDI or Spring and other Quarkus facilities as you need them.

include::./status-include.adoc[]

== Prerequisites

:prerequisites-no-graalvm:
include::includes/devtools/prerequisites.adoc[]
* https://cloud.google.com/[A Google Cloud Account]. Free accounts work.
* https://cloud.google.com/sdk[Cloud SDK CLI Installed]

== Solution

This guide walks you through generating a sample project followed by creating multiple functions showing how to implement `HttpFunction`, `BackgroundFunction` and `RawBackgroundFunction` in Quarkus.
Once built, you will be able to deploy the project to Google Cloud.

If you don't want to follow all these steps, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `google-cloud-functions-quickstart` {quickstarts-tree-url}/google-cloud-functions-quickstart[directory].


== Creating the Maven Deployment Project

Create an application with the `quarkus-google-cloud-functions` extension.
You can use the following Maven command to create it:

:create-app-artifact-id: google-cloud-functions
:create-app-extensions: google-cloud-functions
include::includes/devtools/create-app.adoc[]

Now, let's remove the `index.html` from `resources/META-INF/resources`, or it will be picked up instead of your function.

== Login to Google Cloud

Logging in to Google Cloud is necessary for deploying the application. It can be done as follows:

[source,bash,subs=attributes+]
----
gcloud auth login
----

== Creating the functions

For this example project, we will create three functions: one `HttpFunction`, one `BackgroundFunction` (Storage event), and one `RawBackgroundFunction` (PubSub event).

== Choose Your Function

The `quarkus-google-cloud-functions` extension scans your project for a class that directly implements the Google Cloud `HttpFunction`, `BackgroundFunction` or `RawBackgroundFunction` interface.
It must find a class in your project that implements one of these interfaces or it will throw a build time failure.
If it finds more than one function class, a build time exception will also be thrown.
Sometimes, though, you might have a few related functions that share code, and creating multiple Maven modules is just
overhead you don't want. The extension allows you to bundle multiple functions in one
project and use configuration or an environment variable to pick the function you want to deploy.

To configure the name of the function, you can use the following configuration property:

[source,properties,subs=attributes+]
----
quarkus.google-cloud-functions.function=test
----

The `quarkus.google-cloud-functions.function` property tells Quarkus which function to deploy. This can be overridden
with an environment variable too.

The CDI name of the function class must match the value specified within the `quarkus.google-cloud-functions.function` property.
This must be done using the `@Named` annotation.

[source, java]
----
@Named("test")
public class TestHttpFunction implements HttpFunction {
}
----


=== The HttpFunction

[source,java]
----
import java.io.Writer;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.inject.Named;
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;
import io.quarkus.gcp.function.test.service.GreetingService;

@Named("httpFunction") // <1>
@ApplicationScoped // <2>
public class HttpFunctionTest implements HttpFunction { // <3>
    @Inject GreetingService greetingService; // <4>

    @Override
    public void service(HttpRequest httpRequest, HttpResponse httpResponse) throws Exception { // <5>
        Writer writer = httpResponse.getWriter();
        writer.write(greetingService.hello());
    }
}
----
<1> The `@Named` annotation allows you to name the CDI bean so it can be referenced by the `quarkus.google-cloud-functions.function` property; this is optional.
<2> The function must be a CDI bean.
<3> This is a regular Google Cloud Function implementation, so it needs to implement `com.google.cloud.functions.HttpFunction`.
<4> Injection works inside your function.
<5> This is a standard Google Cloud Function implementation, nothing fancy here.

=== The BackgroundFunction

This `BackgroundFunction` is triggered by a Storage event; you can use any event supported by Google Cloud instead.

[source,java]
----
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.inject.Named;
import com.google.cloud.functions.BackgroundFunction;
import com.google.cloud.functions.Context;
import io.quarkus.gcp.function.test.service.GreetingService;


@Named("storageTest") // <1>
@ApplicationScoped // <2>
public class BackgroundFunctionStorageTest implements BackgroundFunction<StorageEvent> { // <3>
    @Inject GreetingService greetingService; // <4>

    @Override
    public void accept(StorageEvent event, Context context) throws Exception { // <5>
        System.out.println("Receive event: " + event);
        System.out.println("Be polite, say " + greetingService.hello());
    }

    public static class StorageEvent { // <6>
        public String name;
    }
}
----
<1> The `@Named` annotation allows you to name the CDI bean so it can be referenced by the `quarkus.google-cloud-functions.function` property; this is optional.
<2> The function must be a CDI bean.
<3> This is a regular Google Cloud Function implementation, so it needs to implement `com.google.cloud.functions.BackgroundFunction`.
<4> Injection works inside your function.
<5> This is a standard Google Cloud Function implementation, nothing fancy here.
<6> This is the class the event will be deserialized to.

=== The RawBackgroundFunction

This `RawBackgroundFunction` is triggered by a PubSub event; you can use any event supported by Google Cloud instead.
[source,java]
----
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.inject.Named;
import com.google.cloud.functions.Context;
import com.google.cloud.functions.RawBackgroundFunction;
import io.quarkus.gcp.function.test.service.GreetingService;

@Named("rawPubSubTest") // <1>
@ApplicationScoped // <2>
public class RawBackgroundFunctionPubSubTest implements RawBackgroundFunction { // <3>
    @Inject GreetingService greetingService; // <4>

    @Override
    public void accept(String event, Context context) throws Exception { // <5>
        System.out.println("PubSub event: " + event);
        System.out.println("Be polite, say " + greetingService.hello());
    }
}
----
<1> The `@Named` annotation allows you to name the CDI bean so it can be referenced by the `quarkus.google-cloud-functions.function` property; this is optional.
<2> The function must be a CDI bean.
<3> This is a regular Google Cloud Function implementation, so it needs to implement `com.google.cloud.functions.RawBackgroundFunction`.
<4> Injection works inside your function.
<5> This is a standard Google Cloud Function implementation, nothing fancy here.

== Build and Deploy to Google Cloud

To build your application, you can package it using the standard command:

include::includes/devtools/build.adoc[]

The result of the previous command is a single JAR file inside the `target/deployment` directory that contains the classes and dependencies of the project.

Then you will be able to use the `gcloud functions deploy` command to deploy your function to Google Cloud.

[WARNING]
====
The first time you launch this command, you may get the following error message:
[source]
----
ERROR: (gcloud.functions.deploy) OperationError: code=7, message=Build Failed: Cloud Build has not been used in project before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudbuild.googleapis.com/overview?project= then retry.
----- -This means that Cloud Build is not activated yet. To overcome this error, open the URL shown in the error, follow the instructions and then wait a few minutes before retrying the command. -==== - -=== The HttpFunction - -This is an example command to deploy your `HttpFunction` to Google Cloud: - -[source,bash] ----- -gcloud functions deploy quarkus-example-http \ - --entry-point=io.quarkus.gcp.functions.QuarkusHttpFunction \ - --runtime=java11 --trigger-http --source=target/deployment ----- - -[IMPORTANT] -==== -The entry point must always be set to `io.quarkus.gcp.functions.QuarkusHttpFunction` as this is the class that integrates Cloud Functions with Quarkus. -==== - -This command will give you as output a `httpsTrigger.url` that points to your function. - -=== The BackgroundFunction - -Before deploying your function, you need to create a bucket. - -[source,bash] ----- -gsutil mb gs://quarkus-hello ----- - -This is an example command to deploy your `BackgroundFunction` to Google Cloud, as the function is triggered by a Storage event, -it needs to use `--trigger-event google.storage.object.finalize` and the `--trigger-resource` parameter with the name of a previously created bucket: - -[source,bash] ----- -gcloud functions deploy quarkus-example-storage \ - --entry-point=io.quarkus.gcp.functions.QuarkusBackgroundFunction \ - --trigger-resource quarkus-hello --trigger-event google.storage.object.finalize \ - --runtime=java11 --source=target/deployment ----- - -[IMPORTANT] -==== -The entry point must always be set to `io.quarkus.gcp.functions.QuarkusBackgroundFunction` as this is the class that integrates Cloud Functions with Quarkus. 
====

To trigger the event, you can send a file to the GCS `quarkus-hello` bucket, or you can use gcloud to simulate one:

[source,bash]
----
gcloud functions call quarkus-example-storage --data '{"name":"test.txt"}'
----

NOTE: `--data` contains the GCS event; it is a JSON document with the name of the file added to the bucket.

=== The RawBackgroundFunction

This is an example command to deploy your `RawBackgroundFunction` to Google Cloud. As the function is triggered by a PubSub event,
it needs to use `--trigger-event google.pubsub.topic.publish` and the `--trigger-resource` parameter with the name of a previously created topic:

[source,bash]
----
gcloud functions deploy quarkus-example-pubsub \
  --entry-point=io.quarkus.gcp.functions.QuarkusBackgroundFunction \
  --runtime=java11 --trigger-resource hello_topic --trigger-event google.pubsub.topic.publish --source=target/deployment
----

[IMPORTANT]
====
The entry point must always be set to `io.quarkus.gcp.functions.QuarkusBackgroundFunction` as this is the class that integrates Cloud Functions with Quarkus.
====

To trigger the event, you can publish a message to the `hello_topic` topic, or you can use gcloud to simulate one:

[source,bash]
----
gcloud functions call quarkus-example-pubsub --data '{"data":{"greeting":"world"}}'
----

== Testing locally

The easiest way to test your function locally is using the Cloud Function invoker JAR.

You can download it via Maven using the following command:

[source,bash]
----
mvn dependency:copy \
  -Dartifact='com.google.cloud.functions.invoker:java-function-invoker:1.0.2' \
  -DoutputDirectory=.
----

Before using the invoker, you first need to build your function via:

include::includes/devtools/build.adoc[]

=== The HttpFunction

To test an `HttpFunction`, you can use this command to launch your function locally.
-
-[source,bash]
-----
-java -jar java-function-invoker-1.0.2.jar \
-  --classpath target/google-cloud-functions-1.0.0-SNAPSHOT-runner.jar \
-  --target io.quarkus.gcp.functions.QuarkusHttpFunction
-----
-
-IMPORTANT: The `--classpath` parameter needs to be set to the previously packaged JAR that contains your function class and all Quarkus related classes.
-
-Your endpoints will be available on http://localhost:8080.
-
-=== The BackgroundFunction
-
-For background functions, you launch the invoker with a target class of `io.quarkus.gcp.functions.QuarkusBackgroundFunction`.
-
-[source,bash]
-----
-java -jar java-function-invoker-1.0.2.jar \
-  --classpath target/google-cloud-functions-1.0.0-SNAPSHOT-runner.jar \
-  --target io.quarkus.gcp.functions.QuarkusBackgroundFunction
-----
-
-IMPORTANT: The `--classpath` parameter needs to be set to the previously packaged JAR that contains your function class and all Quarkus related classes.
-
-Then you can call your background function via an HTTP call with a payload containing the event:
-
-[source,bash]
-----
-curl localhost:8080 -d '{"data":{"name":"hello.txt"}}'
-----
-
-This will call your Storage background function with the event `{"name":"hello.txt"}`, that is, an event on the `hello.txt` file.
-
-=== The RawBackgroundFunction
-
-For raw background functions, you also launch the invoker with a target class of `io.quarkus.gcp.functions.QuarkusBackgroundFunction`.
-
-[source,bash]
-----
-java -jar java-function-invoker-1.0.2.jar \
-  --classpath target/google-cloud-functions-1.0.0-SNAPSHOT-runner.jar \
-  --target io.quarkus.gcp.functions.QuarkusBackgroundFunction
-----
-
-IMPORTANT: The `--classpath` parameter needs to be set to the previously packaged JAR that contains your function class and all Quarkus related classes.
- -Then you can call your background function via an HTTP call with a payload containing the event: - -[source,bash] ----- -curl localhost:8080 -d '{"data":{"greeting":"world"}}' ----- - -This will call your PubSub background function with a PubSubMessage `{"greeting":"world"}`. diff --git a/_versions/2.7/guides/getting-started-reactive.adoc b/_versions/2.7/guides/getting-started-reactive.adoc deleted file mode 100644 index 376f2dd2fea..00000000000 --- a/_versions/2.7/guides/getting-started-reactive.adoc +++ /dev/null @@ -1,345 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Getting Started With Reactive - -include::./attributes.adoc[] - -_Reactive_ is a set of principles to build robust, efficient, and concurrent applications and systems. -These principles let you handle more load than traditional approaches while using the resources (CPU and memory) more efficiently while also reacting to failures gracefully. - -Quarkus is a _Reactive_ framework. -Since the beginning, _Reactive_ has been an essential tenet of the Quarkus architecture. -It includes many reactive features and offers a broad ecosystem. - -This guide is not an in-depth article about what _Reactive_ is and how Quarkus enables reactive architectures. -If you want to read more about these topics, refer to the xref:quarkus-reactive-architecture.adoc[Reactive Architecture guide], which provides an overview of the Quarkus reactive ecosystem. - -In this guide, we will get you started with some reactive features of Quarkus. -We are going to implement a simple CRUD application. -Yet, unlike in the xref:hibernate-orm-panache.adoc[Hibernate with Panache guide], it uses the reactive features of Quarkus. 
-
-This guide will help you with:
-
-* Bootstrapping a reactive CRUD application with Quarkus
-* Using Hibernate Reactive with Panache to interact with a database in a reactive fashion
-* Using RESTEasy Reactive to implement an HTTP API while enforcing the reactive principle
-* Packaging and running the application
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-
-NOTE: Verify that Maven is using the Java version you expect.
-If you have multiple JDKs installed, make sure Maven is using the expected one.
-You can verify which JDK Maven uses by running `mvn --version`.
-
-== Imperative vs. Reactive: a question of threads
-
-As mentioned above, in this guide, we are going to implement a reactive CRUD application.
-But you may wonder what the differences and benefits are in comparison to the traditional and imperative model.
-
-To better understand the contrast, we need to explain the difference between the reactive and imperative execution models.
-_Reactive_ is not just a different execution model, and understanding that distinction is essential to follow this guide.
-
-In the traditional and imperative approach, frameworks assign a thread to handle each request.
-So, the whole processing of the request runs on this worker thread.
-This model does not scale very well.
-Indeed, to handle multiple concurrent requests, you need multiple threads; your application concurrency is therefore constrained by the number of threads.
-In addition, these threads are blocked as soon as your code interacts with remote services.
-So, it leads to inefficient usage of resources, as you may need more threads, and each thread, being mapped to an OS thread, has a cost in terms of memory and CPU.
-
-image::blocking-threads.png[alt=Imperative Execution Model and Worker Threads, width=50%, align=center]
-
-On the other side, the reactive model relies on non-blocking I/Os and a different execution model.
-Non-blocking I/O provides an efficient way to deal with concurrent I/O.
-A minimal number of threads, called I/O threads, can handle many concurrent I/O operations.
-With such a model, request processing is not delegated to a worker thread but uses these I/O threads directly. This saves memory and CPU, as there is no need to create worker threads to handle the requests.
-It also improves the concurrency, as it removes the constraint on the number of threads.
-Finally, it also improves response time, as it reduces the number of thread switches.
-
-image::reactive-thread.png[alt=Reactive Execution Model and I/O Threads, width=50%, align=center]
-
-
-== From sequential to continuation style
-
-So, with the reactive execution model, the requests are processed using I/O threads.
-But that's not all.
-An I/O thread can handle multiple concurrent requests.
-How? Here is the trick and one of the most significant differences between reactive and imperative.
-
-When processing a request requires interacting with a remote service, like an HTTP API or a database, it does not block the execution while waiting for the response.
-Instead, it schedules the I/O operation and attaches a continuation, i.e., the remaining request processing code.
-This continuation can be passed as a callback (a function invoked with the I/O outcome), or use more advanced constructs such as reactive programming or co-routines.
-Regardless of how the continuation is expressed, the essential aspect is the release of the I/O thread and, as a consequence, the fact that this thread can be used to process another request.
-When the scheduled I/O completes, the I/O thread executes the continuation, and the processing of the pending request continues.
-
-So, unlike the imperative model, where I/O blocks the execution, reactive switches to a continuation-based design, where the I/O threads are released, and the continuation is invoked when the I/Os complete.
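To make the continuation idea concrete, here is a plain-JDK sketch in which `CompletableFuture` stands in for an asynchronous I/O result. This illustrates the style only; it is not the Quarkus or Mutiny API, and `fetchAsync` is a made-up name:

```java
import java.util.concurrent.CompletableFuture;

public class ContinuationStyle {

    // Stand-in for a non-blocking I/O call: the result arrives later,
    // and the calling thread is not blocked while waiting.
    static CompletableFuture<String> fetchAsync() {
        return CompletableFuture.supplyAsync(() -> "response");
    }

    public static void main(String[] args) {
        // The continuation (thenApply) is attached to the pending result;
        // it runs when the simulated I/O completes.
        String result = fetchAsync()
                .thenApply(String::toUpperCase) // continuation step
                .join(); // blocking here is for the demo only; reactive code would not block
        System.out.println(result); // prints "RESPONSE"
    }
}
```

The key point mirrors the text above: nothing waits between scheduling the operation and attaching the continuation, so the same thread is free to serve other requests in the meantime.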
-As a result, the I/O thread can handle multiple concurrent requests, improving the overall concurrency of the application.
-
-But there is a catch.
-We need a way to write continuation-passing code.
-There are many ways of doing this.
-In Quarkus, we propose:
-
-* Mutiny - an intuitive and event-driven reactive programming library
-* Kotlin co-routines - a way to write asynchronous code in a sequential manner
-
-In this guide, we will use Mutiny.
-To learn more about Mutiny, check the xref:mutiny-primer.adoc[Mutiny documentation].
-
-NOTE: Project Loom is coming to the JDK soon and proposes a virtual thread-based model.
-The Quarkus architecture is ready to support Loom as soon as it becomes generally available.
-
-== Bootstrapping the Reactive Fruits application
-
-With this in mind, let's see how we can develop a CRUD application with Quarkus, which will use the I/O thread to handle the HTTP requests, interact with a database, process the result, and write the HTTP response; in other words: a reactive CRUD application.
-
-While we recommend you follow the step-by-step instructions, you can find the final solution on https://github.com/quarkusio/quarkus-quickstarts/tree/main/hibernate-reactive-panache-quickstart.
-
-First, go to https://code.quarkus.io[code.quarkus.io] and select the following extensions:
-
-1. RESTEasy Reactive Jackson
-2. Hibernate Reactive with Panache
-3. Reactive PostgreSQL client
-
-image::reactive-guide-code.png[alt=Extensions to select in https://code.quarkus.io,width=90%,align=center]
-
-The last extension is the reactive database driver for PostgreSQL.
-Hibernate Reactive uses that driver to interact with the database without blocking the caller thread.
-
-Once selected, click on "Generate your application", download the zip file, unzip it and open the code in your favorite IDE.
-
-== Reactive Panache Entity
-
-Let's start with the `Fruit` entity. Create the `src/main/java/org/acme/hibernate/orm/panache/Fruit.java` file with the following content:
-
-[source, java]
-----
-package org.acme.hibernate.orm.panache;
-
-import javax.persistence.Cacheable;
-import javax.persistence.Column;
-import javax.persistence.Entity;
-
-import io.quarkus.hibernate.reactive.panache.PanacheEntity; // <1>
-
-@Entity
-@Cacheable
-public class Fruit extends PanacheEntity {
-
-    @Column(length = 40, unique = true)
-    public String name;
-
-}
-----
-<1> Make sure you import the reactive variant of `PanacheEntity`.
-
-This class represents fruits.
-It's a straightforward entity with a single field (`name`).
-Note that it uses `io.quarkus.hibernate.reactive.panache.PanacheEntity`, the reactive variant of `PanacheEntity`.
-So, behind the scenes, Hibernate uses the execution model we described above.
-It interacts with the database without blocking the thread.
-In addition, this reactive `PanacheEntity` proposes a reactive API.
-We will use this API to implement the REST endpoint.
-
-Before going further, open the `src/main/resources/application.properties` file and add:
-
-[source, properties]
-----
-quarkus.datasource.db-kind=postgresql
-quarkus.hibernate-orm.database.generation=drop-and-create
-----
-
-It instructs the application to use PostgreSQL as the database and to handle the database schema generation.
-
-In the same directory, create an `import.sql` file, which inserts a few fruits, so we don't start with an empty database in dev mode:
-
-[source, text]
-----
-INSERT INTO fruit(id, name) VALUES (nextval('hibernate_sequence'), 'Cherry');
-INSERT INTO fruit(id, name) VALUES (nextval('hibernate_sequence'), 'Apple');
-INSERT INTO fruit(id, name) VALUES (nextval('hibernate_sequence'), 'Banana');
-----
-
-In a terminal, launch the application in dev mode using: `./mvnw quarkus:dev`.
-Quarkus automatically starts a database instance for you and configures the application. Now we only need to implement the HTTP endpoint.
-
-
-== Reactive Resource
-
-Because the interaction with the database is non-blocking and asynchronous, we need to use asynchronous constructs to implement our HTTP resource.
-Quarkus uses Mutiny as its central reactive programming model.
-So, it supports returning Mutiny types (`Uni` and `Multi`) from HTTP endpoints.
-Also, our Fruit Panache entity exposes methods using these types, so we only need to implement the _glue_.
-
-Create the `src/main/java/org/acme/hibernate/orm/panache/FruitResource.java` file with the following content:
-
-[source, java]
-----
-package org.acme.hibernate.orm.panache;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.ws.rs.Path;
-
-@Path("/fruits")
-@ApplicationScoped
-public class FruitResource {
-
-}
-----
-
-Let's start with the `get` method, which returns all the fruits stored in the database.
-In the `FruitResource`, add the following code:
-
-[source, java]
-----
-@GET
-public Uni<List<Fruit>> get() {
-    return Fruit.listAll(Sort.by("name"));
-}
-----
-
-Open http://localhost:8080/fruits to invoke this method:
-
-[source, json]
-----
-[{"id":2,"name":"Apple"},{"id":3,"name":"Banana"},{"id":1,"name":"Cherry"},{"id":4,"name":"peach"}]
-----
-
-We get the expected JSON array.
-RESTEasy Reactive automatically maps the list into a JSON Array, except if instructed otherwise.
-
-Look at the return type; it returns a `Uni` of `List<Fruit>`.
-`Uni` is an asynchronous type.
-It's a bit like a future.
-It's a placeholder that will get its value (item) later.
-When it receives the item (Mutiny says it _emits_ its item), you can attach some behavior.
-That's how we express the continuation: get a uni, and when the uni emits its item, execute the rest of the processing.
-
-NOTE: Reactive developers may wonder why we can't return a stream of fruits directly.
-It tends to be a bad idea when dealing with a database.
-Relational databases do not handle streaming well.
-It’s a problem of protocols not designed for this use case.
-So, to stream rows from the database, you need to keep a connection (and sometimes a transaction) open until all the rows are consumed.
-If you have slow consumers, you break the golden rule of databases: don’t hold connections for too long.
-Indeed, the number of connections is rather low, and having consumers keeping them for too long will dramatically reduce the concurrency of your application.
-So, when possible, use a `Uni<List<Fruit>>` and load the content.
-If you have a large set of results, implement pagination.
-
-Let's continue our API with `getSingle`:
-
-[source, java]
-----
-@GET
-@Path("/{id}")
-public Uni<Fruit> getSingle(Long id) {
-    return Fruit.findById(id);
-}
-----
-
-In this case, we use `Fruit.findById` to retrieve the fruit.
-It returns a `Uni<Fruit>`, which will complete when the database has retrieved the row.
-
-The `create` method allows adding a new fruit to the database:
-
-[source, java]
-----
-@POST
-public Uni<Response> create(Fruit fruit) {
-    return Panache.withTransaction(fruit::persist)
-            .onItem().transform(inserted -> Response.created(URI.create("/fruits/" + inserted.id)).build());
-}
-----
-
-The code is a bit more involved.
-To write to a database, we need a transaction.
-So we use `Panache.withTransaction` to get one (asynchronously) and call the `persist` method when we receive the transaction.
-The `persist` method is also returning a `Uni`.
-This `Uni` emits the result of the insertion of the fruit in the database.
-Once the insertion completes (and that's our continuation), we create a `201 CREATED` response.
-RESTEasy Reactive automatically reads the request body as JSON and creates the `Fruit` instance.
-
-NOTE: The `.onItem().transform(...)` can be replaced with `.map(...)`.
-`map` is a shortcut.
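If the `Uni` chaining in the `create` method feels abstract, here is a rough plain-JDK analogue of that pipeline, with `CompletableFuture` standing in for `Uni`. This is a sketch only: `persist` and the class name are made up for the illustration, and this is not the Panache or Mutiny API:

```java
import java.net.URI;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicLong;

public class CreatePipelineAnalogy {

    static final AtomicLong IDS = new AtomicLong();

    // Stand-in for Panache.withTransaction(fruit::persist): completes
    // later with the generated id of the "inserted" row.
    static CompletableFuture<Long> persist(String name) {
        return CompletableFuture.supplyAsync(IDS::incrementAndGet);
    }

    public static void main(String[] args) {
        // Analogue of .onItem().transform(...): turn the inserted id
        // into the Location URI of the created resource.
        URI location = persist("peach")
                .thenApply(id -> URI.create("/fruits/" + id))
                .join(); // demo only; the reactive pipeline would not block
        System.out.println(location);
    }
}
```

The transformation step runs only once the asynchronous "insertion" has completed, which is exactly the continuation the `create` method attaches with `transform`.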
- -If you have https://curl.se/[curl] on your machine, you can try the endpoint using: - -[source, bash] ----- -> curl --header "Content-Type: application/json" \ - --request POST \ - --data '{"name":"peach"}' \ - http://localhost:8080/fruits ----- - -Following the same ideas, you can implement the other CRUD methods. - -== Testing and Running - -Testing a reactive application is similar to testing a non-reactive one: use the HTTP endpoint and verify the HTTP responses. -The fact that the application is reactive does not change anything. - -In https://github.com/quarkusio/quarkus-quickstarts/blob/main/hibernate-reactive-panache-quickstart/src/test/java/org/acme/hibernate/orm/panache/FruitsEndpointTest.java[FruitsEndpointTest.java] you can see how the test for the fruit application can be implemented. - -Packaging and running the application does not change either. - -You can use the following command as usual: - -include::includes/devtools/build.adoc[] - -or to build a native executable: - -include::includes/devtools/build-native.adoc[] - -You can also package the application in a container. - -To run the application, don’t forget to start a database and provide the configuration to your application. 
- -For example, you can use Docker to run your database: - -[source, bash] ----- -docker run -it --rm=true \ - --name postgres-quarkus -e POSTGRES_USER=quarkus \ - -e POSTGRES_PASSWORD=quarkus -e POSTGRES_DB=fruits \ - -p 5432:5432 postgres:14.1 ----- - -Then, launch the application using: - -[source, bash] ----- -java \ - -Dquarkus.datasource.reactive.url=postgresql://localhost/fruits \ - -Dquarkus.datasource.username=quarkus \ - -Dquarkus.datasource.password=quarkus \ - -jar target/quarkus-app/quarkus-run.jar ----- - -Or, if you packaged your application as native executable, use: - - -[source, bash] ----- -./target/getting-started-with-reactive-runner \ - -Dquarkus.datasource.reactive.url=postgresql://localhost/fruits \ - -Dquarkus.datasource.username=quarkus \ - -Dquarkus.datasource.password=quarkus ----- - -The parameters passed to the application are described in the datasource guide. -There are other ways to configure the application - please check the xref:config-reference.adoc#configuration_sources[configuration guide] to have an overview of the possibilities (such as env variable, .env files and so on). - -== Going further - -This guide is a brief introduction to some reactive features offered by Quarkus. -Quarkus is a reactive framework, and so offers a lot of reactive features. 
- -If you want to continue on this topic check: - -* xref:quarkus-reactive-architecture.adoc[The Quarkus Reactive Architecture] -* xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library] - diff --git a/_versions/2.7/guides/getting-started-testing.adoc b/_versions/2.7/guides/getting-started-testing.adoc deleted file mode 100644 index b4c0637c63a..00000000000 --- a/_versions/2.7/guides/getting-started-testing.adoc +++ /dev/null @@ -1,1410 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Testing Your Application - -include::./attributes.adoc[] - -:toc: macro -:toclevels: 4 -:doctype: book -:icons: font -:docinfo1: - -:numbered: -:sectnums: -:sectnumlevels: 4 - - -Learn how to test your Quarkus Application. -This guide covers: - -* Testing in JVM mode -* Testing in native mode -* Injection of resources into tests - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] -* The completed greeter application from the xref:getting-started.adoc[Getting Started Guide] - -== Architecture - -In this guide, we expand on the initial test that was created as part of the Getting Started Guide. -We cover injection into tests and also how to test native executables. - -NOTE: Quarkus supports Continuous testing, but this is covered by the xref:continuous-testing.adoc[Continuous Testing Guide]. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `getting-started-testing` {quickstarts-tree-url}/getting-started-testing[directory]. - -This guide assumes you already have the completed application from the `getting-started` directory. 
-
-== Recap of HTTP based Testing in JVM mode
-
-If you have started from the Getting Started example you should already have a completed test, including the correct
-`pom.xml` setup.
-
-In the `pom.xml` file you should see two test dependencies:
-
-[source,xml,subs=attributes+]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-junit5</artifactId>
-    <scope>test</scope>
-</dependency>
-<dependency>
-    <groupId>io.rest-assured</groupId>
-    <artifactId>rest-assured</artifactId>
-    <scope>test</scope>
-</dependency>
-----
-
-`quarkus-junit5` is required for testing, as it provides the `@QuarkusTest` annotation that controls the testing framework.
-`rest-assured` is not required but is a convenient way to test HTTP endpoints; we also provide integration that automatically
-sets the correct URL, so no configuration is required.
-
-Because we are using JUnit 5, the version of the https://maven.apache.org/surefire/maven-surefire-plugin/[Surefire Maven Plugin]
-must be set, as the default version does not support JUnit 5:
-
-[source,xml,subs=attributes+]
-----
-<plugin>
-    <artifactId>maven-surefire-plugin</artifactId>
-    <version>${surefire-plugin.version}</version>
-    <configuration>
-        <systemPropertyVariables>
-            <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
-            <maven.home>${maven.home}</maven.home>
-        </systemPropertyVariables>
-    </configuration>
-</plugin>
-----
-
-We also set the `java.util.logging.manager` system property to make sure tests will use the correct log manager and `maven.home` to ensure that custom configuration
-from `${maven.home}/conf/settings.xml` is applied (if any).
- -The project should also contain a simple test: - -[source,java] ----- -package org.acme.getting.started.testing; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import java.util.UUID; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; - -@QuarkusTest -public class GreetingResourceTest { - - @Test - public void testHelloEndpoint() { - given() - .when().get("/hello") - .then() - .statusCode(200) - .body(is("hello")); - } - - @Test - public void testGreetingEndpoint() { - String uuid = UUID.randomUUID().toString(); - given() - .pathParam("name", uuid) - .when().get("/hello/greeting/{name}") - .then() - .statusCode(200) - .body(is("hello " + uuid)); - } - -} ----- - -This test uses HTTP to directly test our REST endpoint. When the test is run the application will be started before -the test is run. - -=== Controlling the test port - -While Quarkus will listen on port `8080` by default, when running tests it defaults to `8081`. This allows you to run -tests while having the application running in parallel. - -[TIP] -.Changing the test port -==== -You can configure the ports used by tests by configuring `quarkus.http.test-port` for HTTP and `quarkus.http.test-ssl-port` for HTTPS in your `application.properties`: -[source] ----- -quarkus.http.test-port=8083 -quarkus.http.test-ssl-port=8446 ----- -`0` will result in the use of a random port (assigned by the operating system). -==== - -Quarkus also provides RestAssured integration that updates the default port used by RestAssured before the tests are run, -so no additional configuration should be required. - -=== Controlling HTTP interaction timeout - -When using REST Assured in your test, the connection and response timeouts are set to 30 seconds. 
-You can override this setting with the `quarkus.http.test-timeout` property:
-
-[source]
-----
-quarkus.http.test-timeout=10s
-----
-
-=== Injecting a URI
-
-It is also possible to directly inject the URL into the test, which can make it easy to use a different client. This is
-done via the `@TestHTTPResource` annotation.
-
-Let's write a simple test that shows this off to load some static resources. First create a simple HTML file in
-`src/main/resources/META-INF/resources/index.html`:
-
-
-[source,xml]
-----
-<html>
-    <head>
-        <title>Testing Guide</title>
-    </head>
-    <body>
-        Information about testing
-    </body>
-</html>
-----
-
-We will create a simple test to ensure that this is being served correctly:
-
-
-[source,java]
-----
-package org.acme.getting.started.testing;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.net.URL;
-import java.nio.charset.StandardCharsets;
-
-import org.junit.jupiter.api.Assertions;
-import org.junit.jupiter.api.Test;
-
-import io.quarkus.test.common.http.TestHTTPResource;
-import io.quarkus.test.junit.QuarkusTest;
-
-@QuarkusTest
-public class StaticContentTest {
-
-    @TestHTTPResource("index.html") // <1>
-    URL url;
-
-    @Test
-    public void testIndexHtml() throws Exception {
-        try (InputStream in = url.openStream()) {
-            String contents = readStream(in);
-            Assertions.assertTrue(contents.contains("Testing Guide"));
-        }
-    }
-
-    private static String readStream(InputStream in) throws IOException {
-        byte[] data = new byte[1024];
-        int r;
-        ByteArrayOutputStream out = new ByteArrayOutputStream();
-        while ((r = in.read(data)) > 0) {
-            out.write(data, 0, r);
-        }
-        return new String(out.toByteArray(), StandardCharsets.UTF_8);
-    }
-}
-----
-<1> This annotation allows you to directly inject the URL of the Quarkus instance; the value of the annotation is the path component of the URL.
-
-For now `@TestHTTPResource` allows you to inject `URI`, `URL` and `String` representations of the URL.
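As an aside, the hand-rolled `readStream` helper above predates `InputStream#readAllBytes`, which has been part of the JDK since Java 9; on a recent JDK the same helper can be written as:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ReadStreamDemo {

    // Same behavior as the loop-based readStream helper, using readAllBytes (Java 9+).
    static String readStream(InputStream in) throws IOException {
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("Testing Guide".getBytes(StandardCharsets.UTF_8));
        System.out.println(readStream(in)); // prints "Testing Guide"
    }
}
```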
-
-== Testing a specific endpoint
-
-Both RESTassured and `@TestHTTPResource` allow you to specify the endpoint class you are testing rather than hard-coding
-a path. This currently supports JAX-RS endpoints, Servlets, and Reactive Routes. This makes it a lot easier to see exactly which endpoints
-a given test is testing.
-
-For the purposes of these examples I am going to assume we have an endpoint that looks like the following:
-
-[source,java]
-----
-@Path("/hello")
-public class GreetingResource {
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public String hello() {
-        return "hello";
-    }
-}
-----
-
-NOTE: This currently does not support the `@ApplicationPath()` annotation to set the JAX-RS context path. Use the
-`quarkus.resteasy.path` config value instead if you want a custom context path.
-
-=== TestHTTPResource
-
-You can then use the `io.quarkus.test.common.http.TestHTTPEndpoint` annotation to specify the endpoint path, and the path
-will be extracted from the provided endpoint. If you also specify a value on the `@TestHTTPResource` annotation, it will
-be appended to the end of the endpoint path.
- -[source,java] ----- -package org.acme.getting.started.testing; - -import java.io.ByteArrayOutputStream; -import java.io.IOException; -import java.io.InputStream; -import java.net.URL; -import java.nio.charset.StandardCharsets; - -import org.junit.jupiter.api.Assertions; -import org.junit.jupiter.api.Test; - -import io.quarkus.test.common.http.TestHTTPEndpoint; -import io.quarkus.test.common.http.TestHTTPResource; -import io.quarkus.test.junit.QuarkusTest; - -@QuarkusTest -public class StaticContentTest { - - @TestHTTPEndpoint(GreetingResource.class) // <1> - @TestHTTPResource - URL url; - - @Test - public void testIndexHtml() throws Exception { - try (InputStream in = url.openStream()) { - String contents = readStream(in); - Assertions.assertTrue(contents.equals("hello")); - } - } - - private static String readStream(InputStream in) throws IOException { - byte[] data = new byte[1024]; - int r; - ByteArrayOutputStream out = new ByteArrayOutputStream(); - while ((r = in.read(data)) > 0) { - out.write(data, 0, r); - } - return new String(out.toByteArray(), StandardCharsets.UTF_8); - } -} ----- -<1> Because `GreetingResource` is annotated with `@Path("/hello")` the injected URL -will end with `/hello`. - -=== RESTassured - -To control the RESTassured base path (i.e. the default path that serves as the root for every -request) you can use the `io.quarkus.test.common.http.TestHTTPEndpoint` annotation. This can -be applied at the class or method level. 
To test our greeting resource we would do:
-
-[source,java]
-----
-package org.acme.getting.started.testing;
-
-import io.quarkus.test.junit.QuarkusTest;
-import io.quarkus.test.common.http.TestHTTPEndpoint;
-import org.junit.jupiter.api.Test;
-
-import java.util.UUID;
-
-import static io.restassured.RestAssured.when;
-import static org.hamcrest.CoreMatchers.is;
-
-@QuarkusTest
-@TestHTTPEndpoint(GreetingResource.class) //<1>
-public class GreetingResourceTest {
-
-    @Test
-    public void testHelloEndpoint() {
-        when().get() //<2>
-            .then()
-            .statusCode(200)
-            .body(is("hello"));
-    }
-}
-----
-<1> This tells RESTAssured to prefix all requests with `/hello`.
-<2> Note we don't need to specify a path here, as `/hello` is the default for this test.
-
-== Injection into tests
-
-So far we have only covered integration-style tests that test the app via HTTP endpoints, but what if we want to do unit
-testing and test our beans directly?
-
-Quarkus supports this by allowing you to inject CDI beans into your tests via the `@Inject` annotation (in fact, tests in
-Quarkus are full CDI beans, so you can use all CDI functionality). Let's create a simple test that tests the greeting
-service directly without using HTTP:
-
-[source,java]
-----
-package org.acme.getting.started.testing;
-
-import javax.inject.Inject;
-
-import org.junit.jupiter.api.Assertions;
-import org.junit.jupiter.api.Test;
-
-import io.quarkus.test.junit.QuarkusTest;
-
-@QuarkusTest
-public class GreetingServiceTest {
-
-    @Inject // <1>
-    GreetingService service;
-
-    @Test
-    public void testGreetingService() {
-        Assertions.assertEquals("hello Quarkus", service.greeting("Quarkus"));
-    }
-}
-----
-<1> The `GreetingService` bean will be injected into the test.
-
-== Applying Interceptors to Tests
-
-As mentioned above, Quarkus tests are actually full CDI beans, and as such you can apply CDI interceptors as you would
-normally.
As an example, if you want a test method to run within the context of a transaction you can simply apply the -`@Transactional` annotation to the method and the transaction interceptor will handle it. - -In addition to this you can also create your own test stereotypes. For example we could create a `@TransactionalQuarkusTest` -as follows: - -[source,java] ----- -@QuarkusTest -@Stereotype -@Transactional -@Retention(RetentionPolicy.RUNTIME) -@Target(ElementType.TYPE) -public @interface TransactionalQuarkusTest { -} ----- - -If we then apply this annotation to a test class it will act as if we had applied both the `@QuarkusTest` and -`@Transactional` annotations, e.g.: - - -[source,java] ----- -@TransactionalQuarkusTest -public class TestStereotypeTestCase { - - @Inject - UserTransaction userTransaction; - - @Test - public void testUserTransaction() throws Exception { - Assertions.assertEquals(Status.STATUS_ACTIVE, userTransaction.getStatus()); - } - -} ----- - -== Tests and Transactions - -You can use the standard Quarkus `@Transactional` annotation on tests, but this means that the changes your -test makes to the database will be persistent. If you want any changes made to be rolled back at the end of -the test you can use the `io.quarkus.test.TestTransaction` annotation. This will run the test method in a -transaction, but roll it back once the test method is complete to revert any database changes. 
- -== Enrichment via QuarkusTest*Callback - -Alternatively or additionally to an interceptor, you can enrich *all* your `@QuarkusTest` classes by implementing the following callback interfaces: - -* `io.quarkus.test.junit.callback.QuarkusTestBeforeClassCallback` -* `io.quarkus.test.junit.callback.QuarkusTestAfterConstructCallback` -* `io.quarkus.test.junit.callback.QuarkusTestBeforeEachCallback` -* `io.quarkus.test.junit.callback.QuarkusTestAfterEachCallback` - -Such a callback implementation has to be registered as a "service provider" as defined by `java.util.ServiceLoader`. - -E.g. the following sample callback: -[source,java] ----- -package org.acme.getting.started.testing; - -import io.quarkus.test.junit.callback.QuarkusTestBeforeEachCallback; -import io.quarkus.test.junit.callback.QuarkusTestMethodContext; - -public class MyQuarkusTestBeforeEachCallback implements QuarkusTestBeforeEachCallback { - - @Override - public void beforeEach(QuarkusTestMethodContext context) { - System.out.println("Executing " + context.getTestMethod()); - } -} ----- -has to be registered via `src/main/resources/META-INF/services/io.quarkus.test.junit.callback.QuarkusTestBeforeEachCallback` as follows: -[source] ----- -org.acme.getting.started.testing.MyQuarkusTestBeforeEachCallback ----- - -TIP: It is possible to read annotations from the test class or method to control what the callback shall be doing. - -WARNING: While it is possible to use JUnit Jupiter callback interfaces like `BeforeEachCallback`, you might run into classloading issues because Quarkus has - to run tests in a custom classloader which JUnit is not aware of. - -[[testing_different_profiles]] -== Testing Different Profiles - -So far in all our examples we only start Quarkus once for all tests. Before the first test is run Quarkus will boot, -then all tests will run, then Quarkus will shutdown at the end. 
This makes for a very fast testing experience; however, it is a bit limited, as you can't test different configurations.

To get around this, Quarkus supports the idea of a test profile. If a test has a different profile to the previously
run test then Quarkus will be shut down and started with the new profile before running the tests. This is obviously
a bit slower, as it adds a shutdown/startup cycle to the test time, but gives a great deal of flexibility.

To reduce the number of times Quarkus needs to restart, `io.quarkus.test.junit.util.QuarkusTestProfileAwareClassOrderer`
is registered as a global `ClassOrderer` as described in the
link:https://junit.org/junit5/docs/current/user-guide/#writing-tests-test-execution-order-classes[JUnit 5 User Guide].
The behavior of this orderer is configurable via `junit-platform.properties` (see the source code or javadoc for more details).
It can also be disabled entirely by setting another orderer, either one provided by JUnit 5 or your own custom one.
Please note that as of JUnit 5.8.2 link:https://github.com/junit-team/junit5/issues/2794[only a single `junit-platform.properties` is picked up and a warning is logged if more than one is found].
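For example, a `src/test/resources/junit-platform.properties` selecting one of the built-in JUnit 5 orderers instead would look like the following (the property name is the standard JUnit 5 one; the chosen orderer is just an illustration):

[source,properties]
----
# replace the Quarkus test-profile-aware orderer with a built-in JUnit 5 one
junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$ClassName
----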
If you encounter such warnings, you can get rid of them by removing the Quarkus-supplied `junit-platform.properties` from the classpath via an exclusion:
[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-junit5-properties</artifactId>
        </exclusion>
    </exclusions>
</dependency>
----

=== Writing a Profile

To implement a test profile we need to implement `io.quarkus.test.junit.QuarkusTestProfile`:

[source,java]
----
package org.acme.getting.started.testing;

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

import io.quarkus.test.junit.QuarkusTestProfile;
import io.quarkus.test.junit.QuarkusTestProfile.TestResourceEntry;

public class MockGreetingProfile implements QuarkusTestProfile { <1>

    /**
     * Returns additional config to be applied to the test. This
     * will override any existing config (including in application.properties),
     * however existing config will be merged with this (i.e. application.properties
     * config will still take effect, unless a specific config key has been overridden).
     *
     * Here we are changing the JAX-RS root path.
     */
    @Override
    public Map<String, String> getConfigOverrides() {
        return Collections.singletonMap("quarkus.resteasy.path", "/api");
    }

    /**
     * Returns enabled alternatives.
     *
     * This has the same effect as setting the 'quarkus.arc.selected-alternatives' config key,
     * however it may be more convenient.
     */
    @Override
    public Set<Class<?>> getEnabledAlternatives() {
        return Collections.singleton(MockGreetingService.class);
    }

    /**
     * Allows the default config profile to be overridden. This basically just sets the quarkus.test.profile system
     * property before the test is run.
     *
     * Here we are setting the profile to test-mocked
     */
    @Override
    public String getConfigProfile() {
        return "test-mocked";
    }

    /**
     * Additional {@link QuarkusTestResourceLifecycleManager} classes (along with their init params) to be used from this
     * specific test profile.
     *
     * If this method is not overridden, then only the {@link QuarkusTestResourceLifecycleManager} classes enabled via the {@link io.quarkus.test.common.QuarkusTestResource} class
     * annotation will be used for the tests using this profile (which is the same behavior as tests that don't use a profile at all).
     */
    @Override
    public List<TestResourceEntry> testResources() {
        return Collections.singletonList(new TestResourceEntry(CustomWireMockServerManager.class));
    }


    /**
     * If this returns true then only the test resources returned from {@link #testResources()} will be started,
     * global annotated test resources will be ignored.
     */
    @Override
    public boolean disableGlobalTestResources() {
        return false;
    }

    /**
     * The tags this profile is associated with.
     * When the {@code quarkus.test.profile.tags} System property is set (its value is a comma separated list of strings)
     * then Quarkus will only execute tests that are annotated with a {@code @TestProfile} that has at least one of the
     * supplied (via the aforementioned system property) tags.
     */
    @Override
    public Set<String> tags() {
        return Collections.emptySet();
    }

    /**
     * The command line parameters that are passed to the main method on startup.
     */
    @Override
    public String[] commandLineParameters() {
        return new String[0];
    }

    /**
     * If the main method should be run.
     */
    @Override
    public boolean runMainMethod() {
        return false;
    }

    /**
     * If this method returns true then all {@code StartupEvent} and {@code ShutdownEvent} observers declared on application
     * beans should be disabled.
     */
    @Override
    public boolean disableApplicationLifecycleObservers() {
        return false;
    }
}
----
<1> All these methods have default implementations, so just override the ones you need.

Now that we have defined our profile, we need to include it on our test class.
We do this by annotating the test class with `@TestProfile(MockGreetingProfile.class)`.
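A sketch of what that looks like, reusing the greeting endpoint from earlier in this guide (the class and endpoint names are illustrative):

[source,java]
----
@QuarkusTest
@TestProfile(MockGreetingProfile.class) // <1>
public class MockedGreetingResourceTest {

    @Test
    public void testGreeting() {
        given()
            .when().get("/api/hello") // <2>
            .then()
            .statusCode(200);
    }
}
----
<1> Boots Quarkus with the configuration from `MockGreetingProfile` before this class runs.
<2> The profile moved the JAX-RS root path to `/api`, so endpoints are reached under that prefix.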

All the test profile configuration is stored in a single class, which makes it easy to tell if the previous test ran with the
same configuration.

=== Running specific tests

Quarkus provides the ability to limit test execution to tests with specific `@TestProfile` annotations.
This works by leveraging the `tags` method of `QuarkusTestProfile` in conjunction with the `quarkus.test.profile.tags` system property.

Essentially, any `QuarkusTestProfile` with at least one tag matching the value of `quarkus.test.profile.tags` will be considered active,
and all the tests annotated with `@TestProfile` of active profiles will be run while the rest will be skipped.
This is best shown in the following example.

First let's define a few `QuarkusTestProfile` implementations like so:
[source,java]
----
public class Profiles {

    public static class NoTags implements QuarkusTestProfile {

    }

    public static class SingleTag implements QuarkusTestProfile {
        @Override
        public Set<String> tags() {
            return Collections.singleton("test1");
        }
    }

    public static class MultipleTags implements QuarkusTestProfile {
        @Override
        public Set<String> tags() {
            return new HashSet<>(Arrays.asList("test1", "test2"));
        }
    }
}
----

Now let's assume that we have the following tests:

[source,java]
----
@QuarkusTest
public class NoQuarkusProfileTest {

    @Test
    public void test() {
        // test something
    }
}
----

[source,java]
----
@QuarkusTest
@TestProfile(Profiles.NoTags.class)
public class NoTagsTest {

    @Test
    public void test() {
        // test something
    }
}
----

[source,java]
----
@QuarkusTest
@TestProfile(Profiles.SingleTag.class)
public class SingleTagTest {

    @Test
    public void test() {
        // test something
    }
}
----

[source,java]
----
@QuarkusTest
@TestProfile(Profiles.MultipleTags.class)
public class MultipleTagsTest {

    @Test
    public void test() {
        // test something
    }
}
----

Let's consider the following
scenarios:

* `quarkus.test.profile.tags` is not set: All tests will be executed.
* `quarkus.test.profile.tags=foo`: In this case none of the tests will be executed because none of the tags defined on the `QuarkusTestProfile` implementations match the value of `quarkus.test.profile.tags`.
Note that `NoQuarkusProfileTest` is not executed either because it is not annotated with `@TestProfile`.
* `quarkus.test.profile.tags=test1`: In this case `SingleTagTest` and `MultipleTagsTest` will be run because the tags on their respective `QuarkusTestProfile` implementations
match the value of `quarkus.test.profile.tags`.
* `quarkus.test.profile.tags=test1,test3`: This case results in the same tests being executed as the previous case.
* `quarkus.test.profile.tags=test2,test3`: In this case only `MultipleTagsTest` will be run because `MultipleTagsTest` is the only `QuarkusTestProfile` implementation whose `tags` method
matches the value of `quarkus.test.profile.tags`.

== Mock Support

Quarkus supports the use of mock objects using two different approaches. You can either use CDI alternatives to
mock out a bean for all test classes, or use `QuarkusMock` to mock out beans on a per-test basis.

=== CDI `@Alternative` mechanism

To use this, simply override the bean you wish to mock with a class in the `src/test/java` directory, and put the `@Alternative` and `@Priority(1)` annotations on the bean.
Alternatively, the convenient `io.quarkus.test.Mock` stereotype annotation can be used.
This built-in stereotype declares `@Alternative`, `@Priority(1)` and `@Dependent`.
For example, if I have the following service:

[source,java]
----
@ApplicationScoped
public class ExternalService {

    public String service() {
        return "external";
    }

}
----

I could mock it with the following class in `src/test/java`:

[source,java]
----
@Mock
@ApplicationScoped // <1>
public class MockExternalService extends ExternalService {

    @Override
    public String service() {
        return "mock";
    }
}
----
<1> Overrides the `@Dependent` scope declared on the `@Mock` stereotype.

It is important that the alternative be present in the `src/test/java` directory rather than `src/main/java`, as otherwise
it will take effect all the time, not just when testing.

Note that at present this approach does not work with native image testing, as this would require the test alternatives
to be baked into the native image.

[[quarkus_mock]]
=== Mocking using QuarkusMock

The `io.quarkus.test.junit.QuarkusMock` class can be used to temporarily mock out any normal scoped
bean. If you use this method in a `@BeforeAll` method the mock will take effect for all tests on the current class,
while if you use this in a test method the mock will only take effect for the duration of the current test.

This method can be used for any normal scoped CDI bean (e.g. `@ApplicationScoped`, `@RequestScoped` etc., basically
every scope except `@Singleton` and `@Dependent`).
- -An example usage could look like: - -[source,java] ----- -@QuarkusTest -public class MockTestCase { - - @Inject - MockableBean1 mockableBean1; - - @Inject - MockableBean2 mockableBean2; - - @BeforeAll - public static void setup() { - MockableBean1 mock = Mockito.mock(MockableBean1.class); - Mockito.when(mock.greet("Stuart")).thenReturn("A mock for Stuart"); - QuarkusMock.installMockForType(mock, MockableBean1.class); // <1> - } - - @Test - public void testBeforeAll() { - Assertions.assertEquals("A mock for Stuart", mockableBean1.greet("Stuart")); - Assertions.assertEquals("Hello Stuart", mockableBean2.greet("Stuart")); - } - - @Test - public void testPerTestMock() { - QuarkusMock.installMockForInstance(new BonjourGreeter(), mockableBean2); // <2> - Assertions.assertEquals("A mock for Stuart", mockableBean1.greet("Stuart")); - Assertions.assertEquals("Bonjour Stuart", mockableBean2.greet("Stuart")); - } - - @ApplicationScoped - public static class MockableBean1 { - - public String greet(String name) { - return "Hello " + name; - } - } - - @ApplicationScoped - public static class MockableBean2 { - - public String greet(String name) { - return "Hello " + name; - } - } - - public static class BonjourGreeter extends MockableBean2 { - @Override - public String greet(String name) { - return "Bonjour " + name; - } - } -} ----- -<1> As the injected instance is not available here we use `installMockForType`, this mock is used for both test methods -<2> We use `installMockForInstance` to replace the injected bean, this takes effect for the duration of the test method. - -Note that there is no dependency on Mockito, you can use any mocking library you like, or even manually override the -objects to provide the behaviour you require. - -NOTE: Using `@Inject` will get you a CDI proxy to the mock instance you install, which is not suitable for passing to methods such as `Mockito.verify` -which want the mock instance itself. 
So if you need to call methods such as `verify` you need to hang on to the mock instance in your test, or use `@InjectMock`
as shown below.

==== Further simplification with `@InjectMock`

Building on the features provided by `QuarkusMock`, Quarkus also allows users to effortlessly take advantage of link:https://site.mockito.org/[Mockito] for mocking the beans supported by `QuarkusMock`.
This functionality is available via the `@io.quarkus.test.junit.mockito.InjectMock` annotation which is available in the `quarkus-junit5-mockito` dependency.

Using `@InjectMock`, the previous example could be written as follows:

[source,java]
----
@QuarkusTest
public class MockTestCase {

    @InjectMock
    MockableBean1 mockableBean1; // <1>

    @InjectMock
    MockableBean2 mockableBean2;

    @BeforeEach
    public void setup() {
        Mockito.when(mockableBean1.greet("Stuart")).thenReturn("A mock for Stuart"); // <2>
    }

    @Test
    public void firstTest() {
        Assertions.assertEquals("A mock for Stuart", mockableBean1.greet("Stuart"));
        Assertions.assertEquals(null, mockableBean2.greet("Stuart")); // <3>
    }

    @Test
    public void secondTest() {
        Mockito.when(mockableBean2.greet("Stuart")).thenReturn("Bonjour Stuart"); // <4>
        Assertions.assertEquals("A mock for Stuart", mockableBean1.greet("Stuart"));
        Assertions.assertEquals("Bonjour Stuart", mockableBean2.greet("Stuart"));
    }

    @ApplicationScoped
    public static class MockableBean1 {

        public String greet(String name) {
            return "Hello " + name;
        }
    }

    @ApplicationScoped
    public static class MockableBean2 {

        public String greet(String name) {
            return "Hello " + name;
        }
    }
}
----
<1> `@InjectMock` results in a mock being created and made available in test methods of the test class (other test classes are *not* affected by this)
<2> The `mockableBean1` mock is configured here for every test method of the class
<3> Since the `mockableBean2` mock has not been configured, it will return the default Mockito
response.
<4> In this test the `mockableBean2` mock is configured, so it returns the configured response.

Although the test above is good for showing the capabilities of `@InjectMock`, it is not a good representation of a real test. In a real test
we would most likely configure a mock, but then test a bean that uses the mocked bean.
Here is an example:

[source,java]
----
@QuarkusTest
public class MockGreetingServiceTest {

    @InjectMock
    GreetingService greetingService;

    @Test
    public void testGreeting() {
        when(greetingService.greet()).thenReturn("hi");
        given()
            .when().get("/greeting")
            .then()
            .statusCode(200)
            .body(is("hi")); // <1>
    }

    @Path("greeting")
    public static class GreetingResource {

        final GreetingService greetingService;

        public GreetingResource(GreetingService greetingService) {
            this.greetingService = greetingService;
        }

        @GET
        @Produces("text/plain")
        public String greet() {
            return greetingService.greet();
        }
    }

    @ApplicationScoped
    public static class GreetingService {
        public String greet(){
            return "hello";
        }
    }
}
----
<1> Since we configured `greetingService` as a mock, `GreetingResource`, which uses the `GreetingService` bean, returns the mocked response instead of the response of the regular `GreetingService` bean.

By default, the `@InjectMock` annotation can be used for any normal CDI scoped bean (e.g. `@ApplicationScoped`, `@RequestScoped`).
Mocking `@Singleton` beans can be performed by setting the `convertScopes` property to true (i.e. `@InjectMock(convertScopes = true)`).
This will convert the `@Singleton` bean to an `@ApplicationScoped` bean for the test.

This is considered an advanced option and should only be performed if you fully understand the consequences of changing the scope of the bean.
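A minimal sketch of this singleton-mocking option, with `MySingletonService` standing in for a hypothetical `@Singleton` bean of your application:

[source,java]
----
@QuarkusTest
public class SingletonMockTestCase {

    @InjectMock(convertScopes = true) // <1>
    MySingletonService singletonService;

    @Test
    public void testMockedSingleton() {
        Mockito.when(singletonService.value()).thenReturn(42);
        Assertions.assertEquals(42, singletonService.value());
    }
}
----
<1> For the duration of the test the `@Singleton` bean is treated as `@ApplicationScoped` so that it can be mocked.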
- -==== Using Spies instead of Mocks with `@InjectSpy` - -Building on the features provided by `InjectMock`, Quarkus also allows users to effortlessly take advantage of link:https://site.mockito.org/[Mockito] for spying on the beans supported by `QuarkusMock`. -This functionality is available via the `@io.quarkus.test.junit.mockito.InjectSpy` annotation which is available in the `quarkus-junit5-mockito` dependency. - -Sometimes when testing you only need to verify that a certain logical path was taken, or you only need to stub out a single method's response while still executing the rest of the methods on the Spied clone. Please see link:https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/Mockito.html#spy-T-[Mockito documentation] for more details on Spy partial mocks. -In either of those situations a Spy of the object is preferable. -Using `@InjectSpy`, the previous example could be written as follows: - -[source,java] ----- -@QuarkusTest -public class SpyGreetingServiceTest { - - @InjectSpy - GreetingService greetingService; - - @Test - public void testDefaultGreeting() { - given() - .when().get("/greeting") - .then() - .statusCode(200) - .body(is("hello")); - - Mockito.verify(greetingService, Mockito.times(1)).greet(); <1> - } - - @Test - public void testOverrideGreeting() { - when(greetingService.greet()).thenReturn("hi"); <2> - given() - .when().get("/greeting") - .then() - .statusCode(200) - .body(is("hi")); <3> - } - - @Path("greeting") - public static class GreetingResource { - - final GreetingService greetingService; - - public GreetingResource(GreetingService greetingService) { - this.greetingService = greetingService; - } - - @GET - @Produces("text/plain") - public String greet() { - return greetingService.greet(); - } - } - - @ApplicationScoped - public static class GreetingService { - public String greet(){ - return "hello"; - } - } -} ----- -<1> Instead of overriding the value, we just want to ensure that the greet method on our 
`GreetingService` was called by this test. -<2> Here we are telling the Spy to return "hi" instead of "hello". When the `GreetingResource` requests the greeting from `GreetingService` we get the mocked response instead of the response of the regular `GreetingService` bean -<3> We are verifying that we get the mocked response from the Spy. - -==== Using `@InjectMock` with `@RestClient` - -The `@RegisterRestClient` registers the implementation of the rest-client at runtime, and because the bean needs to be a regular scope, you have to annotate your interface with `@ApplicationScoped`. - -[source,java] ----- -@Path("/") -@ApplicationScoped -@RegisterRestClient -public interface GreetingService { - - @GET - @Path("/hello") - @Produces(MediaType.TEXT_PLAIN) - String hello(); -} ----- - -For the test class here is an example: - -[source,java] ----- -@QuarkusTest -public class GreetingResourceTest { - - @InjectMock - @RestClient // <1> - GreetingService greetingService; - - @Test - public void testHelloEndpoint() { - Mockito.when(greetingService.hello()).thenReturn("hello from mockito"); - - given() - .when().get("/hello") - .then() - .statusCode(200) - .body(is("hello from mockito")); - } - -} ----- -<1> Indicate that this injection point is meant to use an instance of `RestClient`. - -=== Mocking with Panache - -If you are using the `quarkus-hibernate-orm-panache` or `quarkus-mongodb-panache` extensions, check out the xref:hibernate-orm-panache.adoc#mocking[Hibernate ORM with Panache Mocking] and xref:mongodb-panache.adoc#mocking[MongoDB with Panache Mocking] documentation for the easiest way to mock your data access. - -== Testing Security - -If you are using Quarkus Security, check out the xref:security-testing.adoc[Testing Security] section for information on how to easily test security features of the application. 

[#quarkus-test-resource]
== Starting services before the Quarkus application starts

A very common need is to start some services on which your Quarkus application depends, before the Quarkus application starts for testing. To address this need, Quarkus provides `@io.quarkus.test.common.QuarkusTestResource` and `io.quarkus.test.common.QuarkusTestResourceLifecycleManager`.

By simply annotating any test in the test suite with `@QuarkusTestResource`, Quarkus will run the corresponding `QuarkusTestResourceLifecycleManager` before any tests are run.
A test suite is also free to utilize multiple `@QuarkusTestResource` annotations, in which case all the corresponding `QuarkusTestResourceLifecycleManager` objects will be run before the tests. When using multiple test resources they can be started concurrently; to do so, set `@QuarkusTestResource(parallel = true)`.

NOTE: Test resources are global, even if they are defined on a test class or custom profile, which means they will all be activated for all tests, even though duplicates are removed. If you want to only enable a test resource on a single test class or test profile, you can use `@QuarkusTestResource(restrictToAnnotatedClass = true)`.

Quarkus provides a few implementations of `QuarkusTestResourceLifecycleManager` out of the box (see `io.quarkus.test.h2.H2DatabaseTestResource` which starts an H2 database, or `io.quarkus.test.kubernetes.client.KubernetesServerTestResource` which starts a mock Kubernetes API server),
but it is common to create custom implementations to address specific application needs.
Common cases include starting docker containers using https://www.testcontainers.org/[Testcontainers] (an example of which can be found https://github.com/quarkusio/quarkus/blob/main/test-framework/keycloak-server/src/main/java/io/quarkus/test/keycloak/server/KeycloakTestResourceLifecycleManager.java[here]),
or starting a mock HTTP server using http://wiremock.org/[Wiremock] (an example of which can be found https://github.com/geoand/quarkus-test-demo/blob/main/src/test/java/org/acme/getting/started/country/WiremockCountries.java[here]).


=== Altering the test class
When creating a custom `QuarkusTestResourceLifecycleManager` that needs to inject something into the test class, the `inject` methods can be used.
If for example you have a test like the following:

[source,java]
----
@QuarkusTest
@QuarkusTestResource(MyWireMockResource.class)
public class MyTest {

    @InjectWireMock // this is a custom annotation you are defining in your own application
    WireMockServer wireMockServer;

    @Test
    public void someTest() {
        // control wiremock in some way and perform test
    }
}
----

Making `MyWireMockResource` inject the `wireMockServer` field can be done as shown in the `inject` method of the following code snippet:

[source,java]
----
public class MyWireMockResource implements QuarkusTestResourceLifecycleManager {

    WireMockServer wireMockServer;

    @Override
    public Map<String, String> start() {
        wireMockServer = new WireMockServer(8090);
        wireMockServer.start();

        // create some stubs

        return Map.of("some.service.url", "localhost:" + wireMockServer.port());
    }

    @Override
    public synchronized void stop() {
        if (wireMockServer != null) {
            wireMockServer.stop();
            wireMockServer = null;
        }
    }

    @Override
    public void inject(TestInjector testInjector) {
        testInjector.injectIntoFields(wireMockServer, new TestInjector.AnnotatedAndMatchesType(InjectWireMock.class, WireMockServer.class));
    }
}
----

IMPORTANT: It is worth mentioning that
this injection into the test class is not under the control of CDI and happens after CDI has performed
any necessary injections into the test class.

=== Annotation-based test resources

It is possible to write test resources that are enabled and configured using annotations. This is enabled by placing the `@QuarkusTestResource`
on an annotation which will be used to enable and configure the test resource.

For example, this defines the `@WithKubernetesTestServer` annotation, which you can use on your tests to activate the `KubernetesServerTestResource`,
but only for the annotated test class. You can also place them on your `QuarkusTestProfile` test profiles.

[source,java]
----
@QuarkusTestResource(KubernetesServerTestResource.class)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface WithKubernetesTestServer {
    /**
     * Start it with HTTPS
     */
    boolean https() default false;

    /**
     * Start it in CRUD mode
     */
    boolean crud() default true;

    /**
     * Port to use, defaults to any available port
     */
    int port() default 0;
}
----

The `KubernetesServerTestResource` class has to implement the
`QuarkusTestResourceConfigurableLifecycleManager` interface in order to be configured using the previous annotation:

[source,java]
----
public class KubernetesServerTestResource
        implements QuarkusTestResourceConfigurableLifecycleManager<WithKubernetesTestServer> {

    private boolean https = false;
    private boolean crud = true;
    private int port = 0;

    @Override
    public void init(WithKubernetesTestServer annotation) {
        this.https = annotation.https();
        this.crud = annotation.crud();
        this.port = annotation.port();
    }

    // ...
}
----

== Hang Detection

`@QuarkusTest` has support for hang detection to help diagnose any unexpected hangs. If no progress is made for a specified
time (i.e. no JUnit callbacks are invoked) then Quarkus will print a stack trace to the console to help diagnose the hang.
The default value for this timeout is 10 minutes.

No further action will be taken, and the tests will continue as normal (generally until CI times out), however the printed
stack traces should help diagnose why the build has failed. You can control this timeout with the
`quarkus.test.hang-detection-timeout` system property (you can also set this in application.properties, but this won't
be read until Quarkus has started, so the timeout for Quarkus start will be the default of 10 minutes).

== Native Executable Testing

It is also possible to test native executables using `@NativeImageTest`. This supports all the features mentioned in this
guide except injecting into tests (as the native executable runs in a separate non-JVM process, this is not really possible).


This is covered in the xref:building-native-image.adoc[Native Executable Guide].

[WARNING]
====
Although `@NativeImageTest` is not yet deprecated, it will be in the future as its functionality is covered by `@QuarkusIntegrationTest`
which is described in the following section.
====

[#quarkus-integration-test]
== Using @QuarkusIntegrationTest

`@QuarkusIntegrationTest` should be used to launch and test the artifact produced by the Quarkus build, and supports testing a jar (of whichever type), a native image or container image.
Put simply, this means that if the result of a Quarkus build (`mvn package` or `gradle build`) is a jar, that jar will be launched as `java -jar ...` and tests run against it.
If instead a native image was built, then the application is launched as `./application ...` and again the tests run against the running application.
Finally, if a container image was created during the build (by including the `quarkus-container-image-jib` or `quarkus-container-image-docker` extensions and having the
`quarkus.container-image.build=true` property configured), then a container is created and run (this requires the `docker` executable to be present).
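For illustration, a Maven invocation that builds the container image and then runs the `@QuarkusIntegrationTest` classes against it might look like the following (this assumes a container-image extension is present and the failsafe plugin is bound to the `verify` phase):

[source,bash]
----
./mvnw verify -Dquarkus.container-image.build=true
----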

As is the case with `@NativeImageTest`, this is a black box test that supports the same set of features and has the same limitations.

[NOTE]
====
As a test annotated with `@QuarkusIntegrationTest` tests the result of the build, it should be run as part of the integration test suite - i.e. via the `maven-failsafe-plugin` if using Maven or an additional task if using Gradle.
These tests will **not** work if run in the same phase as `@QuarkusTest` as Quarkus has not yet created the final artifact.
====

=== Launching containers

When `@QuarkusIntegrationTest` results in launching a container (because the application was built with `quarkus.container-image.build` set to `true`), the container is launched on a predictable container network. This facilitates writing integration tests that need to launch services to support the application.
This means that `@QuarkusIntegrationTest` works out of the box with containers launched via xref:dev-services.adoc[Dev Services], but it also means that it enables using <<quarkus-test-resource>> resources that launch additional containers.
This can be achieved by having your `QuarkusTestResourceLifecycleManager` implement `io.quarkus.test.common.DevServicesContext.ContextAware`.
A simple example could be the following:

[source,java]
----
import io.quarkus.test.common.DevServicesContext;
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class CustomResource implements QuarkusTestResourceLifecycleManager, DevServicesContext.ContextAware {

    private Optional<String> containerNetworkId;

    @Override
    public void setIntegrationTestContext(DevServicesContext context) {
        containerNetworkId = context.containerNetworkId();
    }

    @Override
    public Map<String, String> start() {
        // start a container making sure to call withNetworkMode() with the value of containerNetworkId if present

        // return a map containing the configuration the application needs to use the service
        return new HashMap<>();
    }

    @Override
    public void stop() {
        // close container
    }
}
----

`CustomResource` would be activated on a `@QuarkusIntegrationTest` using `@QuarkusTestResource` as is described in the corresponding section of this doc.

=== Executing against a running application

[WARNING]
====
This feature is considered experimental and is likely to change in future versions of Quarkus.
====

`@QuarkusIntegrationTest` supports executing tests against an already running instance of the application. This can be achieved by setting the
`quarkus.http.test-host` system property when running the tests.

An example use of this could be the following Maven command, which forces `@QuarkusIntegrationTest` to execute against an application that is accessible at `http://1.2.3.4:4321`:

[source,bash]
----
./mvnw verify -Dquarkus.http.test-host=1.2.3.4 -Dquarkus.http.test-port=4321
----


== Mixing `@QuarkusTest` with other types of tests

Mixing tests annotated with `@QuarkusTest` with tests annotated with either `@QuarkusDevModeTest`, `@QuarkusProdModeTest` or `@QuarkusUnitTest`
is not allowed in a single execution run (in a single Maven Surefire Plugin execution, for instance),
while the latter three can coexist.

The reason for this restriction is that `@QuarkusTest` starts a Quarkus server for the whole lifetime of the test execution run,
thus preventing the other tests from starting their own Quarkus server.

To alleviate this restriction, the `@QuarkusTest` annotation defines a JUnit 5 `@Tag`: `io.quarkus.test.junit.QuarkusTest`.
You can use this tag to isolate the `@QuarkusTest` tests in a specific execution run, for example with the Maven Surefire Plugin:

[source,xml]
----
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${surefire-plugin.version}</version>
    <executions>
        <execution>
            <id>default-test</id>
            <goals>
                <goal>test</goal>
            </goals>
            <configuration>
                <excludedGroups>io.quarkus.test.junit.QuarkusTest</excludedGroups>
            </configuration>
        </execution>
        <execution>
            <id>quarkus-test</id>
            <goals>
                <goal>test</goal>
            </goals>
            <configuration>
                <groups>io.quarkus.test.junit.QuarkusTest</groups>
            </configuration>
        </execution>
    </executions>
    <configuration>
        <systemPropertyVariables>
            <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
        </systemPropertyVariables>
    </configuration>
</plugin>
----

[[test-from-ide]]
== Running `@QuarkusTest` from an IDE

Most IDEs offer the possibility to run a selected class as a JUnit test directly.
For this you should set a few properties in the settings of your chosen IDE:

* `java.util.logging.manager` (see xref:logging.adoc[Logging Guide])

* `maven.home` (only if there are any custom settings in `${maven.home}/conf/settings.xml`, see xref:maven-tooling.adoc[Maven Guide])

* `maven.settings` (in case a custom version of the `settings.xml` file should be used for the tests)

=== Eclipse separate JRE definition

Copy your current "Installed JRE" definition into a new one, where you add the properties as new VM arguments:

* `-Djava.util.logging.manager=org.jboss.logmanager.LogManager`

* `-Dmaven.home=<path-to-maven-home>`

Use this JRE definition as your Quarkus project targeted runtime and the workaround will be applied to any "Run as JUnit" configuration.

=== VSCode "run with" configuration

The `settings.json` placed in the root of your project directory or in the workspace will need the following workaround in your test configuration:
[source, json]
----
"java.test.config": [
    {
        "name": "quarkusConfiguration",
        "vmargs": [ "-Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dmaven.home=<path-to-maven-home> ..." ],
        ...
    },
    ...
]
----

=== IntelliJ JUnit template

Nothing is needed in IntelliJ because the IDE will pick up the `systemPropertyVariables` from the surefire plugin configuration in `pom.xml`.

== Testing Dev Services

By default tests should just work with xref:dev-services.adoc[Dev Services], however for some use cases you may need access to
the automatically configured properties in your tests.

You can do this with `io.quarkus.test.common.DevServicesContext`, which can be injected directly into any `@QuarkusTest`
or `@QuarkusIntegrationTest`. All you need to do is define a field of type `DevServicesContext` and it will be automatically
injected. Using this you can retrieve any properties that have been set. Generally this is used to directly connect to a
resource from the test itself, e.g.
to connect to Kafka to send messages to the application under test.

Injection is also supported into objects that implement `io.quarkus.test.common.DevServicesContext.ContextAware`. If you
have a field that implements `io.quarkus.test.common.DevServicesContext.ContextAware`, Quarkus will call the
`setIntegrationTestContext` method to pass the context into this object. This allows client logic to be encapsulated in
a utility class.

`QuarkusTestResourceLifecycleManager` implementations can also implement `ContextAware` to get access to these properties,
which allows you to set up the resource before Quarkus starts (e.g. configure a Keycloak instance, add data to a database, etc.).


[NOTE]
====
For `@QuarkusIntegrationTest` tests that result in launching the application as a container, `io.quarkus.test.common.DevServicesContext` also provides access to the id of the container network on which the application container was launched (via the `containerNetworkId` method).
This can be used by `QuarkusTestResourceLifecycleManager` implementations that need to launch additional containers that the application will communicate with.
====

diff --git a/_versions/2.7/guides/getting-started.adoc b/_versions/2.7/guides/getting-started.adoc deleted file mode 100644 index c93906ab67b..00000000000 --- a/_versions/2.7/guides/getting-started.adoc +++ /dev/null @@ -1,493 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Creating Your First Application

include::./attributes.adoc[]

:toc: macro
:toclevels: 4
:doctype: book
:icons: font
:docinfo1:

:numbered:
:sectnums:
:sectnumlevels: 4


Learn how to create a Hello World Quarkus app.
This guide covers:

* Bootstrapping an application
* Creating a JAX-RS endpoint
* Injecting beans
* Functional tests
* Packaging of the application

== Prerequisites

:prerequisites-no-graalvm:
include::includes/devtools/prerequisites.adoc[]

[TIP]
.Verify Maven is using the Java you expect
====
If you have multiple JDKs installed, it is not certain Maven will pick up the expected Java version,
and you could end up with unexpected results.
You can verify which JDK Maven uses by running `mvn --version`.
====


== Architecture

In this guide, we create a straightforward application serving a `hello` endpoint. To demonstrate
dependency injection, this endpoint uses a `greeting` bean.

image::getting-started-architecture.png[alt=Architecture, align=center]

This guide also covers the testing of the endpoint.

== Solution

We recommend that you follow the instructions from <<bootstrapping-the-project,Bootstrapping the project>> onwards to create the application step by step.

However, you can go right to the completed example.

Download an {quickstarts-archive-url}[archive] or clone the git repository:

[source,bash,subs=attributes+]
----
git clone {quickstarts-clone-url}
----

The solution is located in the `getting-started` {quickstarts-tree-url}/getting-started[directory].

== Bootstrapping the project

The easiest way to create a new Quarkus project is to open a terminal and run the following command:

For Linux and macOS users:

:create-app-artifact-id: getting-started
:create-app-extensions: resteasy
:create-app-code:
include::includes/devtools/create-app.adoc[]

For Windows users:

- If using cmd, don't use a backslash `\` and put everything on the same line
- If using PowerShell, wrap `-D` parameters in double quotes, e.g.
`"-DprojectArtifactId=getting-started"`

It generates the following in `./getting-started`:

* the Maven structure
* an `org.acme.GreetingResource` resource exposed on `/hello`
* an associated unit test
* a landing page that is accessible on `http://localhost:8080` after starting the application
* example `Dockerfile` files for both `native` and `jvm` modes in `src/main/docker`
* the application configuration file

Once generated, look at the `pom.xml`.
You will find the import of the Quarkus BOM, allowing you to omit the version of the different Quarkus dependencies.
In addition, you can see the `quarkus-maven-plugin`, responsible for packaging the application and also providing the development mode.

[source,xml,subs=attributes+]
----
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>${quarkus.platform.group-id}</groupId>
            <artifactId>quarkus-bom</artifactId>
            <version>${quarkus.platform.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<build>
    <plugins>
        <plugin>
            <groupId>${quarkus.platform.group-id}</groupId>
            <artifactId>quarkus-maven-plugin</artifactId>
            <version>${quarkus-plugin.version}</version>
            <extensions>true</extensions>
            <executions>
                <execution>
                    <goals>
                        <goal>build</goal>
                        <goal>generate-code</goal>
                        <goal>generate-code-tests</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
----

In a Gradle project, you would find a similar setup:

* the Quarkus Gradle plugin
* an `enforcedPlatform` directive for the Quarkus BOM

If we focus on the dependencies section, you can see the extension allowing the development of REST applications:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-resteasy")
----

=== The JAX-RS resources

During the project creation, the `src/main/java/org/acme/GreetingResource.java` file has been created with the following content:

[source,java]
----
package org.acme;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
@Produces(MediaType.TEXT_PLAIN) - public String hello() { - return "Hello RESTEasy"; - } -} ----- - -It's a very simple REST endpoint, returning "Hello RESTEasy" to requests on "/hello". - -[TIP] -.Differences with vanilla JAX-RS -==== -With Quarkus, there is no need to create an `Application` class. It's supported, but not required. In addition, only one instance -of the resource is created and not one per request. You can configure this using the different `*Scoped` annotations (`ApplicationScoped`, `RequestScoped`, etc). -==== - -== Running the application - -Now we are ready to run our application: - -include::includes/devtools/dev.adoc[] - -[source,shell] ----- -[INFO] --------------------< org.acme:getting-started >--------------------- -[INFO] Building getting-started 1.0.0-SNAPSHOT -[INFO] --------------------------------[ jar ]--------------------------------- -[INFO] -[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ getting-started --- -[INFO] Using 'UTF-8' encoding to copy filtered resources. -[INFO] skip non existing resourceDirectory /getting-started/src/main/resources -[INFO] -[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ getting-started --- -[INFO] Changes detected - recompiling the module! -[INFO] Compiling 2 source files to /getting-started/target/classes -[INFO] -[INFO] --- quarkus-maven-plugin::dev (default-cli) @ getting-started --- -Listening for transport dt_socket at address: 5005 -2019-02-28 17:05:22,347 INFO [io.qua.dep.QuarkusAugmentor] (main) Beginning quarkus augmentation -2019-02-28 17:05:22,635 INFO [io.qua.dep.QuarkusAugmentor] (main) Quarkus augmentation completed in 288ms -2019-02-28 17:05:22,770 INFO [io.quarkus] (main) Quarkus started in 0.668s. 
Listening on: http://localhost:8080
2019-02-28 17:05:22,771 INFO  [io.quarkus] (main) Installed features: [cdi, resteasy]
----

Once started, you can request the provided endpoint:

[source,shell]
----
$ curl -w "\n" http://localhost:8080/hello
Hello RESTEasy
----

Hit `CTRL+C` to stop the application, or keep it running and enjoy the blazing fast hot-reload.

[TIP]
.Automatically add newline with `curl -w "\n"`
====
We are using `curl -w "\n"` in this example to avoid your terminal printing a '%' or putting both the result and the next command prompt on the same line.
====

== Using injection

Dependency injection in Quarkus is based on ArC, which is a CDI-based dependency injection solution tailored for Quarkus' architecture.
If you're new to CDI, we recommend reading the xref:cdi.adoc[Introduction to CDI] guide.

Quarkus only implements a subset of the CDI features and comes with non-standard features and specific APIs; you can learn more about it in the xref:cdi-reference.adoc[Contexts and Dependency Injection guide].

ArC comes as a dependency of `quarkus-resteasy` so you already have it handy.

Let's modify the application and add a companion bean.
-Create the `src/main/java/org/acme/GreetingService.java` file with the following content: - -[source, java] ----- -package org.acme; - -import javax.enterprise.context.ApplicationScoped; - -@ApplicationScoped -public class GreetingService { - - public String greeting(String name) { - return "hello " + name; - } - -} ----- - -Edit the `GreetingResource` class to inject the `GreetingService` and create a new endpoint using it: - -[source, java] ----- -package org.acme; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -@Path("/hello") -public class GreetingResource { - - @Inject - GreetingService service; - - @GET - @Produces(MediaType.TEXT_PLAIN) - @Path("/greeting/{name}") - public String greeting(@PathParam String name) { - return service.greeting(name); - } - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return "hello"; - } -} ----- - -If you stopped the application -(keep in mind you don't have to do it, changes will be automatically deployed by our live reload feature), -restart the application with: - -include::includes/devtools/dev.adoc[] - -Then check that the endpoint returns `hello quarkus` as expected: - -[source,shell,subs=attributes+] ----- -$ curl -w "\n" http://localhost:8080/hello/greeting/quarkus -hello quarkus ----- - -== Development Mode - -`quarkus:dev` runs Quarkus in development mode. This enables live reload with background compilation, which means -that when you modify your Java files and/or your resource files and refresh your browser, these changes will automatically take effect. -This works too for resource files like the configuration property file. 
Refreshing the browser triggers a scan of the workspace, and if any changes are detected, the Java files are recompiled
and the application is redeployed; your request is then serviced by the redeployed application. If there are any issues
with compilation or deployment, an error page will let you know.

This will also listen for a debugger on port `5005`. If you want to wait for the debugger to attach before running, you
can pass `-Dsuspend` on the command line. If you don't want the debugger at all, you can use `-Ddebug=false`.

== Testing

All right, so far so good, but wouldn't it be better with a few tests, just in case?

In the generated build file, you can see two test dependencies:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>rest-assured</artifactId>
    <scope>test</scope>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
testImplementation("io.quarkus:quarkus-junit5")
testImplementation("io.rest-assured:rest-assured")
----

Quarkus supports https://junit.org/junit5/[JUnit 5] tests.

Because of this, in the case of Maven, the version of the https://maven.apache.org/surefire/maven-surefire-plugin/[Maven Surefire Plugin] must be set, as the default version does not support JUnit 5:

[source,xml,subs=attributes+]
----
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${surefire-plugin.version}</version>
    <configuration>
        <systemPropertyVariables>
            <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
            <maven.home>${maven.home}</maven.home>
        </systemPropertyVariables>
    </configuration>
</plugin>
----

We also set the `java.util.logging` system property to make sure tests will use the correct log manager, and `maven.home` to ensure that custom configuration
from `${maven.home}/conf/settings.xml` is applied (if any).

The generated project contains a simple test.
-Edit the `src/test/java/org/acme/GreetingResourceTest.java` to match the following content: - -[source,java] ----- -package org.acme; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import java.util.UUID; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; - -@QuarkusTest -public class GreetingResourceTest { - - @Test // <1> - public void testHelloEndpoint() { - given() - .when().get("/hello") - .then() - .statusCode(200) // <2> - .body(is("hello")); - } - - @Test - public void testGreetingEndpoint() { - String uuid = UUID.randomUUID().toString(); - given() - .pathParam("name", uuid) - .when().get("/hello/greeting/{name}") - .then() - .statusCode(200) - .body(is("hello " + uuid)); - } - -} ----- -<1> By using the `QuarkusTest` runner, you instruct JUnit to start the application before the tests. -<2> Check the HTTP response status code and content - -These tests use http://rest-assured.io/[RestAssured], but feel free to use your favorite library. - -You can run these using Maven: - -[source,bash,subs=attributes+] ----- -./mvnw test ----- - -You can also run the test from your IDE directly (be sure you stopped the application first). - -By default, tests will run on port `8081` so as not to conflict with the running application. We automatically -configure RestAssured to use this port. If you want to use a different client you should use the `@TestHTTPResource` -annotation to directly inject the URL of the tested application into a field on the test class. This field can be of the type -`String`, `URL` or `URI`. This annotation can also be given a value for the test path. For example, if I want to test -a Servlet mapped to `/myservlet` I would just add the following to my test: - - -[source,java] ----- -@TestHTTPResource("/myservlet") -URL testUrl; ----- - -The test port can be controlled via the `quarkus.http.test-port` config property. 
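For instance, a minimal sketch of pinning the test port in `application.properties` (the port value below is an arbitrary example):

[source,properties]
----
# Run tests against a fixed port instead of the default 8081
quarkus.http.test-port=8083
----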
Quarkus also creates a system
property called `test.url` that is set to the base test URL for situations where you cannot use injection.

== Working with multi-module projects or external modules

Quarkus heavily utilizes https://github.com/wildfly/jandex[Jandex] at build time to discover various classes and annotations. One immediately recognizable application of this is CDI bean discovery.
As a result, most Quarkus extensions will not work properly if this build time discovery isn't properly set up.

This index is created by default for the project on which Quarkus is configured, thanks to our Maven and Gradle plugins.

However, when working with a multi-module project, be sure to read the `Working with multi-module projects` section of the
xref:maven-tooling.adoc#multi-module-maven[Maven] or xref:gradle-tooling.adoc#multi-module-maven[Gradle] guides.

If you plan to use external modules (for example, an external library for all your domain objects),
you will need to make these modules known to the indexing process, either by adding the Jandex plugin (if you can modify them)
or via the `quarkus.index-dependency` property inside your `application.properties` (useful in cases where you can't modify the module).

Be sure to read the xref:cdi-reference.adoc#bean_discovery[Bean Discovery] section of the CDI guide for more information.

== Packaging and running the application

The application is packaged using:

include::includes/devtools/build.adoc[]

It produces several outputs in `/target`:

* `getting-started-1.0.0-SNAPSHOT.jar` - containing just the classes and resources of the project, it's the regular
artifact produced by the Maven build - it is *not* the runnable jar;
* the `quarkus-app` directory which contains the `quarkus-run.jar` jar file - an executable _jar_. Be aware that it's not an _über-jar_ as
the dependencies are copied into subdirectories of `quarkus-app/lib/`.
You can run the application using: `java -jar target/quarkus-app/quarkus-run.jar`

NOTE: If you want to deploy your application somewhere (typically in a container), you need to deploy the whole `quarkus-app` directory.

NOTE: Before running the application, don't forget to stop the hot reload mode (hit `CTRL+C`), or you will have a port conflict.

[#banner]
== Configuring the banner

By default, when a Quarkus application starts (in regular or dev mode), it will display an ASCII art banner. The banner can be disabled by setting `quarkus.banner.enabled=false` in `application.properties`,
by setting the `-Dquarkus.banner.enabled=false` Java system property, or by setting the `QUARKUS_BANNER_ENABLED` environment variable to `false`.
Furthermore, users can supply a custom banner by placing the banner file in `src/main/resources` and configuring `quarkus.banner.path=name-of-file` in `application.properties`.

== What's next?

This guide covered the creation of an application using Quarkus.
However, there is much more.
We recommend continuing the journey with the xref:building-native-image.adoc[building a native executable guide], where you learn about creating a native executable and packaging it in a container.
If you are interested in reactive, we recommend the xref:getting-started-reactive.adoc[Getting Started with Reactive guide], where you can see how to implement reactive applications with Quarkus.
In addition, the xref:tooling.adoc[tooling guide] document explains how to:

* scaffold a project in a single command line
* enable the _development mode_ (hot reload)
* import the project in your favorite IDE
* and more

diff --git a/_versions/2.7/guides/gradle-config.adoc b/_versions/2.7/guides/gradle-config.adoc deleted file mode 100644 index 198d6863572..00000000000 --- a/_versions/2.7/guides/gradle-config.adoc +++ /dev/null @@ -1,49 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Gradle Plugin Repositories

include::./attributes.adoc[]

// tag::repositories[]
The Quarkus Gradle plugin is published to the https://plugins.gradle.org/plugin/io.quarkus[Gradle Plugin Portal].

To use it, add the following to your `build.gradle` file:

[source, groovy, subs=attributes+]
----
plugins {
    id 'java'
    id 'io.quarkus'
}
----

You also need to add the following at the top of your `settings.gradle` file:

[source, groovy, subs=attributes+]
----
pluginManagement {
    repositories {
        mavenCentral()
        gradlePluginPortal()
    }
    plugins {
        id 'io.quarkus' version "${quarkusPluginVersion}"
    }
}
----

NOTE: The `plugins{}` method in `settings.gradle` is not supported in Gradle 5.x.
In this case make sure to explicitly declare the plugin version in the `build.gradle` file like the example below: - -[source, groovy, subs=attributes+] ----- -plugins { - id 'java' - id 'io.quarkus' version '{quarkus-version}' -} ----- - - - -// end::repositories[] diff --git a/_versions/2.7/guides/gradle-tooling.adoc b/_versions/2.7/guides/gradle-tooling.adoc deleted file mode 100644 index 8a1155c3cb7..00000000000 --- a/_versions/2.7/guides/gradle-tooling.adoc +++ /dev/null @@ -1,573 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Building Quarkus apps with Gradle - -include::./attributes.adoc[] -:devtools-no-maven: - -[[project-creation]] -== Creating a new project - -To scaffold a Gradle project you can either use the xref:cli-tooling.adoc[Quarkus CLI] or the Quarkus Maven plugin: - -[role="primary asciidoc-tabs-sync-cli"] -.CLI -**** -[source, bash] ----- -quarkus create app my-groupId:my-artifactId \ - --extension=resteasy,resteasy-jackson \ - --gradle ----- - -_For more information about how to install the Quarkus CLI and use it, please refer to xref:cli-tooling.adoc[the Quarkus CLI guide]._ -**** - -[role="secondary asciidoc-tabs-sync-maven"] -.Maven -**** -[source, bash, subs=attributes+] ----- -mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create \ - -DprojectGroupId=my-groupId \ - -DprojectArtifactId=my-artifactId \ - -Dextensions="resteasy,resteasy-jackson" \ - -DbuildTool=gradle ----- - -NOTE: If you just launch `mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create` the Maven plugin asks -for user inputs. You can disable this interactive mode (and use default values) by passing `-B` to the Maven command. -**** - -TIP: If you prefer using the Kotlin DSL, use `gradle-kotlin-dsl` instead of `gradle`. 
- -[NOTE] -==== -Quarkus project scaffolding automatically installs the Gradle wrapper (`./gradlew`) in your project. - -If you prefer to use a standalone Gradle installation, please use Gradle {gradle-version}. -==== - -The project is generated in a directory named after the passed artifactId. - -A pair of Dockerfiles for native and JVM modes are also generated in `src/main/docker`. -Instructions to build the image and run the container are written in those Dockerfiles. - -[[custom-test-configuration-profile]] -=== Custom test configuration profile in JVM mode - -By default, Quarkus tests in JVM mode are run using the `test` configuration profile. If you are not familiar with Quarkus -configuration profiles, everything you need to know is explained in the -xref:config.adoc#configuration-profiles[Configuration Profiles Documentation]. - -It is however possible to use a custom configuration profile for your tests with the Gradle build configuration shown below. -This can be useful if you need for example to run some tests using a specific database which is not your default testing -database. - -[role="primary asciidoc-tabs-sync-groovy"] -.Groovy DSL -**** -[source,groovy,subs=attributes+] ----- -test { - systemProperty "quarkus.test.profile", "foo" <1> -} ----- - -<1> The `foo` configuration profile will be used to run the tests. -**** - -[role="secondary asciidoc-tabs-sync-kotlin"] -.Kotlin DSL -**** -[source,kotlin,subs=attributes+] ----- -tasks.test { - systemProperty("quarkus.test.profile", "foo") <1> -} ----- - -<1> The `foo` configuration profile will be used to run the tests. -**** - -[WARNING] -==== -It is not possible to use a custom test configuration profile in native mode for now. Native tests are always run using the -`prod` profile. 
====

== Dealing with extensions

From inside a Quarkus project, you can obtain a list of the available extensions with:

[source,bash,subs=attributes+,role="primary asciidoc-tabs-sync-cli"]
.CLI
----
quarkus extension
----

[source,bash,subs=attributes+,role="secondary asciidoc-tabs-sync-gradle"]
.Gradle
----
./gradlew listExtensions
----

You can enable an extension using:

:add-extension-extensions: hibernate-validator
include::includes/devtools/extension-add.adoc[]

Extensions are passed using a comma-separated list.

The extension name is the GAV name of the extension: e.g. `io.quarkus:quarkus-agroal`.
But you can pass a partial name and Quarkus will do its best to find the right extension.
For example, `agroal`, `Agroal` or `agro` will expand to `io.quarkus:quarkus-agroal`.
If no extension is found or if more than one extension matches, you will see a red check mark ❌ in the command result.

[source,shell]
----
$ ./gradlew addExtension --extensions="jdbc,agroal,non-exist-ent"
[...]
❌ Multiple extensions matching 'jdbc'
     * io.quarkus:quarkus-jdbc-h2
     * io.quarkus:quarkus-jdbc-mariadb
     * io.quarkus:quarkus-jdbc-postgresql
     Be more specific e.g using the exact name or the full gav.
✅ Adding extension io.quarkus:quarkus-agroal
❌ Cannot find a dependency matching 'non-exist-ent', maybe a typo?
[...]
----

You can install all extensions which match a globbing pattern:

:add-extension-extensions: smallrye-*
include::includes/devtools/extension-add.adoc[]

[[dev-mode]]
== Development mode

Quarkus comes with a built-in development mode.
You can start it with:

include::includes/devtools/dev.adoc[]

Note that if you run it this way, the continuous testing experience will not be as nice: because Gradle runs as a daemon,
Quarkus can't draw the 'pretty' test output and falls back to just logging the output.

You can then update the application sources, resources and configurations.
The changes are automatically reflected in your running application.
This is great for development spanning UI and database, as you see changes reflected immediately.

`quarkusDev` enables hot deployment with background compilation, which means that when you modify
your Java files or your resource files and refresh your browser, these changes will automatically take effect.
This works too for resource files like the configuration property file.
The act of refreshing the browser triggers a scan of the workspace, and if any changes are detected, the
Java files are compiled and the application is redeployed; your request is then serviced by the
redeployed application. If there are any issues with compilation or deployment, an error page will let you know.

Hit `CTRL+C` to stop the application.

You can change the working directory the development environment runs in:

[role="primary asciidoc-tabs-sync-groovy"]
.Groovy DSL
****
[source,groovy]
----
quarkusDev {
    workingDir = rootProject.projectDir
}
----
****

[role="secondary asciidoc-tabs-sync-kotlin"]
.Kotlin DSL
****
[source,kotlin]
----
tasks.quarkusDev {
    workingDir = rootProject.projectDir.toString()
}
----
****

[TIP]
====
By default, the `quarkusDev` task uses the `compileJava` compiler options. These can be overridden by setting the `compilerArgs` property in the task.
====

[NOTE]
====
By default, `quarkusDev` sets the debug host to `localhost` (for security reasons). If you need to change this, for example to enable debugging on all hosts, you can use the `-DdebugHost` option like so:

:dev-additional-parameters: -DdebugHost=0.0.0.0
include::includes/devtools/dev-parameters.adoc[]
:!dev-additional-parameters:
==== 
The plugin also exposes a `quarkusDev` configuration. Using this configuration to declare a dependency will restrict the usage of that dependency to development mode.
The `quarkusDev` configuration can be used as follows:

[role="primary asciidoc-tabs-sync-groovy"]
.Groovy DSL
****
[source,groovy]
----
dependencies {
    quarkusDev 'io.quarkus:quarkus-jdbc-h2'
}
----
****

[role="secondary asciidoc-tabs-sync-kotlin"]
.Kotlin DSL
****
[source,kotlin]
----
dependencies {
    quarkusDev("io.quarkus:quarkus-jdbc-h2")
}
----
****

=== Remote Development Mode

It is possible to use development mode remotely, so that you can run Quarkus in a container environment (such as OpenShift)
and have changes made to your local files become immediately visible.

This allows you to develop in the same environment you will actually run your app in, and with access to the same services.

WARNING: Do not use this in production. This should only be used in a development environment. You should not run production applications in dev mode.

To do this you must build a mutable application, using the `mutable-jar` format. Set the following properties in `application.properties`:

[source,properties]
----
quarkus.package.type=mutable-jar <1>
quarkus.live-reload.password=changeit <2>
quarkus.live-reload.url=http://my.cluster.host.com:8080 <3>
----
<1> This tells Quarkus to use the mutable-jar format. Mutable applications also include the deployment time parts of Quarkus,
so they take up a bit more disk space. If run normally they start just as fast and use the same memory as an immutable application,
however they can also be started in dev mode.
<2> The password that is used to secure communication between the remote side and the local side.
<3> The URL that your app is going to be running in dev mode at. This is only needed on the local side, so you
may want to leave it out of the properties file and specify it as a system property on the command line.

The `mutable-jar` is then built in the same way that a regular Quarkus jar is built, i.e.
by issuing:

include::includes/devtools/build.adoc[]

Before you start Quarkus on the remote host, set the environment variable `QUARKUS_LAUNCH_DEVMODE=true`. If you are
on bare metal, you can set it via the `export QUARKUS_LAUNCH_DEVMODE=true` command and then run the application with the proper `java -jar ...` command.

If you plan on running the application via Docker, then you'll need to add `-e QUARKUS_LAUNCH_DEVMODE=true` to the `docker run` command.
When the application starts you should now see the following line in the logs: `Profile dev activated. Live Coding activated`.


NOTE: The remote side does not need to include Maven or any other development tools. The normal `fast-jar` Dockerfile
that is generated with a new Quarkus application is all you need. If you are using bare metal, launch the Quarkus runner
jar; do not attempt to run normal dev mode.

Now you need to connect your local agent to the remote host, using the `remote-dev` command:

[source,bash]
----
./gradlew quarkusRemoteDev -Dquarkus.live-reload.url=http://my-remote-host:8080
----

Now every time you refresh the browser, you should see any changes you have made locally immediately visible in the remote
app.

All the config options are shown below:

include::{generated-dir}/config/quarkus-live-reload-live-reload-config.adoc[opts=optional, leveloffset=+1]

== Debugging

In development mode, Quarkus starts by default with debug mode enabled, listening on port `5005` without suspending the JVM.
This behavior can be changed by giving the `debug` system property one of the following values:

* `false` - the JVM will start with debug mode disabled
* `true` - the JVM is started in debug mode and will be listening on port `5005`
* `client` - the JVM will start in client mode and attempt to connect to `localhost:5005`
* `{port}` - the JVM is started in debug mode and will be listening on `{port}`

An additional system property `suspend` can be used to suspend the JVM, when launched in debug mode. `suspend` supports the following values:

* `y` or `true` - the debug mode JVM launch is suspended
* `n` or `false` - the debug mode JVM is started without suspending

[TIP]
====
You can also run a Quarkus application in debug mode with a suspended JVM using:

:dev-additional-parameters: -Dsuspend -Ddebug
include::includes/devtools/dev-parameters.adoc[]
:!dev-additional-parameters:

Then, attach your debugger to `localhost:5005`.
====

== Import in your IDE

Once you have a <<project-creation,project created>>, you can import it in your favorite IDE.
The only requirement is the ability to import a Gradle project.

**Eclipse**

In Eclipse, click on: `File -> Import`.
In the wizard, select: `Gradle -> Existing Gradle Project`.
On the next screen, select the root location of the project.
The next screen lists the found modules; select the generated project and click on `Finish`. Done!

In a separate terminal, run:

include::includes/devtools/dev.adoc[]

and enjoy a highly productive environment.

**IntelliJ**

In IntelliJ:

1. From inside IntelliJ select `File -> New -> Project From Existing Sources...` or, if you are on the welcome dialog, select `Import project`.
2. Select the project root
3. Select `Import project from external model` and `Gradle`
4. Next a few times (review the different options if needed)
5.
On the last screen click on Finish

In a separate terminal or in the embedded terminal, run:

include::includes/devtools/dev.adoc[]

Enjoy!

**Apache NetBeans**

In NetBeans:

1. Select `File -> Open Project`
2. Select the project root
3. Click on `Open Project`

In a separate terminal or the embedded terminal, go to the project root and run:

include::includes/devtools/dev.adoc[]

Enjoy!

**Visual Studio Code**

Open the project directory in VS Code. If you have installed the Java Extension Pack (grouping a set of Java extensions), the project is loaded as a Gradle project.

== Downloading dependencies for offline development and testing

Quarkus extension dependencies are divided into the runtime extension dependencies that end up on the application runtime
classpath and the deployment (or build time) extension dependencies that are resolved by Quarkus only at application build time to create
the build classpath. Application developers are expected to express dependencies only on the runtime artifacts of Quarkus extensions.

To enable the use-case of building and testing a Quarkus application offline, the plugin includes the `quarkusGoOffline` task, which can be called from the command line like this:

[source,bash]
----
./gradlew quarkusGoOffline
----

This task will resolve all the runtime, build time, test and dev mode dependencies of the application to the Gradle cache.
Once executed, you will be able to safely run Quarkus tasks with the `--offline` flag.

== Building a native executable

Native executables make Quarkus applications ideal for containers and serverless workloads.

Make sure to have `GRAALVM_HOME` configured and pointing to GraalVM version {graalvm-version} (make sure to use a Java 11 version of GraalVM).

Create a native executable using:

include::includes/devtools/build-native.adoc[]

A native executable will be present in `build/`.
Native-related properties can either be added in the `application.properties` file, passed as command-line arguments, or set in the `quarkusBuild` task.
Configuring the `quarkusBuild` task can be done as follows:

[role="primary asciidoc-tabs-sync-groovy"]
.Groovy DSL
****
[source,groovy,subs=attributes+]
----
quarkusBuild {
    nativeArgs {
        containerBuild = true <1>
        builderImage = "quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}" <2>
    }
}
----

<1> Set `quarkus.native.container-build` property to `true`
<2> Set `quarkus.native.builder-image` property to `quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}`
****

[role="secondary asciidoc-tabs-sync-kotlin"]
.Kotlin DSL
****
[source,kotlin,subs=attributes+]
----
tasks.quarkusBuild {
    nativeArgs {
        "container-build" to true <1>
        "builder-image" to "quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}" <2>
    }
}
----

<1> Set `quarkus.native.container-build` property to `true`
<2> Set `quarkus.native.builder-image` property to `quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}`
****

[WARNING]
====
When using the Gradle Groovy DSL, property keys must follow lower camel case notation.
For example, `container-build` is not valid and should be replaced by `containerBuild`.
This limitation does not apply to the Gradle Kotlin DSL.
====

=== Build a container friendly executable

The native executable will be specific to your operating system.
To create an executable that will run in a container, use the following:

include::includes/devtools/build-native-container.adoc[]

The produced executable will be a 64-bit Linux executable, so depending on your operating system it may no longer be runnable.
However, this is not an issue, as we are going to copy it into a Docker container.
Note that in this case the build itself runs in a Docker container too, so you don't need to have GraalVM installed locally.
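As noted above, native-related properties can also be set in `application.properties`, which keeps the build script untouched. The equivalent of the two `quarkusBuild` options shown earlier would be:

[source,properties,subs=attributes+]
----
quarkus.native.container-build=true
quarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}
----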
[TIP]
====
By default, the native executable will be generated using the `quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}` Docker image.

If you want to build a native executable with a different Docker image (for instance to use a different GraalVM version),
use the `-Dquarkus.native.builder-image=<image name>` build argument.

The list of the available Docker images can be found on https://quay.io/repository/quarkus/ubi-quarkus-native-image?tab=tags[quay.io].
Be aware that a given Quarkus version might not be compatible with all the images available.
====

== Running native tests

Run the native tests using:

[source,bash]
----
./gradlew testNative
----

This task depends on `quarkusBuild`, so it will generate the native image before running the tests.

[NOTE]
====
By default, the `native-test` source set is based on the `main` and `test` source sets. It is possible to add an extra source set. For example, if your integration tests are located in an `integrationTest` source set, you can specify it as:

[role="primary asciidoc-tabs-sync-groovy"]
.Groovy DSL
****
[source,groovy]
----
quarkus {
    sourceSets {
        extraNativeTest = sourceSets.integrationTest
    }
}
----
****

[role="secondary asciidoc-tabs-sync-kotlin"]
.Kotlin DSL
****
[source,kotlin]
----
quarkus {
    sourceSets {
        setExtraNativeTest(sourceSets["integrationTest"])
    }
}
----
****

====

== Using fast-jar

`fast-jar` is now the default Quarkus package type. The result of the `./gradlew build` command is a new directory under `build` named `quarkus-app`.

You can run the application using: `java -jar build/quarkus-app/quarkus-run.jar`.

WARNING: In order to successfully run the produced jar, you need to have the entire contents of the `quarkus-app` directory. If any of the files are missing, the application will not start or might not function correctly.
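For reference, the `quarkus-app` directory produced by the build typically has the following layout (exact contents vary by application):

[source,txt]
----
quarkus-app/
├── app/             <1>
├── lib/             <2>
├── quarkus/         <3>
└── quarkus-run.jar  <4>
----
<1> The application's own classes and resources, packaged as a jar.
<2> The dependency jars.
<3> Quarkus-internal jars and generated resources.
<4> The jar to pass to `java -jar`.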
TIP: The `fast-jar` packaging results in creating an artifact that starts a little faster and consumes slightly less memory than a legacy Quarkus jar
because it has indexed information about which dependency jar contains classes and resources. It can thus avoid the lookup into potentially every jar
on the classpath that the legacy jar necessitates when loading a class or resource.

== Building Uber-Jars

The Quarkus Gradle plugin supports the generation of Uber-Jars by specifying a `quarkus.package.type` argument as follows:

:build-additional-parameters: -Dquarkus.package.type=uber-jar
include::includes/devtools/build.adoc[]
:!build-additional-parameters:

When building an Uber-Jar you can specify entries that you want to exclude from the generated jar by using the `--ignored-entry` argument:

[source,bash]
----
./gradlew quarkusBuild -Dquarkus.package.type=uber-jar --ignored-entry=META-INF/file1.txt
----

The entries are relative to the root of the generated Uber-Jar. You can specify multiple entries by adding extra `--ignored-entry` arguments.

[[multi-module-gradle]]
=== Working with multi-module projects

By default, Quarkus will not discover CDI beans inside another module.

The best way to enable CDI bean discovery for a module in a multi-module project is to include a `META-INF/beans.xml` file,
unless it is the main application module, already configured with the Quarkus plugin, in which case it will be indexed automatically.

Alternatively, there are some unofficial link:https://plugins.gradle.org/search?term=jandex[Gradle Jandex plugins] that can be used instead of the `META-INF/beans.xml` file.

More information on this topic can be found in the xref:cdi-reference.adoc#bean_discovery[Bean Discovery] section of the CDI guide.


== Publishing your application

In order to make sure the right dependency versions are being used by Gradle, the BOM is declared as an `enforcedPlatform` in your build file.
By default, the `maven-publish` plugin will prevent you from publishing your application due to this `enforcedPlatform`.
This validation can be skipped by adding the following configuration to your build file:

[role="primary asciidoc-tabs-sync-groovy"]
.Groovy DSL
****
[source,groovy]
----
tasks.withType(GenerateModuleMetadata).configureEach {
    suppressedValidationErrors.add('enforced-platform')
}
----
****

[role="secondary asciidoc-tabs-sync-kotlin"]
.Kotlin DSL
****
[source,kotlin]
----
tasks.withType<GenerateModuleMetadata>().configureEach {
    suppressedValidationErrors.add("enforced-platform")
}
----
****
diff --git a/_versions/2.7/guides/grpc-getting-started.adoc b/_versions/2.7/guides/grpc-getting-started.adoc
deleted file mode 100644
index 6fe98105456..00000000000
--- a/_versions/2.7/guides/grpc-getting-started.adoc
+++ /dev/null
@@ -1,428 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Getting Started with gRPC

include::./attributes.adoc[]

This page explains how to start using gRPC in your Quarkus application.
While this page describes how to configure it with Maven, it is also possible to use Gradle.

Let's imagine you have a regular Quarkus project, generated from the https://code.quarkus.io[Quarkus project generator].
The default configuration is enough, but you can also select some extensions if you want.

== Solution

We recommend that you follow the instructions in the next sections and create the application step by step.
However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `grpc-plain-text-quickstart` {quickstarts-tree-url}/grpc-plain-text-quickstart[directory].
== Configuring your project

Edit the `pom.xml` file to add the Quarkus gRPC extension dependency (just under `<dependencies>`):

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-grpc</artifactId>
</dependency>
----

By default, the `quarkus-grpc` extension relies on the reactive programming model; in this guide we follow the reactive approach.
Under the dependencies section of your `pom.xml` file, replace the `quarkus-resteasy` dependency with:

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-reactive</artifactId>
</dependency>
----

Make sure you have the `generate-code` goal of the `quarkus-maven-plugin` enabled in your `pom.xml`.
If you wish to generate code from different `proto` files for tests, also add the `generate-code-tests` goal.
Please note that no additional task/goal is required for the Gradle plugin.

[source,xml]
----
<build>
    <plugins>
        <plugin>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-maven-plugin</artifactId>
            <version>${quarkus-plugin.version}</version>
            <extensions>true</extensions>
            <executions>
                <execution>
                    <goals>
                        <goal>build</goal>
                        <goal>generate-code</goal>
                        <goal>generate-code-tests</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
----

With this configuration, you can put your service and message definitions in the `src/main/proto` directory.
`quarkus-maven-plugin` will generate Java files from your `proto` files.

`quarkus-maven-plugin` retrieves a version of `protoc` (the protobuf compiler) from Maven repositories. The retrieved version matches your operating system and CPU architecture.
If this retrieved version does not work in your context, you can force the use of a different OS classifier with
`-Dquarkus.grpc.protoc-os-classifier=your-os-classifier` (e.g. `osx-x86_64`).
You can also download the suitable binary and specify its location via
`-Dquarkus.grpc.protoc-path=/path/to/protoc`.


Alternatively to using the `generate-code` goal of the `quarkus-maven-plugin`, you can use `protobuf-maven-plugin` to generate these files; more in <>

Let's start with a simple _Hello_ service.
-Create the `src/main/proto/helloworld.proto` file with the following content: - -[source,javascript] ----- -syntax = "proto3"; - -option java_multiple_files = true; -option java_package = "io.quarkus.example"; -option java_outer_classname = "HelloWorldProto"; - -package helloworld; - -// The greeting service definition. -service Greeter { - // Sends a greeting - rpc SayHello (HelloRequest) returns (HelloReply) {} -} - -// The request message containing the user's name. -message HelloRequest { - string name = 1; -} - -// The response message containing the greetings -message HelloReply { - string message = 1; -} ----- - -This `proto` file defines a simple service interface with a single method (`SayHello`), and the exchanged messages (`HelloRequest` containing the name and `HelloReply` containing the greeting message). - -Before coding, we need to generate the classes used to implement and consume gRPC services. -In a terminal, run: - -[source,shell] ----- -$ mvn compile ----- - -Once generated, you can look at the `target/generated-sources/grpc` directory: - -[source,txt] ----- -target/generated-sources/grpc -└── io - └── quarkus - └── example - ├── Greeter.java - ├── GreeterBean.java - ├── GreeterClient.java - ├── GreeterGrpc.java - ├── HelloReply.java - ├── HelloReplyOrBuilder.java - ├── HelloRequest.java - ├── HelloRequestOrBuilder.java - ├── HelloWorldProto.java - └── MutinyGreeterGrpc.java ----- - -These are the classes we are going to use. - - -=== `proto` files with imports - -Protocol Buffers specification provides a way to import `proto` files. 
The Quarkus code generation mechanism lets you control the scope of dependencies to scan for possible imports by setting the `quarkus.generate-code.grpc.scan-for-imports` property to one of the following:

- `all` - scan all the dependencies
- `none` - don't scan the dependencies, use only what is defined in `src/main/proto` or `src/test/proto`
- `groupId1:artifactId1,groupId2:artifactId2` - scan only the dependencies with a group id and artifact id in the list.

If not specified, the property is set to `com.google.protobuf:protobuf-java`.
To override it, set the `quarkus.generate-code.grpc.scan-for-imports` property in your `application.properties` to the desired value, e.g.

[source,properties]
----
quarkus.generate-code.grpc.scan-for-imports=all
----

=== `proto` files from dependencies
In some cases, you may want to use `proto` files from a different project to generate the gRPC stubs. In this case:

1. Add a dependency on the artifact that contains the proto file to your project.
2. In `application.properties`, specify the dependencies you want to scan for proto files.

[source,properties]
----
quarkus.generate-code.grpc.scan-for-proto=<groupId>:<artifactId>
----
The value of the property may be `none`, which is the default value, or a comma-separated list of `groupId:artifactId` coordinates.

== Implementing a gRPC service

Now that we have the generated classes, let's implement our _hello_ service.

With Quarkus, implementing a service requires implementing the generated service interface, based on Mutiny (a reactive programming API integrated in Quarkus), and exposing it as a CDI bean.
Learn more about Mutiny in the xref:mutiny-primer.adoc[Mutiny guide].
The service class must be annotated with the `@io.quarkus.grpc.GrpcService` annotation.
=== Implementing a service

Create the `src/main/java/org/acme/HelloService.java` file with the following content:

[source,java]
----
package org.acme;

import io.quarkus.example.Greeter;
import io.quarkus.example.HelloReply;
import io.quarkus.example.HelloRequest;
import io.quarkus.grpc.GrpcService;
import io.smallrye.mutiny.Uni;

@GrpcService <1>
public class HelloService implements Greeter { <2>

    @Override
    public Uni<HelloReply> sayHello(HelloRequest request) { <3>
        return Uni.createFrom().item(() ->
                HelloReply.newBuilder().setMessage("Hello " + request.getName()).build()
        );
    }
}
----
<1> Expose your implementation as a bean.
<2> Implement the generated service interface.
<3> Implement the methods defined in the service definition (here we have a single method).

You can also use the default gRPC API instead of Mutiny:

[source,java]
----
package org.acme;

import io.grpc.stub.StreamObserver;
import io.quarkus.example.GreeterGrpc;
import io.quarkus.example.HelloReply;
import io.quarkus.example.HelloRequest;
import io.quarkus.grpc.GrpcService;

@GrpcService <1>
public class HelloService extends GreeterGrpc.GreeterImplBase { <2>

    @Override
    public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) { <3>
        String name = request.getName();
        String message = "Hello " + name;
        responseObserver.onNext(HelloReply.newBuilder().setMessage(message).build()); <4>
        responseObserver.onCompleted(); <5>
    }
}
----
<1> Expose your implementation as a bean.
<2> Extend the `ImplBase` class. This is a generated class.
<3> Implement the methods defined in the service definition (here we have a single method).
<4> Build and send the response.
<5> Close the response.

NOTE: If your service implementation logic is blocking (uses blocking I/O, for example), annotate your method with
`@Blocking`.
The `io.smallrye.common.annotation.Blocking` annotation instructs the framework to invoke the
annotated method on a worker thread instead of the I/O thread (event loop).

=== The gRPC server

The services are _served_ by a _server_.
Available services (_CDI beans_) are automatically registered and exposed.

By default, the server is exposed on `localhost:9000`, and uses _plain-text_ (so no TLS) when
running normally, and `localhost:9001` for tests.

Run the application using: `mvn quarkus:dev`.

== Consuming a gRPC service

In this section, we are going to consume the service we expose.
To keep things simple, we consume the service from the same application, which you would not do in the real world.

Open the existing `org.acme.ExampleResource` class, and edit the content to become:

[source, java]
----
package org.acme;

import io.quarkus.example.Greeter;
import io.quarkus.example.HelloRequest;
import io.quarkus.grpc.GrpcClient;
import io.smallrye.mutiny.Uni;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class ExampleResource {

    @GrpcClient // <1>
    Greeter hello; // <2>

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello";
    }

    @GET
    @Path("/{name}")
    public Uni<String> hello(@PathParam("name") String name) {
        return hello.sayHello(HelloRequest.newBuilder().setName(name).build())
                .onItem().transform(helloReply -> helloReply.getMessage()); // <3>
    }
}
----
<1> Inject the service and configure its name. The name is used in the application configuration. If not specified, the field name is used instead: `hello` in this particular case.
<2> Use the generated service interface based on the Mutiny API.
<3> Invoke the service.

We need to configure the application to indicate where the `hello` service is found.
In the `src/main/resources/application.properties` file, add the following property:

[source,properties]
----
quarkus.grpc.clients.hello.host=localhost
----

- `hello` is the name used in the `@GrpcClient` annotation.
- `host` configures the service host (here it's localhost).

Then, open http://localhost:8080/hello/quarkus in a browser, and you should get `Hello quarkus`!

== Packaging the application

Like any other Quarkus application, you can package it with: `mvn package`.
You can also package the application into a native executable with: `mvn package -Pnative`.

== Generating Java files from proto with protobuf-maven-plugin

Alternatively to using Quarkus code generation to generate stubs for `proto` files, you can also use
`protobuf-maven-plugin`.

To do it, first define the two following properties in the `<properties>` section:

[source,xml,subs="verbatim,attributes"]
----
<grpc.version>{grpc-version}</grpc.version>
<protoc.version>{protoc-version}</protoc.version>
----

They configure the gRPC version and the `protoc` version.

Then, add to the `build` section the `os-maven-plugin` extension and the `protobuf-maven-plugin` configuration:

[source,xml,subs="verbatim,attributes"]
----
<build>
    <extensions>
        <extension>
            <groupId>kr.motd.maven</groupId>
            <artifactId>os-maven-plugin</artifactId>
            <version>${os-maven-plugin-version}</version>
        </extension>
    </extensions>
    <plugins>
        <plugin>
            <groupId>org.xolstice.maven.plugins</groupId>
            <artifactId>protobuf-maven-plugin</artifactId> // <1>
            <version>${protobuf-maven-plugin-version}</version>
            <configuration>
                <protocArtifact>com.google.protobuf:protoc:${protoc.version}:exe:${os.detected.classifier}</protocArtifact> // <2>
                <pluginId>grpc-java</pluginId>
                <pluginArtifact>io.grpc:protoc-gen-grpc-java:${grpc.version}:exe:${os.detected.classifier}</pluginArtifact>
                <protocPlugins>
                    <protocPlugin>
                        <id>quarkus-grpc-protoc-plugin</id>
                        <groupId>io.quarkus</groupId>
                        <artifactId>quarkus-grpc-protoc-plugin</artifactId>
                        <version>{quarkus-version}</version>
                        <mainClass>io.quarkus.grpc.protoc.plugin.MutinyGrpcGenerator</mainClass>
                    </protocPlugin>
                </protocPlugins>
            </configuration>
            <executions>
                <execution>
                    <id>compile</id>
                    <goals>
                        <goal>compile</goal>
                        <goal>compile-custom</goal>
                    </goals>
                </execution>
                <execution>
                    <id>test-compile</id>
                    <goals>
                        <goal>test-compile</goal>
                        <goal>test-compile-custom</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
----
<1> The `protobuf-maven-plugin` that generates stub classes from your gRPC service definition (`proto` files).
<2> The class generation uses a tool named `protoc`, which is OS-specific.
That's why we use the `os-maven-plugin` to target the executable compatible with the operating system.

NOTE: This configuration instructs the `protobuf-maven-plugin` to generate the default gRPC classes and classes using Mutiny to fit with the Quarkus development experience.

IMPORTANT: When using `protobuf-maven-plugin` instead of the `quarkus-maven-plugin`, every time you update the `proto` files, you need to re-generate the classes (using `mvn compile`).


== gRPC classes from dependencies

When gRPC classes - the classes generated from `proto` files - are in a dependency of the application, the dependency needs a Jandex index.
The `jandex-maven-plugin` can be used to create a Jandex index. More information on this topic can be found in the xref:cdi-reference.adoc#bean_discovery[Bean Discovery] section of the CDI guide.

[source,xml,subs="attributes+"]
----
<build>
    <plugins>
        <plugin>
            <groupId>org.jboss.jandex</groupId>
            <artifactId>jandex-maven-plugin</artifactId>
            <version>{jandex-maven-plugin-version}</version>
            <executions>
                <execution>
                    <id>make-index</id>
                    <goals>
                        <goal>jandex</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
----
\ No newline at end of file
diff --git a/_versions/2.7/guides/grpc-service-consumption.adoc b/_versions/2.7/guides/grpc-service-consumption.adoc
deleted file mode 100644
index 858758b1475..00000000000
--- a/_versions/2.7/guides/grpc-service-consumption.adoc
+++ /dev/null
@@ -1,401 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Consuming a gRPC Service

include::./attributes.adoc[]

gRPC clients can be injected in your application code.

IMPORTANT: Consuming gRPC services requires the gRPC classes to be generated.
Place your `proto` files in `src/main/proto` and run `mvn compile`.

== Stubs and Injection

gRPC generation provides several kinds of stubs, offering different ways to consume a gRPC service.
-You can inject: - -* a service interface using the Mutiny API, -* a blocking stub using the gRPC API, -* a reactive stub based on Mutiny, -* the gRPC `io.grpc.Channel`, that lets you create other types of stubs. - -[source, java] ----- -import io.quarkus.grpc.GrpcClient; - -import hello.Greeter; -import hello.GreeterGrpc.GreeterBlockingStub; -import hello.MutinyGreeterGrpc.MutinyGreeterStub; - -class MyBean { - - // A service interface using the Mutiny API - @GrpcClient("helloService") // <1> - Greeter greeter; - - // A reactive stub based on Mutiny - @GrpcClient("helloService") - MutinyGreeterGrpc.MutinyGreeterStub mutiny; - - // A blocking stub using the gRPC API - @GrpcClient - GreeterGrpc.GreeterBlockingStub helloService; // <2> - - @GrpcClient("hello-service") - Channel channel; - -} ----- -<1> A gRPC client injection point must be annotated with the `@GrpcClient` qualifier. This qualifier can be used to specify the name that is used to configure the underlying gRPC client. For example, if you set it to `hello-service`, configuring the host of the service is done using the `quarkus.grpc.clients.**hello-service**.host`. -<2> If the name is not specified via the `GrpcClient#value()` then the field name is used instead, e.g. `helloService` in this particular case. - -The stub class names are derived from the service name used in your `proto` file. -For example, if you use `Greeter` as a service name as in: - -[source] ----- -option java_package = "hello"; - -service Greeter { - rpc SayHello (HelloRequest) returns (HelloReply) {} -} ----- - -Then the service interface name is: `hello.Greeter`, the Mutiny stub name is: `hello.MutinyGreeterGrpc.MutinyGreeterStub` and the blocking stub name is: `hello.GreeterGrpc.GreeterBlockingStub`. 
== Examples

=== Service Interface

[source, java]
----
import io.quarkus.grpc.GrpcClient;
import io.smallrye.mutiny.Uni;

import hello.Greeter;
import hello.HelloReply;
import hello.HelloRequest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class ExampleResource {

    @GrpcClient <1>
    Greeter hello;

    @GET
    @Path("/mutiny/{name}")
    public Uni<String> helloMutiny(@PathParam("name") String name) {
        return hello.sayHello(HelloRequest.newBuilder().setName(name).build())
                .onItem().transform(HelloReply::getMessage);
    }
}
----
<1> The service name is derived from the injection point - the field name is used. The `quarkus.grpc.clients.hello.host` property must be set.

=== Blocking Stub

[source, java]
----
import io.quarkus.grpc.GrpcClient;

import hello.GreeterGrpc;
import hello.HelloRequest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class ExampleResource {

    @GrpcClient("hello") <1>
    GreeterGrpc.GreeterBlockingStub blockingHelloService;

    @GET
    @Path("/blocking/{name}")
    public String helloBlocking(@PathParam("name") String name) {
        return blockingHelloService.sayHello(HelloRequest.newBuilder().setName(name).build()).getMessage();
    }
}
----
<1> The `quarkus.grpc.clients.hello.host` property must be set.
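For these examples to work in local development, the `hello` client also needs minimal configuration in `application.properties`; the host and port below are assumptions for a gRPC server listening locally on the default plain-text port:

[source,properties]
----
quarkus.grpc.clients.hello.host=localhost
quarkus.grpc.clients.hello.port=9000
----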
=== Handling streams

gRPC allows sending and receiving streams:

[source]
----
service Streaming {
    rpc Source(Empty) returns (stream Item) {} // Returns a stream
    rpc Sink(stream Item) returns (Empty) {}   // Reads a stream
    rpc Pipe(stream Item) returns (stream Item) {}  // Reads a stream and returns a stream
}
----

Using the Mutiny stub, you can interact with these as follows:

[source, java]
----
package io.quarkus.grpc.example.streaming;

import io.grpc.examples.streaming.Empty;
import io.grpc.examples.streaming.Item;
import io.grpc.examples.streaming.MutinyStreamingGrpc;
import io.quarkus.grpc.GrpcClient;

import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.Uni;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/streaming")
@Produces(MediaType.APPLICATION_JSON)
public class StreamingEndpoint {

    @GrpcClient
    MutinyStreamingGrpc.MutinyStreamingStub streaming;

    @GET
    public Multi<String> invokeSource() {
        // Retrieve a stream
        return streaming.source(Empty.newBuilder().build())
                .onItem().transform(Item::getValue);
    }

    @GET
    @Path("sink/{max}")
    public Uni<Void> invokeSink(@PathParam("max") int max) {
        // Send a stream and wait for completion
        Multi<Item> inputs = Multi.createFrom().range(0, max)
                .map(i -> Integer.toString(i))
                .map(i -> Item.newBuilder().setValue(i).build());
        return streaming.sink(inputs).onItem().ignore().andContinueWithNull();
    }

    @GET
    @Path("/{max}")
    public Multi<String> invokePipe(@PathParam("max") int max) {
        // Send a stream and retrieve a stream
        Multi<Item> inputs = Multi.createFrom().range(0, max)
                .map(i -> Integer.toString(i))
                .map(i -> Item.newBuilder().setValue(i).build());
        return streaming.pipe(inputs).onItem().transform(Item::getValue);
    }

}
----

== Client configuration

For each gRPC service you inject in your application, you can
configure the following attributes:

include::{generated-dir}/config/quarkus-grpc-config-group-config-grpc-client-configuration.adoc[opts=optional, leveloffset=+1]

The `client-name` is the name set in `@GrpcClient` or derived from the injection point if not explicitly defined.

The following examples use _hello_ as the client name.
Don't forget to replace it with the name you used in the `@GrpcClient` annotation.

=== Enabling TLS

To enable TLS, use the following configuration.
Note that all paths in the configuration may either specify a resource on the classpath
(typically from `src/main/resources` or one of its subfolders) or an external file.

[source,properties]
----
quarkus.grpc.clients.hello.host=localhost

# either a path to a classpath resource or to a file:
quarkus.grpc.clients.hello.ssl.trust-store=tls/ca.pem
----

NOTE: When SSL/TLS is configured, `plain-text` is automatically disabled.

=== TLS with Mutual Auth

To use TLS with mutual authentication, use the following configuration:

[source,properties]
----
quarkus.grpc.clients.hello.host=localhost
quarkus.grpc.clients.hello.plain-text=false

# all the following may use either a path to a classpath resource or to a file:
quarkus.grpc.clients.hello.ssl.certificate=tls/client.pem
quarkus.grpc.clients.hello.ssl.key=tls/client.key
quarkus.grpc.clients.hello.ssl.trust-store=tls/ca.pem
----

=== Client Deadlines

It is often reasonable to set a deadline (timeout) for a gRPC client, i.e. to specify a duration of time after which the RPC times out and the client receives the status error `DEADLINE_EXCEEDED`.
You can specify the deadline via the `quarkus.grpc.clients."service-name".deadline` configuration property, e.g.:

[source,properties]
----
quarkus.grpc.clients.hello.host=localhost
quarkus.grpc.clients.hello.deadline=2s <1>
----
<1> Set the deadline used for all calls performed by this client.
== gRPC Headers
Similarly to HTTP, gRPC calls can carry headers alongside the message.
Headers can be useful e.g. for authentication.

To set headers for a gRPC call, create a client with headers attached and then perform the call on this client:
[source,java]
----
import javax.enterprise.context.ApplicationScoped;

import examples.Greeter;
import examples.HelloReply;
import examples.HelloRequest;
import io.grpc.Metadata;
import io.quarkus.grpc.GrpcClient;
import io.quarkus.grpc.GrpcClientUtils;
import io.smallrye.mutiny.Uni;

@ApplicationScoped
public class MyService {

    // header values must be added via a typed Metadata.Key
    private static final Metadata.Key<String> MY_HEADER =
            Metadata.Key.of("my-header", Metadata.ASCII_STRING_MARSHALLER);

    @GrpcClient
    Greeter client;

    public Uni<HelloReply> doTheCall(String name) {
        Metadata extraHeaders = new Metadata();
        extraHeaders.put(MY_HEADER, "my-header-value");

        Greeter alteredClient = GrpcClientUtils.attachHeaders(client, extraHeaders); // <1>

        return alteredClient.sayHello(HelloRequest.newBuilder().setName(name).build()); // <2>
    }
}
----
<1> Alter the client to make calls with the `extraHeaders` attached.
<2> Perform the call with the altered client. The original client remains unmodified.

`GrpcClientUtils` works with all flavors of clients.

== Client Interceptors

A gRPC client interceptor can be implemented by a CDI bean that also implements the `io.grpc.ClientInterceptor` interface.
You can annotate an injected client with `@io.quarkus.grpc.RegisterClientInterceptor` to register the specified interceptor for the particular client instance.
The `@RegisterClientInterceptor` annotation is repeatable.
Alternatively, if you want to apply the interceptor to every injected client, annotate the interceptor bean with `@io.quarkus.grpc.GlobalInterceptor`.
.Global Client Interceptor Example
[source, java]
----
import io.quarkus.grpc.GlobalInterceptor;

import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.MethodDescriptor;

@GlobalInterceptor <1>
@ApplicationScoped
public class MyInterceptor implements ClientInterceptor {

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(MethodDescriptor<ReqT, RespT> method,
            CallOptions callOptions, Channel next) {
        // ... add your logic here; a pass-through interceptor simply delegates:
        return next.newCall(method, callOptions);
    }
}
----
<1> This interceptor is applied to all injected gRPC clients.

TIP: Check the https://grpc.github.io/grpc-java/javadoc/io/grpc/ClientInterceptor.html[ClientInterceptor JavaDoc] to properly implement your interceptor.

.`@RegisterClientInterceptor` Example
[source, java]
----
import io.quarkus.grpc.GrpcClient;
import io.quarkus.grpc.RegisterClientInterceptor;

import hello.Greeter;

@ApplicationScoped
class MyBean {

    @RegisterClientInterceptor(MySpecialInterceptor.class) <1>
    @GrpcClient("helloService")
    Greeter greeter;
}
----
<1> Registers the `MySpecialInterceptor` for this particular client.

When you have multiple client interceptors, you can order them by implementing the `javax.enterprise.inject.spi.Prioritized` interface:

[source, java]
----
@ApplicationScoped
public class MyInterceptor implements ClientInterceptor, Prioritized {

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(MethodDescriptor<ReqT, RespT> method,
            CallOptions callOptions, Channel next) {
        // ...
        return next.newCall(method, callOptions);
    }

    @Override
    public int getPriority() {
        return 10;
    }
}
----

Interceptors with the highest priority are called first.
The default priority, used if the interceptor does not implement the `Prioritized` interface, is `0`.

== gRPC Client metrics

=== Enabling metrics collection

gRPC client metrics are automatically enabled when the application also uses the xref:micrometer.adoc[`quarkus-micrometer`] extension.
Micrometer collects the metrics of all the gRPC clients used by the application.
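To export the collected metrics to Prometheus, the Prometheus Micrometer registry extension needs to be on the classpath as well:

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-micrometer-registry-prometheus</artifactId>
</dependency>
----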
- -As an example, if you export the metrics to Prometheus, you will get: - -[source, text] ----- -# HELP grpc_client_responses_received_messages_total The total number of responses received -# TYPE grpc_client_responses_received_messages_total counter -grpc_client_responses_received_messages_total{method="SayHello",methodType="UNARY",service="helloworld.Greeter",} 6.0 -# HELP grpc_client_requests_sent_messages_total The total number of requests sent -# TYPE grpc_client_requests_sent_messages_total counter -grpc_client_requests_sent_messages_total{method="SayHello",methodType="UNARY",service="helloworld.Greeter",} 6.0 -# HELP grpc_client_processing_duration_seconds The total time taken for the client to complete the call, including network delay -# TYPE grpc_client_processing_duration_seconds summary -grpc_client_processing_duration_seconds_count{method="SayHello",methodType="UNARY",service="helloworld.Greeter",statusCode="OK",} 6.0 -grpc_client_processing_duration_seconds_sum{method="SayHello",methodType="UNARY",service="helloworld.Greeter",statusCode="OK",} 0.167411625 -# HELP grpc_client_processing_duration_seconds_max The total time taken for the client to complete the call, including network delay -# TYPE grpc_client_processing_duration_seconds_max gauge -grpc_client_processing_duration_seconds_max{method="SayHello",methodType="UNARY",service="helloworld.Greeter",statusCode="OK",} 0.136478028 ----- - -The service name, method and type can be found in the _tags_. 
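Assuming Prometheus scrapes these metrics, an illustrative PromQL query like the following derives the average call duration per method from the summary shown above:

[source]
----
rate(grpc_client_processing_duration_seconds_sum[5m])
  / rate(grpc_client_processing_duration_seconds_count[5m])
----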
- -=== Disabling metrics collection - -To disable the gRPC client metrics when `quarkus-micrometer` is used, add the following property to the application configuration: - -[source, properties] ----- -quarkus.micrometer.binder.grpc-client.enabled=false ----- \ No newline at end of file diff --git a/_versions/2.7/guides/grpc-service-implementation.adoc b/_versions/2.7/guides/grpc-service-implementation.adoc deleted file mode 100644 index 82ec98d1aeb..00000000000 --- a/_versions/2.7/guides/grpc-service-implementation.adoc +++ /dev/null @@ -1,396 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Implementing a gRPC Service - -include::./attributes.adoc[] - -gRPC service implementations exposed as CDI beans are automatically registered and served by quarkus-grpc. - -IMPORTANT: Implementing a gRPC service requires the gRPC classes to be generated. -Place your `proto` files in `src/main/proto` and run `mvn compile`. - -== Generated Code - -Quarkus generates a few implementation classes for services declared in the `proto` file: - -1. A _service interface_ using the Mutiny API - - the class name is `${JAVA_PACKAGE}.${NAME_OF_THE_SERVICE}` -2. An _implementation base_ class using the gRPC API - - the class name is structured as follows: `${JAVA_PACKAGE}.${NAME_OF_THE_SERVICE}Grpc.${NAME_OF_THE_SERVICE}ImplBase` - -For example, if you use the following `proto` file snippet: - -[source] ----- -option java_package = "hello"; <1> - -service Greeter { <2> - rpc SayHello (HelloRequest) returns (HelloReply) {} -} ----- -<1> `hello` is the java package for the generated classes. -<2> `Greeter` is the service name. - -Then the service interface is `hello.Greeter` and the implementation base is the abstract static nested class: `hello.GreeterGrpc.GreeterImplBase`. 
 - -IMPORTANT: You'll need to implement the _service interface_ or extend the _base class_ with your service implementation bean as described in the following sections. - -== Implementing a Service with the Mutiny API - -To implement a gRPC service using the Mutiny API, create a class that implements the service interface. -Then, implement the methods defined in the service interface. -If you don't want to implement a service method, just throw a `java.lang.UnsupportedOperationException` from the method body (the exception will be automatically converted to the appropriate gRPC exception). -Finally, implement the service and add the `@GrpcService` annotation: - -[source, java] ---- -import io.quarkus.grpc.GrpcService; -import hello.Greeter; - -@GrpcService <1> -public class HelloService implements Greeter { <2> - - @Override - public Uni<HelloReply> sayHello(HelloRequest request) { - return Uni.createFrom().item(() -> - HelloReply.newBuilder().setMessage("Hello " + request.getName()).build() - ); - } -} ---- -<1> A gRPC service implementation bean must be annotated with the `@GrpcService` annotation and should not declare any other CDI qualifier. All gRPC services have the `javax.inject.Singleton` scope. Additionally, the request context is always active during a service call. -<2> `hello.Greeter` is the generated service interface. - -NOTE: The service implementation bean can also extend the Mutiny implementation base, where the class name is structured as follows: `Mutiny${NAME_OF_THE_SERVICE}Grpc.${NAME_OF_THE_SERVICE}ImplBase`. - -== Implementing a Service with the default gRPC API - -To implement a gRPC service using the default gRPC API, create a class that extends the default implementation base. -Then, override the methods defined in the service interface.
 -Finally, implement the service and add the `@GrpcService` annotation: - -[source, java] ---- -import io.quarkus.grpc.GrpcService; - -@GrpcService -public class HelloService extends GreeterGrpc.GreeterImplBase { - - @Override - public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) { - String name = request.getName(); - String message = "Hello " + name; - responseObserver.onNext(HelloReply.newBuilder().setMessage(message).build()); - responseObserver.onCompleted(); - } -} ---- - -== Blocking Service Implementation - -By default, all the methods from a gRPC service run on the event loop. -As a consequence, you must **not** block. -If your service logic must block, annotate the method with `io.smallrye.common.annotation.Blocking`: - -[source, java] ---- -@Override -@Blocking -public Uni<HelloReply> sayHelloBlocking(HelloRequest request) { - // Do something blocking before returning the Uni -} ---- - -== Handling Streams - -gRPC allows receiving and returning streams: - -[source] ---- -service Streaming { - rpc Source(Empty) returns (stream Item) {} // Returns a stream - rpc Sink(stream Item) returns (Empty) {} // Reads a stream - rpc Pipe(stream Item) returns (stream Item) {} // Reads a stream and returns a stream -} ---- - -Using Mutiny, you can implement these as follows: - -[source, java] ---- -import io.quarkus.grpc.GrpcService; - -@GrpcService -public class StreamingService implements Streaming { - - @Override - public Multi<Item> source(Empty request) { - // Just returns a stream emitting an item every 2ms and stopping after 10 items. - return Multi.createFrom().ticks().every(Duration.ofMillis(2)) - .select().first(10) - .map(l -> Item.newBuilder().setValue(Long.toString(l)).build()); - } - - @Override - public Uni<Empty> sink(Multi<Item> request) { - // Reads the incoming stream and consumes all the items.
 - return request - .map(Item::getValue) - .map(Long::parseLong) - .collect().last() - .map(l -> Empty.newBuilder().build()); - } - - @Override - public Multi<Item> pipe(Multi<Item> request) { - // Reads the incoming stream, computes a running sum and returns the cumulative results - // in the outbound stream. - return request - .map(Item::getValue) - .map(Long::parseLong) - .onItem().scan(() -> 0L, Long::sum) - .onItem().transform(l -> Item.newBuilder().setValue(Long.toString(l)).build()); - } -} ---- - -== Health Check -For the implemented services, Quarkus gRPC exposes health information in the following format: -[source,protobuf] ---- -syntax = "proto3"; - -package grpc.health.v1; - -message HealthCheckRequest { - string service = 1; -} - -message HealthCheckResponse { - enum ServingStatus { - UNKNOWN = 0; - SERVING = 1; - NOT_SERVING = 2; - } - ServingStatus status = 1; -} - -service Health { - rpc Check(HealthCheckRequest) returns (HealthCheckResponse); - - rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse); -} ---- - -Clients can specify the fully qualified service name to get the health status of a specific service -or skip specifying the service name to get the general status of the gRPC server. - -For more details, check out the -https://github.com/grpc/grpc/blob/v1.28.1/doc/health-checking.md[gRPC documentation]. - -Additionally, if Quarkus SmallRye Health is added to the application, a readiness check for -the state of the gRPC services will be added to the MicroProfile Health endpoint response, that is `/q/health`. - -== Reflection Service - -Quarkus gRPC Server implements the https://github.com/grpc/grpc/blob/master/doc/server-reflection.md[reflection service]. -This service allows tools like https://github.com/fullstorydev/grpcurl[grpcurl] or https://github.com/gusaul/grpcox[grpcox] to interact with your services. - -The reflection service is enabled by default in _dev_ mode.
-In test or production mode, you need to enable it explicitly by setting `quarkus.grpc.server.enable-reflection-service` to `true`. - -== Scaling -By default, quarkus-grpc starts a single gRPC server running on a single event loop. - -If you wish to scale your server, you can set the number of server instances by setting `quarkus.grpc.server.instances`. - -== Server Configuration - -include::{generated-dir}/config/quarkus-grpc-config-group-config-grpc-server-configuration.adoc[opts=optional, leveloffset=+1] - -== Example of Configuration - -=== Enabling TLS - -To enable TLS, use the following configuration. - -Note that all paths in the configuration may either specify a resource on the classpath -(typically from `src/main/resources` or its subfolder) or an external file. - -[source,properties] ----- -quarkus.grpc.server.ssl.certificate=tls/server.pem -quarkus.grpc.server.ssl.key=tls/server.key ----- - -NOTE: When SSL/TLS is configured, `plain-text` is automatically disabled. - -=== TLS with Mutual Auth - -To use TLS with mutual authentication, use the following configuration: - -[source,properties] ----- -quarkus.grpc.server.ssl.certificate=tls/server.pem -quarkus.grpc.server.ssl.key=tls/server.key -quarkus.grpc.server.ssl.trust-store=tls/ca.jks -quarkus.grpc.server.ssl.trust-store-password=***** -quarkus.grpc.server.ssl.client-auth=REQUIRED ----- - -== Server Interceptors - -gRPC server interceptors let you perform logic, such as authentication, before your service is invoked. - -You can implement a gRPC server interceptor by creating an `@ApplicationScoped` bean implementing `io.grpc.ServerInterceptor`: - -[source, java] ----- -@ApplicationScoped -// add @GlobalInterceptor for interceptors meant to be invoked for every service -public class MyInterceptor implements ServerInterceptor { - - @Override - public ServerCall.Listener interceptCall(ServerCall serverCall, - Metadata metadata, ServerCallHandler serverCallHandler) { - // ... 
- } -} ----- - -TIP: Check the https://grpc.github.io/grpc-java/javadoc/io/grpc/ServerInterceptor.html[ServerInterceptor JavaDoc] to properly implement your interceptor. - -To apply an interceptor to all exposed services, annotate it with `@io.quarkus.grpc.GlobalInterceptor`. -To apply an interceptor to a single service, register it on the service with `@io.quarkus.grpc.RegisterInterceptor`: -[source, java] ----- -import io.quarkus.grpc.GrpcService; -import io.quarkus.grpc.RegisterInterceptor; - -@GrpcService -@RegisterInterceptor(MyInterceptor.class) -public class StreamingService implements Streaming { - // ... -} ----- - -When you have multiple server interceptors, you can order them by implementing the `javax.enterprise.inject.spi.Prioritized` interface. Please note that all the global interceptors are invoked before the service-specific -interceptors. - -[source, java] ----- -@ApplicationScoped -public class MyInterceptor implements ServerInterceptor, Prioritized { - - @Override - public ServerCall.Listener interceptCall(ServerCall serverCall, - Metadata metadata, ServerCallHandler serverCallHandler) { - // ... - } - - @Override - public int getPriority() { - return 10; - } -} ----- - -Interceptors with the highest priority are called first. -The default priority, used if the interceptor does not implement the `Prioritized` interface, is `0`. - - -== Testing your services - -The easiest way to test a gRPC service is to use a gRPC client as described -in xref:grpc-service-consumption.adoc[Consuming a gRPC Service]. - -Please note that in the case of using a client to test an exposed service that does not use TLS, -there is no need to provide any configuration. E.g. 
to test the `HelloService` -defined above, one could create the following test: - -[source,java] ----- -public class HelloServiceTest implements Greeter { - - @GrpcClient - Greeter client; - - @Test - void shouldReturnHello() { - CompletableFuture message = new CompletableFuture<>(); - client.sayHello(HelloRequest.newBuilder().setName("Quarkus").build()) - .subscribe().with(reply -> message.complete(reply.getMessage())); - assertThat(message.get(5, TimeUnit.SECONDS)).isEqualTo("Hello Quarkus"); - } -} ----- - -== Trying out your services manually -In the dev mode, you can try out your gRPC services in the Quarkus Dev UI. -Just go to http://localhost:8080/q/dev and click on _Services_ under the gRPC tile. - -Please note that your application needs to expose the "normal" HTTP port for the Dev UI to be accessible. If your application does not expose any HTTP endpoints, you can create a dedicated profile with a dependency on `quarkus-vertx-http`: -[source,xml] ----- - - - development - - - io.quarkus - quarkus-vertx-http - - - - ----- -Having it, you can run the dev mode with: `mvn quarkus:dev -Pdevelopment`. - -If you use Gradle, you can simply add a dependency for the `quarkusDev` task: - -[source,groovy] ----- -dependencies { - quarkusDev 'io.quarkus:quarkus-vertx-http' -} ----- - -== gRPC Server metrics - -=== Enabling metrics collection - -gRPC server metrics are automatically enabled when the application also uses the xref:micrometer.adoc[`quarkus-micrometer`] extension. -Micrometer collects the metrics of all the gRPC services implemented by the application. 
- -As an example, if you export the metrics to Prometheus, you will get: - -[source, text] ----- -# HELP grpc_server_responses_sent_messages_total The total number of responses sent -# TYPE grpc_server_responses_sent_messages_total counter -grpc_server_responses_sent_messages_total{method="SayHello",methodType="UNARY",service="helloworld.Greeter",} 6.0 -# HELP grpc_server_processing_duration_seconds The total time taken for the server to complete the call -# TYPE grpc_server_processing_duration_seconds summary -grpc_server_processing_duration_seconds_count{method="SayHello",methodType="UNARY",service="helloworld.Greeter",statusCode="OK",} 6.0 -grpc_server_processing_duration_seconds_sum{method="SayHello",methodType="UNARY",service="helloworld.Greeter",statusCode="OK",} 0.016216771 -# HELP grpc_server_processing_duration_seconds_max The total time taken for the server to complete the call -# TYPE grpc_server_processing_duration_seconds_max gauge -grpc_server_processing_duration_seconds_max{method="SayHello",methodType="UNARY",service="helloworld.Greeter",statusCode="OK",} 0.007985236 -# HELP grpc_server_requests_received_messages_total The total number of requests received -# TYPE grpc_server_requests_received_messages_total counter -grpc_server_requests_received_messages_total{method="SayHello",methodType="UNARY",service="helloworld.Greeter",} 6.0 ----- - -The service name, method and type can be found in the _tags_. 
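Because `grpc_server_processing_duration_seconds` is exported as a summary, the mean time per call can be derived by dividing `_sum` by `_count`. A tiny illustrative snippet (the class name is hypothetical) using the sample values above:

```java
// Illustration: derive the mean gRPC call duration from the
// summary's _sum and _count samples shown above.
public class GrpcServerMetricAverage {

    static double meanSeconds(double sum, double count) {
        return sum / count;
    }

    public static void main(String[] args) {
        // Values taken from the sample Prometheus output above:
        // _sum = 0.016216771 s over _count = 6 calls, i.e. ~2.7 ms per call.
        System.out.printf("%.6f%n", meanSeconds(0.016216771, 6.0));
    }
}
```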
 - -=== Disabling metrics collection - -To disable the gRPC server metrics when `quarkus-micrometer` is used, add the following property to the application configuration: - -[source, properties] ---- -quarkus.micrometer.binder.grpc-server.enabled=false ---- diff --git a/_versions/2.7/guides/grpc.adoc b/_versions/2.7/guides/grpc.adoc deleted file mode 100644 index fb4ecc50281..00000000000 --- a/_versions/2.7/guides/grpc.adoc +++ /dev/null @@ -1,29 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= gRPC - -include::./attributes.adoc[] - -https://grpc.io/[gRPC] is a high-performance RPC framework. -It can efficiently connect services implemented using various languages and frameworks. -It is also applicable in the last mile of distributed computing to connect devices, mobile applications, and browsers to backend services. - -In general, gRPC uses HTTP/2, TLS, and https://developers.google.com/protocol-buffers[Protobuf (Protocol Buffers)]. -In a microservice architecture, gRPC is an efficient, type-safe alternative to HTTP. - -The Quarkus gRPC extension integrates gRPC into Quarkus applications. -It: - -* supports implementing gRPC services -* supports consuming gRPC services -* integrates with the reactive engine from Quarkus as well as the reactive development model -* allows plain-text communication as well as TLS, and TLS with mutual authentication - -Quarkus gRPC is based on https://vertx.io/docs/vertx-grpc/java/[Vert.x gRPC].
 - -* xref:grpc-getting-started.adoc[Getting Started] -* xref:grpc-service-implementation.adoc[Implementing a gRPC Service] -* xref:grpc-service-consumption.adoc[Consuming a gRPC Service] diff --git a/_versions/2.7/guides/guides.md b/_versions/2.7/guides/guides.md deleted file mode 100644 index 21a674e3752..00000000000 --- a/_versions/2.7/guides/guides.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -layout: documentation -title: Guides -permalink: /version/2.7/guides/ ---- diff --git a/_versions/2.7/guides/hibernate-orm-panache-kotlin.adoc b/_versions/2.7/guides/hibernate-orm-panache-kotlin.adoc deleted file mode 100644 index 2faf13d54cb..00000000000 --- a/_versions/2.7/guides/hibernate-orm-panache-kotlin.adoc +++ /dev/null @@ -1,217 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Simplified Hibernate ORM with Panache and Kotlin - -include::./attributes.adoc[] -:config-file: application.properties - -Hibernate ORM is the de facto standard JPA implementation and is well-known in the Java ecosystem. Hibernate ORM with Panache offers a -new layer atop this familiar framework. This guide will not dive into the specifics of either as those are already -covered in the xref:hibernate-orm-panache.adoc[Hibernate ORM with Panache guide]. In this guide, we'll cover the Kotlin-specific changes -needed to use Hibernate ORM with Panache in your Kotlin-based Quarkus applications. - -NOTE: When using the Kotlin version of Hibernate ORM with Panache, note that the `PanacheEntity`, `PanacheQuery` and `PanacheRepository` are in a different package: `io.quarkus.hibernate.orm.panache.kotlin`. - -== First: an example - -As we saw in the Hibernate with Panache guide, Panache allows us to enrich our entities and repositories (also known as DAOs) with automatically -provided functionality.
When using Kotlin, the approach is very similar to what we see in the Java version with a slight -change or two. To Panache-enable your entity, you would define it something like: - -[source,kotlin] ---- -@Entity -class Person: PanacheEntity() { - lateinit var name: String - lateinit var birth: LocalDate - lateinit var status: Status -} ---- - -As you can see, our entities remain simple. There is, however, a slight difference from the Java version. The Kotlin -language doesn't support the notion of static methods in quite the same way as Java does. Instead, we must use a -https://kotlinlang.org/docs/tutorials/kotlin-for-py/objects-and-companion-objects.html#companion-objects[companion object]: - -[source,kotlin] ---- -@Entity -class Person : PanacheEntity() { - companion object: PanacheCompanion<Person> { // <1> - fun findByName(name: String) = find("name", name).firstResult() - fun findAlive() = list("status", Status.Alive) - fun deleteStefs() = delete("name", "Stef") - } - - lateinit var name: String // <2> - lateinit var birth: LocalDate - lateinit var status: Status -} ---- -<1> The companion object holds all the methods not related to a specific instance, allowing for general management and -querying bound to a specific type. -<2> Here there are options, but we've chosen the `lateinit` approach. This allows us to declare these fields as non-null -knowing they will be properly assigned either by the constructor (not shown) or by Hibernate loading data from the -database. - -NOTE: These types differ from the Java types mentioned in those tutorials. For Kotlin support, all the Panache -types will be found in the `io.quarkus.hibernate.orm.panache.kotlin` package. This subpackage allows for the distinction -between the Java and Kotlin variants and allows for both to be used unambiguously in a single project.
- -In the Kotlin version, we've simply moved the bulk of the link:https://www.martinfowler.com/eaaCatalog/activeRecord.html[`active record pattern`] -functionality to the `companion object`. Apart from this slight change, we can then work with our types in ways that map easily -from the Java side of world. - - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `hibernate-orm-panache-kotlin-quickstart` {quickstarts-tree-url}/hibernate-orm-panache-kotlin-quickstart[directory]. - - -== Setting up and configuring Hibernate ORM with Panache and Kotlin - -To get started using Hibernate ORM with Panache and Kotlin, you can, generally, follow the steps laid out in the Java tutorial. The biggest -change to configuring your project is the Quarkus artifact to include. You can, of course, keep the Java version if you -need but if all you need are the Kotlin APIs then include the following dependency instead: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - io.quarkus - quarkus-hibernate-orm-panache-kotlin // <1> - ----- -<1> Note the addition of `-kotlin` on the end. Generally you'll only need this version but if your project will be using -both Java and Kotlin code, you can safely include both artifacts. - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-hibernate-orm-panache-kotlin") <1> ----- -<1> Note the addition of `-kotlin` on the end. Generally you'll only need this version but if your project will be using -both Java and Kotlin code, you can safely include both artifacts. 
- -== Using the repository pattern - - -=== Defining your entity - -When using the repository pattern, you can define your entities as regular JPA entities. -[source,kotlin] ----- -@Entity -class Person { - @Id - @GeneratedValue - var id: Long? = null; - lateinit var name: String - lateinit var birth: LocalDate - lateinit var status: Status -} ----- - -=== Defining your repository - -When using Repositories, you get the exact same convenient methods as with the active record pattern, injected in your Repository, -by making them implement `PanacheRepository`: - -[source,kotlin] ----- -@ApplicationScoped -class PersonRepository: PanacheRepository { - fun findByName(name: String) = find("name", name).firstResult() - fun findAlive() = list("status", Status.Alive) - fun deleteStefs() = delete("name", "Stef") -} ----- - -All the operations that are defined on `PanacheEntityBase` are available on your repository, so using it -is exactly the same as using the active record pattern, except you need to inject it: - -[source,kotlin] ----- -@Inject -lateinit var personRepository: PersonRepository - -@GET -fun count() = personRepository.count() ----- - -=== Most useful operations - -Once you have written your repository, here are the most common operations you will be able to perform: - -[source,kotlin] ----- -// creating a person -var person = Person() -person.name = "Stef" -person.birth = LocalDate.of(1910, Month.FEBRUARY, 1) -person.status = Status.Alive - -// persist it -personRepository.persist(person) - -// note that once persisted, you don't need to explicitly save your entity: all -// modifications are automatically persisted on transaction commit. 
- -// check if it's persistent -if(personRepository.isPersistent(person)){ - // delete it - personRepository.delete(person) -} - -// getting a list of all Person entities -val allPersons = personRepository.listAll() - -// finding a specific person by ID -person = personRepository.findById(personId) ?: throw Exception("No person with that ID") - -// finding all living persons -val livingPersons = personRepository.list("status", Status.Alive) - -// counting all persons -val countAll = personRepository.count() - -// counting all living persons -val countAlive = personRepository.count("status", Status.Alive) - -// delete all living persons -personRepository.delete("status", Status.Alive) - -// delete all persons -personRepository.deleteAll() - -// delete by id -val deleted = personRepository.deleteById(personId) - -// set the name of all living persons to 'Mortal' -personRepository.update("name = 'Mortal' where status = ?1", Status.Alive) - ----- - -All `list` methods have equivalent `stream` versions. - -[source,kotlin] ----- -val persons = personRepository.streamAll(); -val namesButEmmanuels = persons - .map { it.name.toLowerCase() } - .filter { it != "emmanuel" } ----- - -NOTE: The `stream` methods require a transaction to work. - -For more examples, please consult the xref:hibernate-orm-panache.adoc[Java version] for complete details. Both APIs -are the same and work identically except for some Kotlin-specific tweaks to make things feel more natural to -Kotlin developers. These tweaks include things like better use of nullability and the lack of `Optional` on API -methods. 
diff --git a/_versions/2.7/guides/hibernate-orm-panache.adoc b/_versions/2.7/guides/hibernate-orm-panache.adoc deleted file mode 100644 index ca8c587b7be..00000000000 --- a/_versions/2.7/guides/hibernate-orm-panache.adoc +++ /dev/null @@ -1,1182 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Simplified Hibernate ORM with Panache - -include::./attributes.adoc[] -:config-file: application.properties - -Hibernate ORM is the de facto JPA implementation and offers you the full breadth of an Object Relational Mapper. -It makes complex mappings possible, but it does not make simple and common mappings trivial. -Hibernate ORM with Panache focuses on making your entities trivial and fun to write in Quarkus. - -== First: an example - -What we're doing in Panache is to allow you to write your Hibernate ORM entities like this: - -[source,java] ----- -public enum Status { - Alive, - Deceased -} - -@Entity -public class Person extends PanacheEntity { - public String name; - public LocalDate birth; - public Status status; - - public static Person findByName(String name){ - return find("name", name).firstResult(); - } - - public static List findAlive(){ - return list("status", Status.Alive); - } - - public static void deleteStefs(){ - delete("name", "Stef"); - } -} ----- - -You have noticed how much more compact and readable the code is? -Does this look interesting? Read on! - -NOTE: the `list()` method might be surprising at first. It takes fragments of HQL (JP-QL) queries and contextualizes the rest. That makes for very concise but yet readable code. - -NOTE: what was described above is essentially the link:https://www.martinfowler.com/eaaCatalog/activeRecord.html[active record pattern], sometimes just called the entity pattern. 
-Hibernate with Panache also allows for the use of the more classical link:https://martinfowler.com/eaaCatalog/repository.html[repository pattern] via `PanacheRepository`. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `hibernate-orm-panache-quickstart` {quickstarts-tree-url}/hibernate-orm-panache-quickstart[directory]. - - -== Setting up and configuring Hibernate ORM with Panache - -To get started: - -* add your settings in `{config-file}` -* annotate your entities with `@Entity` -* make your entities extend `PanacheEntity` (optional if you are using the repository pattern) - -Follow the xref:hibernate-orm.adoc#setting-up-and-configuring-hibernate-orm[Hibernate set-up guide for all configuration]. - -In your build file, add the following dependencies: - -* the Hibernate ORM with Panache extension -* your JDBC driver extension (`quarkus-jdbc-postgresql`, `quarkus-jdbc-h2`, `quarkus-jdbc-mariadb`, ...) - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - - io.quarkus - quarkus-hibernate-orm-panache - - - - - io.quarkus - quarkus-jdbc-postgresql - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -// Hibernate ORM specific dependencies -implementation("io.quarkus:quarkus-hibernate-orm-panache") - -// JDBC driver dependencies -implementation("io.quarkus:quarkus-jdbc-postgresql") ----- - -Then add the relevant configuration properties in `{config-file}`. 
- -[source,properties] ----- -# configure your datasource -quarkus.datasource.db-kind = postgresql -quarkus.datasource.username = sarah -quarkus.datasource.password = connor -quarkus.datasource.jdbc.url = jdbc:postgresql://localhost:5432/mydatabase - -# drop and create the database at startup (use `update` to only update the schema) -quarkus.hibernate-orm.database.generation = drop-and-create ----- - -== Solution 1: using the active record pattern - -=== Defining your entity - -To define a Panache entity, simply extend `PanacheEntity`, annotate it with `@Entity` and add your -columns as public fields: - -[source,java] ----- -@Entity -public class Person extends PanacheEntity { - public String name; - public LocalDate birth; - public Status status; -} ----- - -You can put all your JPA column annotations on the public fields. If you need a field to not be persisted, use the -`@Transient` annotation on it. If you need to write accessors, you can: - -[source,java] ----- -@Entity -public class Person extends PanacheEntity { - public String name; - public LocalDate birth; - public Status status; - - // return name as uppercase in the model - public String getName(){ - return name.toUpperCase(); - } - - // store all names in lowercase in the DB - public void setName(String name){ - this.name = name.toLowerCase(); - } -} ----- - -And thanks to our field access rewrite, when your users read `person.name` they will actually call your `getName()` accessor, -and similarly for field writes and the setter. -This allows for proper encapsulation at runtime as all fields calls will be replaced by the corresponding getter/setter calls. 
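The behavior the field-access rewrite guarantees can be sketched in plain Java, with no Panache involved: every read and write of the public field behaves as if it went through the accessor. This sketch simply calls the accessors directly to show the resulting behavior:

```java
// Plain-Java illustration (not Quarkus bytecode enhancement) of what the
// Panache field-access rewrite guarantees: reading `person.name` behaves
// like getName(), writing it behaves like setName().
public class Person {
    private String name; // a public field in the real Panache entity

    public String getName() {
        return name.toUpperCase(); // model exposes uppercase
    }

    public void setName(String name) {
        this.name = name.toLowerCase(); // DB stores lowercase
    }

    public static void main(String[] args) {
        Person person = new Person();
        person.setName("Stef");               // with Panache: person.name = "Stef"
        System.out.println(person.getName()); // with Panache: person.name reads "STEF"
    }
}
```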
- -=== Most useful operations - -Once you have written your entity, here are the most common operations you will be able to perform: - -[source,java] ----- -// creating a person -Person person = new Person(); -person.name = "Stef"; -person.birth = LocalDate.of(1910, Month.FEBRUARY, 1); -person.status = Status.Alive; - -// persist it -person.persist(); - -// note that once persisted, you don't need to explicitly save your entity: all -// modifications are automatically persisted on transaction commit. - -// check if it's persistent -if(person.isPersistent()){ - // delete it - person.delete(); -} - -// getting a list of all Person entities -List allPersons = Person.listAll(); - -// finding a specific person by ID -person = Person.findById(personId); - -// finding a specific person by ID via an Optional -Optional optional = Person.findByIdOptional(personId); -person = optional.orElseThrow(() -> new NotFoundException()); - -// finding all living persons -List livingPersons = Person.list("status", Status.Alive); - -// counting all persons -long countAll = Person.count(); - -// counting all living persons -long countAlive = Person.count("status", Status.Alive); - -// delete all living persons -Person.delete("status", Status.Alive); - -// delete all persons -Person.deleteAll(); - -// delete by id -boolean deleted = Person.deleteById(personId); - -// set the name of all living persons to 'Mortal' -Person.update("name = 'Mortal' where status = ?1", Status.Alive); - ----- - -All `list` methods have equivalent `stream` versions. - -[source,java] ----- -try (Stream persons = Person.streamAll()) { - List namesButEmmanuels = persons - .map(p -> p.name.toLowerCase() ) - .filter( n -> ! "emmanuel".equals(n) ) - .collect(Collectors.toList()); -} ----- - -NOTE: The `stream` methods require a transaction to work. + -As they perform I/O operations, they should be closed via the `close()` method or via a try-with-resource to close the underlying `ResultSet`. 
If not, you will see warnings from Agroal, which will close the underlying `ResultSet` for you.

=== Adding entity methods

Add custom queries on your entities inside the entities themselves.
That way, you and your co-workers can find them easily, and queries are co-located with the object they operate on.
Adding them as static methods in your entity class is the Panache Active Record way.

[source,java]
----
@Entity
public class Person extends PanacheEntity {
    public String name;
    public LocalDate birth;
    public Status status;

    public static Person findByName(String name){
        return find("name", name).firstResult();
    }

    public static List<Person> findAlive(){
        return list("status", Status.Alive);
    }

    public static void deleteStefs(){
        delete("name", "Stef");
    }
}
----

== Solution 2: using the repository pattern

=== Defining your entity

When using the repository pattern, you can define your entities as regular JPA entities.

[source,java]
----
@Entity
public class Person {
    @Id @GeneratedValue private Long id;
    private String name;
    private LocalDate birth;
    private Status status;

    public Long getId(){
        return id;
    }
    public void setId(Long id){
        this.id = id;
    }
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public LocalDate getBirth() {
        return birth;
    }
    public void setBirth(LocalDate birth) {
        this.birth = birth;
    }
    public Status getStatus() {
        return status;
    }
    public void setStatus(Status status) {
        this.status = status;
    }
}
----

TIP: If you don't want to bother defining getters/setters for your entities, you can make them extend `PanacheEntityBase` and
Quarkus will generate them for you. You can even extend `PanacheEntity` and take advantage of the default ID it provides.
=== Defining your repository

When using repositories, you get the exact same convenient methods as with the active record pattern, injected into your repository,
by making it implement `PanacheRepository`:

[source,java]
----
@ApplicationScoped
public class PersonRepository implements PanacheRepository<Person> {

    // put your custom logic here as instance methods

    public Person findByName(String name){
        return find("name", name).firstResult();
    }

    public List<Person> findAlive(){
        return list("status", Status.Alive);
    }

    public void deleteStefs(){
        delete("name", "Stef");
    }
}
----

All the operations that are defined on `PanacheEntityBase` are available on your repository, so using it
is exactly the same as using the active record pattern, except you need to inject it:

[source,java]
----
@Inject
PersonRepository personRepository;

@GET
public long count(){
    return personRepository.count();
}
----

=== Most useful operations

Once you have written your repository, here are the most common operations you will be able to perform:

[source,java]
----
// creating a person
Person person = new Person();
person.setName("Stef");
person.setBirth(LocalDate.of(1910, Month.FEBRUARY, 1));
person.setStatus(Status.Alive);

// persist it
personRepository.persist(person);

// note that once persisted, you don't need to explicitly save your entity: all
// modifications are automatically persisted on transaction commit.
// check if it's persistent
if(personRepository.isPersistent(person)){
    // delete it
    personRepository.delete(person);
}

// getting a list of all Person entities
List<Person> allPersons = personRepository.listAll();

// finding a specific person by ID
person = personRepository.findById(personId);

// finding a specific person by ID via an Optional
Optional<Person> optional = personRepository.findByIdOptional(personId);
person = optional.orElseThrow(() -> new NotFoundException());

// finding all living persons
List<Person> livingPersons = personRepository.list("status", Status.Alive);

// counting all persons
long countAll = personRepository.count();

// counting all living persons
long countAlive = personRepository.count("status", Status.Alive);

// delete all living persons
personRepository.delete("status", Status.Alive);

// delete all persons
personRepository.deleteAll();

// delete by id
boolean deleted = personRepository.deleteById(personId);

// set the name of all living persons to 'Mortal'
personRepository.update("name = 'Mortal' where status = ?1", Status.Alive);
----

All `list` methods have equivalent `stream` versions.

[source,java]
----
try (Stream<Person> persons = personRepository.streamAll()) {
    List<String> namesButEmmanuels = persons
        .map(p -> p.getName().toLowerCase() )
        .filter( n -> ! "emmanuel".equals(n) )
        .collect(Collectors.toList());
}
----

NOTE: The `stream` methods require a transaction to work.

NOTE: The rest of the documentation shows usages based on the active record pattern only,
but keep in mind that they can be performed with the repository pattern as well.
The repository pattern examples have been omitted for brevity.

== Writing a JAX-RS resource

First, include one of the RESTEasy extensions to enable JAX-RS endpoints: for example, add the `io.quarkus:quarkus-resteasy-jackson` dependency for JAX-RS and JSON support.
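In Maven coordinates, that extension is added like any other Quarkus dependency. This is a sketch rather than a snippet from the guide; no version is specified because the Quarkus BOM is assumed to manage it:

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
```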
Then, you can create the following resource to create/read/update/delete your Person entity:

[source,java]
----
@Path("/persons")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class PersonResource {

    @GET
    public List<Person> list() {
        return Person.listAll();
    }

    @GET
    @Path("/{id}")
    public Person get(@PathParam("id") Long id) {
        return Person.findById(id);
    }

    @POST
    @Transactional
    public Response create(Person person) {
        person.persist();
        return Response.created(URI.create("/persons/" + person.id)).build();
    }

    @PUT
    @Path("/{id}")
    @Transactional
    public Person update(@PathParam("id") Long id, Person person) {
        Person entity = Person.findById(id);
        if(entity == null) {
            throw new NotFoundException();
        }

        // map all fields from the person parameter to the existing entity
        entity.name = person.name;

        return entity;
    }

    @DELETE
    @Path("/{id}")
    @Transactional
    public void delete(@PathParam("id") Long id) {
        Person entity = Person.findById(id);
        if(entity == null) {
            throw new NotFoundException();
        }
        entity.delete();
    }

    @GET
    @Path("/search/{name}")
    public Person search(@PathParam("name") String name) {
        return Person.findByName(name);
    }

    @GET
    @Path("/count")
    public Long count() {
        return Person.count();
    }
}
----

NOTE: Be careful to use the `@Transactional` annotation on the operations that modify the database;
you can add the annotation at the class level for simplicity.

== Advanced Query

=== Paging

You should only use `list` and `stream` methods if your table contains a small enough data set.
For larger data sets you can use the `find` method equivalents, which return a `PanacheQuery` on which you can do paging:

[source,java]
----
// create a query for all living persons
PanacheQuery<Person> livingPersons = Person.find("status", Status.Alive);

// make it use pages of 25 entries at a time
livingPersons.page(Page.ofSize(25));

// get the first page
List<Person> firstPage = livingPersons.list();

// get the second page
List<Person> secondPage = livingPersons.nextPage().list();

// get page 7
List<Person> page7 = livingPersons.page(Page.of(7, 25)).list();

// get the number of pages
int numberOfPages = livingPersons.pageCount();

// get the total number of entities returned by this query without paging
long count = livingPersons.count();

// and you can chain methods of course
return Person.find("status", Status.Alive)
    .page(Page.ofSize(25))
    .nextPage()
    .stream();
----

The `PanacheQuery` type has many other methods to deal with paging and returning streams.

=== Using a range instead of pages

`PanacheQuery` also allows range-based queries.

[source,java]
----
// create a query for all living persons
PanacheQuery<Person> livingPersons = Person.find("status", Status.Alive);

// make it use a range: start at index 0 until index 24 (inclusive).
livingPersons.range(0, 24);

// get the range
List<Person> firstRange = livingPersons.list();

// to get the next range, you need to call range again
List<Person> secondRange = livingPersons.range(25, 49).list();
----

[WARNING]
====
You cannot mix ranges and pages: if you use a range, all methods that depend on having a current page will throw an `UnsupportedOperationException`;
you can switch back to paging using `page(Page)` or `page(int, int)`.
====

=== Sorting

All methods accepting a query string also accept the following simplified query form:

[source,java]
----
List<Person> persons = Person.list("order by name,birth");
----

But these methods also accept an optional `Sort` parameter, which allows you to abstract your sorting:

[source,java]
----
List<Person> persons = Person.list(Sort.by("name").and("birth"));

// and with more restrictions
List<Person> persons = Person.list("status", Sort.by("name").and("birth"), Status.Alive);
----

The `Sort` class has plenty of methods for adding columns and specifying sort direction.

=== Simplified queries

Normally, HQL queries are of this form: `from EntityName [where ...] [order by ...]`, with optional elements
at the end.

If your select query does not start with `from`, we support the following additional forms:

- `order by ...` which will expand to `from EntityName order by ...`
- `<singleColumnName>` (and single parameter) which will expand to `from EntityName where <singleColumnName> = ?`
- `<query>` will expand to `from EntityName where <query>`

If your update query does not start with `update`, we support the following additional forms:

- `from EntityName ...` which will expand to `update from EntityName ...`
- `set? <singleColumnName>` (and single parameter) which will expand to `update from EntityName set <singleColumnName> = ?`
- `set? <update-query>` will expand to `update from EntityName set <update-query>`

If your delete query does not start with `delete`, we support the following additional forms:

- `from EntityName ...` which will expand to `delete from EntityName ...`
- `<singleColumnName>` (and single parameter) which will expand to `delete from EntityName where <singleColumnName> = ?`
- `<query>` will expand to `delete from EntityName where <query>`

NOTE: You can also write your queries in plain
link:https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#hql[HQL]:

[source,java]
----
Order.find("select distinct o from Order o left join fetch o.lineItems");
Order.update("update from Person set name = 'Mortal' where status = ?1", Status.Alive);
----

=== Named queries

You can reference a named query instead of a (simplified) HQL query by prefixing its name with the '#' character. You can also use named queries for count, update and delete queries.

[source,java]
----
@Entity
@NamedQueries({
    @NamedQuery(name = "Person.getByName", query = "from Person where name = ?1"),
    @NamedQuery(name = "Person.countByStatus", query = "select count(*) from Person p where p.status = :status"),
    @NamedQuery(name = "Person.updateStatusById", query = "update Person p set p.status = :status where p.id = :id"),
    @NamedQuery(name = "Person.deleteById", query = "delete from Person p where p.id = ?1")
})
public class Person extends PanacheEntity {
    public String name;
    public LocalDate birth;
    public Status status;

    public static Person findByName(String name){
        return find("#Person.getByName", name).firstResult();
    }

    public static long countByStatus(Status status) {
        return count("#Person.countByStatus", Parameters.with("status", status).map());
    }

    public static long updateStatusById(Status status, long id) {
        return update("#Person.updateStatusById", Parameters.with("status", status).and("id", id));
    }

    public static long deleteById(long id) {
        return delete("#Person.deleteById", id);
    }
}
----
[WARNING]
====
Named queries can only be defined inside your JPA entity classes (being the Panache entity class, or the repository parameterized type),
or on one of its super classes.
====

=== Query parameters

You can pass query parameters by index (1-based) as shown below:

[source,java]
----
Person.find("name = ?1 and status = ?2", "stef", Status.Alive);
----

Or by name using a `Map`:

[source,java]
----
Map<String, Object> params = new HashMap<>();
params.put("name", "stef");
params.put("status", Status.Alive);
Person.find("name = :name and status = :status", params);
----

Or using the convenience class `Parameters`, either as-is or to build a `Map`:

[source,java]
----
// generate a Map
Person.find("name = :name and status = :status",
    Parameters.with("name", "stef").and("status", Status.Alive).map());

// use it as-is
Person.find("name = :name and status = :status",
    Parameters.with("name", "stef").and("status", Status.Alive));
----

Every query operation accepts passing parameters by index (`Object...`), or by name (`Map<String, Object>` or `Parameters`).

=== Query projection

Query projection can be done with the `project(Class)` method on the `PanacheQuery` object that is returned by the `find()` methods.

You can use it to restrict which fields will be returned by the database.

Hibernate will use **DTO projection** and generate a SELECT clause with the attributes from the projection class.
This is also called **dynamic instantiation** or **constructor expression**; more info can be found in the Hibernate guide:
link:https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#hql-select-clause[hql select clause]

The projection class needs to be a valid Java Bean and have a constructor that takes all of its attributes as parameters; this constructor will be used to
instantiate the projection DTO instead of using the entity class.
[source,java]
----
import io.quarkus.runtime.annotations.RegisterForReflection;

@RegisterForReflection // <1>
public class PersonName {
    public final String name; // <2>

    public PersonName(String name){ // <3>
        this.name = name;
    }
}

// only 'name' will be loaded from the database
PanacheQuery<PersonName> query = Person.find("status", Status.Alive).project(PersonName.class);
----
<1> The `@RegisterForReflection` annotation instructs Quarkus to keep the class and its members during the native compilation. More details about the `@RegisterForReflection` annotation can be found on the xref:writing-native-applications-tips.adoc#registerForReflection[native application tips] page.
<2> We use public fields here, but you can use private fields and getters/setters if you prefer.
<3> This constructor will be used by Hibernate; it must be the only constructor in your class and have all the class attributes as parameters.

[WARNING]
====
The implementation of the `project(Class)` method uses the constructor's parameter names to build the select clause of the query,
so the compiler must be configured to store parameter names inside the compiled class.
This is enabled by default if you are using the Quarkus Maven archetype. If you are not using it, add the property `<maven.compiler.parameters>true</maven.compiler.parameters>` to your `pom.xml`.
====

If the DTO projection object has a field from a referenced entity, you can use the `@ProjectedFieldName` annotation to provide the path for the SELECT statement.
[source,java]
----
@Entity
public class Dog extends PanacheEntity {
    public String name;
    public String race;
    @ManyToOne
    public Person owner;
}

@RegisterForReflection
public class DogDto {
    public String name;
    public String ownerName;

    public DogDto(String name, @ProjectedFieldName("owner.name") String ownerName) { // <1>
        this.name = name;
        this.ownerName = ownerName;
    }
}

PanacheQuery<DogDto> query = Dog.findAll().project(DogDto.class);
----
<1> The `ownerName` DTO constructor's parameter will be loaded from the `owner.name` HQL property.

== Multiple Persistence Units

The support for multiple persistence units is described in detail in xref:hibernate-orm.adoc#multiple-persistence-units[the Hibernate ORM guide].

When using Panache, things are simple:

* A given Panache entity can be attached to only a single persistence unit.
* Given that, Panache already provides the necessary plumbing to transparently find the appropriate `EntityManager` associated with a Panache entity.

== Transactions

Make sure to wrap methods modifying your database (e.g. `entity.persist()`) within a transaction. Marking a
CDI bean method `@Transactional` will do that for you and make that method a transaction boundary. We recommend doing
so at your application entry point boundaries, like your REST endpoint controllers.

JPA batches changes you make to your entities and sends them (this is called a flush) at the end of the transaction or before a query.
This is usually a good thing, as it's more efficient.
But if you want to check optimistic locking failures, do object validation right away, or generally get immediate feedback, you can force the flush operation by calling `entity.flush()`, or even use `entity.persistAndFlush()` to make it a single method call. This will allow you to catch any `PersistenceException` that could occur when JPA sends those changes to the database.
Remember, this is less efficient, so don't abuse it.
And your transaction still has to be committed.

Here is an example of using the flush method to perform a specific action in case of `PersistenceException`:
[source,java]
----
@Transactional
public void create(Parameter parameter){
    try {
        // Here I use the persistAndFlush() shorthand method on a Panache repository
        // to persist to database then flush the changes.
        parameterRepository.persistAndFlush(parameter);
    }
    catch(PersistenceException pe){
        LOG.error("Unable to create the parameter", pe);
        // in case of error, I save it to disk
        diskPersister.save(parameter);
    }
}
----

== Lock management

Panache provides direct support for database locking with your entity/repository, using `findById(Object, LockModeType)` or `find().withLock(LockModeType)`.

The following examples are for the active record pattern, but the same can be used with repositories.

=== First: Locking using findById().

[source,java]
----
public class PersonEndpoint {

    @GET
    @Transactional
    public Person findByIdForUpdate(Long id){
        Person p = Person.findById(id, LockModeType.PESSIMISTIC_WRITE);
        // do something useful, the lock will be released when the transaction ends.
        return p;
    }

}
----

=== Second: Locking in a find().

[source,java]
----
public class PersonEndpoint {

    @GET
    @Transactional
    public Person findByNameForUpdate(String name){
        Person p = Person.find("name", name).withLock(LockModeType.PESSIMISTIC_WRITE).firstResult();
        // do something useful, the lock will be released when the transaction ends.
        return p;
    }

}
----

Be careful: locks are released when the transaction ends, so the method that invokes the lock query must be annotated with the `@Transactional` annotation.

== Custom IDs

IDs are often a touchy subject, and not everyone's up for letting them be handled by the framework; once again, we
You can specify your own ID strategy by extending `PanacheEntityBase` instead of `PanacheEntity`. Then
you just declare whatever ID you want as a public field:

[source,java]
----
@Entity
public class Person extends PanacheEntityBase {

    @Id
    @SequenceGenerator(
            name = "personSequence",
            sequenceName = "person_id_seq",
            allocationSize = 1,
            initialValue = 4)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "personSequence")
    public Integer id;

    //...
}
----

If you're using repositories, then you will want to extend `PanacheRepositoryBase` instead of `PanacheRepository`
and specify your ID type as an extra type parameter:

[source,java]
----
@ApplicationScoped
public class PersonRepository implements PanacheRepositoryBase<Person, Integer> {
    //...
}
----

== Mocking

=== Using the active record pattern

If you are using the active record pattern, you cannot use Mockito directly, as it does not support mocking static methods,
but you can use the `quarkus-panache-mock` module, which allows you to use Mockito to mock all provided static
methods, including your own.
Add this dependency to your `pom.xml`:

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-panache-mock</artifactId>
    <scope>test</scope>
</dependency>
----

Given this simple entity:

[source,java]
----
@Entity
public class Person extends PanacheEntity {

    public String name;

    public static List<Person> findOrdered() {
        return find("ORDER BY name").list();
    }

    public static void voidMethod() {
        throw new RuntimeException("void");
    }
}
----

You can write your mocking test like this:

[source,java]
----
@QuarkusTest
public class PanacheFunctionalityTest {

    @Test
    public void testPanacheMocking() {
        PanacheMock.mock(Person.class);

        // Mocked classes always return a default value
        Assertions.assertEquals(0, Person.count());

        // Now let's specify the return value
        Mockito.when(Person.count()).thenReturn(23L);
        Assertions.assertEquals(23, Person.count());

        // Now let's change the return value
        Mockito.when(Person.count()).thenReturn(42L);
        Assertions.assertEquals(42, Person.count());

        // Now let's call the original method
        Mockito.when(Person.count()).thenCallRealMethod();
        Assertions.assertEquals(0, Person.count());

        // Check that we called it 4 times
        PanacheMock.verify(Person.class, Mockito.times(4)).count(); // <1>

        // Mock only with specific parameters
        Person p = new Person();
        Mockito.when(Person.findById(12L)).thenReturn(p);
        Assertions.assertSame(p, Person.findById(12L));
        Assertions.assertNull(Person.findById(42L));

        // Mock throwing
        Mockito.when(Person.findById(12L)).thenThrow(new WebApplicationException());
        Assertions.assertThrows(WebApplicationException.class, () -> Person.findById(12L));

        // We can even mock your custom methods
        Mockito.when(Person.findOrdered()).thenReturn(Collections.emptyList());
        Assertions.assertTrue(Person.findOrdered().isEmpty());

        // Mocking a void method
        Person.voidMethod();

        // Make it throw
        PanacheMock.doThrow(new RuntimeException("Stef2")).when(Person.class).voidMethod();
        try {
            Person.voidMethod();
            Assertions.fail();
        } catch (RuntimeException x) {
            Assertions.assertEquals("Stef2", x.getMessage());
        }

        // Back to doNothing
        PanacheMock.doNothing().when(Person.class).voidMethod();
        Person.voidMethod();

        // Make it call the real method
        PanacheMock.doCallRealMethod().when(Person.class).voidMethod();
        try {
            Person.voidMethod();
            Assertions.fail();
        } catch (RuntimeException x) {
            Assertions.assertEquals("void", x.getMessage());
        }

        PanacheMock.verify(Person.class).findOrdered();
        PanacheMock.verify(Person.class, Mockito.atLeast(4)).voidMethod();
        PanacheMock.verify(Person.class, Mockito.atLeastOnce()).findById(Mockito.any());
        PanacheMock.verifyNoMoreInteractions(Person.class);
    }
}
----
<1> Be sure to call your `verify` and `do*` methods on `PanacheMock` rather than `Mockito`, otherwise you won't know
what mock object to pass.

==== Mocking `EntityManager`, `Session` and entity instance methods

If you need to mock entity instance methods, such as `persist()`, you can do it by mocking the Hibernate ORM `Session` object:

[source,java]
----
@QuarkusTest
public class PanacheMockingTest {

    @InjectMock
    Session session;

    @BeforeEach
    public void setup() {
        Query mockQuery = Mockito.mock(Query.class);
        Mockito.doNothing().when(session).persist(Mockito.any());
        Mockito.when(session.createQuery(Mockito.anyString())).thenReturn(mockQuery);
        Mockito.when(mockQuery.getSingleResult()).thenReturn(0L);
    }

    @Test
    public void testPanacheMocking() {
        Person p = new Person();
        // mocked via EntityManager mocking
        p.persist();
        Assertions.assertNull(p.id);

        Mockito.verify(session, Mockito.times(1)).persist(Mockito.any());
    }
}
----

=== Using the repository pattern

If you are using the repository pattern, you can use Mockito directly, using the `quarkus-junit5-mockito` module,
which makes mocking beans much easier:

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5-mockito</artifactId>
    <scope>test</scope>
</dependency>
----

Given this simple entity:

[source,java]
----
@Entity
public
class Person {

    @Id
    @GeneratedValue
    public Long id;

    public String name;
}
----

And this repository:

[source,java]
----
@ApplicationScoped
public class PersonRepository implements PanacheRepository<Person> {
    public List<Person> findOrdered() {
        return find("ORDER BY name").list();
    }
}
----

You can write your mocking test like this:

[source,java]
----
@QuarkusTest
public class PanacheFunctionalityTest {
    @InjectMock
    PersonRepository personRepository;

    @Test
    public void testPanacheRepositoryMocking() throws Throwable {
        // Mocked classes always return a default value
        Assertions.assertEquals(0, personRepository.count());

        // Now let's specify the return value
        Mockito.when(personRepository.count()).thenReturn(23L);
        Assertions.assertEquals(23, personRepository.count());

        // Now let's change the return value
        Mockito.when(personRepository.count()).thenReturn(42L);
        Assertions.assertEquals(42, personRepository.count());

        // Now let's call the original method
        Mockito.when(personRepository.count()).thenCallRealMethod();
        Assertions.assertEquals(0, personRepository.count());

        // Check that we called it 4 times
        Mockito.verify(personRepository, Mockito.times(4)).count();

        // Mock only with specific parameters
        Person p = new Person();
        Mockito.when(personRepository.findById(12L)).thenReturn(p);
        Assertions.assertSame(p, personRepository.findById(12L));
        Assertions.assertNull(personRepository.findById(42L));

        // Mock throwing
        Mockito.when(personRepository.findById(12L)).thenThrow(new WebApplicationException());
        Assertions.assertThrows(WebApplicationException.class, () -> personRepository.findById(12L));

        // We can even mock your custom methods
        Mockito.when(personRepository.findOrdered()).thenReturn(Collections.emptyList());
        Assertions.assertTrue(personRepository.findOrdered().isEmpty());

        Mockito.verify(personRepository).findOrdered();
        Mockito.verify(personRepository,
Mockito.atLeastOnce()).findById(Mockito.any());
        Mockito.verifyNoMoreInteractions(personRepository);
    }
}
----

== How and why we simplify Hibernate ORM mappings

When it comes to writing Hibernate ORM entities, there are a number of annoying things that users have grown used to
reluctantly dealing with, such as:

- Duplicating ID logic: most entities need an ID, and most people don't care how it's set, because it's not really
relevant to your model.
- Traditional EE patterns advise splitting the entity definition (the model) from the operations you can do on it
(DAOs, Repositories), but really that requires a split between the state and its operations, even though
we would never do something like that for regular objects in object-oriented architecture, where state and methods
are in the same class. Moreover, this requires two classes per entity, and requires injection of the DAO or Repository
where you need to do entity operations, which breaks your edit flow and requires you to get out of the code you're
writing to set up an injection point before coming back to use it.
- Hibernate queries are super powerful, but overly verbose for common operations, requiring you to write queries even
when you don't need all the parts.
- Hibernate is very general-purpose, but does not make it trivial to do the trivial operations that make up 90% of our
model usage.

With Panache, we took an opinionated approach to tackle all these problems:

- Make your entities extend `PanacheEntity`: it has an ID field that is auto-generated. If you require
a custom ID strategy, you can extend `PanacheEntityBase` instead and handle the ID yourself.
- Use public fields. Get rid of dumb getters and setters. Hibernate ORM without Panache also doesn't require you to use getters and setters,
but Panache will additionally generate all getters and setters that are missing, and rewrite every access to these fields to use the accessor methods.
This way you can still
write _useful_ accessors when you need them, which will be used even though your entity users still use field accesses. This implies that from the Hibernate perspective you're using accessors via getters and setters, even while it looks like field access.
- With the active record pattern: put all your entity logic in static methods in your entity class and don't create DAOs.
Your entity superclass comes with lots of super useful static methods, and you can add your own in your entity class.
Users can just start using your entity `Person` by typing `Person.` and getting completion for all the operations in a single place.
- Don't write parts of the query that you don't need: write `Person.find("order by name")` or
`Person.find("name = ?1 and status = ?2", "stef", Status.Alive)` or even better
`Person.find("name", "stef")`.

That's all there is to it: with Panache, Hibernate ORM has never looked so trim and neat.

== Defining entities in external projects or jars

Hibernate ORM with Panache relies on compile-time bytecode enhancements to your entities.

It attempts to identify archives with Panache entities (and consumers of Panache entities)
by the presence of the marker file `META-INF/panache-archive.marker`. Panache includes an
annotation processor that will automatically create this file in archives that depend on
Panache (even indirectly). If you have disabled annotation processors, you may need to create
this file manually in some cases.

WARNING: If you include the jpa-modelgen annotation processor, this will exclude the Panache
annotation processor by default.
If you do this, you should either create the marker file
yourself, or add `quarkus-panache-common` as well, as shown below:

[source,xml]
----
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>${compiler-plugin.version}</version>
    <configuration>
        <annotationProcessorPaths>
            <path>
                <groupId>org.hibernate</groupId>
                <artifactId>hibernate-jpamodelgen</artifactId>
                <version>${hibernate.version}</version>
            </path>
            <path>
                <groupId>io.quarkus</groupId>
                <artifactId>quarkus-panache-common</artifactId>
                <version>${quarkus.platform.version}</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>
----
diff --git a/_versions/2.7/guides/hibernate-orm.adoc b/_versions/2.7/guides/hibernate-orm.adoc
deleted file mode 100644
index 4cfd7ca77f0..00000000000
--- a/_versions/2.7/guides/hibernate-orm.adoc
+++ /dev/null
@@ -1,1098 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Using Hibernate ORM and JPA

include::./attributes.adoc[]
:config-file: application.properties
:orm-doc-url-prefix: https://docs.jboss.org/hibernate/orm/5.6/userguide/html_single/Hibernate_User_Guide.html

Hibernate ORM is the de facto standard JPA implementation and offers you the full breadth of an Object Relational Mapper.
It works beautifully in Quarkus.

== Solution

We recommend that you follow the instructions in the next sections and create the application step by step.
However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `hibernate-orm-quickstart` {quickstarts-tree-url}/hibernate-orm-quickstart[directory].

== Setting up and configuring Hibernate ORM

When using Hibernate ORM in Quarkus, you don't need to have a `persistence.xml` resource to configure it.

Using such a classic configuration file is an option, but unnecessary unless you have specific advanced needs;
so we'll see first how Hibernate ORM can be configured without a `persistence.xml` resource.
In Quarkus, you only need to:

* add your configuration settings in `{config-file}`
* annotate your entities with `@Entity` and any other mapping annotations as usual

Other configuration needs have been automated: Quarkus will make some opinionated choices and educated guesses.

Add the following dependencies to your project:

* the Hibernate ORM extension: `io.quarkus:quarkus-hibernate-orm`
* your JDBC driver extension; the following options are available:
** `quarkus-jdbc-db2` for link:https://www.ibm.com/products/db2-database[IBM DB2]
** `quarkus-jdbc-derby` for link:https://db.apache.org/derby/[Apache Derby]
** `quarkus-jdbc-h2` for link:https://www.h2database.com/html/main.html[H2]
** `quarkus-jdbc-mariadb` for link:https://mariadb.com/[MariaDB]
** `quarkus-jdbc-mssql` for link:https://www.microsoft.com/en-gb/sql-server/[Microsoft SQL Server]
** `quarkus-jdbc-mysql` for link:https://www.mysql.com/[MySQL]
** `quarkus-jdbc-oracle` for link:https://www.oracle.com/database/[Oracle Database]
** `quarkus-jdbc-postgresql` for link:https://www.postgresql.org/[PostgreSQL]

For instance:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<!-- Hibernate ORM specific dependencies -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-orm</artifactId>
</dependency>

<!-- JDBC driver dependencies -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
// Hibernate ORM specific dependencies
implementation("io.quarkus:quarkus-hibernate-orm")

// JDBC driver dependencies
implementation("io.quarkus:quarkus-jdbc-postgresql")
----

Annotate your persistent objects with `@Entity`,
then add the relevant configuration properties in `{config-file}`.
[source,properties]
.Example `{config-file}`
----
# datasource configuration
quarkus.datasource.db-kind = postgresql
quarkus.datasource.username = hibernate
quarkus.datasource.password = hibernate
quarkus.datasource.jdbc.url = jdbc:postgresql://localhost:5432/hibernate_db

# drop and create the database at startup (use `update` to only update the schema)
quarkus.hibernate-orm.database.generation=drop-and-create
----

Note that these configuration properties are not the same ones as in your typical Hibernate ORM configuration file.
They will often map to Hibernate ORM configuration properties but could have different names and don't necessarily map 1:1 to each other.

Also, Quarkus will set many Hibernate ORM configuration settings automatically, and will often use more modern defaults.

Please see the <<hibernate-configuration-properties,Hibernate ORM configuration properties>> section below for the list of properties you can set in `{config-file}`.

An `EntityManagerFactory` will be created based on the Quarkus `datasource` configuration as long as the Hibernate ORM extension is listed among your project dependencies.

The dialect will be selected based on the JDBC driver - unless you set one explicitly.

You can then happily inject your `EntityManager`:

[source,java]
.Example application bean using Hibernate
----
@ApplicationScoped
public class SantaClausService {
    @Inject
    EntityManager em; <1>

    @Transactional <2>
    public void createGift(String giftDescription) {
        Gift gift = new Gift();
        gift.setName(giftDescription);
        em.persist(gift);
    }
}
----

<1> Inject your entity manager and have fun
<2> Mark your CDI bean method as `@Transactional` and the `EntityManager` will enlist and flush at commit.
[source,java]
.Example Entity
----
@Entity
public class Gift {
    private Long id;
    private String name;

    @Id
    @SequenceGenerator(name = "giftSeq", sequenceName = "gift_id_seq", allocationSize = 1, initialValue = 1)
    @GeneratedValue(generator = "giftSeq")
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
----

To load SQL statements when Hibernate ORM starts, add an `import.sql` file to the root of your resources directory.
This script can contain any SQL DML statements.
Make sure to terminate each statement with a semicolon.

This is useful to have a data set ready for your tests or demos.

WARNING: Make sure to wrap methods modifying your database (e.g. `entity.persist()`) within a transaction. Marking a
CDI bean method `@Transactional` will do that for you and make that method a transaction boundary. We recommend doing
so at your application entry point boundaries like your REST endpoint controllers.

[[hibernate-configuration-properties]]
=== Hibernate ORM configuration properties

There are various optional properties useful to refine your `EntityManagerFactory` or to guide Quarkus's guesses.

There are no required properties, as long as a default datasource is configured.

When no property is set, Quarkus can typically infer everything it needs to set up Hibernate ORM
and will have it use the default datasource.

The configuration properties listed here allow you to override such defaults, and to customize and tune various aspects.

include::{generated-dir}/config/quarkus-hibernate-orm.adoc[opts=optional, leveloffset=+2]

[NOTE]
--
Do not mix `persistence.xml` and `quarkus.hibernate-orm.*` properties in `{config-file}`.
Quarkus will raise an exception.
Make up your mind on which approach you want to use.
--- - -[TIP] -==== -Want to start a PostgreSQL server on the side with Docker? - -[source,bash] ----- -docker run --rm=true --name postgres-quarkus-hibernate -e POSTGRES_USER=hibernate \ - -e POSTGRES_PASSWORD=hibernate -e POSTGRES_DB=hibernate_db \ - -p 5432:5432 postgres:14.1 ----- - -This will start a non-durable empty database: ideal for a quick experiment! -==== - -[[multiple-persistence-units]] -=== Multiple persistence units - -==== Setting up multiple persistence units - -It is possible to define multiple persistence units using the Quarkus configuration properties. - -The properties at the root of the `quarkus.hibernate-orm.` namespace define the default persistence unit. -For instance, the following snippet defines a default datasource and a default persistence unit: - -[source,properties] ----- -quarkus.datasource.db-kind=h2 -quarkus.datasource.jdbc.url=jdbc:h2:mem:default;DB_CLOSE_DELAY=-1 - -quarkus.hibernate-orm.dialect=org.hibernate.dialect.H2Dialect -quarkus.hibernate-orm.database.generation=drop-and-create ----- - -Using a map based approach, it is possible to define named persistence units: - -[source,properties] ----- -quarkus.datasource."users".db-kind=h2 <1> -quarkus.datasource."users".jdbc.url=jdbc:h2:mem:users;DB_CLOSE_DELAY=-1 - -quarkus.datasource."inventory".db-kind=h2 <2> -quarkus.datasource."inventory".jdbc.url=jdbc:h2:mem:inventory;DB_CLOSE_DELAY=-1 - -quarkus.hibernate-orm."users".database.generation=drop-and-create <3> -quarkus.hibernate-orm."users".datasource=users <4> -quarkus.hibernate-orm."users".packages=org.acme.model.user <5> - -quarkus.hibernate-orm."inventory".database.generation=drop-and-create <6> -quarkus.hibernate-orm."inventory".datasource=inventory -quarkus.hibernate-orm."inventory".packages=org.acme.model.inventory ----- -<1> Define a datasource named `users`. -<2> Define a datasource named `inventory`. -<3> Define a persistence unit called `users`. -<4> Define the datasource used by the persistence unit. 
<5> This configuration property is important but we will discuss it a bit later.
<6> Define a persistence unit called `inventory` pointing to the `inventory` datasource.

[NOTE]
====
You can mix the default datasource and named datasources or only have one or the other.
====

[NOTE]
====
The default persistence unit points to the default datasource by default.
For named persistence units, the `datasource` property is mandatory.
You can point your persistence unit to the default datasource by setting it to `<default>`
(which is the internal name of the default datasource).

It is perfectly valid to have several persistence units pointing to the same datasource.
====

[[multiple-persistence-units-attaching-model-classes]]
==== Attaching model classes to persistence units

There are two ways to attach model classes to persistence units, and they should not be mixed:

* Via the `packages` configuration property;
* Via the `@io.quarkus.hibernate.orm.PersistenceUnit` package-level annotation.

If both are mixed, the annotations are ignored and only the `packages` configuration properties are taken into account.

Using the `packages` configuration property is simple:

[source,properties]
----
quarkus.hibernate-orm.database.generation=drop-and-create
quarkus.hibernate-orm.packages=org.acme.model.defaultpu

quarkus.hibernate-orm."users".database.generation=drop-and-create
quarkus.hibernate-orm."users".datasource=users
quarkus.hibernate-orm."users".packages=org.acme.model.user
----

This configuration snippet will create two persistence units:

* The default one, which will contain all the model classes under the `org.acme.model.defaultpu` package, subpackages included.
* A named `users` persistence unit, which will contain all the model classes under the `org.acme.model.user` package, subpackages included.
You can attach several packages to a persistence unit:

[source,properties]
----
quarkus.hibernate-orm."users".packages=org.acme.model.shared,org.acme.model.user
----

All the model classes under the `org.acme.model.shared` and `org.acme.model.user` packages will be attached to the `users` persistence unit.

Attaching a given model class to several persistence units is also supported.

[NOTE]
====
Model classes need to be added to a given persistence unit consistently.
That means that all dependent model classes of a given entity (mapped superclasses, embeddables...) are required to be attached to the persistence unit.
As we are dealing with the persistence unit at the package level, it should be simple enough.
====

[WARNING]
====
Panache entities can be attached to only one persistence unit.

For entities attached to several persistence units, you cannot use Panache.
You can mix the two approaches though, using Panache entities and traditional entities where multiple persistence units are required.

If you have a use case for that and clever ideas about how to implement it without cluttering the simplified Panache approach,
contact us on the link:{quarkus-mailing-list-index}[quarkus-dev mailing list].
====

The second approach to attach model classes to a persistence unit is to use package-level `@io.quarkus.hibernate.orm.PersistenceUnit` annotations.
Again, the two approaches cannot be mixed.

To obtain a configuration similar to the one above with the `packages` configuration property, create a `package-info.java` file with the following content:

[source,java]
----
@PersistenceUnit("users") <1>
package org.acme.model.user;

import io.quarkus.hibernate.orm.PersistenceUnit;
----
<1> Be careful, use the `@io.quarkus.hibernate.orm.PersistenceUnit` annotation, not the JPA one.
[CAUTION]
====
We only support defining the `@PersistenceUnit` for model classes at the package level;
using the `@PersistenceUnit` annotation at the class level is not supported in this case.
====

Note that, similarly to what we do with the configuration property, we take into account the annotated package but also all its subpackages.

==== CDI integration

If you are familiar with using Hibernate ORM in Quarkus, you have probably already injected the `EntityManager` using CDI:

[source,java]
----
@Inject
EntityManager entityManager;
----

This will inject the `EntityManager` of the default persistence unit.

Injecting the `EntityManager` of a named persistence unit (`users` in our example) is as simple as:

[source,java]
----
@Inject
@PersistenceUnit("users") <1>
EntityManager entityManager;
----
<1> Here again, we use the same `@io.quarkus.hibernate.orm.PersistenceUnit` annotation.

You can inject the `EntityManagerFactory` of a named persistence unit using the exact same mechanism:

[source,java]
----
@Inject
@PersistenceUnit("users")
EntityManagerFactory entityManagerFactory;
----

[[persistence-xml]]
== Setting up and configuring Hibernate ORM with a `persistence.xml`

Alternatively, you can use a `META-INF/persistence.xml` to set up Hibernate ORM.
This is useful:

* for migrating existing code
* when you have relatively complex settings requiring the full flexibility of the configuration
* or if you like it the good old way

[NOTE]
====
If you have a `persistence.xml`, then you cannot use the `quarkus.hibernate-orm.*` properties
and only persistence units defined in `persistence.xml` will be taken into account.
====

Your `pom.xml` dependencies as well as your Java code would be identical to the preceding example.
The only difference is that you would specify your Hibernate ORM configuration in `META-INF/persistence.xml`:

[source,xml]
.Example persistence.xml resource
----
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
             http://xmlns.jcp.org/xml/ns/persistence/persistence_2_2.xsd"
             version="2.2">

    <persistence-unit name="CustomerPU" transaction-type="JTA">

        <description>My customer entities</description>

        <properties>
            <!-- Connection specific -->
            <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQL10Dialect"/>

            <property name="javax.persistence.schema-generation.database.action"
                      value="drop-and-create"/>

            <!-- Optionally log SQL statements -->
            <property name="hibernate.show_sql" value="true"/>
        </properties>

    </persistence-unit>
</persistence>
----

When using the `persistence.xml` configuration you are configuring Hibernate ORM directly,
so in this case the appropriate reference is the link:{orm-doc-url-prefix}#configurations[documentation on hibernate.org].

Please remember these are not the same property names as the ones used in the Quarkus `{config-file}`, nor will
the same defaults be applied.

[[xml-mapping]]
== XML mapping

Hibernate ORM in Quarkus supports XML mapping.
You can add mapping files following
the https://jakarta.ee/specifications/persistence/3.0/jakarta-persistence-spec-3.0.html#a16944[`orm.xml` format (JPA)]
or the http://hibernate.org/dtd/hibernate-mapping-3.0.dtd[`hbm.xml` format (specific to Hibernate ORM, deprecated)]:

* in `application.properties` through the (build-time) link:#quarkus-hibernate-orm_quarkus.hibernate-orm.mapping-files[`quarkus.hibernate-orm.mapping-files`] property.
* in <<persistence-xml,`persistence.xml`>> through the `<mapping-file>` element.

XML mapping files are parsed at build time.

[IMPORTANT]
====
The file `META-INF/orm.xml` will always be included by default, if it exists in the classpath.

If that is not what you want, use `quarkus.hibernate-orm.mapping-files = no-file`.
====

== Defining entities in external projects or jars

Hibernate ORM in Quarkus relies on compile-time bytecode enhancements to your entities. If you define your entities in the
same project where you build your Quarkus application, everything will work fine.

If the entities come from external projects
or jars, you can make sure that your jar is treated like a Quarkus application library by adding an empty `META-INF/beans.xml` file.

This will allow Quarkus to index and enhance your entities as if they were inside the current project.
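For a standard Maven layout of the external project, creating the marker file comes down to the following (paths assumed; adjust to your project layout):

[source,bash]
----
# Create an empty CDI marker file so Quarkus indexes the jar's entities
mkdir -p src/main/resources/META-INF
touch src/main/resources/META-INF/beans.xml
----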
- -[[dev-mode]] -== Hibernate ORM in development mode - -Quarkus development mode is really useful for applications that mix front end or services and database access. - -There are a few common approaches to make the best of it. - -The first choice is to use `quarkus.hibernate-orm.database.generation=drop-and-create` in conjunction with `import.sql`. - -That way for every change to your app and in particular to your entities, the database schema will be properly recreated -and your data fixture (stored in `import.sql`) will be used to repopulate it from scratch. -This is best to perfectly control your environment and works magic with Quarkus live reload mode: -your entity changes or any change to your `import.sql` is immediately picked up and the schema updated without restarting the application! - -[TIP] -==== -By default in `dev` and `test` modes, Hibernate ORM, upon boot, will read and execute the SQL statements in the `/import.sql` file (if present). -You can change the file name by changing the property `quarkus.hibernate-orm.sql-load-script` in `application.properties`. -==== - -The second approach is to use `quarkus.hibernate-orm.database.generation=update`. -This approach is best when you do many entity changes but -still need to work on a copy of the production data -or if you want to reproduce a bug that is based on specific database entries. -`update` is a best effort from Hibernate ORM and will fail in specific situations -including altering your database structure which could lead to data loss. -For example if you change structures which violate a foreign key constraint, Hibernate ORM might have to bail out. -But for development, these limitations are acceptable. - -The third approach is to use `quarkus.hibernate-orm.database.generation=none`. -This approach is best when you are working on a copy of the production data but want to fully control the schema evolution. 
Or if you use a database schema migration tool like xref:flyway.adoc[Flyway] or xref:liquibase.adoc[Liquibase].

With this approach, when making changes to an entity, make sure to adapt the database schema accordingly;
you could also use `validate` to have Hibernate verify the schema matches its expectations.

WARNING: Do not use `quarkus.hibernate-orm.database.generation` values `drop-and-create` or `update` in your production environment.


These approaches become really powerful when combined with Quarkus configuration profiles.
You can define different xref:config.adoc#configuration-profiles[configuration profiles]
to select different behaviors depending on your environment.
This is great because you can define different combinations of Hibernate ORM properties matching the development style you currently need.

[source,properties]
.application.properties
----
%dev.quarkus.hibernate-orm.database.generation = drop-and-create
%dev.quarkus.hibernate-orm.sql-load-script = import-dev.sql

%dev-with-data.quarkus.hibernate-orm.database.generation = update
%dev-with-data.quarkus.hibernate-orm.sql-load-script = no-file

%prod.quarkus.hibernate-orm.database.generation = none
%prod.quarkus.hibernate-orm.sql-load-script = no-file
----

You can start dev mode using a custom profile:

:dev-additional-parameters: -Dquarkus.profile=dev-with-data
include::includes/devtools/dev.adoc[]
:!dev-additional-parameters:

== Hibernate ORM in production mode

Quarkus comes with default profiles (`dev`, `test` and `prod`).
And you can add your own custom profiles to describe various environments (`staging`, `prod-us`, etc.).

The Hibernate ORM Quarkus extension sets some default configurations differently in dev and test modes than in other environments.

* `quarkus.hibernate-orm.sql-load-script` is set to `no-file` for all profiles except the `dev` and `test` ones.

You can override it in your `application.properties` explicitly
(e.g.
`%prod.quarkus.hibernate-orm.sql-load-script = import.sql`)
but we wanted you to avoid overriding your database by accident in prod :)

Speaking of, make sure not to drop your database schema in production!
Add the following in your properties file.

[source,properties]
.application.properties
----
%prod.quarkus.hibernate-orm.database.generation = none
%prod.quarkus.hibernate-orm.sql-load-script = no-file
----

[[flyway]]
== Automatically transitioning to Flyway to Manage Schemas

If you have the xref:flyway.adoc[Flyway extension] installed when running in development mode, Quarkus provides a simple way to turn
your Hibernate ORM auto-generated schema into a Flyway migration file. This is intended to make it easy to move from
the early development phase, where Hibernate can be used to quickly set up the schema, to the production phase, where
Flyway is used to manage schema changes.

To use this feature, simply open the Dev UI when the `quarkus-flyway` extension is installed and click the `Datasources`
link in the Flyway pane. Hit the `Create Initial Migration` button and the following will happen:

- A `db/migration/V1.0.0__\{appname\}.sql` file will be created, containing the SQL Hibernate is running to generate the schema
- `quarkus.flyway.baseline-on-migrate` will be set, telling Flyway to automatically create its baseline tables
- `quarkus.flyway.migrate-at-start` will be set, telling Flyway to automatically apply migrations on application startup
- `%dev.quarkus.flyway.clean-at-start` and `%test.quarkus.flyway.clean-at-start` will be set, to clean the DB after reload in dev/test mode

WARNING: This button is simply a convenience to quickly get you started with Flyway; it is up to you to determine how you want to
manage your database schemas in production. In particular, the `migrate-at-start` setting may not be right for all environments.
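The settings listed above amount to configuration along these lines (a sketch of what the Dev UI writes; the exact values generated for your project may differ):

[source,properties]
.application.properties (hypothetical result of `Create Initial Migration`)
----
# Flyway baselines an existing (Hibernate-generated) schema instead of failing
quarkus.flyway.baseline-on-migrate=true

# Apply pending migrations automatically when the application starts
quarkus.flyway.migrate-at-start=true

# Wipe the database on reload in dev and test modes only
%dev.quarkus.flyway.clean-at-start=true
%test.quarkus.flyway.clean-at-start=true
----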
[[caching]]
== Caching

Applications that frequently read the same entities can see their performance improved when the Hibernate ORM second-level cache is enabled.

=== Caching of entities

To enable second-level cache, mark the entities that you want cached with `@javax.persistence.Cacheable`:

[source,java]
----
@Entity
@Cacheable
public class Country {
    int dialInCode;
    // ...
}
----

When an entity is annotated with `@Cacheable`, all its field values are cached except for collections and relations to other entities.

This means the entity can be loaded without querying the database, but be careful as it implies the loaded entity might not reflect recent changes in the database.

=== Caching of collections and relations

Collections and relations need to be individually annotated to be cached; in this case the Hibernate specific `@org.hibernate.annotations.Cache` should be used, which also requires specifying the `CacheConcurrencyStrategy`:

[source,java]
----
package org.acme;

@Entity
@Cacheable
public class Country {
    // ...

    @OneToMany
    @Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
    List<City> cities;

    // ...
}
----

=== Caching of queries

Queries can also benefit from second-level caching. Cached query results can be returned immediately to the caller, avoiding running the query on the database.

Be careful as this implies the results might not reflect recent changes.

To cache a query, mark it as cacheable on the `Query` instance:

[source,java]
----
Query query = ...
query.setHint("org.hibernate.cacheable", Boolean.TRUE);
----

If you have a `NamedQuery` then you can enable caching directly on its definition, which will usually be on an entity:

[source,java]
----
@Entity
@NamedQuery(name = "Fruits.findAll",
        query = "SELECT f FROM Fruit f ORDER BY f.name",
        hints = @QueryHint(name = "org.hibernate.cacheable", value = "true") )
public class Fruit {
    ...
----

That's all!
Caching technology is already integrated and enabled by default in Quarkus, so it's enough to set which ones are safe to be cached.

=== Tuning of Cache Regions

Caches store the data in separate regions to isolate different portions of data; such regions are assigned a name, which is useful for configuring each region independently, or to monitor their statistics.

By default entities are cached in regions named after their fully qualified name, e.g. `org.acme.Country`.

Collections are cached in regions named after the fully qualified name of their owner entity and collection field name, separated by the `#` character, e.g. `org.acme.Country#cities`.

All cached queries are by default kept in a single region dedicated to them called `default-query-results-region`.

All regions are bounded by size and time by default. The defaults are `10000` max entries, and `100` seconds as maximum idle time.

The size of each region can be customized via the `quarkus.hibernate-orm.cache."<region-name>".memory.object-count` property (replace `<region-name>` with the actual region name).

To set the maximum idle time, provide the duration (see note on duration's format below) via the `quarkus.hibernate-orm.cache."<region-name>".expiration.max-idle` property (replace `<region-name>` with the actual region name).

[NOTE]
====
The double quotes are mandatory if your region name contains a dot. For instance:

[source,properties]
----
quarkus.hibernate-orm.cache."org.acme.MyEntity".memory.object-count=1000
----
====


include::duration-format-note.adoc[]

=== Limitations of Caching

The caching technology provided within Quarkus is currently quite rudimentary and limited.

The team thought it was better to have _some_ caching capability to start with, than having nothing; you can expect a better caching solution to be integrated in future releases, and any help and feedback in this area is very welcome.
[NOTE]
====
These caches are kept locally, so they are not invalidated or updated when changes are made to the persistent store by other applications.

Also, when running multiple copies of the same application (in a cluster, for example on Kubernetes/OpenShift), caches in separate copies of the application aren't synchronized.

For these reasons, enabling caching is only suitable when certain assumptions can be made: we strongly recommend that only entities, collections and queries which never change are cached. Or at most, that when such an entity is indeed mutated and allowed to be read out of date (stale), this has no impact on the expectations of the application.

Following this advice guarantees applications get the best performance out of the second-level cache and yet avoid unexpected behaviour.

On top of immutable data, in certain contexts it might be acceptable to enable caching also on mutable data; this could be a necessary tradeoff on selected
entities which are read frequently and for which some degree of staleness is acceptable; this "acceptable degree of staleness" can be tuned by setting eviction properties.
This is however not recommended and should be done with extreme care, as it might
produce unexpected and unforeseen effects on the data.

Rather than enabling caching on mutable data, ideally a better solution would be to use a clustered cache; however at this time Quarkus doesn't provide any such implementation: feel free to get in touch and make this need known so that the team can take it into account.
====

Finally, the second-level cache can be disabled globally by setting `hibernate.cache.use_second_level_cache` to `false`; this is a setting that needs to be specified in the `persistence.xml` configuration file.

When second-level cache is disabled, all cache annotations are ignored and all queries are run ignoring caches; this is generally useful only to diagnose issues.
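For instance, the property can be set inside your persistence unit like this (a sketch; the persistence-unit name is an assumption, the property name is the standard Hibernate one):

[source,xml]
----
<persistence-unit name="CustomerPU" transaction-type="JTA"> <!-- unit name assumed -->
    <properties>
        <!-- Globally disable the second-level cache for this persistence unit -->
        <property name="hibernate.cache.use_second_level_cache" value="false"/>
    </properties>
</persistence-unit>
----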
[[envers]]
== Hibernate Envers

The Envers extension to Hibernate ORM aims to provide an easy auditing / versioning solution for entity classes.

In Quarkus, Envers has a dedicated Quarkus extension `io.quarkus:quarkus-hibernate-envers`; you just need to add it to your project to start using it.

[source,xml]
.Additional dependency to enable Hibernate Envers
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-envers</artifactId>
</dependency>
----

At this point the extension does not expose additional configuration properties.

For more information about Hibernate Envers, see link:https://hibernate.org/orm/envers/[hibernate.org/orm/envers/].

[[metrics]]
== Metrics

Either xref:micrometer.adoc[Micrometer] or xref:microprofile-metrics.adoc[SmallRye Metrics] is
capable of exposing metrics that Hibernate ORM collects at runtime. To enable exposure of Hibernate metrics
on the `/q/metrics` endpoint, make sure your project depends on a metrics extension and set the configuration property `quarkus.hibernate-orm.metrics.enabled` to `true`.
When using link:microprofile-metrics[SmallRye Metrics], metrics will be available under the `vendor` scope.

== Limitations and other things you should know

Quarkus does not modify the libraries it uses; this rule applies to Hibernate ORM as well: when using
this extension you will mostly have the same experience as using the original library.

But while they share the same code, Quarkus does configure some components automatically and injects custom implementations
for some extension points; this should be transparent and useful, but if you're a Hibernate expert you might want to
know what is being done.

=== Automatic build time enhancement

Hibernate ORM can use build-time enhanced entities; normally this is not mandatory, but it's useful and will have your
applications perform better.
Typically you would need to adapt your build scripts to include the Hibernate Enhancement plugins; in Quarkus this is
not necessary as the enhancement step is integrated in the build and analysis of the Quarkus application.

[WARNING]
====
Due to the usage of enhancement, using the `clone()` method on entities is currently not supported,
as it would also clone some enhancement-specific fields that are specific to the entity.

This limitation might be removed in the future.
====

=== Automatic integration

Transaction Manager integration::
You don't need to set this up, Quarkus automatically injects the reference to the Narayana Transaction Manager.
The dependency is included automatically as a transitive dependency of the Hibernate ORM extension.
All configuration is optional; for more details see xref:transaction.adoc[Using Transactions in Quarkus].

Connection pool::
You don't need to choose one either. Quarkus automatically includes the Agroal connection pool;
configure your datasource as in the above examples and it will set up Hibernate ORM to use Agroal.
More details about this connection pool can be found in xref:datasource.adoc[Quarkus - Datasources].

Second Level Cache::
As explained above in the <<caching,Caching>> section, you don't need to pick an implementation.
A suitable implementation based on technologies from link:https://infinispan.org/[Infinispan] and link:https://github.com/ben-manes/caffeine[Caffeine] is included as a transitive dependency of the Hibernate ORM extension, and automatically integrated during the build.

=== Limitations

XML mapping with duplicate files in the classpath::
<<xml-mapping,XML mapping>> files are expected to have a unique path.
+
In practice, it's only possible to have duplicate XML mapping files in the classpath in very specific scenarios.
For example, if two JARs include a `META-INF/orm.xml` file (with the exact same path, but in different JARs),
then the mapping file path `META-INF/orm.xml` can only be referenced from a `persistence.xml`
**in the same JAR as the `META-INF/orm.xml` file**.

JMX::
Management beans do not work in GraalVM native images;
therefore Hibernate's capability to register statistics and management operations with the JMX bean is disabled when compiling into a native image.
This limitation is likely permanent, as it's not a goal for native images
to implement support for JMX. All such metrics can be accessed in other ways.

JACC Integration::
Hibernate ORM's capability to integrate with JACC is disabled when building GraalVM native images,
as JACC is not available - nor useful - in native mode.

Binding the Session to ThreadLocal context::
It is not possible to use the `ThreadLocalSessionContext` helper of Hibernate ORM, as support for it is not implemented.
Since Quarkus provides out of the box support for CDI, we believe using injection or programmatic CDI lookup to be a better approach.
This feature also didn't integrate well with reactive components and more modern context propagation techniques, making us believe this legacy feature has no future.
If you really need to bind it to a ThreadLocal, it should be trivial to implement in your own code.

JNDI::
The JNDI technology is commonly used in other runtimes to integrate different components.
A common use case is Java Enterprise servers binding the TransactionManager and the Datasource components to a name, and then having Hibernate ORM
configured to look these components up by name.
But in Quarkus this use case doesn't apply, as components are injected directly, making JNDI support an unnecessary legacy.
As a precaution, to avoid unexpected use of JNDI, the whole support for JNDI has been disabled in the Hibernate ORM extension for Quarkus.
This is both a security precaution and an optimisation.
=== Other notable differences

Format of `import.sql`::
When importing an `import.sql` to set up your database, keep in mind that Quarkus reconfigures Hibernate ORM to require a semicolon (';') to terminate each statement.
The default in Hibernate is to have a statement per line, without requiring a terminator other than newline: remember to convert your scripts to use the ';' terminator character if you're reusing existing scripts.
This is useful to allow multi-line statements and human-friendly formatting.

== Simplifying Hibernate ORM with Panache

The xref:hibernate-orm-panache.adoc[Hibernate ORM with Panache] extension facilitates the usage of Hibernate ORM by providing active record style entities (and repositories) and focuses on making your entities trivial and fun to write in Quarkus.

== Configure your datasource

Datasource configuration is extremely simple, but it is covered in a different guide, as technically
it's implemented by the Agroal connection pool extension for Quarkus.

Jump over to xref:datasource.adoc[Quarkus - Datasources] for all details.

[[multitenancy]]
== Multitenancy

"The term multitenancy, in general, is applied to software development to indicate an architecture in which a single running instance of an application simultaneously serves multiple clients (tenants). This is highly common in SaaS solutions. Isolating information (data, customizations, etc.) pertaining to the various tenants is a particular challenge in these systems. This includes the data owned by each tenant stored in the database" (link:{orm-doc-url-prefix}#multitenacy[Hibernate User Guide]).

Quarkus currently supports the link:{orm-doc-url-prefix}#multitenacy-separate-database[separate database] and the link:{orm-doc-url-prefix}#multitenacy-separate-schema[separate schema] approaches.
To see multitenancy in action, you can check out the {quickstarts-tree-url}/hibernate-orm-multi-tenancy-quickstart[hibernate-orm-multi-tenancy-quickstart] quickstart.

=== Writing the application

Let's start by implementing the `/{tenant}` endpoint. As you can see from the source code below, it is just a regular JAX-RS resource:

[source,java]
----
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@ApplicationScoped
@Path("/{tenant}")
public class FruitResource {

    @Inject
    EntityManager entityManager;

    @GET
    @Path("fruits")
    public Fruit[] getFruits() {
        return entityManager.createNamedQuery("Fruits.findAll", Fruit.class)
                .getResultList().toArray(new Fruit[0]);
    }

}
----

In order to resolve the tenant from incoming requests and map it to a specific tenant configuration, you need to create an implementation of the `io.quarkus.hibernate.orm.runtime.tenant.TenantResolver` interface.

[source,java]
----
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;

import io.quarkus.hibernate.orm.PersistenceUnitExtension;
import io.quarkus.hibernate.orm.runtime.tenant.TenantResolver;
import io.vertx.ext.web.RoutingContext;

@PersistenceUnitExtension // <1>
@RequestScoped // <2>
public class CustomTenantResolver implements TenantResolver {

    @Inject
    RoutingContext context;

    @Override
    public String getDefaultTenantId() {
        return "base";
    }

    @Override
    public String resolveTenantId() {
        String path = context.request().path();
        String[] parts = path.split("/");

        if (parts.length == 0) {
            // resolve to default tenant config
            return getDefaultTenantId();
        }

        return parts[1];
    }

}
----
<1> Annotate the TenantResolver implementation with the `@PersistenceUnitExtension` qualifier
to tell Quarkus it should be used in the default persistence unit.
+
For <<multiple-persistence-units,named persistence units>>, use `@PersistenceUnitExtension("nameOfYourPU")`.
<2> The bean is made `@RequestScoped` as the tenant resolution depends on the incoming request.

From the implementation above, tenants are resolved from the request path; if no tenant can be inferred, the default tenant identifier is returned.

[NOTE]
====
If you also use xref:security-openid-connect-multitenancy.adoc[OIDC multitenancy], and both the OIDC and Hibernate ORM tenant IDs are the same and must be extracted from the Vert.x `RoutingContext`, then you can pass the tenant id from the OIDC Tenant Resolver to the Hibernate ORM Tenant Resolver as a `RoutingContext` attribute, for example:

[source,java]
----
import io.quarkus.hibernate.orm.runtime.tenant.TenantResolver;
import io.vertx.ext.web.RoutingContext;

@PersistenceUnitExtension
@RequestScoped
public class CustomTenantResolver implements TenantResolver {

    @Inject
    RoutingContext context;
    ...
    @Override
    public String resolveTenantId() {
        // OIDC TenantResolver has already calculated the tenant id and saved it as a RoutingContext `tenantId` attribute:
        return context.get("tenantId");
    }
}
----
====

=== Configuring the application

In general it is not possible to use the Hibernate ORM database generation feature with a multitenancy setup.
Therefore you have to disable it, and you need to make sure the tables are created per schema.
The following setup uses the xref:flyway.adoc[Flyway] extension to achieve this goal.

==== SCHEMA approach

The same data source is used for all tenants, and a schema has to be created for every tenant inside that data source.

CAUTION: Some databases like MariaDB/MySQL do not support database schemas. In these cases you have to use the DATABASE approach below.
[source,properties]
----
# Disable generation
quarkus.hibernate-orm.database.generation=none

# Enable SCHEMA approach and use default datasource
quarkus.hibernate-orm.multitenant=SCHEMA
# You could use a non-default datasource by using the following setting
# quarkus.hibernate-orm.multitenant-schema-datasource=other

# The default data source used for all tenant schemas
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=quarkus_test
quarkus.datasource.password=quarkus_test
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/quarkus_test

# Enable Flyway configuration to create schemas
quarkus.flyway.schemas=base,mycompany
quarkus.flyway.locations=classpath:schema
quarkus.flyway.migrate-at-start=true
----

Here is an example of the Flyway SQL (`V1.0.0__create_fruits.sql`) to be created in the configured folder `src/main/resources/schema`:

[source,sql]
----
CREATE SEQUENCE base.known_fruits_id_seq;
SELECT setval('base."known_fruits_id_seq"', 3);
CREATE TABLE base.known_fruits
(
    id   INT,
    name VARCHAR(40)
);
INSERT INTO base.known_fruits(id, name) VALUES (1, 'Cherry');
INSERT INTO base.known_fruits(id, name) VALUES (2, 'Apple');
INSERT INTO base.known_fruits(id, name) VALUES (3, 'Banana');

CREATE SEQUENCE mycompany.known_fruits_id_seq;
SELECT setval('mycompany."known_fruits_id_seq"', 3);
CREATE TABLE mycompany.known_fruits
(
    id   INT,
    name VARCHAR(40)
);
INSERT INTO mycompany.known_fruits(id, name) VALUES (1, 'Avocado');
INSERT INTO mycompany.known_fruits(id, name) VALUES (2, 'Apricots');
INSERT INTO mycompany.known_fruits(id, name) VALUES (3, 'Blackberries');
----

==== DATABASE approach

For every tenant you need to create a named data source with the same identifier that is returned by the `TenantResolver`.
[source,properties]
----
# Disable generation
quarkus.hibernate-orm.database.generation=none

# Enable DATABASE approach
quarkus.hibernate-orm.multitenant=DATABASE

# Default tenant 'base'
quarkus.datasource.base.db-kind=postgresql
quarkus.datasource.base.username=quarkus_test
quarkus.datasource.base.password=quarkus_test
quarkus.datasource.base.jdbc.url=jdbc:postgresql://localhost:5432/quarkus_test

# Tenant 'mycompany'
quarkus.datasource.mycompany.db-kind=postgresql
quarkus.datasource.mycompany.username=mycompany
quarkus.datasource.mycompany.password=mycompany
quarkus.datasource.mycompany.jdbc.url=jdbc:postgresql://localhost:5433/mycompany

# Flyway configuration for the default datasource
quarkus.flyway.locations=classpath:database/default
quarkus.flyway.migrate-at-start=true

# Flyway configuration for the mycompany datasource
quarkus.flyway.mycompany.locations=classpath:database/mycompany
quarkus.flyway.mycompany.migrate-at-start=true
----

The following are examples of the Flyway SQL files to be created in the configured folder `src/main/resources/database`.
Default schema (`src/main/resources/database/default/V1.0.0__create_fruits.sql`):

[source,sql]
----
CREATE SEQUENCE known_fruits_id_seq;
SELECT setval('known_fruits_id_seq', 3);
CREATE TABLE known_fruits
(
    id   INT,
    name VARCHAR(40)
);
INSERT INTO known_fruits(id, name) VALUES (1, 'Cherry');
INSERT INTO known_fruits(id, name) VALUES (2, 'Apple');
INSERT INTO known_fruits(id, name) VALUES (3, 'Banana');
----

Mycompany schema (`src/main/resources/database/mycompany/V1.0.0__create_fruits.sql`):

[source,sql]
----
CREATE SEQUENCE known_fruits_id_seq;
SELECT setval('known_fruits_id_seq', 3);
CREATE TABLE known_fruits
(
    id   INT,
    name VARCHAR(40)
);
INSERT INTO known_fruits(id, name) VALUES (1, 'Avocado');
INSERT INTO known_fruits(id, name) VALUES (2, 'Apricots');
INSERT INTO known_fruits(id, name) VALUES (3, 'Blackberries');
----

=== Programmatically Resolving Tenants Connections

If you need a more dynamic configuration for the different tenants you want to support, and don't want to end up with multiple entries in your configuration file,
you can use the `io.quarkus.hibernate.orm.runtime.tenant.TenantConnectionResolver` interface to implement your own logic for retrieving a connection.
Creating an application-scoped bean that implements this interface,
and annotating it with `@PersistenceUnitExtension` (or `@PersistenceUnitExtension("nameOfYourPU")` for a <<multiple-persistence-units,named persistence unit>>),
will replace the current Quarkus default implementation, `io.quarkus.hibernate.orm.runtime.tenant.DataSourceTenantConnectionResolver`.
A custom connection resolver allows you, for example, to read tenant information from a database and create a connection per tenant at runtime based on it.
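The lookup logic such a custom resolver typically needs can be sketched without any framework code. The following is a minimal, plain-Java illustration of caching per-tenant connection configuration so the expensive lookup (e.g. querying a tenant registry) happens at most once per tenant; `TenantConnectionCache` and `ConnectionFactory` are hypothetical names for this sketch, not part of the Quarkus API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: maps tenant identifiers to lazily created,
// cached "connection factories". In a real TenantConnectionResolver the
// loader would read tenant data from a registry and build a datasource.
class TenantConnectionCache {

    /** Hypothetical stand-in for whatever produces per-tenant connections. */
    interface ConnectionFactory {
        String jdbcUrl();
    }

    private final Map<String, ConnectionFactory> cache = new ConcurrentHashMap<>();
    private final Function<String, ConnectionFactory> loader;

    TenantConnectionCache(Function<String, ConnectionFactory> loader) {
        this.loader = loader;
    }

    ConnectionFactory resolve(String tenantId) {
        // computeIfAbsent ensures the loader runs at most once per tenant,
        // even under concurrent resolution requests
        return cache.computeIfAbsent(tenantId, loader);
    }

    int cachedTenants() {
        return cache.size();
    }
}
```

Repeated resolutions of the same tenant then reuse the cached factory instead of hitting the registry again.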
[[interceptors]]
== Interceptors

You can assign an link:{orm-doc-url-prefix}#events-interceptors[`org.hibernate.Interceptor`]
to your `SessionFactory` by simply defining a CDI bean with the appropriate qualifier:

[source,java]
----
@PersistenceUnitExtension // <1>
public static class MyInterceptor extends EmptyInterceptor { // <2>
    @Override
    public boolean onLoad(Object entity, Serializable id, Object[] state, // <3>
            String[] propertyNames, Type[] types) {
        // ...
        return false;
    }
}
----
<1> Annotate the interceptor implementation with the `@PersistenceUnitExtension` qualifier
to tell Quarkus it should be used in the default persistence unit.
+
For <<multiple-persistence-units,named persistence units>>, use `@PersistenceUnitExtension("nameOfYourPU")`.
<2> Either extend `org.hibernate.EmptyInterceptor` or implement `org.hibernate.Interceptor` directly.
<3> Implement methods as necessary.

[TIP]
====
By default, interceptor beans annotated with `@PersistenceUnitExtension` are application-scoped,
which means only one interceptor instance will be created per application
and reused across all entity managers.
For this reason, the interceptor implementation must be thread-safe.

In order to create one interceptor instance per entity manager instead,
annotate your bean with `@Dependent`.
In that case, the interceptor implementation doesn't need to be thread-safe.
====

[NOTE]
====
Due to a limitation in Hibernate ORM itself,
`@PreDestroy` methods on `@Dependent`-scoped interceptors will never get called.
====
diff --git a/_versions/2.7/guides/hibernate-reactive-panache.adoc b/_versions/2.7/guides/hibernate-reactive-panache.adoc
deleted file mode 100644
index 5020b5824cb..00000000000
--- a/_versions/2.7/guides/hibernate-reactive-panache.adoc
+++ /dev/null
@@ -1,1066 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Simplified Hibernate Reactive with Panache

include::./attributes.adoc[]
:config-file: application.properties

link:http://hibernate.org/reactive/[Hibernate Reactive] is the only reactive JPA implementation and offers you the full
breadth of an Object Relational Mapper, allowing you to access your database over reactive drivers.
It makes complex mappings possible, but it does not make simple and common mappings trivial.
Hibernate Reactive with Panache focuses on making your entities trivial and fun to write in Quarkus.

== First: an example

What we're doing in Panache is allowing you to write your Hibernate Reactive entities like this:

[source,java]
----
@Entity
public class Person extends PanacheEntity {
    public String name;
    public LocalDate birth;
    public Status status;

    public static Uni<Person> findByName(String name){
        return find("name", name).firstResult();
    }

    public static Uni<List<Person>> findAlive(){
        return list("status", Status.Alive);
    }

    public static Uni<Long> deleteStefs(){
        return delete("name", "Stef");
    }
}
----

Have you noticed how much more compact and readable the code is?
Does this look interesting? Read on!

NOTE: the `list()` method might be surprising at first. It takes fragments of HQL (JP-QL) queries and contextualizes the rest. That makes for very concise yet readable code.

NOTE: what was described above is essentially the link:https://www.martinfowler.com/eaaCatalog/activeRecord.html[active record pattern], sometimes just called the entity pattern.
Hibernate Reactive with Panache also allows for the use of the more classical link:https://martinfowler.com/eaaCatalog/repository.html[repository pattern] via `PanacheRepository`.

== Solution

We recommend that you follow the instructions in the next sections and create the application step by step.
However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `hibernate-reactive-panache-quickstart` {quickstarts-tree-url}/hibernate-reactive-panache-quickstart[directory].


== Setting up and configuring Hibernate Reactive with Panache

To get started:

* add your settings in `{config-file}`
* annotate your entities with `@Entity`
* make your entities extend `PanacheEntity` (optional if you are using the repository pattern)

Follow the xref:hibernate-orm.adoc#setting-up-and-configuring-hibernate-orm[Hibernate set-up guide for all configuration].

In your `pom.xml`, add the following dependencies:

* the Hibernate Reactive with Panache extension
* your reactive driver extension (`quarkus-reactive-pg-client`, `quarkus-reactive-mysql-client`, `quarkus-reactive-db2-client`, ...)

For instance:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<!-- Hibernate Reactive dependency -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-reactive-panache</artifactId>
</dependency>

<!-- Reactive SQL client for PostgreSQL -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-reactive-pg-client</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
// Hibernate Reactive dependency
implementation("io.quarkus:quarkus-hibernate-reactive-panache")

// Reactive SQL client for PostgreSQL
implementation("io.quarkus:quarkus-reactive-pg-client")
----

Then add the relevant configuration properties in `{config-file}`.
[source,properties]
----
# configure your datasource
quarkus.datasource.db-kind = postgresql
quarkus.datasource.username = sarah
quarkus.datasource.password = connor
quarkus.datasource.reactive.url = vertx-reactive:postgresql://localhost:5432/mydatabase

# drop and create the database at startup (use `update` to only update the schema)
quarkus.hibernate-orm.database.generation = drop-and-create
----

== Solution 1: using the active record pattern

=== Defining your entity

To define a Panache entity, simply extend `PanacheEntity`, annotate it with `@Entity` and add your
columns as public fields:

[source,java]
----
@Entity
public class Person extends PanacheEntity {
    public String name;
    public LocalDate birth;
    public Status status;
}
----

You can put all your JPA column annotations on the public fields. If you need a field to not be persisted, use the
`@Transient` annotation on it. If you need to write accessors, you can:

[source,java]
----
@Entity
public class Person extends PanacheEntity {
    public String name;
    public LocalDate birth;
    public Status status;

    // return name as uppercase in the model
    public String getName(){
        return name.toUpperCase();
    }

    // store all names in lowercase in the DB
    public void setName(String name){
        this.name = name.toLowerCase();
    }
}
----

And thanks to our field access rewrite, when your users read `person.name` they will actually call your `getName()` accessor,
and similarly for field writes and the setter.
This allows for proper encapsulation at runtime, as all field accesses will be replaced by the corresponding getter/setter calls.
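The effect of the accessor pair above can be demonstrated in plain Java, without Panache or the bytecode rewrite. This is a minimal sketch: `PersonAccessors` and `storedValue()` are illustrative names introduced here, and the private field stands in for what the entity would actually persist.

```java
// Plain-Java sketch of the normalization performed by the accessors above:
// the setter lowercases what gets "stored", the getter uppercases what callers see.
class PersonAccessors {
    private String name; // stands in for the persisted entity column

    // return name as uppercase in the model
    public String getName() {
        return name.toUpperCase();
    }

    // store all names in lowercase in the "DB"
    public void setName(String name) {
        this.name = name.toLowerCase();
    }

    // what would actually end up in the database
    public String storedValue() {
        return name;
    }
}
```

With Panache's rewrite, writing `person.name = "Stef"` routes through the setter and reading `person.name` through the getter, so callers observe the uppercase form while the lowercase form is persisted.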
=== Most useful operations

Once you have written your entity, here are the most common operations you will be able to perform:

[source,java]
----
// creating a person
Person person = new Person();
person.name = "Stef";
person.birth = LocalDate.of(1910, Month.FEBRUARY, 1);
person.status = Status.Alive;

// persist it
Uni<Person> persistOperation = person.persist();

// note that once persisted, you don't need to explicitly save your entity: all
// modifications are automatically persisted on transaction commit.

// check if it's persistent
if(person.isPersistent()){
    // delete it
    Uni<Void> deleteOperation = person.delete();
}

// getting a list of all Person entities
Uni<List<Person>> allPersons = Person.listAll();

// finding a specific person by ID
Uni<Person> personById = Person.findById(23L);

// finding all living persons
Uni<List<Person>> livingPersons = Person.list("status", Status.Alive);

// counting all persons
Uni<Long> countAll = Person.count();

// counting all living persons
Uni<Long> countAlive = Person.count("status", Status.Alive);

// delete all living persons
Uni<Long> deleteAliveOperation = Person.delete("status", Status.Alive);

// delete all persons
Uni<Long> deleteAllOperation = Person.deleteAll();

// delete by id
Uni<Boolean> deleteByIdOperation = Person.deleteById(23L);

// set the name of all living persons to 'Mortal'
Uni<Integer> updateOperation = Person.update("name = 'Mortal' where status = ?1", Status.Alive);
----

=== Adding entity methods

Add custom queries on your entities inside the entities themselves.
That way, you and your co-workers can find them easily, and queries are co-located with the objects they operate on.
Adding them as static methods in your entity class is the Panache Active Record way.
[source,java]
----
@Entity
public class Person extends PanacheEntity {
    public String name;
    public LocalDate birth;
    public Status status;

    public static Uni<Person> findByName(String name){
        return find("name", name).firstResult();
    }

    public static Uni<List<Person>> findAlive(){
        return list("status", Status.Alive);
    }

    public static Uni<Long> deleteStefs(){
        return delete("name", "Stef");
    }
}
----

== Solution 2: using the repository pattern


=== Defining your entity

When using the repository pattern, you can define your entities as regular JPA entities.

[source,java]
----
@Entity
public class Person {
    @Id @GeneratedValue private Long id;
    private String name;
    private LocalDate birth;
    private Status status;

    public Long getId(){
        return id;
    }
    public void setId(Long id){
        this.id = id;
    }
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public LocalDate getBirth() {
        return birth;
    }
    public void setBirth(LocalDate birth) {
        this.birth = birth;
    }
    public Status getStatus() {
        return status;
    }
    public void setStatus(Status status) {
        this.status = status;
    }
}
----

TIP: If you don't want to bother defining getters/setters for your entities, you can make them extend `PanacheEntityBase` and
Quarkus will generate them for you. You can even extend `PanacheEntity` and take advantage of the default ID it provides.
=== Defining your repository

When using repositories, you get the exact same convenient methods as with the active record pattern, injected into your repository,
by making it implement `PanacheRepository`:

[source,java]
----
@ApplicationScoped
public class PersonRepository implements PanacheRepository<Person> {

   // put your custom logic here as instance methods

   public Uni<Person> findByName(String name){
       return find("name", name).firstResult();
   }

   public Uni<List<Person>> findAlive(){
       return list("status", Status.Alive);
   }

   public Uni<Long> deleteStefs(){
       return delete("name", "Stef");
   }
}
----

All the operations that are defined on `PanacheEntityBase` are available on your repository, so using it
is exactly the same as using the active record pattern, except you need to inject it:

[source,java]
----
@Inject
PersonRepository personRepository;

@GET
public Uni<Long> count(){
    return personRepository.count();
}
----

=== Most useful operations

Once you have written your repository, here are the most common operations you will be able to perform:

[source,java]
----
// creating a person
Person person = new Person();
person.setName("Stef");
person.setBirth(LocalDate.of(1910, Month.FEBRUARY, 1));
person.setStatus(Status.Alive);

// persist it
Uni<Person> persistOperation = personRepository.persist(person);

// note that once persisted, you don't need to explicitly save your entity: all
// modifications are automatically persisted on transaction commit.
// check if it's persistent
if(personRepository.isPersistent(person)){
    // delete it
    Uni<Void> deleteOperation = personRepository.delete(person);
}

// getting a list of all Person entities
Uni<List<Person>> allPersons = personRepository.listAll();

// finding a specific person by ID
Uni<Person> personById = personRepository.findById(23L);

// finding all living persons
Uni<List<Person>> livingPersons = personRepository.list("status", Status.Alive);

// counting all persons
Uni<Long> countAll = personRepository.count();

// counting all living persons
Uni<Long> countAlive = personRepository.count("status", Status.Alive);

// delete all living persons
Uni<Long> deleteLivingOperation = personRepository.delete("status", Status.Alive);

// delete all persons
Uni<Long> deleteAllOperation = personRepository.deleteAll();

// delete by id
Uni<Boolean> deleteByIdOperation = personRepository.deleteById(23L);

// set the name of all living persons to 'Mortal'
Uni<Integer> updateOperation = personRepository.update("name = 'Mortal' where status = ?1", Status.Alive);
----

NOTE: The rest of the documentation shows usages based on the active record pattern only,
but keep in mind that they can be performed with the repository pattern as well.
The repository pattern examples have been omitted for brevity.

== Advanced Query

=== Paging

You should only use the `list` methods if your table contains small enough data sets.
For larger data
sets you can use the `find` method equivalents, which return a `PanacheQuery` on which you can do paging:

[source,java]
----
// create a query for all living persons
PanacheQuery<Person> livingPersons = Person.find("status", Status.Alive);

// make it use pages of 25 entries at a time
livingPersons.page(Page.ofSize(25));

// get the first page
Uni<List<Person>> firstPage = livingPersons.list();

// get the second page
Uni<List<Person>> secondPage = livingPersons.nextPage().list();

// get page 7
Uni<List<Person>> page7 = livingPersons.page(Page.of(7, 25)).list();

// get the number of pages
Uni<Integer> numberOfPages = livingPersons.pageCount();

// get the total number of entities returned by this query without paging
Uni<Long> count = livingPersons.count();

// and you can chain methods of course
Uni<List<Person>> persons = Person.find("status", Status.Alive)
    .page(Page.ofSize(25))
    .nextPage()
    .list();
----

The `PanacheQuery` type has many other methods to deal with paging and returning streams.

=== Using a range instead of pages

`PanacheQuery` also allows range-based queries.

[source,java]
----
// create a query for all living persons
PanacheQuery<Person> livingPersons = Person.find("status", Status.Alive);

// make it use a range: start at index 0 until index 24 (inclusive).
livingPersons.range(0, 24);

// get the range
Uni<List<Person>> firstRange = livingPersons.list();

// to get the next range, you need to call range again
Uni<List<Person>> secondRange = livingPersons.range(25, 49).list();
----

[WARNING]
====
You cannot mix ranges and pages: if you use a range, all methods that depend on having a current page will throw an `UnsupportedOperationException`;
you can switch back to paging using `page(Page)` or `page(int, int)`.
-==== - -=== Sorting - -All methods accepting a query string also accept the following simplified query form: - -[source,java] ----- -Uni> persons = Person.list("order by name,birth"); ----- - -But these methods also accept an optional `Sort` parameter, which allows your to abstract your sorting: - -[source,java] ----- -Uni> persons = Person.list(Sort.by("name").and("birth")); - -// and with more restrictions -Uni> persons = Person.list("status", Sort.by("name").and("birth"), Status.Alive); ----- - -The `Sort` class has plenty of methods for adding columns and specifying sort direction. - -=== Simplified queries - -Normally, HQL queries are of this form: `from EntityName [where ...] [order by ...]`, with optional elements -at the end. - -If your select query does not start with `from`, we support the following additional forms: - -- `order by ...` which will expand to `from EntityName order by ...` -- `` (and single parameter) which will expand to `from EntityName where = ?` -- `` will expand to `from EntityName where ` - -If your update query does not start with `update`, we support the following additional forms: - -- `from EntityName ...` which will expand to `update from EntityName ...` -- `set? ` (and single parameter) which will expand to `update from EntityName set = ?` -- `set? 
` will expand to `update from EntityName set ` - -If your delete query does not start with `delete`, we support the following additional forms: - -- `from EntityName ...` which will expand to `delete from EntityName ...` -- `` (and single parameter) which will expand to `delete from EntityName where = ?` -- `` will expand to `delete from EntityName where ` - -NOTE: You can also write your queries in plain -link:https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#hql[HQL]: - -[source,java] ----- -Order.find("select distinct o from Order o left join fetch o.lineItems"); -Order.update("update from Person set name = 'Mortal' where status = ?", Status.Alive); ----- - -=== Named queries - -You can reference a named query instead of a (simplified) HQL query by prefixing its name with the '#' character. You can also use named queries for count, update and delete queries. - -[source,java] ----- -@Entity -@NamedQueries({ - @NamedQuery(name = "Person.getByName", query = "from Person where name = ?1"), - @NamedQuery(name = "Person.countByStatus", query = "select count(*) from Person p where p.status = :status"), - @NamedQuery(name = "Person.updateStatusById", query = "update Person p set p.status = :status where p.id = :id"), - @NamedQuery(name = "Person.deleteById", query = "delete from Person p where p.id = ?1") -}) -public class Person extends PanacheEntity { - public String name; - public LocalDate birth; - public Status status; - - public static Uni findByName(String name){ - return find("#Person.getByName", name).firstResult(); - } - - public static Uni countByStatus(Status status) { - return count("#Person.countByStatus", Parameters.with("status", status).map()); - } - - public static Uni updateStatusById(Status status, Long id) { - return update("#Person.updateStatusById", Parameters.with("status", status).and("id", id)); - } - - public static Uni deleteById(Long id) { - return delete("#Person.deleteById", id); - } -} ----- - 
[WARNING]
====
Named queries can only be defined inside your JPA entity classes (being the Panache entity class, or the repository parameterized type),
or on one of its super classes.
====

=== Query parameters

You can pass query parameters by index (1-based) as shown below:

[source,java]
----
Person.find("name = ?1 and status = ?2", "stef", Status.Alive);
----

Or by name using a `Map`:

[source,java]
----
Map<String, Object> params = new HashMap<>();
params.put("name", "stef");
params.put("status", Status.Alive);
Person.find("name = :name and status = :status", params);
----

Or using the convenience class `Parameters` either as is or to build a `Map`:

[source,java]
----
// generate a Map
Person.find("name = :name and status = :status",
        Parameters.with("name", "stef").and("status", Status.Alive).map());

// use it as-is
Person.find("name = :name and status = :status",
        Parameters.with("name", "stef").and("status", Status.Alive));
----

Every query operation accepts passing parameters by index (`Object...`), or by name (`Map<String, Object>` or `Parameters`).

=== Query projection

Query projection can be done with the `project(Class)` method on the `PanacheQuery` object that is returned by the `find()` methods.

You can use it to restrict which fields will be returned by the database.

Hibernate will use **DTO projection** and generate a SELECT clause with the attributes from the projection class.
This is also called **dynamic instantiation** or **constructor expression**; more information can be found in the Hibernate guide:
link:https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#hql-select-clause[hql select clause]

The projection class needs to be a valid Java Bean and have a constructor that contains all of its attributes; this constructor will be used to
instantiate the projection DTO instead of using the entity class. It must be the only constructor of the class.
[source,java]
----
import io.quarkus.runtime.annotations.RegisterForReflection;

@RegisterForReflection // <1>
public class PersonName {
    public final String name; // <2>

    public PersonName(String name){ // <3>
        this.name = name;
    }
}

// only 'name' will be loaded from the database
PanacheQuery<PersonName> query = Person.find("status", Status.Alive).project(PersonName.class);
----
<1> The `@RegisterForReflection` annotation instructs Quarkus to keep the class and its members during the native compilation. More details about the `@RegisterForReflection` annotation can be found on the xref:writing-native-applications-tips.adoc#registerForReflection[native application tips] page.
<2> We use public fields here, but you can use private fields and getters/setters if you prefer.
<3> This constructor will be used by Hibernate, and it must have a matching constructor with all the class attributes as parameters.


[WARNING]
====
The implementation of the `project(Class)` method uses the constructor's parameter names to build the select clause of the query,
so the compiler must be configured to store parameter names inside the compiled class.
This is enabled by default if you are using the Quarkus Maven archetype. If you are not using it, add the property `<maven.compiler.parameters>true</maven.compiler.parameters>` to your `pom.xml`.
====

If in the DTO projection object you have a field from a referenced entity, you can use the `@ProjectedFieldName` annotation to provide the path for the SELECT statement.
[source,java]
----
@Entity
public class Dog extends PanacheEntity {
    public String name;
    public String race;
    @ManyToOne
    public Person owner;
}

@RegisterForReflection
public class DogDto {
    public String name;
    public String ownerName;

    public DogDto(String name, @ProjectedFieldName("owner.name") String ownerName) { // <1>
        this.name = name;
        this.ownerName = ownerName;
    }
}

PanacheQuery<DogDto> query = Dog.findAll().project(DogDto.class);
----
<1> The `ownerName` DTO constructor's parameter will be loaded from the `owner.name` HQL property.

== Multiple Persistence Units

Hibernate Reactive in Quarkus currently does not support multiple persistence units.

== Transactions

Make sure to wrap methods modifying your database (e.g. `entity.persist()`) within a transaction. Marking a
CDI bean method with `@ReactiveTransactional` will do that for you and make that method a transaction boundary. Alternatively,
you can use `Panache.withTransaction()` for the same effect. We recommend doing
so at your application entry point boundaries, like your REST endpoint controllers.

NOTE: You cannot use `@Transactional` with Hibernate Reactive for your transactions: you must use `@ReactiveTransactional`,
and your annotated method must return a `Uni` to be non-blocking. Otherwise it needs to be called from a non-`VertxThread` thread
and will become blocking.

JPA batches the changes you make to your entities and sends them (this is called flushing) at the end of the transaction or before a query.
This is usually a good thing, as it's more efficient.
But if you want to check optimistic locking failures, do object validation right away, or generally want immediate feedback, you can force the flush operation by calling `entity.flush()`, or even use `entity.persistAndFlush()` to make it a single method call. This will allow you to catch any `PersistenceException` that could occur when JPA sends those changes to the database.
Remember, this is less efficient, so don't abuse it.
And your transaction still has to be committed.

Here is an example of using the flush method to perform a specific action in case of `PersistenceException`:
[source,java]
----
@ReactiveTransactional
public Uni<Person> create(Person person){
    // Here I use the persistAndFlush() shorthand method to persist to the database, then flush the changes.
    return person.persistAndFlush()
            .onFailure(PersistenceException.class)
            .recoverWithItem(pe -> {
                LOG.error("Unable to create the parameter", pe);
                // in case of error, I save it to disk
                diskPersister.save(person);
                return null;
            });
}
----

The `@ReactiveTransactional` annotation will also work for testing.
This means that changes done during the test will be propagated to the database.
If you want any changes made to be rolled back at the end of
the test, you can use the `io.quarkus.test.TestReactiveTransaction` annotation.
This will run the test method in a transaction, but roll it back once the test method is
complete to revert any database changes.

== Lock management

Panache provides direct support for database locking with your entity/repository, using `findById(Object, LockModeType)` or `find().withLock(LockModeType)`.

The following examples are for the active record pattern, but the same methods can be used with repositories.

=== First: Locking using findById().

[source,java]
----
public class PersonEndpoint {

    @GET
    public Uni<Person> findByIdForUpdate(Long id){
        return Panache.withTransaction(() -> {
            return Person.findById(id, LockModeType.PESSIMISTIC_WRITE)
                .invoke(person -> {
                    // do something useful, the lock will be released when the transaction ends.
                });
        });
    }
}
----

=== Second: Locking in a find().
-
-[source,java]
-----
-public class PersonEndpoint {
-
-    @GET
-    public Uni<Person> findByNameForUpdate(String name){
-        return Panache.withTransaction(() -> {
-            return Person.find("name", name).withLock(LockModeType.PESSIMISTIC_WRITE).firstResult()
-                    .invoke(person -> {
-                        // do something useful; the lock will be released when the transaction ends.
-                    });
-        });
-    }
-
-}
-----
-
-Be careful that locks are released when the transaction ends, so the method that invokes the lock query must be called within a transaction.
-
-== Custom IDs
-
-IDs are often a touchy subject, and not everyone's up for letting the framework handle them. Once again, we
-have you covered.
-
-You can specify your own ID strategy by extending `PanacheEntityBase` instead of `PanacheEntity`. Then
-you just declare whatever ID you want as a public field:
-
-[source,java]
-----
-@Entity
-public class Person extends PanacheEntityBase {
-
-    @Id
-    @SequenceGenerator(
-            name = "personSequence",
-            sequenceName = "person_id_seq",
-            allocationSize = 1,
-            initialValue = 4)
-    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "personSequence")
-    public Integer id;
-
-    //...
-}
-----
-
-If you're using repositories, then you will want to extend `PanacheRepositoryBase` instead of `PanacheRepository`
-and specify your ID type as an extra type parameter:
-
-[source,java]
-----
-@ApplicationScoped
-public class PersonRepository implements PanacheRepositoryBase<Person, Integer> {
-    //...
-}
-----
-
-== Mocking
-
-=== Using the active record pattern
-
-If you are using the active record pattern you cannot use Mockito directly as it does not support mocking static methods,
-but you can use the `quarkus-panache-mock` module which allows you to use Mockito to mock all provided static
-methods, including your own.
-
-Add this dependency to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-panache-mock</artifactId>
-    <scope>test</scope>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-testImplementation("io.quarkus:quarkus-panache-mock")
-----
-
-Given this simple entity:
-
-[source,java]
-----
-@Entity
-public class Person extends PanacheEntity {
-
-    public String name;
-
-    public static Uni<List<Person>> findOrdered() {
-        return find("ORDER BY name").list();
-    }
-}
-----
-
-You can write your mocking test like this:
-
-[source,java]
-----
-@QuarkusTest
-public class PanacheFunctionalityTest {
-
-    @Test
-    public void testPanacheMocking() {
-        PanacheMock.mock(Person.class);
-
-        // Mocked classes always return a default value
-        Assertions.assertEquals(0, Person.count().await().indefinitely());
-
-        // Now let's specify the return value
-        Mockito.when(Person.count()).thenReturn(Uni.createFrom().item(23L));
-        Assertions.assertEquals(23, Person.count().await().indefinitely());
-
-        // Now let's change the return value
-        Mockito.when(Person.count()).thenReturn(Uni.createFrom().item(42L));
-        Assertions.assertEquals(42, Person.count().await().indefinitely());
-
-        // Now let's call the original method
-        Mockito.when(Person.count()).thenCallRealMethod();
-        Assertions.assertEquals(0, Person.count().await().indefinitely());
-
-        // Check that we called it 4 times
-        PanacheMock.verify(Person.class, Mockito.times(4)).count(); // <1>
-
-        // Mock only with specific parameters
-        Person p = new Person();
-        Mockito.when(Person.findById(12L)).thenReturn(Uni.createFrom().item(p));
-        Assertions.assertSame(p, Person.findById(12L).await().indefinitely());
-        Assertions.assertNull(Person.findById(42L).await().indefinitely());
-
-        // Mock throwing
-        Mockito.when(Person.findById(12L)).thenThrow(new WebApplicationException());
-        try {
-            Person.findById(12L);
-            Assertions.fail();
-        } catch
(WebApplicationException x) {
-        }
-
-        // We can even mock your custom methods
-        Mockito.when(Person.findOrdered()).thenReturn(Uni.createFrom().item(Collections.emptyList()));
-        Assertions.assertTrue(Person.findOrdered().await().indefinitely().isEmpty());
-
-        PanacheMock.verify(Person.class).findOrdered();
-        PanacheMock.verify(Person.class, Mockito.atLeastOnce()).findById(Mockito.any());
-        PanacheMock.verifyNoMoreInteractions(Person.class);
-    }
-}
-----
-<1> Be sure to call your `verify` and `do*` methods on `PanacheMock` rather than `Mockito`, otherwise you won't know
-what mock object to pass.
-
-==== Mocking `Mutiny.Session` and entity instance methods
-
-If you need to mock entity instance methods, such as `persist()`, you can do it by mocking the Hibernate Reactive `Mutiny.Session` object:
-
-[source,java]
-----
-@QuarkusTest
-public class PanacheMockingTest {
-
-    @InjectMock
-    Mutiny.Session session;
-
-    @Test
-    public void testPanacheSessionMocking() {
-        Person p = new Person();
-        // mocked via Mutiny.Session mocking
-        p.persist().await().indefinitely();
-        Assertions.assertNull(p.id);
-
-        Mockito.verify(session, Mockito.times(1)).persist(Mockito.any());
-    }
-}
-----
-
-=== Using the repository pattern
-
-If you are using the repository pattern you can use Mockito directly, using the `quarkus-junit5-mockito` module,
-which makes mocking beans much easier:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-junit5-mockito</artifactId>
-    <scope>test</scope>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-testImplementation("io.quarkus:quarkus-junit5-mockito")
-----
-
-Given this simple entity:
-
-[source,java]
-----
-@Entity
-public class Person {
-
-    @Id
-    @GeneratedValue
-    public Long id;
-
-    public String name;
-}
-----
-
-And this repository:
-
-[source,java]
-----
-@ApplicationScoped
-public class PersonRepository implements PanacheRepository<Person> {
-    public
Uni<List<Person>> findOrdered() {
-        return find("ORDER BY name").list();
-    }
-}
-----
-
-You can write your mocking test like this:
-
-[source,java]
-----
-@QuarkusTest
-public class PanacheFunctionalityTest {
-    @InjectMock
-    PersonRepository mockablePersonRepository;
-
-    @Test
-    public void testPanacheRepositoryMocking() throws Throwable {
-        // Mocked classes always return a default value
-        Assertions.assertEquals(0, mockablePersonRepository.count().await().indefinitely());
-
-        // Now let's specify the return value
-        Mockito.when(mockablePersonRepository.count()).thenReturn(Uni.createFrom().item(23L));
-        Assertions.assertEquals(23, mockablePersonRepository.count().await().indefinitely());
-
-        // Now let's change the return value
-        Mockito.when(mockablePersonRepository.count()).thenReturn(Uni.createFrom().item(42L));
-        Assertions.assertEquals(42, mockablePersonRepository.count().await().indefinitely());
-
-        // Now let's call the original method
-        Mockito.when(mockablePersonRepository.count()).thenCallRealMethod();
-        Assertions.assertEquals(0, mockablePersonRepository.count().await().indefinitely());
-
-        // Check that we called it 4 times
-        Mockito.verify(mockablePersonRepository, Mockito.times(4)).count();
-
-        // Mock only with specific parameters
-        Person p = new Person();
-        Mockito.when(mockablePersonRepository.findById(12L)).thenReturn(Uni.createFrom().item(p));
-        Assertions.assertSame(p, mockablePersonRepository.findById(12L).await().indefinitely());
-        Assertions.assertNull(mockablePersonRepository.findById(42L).await().indefinitely());
-
-        // Mock throwing
-        Mockito.when(mockablePersonRepository.findById(12L)).thenThrow(new WebApplicationException());
-        try {
-            mockablePersonRepository.findById(12L);
-            Assertions.fail();
-        } catch (WebApplicationException x) {
-        }
-
-        // We can even mock your custom methods
-        Mockito.when(mockablePersonRepository.findOrdered()).thenReturn(Uni.createFrom().item(Collections.emptyList()));
-        
Assertions.assertTrue(mockablePersonRepository.findOrdered().await().indefinitely().isEmpty());
-
-        Mockito.verify(mockablePersonRepository).findOrdered();
-        Mockito.verify(mockablePersonRepository, Mockito.atLeastOnce()).findById(Mockito.any());
-        Mockito.verify(mockablePersonRepository).persist(Mockito.any());
-        Mockito.verifyNoMoreInteractions(mockablePersonRepository);
-    }
-}
-----
-
-== How and why we simplify Hibernate Reactive mappings
-
-When it comes to writing Hibernate Reactive entities, there are a number of annoying things that users have
-reluctantly grown used to dealing with, such as:
-
-- Duplicating ID logic: most entities need an ID, and most people don't care how it's set, because it's not really
-relevant to your model.
-- Dumb getters and setters: since Java lacks support for properties in the language, we have to create fields,
-then generate getters and setters for those fields, even if they don't actually do anything more than read/write
-the fields.
-- Traditional EE patterns advise splitting the entity definition (the model) from the operations you can do on it
-(DAOs, repositories), but that requires an unnatural split between the state and its operations, even though
-we would never do something like that for regular objects in object-oriented design, where state and methods
-are in the same class. Moreover, this requires two classes per entity, and requires injection of the DAO or repository
-where you need to do entity operations, which breaks your edit flow and requires you to get out of the code you're
-writing to set up an injection point before coming back to use it.
-- Hibernate queries are super powerful, but overly verbose for common operations, requiring you to write queries even
-when you don't need all the parts.
-- Hibernate is very general-purpose, but does not make it trivial to do the trivial operations that make up 90% of our
-model usage.
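
To ground the first two complaints, here is a minimal sketch of the traditional boilerplate being criticized. The class and field names are illustrative, not taken from the guide:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class TraditionalPerson {

    // Complaint 1: every entity repeats this ID plumbing.
    @Id
    @GeneratedValue
    private Long id;

    // Complaint 2: a private field plus a getter/setter pair that add nothing.
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

And that is before adding a separate DAO or repository class just to be able to query it.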
-
-With Panache, we took an opinionated approach to tackle all these problems:
-
-- Make your entities extend `PanacheEntity`: it has an ID field that is auto-generated. If you require
-a custom ID strategy, you can extend `PanacheEntityBase` instead and handle the ID yourself.
-- Use public fields. Get rid of dumb getters and setters. Under the hood, we will generate all getters and setters
-that are missing, and rewrite every access to these fields to use the accessor methods. This way you can still
-write _useful_ accessors when you need them, which will be used even though your entity users still use field accesses.
-- With the active record pattern: put all your entity logic in static methods in your entity class and don't create DAOs.
-Your entity superclass comes with lots of super useful static methods, and you can add your own in your entity class.
-Users can just start using your entity `Person` by typing `Person.` and getting completion for all the operations in a single place.
-- Don't write parts of the query that you don't need: write `Person.find("order by name")` or
-`Person.find("name = ?1 and status = ?2", "stef", Status.Alive)` or even better
-`Person.find("name", "stef")`.
-
-That's all there is to it: with Panache, Hibernate Reactive has never looked so trim and neat.
-
-== Defining entities in external projects or jars
-
-Hibernate Reactive with Panache relies on compile-time bytecode enhancements to your entities.
-
-It attempts to identify archives with Panache entities (and consumers of Panache entities)
-by the presence of the marker file `META-INF/panache-archive.marker`. Panache includes an
-annotation processor that will automatically create this file in archives that depend on
-Panache (even indirectly). If you have disabled annotation processors, you may need to create
-this file manually in some cases.
-
-WARNING: If you include the jpa-modelgen annotation processor, it will exclude the Panache
-annotation processor by default.
If you do this, you should either create the marker file
-yourself, or add `quarkus-panache-common` as well, as shown below:
-
-[source,xml]
-----
-<plugin>
-    <artifactId>maven-compiler-plugin</artifactId>
-    <version>${compiler-plugin.version}</version>
-    <configuration>
-        <annotationProcessorPaths>
-            <path>
-                <groupId>org.hibernate</groupId>
-                <artifactId>hibernate-jpamodelgen</artifactId>
-                <version>${hibernate.version}</version>
-            </path>
-            <path>
-                <groupId>io.quarkus</groupId>
-                <artifactId>quarkus-panache-common</artifactId>
-                <version>${quarkus.platform.version}</version>
-            </path>
-        </annotationProcessorPaths>
-    </configuration>
-</plugin>
-----
diff --git a/_versions/2.7/guides/hibernate-reactive.adoc b/_versions/2.7/guides/hibernate-reactive.adoc
deleted file mode 100644
index b1277593467..00000000000
--- a/_versions/2.7/guides/hibernate-reactive.adoc
+++ /dev/null
@@ -1,240 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using Hibernate Reactive
-
-include::./attributes.adoc[]
-:config-file: application.properties
-:reactive-doc-url-prefix: https://hibernate.org/reactive/documentation/1.1/reference/html_single/#getting-started
-
-link:https://hibernate.org/reactive/[Hibernate Reactive] is a reactive API for Hibernate ORM, supporting non-blocking database drivers
-and a reactive style of interaction with the database.
-
-[NOTE]
-====
-Hibernate Reactive works with the same annotations and most of the configuration described in the
-xref:quarkus-hibernate-orm.adoc[Hibernate ORM guide]. This guide will only focus on what's specific
-for Hibernate Reactive.
-====
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `hibernate-reactive-quickstart` {quickstarts-tree-url}/hibernate-reactive-quickstart[directory].
-
-[[hr-getting-started]]
-== Setting up and configuring Hibernate Reactive
-
-When using Hibernate Reactive in Quarkus, you need to:
-
-* add your configuration settings in `{config-file}`
-* annotate your entities with `@Entity` and any other mapping annotations as usual
-
-Other configuration needs have been automated: Quarkus will make some opinionated choices and educated guesses.
-
-Add the following dependencies to your project:
-
-* the Hibernate Reactive extension: `io.quarkus:quarkus-hibernate-reactive`
-* the xref:reactive-sql-clients.adoc[Reactive SQL client extension] for the database of your choice; the following options are available:
-  - `quarkus-reactive-pg-client`: link:https://vertx.io/docs/vertx-pg-client/java[the client for PostgreSQL or CockroachDB]
-  - `quarkus-reactive-mysql-client`: link:https://vertx.io/docs/vertx-mysql-client/java[the client for MySQL or MariaDB]
-  - `quarkus-reactive-mssql-client`: link:https://vertx.io/docs/vertx-mssql-client/java[the client for Microsoft SQL Server]
-  - `quarkus-reactive-db2-client`: link:https://vertx.io/docs/vertx-db2-client/java[the client for IBM Db2]
-  - `quarkus-reactive-oracle-client`: link:https://vertx.io/docs/vertx-oracle-client/java[the client for Oracle]
-
-For instance:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<!-- Hibernate Reactive dependency -->
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-hibernate-reactive</artifactId>
-</dependency>
-
-<!-- Reactive SQL client for PostgreSQL -->
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-reactive-pg-client</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-// Hibernate Reactive dependency
-implementation("io.quarkus:quarkus-hibernate-reactive")
-
-// Reactive SQL client for PostgreSQL
-implementation("io.quarkus:quarkus-reactive-pg-client")
-----
-
-Annotate your persistent objects with `@Entity`,
-then add the relevant configuration properties in `{config-file}`:
-
-[source,properties]
-.Example `{config-file}`
-----
-# datasource configuration
-quarkus.datasource.db-kind =
postgresql
-quarkus.datasource.username = quarkus_test
-quarkus.datasource.password = quarkus_test
-
-quarkus.datasource.reactive.url = vertx-reactive:postgresql://localhost/quarkus_test <1>
-
-# drop and create the database at startup (use `update` to only update the schema)
-quarkus.hibernate-orm.database.generation=drop-and-create
-----
-
-<1> The only property that differs from a Hibernate ORM configuration
-
-Note that these configuration properties are not the same ones as in your typical Hibernate Reactive configuration file.
-They will often map to Hibernate Reactive configuration properties but could have different names and don't necessarily map 1:1 to each other.
-
-Also, Quarkus will set many Hibernate Reactive configuration settings automatically, and will often use more modern defaults.
-
-WARNING: Configuring Hibernate Reactive using the standard `persistence.xml` configuration file is not supported.
-
-Please see the <<hr-configuration-properties>> section for the list of properties you can set in `{config-file}`.
-
-A `Mutiny.SessionFactory` will be created based on the Quarkus `datasource` configuration as long as the Hibernate Reactive extension is listed among your project dependencies.
-
-The dialect will be selected based on the Reactive SQL client - unless you set one explicitly.
-
-You can then happily inject your `Mutiny.SessionFactory`:
-
-[source,java]
-.Example application bean using Hibernate Reactive
-----
-@ApplicationScoped
-public class SantaClausService {
-    @Inject
-    Mutiny.SessionFactory sf; // <1>
-
-    public Uni<Void> createGift(String giftDescription) {
-        Gift gift = new Gift();
-        gift.setName(giftDescription);
-        return sf.withTransaction(session -> session.persist(gift)); // <2>
-    }
-}
-----
-
-<1> Inject your session factory and have fun
-<2> `.withTransaction()` will automatically flush at commit
-
-WARNING: Make sure to wrap methods modifying your database (e.g. `session.persist(entity)`) within a transaction.
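-
Read-only access, by contrast, does not need a transaction. As a sketch (assuming the injected `Mutiny.SessionFactory` field `sf` and the `Gift` entity from this guide), you can open a plain session and query directly:

```java
public Uni<Gift> findGift(Long id) {
    // No mutation here, so a plain session is enough: withTransaction()
    // is only needed for writes, which must be flushed at commit.
    return sf.withSession(session -> session.find(Gift.class, id));
}
```

`withSession()` takes care of opening and closing the session around the returned `Uni`.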
-
-[source,java]
-.Example of an Entity
-----
-@Entity
-public class Gift {
-    private Long id;
-    private String name;
-
-    @Id
-    @SequenceGenerator(name = "giftSeq", sequenceName = "gift_id_seq", allocationSize = 1, initialValue = 1)
-    @GeneratedValue(generator = "giftSeq")
-    public Long getId() {
-        return id;
-    }
-
-    public void setId(Long id) {
-        this.id = id;
-    }
-
-    public String getName() {
-        return name;
-    }
-
-    public void setName(String name) {
-        this.name = name;
-    }
-}
-----
-
-To load SQL statements when Hibernate Reactive starts, add an `import.sql` file in your `src/main/resources/` directory.
-This script can contain any SQL DML statements.
-Make sure to terminate each statement with a semicolon.
-
-This is useful to have a data set ready for your tests or demos.
-
-[[hr-configuration-properties]]
-=== Hibernate Reactive configuration properties
-
-There are various optional properties useful to refine your session factory or guide Quarkus' guesses.
-
-When no properties are set, Quarkus can typically infer everything it needs to set up Hibernate Reactive
-and will have it use the default datasource.
-
-The configuration properties listed here allow you to override such defaults, and customize and tune various aspects.
-
-Hibernate Reactive uses the same properties you would use for Hibernate ORM. You will notice that some properties
-contain `jdbc` in the name, but there is no JDBC in Hibernate Reactive; these are simply legacy property names.
-
-include::{generated-dir}/config/quarkus-hibernate-orm.adoc[opts=optional, leveloffset=+2]
-
-[TIP]
-====
-Want to start a PostgreSQL server on the side with Docker?
-
-[source,bash]
-----
-docker run --rm --name postgres-quarkus-hibernate -e POSTGRES_USER=quarkus_test \
-           -e POSTGRES_PASSWORD=quarkus_test -e POSTGRES_DB=quarkus_test \
-           -p 5432:5432 postgres:14.1
-----
-
-This will start a non-durable empty database: ideal for a quick experiment!
-====
-
-[[hr-cdi-integration]]
-==== CDI integration
-
-If you are familiar with using Hibernate Reactive in Quarkus, you probably already have injected the `Mutiny.SessionFactory` using CDI:
-
-[source,java]
-----
-@Inject
-Mutiny.SessionFactory sessionFactory;
-----
-
-This will inject the `Mutiny.SessionFactory` of the default persistence unit.
-
-You can also inject an instance of `Uni<Mutiny.Session>` using the exact same mechanism:
-
-[source,java]
-----
-@Inject
-Uni<Mutiny.Session> session;
-----
-
-[[hr-limitations]]
-== Limitations and other things you should know
-
-Quarkus does not modify the libraries it uses; this rule applies to Hibernate Reactive as well: when using
-this extension you will mostly have the same experience as using the original library.
-
-But while they share the same code, Quarkus does configure some components automatically and inject custom implementations
-for some extension points; this should be transparent and useful but if you're an expert of Hibernate Reactive you might want to
-know what is being done.
-
-Here's a list of things to pay attention to when using Hibernate Reactive in Quarkus:
-
-* it's not possible to configure multiple persistence units at the moment
-* it's not configurable via a `persistence.xml` file
-* integration with the Envers extension is not supported
-* transaction demarcation cannot be done using `javax.transaction.Transactional`
-
-== Simplifying Hibernate Reactive with Panache
-
-The xref:hibernate-reactive-panache.adoc[Hibernate Reactive with Panache] extension facilitates the usage of Hibernate Reactive
-by providing active record style entities (and repositories) and focuses on making your entities trivial and fun to write in Quarkus.
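-
As a quick, illustrative taste of the difference (the `Gift` entity name is reused from the examples above; treat the snippet as a sketch rather than a drop-in implementation):

```java
// Plain Hibernate Reactive: you go through the session factory explicitly.
Uni<List<Gift>> gifts = sf.withSession(session ->
        session.createQuery("from Gift", Gift.class).getResultList());

// With Hibernate Reactive with Panache active record style, the entity
// carries the operations itself (assuming Gift extends PanacheEntity):
Uni<List<Gift>> sameGifts = Gift.listAll();
```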
-
diff --git a/_versions/2.7/guides/hibernate-search-orm-elasticsearch.adoc b/_versions/2.7/guides/hibernate-search-orm-elasticsearch.adoc
deleted file mode 100644
index fbeeee54f17..00000000000
--- a/_versions/2.7/guides/hibernate-search-orm-elasticsearch.adoc
+++ /dev/null
@@ -1,945 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Hibernate Search guide
-:hibernate-search-doc-prefix: https://docs.jboss.org/hibernate/search/6.1/reference/en-US/html_single/
-include::./attributes.adoc[]
-
-You have a Hibernate ORM-based application? You want to provide a full-featured full-text search to your users? You're at the right place.
-
-With this guide, you'll learn how to synchronize your entities to an Elasticsearch or OpenSearch cluster in a heartbeat with Hibernate Search.
-We will also explore how you can query your Elasticsearch or OpenSearch cluster using the Hibernate Search API.
-
-== Prerequisites
-
-:prerequisites-time: 20 minutes
-:prerequisites-docker:
-include::includes/devtools/prerequisites.adoc[]
-
-== Architecture
-
-The application described in this guide allows you to manage a (simple) library: you manage authors and their books.
-
-The entities are stored in a PostgreSQL database and indexed in an Elasticsearch cluster.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `hibernate-search-orm-elasticsearch-quickstart` {quickstarts-tree-url}/hibernate-search-orm-elasticsearch-quickstart[directory].
-
-[NOTE]
-====
-The provided solution contains a few additional elements such as tests and testing infrastructure.
-==== - -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: hibernate-search-orm-elasticsearch-quickstart -:create-app-extensions: hibernate-orm-panache,jdbc-postgresql,hibernate-search-orm-elasticsearch,resteasy-jackson -include::includes/devtools/create-app.adoc[] - -This command generates a Maven structure importing the following extensions: - - * Hibernate ORM with Panache, - * the PostgreSQL JDBC driver, - * Hibernate Search + Elasticsearch, - * RESTEasy and Jackson. - -If you already have your Quarkus project configured, you can add the `hibernate-search-orm-elasticsearch` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: hibernate-search-orm-elasticsearch -include::includes/devtools/extension-add.adoc[] - -This will add the following to your `pom.xml`: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - io.quarkus - quarkus-hibernate-search-orm-elasticsearch - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-hibernate-search-orm-elasticsearch") ----- - -== Creating the bare entities - -First, let's create our Hibernate ORM entities `Book` and `Author` in the `model` subpackage. 
-
-[source,java]
-----
-package org.acme.hibernate.search.elasticsearch.model;
-
-import java.util.List;
-import java.util.Objects;
-
-import javax.persistence.CascadeType;
-import javax.persistence.Entity;
-import javax.persistence.FetchType;
-import javax.persistence.OneToMany;
-
-import io.quarkus.hibernate.orm.panache.PanacheEntity;
-
-@Entity
-public class Author extends PanacheEntity { // <1>
-
-    public String firstName;
-
-    public String lastName;
-
-    @OneToMany(mappedBy = "author", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.EAGER) // <2>
-    public List<Book> books;
-
-    @Override
-    public boolean equals(Object o) {
-        if (this == o) {
-            return true;
-        }
-        if (!(o instanceof Author)) {
-            return false;
-        }
-
-        Author other = (Author) o;
-
-        return Objects.equals(id, other.id);
-    }
-
-    @Override
-    public int hashCode() {
-        return 31;
-    }
-}
-----
-<1> We are using Hibernate ORM with Panache; it is not mandatory.
-<2> We are loading these elements eagerly so that they are present in the JSON output.
-In a real-world application, you should probably use a DTO approach.
-
-[source,java]
-----
-package org.acme.hibernate.search.elasticsearch.model;
-
-import java.util.Objects;
-
-import javax.persistence.Entity;
-import javax.persistence.ManyToOne;
-
-import com.fasterxml.jackson.annotation.JsonIgnore;
-
-import io.quarkus.hibernate.orm.panache.PanacheEntity;
-
-@Entity
-public class Book extends PanacheEntity {
-
-    public String title;
-
-    @ManyToOne
-    @JsonIgnore // <1>
-    public Author author;
-
-    @Override
-    public boolean equals(Object o) {
-        if (this == o) {
-            return true;
-        }
-        if (!(o instanceof Book)) {
-            return false;
-        }
-
-        Book other = (Book) o;
-
-        return Objects.equals(id, other.id);
-    }
-
-    @Override
-    public int hashCode() {
-        return 31;
-    }
-}
-----
-<1> We mark this property with `@JsonIgnore` to avoid infinite loops when serializing with Jackson.
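-
Because both entities extend `PanacheEntity`, they already inherit finders and persistence operations. As an illustrative sketch (the service class is hypothetical; the Panache methods shown are the standard ones):

```java
@ApplicationScoped
public class LibraryService {

    public List<Book> booksByTitle(String title) {
        // Panache shorthand query: expands to "from Book where title = ?1"
        return Book.list("title", title);
    }

    @Transactional
    public void addAuthor(String firstName, String lastName) {
        Author author = new Author();
        author.firstName = firstName;
        author.lastName = lastName;
        author.persist();
    }
}
```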
- -== Initializing the REST service - -While everything is not yet set up for our REST service, we can initialize it with the standard CRUD operations we will need. - -Create the `org.acme.hibernate.search.elasticsearch.LibraryResource` class: - -[source,java] ----- -package org.acme.hibernate.search.elasticsearch; - -import javax.transaction.Transactional; -import javax.ws.rs.Consumes; -import javax.ws.rs.DELETE; -import javax.ws.rs.POST; -import javax.ws.rs.PUT; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import org.acme.hibernate.search.elasticsearch.model.Author; -import org.acme.hibernate.search.elasticsearch.model.Book; -import org.jboss.resteasy.annotations.jaxrs.FormParam; -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -@Path("/library") -public class LibraryResource { - - @PUT - @Path("book") - @Transactional - @Consumes(MediaType.APPLICATION_FORM_URLENCODED) - public void addBook(@FormParam String title, @FormParam Long authorId) { - Author author = Author.findById(authorId); - if (author == null) { - return; - } - - Book book = new Book(); - book.title = title; - book.author = author; - book.persist(); - - author.books.add(book); - author.persist(); - } - - @DELETE - @Path("book/{id}") - @Transactional - public void deleteBook(@PathParam Long id) { - Book book = Book.findById(id); - if (book != null) { - book.author.books.remove(book); - book.delete(); - } - } - - @PUT - @Path("author") - @Transactional - @Consumes(MediaType.APPLICATION_FORM_URLENCODED) - public void addAuthor(@FormParam String firstName, @FormParam String lastName) { - Author author = new Author(); - author.firstName = firstName; - author.lastName = lastName; - author.persist(); - } - - @POST - @Path("author/{id}") - @Transactional - @Consumes(MediaType.APPLICATION_FORM_URLENCODED) - public void updateAuthor(@PathParam Long id, @FormParam String firstName, @FormParam String lastName) { - Author author = Author.findById(id); - 
if (author == null) {
-            return;
-        }
-        author.firstName = firstName;
-        author.lastName = lastName;
-        author.persist();
-    }
-
-    @DELETE
-    @Path("author/{id}")
-    @Transactional
-    public void deleteAuthor(@PathParam Long id) {
-        Author author = Author.findById(id);
-        if (author != null) {
-            author.delete();
-        }
-    }
-}
-----
-
-Nothing out of the ordinary here: it is just good old Hibernate ORM with Panache operations in a standard JAX-RS service.
-
-In fact, the interesting part is that we will need to add very few elements to make our full text search application work.
-
-== Using Hibernate Search annotations
-
-Let's go back to our entities.
-
-Enabling full text search capabilities for them is as simple as adding a few annotations.
-
-Let's edit the `Book` entity again to include this content:
-
-[source,java]
-----
-package org.acme.hibernate.search.elasticsearch.model;
-
-import java.util.Objects;
-
-import javax.persistence.Entity;
-import javax.persistence.ManyToOne;
-
-import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
-import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;
-
-import com.fasterxml.jackson.annotation.JsonIgnore;
-
-import io.quarkus.hibernate.orm.panache.PanacheEntity;
-
-@Entity
-@Indexed // <1>
-public class Book extends PanacheEntity {
-
-    @FullTextField(analyzer = "english") // <2>
-    public String title;
-
-    @ManyToOne
-    @JsonIgnore
-    public Author author;
-
-    // Preexisting equals()/hashCode() methods
-}
-----
-<1> First, let's use the `@Indexed` annotation to register our `Book` entity as part of the full text index.
-<2> The `@FullTextField` annotation declares a field in the index specifically tailored for full text search.
-In particular, we have to define an analyzer to split and analyze the tokens (~ words) - more on this later.
-
-Now that our books are indexed, we can do the same for the authors.
-
-Open the `Author` class and include the content below.
-
-Things are quite similar here: we use the `@Indexed`, `@FullTextField` and `@KeywordField` annotations.
-
-There are a few differences/additions though. Let's check them out.
-
-[source,java]
-----
-package org.acme.hibernate.search.elasticsearch.model;
-
-import java.util.List;
-import java.util.Objects;
-
-import javax.persistence.CascadeType;
-import javax.persistence.Entity;
-import javax.persistence.FetchType;
-import javax.persistence.OneToMany;
-
-import org.hibernate.search.engine.backend.types.Sortable;
-import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
-import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;
-import org.hibernate.search.mapper.pojo.mapping.definition.annotation.IndexedEmbedded;
-import org.hibernate.search.mapper.pojo.mapping.definition.annotation.KeywordField;
-
-import io.quarkus.hibernate.orm.panache.PanacheEntity;
-
-@Entity
-@Indexed
-public class Author extends PanacheEntity {
-
-    @FullTextField(analyzer = "name") // <1>
-    @KeywordField(name = "firstName_sort", sortable = Sortable.YES, normalizer = "sort") // <2>
-    public String firstName;
-
-    @FullTextField(analyzer = "name")
-    @KeywordField(name = "lastName_sort", sortable = Sortable.YES, normalizer = "sort")
-    public String lastName;
-
-    @OneToMany(mappedBy = "author", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.EAGER)
-    @IndexedEmbedded // <3>
-    public List<Book> books;
-
-    // Preexisting equals()/hashCode() methods
-}
-----
-<1> We use a `@FullTextField` similar to what we did for `Book`, but you'll notice that the analyzer is different - more on this later.
-<2> As you can see, we can define several fields for the same property.
-Here, we define a `@KeywordField` with a specific name.
-The main difference is that a keyword field is not tokenized (the string is kept as one single token) but can be normalized (i.e. filtered) - more on this later.
-This field is marked as sortable as our intention is to use it for sorting our authors.
-<3> The purpose of `@IndexedEmbedded` is to include the `Book` fields into the `Author` index.
-In this case, we just use the default configuration: all the fields of the associated `Book` entities are included in the index (i.e. the `title` field).
-The nice thing with `@IndexedEmbedded` is that it is able to automatically reindex an `Author` if one of its ``Book``s has been updated thanks to the bidirectional relation.
-`@IndexedEmbedded` also supports nested documents (using the `storage = NESTED` attribute) but we don't need it here.
-You can also specify the fields you want to include in your parent index using the `includePaths` attribute if you don't want them all.
-
-== Analyzers and normalizers
-
-=== Introduction
-
-Analysis is a big part of full text search: it defines how text will be processed when indexing or building search queries.
-
-The role of analyzers is to split the text into tokens (~ words) and filter them (making it all lowercase and removing accents for instance).
-
-Normalizers are a special type of analyzer that keeps the input as a single token.
-They are especially useful for sorting or indexing keywords.
-
-There are a lot of bundled analyzers but you can also develop your own for your own specific purposes.
-
-You can learn more about the Elasticsearch analysis framework in the https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis.html[Analysis section of the Elasticsearch documentation].
-
-=== Defining the analyzers used
-
-When we added the Hibernate Search annotations to our entities, we defined the analyzers and normalizers used.
-Typically:
-
-[source,java]
-----
-@FullTextField(analyzer = "english")
-----
-
-[source,java]
-----
-@FullTextField(analyzer = "name")
-----
-
-[source,java]
-----
-@KeywordField(name = "lastName_sort", sortable = Sortable.YES, normalizer = "sort")
-----
-
-We use:
-
- * an analyzer called `name` for person names,
- * an analyzer called `english` for book titles,
- * a normalizer called `sort` for our sort fields
-
-but we haven't set them up yet.
-
-Let's see how you can do it with Hibernate Search.
-
-[[analysis-configurer]]
-=== Setting up the analyzers
-
-This is an easy task: we just need to create an implementation of `ElasticsearchAnalysisConfigurer`
-(and configure Quarkus to use it, more on that later).
-
-To fulfill our requirements, let's create the following implementation:
-
-[source,java]
-----
-package org.acme.hibernate.search.elasticsearch.config;
-
-import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurationContext;
-import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurer;
-
-import javax.enterprise.context.Dependent;
-import javax.inject.Named;
-
-@Dependent
-@Named("myAnalysisConfigurer") // <1>
-public class AnalysisConfigurer implements ElasticsearchAnalysisConfigurer {
-
- @Override
- public void configure(ElasticsearchAnalysisConfigurationContext context) {
- context.analyzer("name").custom() // <2>
- .tokenizer("standard")
- .tokenFilters("asciifolding", "lowercase");
-
- context.analyzer("english").custom() // <3>
- .tokenizer("standard")
- .tokenFilters("asciifolding", "lowercase", "porter_stem");
-
- context.normalizer("sort").custom() // <4>
- .tokenFilters("asciifolding", "lowercase");
- }
-}
-----
-<1> We will need to reference the configurer from the configuration properties, so we make it a named bean.
-<2> This is a simple analyzer separating the words on spaces, replacing any non-ASCII character with its ASCII counterpart (and thus removing accents) and putting everything in lowercase.
-It is used in our examples for the author's names.
-<3> We are a bit more aggressive with this one and we include some stemming: we will be able to search for `mystery` and get a result even if the indexed input contains `mysteries`.
-It is definitely too aggressive for person names but it is perfect for the book titles.
-<4> Here is the normalizer used for sorting. Very similar to our first analyzer, except we don't tokenize the words as we want one and only one token.
-
-== Adding full text capabilities to our REST service
-
-In our existing `LibraryResource`, we just need to inject the `SearchSession`:
-
-[source,java]
-----
- @Inject
- SearchSession searchSession; // <1>
-----
-<1> Inject a Hibernate Search session, which relies on the `EntityManager` under the hood.
-Applications with multiple persistence units can use the CDI qualifier `@io.quarkus.hibernate.orm.PersistenceUnit`
-to select the right one:
-see <<multiple-persistence-units>>.
-
-And then we can add the following methods (and a few ``import``s):
-
-[source,java]
-----
- @Transactional // <1>
- void onStart(@Observes StartupEvent ev) throws InterruptedException { // <2>
- // only reindex if we imported some content
- if (Book.count() > 0) {
- searchSession.massIndexer()
- .startAndWait();
- }
- }
-
- @GET
- @Path("author/search") // <3>
- @Transactional
- public List<Author> searchAuthors(@QueryParam String pattern, // <4>
- @QueryParam Optional<Integer> size) {
- return searchSession.search(Author.class) // <5>
- .where(f ->
- pattern == null || pattern.trim().isEmpty() ?
- f.matchAll() : // <6>
- f.simpleQueryString()
- .fields("firstName", "lastName", "books.title").matching(pattern) // <7>
- )
- .sort(f -> f.field("lastName_sort").then().field("firstName_sort")) // <8>
- .fetchHits(size.orElse(20)); // <9>
- }
-----
-<1> Important point: we need a transactional context for these methods.
-<2> As we will import data into the PostgreSQL database using an SQL script, we need to reindex the data at startup.
-For this, we use Hibernate Search's mass indexer, which allows indexing a lot of data efficiently (you can fine-tune it for better performance).
-All subsequent updates made through Hibernate ORM operations will be synchronized automatically to the full text index.
-If you don't import data manually in the database, you don't need that:
-the mass indexer should then only be used when you change your indexing configuration (adding a new field, changing an analyzer's configuration...) and you want the new configuration to be applied to your existing entities.
-<3> This is where the magic begins: just adding the annotations to our entities makes them available for full text search; we can now query the index using the Hibernate Search DSL.
-<4> Use the `org.jboss.resteasy.annotations.jaxrs.QueryParam` annotation type to avoid repeating the parameter name.
-<5> We indicate that we are searching for ``Author``s.
-<6> We create a predicate: if the pattern is empty, we use a `matchAll()` predicate.
-<7> If we have a valid pattern, we create a https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html[`simpleQueryString()`] predicate on the `firstName`, `lastName` and `books.title` fields matching our pattern.
-<8> We define the sort order of our results. Here we sort by last name, then by first name. Note that we use the specific fields we created for sorting.
-<9> Fetch the `size` top hits, `20` by default. Obviously, paging is also supported.
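With the application and its infrastructure up (they are set up in the following sections), the endpoint above can be exercised with a plain HTTP call. The snippet below is a hypothetical usage sketch: the `/library` resource root and port 8080 are assumptions, as the resource's root path is not shown in this excerpt.

```shell
# Hypothetical usage sketch: the /library resource root and port 8080 are
# assumptions, not values taken from this guide.
BASE_URL="http://localhost:8080/library"

# URL-encode the search pattern (spaces become %20) before building the query
PATTERN=$(printf 'owen meany' | sed 's/ /%20/g')
echo "GET ${BASE_URL}/author/search?pattern=${PATTERN}&size=5"

# With the application running, the actual call would be:
# curl -s "${BASE_URL}/author/search?pattern=${PATTERN}&size=5"
```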
-
-[NOTE]
-====
-The Hibernate Search DSL supports a significant subset of the Elasticsearch predicates (match, range, nested, phrase, spatial...).
-Feel free to explore the DSL using autocompletion.
-
-When that's not enough, you can always fall back to
-link:{hibernate-search-doc-prefix}#search-dsl-predicate-extensions-elasticsearch-from-json[defining a predicate using JSON directly].
-====
-
-== Configuring the application
-
-As usual, we can configure everything in the Quarkus configuration file, `application.properties`.
-
-Edit `src/main/resources/application.properties` and inject the following configuration:
-
-[source,properties]
-----
-quarkus.ssl.native=false <1>
-
-quarkus.datasource.db-kind=postgresql <2>
-quarkus.datasource.username=quarkus_test
-quarkus.datasource.password=quarkus_test
-quarkus.datasource.jdbc.url=jdbc:postgresql:quarkus_test
-
-quarkus.hibernate-orm.database.generation=drop-and-create <3>
-quarkus.hibernate-orm.sql-load-script=import.sql <4>
-
-quarkus.hibernate-search-orm.elasticsearch.version=7 <5>
-quarkus.hibernate-search-orm.elasticsearch.analysis.configurer=bean:myAnalysisConfigurer <6>
-quarkus.hibernate-search-orm.schema-management.strategy=drop-and-create <7>
-quarkus.hibernate-search-orm.automatic-indexing.synchronization.strategy=sync <8>
-----
-<1> We won't use SSL so we disable it to have a more compact native executable.
-<2> Let's create a PostgreSQL datasource.
-<3> We will drop and recreate the schema every time we start the application.
-<4> We load some initial data.
-<5> We need to tell Hibernate Search about the version of Elasticsearch we will use.
-It is important because there are significant differences between Elasticsearch mapping syntax depending on the version.
-Since the mapping is created at build time to reduce startup time, Hibernate Search cannot connect to the cluster to automatically detect the version.
-Note that, for OpenSearch, you need to prefix the version with `opensearch:`; see <<opensearch>>.
-<6> We point to the custom `AnalysisConfigurer` which defines the configuration of our analyzers and normalizers.
-<7> Obviously, this is not for production: we drop and recreate the index every time we start the application.
-<8> This means that we wait for the entities to be searchable before considering a write complete.
-In a production setup, the `write-sync` default will provide better performance.
-Using `sync` is especially important when testing as you need the entities to be searchable immediately.
-
-[TIP]
-For more information about the Hibernate Search extension configuration please refer to the <<configuration-reference>>.
-
-== Creating a frontend
-
-Now let's add a simple web page to interact with our `LibraryResource`.
-Quarkus automatically serves static resources located under the `META-INF/resources` directory.
-In the `src/main/resources/META-INF/resources` directory, overwrite the existing `index.html` file with the content from this
-{quickstarts-blob-url}/hibernate-search-orm-elasticsearch-quickstart/src/main/resources/META-INF/resources/index.html[index.html] file.
-
-== Automatic import script
-
-For the purpose of this demonstration, let's import an initial dataset.
- -Let's create a `src/main/resources/import.sql` file with the following content: - -[source,sql] ----- -INSERT INTO author(id, firstname, lastname) VALUES (nextval('hibernate_sequence'), 'John', 'Irving'); -INSERT INTO author(id, firstname, lastname) VALUES (nextval('hibernate_sequence'), 'Paul', 'Auster'); - -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'The World According to Garp', 1); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'The Hotel New Hampshire', 1); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'The Cider House Rules', 1); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'A Prayer for Owen Meany', 1); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'Last Night in Twisted River', 1); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'In One Person', 1); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'Avenue of Mysteries', 1); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'The New York Trilogy', 2); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'Mr. Vertigo', 2); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'The Brooklyn Follies', 2); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'Invisible', 2); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), 'Sunset Park', 2); -INSERT INTO book(id, title, author_id) VALUES (nextval('hibernate_sequence'), '4 3 2 1', 2); ----- - -== Preparing the infrastructure - -We need a PostgreSQL instance and an Elasticsearch cluster. 
- -Let's use Docker to start one of each: - -[source,bash,subs=attributes+] ----- -docker run -it --rm=true --name elasticsearch_quarkus_test -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch-oss:{elasticsearch-version} ----- - -[source,bash] ----- -docker run -it --rm=true --name postgresql_quarkus_test -e POSTGRES_USER=quarkus_test -e POSTGRES_PASSWORD=quarkus_test -e POSTGRES_DB=quarkus_test -p 5432:5432 postgres:14.1 ----- - -== Time to play with your application - -You can now interact with your REST service: - -:devtools-wrapped: - - * start your Quarkus application with: -+ -include::includes/devtools/dev.adoc[] - * open a browser to `http://localhost:8080/` - * search for authors or book titles (we initialized some data for you) - * create new authors and books and search for them too - -:!devtools-wrapped: - -As you can see, all your updates are automatically synchronized to the Elasticsearch cluster. - -[[opensearch]] -== OpenSearch compatibility - -Hibernate Search is compatible with both https://www.elastic.co/elasticsearch[Elasticsearch] -and https://www.opensearch.org/[OpenSearch], -but it assumes it is working with an Elasticsearch cluster by default. - -To have Hibernate Search work with an OpenSearch cluster instead, -link:{hibernate-search-doc-prefix}#backend-elasticsearch-configuration-version[prefix the configured version with `opensearch:`], -as shown below. - -[source,properties] ----- -quarkus.hibernate-search-orm.elasticsearch.version=opensearch:1.2 ----- - -All other configuration options and APIs are exactly the same as with Elasticsearch. - -You can find more information about compatible distributions and versions of Elasticsearch in -link:{hibernate-search-doc-prefix}#getting-started-compatibility[this section of Hibernate Search's reference documentation]. 
-
-[[multiple-persistence-units]]
-== Multiple persistence units
-
-=== Configuring multiple persistence units
-
-With the Hibernate ORM extension,
-xref:hibernate-orm.adoc#multiple-persistence-units[you can set up multiple persistence units],
-each with its own datasource and configuration.
-
-If you do declare multiple persistence units,
-you will also need to configure Hibernate Search separately for each persistence unit.
-
-The properties at the root of the `quarkus.hibernate-search-orm.` namespace define the default persistence unit.
-For instance, the following snippet defines a default datasource and a default persistence unit,
-and sets the Elasticsearch host for that persistence unit to `es1.mycompany.com:9200`.
-
-[source,properties]
-----
-quarkus.datasource.db-kind=h2
-quarkus.datasource.jdbc.url=jdbc:h2:mem:default;DB_CLOSE_DELAY=-1
-
-quarkus.hibernate-orm.dialect=org.hibernate.dialect.H2Dialect
-
-quarkus.hibernate-search-orm.elasticsearch.hosts=es1.mycompany.com:9200
-quarkus.hibernate-search-orm.elasticsearch.version=7
-quarkus.hibernate-search-orm.automatic-indexing.synchronization.strategy=write-sync
-----
-
-Using a map-based approach, it is also possible to configure named persistence units:
-
-[source,properties]
-----
-quarkus.datasource."users".db-kind=h2 <1>
-quarkus.datasource."users".jdbc.url=jdbc:h2:mem:users;DB_CLOSE_DELAY=-1
-
-quarkus.datasource."inventory".db-kind=h2 <2>
-quarkus.datasource."inventory".jdbc.url=jdbc:h2:mem:inventory;DB_CLOSE_DELAY=-1
-
-quarkus.hibernate-orm."users".datasource=users <3>
-quarkus.hibernate-orm."users".packages=org.acme.model.user
-
-quarkus.hibernate-orm."inventory".datasource=inventory <4>
-quarkus.hibernate-orm."inventory".packages=org.acme.model.inventory
-
-quarkus.hibernate-search-orm."users".elasticsearch.hosts=es1.mycompany.com:9200 <5>
-quarkus.hibernate-search-orm."users".elasticsearch.version=7
-quarkus.hibernate-search-orm."users".automatic-indexing.synchronization.strategy=write-sync
-
-quarkus.hibernate-search-orm."inventory".elasticsearch.hosts=es2.mycompany.com:9200 <6> -quarkus.hibernate-search-orm."inventory".elasticsearch.version=7 -quarkus.hibernate-search-orm."inventory".automatic-indexing.synchronization.strategy=write-sync ----- -<1> Define a datasource named `users`. -<2> Define a datasource named `inventory`. -<3> Define a persistence unit called `users` pointing to the `users` datasource. -<4> Define a persistence unit called `inventory` pointing to the `inventory` datasource. -<5> Configure Hibernate Search for the `users` persistence unit, -setting the Elasticsearch host for that persistence unit to `es1.mycompany.com:9200`. -<6> Configure Hibernate Search for the `inventory` persistence unit, -setting the Elasticsearch host for that persistence unit to `es2.mycompany.com:9200`. - -[[multiple-persistence-units-attaching-model-classes]] -=== Attaching model classes to persistence units - -For each persistence unit, Hibernate Search will only consider indexed entities that are attached to that persistence unit. -Entities are attached to a persistence unit by -xref:hibernate-orm.adoc#multiple-persistence-units-attaching-model-classes[configuring the Hibernate ORM extension]. - -[[multiple-persistence-units-attaching-cdi]] -== CDI integration - -You can inject Hibernate Search's main entry points, `SearchSession` and `SearchMapping`, using CDI: - -[source,java] ----- -@Inject -SearchSession searchSession; ----- - -This will inject the `SearchSession` of the default persistence unit. - -To inject the `SearchSession` of a named persistence unit (`users` in our example), -just add a qualifier: - -[source,java] ----- -@Inject -@PersistenceUnit("users") <1> -SearchSession searchSession; ----- -<1> This is the `@io.quarkus.hibernate.orm.PersistenceUnit` annotation. 
- -You can inject the `SearchMapping` of a named persistence unit using the exact same mechanism: - -[source,java] ----- -@Inject -@PersistenceUnit("users") -SearchMapping searchMapping; ----- - -== Building a native executable - -You can build a native executable with the usual command `./mvnw package -Pnative`. - -[NOTE] -==== -As usual with native executable compilation, this operation consumes a lot of memory. - -It might be safer to stop the two containers while you are building the native executable and start them again once you are done. -==== - -Running it is as simple as executing `./target/hibernate-search-orm-elasticsearch-quickstart-1.0.0-SNAPSHOT-runner`. - -You can then point your browser to `http://localhost:8080/` and use your application. - -[NOTE] -==== -The startup is a bit slower than usual: it is mostly due to us dropping and recreating the database schema and the Elasticsearch mapping every time at startup. -We also inject some data and execute the mass indexer. - -In a real life application, it is obviously something you won't do at startup. -==== - -[[offline-startup]] -== Offline startup - -By default, Hibernate Search sends a few requests to the Elasticsearch cluster on startup. -If the Elasticsearch cluster is not necessarily up and running when Hibernate Search starts, -this could cause a startup failure. - -To address this, you can configure Hibernate Search to not send any request on startup: - -* Disable Elasticsearch version checks on startup by setting the configuration property - link:#quarkus-hibernate-search-orm-elasticsearch_quarkus.hibernate-search-orm.elasticsearch.version-check.enabled[`quarkus.hibernate-search-orm.elasticsearch.version-check.enabled`] - to `false`. -* Disable schema management on startup by setting the configuration property - link:#quarkus-hibernate-search-orm-elasticsearch_quarkus.hibernate-search-orm.schema-management.strategy[`quarkus.hibernate-search-orm.schema-management.strategy`] - to `none`. 
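Put together, a minimal "offline startup" configuration consists of exactly the two properties listed above:

```properties
# Skip the Elasticsearch version check on startup
quarkus.hibernate-search-orm.elasticsearch.version-check.enabled=false
# Skip schema management on startup (the schema must then be created manually)
quarkus.hibernate-search-orm.schema-management.strategy=none
```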
- -Of course, even with this configuration, Hibernate Search still won't be able to index anything or run search queries -until the Elasticsearch cluster becomes accessible. - -[IMPORTANT] -==== -If you disable automatic schema creation by setting `quarkus.hibernate-search-orm.schema-management.strategy` to `none`, -you will have to create the schema manually at some point before your application starts persisting/updating entities -and executing search requests. - -See link:{hibernate-search-doc-prefix}#mapper-orm-schema-management-manager[this section of the reference documentation] -for more information. -==== - -[[coordination]] -== Coordination through outbox polling - -[CAUTION] -==== -Coordination through outbox polling is considered preview. - -In _preview_, backward compatibility and presence in the ecosystem is not guaranteed. -Specific improvements might require changing configuration or APIs, or even storage formats, -and plans to become _stable_ are under way. -Feedback is welcome on our https://groups.google.com/d/forum/quarkus-dev[mailing list] -or as issues in our https://github.com/quarkusio/quarkus/issues[GitHub issue tracker]. -==== - -While it’s technically possible to use Hibernate Search and Elasticsearch in distributed applications, -by default they suffer from -link:{hibernate-search-doc-prefix}#architecture-examples-no-coordination-elasticsearch-pros-and-cons[a few limitations]. - -These limitations are the result of Hibernate Search not coordinating between threads or application nodes by default. - -In order to get rid of these limitations, you can -link:{hibernate-search-doc-prefix}#architecture-examples-outbox-polling-elasticsearch[use the `outbox-polling` coordination strategy]. -This strategy creates an outbox table in the database to push entity change events to, -and relies on a background processor to consume these events and perform automatic indexing. 
- -To enable the `outbox-polling` coordination strategy, an additional extension is required: - -:add-extension-extensions: hibernate-search-orm-coordination-outbox-polling -include::includes/devtools/extension-add.adoc[] - -Once the extension is there, you will need to explicitly select the `outbox-polling` strategy -by setting link:#quarkus-hibernate-search-orm-elasticsearch_quarkus.hibernate-search-orm.coordination.strategy[`quarkus.hibernate-search-orm.coordination.strategy`] -to `outbox-polling`. - -Finally, you will need to make sure that the Hibernate ORM entities added by Hibernate Search -(to represent the outbox and agents) have corresponding tables/sequences in your database: - -* If you are just starting with your application -and intend to xref:hibernate-orm.adoc#dev-mode[let Hibernate ORM generate your database schema], -then no worries: the entities required by Hibernate Search will be included in the generated schema. -* Otherwise, you must -link:{hibernate-search-doc-prefix}#coordination-outbox-polling-schema[manually alter your schema to add the necessary tables/sequences]. - -Once you are done with the above, you're ready to use Hibernate Search with an outbox. -Don't change any code, and just start your application: -it will automatically detect when multiple applications are connected to the same database, -and coordinate the index updates accordingly. - -[NOTE] -==== -Hibernate Search mostly behaves the same when using the `outbox-polling` coordination strategy -as when not using it: application code (persisting entities, searching, etc.) should not require any change. - -However, there is one key difference: index updates are necessarily asynchronous; -they are guaranteed to happen _eventually_, but not immediately. 
-
-This means in particular that the configuration property
-link:#quarkus-hibernate-search-orm-elasticsearch_quarkus.hibernate-search-orm.automatic-indexing.synchronization.strategy[`quarkus.hibernate-search-orm.automatic-indexing.synchronization.strategy`]
-cannot be set when using the `outbox-polling` coordination strategy:
-Hibernate Search will always behave as if this property was set to `write-sync` (the default).
-
-This behavior is consistent with Elasticsearch's
-https://www.elastic.co/guide/en/elasticsearch/reference/current/near-real-time.html[near-real-time search]
-and the recommended way of using Hibernate Search even when coordination is disabled.
-====
-
-For more information about coordination in Hibernate Search,
-see link:{hibernate-search-doc-prefix}#coordination[this section of the reference documentation].
-
-For more information about configuration options related to coordination,
-see <<configuration-reference-coordination-outbox-polling>>.
-
-[[aws-request-signing]]
-== [[configuration-reference-aws]] AWS request signing
-
-If you need to use https://docs.aws.amazon.com/elasticsearch-service/[Amazon’s managed Elasticsearch service],
-you will find it requires a proprietary authentication method involving request signing.
-
-You can enable AWS request signing in Hibernate Search by adding a dedicated extension to your project and configuring it.
-
-See link:{hibernate-search-orm-elasticsearch-aws-guide}#aws-configuration-reference[the documentation for the Hibernate Search ORM + Elasticsearch AWS extension]
-for more information.
-
-== Further reading
-
-If you are interested in learning more about Hibernate Search 6,
-the Hibernate team publishes link:{hibernate-search-doc-prefix}[an extensive reference documentation].
-
-== FAQ
-
-=== Why Elasticsearch only?
-
-Hibernate Search supports both a Lucene backend and an Elasticsearch backend.
-
-In the context of Quarkus and to build microservices, we thought the latter would make more sense.
-Thus we focused our efforts on it.
-
-We don't have plans to support the Lucene backend in Quarkus for now.
-
-[[configuration-reference]]
-== Hibernate Search Configuration Reference
-
-[[configuration-reference-main]]
-=== Main Configuration
-
-include::{generated-dir}/config/quarkus-hibernate-search-orm-elasticsearch.adoc[leveloffset=+1, opts=optional]
-
-[NOTE]
-[[bean-reference-note-anchor]]
-.About bean references
-====
-When referencing beans using a string value in configuration properties, that string is parsed.
-
-Here are the most common formats:
-
-* `bean:` followed by the name of a `@Named` CDI bean.
-For example `bean:myBean`.
-* `class:` followed by the fully-qualified name of a class, to be instantiated through CDI if it's a CDI bean,
-or through its public, no-argument constructor otherwise.
-For example `class:com.mycompany.MyClass`.
-* An arbitrary string referencing a built-in implementation.
-Available values are detailed in the documentation of each configuration property,
-such as `async`/`read-sync`/`write-sync`/`sync` for
-the automatic indexing synchronization strategy (see <<configuration-reference-main>>).
-
-Other formats are also accepted, but are only useful for advanced use cases.
-See link:{hibernate-search-doc-prefix}#configuration-bean-reference-parsing[this section of Hibernate Search's reference documentation]
-for more information.
-====
-
-:no-duration-note: true
-
-[[configuration-reference-coordination-outbox-polling]]
-=== Configuration of coordination with outbox polling
-
-NOTE: These configuration properties require an additional extension. See <<coordination>>.
-
-include::{generated-dir}/config/quarkus-hibernate-search-orm-coordination-outboxpolling.adoc[leveloffset=+1, opts=optional]
diff --git a/_versions/2.7/guides/http-reference.adoc b/_versions/2.7/guides/http-reference.adoc
deleted file mode 100644
index fb34a335657..00000000000
--- a/_versions/2.7/guides/http-reference.adoc
+++ /dev/null
@@ -1,374 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= HTTP Reference
-
-include::./attributes.adoc[]
-:numbered:
-:sectnums:
-:sectnumlevels: 4
-:toc:
-
-
-This document explains various HTTP features that you can use in Quarkus.
-
-HTTP is provided using Eclipse Vert.x as the base HTTP layer. Servlets are supported using a modified version of Undertow that
-runs on top of Vert.x, and RESTEasy is used to provide JAX-RS support. If Undertow is present RESTEasy will run as a
-Servlet filter, otherwise it will run directly on top of Vert.x with no Servlet involvement.
-
-== Serving Static Resources
-
-To serve static resources you must place them in the `META-INF/resources` directory of your application. This location
-was chosen as it is the standard location for resources in `jar` files as defined by the Servlet spec. Even though
-Quarkus can be used without Servlet, following this convention allows existing code that places its resources in this
-location to function correctly.
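As a quick sanity check of this convention, any file dropped under that directory is served at the root of the application once it starts. A sketch (the file name `hello.txt` is just an example):

```shell
# Create a static resource; after starting the application it would be
# served at http://localhost:8080/hello.txt (the path follows the convention
# above, port 8080 is the Quarkus default)
mkdir -p src/main/resources/META-INF/resources
echo "hello" > src/main/resources/META-INF/resources/hello.txt
```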
-
-=== WebJar Locator Support
-
-If you are using webjars, like the following jQuery one:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>org.webjars</groupId>
-    <artifactId>jquery</artifactId>
-    <version>3.1.1</version>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("org.webjars:jquery:3.1.1")
-----
-
-and rather write `/webjars/jquery/jquery.min.js` instead of `/webjars/jquery/3.1.1/jquery.min.js`
-in your HTML files, you can add the `quarkus-webjars-locator` extension to your project.
-To use it, add the following to your project's dependencies:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-webjars-locator</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-webjars-locator")
-----
-
-== Configuring the Context path
-
-By default Quarkus will serve content from under the root context. If you want to change this you can use the
-`quarkus.http.root-path` config key to set the context path.
-
-If you are using Servlet you can control the Servlet context path via `quarkus.servlet.context-path`. This item is relative
-to the http root above, and will only affect Servlet and things that run on top of Servlet. Most applications will
-want to use the HTTP root as this affects everything that Quarkus serves.
-
-If both are specified then all non-Servlet web endpoints will be relative to `quarkus.http.root-path`, while Servlets
-will be served relative to `{quarkus.http.root-path}/{quarkus.servlet.context-path}`.
-
-If REST Assured is used for testing and `quarkus.http.root-path` is set then Quarkus will automatically configure the
-base URL for use in Quarkus tests, so test URLs should not include the root path.
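For instance, a hypothetical setup that moves all web content under `/api` and legacy Servlets one level further down could look like this (both values are examples, not defaults):

```properties
# All web content Quarkus serves moves under /api
quarkus.http.root-path=/api
# Servlets are then served under /api/legacy
quarkus.servlet.context-path=/legacy
```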
-
-[[ssl]]
-== Supporting secure connections with SSL
-
-In order to have Quarkus support secure connections, you must either provide a certificate and associated key file, or supply a keystore.
-
-In both cases, a password must be provided. See the designated paragraph for a detailed description of how to provide it.
-
-[TIP]
-====
-To enable SSL support with native executables, please refer to our xref:native-and-ssl.adoc[Using SSL With Native Executables guide].
-====
-
-=== Providing a certificate and key file
-
-If the certificate has not been loaded into a keystore, it can be provided directly using the properties listed below.
-Quarkus will first try to load the given files as resources, and uses the filesystem as a fallback.
-The certificate / key pair will be loaded into a newly created keystore on startup.
-
-Your `application.properties` would then look like this:
-
-[source,properties]
-----
-quarkus.http.ssl.certificate.file=/path/to/certificate
-quarkus.http.ssl.certificate.key-file=/path/to/key
-----
-
-=== Providing a keystore
-
-An alternative solution is to directly provide a keystore which already contains a default entry with a certificate.
-You will need to provide at least the file and a password.
-
-As with the certificate/key file combination, Quarkus will first try to resolve the given path as a resource, before attempting to read it from the filesystem.
-
-Add the following property to your `application.properties`:
-
-[source,properties]
-----
-quarkus.http.ssl.certificate.key-store-file=/path/to/keystore
-----
-
-As an optional hint, the type of keystore can be provided as one of the options listed.
-If the type is not provided, Quarkus will try to deduce it from the file extension, defaulting to type JKS.
[source,properties]
----
quarkus.http.ssl.certificate.key-store-file-type=[one of JKS, JCEKS, P12, PKCS12, PFX]
----

=== Setting the password

In both aforementioned scenarios, a password needs to be provided to create or load the keystore.
The password can be set in your `application.properties` (in plain text) using the following property:

[source,properties]
----
quarkus.http.ssl.certificate.key-store-password=your-password
----

However, instead of providing the password as plain text in the configuration file (which is considered bad practice), it can instead be supplied (using link:https://github.com/eclipse/microprofile-config[MicroProfile Config])
as the environment variable `QUARKUS_HTTP_SSL_CERTIFICATE_KEY_STORE_PASSWORD`.
This also works in tandem with link:https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables[Kubernetes secrets].

_Note: in order to remain compatible with earlier versions of Quarkus (before 0.16), the default password is set to "password". It is therefore not a mandatory parameter!_

=== Disable the HTTP port

It is possible to disable the HTTP port and only support secure requests. This is done via the
`quarkus.http.insecure-requests` property in `application.properties`. There are three possible
values:

`enabled`:: The default; HTTP works as normal.
`redirect`:: HTTP requests will be redirected to the HTTPS port.
`disabled`:: The HTTP port will not be opened.

NOTE: If you use `redirect` or `disabled` and have not added an SSL certificate or keystore, your server will not start!

== Additional HTTP Headers

To have HTTP headers sent on every response, add properties like the following:

[source,properties]
----
quarkus.http.header."X-Content-Type-Options".value=nosniff
----

This will include the `X-Content-Type-Options: nosniff` HTTP header on responses for requests performed on any resource in the application.
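Putting the keystore-related properties from this section together, a complete HTTPS setup might look like the following sketch (the path and the `redirect` choice are illustrative):

[source,properties]
----
quarkus.http.ssl.certificate.key-store-file=/path/to/keystore
quarkus.http.ssl.certificate.key-store-file-type=PKCS12
# prefer supplying this via the QUARKUS_HTTP_SSL_CERTIFICATE_KEY_STORE_PASSWORD
# environment variable instead of plain text
quarkus.http.ssl.certificate.key-store-password=your-password
# send plain HTTP requests to the HTTPS port instead of serving them
quarkus.http.insecure-requests=redirect
----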
You can also specify a `path` pattern and the HTTP `methods` to which the header should be applied:

[source,properties]
----
quarkus.http.header.Pragma.value=no-cache
quarkus.http.header.Pragma.path=/headers/pragma
quarkus.http.header.Pragma.methods=GET,HEAD
----

This will apply the `Pragma` header only when the `/headers/pragma` resource is called with a `GET` or a `HEAD` method.

include::{generated-dir}/config/quarkus-vertx-http-config-group-header-config.adoc[leveloffset=+1, opts=optional]

== HTTP/2 Support

HTTP/2 is enabled by default, and will be used by browsers if SSL is in use on JDK 11 or higher. JDK 8 does not support
ALPN, so it cannot be used to run HTTP/2 over SSL. Even if SSL is not in use, HTTP/2 via cleartext upgrade is supported,
and may be used by non-browser clients.

If you want to disable HTTP/2, you can set:

[source,properties]
----
quarkus.http.http2=false
----

== Listening on a Random Port

If you don't want to specify a port, you can set `quarkus.http.port=0` or `quarkus.http.test-port=0`. A random open port
will be picked by the OS, and a log message printed in the console. When the port is bound, the `quarkus.http.port` system
property is set to the actual port that was selected, so you can use this to get the actual port number from inside
the application. If you are in a test, you can inject the URL normally; it will be configured with the actual port,
and REST Assured will also be configured appropriately.

WARNING: As this sets a system property, you can access `quarkus.http.port` via MicroProfile Config; however, if you use
injection, the injected value may not always be correct. This port allocation is one of the last things to happen during
Quarkus startup, so if the object being injected is created eagerly before the port has been opened, the injected
value will not be correct.
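For instance, to let the OS pick free ports for both normal runs and tests, using the two keys described above:

[source,properties]
----
# 0 means: let the OS pick any free port; the chosen port is logged
# and exposed via the quarkus.http.port system property once bound
quarkus.http.port=0
quarkus.http.test-port=0
----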
- -== CORS filter - -link:https://en.wikipedia.org/wiki/Cross-origin_resource_sharing[Cross-origin resource sharing] (CORS) is a mechanism that -allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource -was served. - -Quarkus comes with a CORS filter which implements the `javax.servlet.Filter` interface and intercepts all incoming HTTP -requests. It can be enabled in the Quarkus configuration file, `src/main/resources/application.properties`: - -[source, properties] ----- -quarkus.http.cors=true ----- - -If the filter is enabled and an HTTP request is identified as cross-origin, the CORS policy and headers defined using the -following properties will be applied before passing the request on to its actual target (servlet, JAX-RS resource, etc.): - - -include::{generated-dir}/config/quarkus-vertx-http-config-group-cors-cors-config.adoc[leveloffset=+1, opts=optional] - -Here's what a full CORS filter configuration could look like, including a regular expression defining an allowed origin: - -[source, properties] ----- -quarkus.http.cors=true -quarkus.http.cors.origins=http://foo.com,http://www.bar.io,/https://([a-z0-9\\-_]+)\\.app\\.mydomain\\.com/ -quarkus.http.cors.methods=GET,PUT,POST -quarkus.http.cors.headers=X-Custom -quarkus.http.cors.exposed-headers=Content-Disposition -quarkus.http.cors.access-control-max-age=24H -quarkus.http.cors.access-control-allow-credentials=true ----- - -== HTTP Limits Configuration - -include::{generated-dir}/config/quarkus-vertx-http-config-group-server-limits-config.adoc[leveloffset=+1, opts=optional] - -== Configuring HTTP Access Logs - -You can add HTTP request logging by configuring it in `application.properties`. There are two options for logging, -either logging to the standard JBoss logging output, or logging to a dedicated file. 
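As a sketch, enabling the access log and sending it to a dedicated file could look like this (property names should be verified against the generated config reference included below):

[source,properties]
----
# log each request, using the common log format
quarkus.http.access-log.enabled=true
quarkus.http.access-log.pattern=common
# write to a dedicated file instead of the standard log output
quarkus.http.access-log.log-to-file=true
----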
include::{generated-dir}/config/quarkus-vertx-http-config-group-access-log-config.adoc[opts=optional, leveloffset=+1]

[frame="topbot",options="header"]
|===
|Attribute |Short Form|Long Form
|Remote IP address | `%a` | `%{REMOTE_IP}`
|Local IP address | `%A` | `%{LOCAL_IP}`
|Bytes sent, excluding HTTP headers, or '-' if no bytes were sent | `%b` |
|Bytes sent, excluding HTTP headers | `%B` | `%{BYTES_SENT}`
|Remote host name | `%h` | `%{REMOTE_HOST}`
|Request protocol | `%H` | `%{PROTOCOL}`
|Request method | `%m` | `%{METHOD}`
|Local port | `%p` | `%{LOCAL_PORT}`
|Query string (prepended with a '?' if it exists, otherwise an empty string) | `%q` | `%{QUERY_STRING}`
|First line of the request | `%r` | `%{REQUEST_LINE}`
|HTTP status code of the response | `%s` | `%{RESPONSE_CODE}`
|Date and time, in Common Log Format | `%t` | `%{DATE_TIME}`
|Remote user that was authenticated | `%u` | `%{REMOTE_USER}`
|Requested URL path | `%U` | `%{REQUEST_URL}`
|Request relative path | `%R` | `%{REQUEST_PATH}`
|Local server name | `%v` | `%{LOCAL_SERVER_NAME}`
|Time taken to process the request, in milliseconds | `%D` | `%{RESPONSE_TIME}`
|Time taken to process the request, in seconds | `%T` |
|Time taken to process the request, in microseconds | | `%{RESPONSE_TIME_MICROS}`
|Time taken to process the request, in nanoseconds | | `%{RESPONSE_TIME_NANOS}`
|Current request thread name | `%I` | `%{THREAD_NAME}`
|SSL cipher | | `%{SSL_CIPHER}`
|SSL client certificate | | `%{SSL_CLIENT_CERT}`
|SSL session id | | `%{SSL_SESSION_ID}`
|All request headers | | `%{ALL_REQUEST_HEADERS}`
|Cookie value | | `%{c,cookie_name}`
|Query parameter | | `%{q,query_param_name}`
|Request header | | `%{i,request_header_name}`
|Response header | | `%{o,response_header_name}`
|===

[[reverse-proxy]]
== Running behind a reverse proxy

Quarkus can be accessed through proxies that additionally generate headers (e.g.
`X-Forwarded-Host`) to preserve
information from the client-facing side of the proxy servers that would otherwise be altered or lost.
In those scenarios, Quarkus can be configured to automatically update information such as the protocol, host, port and URI
to reflect the values in these headers.

IMPORTANT: Activating this feature leaves the server exposed to several security issues (i.e. information spoofing).
Consider activating it only when running behind a reverse proxy.

To set up this feature, include the following lines in `src/main/resources/application.properties`:
[source,properties]
----
quarkus.http.proxy-address-forwarding=true
----

To consider only the de-facto standard header (the `Forwarded` header), include the following lines in `src/main/resources/application.properties`:
[source,properties]
----
quarkus.http.proxy.allow-forwarded=true
----

To consider only non-standard headers, include the following lines instead in `src/main/resources/application.properties`:

[source,properties]
----
quarkus.http.proxy.proxy-address-forwarding=true
quarkus.http.proxy.enable-forwarded-host=true
quarkus.http.proxy.enable-forwarded-prefix=true
----

Both configurations related to standard and non-standard headers can be combined, although the standard headers configuration will have precedence.
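Combining the snippets above, a configuration that honors both the standard `Forwarded` header and the non-standard `X-Forwarded-*` variants (with the standard header taking precedence, as noted) would be:

[source,properties]
----
quarkus.http.proxy.proxy-address-forwarding=true
# trust the de-facto standard Forwarded header (takes precedence)
quarkus.http.proxy.allow-forwarded=true
# also honor the non-standard X-Forwarded-Host / X-Forwarded-Prefix headers
quarkus.http.proxy.enable-forwarded-host=true
quarkus.http.proxy.enable-forwarded-prefix=true
----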
Supported forwarding address headers are:

* `Forwarded`
* `X-Forwarded-Proto`
* `X-Forwarded-Host`
* `X-Forwarded-Port`
* `X-Forwarded-Ssl`
* `X-Forwarded-Prefix`

[[same-site-cookie]]
== SameSite cookies

You can easily add a https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite[SameSite] cookie property to any of the cookies set by a Quarkus endpoint by listing a cookie name and a `SameSite` attribute, for example:

[source,properties]
----
quarkus.http.same-site-cookie.jwt.value=Lax
quarkus.http.same-site-cookie.session.value=Strict
----

Given this configuration, the `jwt` cookie will have a `SameSite=Lax` attribute and the `session` cookie will have a `SameSite=Strict` attribute.

== Servlet Config

To use Servlet you need to explicitly include `quarkus-undertow`:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-undertow</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-undertow")
----

=== undertow-handlers.conf

You can make use of the Undertow predicate language using an `undertow-handlers.conf` file. This file should be placed
in the `META-INF` directory of your application jar. It contains handlers defined using the
link:http://undertow.io/undertow-docs/undertow-docs-2.0.0/index.html#predicates-attributes-and-handlers[Undertow predicate language].

=== web.xml

If you are using a `web.xml` file as your configuration file, you can place it in the `src/main/resources/META-INF` directory.
diff --git a/_versions/2.7/guides/ide-tooling.adoc b/_versions/2.7/guides/ide-tooling.adoc deleted file mode 100644 index 1cfd441f15c..00000000000 --- a/_versions/2.7/guides/ide-tooling.adoc +++ /dev/null @@ -1,161 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Quarkus Tools in your favorite IDE

include::./attributes.adoc[]

The following IDEs have support for the community-developed Quarkus Tools:

* https://marketplace.visualstudio.com/items?itemName=redhat.vscode-quarkus[Quarkus Tools for Visual Studio Code]
* https://marketplace.eclipse.org/content/quarkus-tools[Quarkus Tools for Eclipse]
* https://plugins.jetbrains.com/plugin/13234-quarkus/versions[Quarkus Tools for IntelliJ]
* https://github.com/eclipse/che-devfile-registry/blob/main/devfiles/quarkus/devfile.yaml[Quarkus Tools for Eclipse Che]

In addition, IntelliJ offers built-in Quarkus support in its closed-source Ultimate edition:

* https://www.jetbrains.com/help/idea/quarkus.html[IntelliJ Ultimate Edition built-in Quarkus support]

The table below gives an overview of the current IDEs, with links and a high-level summary of their features.

:vscode-logo: https://simpleicons.org/icons/visualstudiocode.svg
:eclipse-logo: https://simpleicons.org/icons/eclipseide.svg
:intellij-logo: https://simpleicons.org/icons/intellijidea.svg
:che-logo: https://simpleicons.org/icons/eclipseche.svg
[cols="6*^", header]
|===
| .
-| image:{vscode-logo}[VSCode,100,100] -{empty} + -VSCode Quarkus Tools -| image:{eclipse-logo}[Eclipse,100,100] -{empty} + -Eclipse Quarkus Tools -| image:{intellij-logo}[IntelliJ,100,100] -{empty} + -IntelliJ Quarkus Tools -| image:{intellij-logo}[IntelliJ,100,100] -{empty} + -IntelliJ Ultimate -| image:{che-logo}[Eclipse Che,100,100] -{empty} + -Eclipse Che - -|Description -|Visual Studio Code extension to install using the marketplace -|Eclipse plugin to install into Eclipse using an updatesite -|IntelliJ plugin that works in IntelliJ Community and Ultimate. Available from Marketplace. -|Built-in Quarkus features available only in IntelliJ Ultimate -|Built-in Quarkus features available in Eclipse Che incl. che.openshift.io. - -|Status -|Stable -|Stable -|Stable -|Stable -|Stable - -|Downloads -| https://marketplace.visualstudio.com/items?itemName=redhat.vscode-quarkus[Marketplace] -{empty} + - https://download.jboss.org/jbosstools/vscode/snapshots/vscode-quarkus/?C=M;O=D[Development Builds] -| https://download.jboss.org/jbosstools/photon/snapshots/builds/jbosstools-quarkus_master/[Development Update Site] -| https://plugins.jetbrains.com/plugin/13234-quarkus/versions[Marketplace] -{empty} + -https://download.jboss.org/jbosstools/intellij/snapshots/intellij-quarkus/[Development Builds] -| https://www.jetbrains.com/idea/nextversion/[Installer] -| https://che.openshift.io/f?url=https://raw.githubusercontent.com/redhat-developer/devfile/master/getting-started/quarkus/devfile.yaml[Start Che Workspace] - -|Source -|https://github.com/redhat-developer/vscode-quarkus[GitHub] -|https://github.com/jbosstools/jbosstools-quarkus[GitHub] -|https://github.com/redhat-developer/intellij-quarkus[GitHub] -|Closed-Source -| - -|https://github.com/redhat-developer/quarkus-ls[Quarkus Language Server] -|icon:check[] -|icon:check[] -|icon:check[] -|icon:times[] -|icon:check[] - -|Wizards w/code.quarkus.io -|icon:check[] -|icon:check[] 
-|https://issues.jboss.org/browse/JBIDE-26950[icon:times[]] -|icon:check[] -|icon:check[] - -|Custom Wizard -|icon:times[] -|icon:times[] -|icon:check[] -|icon:check[] -|icon:times[] - -|Config editor -|icon:check[] -|icon:check[] -|icon:check[] -|icon:times[] -|icon:check[] - -|Config autocompletion -|icon:check[] -|icon:check[] -|icon:check[] -|icon:check[] -|icon:check[] - -|Config validation -|icon:check[] -|icon:check[] -|icon:check[] -|icon:check[] -|icon:check[] - -|Config jump to definition -|icon:check[] -|icon:check[] -|icon:check[] -|? -|icon:check[] - -|Config profiles -|icon:check[] -|icon:check[] -|icon:check[] -|icon:check[] -|icon:check[] - -|Config outline -|icon:check[] -|icon:check[] -|icon:check[] -|icon:times[] -|icon:check[] - -|Easy Launch debug/dev:mode -|icon:check[] -|icon:check[] -|icon:times[] -|icon:check[] -|icon:check[] - -|Quarkus Code Snippets -|icon:check[] -|icon:check[] -|icon:check[] -|icon:times[] -|icon:check[] - -|Injection Discovery/Navigation -|icon:times[] -|icon:times[] -|icon:times[] -|icon:check[] -|icon:times[] -|=== diff --git a/_versions/2.7/guides/images/amqp-guide-architecture.png b/_versions/2.7/guides/images/amqp-guide-architecture.png deleted file mode 100644 index ea6682da1b6..00000000000 Binary files a/_versions/2.7/guides/images/amqp-guide-architecture.png and /dev/null differ diff --git a/_versions/2.7/guides/images/amqp-qs-app-screenshot.png b/_versions/2.7/guides/images/amqp-qs-app-screenshot.png deleted file mode 100644 index 864b952a782..00000000000 Binary files a/_versions/2.7/guides/images/amqp-qs-app-screenshot.png and /dev/null differ diff --git a/_versions/2.7/guides/images/amqp-qs-architecture.png b/_versions/2.7/guides/images/amqp-qs-architecture.png deleted file mode 100644 index a6bad5bfbe5..00000000000 Binary files a/_versions/2.7/guides/images/amqp-qs-architecture.png and /dev/null differ diff --git a/_versions/2.7/guides/images/architecture-phases.png 
b/_versions/2.7/guides/images/architecture-phases.png deleted file mode 100644 index 14d75477c25..00000000000 Binary files a/_versions/2.7/guides/images/architecture-phases.png and /dev/null differ diff --git a/_versions/2.7/guides/images/blocking-threads.png b/_versions/2.7/guides/images/blocking-threads.png deleted file mode 100644 index 474f3648898..00000000000 Binary files a/_versions/2.7/guides/images/blocking-threads.png and /dev/null differ diff --git a/_versions/2.7/guides/images/build-time-principle.png b/_versions/2.7/guides/images/build-time-principle.png deleted file mode 100644 index f590691f941..00000000000 Binary files a/_versions/2.7/guides/images/build-time-principle.png and /dev/null differ diff --git a/_versions/2.7/guides/images/config-sources.png b/_versions/2.7/guides/images/config-sources.png deleted file mode 100644 index d69e91eba05..00000000000 Binary files a/_versions/2.7/guides/images/config-sources.png and /dev/null differ diff --git a/_versions/2.7/guides/images/containerization-process.png b/_versions/2.7/guides/images/containerization-process.png deleted file mode 100644 index 72ae0276dc7..00000000000 Binary files a/_versions/2.7/guides/images/containerization-process.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-embedded-file.png b/_versions/2.7/guides/images/dev-ui-embedded-file.png deleted file mode 100644 index 3a9db71c6c9..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-embedded-file.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-embedded.png b/_versions/2.7/guides/images/dev-ui-embedded.png deleted file mode 100644 index f73049c7d54..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-embedded.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-interactive.png b/_versions/2.7/guides/images/dev-ui-interactive.png deleted file mode 100644 index d057c25975a..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-interactive.png 
and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-admin.png b/_versions/2.7/guides/images/dev-ui-keycloak-admin.png deleted file mode 100644 index 4df4dc195cd..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-admin.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-client-credentials-grant.png b/_versions/2.7/guides/images/dev-ui-keycloak-client-credentials-grant.png deleted file mode 100644 index 2995690b656..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-client-credentials-grant.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-decoded-tokens.png b/_versions/2.7/guides/images/dev-ui-keycloak-decoded-tokens.png deleted file mode 100644 index 0e6f8301cd6..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-decoded-tokens.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-image.png b/_versions/2.7/guides/images/dev-ui-keycloak-image.png deleted file mode 100644 index ba31d9bbe12..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-image.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-login-error.png b/_versions/2.7/guides/images/dev-ui-keycloak-login-error.png deleted file mode 100644 index 6ce8fef088d..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-login-error.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-logout.png b/_versions/2.7/guides/images/dev-ui-keycloak-logout.png deleted file mode 100644 index c7bfe999b2b..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-logout.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-password-grant.png b/_versions/2.7/guides/images/dev-ui-keycloak-password-grant.png deleted file mode 100644 index 6a811b158c5..00000000000 Binary files 
a/_versions/2.7/guides/images/dev-ui-keycloak-password-grant.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-sign-in-to-service.png b/_versions/2.7/guides/images/dev-ui-keycloak-sign-in-to-service.png deleted file mode 100644 index 0b441489c0a..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-sign-in-to-service.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-sign-in-to-spa.png b/_versions/2.7/guides/images/dev-ui-keycloak-sign-in-to-spa.png deleted file mode 100644 index 1e31b0c1c8e..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-sign-in-to-spa.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-test-service-from-spa.png b/_versions/2.7/guides/images/dev-ui-keycloak-test-service-from-spa.png deleted file mode 100644 index bd79330eda5..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-test-service-from-spa.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-keycloak-test-service-swaggerui-graphql.png b/_versions/2.7/guides/images/dev-ui-keycloak-test-service-swaggerui-graphql.png deleted file mode 100644 index f3f22ccd9a9..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-keycloak-test-service-swaggerui-graphql.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-kogito-data-index-card.png b/_versions/2.7/guides/images/dev-ui-kogito-data-index-card.png deleted file mode 100644 index c2e4711db25..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-kogito-data-index-card.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-kogito-data-index.png b/_versions/2.7/guides/images/dev-ui-kogito-data-index.png deleted file mode 100644 index 6c744dfdc64..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-kogito-data-index.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-message.png 
b/_versions/2.7/guides/images/dev-ui-message.png deleted file mode 100644 index fe77b0f884d..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-message.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-oidc-card.png b/_versions/2.7/guides/images/dev-ui-oidc-card.png deleted file mode 100644 index b972a88330a..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-oidc-card.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-oidc-devconsole-card.png b/_versions/2.7/guides/images/dev-ui-oidc-devconsole-card.png deleted file mode 100644 index 564bb911f44..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-oidc-devconsole-card.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-oidc-keycloak-card.png b/_versions/2.7/guides/images/dev-ui-oidc-keycloak-card.png deleted file mode 100644 index f88c3bbf8b0..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-oidc-keycloak-card.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-overview.png b/_versions/2.7/guides/images/dev-ui-overview.png deleted file mode 100644 index de9b428e504..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-overview.png and /dev/null differ diff --git a/_versions/2.7/guides/images/dev-ui-page.png b/_versions/2.7/guides/images/dev-ui-page.png deleted file mode 100644 index 59a2ce0254a..00000000000 Binary files a/_versions/2.7/guides/images/dev-ui-page.png and /dev/null differ diff --git a/_versions/2.7/guides/images/getting-started-architecture.png b/_versions/2.7/guides/images/getting-started-architecture.png deleted file mode 100644 index 5ea746a1872..00000000000 Binary files a/_versions/2.7/guides/images/getting-started-architecture.png and /dev/null differ diff --git a/_versions/2.7/guides/images/graphql-ui-screenshot01.png b/_versions/2.7/guides/images/graphql-ui-screenshot01.png deleted file mode 100644 index 33fb24a1a11..00000000000 Binary files 
a/_versions/2.7/guides/images/graphql-ui-screenshot01.png and /dev/null differ diff --git a/_versions/2.7/guides/images/health-ui-screenshot01.png b/_versions/2.7/guides/images/health-ui-screenshot01.png deleted file mode 100644 index 782c66f47ba..00000000000 Binary files a/_versions/2.7/guides/images/health-ui-screenshot01.png and /dev/null differ diff --git a/_versions/2.7/guides/images/http-architecture.png b/_versions/2.7/guides/images/http-architecture.png deleted file mode 100644 index ed31f9b3076..00000000000 Binary files a/_versions/2.7/guides/images/http-architecture.png and /dev/null differ diff --git a/_versions/2.7/guides/images/http-blocking-sequence.png b/_versions/2.7/guides/images/http-blocking-sequence.png deleted file mode 100644 index 9556452764a..00000000000 Binary files a/_versions/2.7/guides/images/http-blocking-sequence.png and /dev/null differ diff --git a/_versions/2.7/guides/images/http-reactive-sequence.png b/_versions/2.7/guides/images/http-reactive-sequence.png deleted file mode 100644 index a2dfd5820d2..00000000000 Binary files a/_versions/2.7/guides/images/http-reactive-sequence.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kafka-guide-architecture.png b/_versions/2.7/guides/images/kafka-guide-architecture.png deleted file mode 100644 index 00306650f97..00000000000 Binary files a/_versions/2.7/guides/images/kafka-guide-architecture.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kafka-one-app-one-consumer.png b/_versions/2.7/guides/images/kafka-one-app-one-consumer.png deleted file mode 100644 index 345b78f74f4..00000000000 Binary files a/_versions/2.7/guides/images/kafka-one-app-one-consumer.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kafka-one-app-two-consumers.png b/_versions/2.7/guides/images/kafka-one-app-two-consumers.png deleted file mode 100644 index a25aab0eda5..00000000000 Binary files a/_versions/2.7/guides/images/kafka-one-app-two-consumers.png and /dev/null 
differ diff --git a/_versions/2.7/guides/images/kafka-qs-app-screenshot.png b/_versions/2.7/guides/images/kafka-qs-app-screenshot.png deleted file mode 100644 index c6e62cbe683..00000000000 Binary files a/_versions/2.7/guides/images/kafka-qs-app-screenshot.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kafka-qs-architecture.png b/_versions/2.7/guides/images/kafka-qs-architecture.png deleted file mode 100644 index 3209c223de2..00000000000 Binary files a/_versions/2.7/guides/images/kafka-qs-architecture.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kafka-streams-guide-architecture-distributed.png b/_versions/2.7/guides/images/kafka-streams-guide-architecture-distributed.png deleted file mode 100644 index e00f5c63653..00000000000 Binary files a/_versions/2.7/guides/images/kafka-streams-guide-architecture-distributed.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kafka-streams-guide-architecture.png b/_versions/2.7/guides/images/kafka-streams-guide-architecture.png deleted file mode 100644 index 7377f80576e..00000000000 Binary files a/_versions/2.7/guides/images/kafka-streams-guide-architecture.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kafka-two-app-one-consumer-group.png b/_versions/2.7/guides/images/kafka-two-app-one-consumer-group.png deleted file mode 100644 index 8962c698960..00000000000 Binary files a/_versions/2.7/guides/images/kafka-two-app-one-consumer-group.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kafka-two-app-two-consumer-groups.png b/_versions/2.7/guides/images/kafka-two-app-two-consumer-groups.png deleted file mode 100644 index aacd89f349a..00000000000 Binary files a/_versions/2.7/guides/images/kafka-two-app-two-consumer-groups.png and /dev/null differ diff --git a/_versions/2.7/guides/images/keycloak-authorization-permissions.png b/_versions/2.7/guides/images/keycloak-authorization-permissions.png deleted file mode 100644 index 
3f320642088..00000000000 Binary files a/_versions/2.7/guides/images/keycloak-authorization-permissions.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-DRG.png b/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-DRG.png deleted file mode 100644 index 79c5e101d08..00000000000 Binary files a/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-DRG.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-DT.png b/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-DT.png deleted file mode 100644 index 39e17ad7277..00000000000 Binary files a/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-DT.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-scesim.png b/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-scesim.png deleted file mode 100644 index 09c0be64d75..00000000000 Binary files a/_versions/2.7/guides/images/kogito-DMN-guide-screenshot-scesim.png and /dev/null differ diff --git a/_versions/2.7/guides/images/kogito-guide-screenshot.png b/_versions/2.7/guides/images/kogito-guide-screenshot.png deleted file mode 100644 index 6bb8487fe16..00000000000 Binary files a/_versions/2.7/guides/images/kogito-guide-screenshot.png and /dev/null differ diff --git a/_versions/2.7/guides/images/native-executable-process.png b/_versions/2.7/guides/images/native-executable-process.png deleted file mode 100644 index 1e007c516ff..00000000000 Binary files a/_versions/2.7/guides/images/native-executable-process.png and /dev/null differ diff --git a/_versions/2.7/guides/images/native-reference-multi-flamegraph-joined-threads.svg b/_versions/2.7/guides/images/native-reference-multi-flamegraph-joined-threads.svg deleted file mode 100644 index 3a810e745b6..00000000000 --- a/_versions/2.7/guides/images/native-reference-multi-flamegraph-joined-threads.svg +++ /dev/null @@ -1,7176 +0,0 @@ - - - - - - - - - - - - - - -Flame Graph - -Reset Zoom 
- - -poll_idle (58 samples, 0.20%) - - - -update_curr (7 samples, 0.02%) - - - -Thread_start0_1ac299bac29d78e193ed792d1de667f50cd6b267 (4 samples, 0.01%) - - - -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (12 samples, 0.04%) - - - -psi_group_change (4 samples, 0.01%) - - - -dev_hard_start_xmit (46 samples, 0.16%) - - - -flush_smp_call_function_from_idle (4 samples, 0.01%) - - - -_start (12 samples, 0.04%) - - - -psi_task_change (3 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (5 samples, 0.02%) - - - -check_preempt_curr (4 samples, 0.01%) - - - -avc_lookup (12 samples, 0.04%) - - - -skb_clone_tx_timestamp (5 samples, 0.02%) - - - -ip_finish_output (4 samples, 0.01%) - - - -ip_finish_output2 (3 samples, 0.01%) - - - -update_load_avg (63 samples, 0.22%) - - - -udp_err (7 samples, 0.02%) - - - -acpi_processor_ffh_cstate_enter (6 samples, 0.02%) - - - -__wrgsbase_inactive (9 samples, 0.03%) - - - -ctx_sched_in (9 samples, 0.03%) - - - -_copy_from_user (11 samples, 0.04%) - - - -do_idle (2,908 samples, 9.95%) -do_idle - - -acpi_processor_ffh_cstate_enter (24 samples, 0.08%) - - - -nohz_run_idle_balance (5 samples, 0.02%) - - - -mark_wake_futex (4 samples, 0.01%) - - - -__condvar_dec_grefs (264 samples, 0.90%) - - - -Unsafe_park_8e78b5f196bf0524b1490ff9abb68dc337e02cca (2,721 samples, 9.31%) -Unsafe_park_8.. 
- - -ip_finish_output2 (120 samples, 0.41%) - - - -cpuacct_charge (6 samples, 0.02%) - - - -newidle_balance (3 samples, 0.01%) - - - -all (29,238 samples, 100%) - - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (5 samples, 0.02%) - - - -__x86_indirect_thunk_rax (4 samples, 0.01%) - - - -__x86_indirect_thunk_rax (3 samples, 0.01%) - - - -psi_group_change (42 samples, 0.14%) - - - -kmem_cache_alloc_node (3 samples, 0.01%) - - - -poll_idle (316 samples, 1.08%) - - - -raw_spin_rq_lock_nested (3 samples, 0.01%) - - - -psi_task_switch (7 samples, 0.02%) - - - -raw_local_deliver (7 samples, 0.02%) - - - -raw_spin_rq_unlock (11 samples, 0.04%) - - - -update_load_avg (146 samples, 0.50%) - - - -__get_user_nocheck_4 (12 samples, 0.04%) - - - -psi_group_change (5 samples, 0.02%) - - - -selinux_ip_postroute (3 samples, 0.01%) - - - -__ip_append_data (133 samples, 0.45%) - - - -do_softirq (710 samples, 2.43%) -do.. - - -__update_load_avg_se (3 samples, 0.01%) - - - -__irq_exit_rcu (4 samples, 0.01%) - - - -_raw_spin_lock_irqsave (11 samples, 0.04%) - - - -__libc_sendto (13 samples, 0.04%) - - - -__skb_checksum_complete (4 samples, 0.01%) - - - -__wrgsbase_inactive (11 samples, 0.04%) - - - -__switch_to_asm (3 samples, 0.01%) - - - -__switch_to_asm (3 samples, 0.01%) - - - -psi_group_change (4 samples, 0.01%) - - - -psi_task_change (5 samples, 0.02%) - - - -wake_up_q (3 samples, 0.01%) - - - -__softirqentry_text_start (6 samples, 0.02%) - - - -ip_local_deliver_finish (11 samples, 0.04%) - - - -ip_protocol_deliver_rcu (3 samples, 0.01%) - - - -kmalloc_slab (18 samples, 0.06%) - - - -__get_user_nocheck_4 (3 samples, 0.01%) - - - -sock_sendmsg (9 samples, 0.03%) - - - -__pthread_create_2_1 (4 samples, 0.01%) - - - -acpi_idle_do_entry (27 samples, 0.09%) - - - -__get_user_nocheck_4 (5 samples, 0.02%) - - - -try_to_wake_up (10 samples, 0.03%) - - - -native_sched_clock (3 samples, 0.01%) - - - -finish_task_switch.isra.0 (3 samples, 0.01%) - - - 
-DeploymentManager_deployVerticle_8f348e5595b996709d456a4d080275d9279aea32 (4 samples, 0.01%) - - - -menu_select (17 samples, 0.06%) - - - -skb_release_head_state (5 samples, 0.02%) - - - -do_futex (17 samples, 0.06%) - - - -syscall_exit_to_user_mode_prepare (6 samples, 0.02%) - - - -___pthread_cond_broadcast (429 samples, 1.47%) - - - -psi_group_change (22 samples, 0.08%) - - - -ApplicationLifecycleManager_run_dbf144db2a98237beac0f2d82fb961c3bd6ed251 (11 samples, 0.04%) - - - -syscall_return_via_sysret (4 samples, 0.01%) - - - -native_sched_clock (7 samples, 0.02%) - - - -___pthread_mutex_lock (8 samples, 0.03%) - - - -update_load_avg (21 samples, 0.07%) - - - -kmem_cache_free (3 samples, 0.01%) - - - -cpuacct_charge (3 samples, 0.01%) - - - -update_load_avg (5 samples, 0.02%) - - - -__schedule (9 samples, 0.03%) - - - -__xfrm_decode_session (6 samples, 0.02%) - - - -select_task_rq_fair (3 samples, 0.01%) - - - -ip_setup_cork (3 samples, 0.01%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (3 samples, 0.01%) - - - -psi_group_change (24 samples, 0.08%) - - - -ReentrantLock_unlock_86cdca028e9dd52644b7822ba738ec004cf0c360 (11 samples, 0.04%) - - - -__udp4_lib_rcv (3 samples, 0.01%) - - - -tick_sched_handle (3 samples, 0.01%) - - - -__skb_checksum (4 samples, 0.01%) - - - -dev_hard_start_xmit (8 samples, 0.03%) - - - -VertxImpl_constructor_775d041b08f67497d294acf44ec22a5d77cc1fc8 (5 samples, 0.02%) - - - -udp_rcv (4 samples, 0.01%) - - - -__common_interrupt (7 samples, 0.02%) - - - -skb_release_head_state (4 samples, 0.01%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (12,437 samples, 42.54%) -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f83.. 
- - -acpi_processor_ffh_cstate_enter (69 samples, 0.24%) - - - -__libc_start_call_main (12 samples, 0.04%) - - - -selinux_ip_postroute_compat (6 samples, 0.02%) - - - -icmp_rcv (6 samples, 0.02%) - - - -sysvec_apic_timer_interrupt (122 samples, 0.42%) - - - -enqueue_task (17 samples, 0.06%) - - - -selinux_xfrm_postroute_last (5 samples, 0.02%) - - - -cgroup_rstat_updated (4 samples, 0.01%) - - - -ip_setup_cork (5 samples, 0.02%) - - - -select_task_rq_fair (7 samples, 0.02%) - - - -native_sched_clock (4 samples, 0.01%) - - - -ip_rcv (29 samples, 0.10%) - - - -_copy_from_user (34 samples, 0.12%) - - - -__ip_local_out (26 samples, 0.09%) - - - -cpuidle_enter_state (1,009 samples, 3.45%) -cpu.. - - -__x86_indirect_thunk_rax (22 samples, 0.08%) - - - -__x64_sys_sched_setaffinity (11 samples, 0.04%) - - - -__ip_make_skb (45 samples, 0.15%) - - - -dequeue_entity (10 samples, 0.03%) - - - -ktime_get (6 samples, 0.02%) - - - -acpi_processor_ffh_cstate_enter (8 samples, 0.03%) - - - -netif_skb_features (24 samples, 0.08%) - - - -ip_local_deliver (3 samples, 0.01%) - - - -pick_next_task_fair (7 samples, 0.02%) - - - -__check_object_size (9 samples, 0.03%) - - - -__netif_receive_skb_one_core (7 samples, 0.02%) - - - -native_sched_clock (11 samples, 0.04%) - - - -__clone3 (4 samples, 0.01%) - - - -hrtimer_wakeup (12 samples, 0.04%) - - - -ktime_get (11 samples, 0.04%) - - - -do_csum (3 samples, 0.01%) - - - -PosixVirtualMemoryProvider_reserve_b6c76ffcfaac89204e3ddd5f1a5cd110a1860862 (5 samples, 0.02%) - - - -__update_load_avg_cfs_rq (3 samples, 0.01%) - - - -psi_group_change (282 samples, 0.96%) - - - -update_rq_clock (5 samples, 0.02%) - - - -selinux_ip_postroute_compat (5 samples, 0.02%) - - - -psi_task_change (3 samples, 0.01%) - - - -kfree (5 samples, 0.02%) - - - -__ip_select_ident (30 samples, 0.10%) - - - -try_to_wake_up (8 samples, 0.03%) - - - -PosixParkEvent_condWait_48f9d4da7d07c2044e85cec5495ae177057e5073 (2,645 samples, 9.05%) -PosixParkEven.. 
- - -AbstractQueuedSynchronizer_shouldParkAfterFailedAcquire_afac5da03eda0b8f7c056a512c05f34b22f4a8c2 (9 samples, 0.03%) - - - -skb_csum_hwoffload_help (4 samples, 0.01%) - - - -ReentrantLock$Sync_nonfairTryAcquire_0a9290a8427787ed8158d141c47a3ec430d345c2 (4 samples, 0.01%) - - - -kmem_cache_free (12 samples, 0.04%) - - - -do_idle (11 samples, 0.04%) - - - -__icmp_send (104 samples, 0.36%) - - - -skb_release_head_state (3 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (5 samples, 0.02%) - - - -hrtimer_get_next_event (6 samples, 0.02%) - - - -flush_smp_call_function_from_idle (4 samples, 0.01%) - - - -__hrtimer_run_queues (35 samples, 0.12%) - - - -enqueue_to_backlog (3 samples, 0.01%) - - - -futex_wait (4 samples, 0.01%) - - - -__update_load_avg_se (4 samples, 0.01%) - - - -syscall_exit_to_user_mode_prepare (6 samples, 0.02%) - - - -update_min_vruntime (10 samples, 0.03%) - - - -__udp4_lib_err (6 samples, 0.02%) - - - -__GI___pthread_disable_asynccancel (13 samples, 0.04%) - - - -Application_start_9a0b63742d6e66c1b5dc0121670fdf46106d2d88 (11 samples, 0.04%) - - - -do_syscall_64 (5 samples, 0.02%) - - - -ReentrantLock$NonfairSync_tryAcquire_0c5b5d7ba39229cb63cc2549c6b5028de3f821c1 (5 samples, 0.02%) - - - -fib_table_lookup (5 samples, 0.02%) - - - -native_sched_clock (3 samples, 0.01%) - - - -SingleThreadEventExecutor_execute_b9fc33f6cf952ec696d6a219f6499740711801a6 (4 samples, 0.01%) - - - -csum_partial_copy_nocheck (3 samples, 0.01%) - - - -copy_user_generic_string (9 samples, 0.03%) - - - -merge_sched_in (8 samples, 0.03%) - - - -__ip_append_data (66 samples, 0.23%) - - - -acpi_processor_ffh_cstate_enter (12 samples, 0.04%) - - - -common_interrupt (8 samples, 0.03%) - - - -decode_session4 (45 samples, 0.15%) - - - -acpi_processor_ffh_cstate_enter (4 samples, 0.01%) - - - -__skb_checksum_complete (31 samples, 0.11%) - - - -__kmalloc_node_track_caller (8 samples, 0.03%) - - - -EventLoopContext_runOnContext_1032f3075a9010887ecdd3fdc7989166bf814f22 (4 
samples, 0.01%) - - - -net_rx_action (3 samples, 0.01%) - - - -check_preempt_curr (3 samples, 0.01%) - - - -perf_ibs_handle_irq (4 samples, 0.01%) - - - -[perf] (302 samples, 1.03%) - - - -__x64_sys_sendto (3 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (20 samples, 0.07%) - - - -perf_ioctl (30 samples, 0.10%) - - - -skb_release_data (6 samples, 0.02%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (4 samples, 0.01%) - - - -__udp4_lib_rcv (5 samples, 0.02%) - - - -netlbl_enabled (4 samples, 0.01%) - - - -update_cfs_group (13 samples, 0.04%) - - - -loopback_xmit (3 samples, 0.01%) - - - -irqtime_account_irq (13 samples, 0.04%) - - - -ThreadPerTaskExecutor_execute_9afc5d4473f674f08e02dd448b4e6a6247aa748d (4 samples, 0.01%) - - - -icmp_push_reply (21 samples, 0.07%) - - - -psi_group_change (7 samples, 0.02%) - - - -ktime_get (4 samples, 0.01%) - - - -alloc_skb_with_frags (37 samples, 0.13%) - - - -ip_finish_output2 (3 samples, 0.01%) - - - -sched_clock_cpu (3 samples, 0.01%) - - - -select_task_rq_fair (3 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (7 samples, 0.02%) - - - -try_to_wake_up (4 samples, 0.01%) - - - -SingleThreadEventExecutor_startThread_01a2f6913975a9e3a694adc6e29d550c50d76f00 (4 samples, 0.01%) - - - -PosixParkEvent_unpark_ffa65ac66d3e43a0f362cb01e11f41ef58ea7eaf (8 samples, 0.03%) - - - -move_addr_to_kernel.part.0 (39 samples, 0.13%) - - - -futex_wait_queue_me (8 samples, 0.03%) - - - -cpuidle_reflect (4 samples, 0.01%) - - - -do_futex (4 samples, 0.01%) - - - -__switch_to (16 samples, 0.05%) - - - -ip_rcv (11 samples, 0.04%) - - - -do_idle (1,721 samples, 5.89%) -do_idle - - -sched_setaffinity (11 samples, 0.04%) - - - -native_sched_clock (12 samples, 0.04%) - - - -___pthread_cond_broadcast (30 samples, 0.10%) - - - -wake_q_add_safe (3 samples, 0.01%) - - - -_raw_spin_lock_irqsave (4 samples, 0.01%) - - - -__ip_select_ident (14 samples, 0.05%) - - - -kfree (8 samples, 0.03%) - - - 
-acpi_processor_ffh_cstate_enter (6 samples, 0.02%) - - - -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (4 samples, 0.01%) - - - -MultiThreadedMonitorSupport_monitorExit_f765f7445e650efe1207579ef06c6f8ac708d1b5 (36 samples, 0.12%) - - - -selinux_ip_postroute_compat (3 samples, 0.01%) - - - -__ip_append_data (9 samples, 0.03%) - - - -MultiThreadedMonitorSupport_getOrCreateMonitorFromObject_922cf11599fedc7c0bfa829f3c2f09fcdebe2077 (7 samples, 0.02%) - - - -native_sched_clock (162 samples, 0.55%) - - - -PosixJavaThreads_beforeThreadRun_74270183030d3cf183dcaf07b8ca65494761107e (8 samples, 0.03%) - - - -acpi_processor_ffh_cstate_enter (18 samples, 0.06%) - - - -__get_user_nocheck_4 (10 samples, 0.03%) - - - -acpi_processor_ffh_cstate_enter (68 samples, 0.23%) - - - -kmem_cache_free (7 samples, 0.02%) - - - -menu_select (36 samples, 0.12%) - - - -nv04_timer_intr (3 samples, 0.01%) - - - -do_csum (19 samples, 0.06%) - - - -kmem_cache_alloc_node (8 samples, 0.03%) - - - -entry_SYSCALL_64_after_hwframe (3 samples, 0.01%) - - - -do_idle (27 samples, 0.09%) - - - -skb_release_head_state (7 samples, 0.02%) - - - -_raw_spin_trylock (6 samples, 0.02%) - - - -cpuidle_enter_state (62 samples, 0.21%) - - - -__udp4_lib_lookup (11 samples, 0.04%) - - - -acpi_idle_enter (4 samples, 0.01%) - - - -_copy_from_iter (25 samples, 0.09%) - - - -__kmalloc_node (3 samples, 0.01%) - - - -ReentrantLock$Sync_tryRelease_a66c341958d8201110d2de33406f88fc73bac424 (281 samples, 0.96%) - - - -copy_user_generic_string (3 samples, 0.01%) - - - -sched_clock_cpu (14 samples, 0.05%) - - - -native_sched_clock (18 samples, 0.06%) - - - -__x86_indirect_thunk_rax (40 samples, 0.14%) - - - -__entry_text_start (10 samples, 0.03%) - - - -futex_wait_queue_me (77 samples, 0.26%) - - - -raw_spin_rq_lock_nested (5 samples, 0.02%) - - - -__alloc_skb (122 samples, 0.42%) - - - -tcache_init.part.0 (3 samples, 0.01%) - - - -put_prev_task_fair (4 samples, 0.01%) - - - -fib_table_lookup (11 samples, 
0.04%) - - - -__switch_to (20 samples, 0.07%) - - - -kfree (5 samples, 0.02%) - - - -tick_nohz_next_event (4 samples, 0.01%) - - - -acpi_idle_do_entry (764 samples, 2.61%) -ac.. - - -__schedule (3 samples, 0.01%) - - - -native_sched_clock (14 samples, 0.05%) - - - -__calc_delta (12 samples, 0.04%) - - - -ThreadExecutorMap$1_execute_82a130cdd46546392da3ffac84de8c998f29d43c (4 samples, 0.01%) - - - -__x86_indirect_thunk_rax (4 samples, 0.01%) - - - -native_sched_clock (12 samples, 0.04%) - - - -scheduler_tick (3 samples, 0.01%) - - - -native_write_msr (3 samples, 0.01%) - - - -getInetAddress_family (99 samples, 0.34%) - - - -rcu_eqs_enter.constprop.0 (3 samples, 0.01%) - - - -ip_protocol_deliver_rcu (12 samples, 0.04%) - - - -update_cfs_group (20 samples, 0.07%) - - - -acpi_processor_ffh_cstate_enter (62 samples, 0.21%) - - - -entry_SYSCALL_64_after_hwframe (3 samples, 0.01%) - - - -do_syscall_64 (10 samples, 0.03%) - - - -schedule_idle (105 samples, 0.36%) - - - -dequeue_task_fair (15 samples, 0.05%) - - - -do_syscall_64 (4 samples, 0.01%) - - - -do_syscall_64 (4 samples, 0.01%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (12,437 samples, 42.54%) -Thread_run_857ee078f8137062fcf27275732adf5c4870652a - - -entry_SYSCALL_64_after_hwframe (4 samples, 0.01%) - - - -wake_q_add_safe (4 samples, 0.01%) - - - -selinux_ip_postroute_compat (3 samples, 0.01%) - - - -NioEventLoop_constructor_49df0e0d6cddf8f78e642a99ad82de56c1f0a39b (5 samples, 0.02%) - - - -mark_wake_futex (8 samples, 0.03%) - - - -VertxHttpProcessor$openSocket1866188241_deploy_7b07d97e327c2c1535eef8489b04526037b1f0ff (4 samples, 0.01%) - - - -__ip_make_skb (5 samples, 0.02%) - - - -secondary_startup_64_no_verify (1,793 samples, 6.13%) -secondar.. - - -tick_irq_enter (16 samples, 0.05%) - - - -udp4_hwcsum (13 samples, 0.04%) - - - -cpuidle_enter_state (1,444 samples, 4.94%) -cpuidl.. 
- - -ip_push_pending_frames (164 samples, 0.56%) - - - -acpi_processor_ffh_cstate_enter (26 samples, 0.09%) - - - -__x86_indirect_thunk_rax (5 samples, 0.02%) - - - -sockfd_lookup_light (4 samples, 0.01%) - - - -syscall_enter_from_user_mode (8 samples, 0.03%) - - - -syscall_exit_to_user_mode_prepare (3 samples, 0.01%) - - - -rcu_all_qs (9 samples, 0.03%) - - - -__udp4_lib_err (7 samples, 0.02%) - - - -irq_work_needs_cpu (3 samples, 0.01%) - - - -loopback_xmit (39 samples, 0.13%) - - - -enqueue_entity (97 samples, 0.33%) - - - -sched_clock_cpu (13 samples, 0.04%) - - - -loopback_xmit (23 samples, 0.08%) - - - -rb_next (3 samples, 0.01%) - - - -psi_task_change (4 samples, 0.01%) - - - -kmem_cache_free (21 samples, 0.07%) - - - -selinux_ipv4_postroute (7 samples, 0.02%) - - - -xfrm_lookup_with_ifid (94 samples, 0.32%) - - - -enqueue_to_backlog (5 samples, 0.02%) - - - -__netif_receive_skb_core.constprop.0 (62 samples, 0.21%) - - - -__dev_queue_xmit (8 samples, 0.03%) - - - -__switch_to_asm (43 samples, 0.15%) - - - -process_backlog (6 samples, 0.02%) - - - -perf_event_update_userpage (3 samples, 0.01%) - - - -__list_add_valid (5 samples, 0.02%) - - - -check_preempt_curr (6 samples, 0.02%) - - - -enqueue_to_backlog (3 samples, 0.01%) - - - -schedule (3 samples, 0.01%) - - - -__kmalloc_node_track_caller (3 samples, 0.01%) - - - -process_backlog (3 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (6 samples, 0.02%) - - - -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (12,428 samples, 42.51%) -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 - - -Pthread_pthread_mutex_unlock_4817c536fb459a4a32739c23b0ed198cc1fb485f (420 samples, 1.44%) - - - -__wrgsbase_inactive (11 samples, 0.04%) - - - -__kmalloc_node_track_caller (3 samples, 0.01%) - - - -futex_wait (90 samples, 0.31%) - - - -__get_user_nocheck_4 (11 samples, 0.04%) - - - -cpuidle_enter_state (3 samples, 0.01%) - - - -save_fpregs_to_fpstate (17 samples, 0.06%) - - - 
-select_task_rq_fair (4 samples, 0.01%) - - - -sched_clock_cpu (13 samples, 0.04%) - - - -get_futex_key (13 samples, 0.04%) - - - -icmp_route_lookup.constprop.0 (119 samples, 0.41%) - - - -worker_thread (4 samples, 0.01%) - - - -kfree (5 samples, 0.02%) - - - -ip_rcv_core (13 samples, 0.04%) - - - -_raw_spin_lock_irqsave (7 samples, 0.02%) - - - -update_load_avg (5 samples, 0.02%) - - - -__sys_sendto (1,038 samples, 3.55%) -__s.. - - -dequeue_entity (12 samples, 0.04%) - - - -event_function (14 samples, 0.05%) - - - -switch_fpu_return (4 samples, 0.01%) - - - -plist_del (7 samples, 0.02%) - - - -tick_nohz_next_event (6 samples, 0.02%) - - - -select_task_rq_fair (7 samples, 0.02%) - - - -security_skb_classify_flow (6 samples, 0.02%) - - - -ReentrantLock$Sync_tryRelease_a66c341958d8201110d2de33406f88fc73bac424 (6 samples, 0.02%) - - - -ip_route_output_flow (6 samples, 0.02%) - - - -down_write_killable (4 samples, 0.01%) - - - -VertxImpl_deployVerticle_097940d891a15e3dd0d5dcf2f21cd8dece35a792 (4 samples, 0.01%) - - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (3 samples, 0.01%) - - - -do_syscall_64 (3 samples, 0.01%) - - - -save_fpregs_to_fpstate (68 samples, 0.23%) - - - -ip_route_output_key_hash_rcu (5 samples, 0.02%) - - - -___pthread_mutex_lock (204 samples, 0.70%) - - - -AbstractQueuedSynchronizer_acquire_d7c03c3cee25dd5a735b5a4334799f668f70ef36 (3,064 samples, 10.48%) -AbstractQueuedS.. - - -VertxHttpProcessor$openSocket1866188241_deploy_0_f62af8cc66423d57d1e40c5a1ec11136d1b717ee (4 samples, 0.01%) - - - -ttwu_queue_wakelist (3 samples, 0.01%) - - - -__check_object_size (34 samples, 0.12%) - - - -acpi_processor_ffh_cstate_enter (13 samples, 0.04%) - - - -nvme_queue_rq (3 samples, 0.01%) - - - -__icmp_send (397 samples, 1.36%) - - - -futex_wake (9 samples, 0.03%) - - - -save_fpregs_to_fpstate (36 samples, 0.12%) - - - -wake_q_add_safe (6 samples, 0.02%) - - - -ip_local_deliver_finish (580 samples, 1.98%) -i.. 
- - -sysvec_apic_timer_interrupt (4 samples, 0.01%) - - - -psi_flags_change (3 samples, 0.01%) - - - -__condvar_confirm_wakeup (14 samples, 0.05%) - - - -ttwu_do_activate (16 samples, 0.05%) - - - -sock_alloc_send_skb (3 samples, 0.01%) - - - -IsolateEnterStub_JavaMainWrapper_run_5087f5482cc9a6abc971913ece43acb471d2631b_a61fe6c26e84dd4037e4629852b5488bfcc16e7e (12 samples, 0.04%) - - - -Java_sun_nio_ch_DatagramChannelImpl_send0 (7,142 samples, 24.43%) -Java_sun_nio_ch_DatagramChannelImpl_se.. - - -native_sched_clock (29 samples, 0.10%) - - - -ip_rcv_finish_core.constprop.0 (3 samples, 0.01%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (92 samples, 0.31%) - - - -__ip_make_skb (4 samples, 0.01%) - - - -Arrays_copyOfRange_289badfd980998aad0ada38eb3a926841af70498 (8 samples, 0.03%) - - - -process_backlog (50 samples, 0.17%) - - - -enqueue_entity (8 samples, 0.03%) - - - -cpu_startup_entry (2,919 samples, 9.98%) -cpu_startup_en.. - - -curl (3,986 samples, 13.63%) -curl - - -selinux_socket_sendmsg (41 samples, 0.14%) - - - -start_kernel (27 samples, 0.09%) - - - -process_backlog (632 samples, 2.16%) -p.. 
- - -fib_table_lookup (22 samples, 0.08%) - - - -csum_partial_copy_generic (5 samples, 0.02%) - - - -__x86_indirect_thunk_rax (6 samples, 0.02%) - - - -mark_wake_futex (4 samples, 0.01%) - - - -__x86_indirect_thunk_rax (3 samples, 0.01%) - - - -validate_xmit_xfrm (3 samples, 0.01%) - - - -selinux_parse_skb.constprop.0 (32 samples, 0.11%) - - - -pick_next_entity (9 samples, 0.03%) - - - -_raw_spin_lock (6 samples, 0.02%) - - - -psi_group_change (182 samples, 0.62%) - - - -acpi_processor_ffh_cstate_enter (3 samples, 0.01%) - - - -__schedule (90 samples, 0.31%) - - - -tick_nohz_idle_stop_tick (3 samples, 0.01%) - - - -native_write_msr (4 samples, 0.01%) - - - -__wrgsbase_inactive (4 samples, 0.01%) - - - -rcu_eqs_exit.constprop.0 (3 samples, 0.01%) - - - -available_idle_cpu (3 samples, 0.01%) - - - -ip_rcv_core (41 samples, 0.14%) - - - -get_next_timer_interrupt (6 samples, 0.02%) - - - -security_task_setscheduler (3 samples, 0.01%) - - - -avc_has_perm (18 samples, 0.06%) - - - -update_load_avg (8 samples, 0.03%) - - - -native_write_msr (12 samples, 0.04%) - - - -selinux_sk_getsecid (19 samples, 0.06%) - - - -tick_nohz_next_event (8 samples, 0.03%) - - - -ip_route_output_key_hash_rcu (5 samples, 0.02%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (8 samples, 0.03%) - - - -acpi_processor_ffh_cstate_enter (9 samples, 0.03%) - - - -__skb_checksum (10 samples, 0.03%) - - - -clear_buddies (3 samples, 0.01%) - - - -is_cpu_allowed (14 samples, 0.05%) - - - -__ip_append_data (3 samples, 0.01%) - - - -JNIGeneratedMethodSupport_unboxHandle_4bc785faa30981ed3ca002aef8d600c83e6f66e4 (13 samples, 0.04%) - - - -psi_group_change (3 samples, 0.01%) - - - -do_futex (6 samples, 0.02%) - - - -udp_sendmsg (7 samples, 0.02%) - - - -do_futex (35 samples, 0.12%) - - - -slab_free_freelist_hook.constprop.0 (23 samples, 0.08%) - - - -JNIGeneratedMethodSupport_getFieldOffsetFromId_5041c78d77a7b3d62103393b72fc35d80d2cc709 (6 samples, 0.02%) - - - -ret_from_fork 
(7 samples, 0.02%) - - - -select_task_rq_fair (9 samples, 0.03%) - - - -udp4_lib_lookup2 (3 samples, 0.01%) - - - -menu_reflect (15 samples, 0.05%) - - - -switch_mm_irqs_off (18 samples, 0.06%) - - - -futex_wait (7 samples, 0.02%) - - - -schedule (6 samples, 0.02%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (3 samples, 0.01%) - - - -__schedule (3 samples, 0.01%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (12,437 samples, 42.54%) -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63.. - - -poll_idle (6 samples, 0.02%) - - - -asm_common_interrupt (6 samples, 0.02%) - - - -__ip_dev_find (15 samples, 0.05%) - - - -menu_select (36 samples, 0.12%) - - - -EPollSelectorImpl_constructor_e766728b6679c7c5e8eabbee0cfd0f70e475eb9e (3 samples, 0.01%) - - - -PosixJavaThreads_doStartThread_d86493a94746fb837887c6a0e52e99e18ac5be71 (4 samples, 0.01%) - - - -native_sched_clock (70 samples, 0.24%) - - - -DatagramChannelImpl_beginWrite_e0b047bda5d2ef03b38b2294da1d52e27566ee32 (103 samples, 0.35%) - - - -copy_user_generic_string (19 samples, 0.06%) - - - -csum_block_add_ext (4 samples, 0.01%) - - - -slab_free_freelist_hook.constprop.0 (12 samples, 0.04%) - - - -ktime_get (5 samples, 0.02%) - - - -ktime_get (12 samples, 0.04%) - - - -__schedule (7 samples, 0.02%) - - - -__GI___pthread_cond_wait (3 samples, 0.01%) - - - -__check_heap_object (22 samples, 0.08%) - - - -icmp_unreach (4 samples, 0.01%) - - - -security_socket_sendmsg (9 samples, 0.03%) - - - -__kmalloc_node_track_caller (6 samples, 0.02%) - - - -ip_rcv_core (15 samples, 0.05%) - - - -VertxHttpProcessor$preinitializeRouter1141331088_deploy_0_04f518fcb19517993a4ab43510a8b1bf5082b981 (5 samples, 0.02%) - - - -VertxCoreRecorder$VertxSupplier_get_ad6de8dda214b81feb5c157bb64f41c2109a30fb (5 samples, 0.02%) - - - -switch_mm_irqs_off (3 samples, 0.01%) - - - -schedule_idle (3 samples, 0.01%) - - - -acpi_idle_do_entry (67 samples, 
0.23%) - - - -entry_SYSCALL_64_after_hwframe (43 samples, 0.15%) - - - -enqueue_entity (11 samples, 0.04%) - - - -security_xfrm_decode_session (14 samples, 0.05%) - - - -_raw_spin_lock_irqsave (21 samples, 0.07%) - - - -update_cfs_group (14 samples, 0.05%) - - - -ipv4_mtu (44 samples, 0.15%) - - - -enqueue_task_fair (12 samples, 0.04%) - - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (7,580 samples, 25.93%) -DatagramChannelImpl_send_a43258374f29d362.. - - -select_task_rq_fair (4 samples, 0.01%) - - - -loopback_xmit (17 samples, 0.06%) - - - -bpf_lsm_socket_sendmsg (16 samples, 0.05%) - - - -_raw_spin_unlock_irqrestore (4 samples, 0.01%) - - - -VertxImpl_deployVerticle_b3ce74c752ac28ceba0a6a5f10e8c73f24f312fb (4 samples, 0.01%) - - - -plist_del (4 samples, 0.01%) - - - -__udp4_lib_lookup (35 samples, 0.12%) - - - -mark_wake_futex (5 samples, 0.02%) - - - -new_heap (3 samples, 0.01%) - - - -set_next_entity (9 samples, 0.03%) - - - -NioEventLoop_openSelector_807d094eb73208664264916255f5760d290089d2 (5 samples, 0.02%) - - - -dequeue_task_fair (11 samples, 0.04%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (11 samples, 0.04%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (8 samples, 0.03%) - - - -mark_wake_futex (3 samples, 0.01%) - - - -MultiThreadedMonitorSupport_slowPathMonitorExit_183871de385508d0f6b4f0881e8e0c44628018b3 (59 samples, 0.20%) - - - -psi_flags_change (3 samples, 0.01%) - - - -__x64_sys_sendto (1,039 samples, 3.55%) -__x.. - - -_raw_spin_lock_irqsave (15 samples, 0.05%) - - - -ip_options_build (3 samples, 0.01%) - - - -psi_group_change (10 samples, 0.03%) - - - -update_rq_clock (23 samples, 0.08%) - - - -nvkm_timer_alarm_trigger (4 samples, 0.01%) - - - -security_xfrm_decode_session (8 samples, 0.03%) - - - -udp_sendmsg (962 samples, 3.29%) -udp.. 
[Flame graph omitted: residue of an SVG flame graph of the native executable under load. Recoverable hot paths: `thread` (55.95% of samples), with `ThreadPoolExecutor$Worker_run` / `JavaThreads_threadStartRoutine` (42.54%) descending into `DatagramChannelImpl_send0` and `DatagramChannelImpl_sendFromNativeBuffer` (~25.8%) and the kernel UDP send path (`sock_sendmsg`, `ip_send_skb`, `__netif_receive_skb_one_core`); `kworker/dying` (24.23%); the remainder in idle (`cpuidle_enter`, `acpi_idle_enter`) and futex park/unpark activity (`JavaThreads_park` 9.21%, `__GI___pthread_cond_wait` 7.41%).]
- - -acpi_processor_ffh_cstate_enter (16 samples, 0.05%) - - - -process_backlog (10 samples, 0.03%) - - - -validate_xmit_skb (3 samples, 0.01%) - - - -try_to_wake_up (3 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (143 samples, 0.49%) - - - -kfree (18 samples, 0.06%) - - - -update_rq_clock (5 samples, 0.02%) - - - -ttwu_do_wakeup (14 samples, 0.05%) - - - -udp_send_skb (841 samples, 2.88%) -ud.. - - -__libc_sendto (6,628 samples, 22.67%) -__libc_sendto - - -JNIGeneratedMethodSupport_unboxHandle_4bc785faa30981ed3ca002aef8d600c83e6f66e4 (12 samples, 0.04%) - - - -reweight_entity (3 samples, 0.01%) - - - -select_task_rq_fair (3 samples, 0.01%) - - - -native_sched_clock (10 samples, 0.03%) - - - -__list_add_valid (5 samples, 0.02%) - - - -select_task_rq_fair (33 samples, 0.11%) - - - -native_sched_clock (5 samples, 0.02%) - - - -netif_rx_internal (7 samples, 0.02%) - - - -__alloc_skb (3 samples, 0.01%) - - - -acpi_idle_enter (27 samples, 0.09%) - - - -__sysvec_apic_timer_interrupt (3 samples, 0.01%) - - - -psi_group_change (123 samples, 0.42%) - - - -__get_user_nocheck_4 (10 samples, 0.03%) - - - -__udp4_lib_lookup (6 samples, 0.02%) - - - -rb_next (7 samples, 0.02%) - - - -__udp4_lib_lookup (8 samples, 0.03%) - - - -_raw_spin_lock_irqsave (6 samples, 0.02%) - - - -mark_wake_futex (12 samples, 0.04%) - - - -iomap_file_buffered_write (3 samples, 0.01%) - - - -raw_spin_rq_unlock (3 samples, 0.01%) - - - -cpuidle_enter (11 samples, 0.04%) - - - -tick_sched_handle (4 samples, 0.01%) - - - -FutureTask_run_8b0bdf0834cb555c1e2aa8896a714d79bab78517 (12,435 samples, 42.53%) -FutureTask_run_8b0bdf0834cb555c1e2aa8896a714d79bab78517 - - -wake_up_q (56 samples, 0.19%) - - - -VMThreads_findIsolateThreadForCurrentOSThread_92ae819b2eb5871e48575e78c4c13a4549a980b0 (3 samples, 0.01%) - - - -dev_hard_start_xmit (31 samples, 0.11%) - - - -slab_free_freelist_hook.constprop.0 (3 samples, 0.01%) - - - -irqtime_account_irq (4 samples, 0.01%) - - - -getInetAddress_addr (95 samples, 
0.32%) - - - -__udp4_lib_err (4 samples, 0.01%) - - - -ctx_resched (9 samples, 0.03%) - - - -put_prev_task_fair (7 samples, 0.02%) - - - -selinux_ip_postroute (4 samples, 0.01%) - - - -kmem_cache_alloc_node (78 samples, 0.27%) - - - -tick_nohz_stop_tick (3 samples, 0.01%) - - - -ip_skb_dst_mtu (3 samples, 0.01%) - - - -bpf_lsm_xfrm_decode_session (4 samples, 0.01%) - - - -sched_ttwu_pending (4 samples, 0.01%) - - - -__x64_sys_futex (5 samples, 0.02%) - - - -__ksize (25 samples, 0.09%) - - - -__hrtimer_run_queues (16 samples, 0.05%) - - - -JavaThreads_unpark_2ea667c9c895f0321a5b2853fc974fe6018f8b6f (974 samples, 3.33%) -Jav.. - - -__rdgsbase_inactive (4 samples, 0.01%) - - - -ip_setup_cork (67 samples, 0.23%) - - - -psi_task_switch (3 samples, 0.01%) - - - -MultiThreadedMonitorSupport_getOrCreateMonitor_2ecf5995a7a109dacede518d33424ec5ebddfde6 (12 samples, 0.04%) - - - -__ip_dev_find (27 samples, 0.09%) - - - -switch_fpu_return (4 samples, 0.01%) - - - -netif_rx_internal (13 samples, 0.04%) - - - -read_tsc (3 samples, 0.01%) - - - -ip_finish_output (5 samples, 0.02%) - - - -__GI___sched_setaffinity_new (67 samples, 0.23%) - - - -__GI___ioctl_time64 (220 samples, 0.75%) - - - -sock_sendmsg (4 samples, 0.01%) - - - -nohz_run_idle_balance (6 samples, 0.02%) - - - -__GI___pthread_mutex_unlock_usercnt (18 samples, 0.06%) - - - -entry_SYSCALL_64_after_hwframe (43 samples, 0.15%) - - - -send_call_function_single_ipi (31 samples, 0.11%) - - - -DatagramChannelImpl_sendFromNativeBuffer_bc12ee464c05741dc5b1fe45dfbf70e5fe3085b7 (6 samples, 0.02%) - - - -ip_send_skb (4 samples, 0.01%) - - - -select_task_rq_fair (9 samples, 0.03%) - - - -MulticastResource$$Lambda$2a51773ff173f1368fed8feb76f72e8954bae8ff_call_62c3c97a19c862d1b88b34ebccea6e8fb847006c (12,435 samples, 42.53%) -MulticastResource$$Lambda$2a51773ff173f1368fed8feb76f72e8954bae8ff_c.. 
- - -ttwu_do_wakeup (4 samples, 0.01%) - - - -update_rq_clock (9 samples, 0.03%) - - - -psi_group_change (9 samples, 0.03%) - - - -pick_next_task_idle (6 samples, 0.02%) - - - -schedule_idle (10 samples, 0.03%) - - - -__x86_indirect_thunk_rax (3 samples, 0.01%) - - - -icmp_socket_deliver (4 samples, 0.01%) - - - -ktime_get (9 samples, 0.03%) - - - -__x86_indirect_thunk_rax (4 samples, 0.01%) - - - -ttwu_do_activate (23 samples, 0.08%) - - - -ktime_get_update_offsets_now (144 samples, 0.49%) - - - -AbstractQueuedSynchronizer_acquireQueued_5fdba57beb8676d17b9542ff03179d64f478f5cf (3,058 samples, 10.46%) -AbstractQueuedS.. - - -__check_object_size (3 samples, 0.01%) - - - -sched_clock_cpu (5 samples, 0.02%) - - - -asm_common_interrupt (8 samples, 0.03%) - - - -clear_buddies (3 samples, 0.01%) - - - -do_futex (93 samples, 0.32%) - - - -do_idle (68 samples, 0.23%) - - - -copy_user_generic_string (161 samples, 0.55%) - - - -getInetAddress_family (29 samples, 0.10%) - - - -blk_mq_dispatch_rq_list (3 samples, 0.01%) - - - -copy_user_generic_string (3 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (127 samples, 0.43%) - - - -netif_rx_internal (12 samples, 0.04%) - - - -VertxImpl_deployVerticle_fcea789add9d3b31e6ff08796618671d34deb721 (4 samples, 0.01%) - - - -hrtimer_next_event_without (6 samples, 0.02%) - - - -DeploymentManager_doDeploy_28a24e5825cfa179a86ccb09c95909cc881c7940 (4 samples, 0.01%) - - - -__ip_local_out (32 samples, 0.11%) - - - -ttwu_do_wakeup (6 samples, 0.02%) - - - -__GI___lll_lock_wake (78 samples, 0.27%) - - - -raw_spin_rq_lock_nested (17 samples, 0.06%) - - - -update_rq_clock (6 samples, 0.02%) - - - -[perf] (319 samples, 1.09%) - - - -hrtimer_get_next_event (4 samples, 0.01%) - - - -select_task_rq_fair (47 samples, 0.16%) - - - -ip_idents_reserve (16 samples, 0.05%) - - - -ip_local_deliver (9 samples, 0.03%) - - - -do_syscall_64 (3 samples, 0.01%) - - - -ip_setup_cork (4 samples, 0.01%) - - - -native_write_msr (6 samples, 0.02%) - - - 
-ThreadLocalAllocation_slowPathNewArray_846db6d88ea2f5c90935fae3e872715327297019 (8 samples, 0.03%) - - - -map_id_range_down (5 samples, 0.02%) - - - -net_rx_action (5 samples, 0.02%) - - - -fput_many (3 samples, 0.01%) - - - -put_prev_entity (8 samples, 0.03%) - - - -DeploymentManager_doDeploy_13ca90eebffa21856f602fd4ed6b030ab866132b (4 samples, 0.01%) - - - -sched_clock_cpu (5 samples, 0.02%) - - - -__rdgsbase_inactive (8 samples, 0.03%) - - - -menu_select (11 samples, 0.04%) - - - -try_to_wake_up (5 samples, 0.02%) - - - -update_curr (3 samples, 0.01%) - - - -update_load_avg (3 samples, 0.01%) - - - -__udp4_lib_lookup (4 samples, 0.01%) - - - -hrtimer_interrupt (172 samples, 0.59%) - - - -MulticastResource_lambda$sendMulticasts$0_cb1f7b5dcaed7dd4e3f90d18bad517d67eae4d88 (12,434 samples, 42.53%) -MulticastResource_lambda$sendMulticasts$0_cb1f7b5dcaed7dd4e3f90d18ba.. - - -sched_clock_cpu (5 samples, 0.02%) - - - -schedule (3 samples, 0.01%) - - - -sched_clock_cpu (3 samples, 0.01%) - - - -__libc_sendto (13 samples, 0.04%) - - - -plist_del (4 samples, 0.01%) - - - -__futex_abstimed_wait_common (1,751 samples, 5.99%) -__futex.. 
- - -slab_free_freelist_hook.constprop.0 (3 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (3 samples, 0.01%) - - - -perf_ibs_nmi_handler (4 samples, 0.01%) - - - -__GI___lll_lock_wake (411 samples, 1.41%) - - - -futex_wake (17 samples, 0.06%) - - - -_raw_spin_lock_irqsave (10 samples, 0.03%) - - - -asm_sysvec_apic_timer_interrupt (3 samples, 0.01%) - - - -__inet_dev_addr_type (38 samples, 0.13%) - - - -flush_smp_call_function_from_idle (274 samples, 0.94%) - - - -kmem_cache_alloc_trace (25 samples, 0.09%) - - - -newidle_balance (12 samples, 0.04%) - - - -syscall_enter_from_user_mode (4 samples, 0.01%) - - - -sched_clock (3 samples, 0.01%) - - - -generic_exec_single (14 samples, 0.05%) - - - -newidle_balance (16 samples, 0.05%) - - - -ip_make_skb (13 samples, 0.04%) - - - -futex_wait (4 samples, 0.01%) - - - -futex_wake (3 samples, 0.01%) - - - -__skb_checksum (3 samples, 0.01%) - - - -finish_task_switch.isra.0 (20 samples, 0.07%) - - - -udp_sendmsg (122 samples, 0.42%) - - - -psi_group_change (16 samples, 0.05%) - - - -kfree (11 samples, 0.04%) - - - -futex_wait (4 samples, 0.01%) - - - -__schedule (31 samples, 0.11%) - - - -__netif_receive_skb_one_core (7 samples, 0.02%) - - - -enqueue_task_fair (10 samples, 0.03%) - - - -ip_rcv_finish_core.constprop.0 (10 samples, 0.03%) - - - -native_write_msr (8 samples, 0.03%) - - - -select_task_rq_fair (37 samples, 0.13%) - - - -enqueue_entity (14 samples, 0.05%) - - - -__clone3 (12,437 samples, 42.54%) -__clone3 - - -psi_task_switch (14 samples, 0.05%) - - - -__fget_files (14 samples, 0.05%) - - - -flush_smp_call_function_from_idle (110 samples, 0.38%) - - - -__rdgsbase_inactive (3 samples, 0.01%) - - - -flush_smp_call_function_from_idle (9 samples, 0.03%) - - - -ip_protocol_deliver_rcu (11 samples, 0.04%) - - - -do_syscall_64 (3 samples, 0.01%) - - - -__GI___pthread_cond_wait (3 samples, 0.01%) - - - -start_thread (14 samples, 0.05%) - - - -tick_nohz_idle_retain_tick (4 samples, 0.01%) - - - -update_load_avg (3 
samples, 0.01%) - - - -__update_load_avg_se (4 samples, 0.01%) - - - -enqueue_entity (9 samples, 0.03%) - - - -csum_partial_copy_generic (92 samples, 0.31%) - - - -alloc_skb_with_frags (12 samples, 0.04%) - - - -__pthread_cleanup_pop (8 samples, 0.03%) - - - -set_next_entity (6 samples, 0.02%) - - - -menu_select (68 samples, 0.23%) - - - -dequeue_task_fair (4 samples, 0.01%) - - - -__sysvec_apic_timer_interrupt (190 samples, 0.65%) - - - -asm_sysvec_apic_timer_interrupt (135 samples, 0.46%) - - - -AbstractQueuedSynchronizer_acquire_d7c03c3cee25dd5a735b5a4334799f668f70ef36 (6 samples, 0.02%) - - - -udp_send_skb (16 samples, 0.05%) - - - -__x86_indirect_thunk_rax (5 samples, 0.02%) - - - -ip_skb_dst_mtu (6 samples, 0.02%) - - - -ip_route_output_key_hash_rcu (5 samples, 0.02%) - - - -futex_wait_setup (6 samples, 0.02%) - - - -copy_user_generic_string (3 samples, 0.01%) - - - -__x86_indirect_thunk_rax (6 samples, 0.02%) - - - -__check_object_size (4 samples, 0.01%) - - - -new_sync_write (3 samples, 0.01%) - - - -kmem_cache_alloc_node (3 samples, 0.01%) - - - -skb_release_data (12 samples, 0.04%) - - - -update_load_avg (11 samples, 0.04%) - - - -native_write_msr (3 samples, 0.01%) - - - -raw_spin_rq_lock_nested (57 samples, 0.19%) - - - -icmp_rcv (4 samples, 0.01%) - - - -skb_release_data (4 samples, 0.01%) - - - -pick_next_entity (4 samples, 0.01%) - - - -psi_group_change (3 samples, 0.01%) - - - -AbstractQueuedSynchronizer_release_a6f0f81643a4166b53eeaa75bf587c535be7bad9 (6 samples, 0.02%) - - - -entry_SYSCALL_64_after_hwframe (3 samples, 0.01%) - - - -cpu_startup_entry (1,036 samples, 3.54%) -cpu.. - - -__skb_checksum_complete (4 samples, 0.01%) - - - -__dev_queue_xmit (46 samples, 0.16%) - - - -psi_task_switch (67 samples, 0.23%) - - - -__hrtimer_next_event_base (20 samples, 0.07%) - - - -PosixParkEvent_unpark_ffa65ac66d3e43a0f362cb01e11f41ef58ea7eaf (678 samples, 2.32%) -P.. 
- - -mark_wake_futex (5 samples, 0.02%) - - - -psi_task_change (89 samples, 0.30%) - - - -rb_erase (3 samples, 0.01%) - - - -netif_rx (4 samples, 0.01%) - - - -try_to_wake_up (3 samples, 0.01%) - - - -enqueue_task (14 samples, 0.05%) - - - -acpi_processor_ffh_cstate_enter (13 samples, 0.04%) - - - -__dev_queue_xmit (43 samples, 0.15%) - - - -__rdgsbase_inactive (5 samples, 0.02%) - - - -start_thread (12,437 samples, 42.54%) -start_thread - - -netif_rx (3 samples, 0.01%) - - - -asm_sysvec_apic_timer_interrupt (260 samples, 0.89%) - - - -__dev_queue_xmit (3 samples, 0.01%) - - - -secondary_startup_64_no_verify (2,948 samples, 10.08%) -secondary_star.. - - -handle_irq_event (3 samples, 0.01%) - - - -copy_user_generic_string (5 samples, 0.02%) - - - -do_softirq (23 samples, 0.08%) - - - -enqueue_entity (116 samples, 0.40%) - - - -acpi_processor_ffh_cstate_enter (4 samples, 0.01%) - - - -__entry_text_start (167 samples, 0.57%) - - - -update_process_times (3 samples, 0.01%) - - - -__wrgsbase_inactive (3 samples, 0.01%) - - - -__blk_mq_sched_dispatch_requests (3 samples, 0.01%) - - - -enqueue_entity (6 samples, 0.02%) - - - -do_softirq (3 samples, 0.01%) - - - -ip_local_deliver_finish (6 samples, 0.02%) - - - -set_next_entity (4 samples, 0.01%) - - - -raw_spin_rq_lock_nested (5 samples, 0.02%) - - - -select_task_rq_fair (4 samples, 0.01%) - - - -start_kernel (68 samples, 0.23%) - - - -__switch_to_asm (3 samples, 0.01%) - - - -update_load_avg (8 samples, 0.03%) - - - -__x64_sys_ioctl (40 samples, 0.14%) - - - -__x86_indirect_thunk_rax (40 samples, 0.14%) - - - -__schedule (3 samples, 0.01%) - - - -__update_load_avg_cfs_rq (17 samples, 0.06%) - - - -set_next_task_idle (3 samples, 0.01%) - - - -select_task_rq_fair (4 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (1,138 samples, 3.89%) -entr.. 
- - -dev_get_by_index_rcu (6 samples, 0.02%) - - - -ip_finish_output2 (3 samples, 0.01%) - - - -skb_copy_and_csum_bits (13 samples, 0.04%) - - - -__x64_sys_futex (4 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (5 samples, 0.02%) - - - -raw_spin_rq_lock_nested (10 samples, 0.03%) - - - -kmem_cache_free (30 samples, 0.10%) - - - -Quarkus_run_264e1542aba49a980676e2116b6211b2dc545762 (11 samples, 0.04%) - - - -tick_sched_handle (7 samples, 0.02%) - - - -tick_nohz_next_event (5 samples, 0.02%) - - - -kfree_skb (26 samples, 0.09%) - - - -PosixJavaThreads_setNativeName_ad1428f6ffd25a626f703670d3ed7e20656291a9 (8 samples, 0.03%) - - - -__virt_addr_valid (10 samples, 0.03%) - - - -enqueue_entity (3 samples, 0.01%) - - - -__ip_make_skb (3 samples, 0.01%) - - - -wake_up_q (15 samples, 0.05%) - - - -rb_erase (7 samples, 0.02%) - - - -VMOperationControl_guaranteeOkayToBlock_6c18be2cba7df7cda24be664a42b08f35232e6be (5 samples, 0.02%) - - - -ParkEvent_initializeOnce_68f5df089169fde77d25ea87d820b2e9cca25332 (196 samples, 0.67%) - - - -tick_sched_do_timer (6 samples, 0.02%) - - - -do_syscall_64 (11 samples, 0.04%) - - - -tick_sched_timer (19 samples, 0.06%) - - - -do_syscall_64 (43 samples, 0.15%) - - - -__switch_to_asm (4 samples, 0.01%) - - - -native_sched_clock (3 samples, 0.01%) - - - -do_idle (82 samples, 0.28%) - - - -cpuidle_enter (68 samples, 0.23%) - - - -__libc_start_main_alias_2 (12 samples, 0.04%) - - - -copy_user_generic_string (4 samples, 0.01%) - - - -Java_sun_nio_ch_DatagramChannelImpl_send0 (13 samples, 0.04%) - - - -__local_bh_enable_ip (5 samples, 0.02%) - - - -__dev_queue_xmit (5 samples, 0.02%) - - - -skb_release_data (4 samples, 0.01%) - - - -StackOverflowCheckImpl_protectYellowZone_c940e860df16ce6529c43e09187ac7003f0ff4ce (3 samples, 0.01%) - - - -finish_task_switch.isra.0 (15 samples, 0.05%) - - - -validate_xmit_skb (5 samples, 0.02%) - - - -tick_nohz_idle_stop_tick (3 samples, 0.01%) - - - -__sysvec_apic_timer_interrupt (3 samples, 0.01%) - - - 
-enqueue_task_fair (6 samples, 0.02%) - - - -icmp_push_reply (80 samples, 0.27%) - - - -__xfrm_decode_session (4 samples, 0.01%) - - - -pick_next_entity (3 samples, 0.01%) - - - -do_futex (3 samples, 0.01%) - - - -update_curr (12 samples, 0.04%) - - - -[perf] (317 samples, 1.08%) - - - -acpi_processor_ffh_cstate_enter (1,057 samples, 3.62%) -acpi.. - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (210 samples, 0.72%) - - - -finish_task_switch.isra.0 (10 samples, 0.03%) - - - -sched_clock_cpu (20 samples, 0.07%) - - - -__dev_queue_xmit (5 samples, 0.02%) - - - -migrate_enable (7 samples, 0.02%) - - - -psi_group_change (6 samples, 0.02%) - - - -reweight_entity (4 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (5 samples, 0.02%) - - - -selinux_xfrm_decode_session (16 samples, 0.05%) - - - -__switch_to (7 samples, 0.02%) - - - -ThreadLocalHandles_pushFrame_c070e6fc2960d253faa099aad7972764e55d4ca2 (24 samples, 0.08%) - - - -tick_nohz_idle_exit (3 samples, 0.01%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (31 samples, 0.11%) - - - - diff --git a/_versions/2.7/guides/images/native-reference-multi-flamegraph-separate-threads.svg b/_versions/2.7/guides/images/native-reference-multi-flamegraph-separate-threads.svg deleted file mode 100644 index a60cf2a4825..00000000000 --- a/_versions/2.7/guides/images/native-reference-multi-flamegraph-separate-threads.svg +++ /dev/null @@ -1,12096 +0,0 @@ - - - - - - - - - - - - - - -Flame Graph - -Reset Zoom -Search -ic - - - -_copy_from_iter (7 samples, 0.01%) - - - -futex_wait (14 samples, 0.03%) - - - -futex_wait (8 samples, 0.02%) - - - -ip_finish_output2 (19 samples, 0.04%) - - - -perf_event_for_each_child (18 samples, 0.03%) - - - -ThreadPoolExecutor$Worker_run_f861978740f7fe309db28b47935f0d22284f1441 (1,823 samples, 3.52%) -Thr.. 
- - -decode_session4 (8 samples, 0.02%) - - - -LockSupport_park_ad3b0439cc81f3747a99376fb5cad89af288f997 (530 samples, 1.02%) - - - -_copy_from_iter (6 samples, 0.01%) - - - -getInetAddress_family (15 samples, 0.03%) - - - -___pthread_cond_broadcast (5 samples, 0.01%) - - - -native_sched_clock (8 samples, 0.02%) - - - -decode_session4 (17 samples, 0.03%) - - - -__x86_indirect_thunk_rax (13 samples, 0.03%) - - - -CEntryPointSnippets_attachThread_299a3505abe96864afd07f8f20f652a19cd12ea9 (8 samples, 0.02%) - - - -__dev_queue_xmit (8 samples, 0.02%) - - - -ip_output (6 samples, 0.01%) - - - -AbstractQueuedSynchronizer_acquire_d7c03c3cee25dd5a735b5a4334799f668f70ef36 (597 samples, 1.15%) - - - -ip_setup_cork (26 samples, 0.05%) - - - -net_rx_action (13 samples, 0.03%) - - - -mark_wake_futex (5 samples, 0.01%) - - - -ttwu_do_wakeup (7 samples, 0.01%) - - - -__irq_exit_rcu (6 samples, 0.01%) - - - -DatagramChannelImpl_send0_d05a7d3bffd13f93567ba253ded1608e364b9beb (8 samples, 0.02%) - - - -__GI___pthread_cond_wait (420 samples, 0.81%) - - - -PosixParkEvent_unpark_ffa65ac66d3e43a0f362cb01e11f41ef58ea7eaf (101 samples, 0.19%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (5 samples, 0.01%) - - - -select_task_rq_fair (5 samples, 0.01%) - - - -psi_group_change (26 samples, 0.05%) - - - -__x86_indirect_thunk_rax (7 samples, 0.01%) - - - -_raw_spin_lock_irqsave (5 samples, 0.01%) - - - -PosixParkEvent_unpark_ffa65ac66d3e43a0f362cb01e11f41ef58ea7eaf (74 samples, 0.14%) - - - -__sys_sendto (147 samples, 0.28%) - - - -selinux_parse_skb.constprop.0 (10 samples, 0.02%) - - - -[unknown] (6 samples, 0.01%) - - - -psi_group_change (5 samples, 0.01%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (5 samples, 0.01%) - - - -validate_xmit_skb (12 samples, 0.02%) - - - -poll_idle (11 samples, 0.02%) - - - -futex_wait (14 samples, 0.03%) - - - -acpi_processor_ffh_cstate_enter (66 samples, 0.13%) - - - 
-DatagramChannelImpl_beginWrite_e0b047bda5d2ef03b38b2294da1d52e27566ee32 (6 samples, 0.01%) - - - -ip_append_data (7 samples, 0.01%) - - - -__calc_delta (96 samples, 0.19%) - - - -loopback_xmit (8 samples, 0.02%) - - - -MultiThreadedMonitorSupport_monitorEnter_a853e48d8499fe94e7e0723447fc9d2060965e91 (22 samples, 0.04%) - - - -hash_futex (5 samples, 0.01%) - - - -asm_sysvec_apic_timer_interrupt (5 samples, 0.01%) - - - -syscall_return_via_sysret (8 samples, 0.02%) - - - -syscall_return_via_sysret (22 samples, 0.04%) - - - -__schedule (11 samples, 0.02%) - - - -MultiThreadedMonitorSupport_slowPathMonitorExit_183871de385508d0f6b4f0881e8e0c44628018b3 (52 samples, 0.10%) - - - -__entry_text_start (5 samples, 0.01%) - - - -__futex_abstimed_wait_common (5 samples, 0.01%) - - - -worker_thread (18 samples, 0.03%) - - - -tick_nohz_idle_retain_tick (6 samples, 0.01%) - - - -process_backlog (7 samples, 0.01%) - - - -__icmp_send (61 samples, 0.12%) - - - -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (8 samples, 0.02%) - - - -siphash_3u32 (14 samples, 0.03%) - - - -__kmalloc_node_track_caller (8 samples, 0.02%) - - - -__inet_dev_addr_type (7 samples, 0.01%) - - - -Pthread_pthread_mutex_unlock_4817c536fb459a4a32739c23b0ed198cc1fb485f (89 samples, 0.17%) - - - -check_preempt_curr (10 samples, 0.02%) - - - -enqueue_task_fair (42 samples, 0.08%) - - - -__update_load_avg_cfs_rq (11 samples, 0.02%) - - - -ip_generic_getfrag (5 samples, 0.01%) - - - -__GI___pthread_cond_wait (423 samples, 0.82%) - - - -plist_add (5 samples, 0.01%) - - - -ip_finish_output2 (250 samples, 0.48%) - - - -ApplicationLifecycleManager_run_dbf144db2a98237beac0f2d82fb961c3bd6ed251 (9 samples, 0.02%) - - - -do_futex (6 samples, 0.01%) - - - -do_epoll_wait (10 samples, 0.02%) - - - -udp_sendmsg (16 samples, 0.03%) - - - -raw_spin_rq_lock_nested (19 samples, 0.04%) - - - -selinux_parse_skb.constprop.0 (8 samples, 0.02%) - - - -udp_err (6 samples, 0.01%) - - - -selinux_ip_postroute_compat (20 
samples, 0.04%) - - - -__libc_sendto (810 samples, 1.56%) - - - -ReentrantLock$Sync_tryRelease_a66c341958d8201110d2de33406f88fc73bac424 (28 samples, 0.05%) - - - -ipv4_mtu (14 samples, 0.03%) - - - -dequeue_entity (8 samples, 0.02%) - - - -__get_user_nocheck_4 (75 samples, 0.14%) - - - -irqtime_account_irq (9 samples, 0.02%) - - - -udp_sendmsg (21 samples, 0.04%) - - - -select_task_rq_fair (5 samples, 0.01%) - - - -try_to_wake_up (5 samples, 0.01%) - - - -psi_group_change (11 samples, 0.02%) - - - -__x86_indirect_thunk_rax (11 samples, 0.02%) - - - -update_load_avg (10 samples, 0.02%) - - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (5 samples, 0.01%) - - - -MultiThreadedMonitorSupport_slowPathMonitorExit_183871de385508d0f6b4f0881e8e0c44628018b3 (5 samples, 0.01%) - - - -__ip_dev_find (6 samples, 0.01%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (46 samples, 0.09%) - - - -ThreadLocalHandles_ensureCapacity_22e5abcd3c07151c01ffcc4e0e4a54317d42c2a8 (7 samples, 0.01%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (39 samples, 0.08%) - - - -__alloc_skb (40 samples, 0.08%) - - - -enqueue_to_backlog (8 samples, 0.02%) - - - -DatagramChannelImpl_beginWrite_e0b047bda5d2ef03b38b2294da1d52e27566ee32 (7 samples, 0.01%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (9 samples, 0.02%) - - - -__dev_queue_xmit (12 samples, 0.02%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (13 samples, 0.03%) - - - -syscall_return_via_sysret (6 samples, 0.01%) - - - -native_write_msr (7 samples, 0.01%) - - - -_copy_from_iter (8 samples, 0.02%) - - - -MultiThreadedMonitorSupport_monitorEnter_a853e48d8499fe94e7e0723447fc9d2060965e91 (7 samples, 0.01%) - - - -JavaThreads_ensureUnsafeParkEvent_77b8c19cff94e6325a0dc99352d8624db969530b (60 samples, 0.12%) - - - -__udp4_lib_rcv (74 samples, 0.14%) - - - 
-DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (7 samples, 0.01%) - - - -__dev_queue_xmit (6 samples, 0.01%) - - - -sched_clock_cpu (33 samples, 0.06%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (1,934 samples, 3.73%) -Java.. - - -__check_object_size (7 samples, 0.01%) - - - -__GI___pthread_cond_wait (417 samples, 0.80%) - - - -common_interrupt (5 samples, 0.01%) - - - -fib_lookup_good_nhc (7 samples, 0.01%) - - - -MulticastResource_lambda$sendMulticasts$0_cb1f7b5dcaed7dd4e3f90d18bad517d67eae4d88 (2,573 samples, 4.97%) -Multic.. - - -ip_protocol_deliver_rcu (176 samples, 0.34%) - - - -try_to_wake_up (6 samples, 0.01%) - - - -DatagramChannelImpl_send0_d05a7d3bffd13f93567ba253ded1608e364b9beb (6 samples, 0.01%) - - - -JNIFunctions_GetIntField_cc20eaa35b54deb80db2eb05b754b96465828e2c (24 samples, 0.05%) - - - -Pthread_pthread_cond_broadcast_eebf0c33ba863f59f420d2e79631b37822b99e02 (48 samples, 0.09%) - - - -icmp_push_reply (5 samples, 0.01%) - - - -__udp4_lib_rcv (15 samples, 0.03%) - - - -__x64_sys_futex (10 samples, 0.02%) - - - -sysvec_apic_timer_interrupt (6 samples, 0.01%) - - - -__writeback_inodes_wb (7 samples, 0.01%) - - - -fib_table_lookup (5 samples, 0.01%) - - - -ip_route_output_key_hash (6 samples, 0.01%) - - - -net_rx_action (78 samples, 0.15%) - - - -AbstractQueuedSynchronizer_unparkSuccessor_851859a085ed0112a4406e9c9b4d253092c06d1d (184 samples, 0.36%) - - - -ThreadPoolExecutor_runWorker_d6102a49f44caa9353f47edf6df17054308b7151 (1,934 samples, 3.73%) -Thre.. - - -siphash_3u32 (18 samples, 0.03%) - - - -__schedule (8 samples, 0.02%) - - - -move_addr_to_kernel.part.0 (6 samples, 0.01%) - - - -__icmp_send (10 samples, 0.02%) - - - -dev_hard_start_xmit (6 samples, 0.01%) - - - -finish_task_switch.isra.0 (59 samples, 0.11%) - - - -menu_select (6 samples, 0.01%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (2,319 samples, 4.48%) -JavaT.. 
- - -xfs_vm_writepages (7 samples, 0.01%) - - - -sock_sendmsg (158 samples, 0.30%) - - - -__x86_indirect_thunk_rax (6 samples, 0.01%) - - - -xfrm_lookup_with_ifid (13 samples, 0.03%) - - - -ip_protocol_deliver_rcu (96 samples, 0.19%) - - - -nf_hook_slow (5 samples, 0.01%) - - - -syscall_enter_from_user_mode (7 samples, 0.01%) - - - -FutureTask_run_8b0bdf0834cb555c1e2aa8896a714d79bab78517 (2,539 samples, 4.90%) -Future.. - - -mark_wake_futex (5 samples, 0.01%) - - - -___pthread_cond_broadcast (48 samples, 0.09%) - - - -ip_make_skb (8 samples, 0.02%) - - - -do_syscall_64 (171 samples, 0.33%) - - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (2,339 samples, 4.51%) -Datag.. - - -select_task_rq_fair (14 samples, 0.03%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (17 samples, 0.03%) - - - -sock_alloc_send_pskb (8 samples, 0.02%) - - - -__dev_queue_xmit (16 samples, 0.03%) - - - -nvme_queue_rq (7 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (186 samples, 0.36%) - - - -_raw_spin_unlock_irqrestore (8 samples, 0.02%) - - - -JavaThreads_ensureUnsafeParkEvent_77b8c19cff94e6325a0dc99352d8624db969530b (21 samples, 0.04%) - - - -futex_wait_queue_me (20 samples, 0.04%) - - - -__udp4_lib_lookup (5 samples, 0.01%) - - - -icmp_rcv (18 samples, 0.03%) - - - -ip_output (5 samples, 0.01%) - - - -schedule (12 samples, 0.02%) - - - -sched_clock_cpu (45 samples, 0.09%) - - - -__udp4_lib_rcv (80 samples, 0.15%) - - - -raw_spin_rq_lock_nested (30 samples, 0.06%) - - - -psi_task_switch (69 samples, 0.13%) - - - -pick_next_task_fair (5 samples, 0.01%) - - - -__writeback_single_inode (7 samples, 0.01%) - - - -JNIObjectHandles_getObject_3a8ad95345633a155810448c9f6c1b478270ddcf (7 samples, 0.01%) - - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (1,606 samples, 3.10%) -Dat.. 
[Flamegraph image (SVG text residue removed): CPU profile of a Quarkus native executable sending UDP multicast datagrams. Recoverable hot paths: `MulticastResource.lambda$sendMulticasts$0` → `sun.nio.ch.DatagramChannelImpl.send0` → `__libc_sendto` → kernel `udp_sendmsg`/`ip_finish_output2`, plus `ReentrantLock`/`AbstractQueuedSynchronizer` park–unpark activity; roughly 16% of samples were idle (`do_idle`/`cpuidle_enter_state`).]
- - -ApplicationImpl_doStart_e1afde9430e67b7c57499ed67ff5f64600d056ec (9 samples, 0.02%) - - - -do_softirq (6 samples, 0.01%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (6 samples, 0.01%) - - - -JNIFunctions_GetIntField_cc20eaa35b54deb80db2eb05b754b96465828e2c (6 samples, 0.01%) - - - -selinux_ipv4_output (6 samples, 0.01%) - - - -schedule_idle (504 samples, 0.97%) - - - -futex_wait_queue_me (12 samples, 0.02%) - - - -kmem_cache_alloc_node (13 samples, 0.03%) - - - -sched_clock_cpu (6 samples, 0.01%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (26 samples, 0.05%) - - - -MulticastResource$$Lambda$2a51773ff173f1368fed8feb76f72e8954bae8ff_call_62c3c97a19c862d1b88b34ebccea6e8fb847006c (1,946 samples, 3.76%) -Mult.. - - -set_next_task_idle (23 samples, 0.04%) - - - -slab_free_freelist_hook.constprop.0 (5 samples, 0.01%) - - - -StackOverflowCheckImpl_protectYellowZone_c940e860df16ce6529c43e09187ac7003f0ff4ce (9 samples, 0.02%) - - - -__udp4_lib_lookup (5 samples, 0.01%) - - - -pick_next_task_fair (5 samples, 0.01%) - - - -ip_local_deliver_finish (103 samples, 0.20%) - - - -ThreadLocalHandles_popFramesIncluding_a2f2b35267b27849afd06acabb6c2e3bc3b22169 (7 samples, 0.01%) - - - -futex_wait (9 samples, 0.02%) - - - -tick_sched_timer (22 samples, 0.04%) - - - -__napi_poll (95 samples, 0.18%) - - - -JNIGeneratedMethodSupport_boxObjectInLocalHandle_b4008a17d25bb266b616277a16d8ca1073257780 (10 samples, 0.02%) - - - -Unsafe_park_8e78b5f196bf0524b1490ff9abb68dc337e02cca (527 samples, 1.02%) - - - -__cgroup_bpf_run_filter_skb (10 samples, 0.02%) - - - -ttwu_queue_wakelist (6 samples, 0.01%) - - - -net_rx_action (67 samples, 0.13%) - - - -Unsafe_park_8e78b5f196bf0524b1490ff9abb68dc337e02cca (572 samples, 1.10%) - - - -___pthread_mutex_lock (17 samples, 0.03%) - - - -select_task_rq_fair (5 samples, 0.01%) - - - -fib_lookup_good_nhc (9 samples, 0.02%) - - - -__icmp_send (18 samples, 0.03%) - - - -__futex_abstimed_wait_common 
(218 samples, 0.42%) - - - -handle_edge_irq (5 samples, 0.01%) - - - -__GI___sched_setaffinity_new (79 samples, 0.15%) - - - -__condvar_dec_grefs (50 samples, 0.10%) - - - -ip_output (6 samples, 0.01%) - - - -ip_route_output_key_hash_rcu (32 samples, 0.06%) - - - -selinux_ip_postroute (11 samples, 0.02%) - - - -psi_group_change (5 samples, 0.01%) - - - -update_min_vruntime (6 samples, 0.01%) - - - -__ip_make_skb (37 samples, 0.07%) - - - -get_futex_key (5 samples, 0.01%) - - - -ip_append_data (8 samples, 0.02%) - - - -process_backlog (65 samples, 0.13%) - - - -pick_next_entity (5 samples, 0.01%) - - - -MulticastResource_lambda$sendMulticasts$0_cb1f7b5dcaed7dd4e3f90d18bad517d67eae4d88 (1,934 samples, 3.73%) -Mult.. - - -ThreadPoolExecutor$Worker_run_f861978740f7fe309db28b47935f0d22284f1441 (2,319 samples, 4.48%) -Threa.. - - -cpu_startup_entry (57 samples, 0.11%) - - - -__udp4_lib_rcv (13 samples, 0.03%) - - - -dev_hard_start_xmit (11 samples, 0.02%) - - - -ip_generic_getfrag (5 samples, 0.01%) - - - -update_load_avg (18 samples, 0.03%) - - - -__kmalloc_node_track_caller (23 samples, 0.04%) - - - -selinux_ip_postroute_compat (9 samples, 0.02%) - - - -thread-5 (2,951 samples, 5.70%) -thread-5 - - -___pthread_mutex_lock (59 samples, 0.11%) - - - -__netif_receive_skb_core.constprop.0 (5 samples, 0.01%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (15 samples, 0.03%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (10 samples, 0.02%) - - - -ip_protocol_deliver_rcu (57 samples, 0.11%) - - - -kmem_cache_alloc_trace (5 samples, 0.01%) - - - -ip_push_pending_frames (23 samples, 0.04%) - - - -__update_load_avg_cfs_rq (6 samples, 0.01%) - - - -do_syscall_64 (131 samples, 0.25%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (11 samples, 0.02%) - - - -__netif_receive_skb_core.constprop.0 (5 samples, 0.01%) - - - -ip_append_data (28 samples, 0.05%) - - - -ip_route_output_key_hash (14 samples, 
0.03%) - - - -__x64_sys_futex (14 samples, 0.03%) - - - -copy_user_generic_string (9 samples, 0.02%) - - - -loopback_xmit (8 samples, 0.02%) - - - -copy_user_generic_string (25 samples, 0.05%) - - - -copy_user_generic_string (33 samples, 0.06%) - - - -ThreadLocalHandles_ensureCapacity_22e5abcd3c07151c01ffcc4e0e4a54317d42c2a8 (14 samples, 0.03%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (54 samples, 0.10%) - - - -ip_setup_cork (9 samples, 0.02%) - - - -getInetAddress_addr (5 samples, 0.01%) - - - -AbstractQueuedSynchronizer_release_a6f0f81643a4166b53eeaa75bf587c535be7bad9 (246 samples, 0.47%) - - - -Unsafe_unpark_d7094c561c57e072c1b45b47117ccd4b31ac594f (190 samples, 0.37%) - - - -icmp_unreach (5 samples, 0.01%) - - - -do_syscall_64 (11 samples, 0.02%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (1,934 samples, 3.73%) -Isol.. - - -thread-1 (2,029 samples, 3.92%) -thre.. - - -icmp_glue_bits (5 samples, 0.01%) - - - -ktime_get_update_offsets_now (729 samples, 1.41%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (2,319 samples, 4.48%) -Threa.. 
- - -timekeeping_max_deferment (18 samples, 0.03%) - - - -ip_setup_cork (20 samples, 0.04%) - - - -flush_smp_call_function_from_idle (20 samples, 0.04%) - - - -__ip_finish_output (8 samples, 0.02%) - - - -getInetAddress_family (27 samples, 0.05%) - - - -sched_clock_cpu (5 samples, 0.01%) - - - -ttwu_queue_wakelist (5 samples, 0.01%) - - - -kfence_ksize (8 samples, 0.02%) - - - -fib_table_lookup (85 samples, 0.16%) - - - -__napi_poll (111 samples, 0.21%) - - - -__netif_receive_skb_core.constprop.0 (7 samples, 0.01%) - - - -__inet_dev_addr_type (6 samples, 0.01%) - - - -native_sched_clock (7 samples, 0.01%) - - - -consume_skb (5 samples, 0.01%) - - - -tick_irq_enter (102 samples, 0.20%) - - - -__ip_append_data (44 samples, 0.08%) - - - -schedule (5 samples, 0.01%) - - - -__GI___pthread_cond_wait (395 samples, 0.76%) - - - -acpi_idle_enter (92 samples, 0.18%) - - - -PosixParkEvent_condWait_48f9d4da7d07c2044e85cec5495ae177057e5073 (348 samples, 0.67%) - - - -JNIGeneratedMethodSupport_boxObjectInLocalHandle_b4008a17d25bb266b616277a16d8ca1073257780 (22 samples, 0.04%) - - - -hash_futex (5 samples, 0.01%) - - - -csum_partial_copy_generic (18 samples, 0.03%) - - - -__get_user_nocheck_4 (64 samples, 0.12%) - - - -curl (19,198 samples, 37.05%) -curl - - -ipv4_mtu (5 samples, 0.01%) - - - -NET_InetAddressToSockaddr (91 samples, 0.18%) - - - -__skb_checksum (9 samples, 0.02%) - - - -Unsafe_park_8e78b5f196bf0524b1490ff9abb68dc337e02cca (438 samples, 0.85%) - - - -__update_load_avg_cfs_rq (9 samples, 0.02%) - - - -__ip_append_data (13 samples, 0.03%) - - - -__icmp_send (77 samples, 0.15%) - - - -__sys_sendto (171 samples, 0.33%) - - - -futex_wait (21 samples, 0.04%) - - - -netif_skb_features (5 samples, 0.01%) - - - -do_syscall_64 (342 samples, 0.66%) - - - -ip_rcv_core (7 samples, 0.01%) - - - -__ip_dev_find (5 samples, 0.01%) - - - -_copy_from_user (7 samples, 0.01%) - - - -PosixParkEvent_condWait_48f9d4da7d07c2044e85cec5495ae177057e5073 (518 samples, 1.00%) - - - 
-__GI___lll_lock_wake (16 samples, 0.03%) - - - -__x86_indirect_thunk_rax (6 samples, 0.01%) - - - -JNIGeneratedMethodSupport_unboxHandle_4bc785faa30981ed3ca002aef8d600c83e6f66e4 (8 samples, 0.02%) - - - -icmp_unreach (8 samples, 0.02%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (8 samples, 0.02%) - - - -ipv4_mtu (7 samples, 0.01%) - - - -wake_up_q (7 samples, 0.01%) - - - -select_task_rq_fair (19 samples, 0.04%) - - - -do_csum (5 samples, 0.01%) - - - -__softirqentry_text_start (120 samples, 0.23%) - - - -__list_add_valid (5 samples, 0.01%) - - - -__schedule (7 samples, 0.01%) - - - -ipv4_mtu (7 samples, 0.01%) - - - -rb_next (5 samples, 0.01%) - - - -validate_xmit_skb (11 samples, 0.02%) - - - -rcu_dynticks_inc (15 samples, 0.03%) - - - -___pthread_cond_broadcast (48 samples, 0.09%) - - - -icmp_route_lookup.constprop.0 (22 samples, 0.04%) - - - -__ip_select_ident (5 samples, 0.01%) - - - -udp_send_skb (136 samples, 0.26%) - - - -VMOperationControl_guaranteeOkayToBlock_6c18be2cba7df7cda24be664a42b08f35232e6be (7 samples, 0.01%) - - - -tick_nohz_idle_exit (30 samples, 0.06%) - - - -Unsafe_unpark_d7094c561c57e072c1b45b47117ccd4b31ac594f (101 samples, 0.19%) - - - -__softirqentry_text_start (81 samples, 0.16%) - - - -__entry_text_start (13 samples, 0.03%) - - - -icmp_route_lookup.constprop.0 (12 samples, 0.02%) - - - -__icmp_send (5 samples, 0.01%) - - - -StackOverflowCheckImpl_makeYellowZoneAvailable_096a6b7f9daf5fe9be382b399b6cbe747c1658f9 (15 samples, 0.03%) - - - -ip_route_output_flow (6 samples, 0.01%) - - - -ThreadLocalHandles_popFramesIncluding_a2f2b35267b27849afd06acabb6c2e3bc3b22169 (8 samples, 0.02%) - - - -ip_finish_output2 (115 samples, 0.22%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (1,946 samples, 3.76%) -Isol.. 
- - -selinux_ipv4_output (10 samples, 0.02%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (5 samples, 0.01%) - - - -__x64_sys_futex (6 samples, 0.01%) - - - -[unknown] (20 samples, 0.04%) - - - -ktime_get (21 samples, 0.04%) - - - -loopback_xmit (5 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (17 samples, 0.03%) - - - -ip_finish_output2 (131 samples, 0.25%) - - - -perf_ioctl (35 samples, 0.07%) - - - -__alloc_skb (18 samples, 0.03%) - - - -sched_clock_cpu (39 samples, 0.08%) - - - -__x64_sys_sendto (156 samples, 0.30%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (8 samples, 0.02%) - - - -AbstractQueuedSynchronizer_parkAndCheckInterrupt_5f77ccdb5d848784815a610576e6a7d474e2c6b6 (541 samples, 1.04%) - - - -do_idle (484 samples, 0.93%) - - - -__schedule (12 samples, 0.02%) - - - -syscall_return_via_sysret (7 samples, 0.01%) - - - -decode_session4 (10 samples, 0.02%) - - - -AbstractQueuedSynchronizer_release_a6f0f81643a4166b53eeaa75bf587c535be7bad9 (181 samples, 0.35%) - - - -fib_table_lookup (29 samples, 0.06%) - - - -Unsafe_park_8e78b5f196bf0524b1490ff9abb68dc337e02cca (536 samples, 1.03%) - - - -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (5 samples, 0.01%) - - - -mark_wake_futex (6 samples, 0.01%) - - - -__netif_receive_skb_one_core (73 samples, 0.14%) - - - -grab_cache_page_write_begin (13 samples, 0.03%) - - - -udp_sendmsg (7 samples, 0.01%) - - - -__dev_queue_xmit (19 samples, 0.04%) - - - -security_socket_sendmsg (5 samples, 0.01%) - - - -__ip_append_data (11 samples, 0.02%) - - - -decode_session4 (11 samples, 0.02%) - - - -__alloc_skb (5 samples, 0.01%) - - - -_raw_spin_lock_irqsave (13 samples, 0.03%) - - - -__netif_receive_skb_core.constprop.0 (10 samples, 0.02%) - - - -__kmalloc_node_track_caller (23 samples, 0.04%) - - - -___pthread_mutex_lock (22 samples, 0.04%) - - - -__alloc_skb (22 samples, 0.04%) - - - -validate_xmit_skb (12 samples, 0.02%) - - - 
-thread-9 (3,395 samples, 6.55%) -thread-9 - - -__napi_poll (102 samples, 0.20%) - - - -__icmp_send (23 samples, 0.04%) - - - -__hrtimer_run_queues (11 samples, 0.02%) - - - -__GI___lll_lock_wake (22 samples, 0.04%) - - - -__update_load_avg_se (8 samples, 0.02%) - - - -__x86_indirect_thunk_rax (6 samples, 0.01%) - - - -__futex_abstimed_wait_common (254 samples, 0.49%) - - - -__schedule (8 samples, 0.02%) - - - -process_backlog (108 samples, 0.21%) - - - -selinux_ipv4_output (8 samples, 0.02%) - - - -Unsafe_unpark_d7094c561c57e072c1b45b47117ccd4b31ac594f (195 samples, 0.38%) - - - -siphash_3u32 (23 samples, 0.04%) - - - -FutureTask_run_8b0bdf0834cb555c1e2aa8896a714d79bab78517 (1,823 samples, 3.52%) -Fut.. - - -ip_route_output_key_hash_rcu (20 samples, 0.04%) - - - -AbstractQueuedSynchronizer_acquireQueued_5fdba57beb8676d17b9542ff03179d64f478f5cf (595 samples, 1.15%) - - - -switch_mm_irqs_off (12 samples, 0.02%) - - - -__alloc_skb (5 samples, 0.01%) - - - -ip_route_output_key_hash_rcu (23 samples, 0.04%) - - - -copy_user_generic_string (45 samples, 0.09%) - - - -JNIGeneratedMethodSupport_boxObjectInLocalHandle_b4008a17d25bb266b616277a16d8ca1073257780 (12 samples, 0.02%) - - - -tsc_verify_tsc_adjust (6 samples, 0.01%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (6 samples, 0.01%) - - - -__wrgsbase_inactive (8 samples, 0.02%) - - - -futex_wait_queue_me (13 samples, 0.03%) - - - -__x64_sys_futex (31 samples, 0.06%) - - - -selinux_ip_postroute (6 samples, 0.01%) - - - -update_rq_clock (7 samples, 0.01%) - - - -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (5 samples, 0.01%) - - - -mark_wake_futex (6 samples, 0.01%) - - - -select_task_rq_fair (5 samples, 0.01%) - - - -icmp_push_reply (5 samples, 0.01%) - - - -syscall_enter_from_user_mode (5 samples, 0.01%) - - - -kmem_cache_alloc_trace (6 samples, 0.01%) - - - -ip_finish_output (7 samples, 0.01%) - - - 
-DatagramChannelImpl_sendFromNativeBuffer_bc12ee464c05741dc5b1fe45dfbf70e5fe3085b7 (1,276 samples, 2.46%) -Da.. - - -__GI___lll_lock_wake (82 samples, 0.16%) - - - -native_sched_clock (6 samples, 0.01%) - - - -__update_load_avg_se (7 samples, 0.01%) - - - -DatagramChannelImpl_send0_d05a7d3bffd13f93567ba253ded1608e364b9beb (1,264 samples, 2.44%) -Da.. - - -__x86_indirect_thunk_rax (5 samples, 0.01%) - - - -__icmp_send (48 samples, 0.09%) - - - -update_curr (69 samples, 0.13%) - - - -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (3,468 samples, 6.69%) -DatagramC.. - - -enqueue_to_backlog (10 samples, 0.02%) - - - -selinux_socket_sendmsg (6 samples, 0.01%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (7 samples, 0.01%) - - - -__virt_addr_valid (5 samples, 0.01%) - - - -process_backlog (6 samples, 0.01%) - - - -__ip_finish_output (14 samples, 0.03%) - - - -__kmalloc_node_track_caller (24 samples, 0.05%) - - - -__dev_queue_xmit (8 samples, 0.02%) - - - -[perf] (377 samples, 0.73%) - - - -JNIGeneratedMethodSupport_boxObjectInLocalHandle_b4008a17d25bb266b616277a16d8ca1073257780 (20 samples, 0.04%) - - - -dev_hard_start_xmit (5 samples, 0.01%) - - - -sock_alloc_send_pskb (10 samples, 0.02%) - - - -kmem_cache_free (8 samples, 0.02%) - - - -futex_wait (16 samples, 0.03%) - - - -ReentrantLock_lock_c898d1de8855d56ce6120446f90aa9e4e86ede9f (590 samples, 1.14%) - - - -ReentrantLock_unlock_86cdca028e9dd52644b7822ba738ec004cf0c360 (230 samples, 0.44%) - - - -__udp4_lib_rcv (80 samples, 0.15%) - - - -psi_group_change (9 samples, 0.02%) - - - -update_rq_clock (17 samples, 0.03%) - - - -Java_sun_nio_ch_DatagramChannelImpl_send0 (14 samples, 0.03%) - - - -__x86_indirect_thunk_rax (6 samples, 0.01%) - - - -process_backlog (94 samples, 0.18%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (1,946 samples, 3.76%) -Java.. 
- - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (6 samples, 0.01%) - - - -Pthread_pthread_cond_broadcast_eebf0c33ba863f59f420d2e79631b37822b99e02 (54 samples, 0.10%) - - - -__x64_sys_futex (5 samples, 0.01%) - - - -LockSupport_park_ad3b0439cc81f3747a99376fb5cad89af288f997 (599 samples, 1.16%) - - - -getInetAddress_family (5 samples, 0.01%) - - - -__switch_to_asm (14 samples, 0.03%) - - - -native_sched_clock (6 samples, 0.01%) - - - -ttwu_queue_wakelist (8 samples, 0.02%) - - - -flush_smp_call_function_queue (5 samples, 0.01%) - - - -_copy_from_iter (10 samples, 0.02%) - - - -JNIObjectHandles_pushLocalFrame_72b60bb5d6b4f261f0f202d949a56da9c54ff0c0 (15 samples, 0.03%) - - - -__pthread_mutex_cond_lock (6 samples, 0.01%) - - - -update_rq_clock (51 samples, 0.10%) - - - -__napi_poll (205 samples, 0.40%) - - - -ip_route_output_key_hash (7 samples, 0.01%) - - - -__libc_sendto (1,133 samples, 2.19%) -_.. - - -enqueue_entity (9 samples, 0.02%) - - - -futex_wait_queue_me (12 samples, 0.02%) - - - -hrtimer_next_event_without (30 samples, 0.06%) - - - -do_softirq (122 samples, 0.24%) - - - -__switch_to_asm (18 samples, 0.03%) - - - -entry_SYSCALL_64_after_hwframe (5 samples, 0.01%) - - - -select_task_rq_fair (10 samples, 0.02%) - - - -__netif_receive_skb_one_core (193 samples, 0.37%) - - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (1,109 samples, 2.14%) -D.. 
- - -sched_clock (8 samples, 0.02%) - - - -sock_sendmsg (330 samples, 0.64%) - - - -udp_send_skb (130 samples, 0.25%) - - - -ThreadLocalHandles_popFramesIncluding_a2f2b35267b27849afd06acabb6c2e3bc3b22169 (6 samples, 0.01%) - - - -__pthread_mutex_cond_lock (5 samples, 0.01%) - - - -__schedule (9 samples, 0.02%) - - - -do_syscall_64 (5 samples, 0.01%) - - - -__ip_local_out (8 samples, 0.02%) - - - -acpi_processor_ffh_cstate_enter (6 samples, 0.01%) - - - -__softirqentry_text_start (110 samples, 0.21%) - - - -futex_wait (6 samples, 0.01%) - - - -save_fpregs_to_fpstate (6 samples, 0.01%) - - - -__ip_select_ident (10 samples, 0.02%) - - - -wake_up_q (15 samples, 0.03%) - - - -finish_task_switch.isra.0 (5 samples, 0.01%) - - - -MultiThreadedMonitorSupport_monitorExit_f765f7445e650efe1207579ef06c6f8ac708d1b5 (19 samples, 0.04%) - - - -dequeue_task (13 samples, 0.03%) - - - -_raw_spin_lock (5 samples, 0.01%) - - - -do_syscall_64 (159 samples, 0.31%) - - - -ReentrantLock_lock_c898d1de8855d56ce6120446f90aa9e4e86ede9f (954 samples, 1.84%) -R.. 
- - -sched_clock_cpu (19 samples, 0.04%) - - - -mark_wake_futex (7 samples, 0.01%) - - - -psi_group_change (57 samples, 0.11%) - - - -Unsafe_unpark_d7094c561c57e072c1b45b47117ccd4b31ac594f (162 samples, 0.31%) - - - -enqueue_task_fair (9 samples, 0.02%) - - - -timerqueue_iterate_next (8 samples, 0.02%) - - - -NET_InetAddressToSockaddr (102 samples, 0.20%) - - - -kthread (22 samples, 0.04%) - - - -selinux_ip_postroute_compat (10 samples, 0.02%) - - - -ip_route_output_key_hash_rcu (12 samples, 0.02%) - - - -Unsafe_park_8e78b5f196bf0524b1490ff9abb68dc337e02cca (493 samples, 0.95%) - - - -fib_table_lookup (39 samples, 0.08%) - - - -kthread (5 samples, 0.01%) - - - -ip_route_output_key_hash (18 samples, 0.03%) - - - -IsolateEnterStub_JNIFunctions_ExceptionCheck_c3880ec5388acdaaf0a33f93c718f75d394cf800_56464c7018196a101b3a4a0b8a60eff8ca309807 (5 samples, 0.01%) - - - -ip_finish_output2 (17 samples, 0.03%) - - - -JNIObjectHandles_pushLocalFrame_72b60bb5d6b4f261f0f202d949a56da9c54ff0c0 (15 samples, 0.03%) - - - -xfrm_lookup_route (5 samples, 0.01%) - - - -ThreadLocalHandles_pushFrame_c070e6fc2960d253faa099aad7972764e55d4ca2 (10 samples, 0.02%) - - - -sched_clock_cpu (6 samples, 0.01%) - - - -__cgroup_bpf_run_filter_skb (17 samples, 0.03%) - - - -__libc_sendto (1,190 samples, 2.30%) -_.. - - -__napi_poll (78 samples, 0.15%) - - - -__get_user_nocheck_4 (47 samples, 0.09%) - - - -__GI___lll_lock_wake (75 samples, 0.14%) - - - -__ip_finish_output (6 samples, 0.01%) - - - -ReentrantLock_lock_c898d1de8855d56ce6120446f90aa9e4e86ede9f (597 samples, 1.15%) - - - -__schedule (14 samples, 0.03%) - - - -__x64_sys_futex (19 samples, 0.04%) - - - -__GI___pthread_disable_asynccancel (11 samples, 0.02%) - - - -do_syscall_64 (6 samples, 0.01%) - - - -__entry_text_start (22 samples, 0.04%) - - - -Java_sun_nio_ch_DatagramChannelImpl_send0 (1,211 samples, 2.34%) -J.. 
- - -IsolateEnterStub_JNIFunctions_ExceptionCheck_c3880ec5388acdaaf0a33f93c718f75d394cf800_56464c7018196a101b3a4a0b8a60eff8ca309807 (35 samples, 0.07%) - - - -ip_protocol_deliver_rcu (66 samples, 0.13%) - - - -copy_user_generic_string (16 samples, 0.03%) - - - -cpu_startup_entry (94 samples, 0.18%) - - - -JNIObjectHandles_pushLocalFrame_72b60bb5d6b4f261f0f202d949a56da9c54ff0c0 (10 samples, 0.02%) - - - -update_min_vruntime (5 samples, 0.01%) - - - -psi_task_change (18 samples, 0.03%) - - - -AbstractQueuedSynchronizer_release_a6f0f81643a4166b53eeaa75bf587c535be7bad9 (259 samples, 0.50%) - - - -___pthread_cond_broadcast (68 samples, 0.13%) - - - -wake_up_q (6 samples, 0.01%) - - - -MultiThreadedMonitorSupport_getOrCreateMonitorFromObject_922cf11599fedc7c0bfa829f3c2f09fcdebe2077 (7 samples, 0.01%) - - - -process_backlog (11 samples, 0.02%) - - - -ip_send_skb (264 samples, 0.51%) - - - -thread-10 (3,094 samples, 5.97%) -thread-10 - - -DatagramChannelImpl_sendFromNativeBuffer_bc12ee464c05741dc5b1fe45dfbf70e5fe3085b7 (5 samples, 0.01%) - - - -icmp_glue_bits (6 samples, 0.01%) - - - -ip_send_skb (126 samples, 0.24%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (1,823 samples, 3.52%) -Pos.. 
- - -_copy_from_user (5 samples, 0.01%) - - - -__cgroup_bpf_run_filter_skb (21 samples, 0.04%) - - - -kmem_cache_alloc_trace (7 samples, 0.01%) - - - -__ip_make_skb (25 samples, 0.05%) - - - -do_futex (7 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (9 samples, 0.02%) - - - -ip_rcv (5 samples, 0.01%) - - - -ip_route_output_flow (7 samples, 0.01%) - - - -AbstractQueuedSynchronizer_acquire_d7c03c3cee25dd5a735b5a4334799f668f70ef36 (563 samples, 1.09%) - - - -psi_group_change (32 samples, 0.06%) - - - -icmp_route_lookup.constprop.0 (5 samples, 0.01%) - - - -__GI___pthread_mutex_unlock_usercnt (75 samples, 0.14%) - - - -entry_SYSCALL_64_after_hwframe (9 samples, 0.02%) - - - -submit_bio_noacct (6 samples, 0.01%) - - - -AbstractQueuedSynchronizer_acquire_d7c03c3cee25dd5a735b5a4334799f668f70ef36 (672 samples, 1.30%) - - - -process_backlog (10 samples, 0.02%) - - - -csum_partial_copy_generic (19 samples, 0.04%) - - - -native_sched_clock (6 samples, 0.01%) - - - -__entry_text_start (11 samples, 0.02%) - - - -icmp_push_reply (8 samples, 0.02%) - - - -getInetAddress_family (9 samples, 0.02%) - - - -native_sched_clock (5 samples, 0.01%) - - - -ip_finish_output2 (125 samples, 0.24%) - - - -_copy_from_user (9 samples, 0.02%) - - - -ip_finish_output2 (118 samples, 0.23%) - - - -__ip_local_out (12 samples, 0.02%) - - - -NET_InetAddressToSockaddr (124 samples, 0.24%) - - - -__ip_local_out (6 samples, 0.01%) - - - -nf_hook_slow (6 samples, 0.01%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (3,469 samples, 6.70%) -Thread_ru.. 
- - -icmp_rcv (11 samples, 0.02%) - - - -__entry_text_start (28 samples, 0.05%) - - - -do_softirq (83 samples, 0.16%) - - - -kfree (6 samples, 0.01%) - - - -irq_enter_rcu (9 samples, 0.02%) - - - -pick_next_task_fair (16 samples, 0.03%) - - - -__ip_make_skb (9 samples, 0.02%) - - - -ThreadLocalHandles_popFramesIncluding_a2f2b35267b27849afd06acabb6c2e3bc3b22169 (5 samples, 0.01%) - - - -kfree (5 samples, 0.01%) - - - -sock_sendmsg (168 samples, 0.32%) - - - -[unknown] (24 samples, 0.05%) - - - -__netif_receive_skb_core.constprop.0 (18 samples, 0.03%) - - - -__netif_receive_skb_core.constprop.0 (6 samples, 0.01%) - - - -icmp_glue_bits (8 samples, 0.02%) - - - -udp_send_skb (151 samples, 0.29%) - - - -pick_next_task_fair (6 samples, 0.01%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (5 samples, 0.01%) - - - -ThreadPoolExecutor$Worker_run_f861978740f7fe309db28b47935f0d22284f1441 (2,539 samples, 4.90%) -Thread.. - - -__get_user_nocheck_4 (5 samples, 0.01%) - - - -native_sched_clock (493 samples, 0.95%) - - - -__schedule (7 samples, 0.01%) - - - -__ip_select_ident (6 samples, 0.01%) - - - -enqueue_entity (659 samples, 1.27%) - - - -icmp_route_lookup.constprop.0 (9 samples, 0.02%) - - - -xfrm_lookup_with_ifid (34 samples, 0.07%) - - - -__pthread_mutex_cond_lock (6 samples, 0.01%) - - - -raw_spin_rq_lock_nested (32 samples, 0.06%) - - - -psi_group_change (56 samples, 0.11%) - - - -JNIFunctions_GetIntField_cc20eaa35b54deb80db2eb05b754b96465828e2c (26 samples, 0.05%) - - - -dequeue_task (5 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (373 samples, 0.72%) - - - -kmem_cache_free (5 samples, 0.01%) - - - -cpuidle_enter (5 samples, 0.01%) - - - -__blk_mq_run_hw_queue (7 samples, 0.01%) - - - -dequeue_task_fair (5 samples, 0.01%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (2,575 samples, 4.97%) -PosixJ.. 
- - -update_load_avg (461 samples, 0.89%) - - - -NET_InetAddressToSockaddr (35 samples, 0.07%) - - - -__check_object_size (8 samples, 0.02%) - - - -nr_iowait_cpu (26 samples, 0.05%) - - - -do_syscall_64 (43 samples, 0.08%) - - - -ip_local_deliver_finish (97 samples, 0.19%) - - - -__ip_append_data (21 samples, 0.04%) - - - -__ip_select_ident (6 samples, 0.01%) - - - -__libc_sendto (6 samples, 0.01%) - - - -do_futex (16 samples, 0.03%) - - - -__dev_queue_xmit (11 samples, 0.02%) - - - -irqtime_account_irq (7 samples, 0.01%) - - - -__schedule (10 samples, 0.02%) - - - -__icmp_send (17 samples, 0.03%) - - - -icmp_route_lookup.constprop.0 (19 samples, 0.04%) - - - -__calloc (7 samples, 0.01%) - - - -enqueue_to_backlog (7 samples, 0.01%) - - - -Pthread_pthread_cond_broadcast_eebf0c33ba863f59f420d2e79631b37822b99e02 (70 samples, 0.14%) - - - -ip_rcv_core (6 samples, 0.01%) - - - -__switch_to_asm (9 samples, 0.02%) - - - -native_write_msr (5 samples, 0.01%) - - - -update_load_avg (7 samples, 0.01%) - - - -new_sync_write (31 samples, 0.06%) - - - -copy_user_generic_string (5 samples, 0.01%) - - - -ip_rcv_core (6 samples, 0.01%) - - - -sockfd_lookup_light (5 samples, 0.01%) - - - -icmp_rcv (33 samples, 0.06%) - - - -__GI___lll_lock_wake (74 samples, 0.14%) - - - -irqtime_account_irq (14 samples, 0.03%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (2,575 samples, 4.97%) -Isolat.. 
[Flame graph omitted: perf CPU profile of the native executable (51,814 samples). Worker threads running `MulticastResource.sendMulticasts` spend most of their time in `DatagramChannelImpl.send`, with notable contention in `ReentrantLock` lock/unlock and thread park/unpark, and the remainder in the kernel UDP send path (`udp_sendmsg`, `ip_finish_output2`, `__udp4_lib_rcv`).]
- - -syscall_return_via_sysret (5 samples, 0.01%) - - - -ip_finish_output2 (9 samples, 0.02%) - - - -PosixParkEvent_unpark_ffa65ac66d3e43a0f362cb01e11f41ef58ea7eaf (93 samples, 0.18%) - - - -thread-2 (3,528 samples, 6.81%) -thread-2 - - -acpi_processor_ffh_cstate_enter (15 samples, 0.03%) - - - -syscall_return_via_sysret (6 samples, 0.01%) - - - -AbstractQueuedSynchronizer_release_a6f0f81643a4166b53eeaa75bf587c535be7bad9 (189 samples, 0.36%) - - - -kmem_cache_free (10 samples, 0.02%) - - - -__x86_indirect_thunk_rax (9 samples, 0.02%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (32 samples, 0.06%) - - - -kfree (7 samples, 0.01%) - - - -__udp4_lib_rcv (13 samples, 0.03%) - - - -mark_wake_futex (5 samples, 0.01%) - - - -__softirqentry_text_start (6 samples, 0.01%) - - - -__icmp_send (75 samples, 0.14%) - - - -__ip_select_ident (5 samples, 0.01%) - - - -ip_local_deliver_finish (58 samples, 0.11%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (10 samples, 0.02%) - - - -entry_SYSCALL_64_after_hwframe (6 samples, 0.01%) - - - -psi_group_change (9 samples, 0.02%) - - - -mark_wake_futex (5 samples, 0.01%) - - - -process_backlog (5 samples, 0.01%) - - - -futex_wait (15 samples, 0.03%) - - - -MultiThreadedMonitorSupport_slowPathMonitorEnter_5c2ec80c70301e1f54c9deef94b70b719d5a10f5 (30 samples, 0.06%) - - - -udp4_lib_lookup2 (8 samples, 0.02%) - - - -ip_send_skb (155 samples, 0.30%) - - - -rcu_eqs_enter.constprop.0 (7 samples, 0.01%) - - - -xfrm_lookup_with_ifid (10 samples, 0.02%) - - - -selinux_socket_sendmsg (6 samples, 0.01%) - - - -__dev_queue_xmit (8 samples, 0.02%) - - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (949 samples, 1.83%) -D.. - - -icmp_rcv (12 samples, 0.02%) - - - -cpuacct_charge (15 samples, 0.03%) - - - -process_backlog (201 samples, 0.39%) - - - -__libc_sendto (1,147 samples, 2.21%) -_.. 
- - -__dev_queue_xmit (8 samples, 0.02%) - - - -selinux_ip_postroute (16 samples, 0.03%) - - - -__netif_receive_skb_one_core (102 samples, 0.20%) - - - -tick_check_broadcast_expired (6 samples, 0.01%) - - - -ip_push_pending_frames (27 samples, 0.05%) - - - -hash_futex (6 samples, 0.01%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (12 samples, 0.02%) - - - -asm_common_interrupt (5 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (17 samples, 0.03%) - - - -psi_task_change (10 samples, 0.02%) - - - -JavaThreads_unpark_2ea667c9c895f0321a5b2853fc974fe6018f8b6f (166 samples, 0.32%) - - - -futex_wait_queue_me (28 samples, 0.05%) - - - -sock_alloc_send_pskb (7 samples, 0.01%) - - - -affine_move_task (6 samples, 0.01%) - - - -__ip_append_data (39 samples, 0.08%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (1,934 samples, 3.73%) -Posi.. - - -raw_spin_rq_lock_nested (9 samples, 0.02%) - - - -__GI___lll_lock_wake (88 samples, 0.17%) - - - -__udp4_lib_rcv (15 samples, 0.03%) - - - -schedule (10 samples, 0.02%) - - - -__wrgsbase_inactive (78 samples, 0.15%) - - - -do_futex (7 samples, 0.01%) - - - -ReentrantLock_unlock_86cdca028e9dd52644b7822ba738ec004cf0c360 (419 samples, 0.81%) - - - -JavaThreads_park_9cfef0c461baa0314aafe1415754561e32f1e386 (496 samples, 0.96%) - - - -fib_table_lookup (46 samples, 0.09%) - - - -___pthread_mutex_lock (34 samples, 0.07%) - - - -do_syscall_64 (17 samples, 0.03%) - - - -native_load_tls (5 samples, 0.01%) - - - -hrtimer_interrupt (776 samples, 1.50%) - - - -syscall_enter_from_user_mode (9 samples, 0.02%) - - - -acpi_processor_ffh_cstate_enter (6 samples, 0.01%) - - - -udp_sendmsg (7 samples, 0.01%) - - - -dequeue_task_fair (7 samples, 0.01%) - - - -Pthread_pthread_mutex_unlock_4817c536fb459a4a32739c23b0ed198cc1fb485f (60 samples, 0.12%) - - - 
-Pthread_pthread_mutex_unlock_4817c536fb459a4a32739c23b0ed198cc1fb485f (79 samples, 0.15%) - - - -__entry_text_start (27 samples, 0.05%) - - - -__futex_abstimed_wait_common (316 samples, 0.61%) - - - -xfrm_lookup_with_ifid (16 samples, 0.03%) - - - -security_skb_classify_flow (6 samples, 0.01%) - - - -JavaThreads_ensureUnsafeParkEvent_77b8c19cff94e6325a0dc99352d8624db969530b (23 samples, 0.04%) - - - -dequeue_entity (5 samples, 0.01%) - - - -__sched_setaffinity (18 samples, 0.03%) - - - -PosixParkEvent_condWait_48f9d4da7d07c2044e85cec5495ae177057e5073 (514 samples, 0.99%) - - - -acpi_processor_ffh_cstate_enter (8 samples, 0.02%) - - - -process_backlog (7 samples, 0.01%) - - - -_raw_spin_lock_irqsave (5 samples, 0.01%) - - - -__local_bh_enable_ip (109 samples, 0.21%) - - - -JNIGeneratedMethodSupport_unboxHandle_4bc785faa30981ed3ca002aef8d600c83e6f66e4 (6 samples, 0.01%) - - - -psi_group_change (6 samples, 0.01%) - - - -validate_xmit_xfrm (7 samples, 0.01%) - - - -__alloc_skb (9 samples, 0.02%) - - - -sock_wfree (8 samples, 0.02%) - - - -ip_finish_output2 (5 samples, 0.01%) - - - -thread-3 (2,628 samples, 5.07%) -thread-3 - - -__rdgsbase_inactive (60 samples, 0.12%) - - - -wake_up_q (6 samples, 0.01%) - - - -__schedule (8 samples, 0.02%) - - - -ip_send_skb (150 samples, 0.29%) - - - -__wrgsbase_inactive (8 samples, 0.02%) - - - -__ip_local_out (6 samples, 0.01%) - - - -irqtime_account_irq (6 samples, 0.01%) - - - -__update_load_avg_se (6 samples, 0.01%) - - - -DatagramChannelImpl_sendFromNativeBuffer_bc12ee464c05741dc5b1fe45dfbf70e5fe3085b7 (1,352 samples, 2.61%) -Da.. 
- - -_raw_spin_lock_irqsave (9 samples, 0.02%) - - - -__x86_indirect_thunk_rax (13 samples, 0.03%) - - - -sched_idle_set_state (8 samples, 0.02%) - - - -tick_nohz_idle_enter (14 samples, 0.03%) - - - -ParkEvent_initializeOnce_68f5df089169fde77d25ea87d820b2e9cca25332 (27 samples, 0.05%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (5 samples, 0.01%) - - - -ThreadPoolExecutor$Worker_run_f861978740f7fe309db28b47935f0d22284f1441 (2,194 samples, 4.23%) -Threa.. - - -__GI___lll_lock_wake (25 samples, 0.05%) - - - -ksys_write (32 samples, 0.06%) - - - -pick_next_task_fair (11 samples, 0.02%) - - - -sched_clock_cpu (5 samples, 0.01%) - - - -psi_group_change (7 samples, 0.01%) - - - -__schedule (24 samples, 0.05%) - - - -native_sched_clock (9 samples, 0.02%) - - - -kmem_cache_free (5 samples, 0.01%) - - - -psi_group_change (41 samples, 0.08%) - - - -fib_table_lookup (9 samples, 0.02%) - - - -schedule_hrtimeout_range_clock (10 samples, 0.02%) - - - -__netif_receive_skb_core.constprop.0 (8 samples, 0.02%) - - - -do_softirq (110 samples, 0.21%) - - - -__dev_queue_xmit (5 samples, 0.01%) - - - -__ip_append_data (21 samples, 0.04%) - - - -icmp_push_reply (13 samples, 0.03%) - - - -iterate_groups (21 samples, 0.04%) - - - -NET_InetAddressToSockaddr (68 samples, 0.13%) - - - -ttwu_do_activate (6 samples, 0.01%) - - - -__GI___pthread_mutex_unlock_usercnt (83 samples, 0.16%) - - - -thread-8 (2,485 samples, 4.80%) -threa.. 
- - -finish_task_switch.isra.0 (5 samples, 0.01%) - - - -__libc_sendto (5 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (70 samples, 0.14%) - - - -cpuidle_enter (94 samples, 0.18%) - - - -enqueue_task (7 samples, 0.01%) - - - -ip_append_data (9 samples, 0.02%) - - - -__icmp_send (62 samples, 0.12%) - - - -wake_up_q (7 samples, 0.01%) - - - -__update_load_avg_cfs_rq (15 samples, 0.03%) - - - -__alloc_skb (20 samples, 0.04%) - - - -___pthread_mutex_lock (5 samples, 0.01%) - - - -__clone3 (14 samples, 0.03%) - - - -__update_load_avg_se (161 samples, 0.31%) - - - -do_futex (19 samples, 0.04%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (12 samples, 0.02%) - - - -process_backlog (5 samples, 0.01%) - - - -___pthread_mutex_lock (34 samples, 0.07%) - - - -futex_wait (6 samples, 0.01%) - - - -Unsafe_park_8e78b5f196bf0524b1490ff9abb68dc337e02cca (500 samples, 0.96%) - - - -blk_mq_dispatch_rq_list (7 samples, 0.01%) - - - -start_thread (2,276 samples, 4.39%) -start.. - - -ip_append_data (11 samples, 0.02%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (9 samples, 0.02%) - - - -thread-4 (3,014 samples, 5.82%) -thread-4 - - -validate_xmit_xfrm (6 samples, 0.01%) - - - -__icmp_send (128 samples, 0.25%) - - - -AbstractQueuedSynchronizer_acquire_d7c03c3cee25dd5a735b5a4334799f668f70ef36 (624 samples, 1.20%) - - - -__x86_indirect_thunk_rax (29 samples, 0.06%) - - - -MultiThreadedMonitorSupport_getOrCreateMonitor_2ecf5995a7a109dacede518d33424ec5ebddfde6 (15 samples, 0.03%) - - - -IsolateEnterStub_JNIFunctions_ExceptionCheck_c3880ec5388acdaaf0a33f93c718f75d394cf800_56464c7018196a101b3a4a0b8a60eff8ca309807 (10 samples, 0.02%) - - - -net_rx_action (109 samples, 0.21%) - - - -__alloc_skb (20 samples, 0.04%) - - - -select_task_rq_fair (8 samples, 0.02%) - - - -entry_SYSCALL_64_after_hwframe (6 samples, 0.01%) - - - -memcg_slab_post_alloc_hook (5 samples, 0.01%) - - - -__clone3 (1,946 samples, 3.76%) -__cl.. 
- - -validate_xmit_skb (15 samples, 0.03%) - - - -do_syscall_64 (7 samples, 0.01%) - - - -AbstractQueuedSynchronizer_unparkSuccessor_851859a085ed0112a4406e9c9b4d253092c06d1d (241 samples, 0.47%) - - - -tick_nohz_get_sleep_length (63 samples, 0.12%) - - - -__inet_dev_addr_type (8 samples, 0.02%) - - - -___pthread_mutex_lock (21 samples, 0.04%) - - - -entry_SYSCALL_64_after_hwframe (218 samples, 0.42%) - - - -DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (2,573 samples, 4.97%) -Datagr.. - - -ThreadPoolExecutor_runWorker_d6102a49f44caa9353f47edf6df17054308b7151 (2,194 samples, 4.23%) -Threa.. - - -__ip_make_skb (29 samples, 0.06%) - - - -AbstractQueuedSynchronizer_unparkSuccessor_851859a085ed0112a4406e9c9b4d253092c06d1d (210 samples, 0.41%) - - - -kmem_cache_alloc_node (15 samples, 0.03%) - - - -do_syscall_64 (6 samples, 0.01%) - - - -dev_hard_start_xmit (19 samples, 0.04%) - - - -__sys_sendto (194 samples, 0.37%) - - - -icmp_push_reply (12 samples, 0.02%) - - - -__netif_receive_skb_one_core (100 samples, 0.19%) - - - -select_task_rq_fair (13 samples, 0.03%) - - - -DatagramChannelImpl_send0_d05a7d3bffd13f93567ba253ded1608e364b9beb (5 samples, 0.01%) - - - -kmem_cache_alloc_node (13 samples, 0.03%) - - - -__futex_abstimed_wait_common (339 samples, 0.65%) - - - -__entry_text_start (33 samples, 0.06%) - - - -__GI___pthread_cond_wait (281 samples, 0.54%) - - - -ip_route_output_key_hash (9 samples, 0.02%) - - - -ipv4_mtu (10 samples, 0.02%) - - - -process_backlog (106 samples, 0.20%) - - - -ip_output (5 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (59 samples, 0.11%) - - - -JavaThreads_unpark_2ea667c9c895f0321a5b2853fc974fe6018f8b6f (97 samples, 0.19%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (27 samples, 0.05%) - - - -Pthread_pthread_mutex_unlock_4817c536fb459a4a32739c23b0ed198cc1fb485f (76 samples, 0.15%) - - - -sock_alloc_send_pskb (11 samples, 0.02%) - - - -ip_send_skb (139 samples, 0.27%) - - - 
-do_syscall_64 (20 samples, 0.04%) - - - -wake_up_q (20 samples, 0.04%) - - - -__icmp_send (13 samples, 0.03%) - - - -slab_free_freelist_hook.constprop.0 (5 samples, 0.01%) - - - -sock_alloc_send_pskb (5 samples, 0.01%) - - - -native_sched_clock (36 samples, 0.07%) - - - -entry_SYSCALL_64_after_hwframe (22 samples, 0.04%) - - - -DatagramChannelImpl_send0_d05a7d3bffd13f93567ba253ded1608e364b9beb (7 samples, 0.01%) - - - -selinux_ipv4_output (14 samples, 0.03%) - - - -__GI___pthread_mutex_unlock_usercnt (76 samples, 0.15%) - - - -JavaThreads_unpark_2ea667c9c895f0321a5b2853fc974fe6018f8b6f (119 samples, 0.23%) - - - -___pthread_cond_broadcast (7 samples, 0.01%) - - - -netdev_core_pick_tx (5 samples, 0.01%) - - - -select_task_rq_fair (6 samples, 0.01%) - - - -csum_partial_copy_generic (18 samples, 0.03%) - - - -LockSupport_park_ad3b0439cc81f3747a99376fb5cad89af288f997 (577 samples, 1.11%) - - - -AbstractQueuedSynchronizer_acquire_d7c03c3cee25dd5a735b5a4334799f668f70ef36 (555 samples, 1.07%) - - - -__x86_indirect_thunk_rax (26 samples, 0.05%) - - - -netdev_core_pick_tx (11 samples, 0.02%) - - - -ip_local_deliver_finish (69 samples, 0.13%) - - - -psi_task_switch (7 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (6 samples, 0.01%) - - - -__update_load_avg_se (6 samples, 0.01%) - - - -skb_free_head (8 samples, 0.02%) - - - -PosixParkEvent_unpark_ffa65ac66d3e43a0f362cb01e11f41ef58ea7eaf (77 samples, 0.15%) - - - -__condvar_dec_grefs (33 samples, 0.06%) - - - -native_sched_clock (22 samples, 0.04%) - - - -entry_SYSCALL_64_after_hwframe (9 samples, 0.02%) - - - -loopback_xmit (12 samples, 0.02%) - - - -rcu_needs_cpu (5 samples, 0.01%) - - - -Pthread_pthread_cond_broadcast_eebf0c33ba863f59f420d2e79631b37822b99e02 (78 samples, 0.15%) - - - -__libc_sendto (964 samples, 1.86%) -_.. 
- - -tick_nohz_get_sleep_length (10 samples, 0.02%) - - - -tick_nohz_idle_enter (49 samples, 0.09%) - - - -nf_hook_slow (6 samples, 0.01%) - - - -getInetAddress_family (6 samples, 0.01%) - - - -Unsafe_unpark_d7094c561c57e072c1b45b47117ccd4b31ac594f (175 samples, 0.34%) - - - -ktime_get_update_offsets_now (38 samples, 0.07%) - - - -getInetAddress_family (57 samples, 0.11%) - - - -getInetAddress_addr (34 samples, 0.07%) - - - -kmem_cache_alloc_node (12 samples, 0.02%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (2,167 samples, 4.18%) -Java.. - - -entry_SYSCALL_64_after_hwframe (181 samples, 0.35%) - - - -siphash_3u32 (18 samples, 0.03%) - - - -entry_SYSCALL_64_after_hwframe (26 samples, 0.05%) - - - -netif_rx (5 samples, 0.01%) - - - -ip_setup_cork (9 samples, 0.02%) - - - -kmem_cache_alloc_trace (5 samples, 0.01%) - - - -decode_session4 (10 samples, 0.02%) - - - -__ip_append_data (43 samples, 0.08%) - - - -icmp_push_reply (13 samples, 0.03%) - - - -kmem_cache_alloc_node (6 samples, 0.01%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (5 samples, 0.01%) - - - -__ip_make_skb (9 samples, 0.02%) - - - -process_backlog (6 samples, 0.01%) - - - -FutureTask_run_8b0bdf0834cb555c1e2aa8896a714d79bab78517 (2,194 samples, 4.23%) -Futur.. - - -DatagramChannelImpl_send_a43258374f29d362070cc463bb9c00cfc7759f9e (1,290 samples, 2.49%) -Da.. 
- - -sched_clock_cpu (28 samples, 0.05%) - - - -DatagramChannelImpl_beginWrite_e0b047bda5d2ef03b38b2294da1d52e27566ee32 (103 samples, 0.20%) - - - -__x64_sys_futex (9 samples, 0.02%) - - - -__switch_to_asm (8 samples, 0.02%) - - - -udp_send_skb (142 samples, 0.27%) - - - -ip_append_data (17 samples, 0.03%) - - - -__ip_make_skb (28 samples, 0.05%) - - - -__ip_dev_find (5 samples, 0.01%) - - - -kmem_cache_free (5 samples, 0.01%) - - - -do_futex (7 samples, 0.01%) - - - -MultiThreadedMonitorSupport_monitorExit_f765f7445e650efe1207579ef06c6f8ac708d1b5 (11 samples, 0.02%) - - - -ip_finish_output2 (10 samples, 0.02%) - - - -psi_task_switch (11 samples, 0.02%) - - - -ip_skb_dst_mtu (11 samples, 0.02%) - - - -sched_clock_cpu (6 samples, 0.01%) - - - -JNIFunctions_GetObjectField_b896a889ade18324cbc5d5d90b52cdc5d4886f72 (10 samples, 0.02%) - - - -__napi_poll (109 samples, 0.21%) - - - -rcu_idle_exit (28 samples, 0.05%) - - - -futex_wait (11 samples, 0.02%) - - - -__netif_receive_skb_core.constprop.0 (5 samples, 0.01%) - - - -JavaThreads_ensureUnsafeParkEvent_77b8c19cff94e6325a0dc99352d8624db969530b (56 samples, 0.11%) - - - -rb_erase (5 samples, 0.01%) - - - -ThreadLocalHandles_ensureCapacity_22e5abcd3c07151c01ffcc4e0e4a54317d42c2a8 (6 samples, 0.01%) - - - -enqueue_to_backlog (10 samples, 0.02%) - - - -__libc_sendto (1,374 samples, 2.65%) -__.. 
- - -IsolateEnterStub_JNIFunctions_ExceptionCheck_c3880ec5388acdaaf0a33f93c718f75d394cf800_56464c7018196a101b3a4a0b8a60eff8ca309807 (12 samples, 0.02%) - - - -__cgroup_bpf_run_filter_skb (10 samples, 0.02%) - - - -psi_task_change (9 samples, 0.02%) - - - -__dev_queue_xmit (9 samples, 0.02%) - - - -AbstractQueuedSynchronizer_unparkSuccessor_851859a085ed0112a4406e9c9b4d253092c06d1d (150 samples, 0.29%) - - - -AbstractQueuedSynchronizer_acquire_d7c03c3cee25dd5a735b5a4334799f668f70ef36 (396 samples, 0.76%) - - - -__update_load_avg_cfs_rq (5 samples, 0.01%) - - - -CEntryPointSnippets_attachUnattachedThread_624b0c1d4e08bdf4608c1290142e118ef51d6192 (8 samples, 0.02%) - - - -PosixParkEvent_condWait_48f9d4da7d07c2044e85cec5495ae177057e5073 (430 samples, 0.83%) - - - -DatagramChannelImpl_send0_d05a7d3bffd13f93567ba253ded1608e364b9beb (1,275 samples, 2.46%) -Da.. - - -selinux_ip_postroute (9 samples, 0.02%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (5 samples, 0.01%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (1,823 samples, 3.52%) -Jav.. 
- - -schedule (12 samples, 0.02%) - - - -psi_group_change (8 samples, 0.02%) - - - -Inet4Address_isLinkLocalAddress_ce47843b990249e34a84313af9f6958152044ee1 (5 samples, 0.01%) - - - -set_next_entity (8 samples, 0.02%) - - - -ip_append_data (17 samples, 0.03%) - - - -ip_local_deliver_finish (96 samples, 0.19%) - - - -process_backlog (74 samples, 0.14%) - - - -entry_SYSCALL_64_after_hwframe (167 samples, 0.32%) - - - -acpi_idle_enter (5 samples, 0.01%) - - - -ip_protocol_deliver_rcu (97 samples, 0.19%) - - - -ip_route_output_key_hash_rcu (6 samples, 0.01%) - - - -__schedule (14 samples, 0.03%) - - - -do_futex (12 samples, 0.02%) - - - -csum_partial_copy_generic (25 samples, 0.05%) - - - -ThreadLocalHandles_pushFrame_c070e6fc2960d253faa099aad7972764e55d4ca2 (18 samples, 0.03%) - - - -__x64_sys_epoll_pwait (10 samples, 0.02%) - - - -Buffer_position_542e9d12d78d28ae335243c3729c7f4a18caa5f2 (6 samples, 0.01%) - - - -__pthread_cleanup_pop (6 samples, 0.01%) - - - -wb_writeback (7 samples, 0.01%) - - - -selinux_ip_postroute_compat (10 samples, 0.02%) - - - -selinux_ip_postroute_compat (11 samples, 0.02%) - - - -__icmp_send (12 samples, 0.02%) - - - -memcg_slab_post_alloc_hook (8 samples, 0.02%) - - - -JavaThreads_park_9cfef0c461baa0314aafe1415754561e32f1e386 (520 samples, 1.00%) - - - -__alloc_skb (42 samples, 0.08%) - - - -selinux_ip_postroute (9 samples, 0.02%) - - - -udp_send_skb (100 samples, 0.19%) - - - -ThreadLocalHandles_popFramesIncluding_a2f2b35267b27849afd06acabb6c2e3bc3b22169 (5 samples, 0.01%) - - - -futex_wait_setup (7 samples, 0.01%) - - - -psi_group_change (5 samples, 0.01%) - - - -___pthread_cond_broadcast (8 samples, 0.02%) - - - -alloc_skb_with_frags (8 samples, 0.02%) - - - -Java_sun_nio_ch_DatagramChannelImpl_send0 (5 samples, 0.01%) - - - -fib_table_lookup (5 samples, 0.01%) - - - -update_rq_clock (401 samples, 0.77%) - - - -__schedule (14 samples, 0.03%) - - - -native_sched_clock (5 samples, 0.01%) - - - 
-DatagramChannelImpl_send_6bb2ce127b1f52f0bf68e97a6085aa686e4b83f4 (1,946 samples, 3.76%) -Data.. - - -cpuidle_enter_state (94 samples, 0.18%) - - - -schedule (12 samples, 0.02%) - - - -JavaThreads_park_9cfef0c461baa0314aafe1415754561e32f1e386 (865 samples, 1.67%) - - - -dequeue_entity (6 samples, 0.01%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (5 samples, 0.01%) - - - -psi_group_change (6 samples, 0.01%) - - - -__GI___lll_lock_wake (87 samples, 0.17%) - - - -ThreadLocalHandles_pushFrame_c070e6fc2960d253faa099aad7972764e55d4ca2 (9 samples, 0.02%) - - - -JNIGeneratedMethodSupport_boxObjectInLocalHandle_b4008a17d25bb266b616277a16d8ca1073257780 (21 samples, 0.04%) - - - -__libc_sendto (5 samples, 0.01%) - - - -getInetAddress_family (6 samples, 0.01%) - - - -move_addr_to_kernel.part.0 (5 samples, 0.01%) - - - -ip_protocol_deliver_rcu (10 samples, 0.02%) - - - -___pthread_mutex_lock (20 samples, 0.04%) - - - -ReentrantLock_unlock_86cdca028e9dd52644b7822ba738ec004cf0c360 (311 samples, 0.60%) - - - -__get_user_nocheck_4 (5 samples, 0.01%) - - - -syscall_enter_from_user_mode (6 samples, 0.01%) - - - -_start (10 samples, 0.02%) - - - -IsolateEnterStub_JNIFunctions_ExceptionCheck_c3880ec5388acdaaf0a33f93c718f75d394cf800_56464c7018196a101b3a4a0b8a60eff8ca309807 (25 samples, 0.05%) - - - -__GI___ioctl_time64 (198 samples, 0.38%) - - - -loopback_xmit (7 samples, 0.01%) - - - -do_csum (12 samples, 0.02%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (6 samples, 0.01%) - - - -__switch_to (5 samples, 0.01%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (2,276 samples, 4.39%) -Isola.. 
- - -__local_bh_enable_ip (122 samples, 0.24%) - - - -__ip_local_out (5 samples, 0.01%) - - - -__ip_make_skb (25 samples, 0.05%) - - - -__entry_text_start (37 samples, 0.07%) - - - -__condvar_dec_grefs (52 samples, 0.10%) - - - -__entry_text_start (11 samples, 0.02%) - - - -__entry_text_start (24 samples, 0.05%) - - - -__futex_abstimed_wait_common (322 samples, 0.62%) - - - -__x86_indirect_thunk_rax (20 samples, 0.04%) - - - -__GI___lll_lock_wake (16 samples, 0.03%) - - - -futex_wait (11 samples, 0.02%) - - - -futex_wait_queue_me (6 samples, 0.01%) - - - -ip_route_output_key_hash_rcu (23 samples, 0.04%) - - - -entry_SYSCALL_64_after_hwframe (22 samples, 0.04%) - - - -futex_wait_setup (5 samples, 0.01%) - - - -icmp_push_reply (28 samples, 0.05%) - - - -__ip_finish_output (8 samples, 0.02%) - - - -siphash_3u32 (9 samples, 0.02%) - - - -ParkEvent_initializeOnce_68f5df089169fde77d25ea87d820b2e9cca25332 (45 samples, 0.09%) - - - -icmp_glue_bits (9 samples, 0.02%) - - - -__x86_indirect_thunk_rax (12 samples, 0.02%) - - - -psi_group_change (6 samples, 0.01%) - - - -Java_sun_nio_ch_DatagramChannelImpl_send0 (5 samples, 0.01%) - - - -do_syscall_64 (14 samples, 0.03%) - - - -__libc_sendto (5 samples, 0.01%) - - - -syscall_enter_from_user_mode (6 samples, 0.01%) - - - -pick_next_task_fair (7 samples, 0.01%) - - - -raw_spin_rq_lock_nested (15 samples, 0.03%) - - - -__udp4_lib_rcv (18 samples, 0.03%) - - - -net_rx_action (112 samples, 0.22%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (51 samples, 0.10%) - - - -schedule (6 samples, 0.01%) - - - -sock_alloc_send_pskb (5 samples, 0.01%) - - - -pick_next_entity (5 samples, 0.01%) - - - -rcu_eqs_enter.constprop.0 (6 samples, 0.01%) - - - -__netif_receive_skb_core.constprop.0 (11 samples, 0.02%) - - - -select_task_rq_fair (9 samples, 0.02%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (1,823 samples, 3.52%) -Thr.. 
- - -raw_spin_rq_lock_nested (16 samples, 0.03%) - - - -ip_route_output_key_hash_rcu (10 samples, 0.02%) - - - -ip_make_skb (5 samples, 0.01%) - - - -cpu_startup_entry (5 samples, 0.01%) - - - -icmp_route_lookup.constprop.0 (17 samples, 0.03%) - - - -__local_bh_enable_ip (116 samples, 0.22%) - - - -loopback_xmit (6 samples, 0.01%) - - - -fib_lookup_good_nhc (6 samples, 0.01%) - - - -csum_partial_copy_generic (26 samples, 0.05%) - - - -write_cache_pages (7 samples, 0.01%) - - - -ip_setup_cork (6 samples, 0.01%) - - - -__update_load_avg_cfs_rq (7 samples, 0.01%) - - - -ThreadLocalHandles_create_e73ceb82cc40aa1b541873d16f11c8b5b16f2175 (20 samples, 0.04%) - - - -smp_call_function_single (17 samples, 0.03%) - - - -acpi_idle_do_entry (92 samples, 0.18%) - - - -JNIFunctions_GetIntField_cc20eaa35b54deb80db2eb05b754b96465828e2c (17 samples, 0.03%) - - - -__x64_sys_futex (14 samples, 0.03%) - - - -__ip_local_out (6 samples, 0.01%) - - - -raw_local_deliver (6 samples, 0.01%) - - - -ip_rcv (9 samples, 0.02%) - - - -sched_clock_cpu (11 samples, 0.02%) - - - -acpi_processor_ffh_cstate_enter (6 samples, 0.01%) - - - -ip_finish_output2 (134 samples, 0.26%) - - - -alloc_skb_with_frags (10 samples, 0.02%) - - - -migrate_enable (7 samples, 0.01%) - - - -idle_cpu (6 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (47 samples, 0.09%) - - - -entry_SYSCALL_64_after_hwframe (11 samples, 0.02%) - - - -__ip_local_out (8 samples, 0.02%) - - - -PosixParkEvent_condWait_48f9d4da7d07c2044e85cec5495ae177057e5073 (558 samples, 1.08%) - - - -netif_skb_features (5 samples, 0.01%) - - - -ReentrantLock$Sync_tryRelease_a66c341958d8201110d2de33406f88fc73bac424 (28 samples, 0.05%) - - - -entry_SYSCALL_64_after_hwframe (27 samples, 0.05%) - - - -[perf] (394 samples, 0.76%) - - - -selinux_ipv4_output (11 samples, 0.02%) - - - -selinux_socket_sendmsg (9 samples, 0.02%) - - - -__napi_poll (107 samples, 0.21%) - - - -JNIObjectHandles_createLocal_d841b948c1d20e64912c894e72fb1e4feeb98975 (11 samples, 
0.02%) - - - -__ip_select_ident (7 samples, 0.01%) - - - -__local_bh_enable_ip (75 samples, 0.14%) - - - -memcg_slab_post_alloc_hook (9 samples, 0.02%) - - - -__entry_text_start (32 samples, 0.06%) - - - -ThreadPoolExecutor$Worker_run_f861978740f7fe309db28b47935f0d22284f1441 (2,575 samples, 4.97%) -Thread.. - - -__icmp_send (18 samples, 0.03%) - - - -irqtime_account_irq (59 samples, 0.11%) - - - -mark_wake_futex (5 samples, 0.01%) - - - -schedule (27 samples, 0.05%) - - - -irqtime_account_irq (8 samples, 0.02%) - - - -queue_core_balance (7 samples, 0.01%) - - - -JNIFunctions_GetIntField_cc20eaa35b54deb80db2eb05b754b96465828e2c (17 samples, 0.03%) - - - -DatagramChannelImpl_beginWrite_e0b047bda5d2ef03b38b2294da1d52e27566ee32 (6 samples, 0.01%) - - - -do_idle (94 samples, 0.18%) - - - -select_task_rq_fair (9 samples, 0.02%) - - - -rb_insert_color (15 samples, 0.03%) - - - -update_load_avg (8 samples, 0.02%) - - - -ThreadLocalHandles_ensureCapacity_22e5abcd3c07151c01ffcc4e0e4a54317d42c2a8 (7 samples, 0.01%) - - - -psi_task_change (6 samples, 0.01%) - - - -JNIObjectHandles_popLocalFramesIncluding_33a493f8782cc84a75a079908d7e5b418b3738fc (6 samples, 0.01%) - - - -getInetAddress_family (27 samples, 0.05%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (2,539 samples, 4.90%) -PosixJ.. - - -MultiThreadedMonitorSupport_slowPathMonitorExit_183871de385508d0f6b4f0881e8e0c44628018b3 (18 samples, 0.03%) - - - -Unsafe_park_8e78b5f196bf0524b1490ff9abb68dc337e02cca (593 samples, 1.14%) - - - -_raw_spin_lock_irqsave (11 samples, 0.02%) - - - -icmp_route_lookup.constprop.0 (16 samples, 0.03%) - - - -select_task_rq_fair (11 samples, 0.02%) - - - -ThreadPoolExecutor_runWorker_d6102a49f44caa9353f47edf6df17054308b7151 (1,946 samples, 3.76%) -Thre.. 
[SVG text residue from the deleted flame graph image removed; the file rendered perf stack frames (e.g. DatagramChannelImpl_sendFromNativeBuffer, Java_sun_nio_ch_DatagramChannelImpl_send0, __x64_sys_sendto) with sample counts and percentages.]
diff --git a/_versions/2.7/guides/images/native-reference-neo4j-db-info.png b/_versions/2.7/guides/images/native-reference-neo4j-db-info.png
deleted file mode 100644
index 8099b9c1aac..00000000000
Binary files a/_versions/2.7/guides/images/native-reference-neo4j-db-info.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/native-reference-perf-flamegraph-no-symbols.svg b/_versions/2.7/guides/images/native-reference-perf-flamegraph-no-symbols.svg
deleted file mode 100644
index 4433456ca80..00000000000
--- a/_versions/2.7/guides/images/native-reference-perf-flamegraph-no-symbols.svg
+++ /dev/null
@@ -1,3984 +0,0 @@
[3,984 lines of SVG markup removed; the image was a "Flame Graph" of the debugging-native-1.0.0-SNAPSHOT-runner process showing kernel and application stack frames with sample counts and percentages.]
0.01%) - - - -native_irq_return_iret (1 samples, 0.01%) - - - -__cgroup_account_cputime (1 samples, 0.01%) - - - -[perf] (302 samples, 3.53%) -[pe.. - - -cpumask_any_and_distribute (3 samples, 0.04%) - - - -update_load_avg (1 samples, 0.01%) - - - -cpumask_next (1 samples, 0.01%) - - - -sched_clock_cpu (60 samples, 0.70%) - - - -ttwu_do_activate (2 samples, 0.02%) - - - -hrtimer_wakeup (2 samples, 0.02%) - - - -__list_del_entry_valid (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (3 samples, 0.04%) - - - -__sysvec_apic_timer_interrupt (1 samples, 0.01%) - - - -__mod_timer (1 samples, 0.01%) - - - -__libc_start_call_main (302 samples, 3.53%) -__l.. - - -irq_enter_rcu (43 samples, 0.50%) - - - -native_sched_clock (3 samples, 0.04%) - - - -__schedule (2 samples, 0.02%) - - - -set_normalized_timespec64 (1 samples, 0.01%) - - - -native_write_msr (1 samples, 0.01%) - - - -lock_page_memcg (1 samples, 0.01%) - - - -exit_mmap (1 samples, 0.01%) - - - -__ip_queue_xmit (1 samples, 0.01%) - - - -update_sd_lb_stats.constprop.0 (1 samples, 0.01%) - - - -__list_add_valid (1 samples, 0.01%) - - - -do_idle (5 samples, 0.06%) - - - -nmi_handle (1 samples, 0.01%) - - - -update_load_avg (1 samples, 0.01%) - - - -_raw_spin_unlock_irqrestore (1 samples, 0.01%) - - - -net_rx_action (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -[unknown] (1 samples, 0.01%) - - - -__schedule (1 samples, 0.01%) - - - -menu_select (6 samples, 0.07%) - - - -iterate_groups (1 samples, 0.01%) - - - -__update_load_avg_se (1 samples, 0.01%) - - - -cpu_stop_queue_work (1 samples, 0.01%) - - - -do_syscall_64 (21 samples, 0.25%) - - - -pick_next_task_idle (1 samples, 0.01%) - - - -__alloc_pages (1 samples, 0.01%) - - - -IsolateEnterStub__VmLocatorSymbol__vmLocatorSymbol__bec84cad1f8708102cd8814ef3e496531bf6ff5b__bbf2dbb2d6a07a8e1dae8a3072b01ad86ecc1a50 (2 samples, 0.02%) - - - -native_write_cr2 (2 samples, 0.02%) - - - -acpi_processor_ffh_cstate_enter (19 samples, 0.22%) - - 
- -do_syscall_64 (1 samples, 0.01%) - - - -add_mm_counter_fast (1 samples, 0.01%) - - - -do_syscall_64 (3 samples, 0.04%) - - - -acpi_idle_enter (2 samples, 0.02%) - - - -__mmap (1 samples, 0.01%) - - - -perf_event_for_each_child (17 samples, 0.20%) - - - -sugov_iowait_boost (1 samples, 0.01%) - - - -arch_scale_freq_tick (1 samples, 0.01%) - - - -dcb_table (1 samples, 0.01%) - - - -[unknown] (2 samples, 0.02%) - - - -perf_sample_event_took (1 samples, 0.01%) - - - -update_load_avg (1 samples, 0.01%) - - - -alloc_cpumask_var_node (1 samples, 0.01%) - - - -__list_add_valid (1 samples, 0.01%) - - - -update_curr (1 samples, 0.01%) - - - -__fget_files (1 samples, 0.01%) - - - -sched_setaffinity (18 samples, 0.21%) - - - -exit_to_user_mode_prepare (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (3 samples, 0.04%) - - - -do_syscall_64 (1 samples, 0.01%) - - - -do_epoll_wait (1 samples, 0.01%) - - - -[unknown] (3 samples, 0.04%) - - - -update_blocked_averages (1 samples, 0.01%) - - - -sched_clock_cpu (3 samples, 0.04%) - - - -[unknown] (6 samples, 0.07%) - - - -clear_buddies (1 samples, 0.01%) - - - -perf_sample_event_took (1 samples, 0.01%) - - - -pick_next_entity (1 samples, 0.01%) - - - -__schedule (1 samples, 0.01%) - - - -cpuidle_enter (1 samples, 0.01%) - - - -perf_event_update_userpage (1 samples, 0.01%) - - - -[unknown] (2 samples, 0.02%) - - - -[perf] (302 samples, 3.53%) -[pe.. 
- - -__x64_sys_mprotect (2 samples, 0.02%) - - - -native_write_msr (6 samples, 0.07%) - - - -event_function_call (17 samples, 0.20%) - - - -ntloop-thread-3 (3 samples, 0.04%) - - - -cpuidle_enter (1 samples, 0.01%) - - - -acpi_idle_enter (1 samples, 0.01%) - - - -_dl_relocate_object (1 samples, 0.01%) - - - -ttwu_do_wakeup (1 samples, 0.01%) - - - -native_write_msr (3 samples, 0.04%) - - - -rcu_gp_kthread (1 samples, 0.01%) - - - -update_irq_load_avg (1 samples, 0.01%) - - - -asm_exc_page_fault (1 samples, 0.01%) - - - -__switch_to (1 samples, 0.01%) - - - -tick_sched_handle (2 samples, 0.02%) - - - -kthread (3 samples, 0.04%) - - - -timerqueue_del (1 samples, 0.01%) - - - -cpuidle_enter_state (551 samples, 6.44%) -cpuidle_.. - - -__common_interrupt (1 samples, 0.01%) - - - -__do_set_cpus_allowed (1 samples, 0.01%) - - - -nmi_handle (1 samples, 0.01%) - - - -lock_timer_base (1 samples, 0.01%) - - - -event_function (12 samples, 0.14%) - - - -enqueue_task_fair (1 samples, 0.01%) - - - -ttwu_do_activate (2 samples, 0.02%) - - - -end_repeat_nmi (1 samples, 0.01%) - - - -__fput (1 samples, 0.01%) - - - -__tcp_transmit_skb (2 samples, 0.02%) - - - -napi_complete_done (1 samples, 0.01%) - - - -next_zone (1 samples, 0.01%) - - - -menu_select (4 samples, 0.05%) - - - -__rb_insert_augmented (1 samples, 0.01%) - - - -avc_has_perm (2 samples, 0.02%) - - - -__set_cpus_allowed_ptr_locked (2 samples, 0.02%) - - - -acpi_idle_do_entry (292 samples, 3.41%) -acp.. - - -[perf] (302 samples, 3.53%) -[pe.. 
- - -perf_event_task_tick (1 samples, 0.01%) - - - -cpumask_any_and_distribute (1 samples, 0.01%) - - - -__schedule (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -irqtime_account_irq (60 samples, 0.70%) - - - -__mod_lruvec_page_state (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (9 samples, 0.11%) - - - -do_syscall_64 (1 samples, 0.01%) - - - -dequeue_task_fair (1 samples, 0.01%) - - - -try_address (1 samples, 0.01%) - - - -futex_wait_queue_me (1 samples, 0.01%) - - - -nf_conntrack_tcp_packet (1 samples, 0.01%) - - - -reweight_entity (1 samples, 0.01%) - - - -tick_sched_timer (1 samples, 0.01%) - - - -[perf] (302 samples, 3.53%) -[pe.. - - -get_cpu_device (1 samples, 0.01%) - - - -timerqueue_add (3 samples, 0.04%) - - - -native_sched_clock (1 samples, 0.01%) - - - -ntloop-thread-4 (1 samples, 0.01%) - - - -__list_add_valid (1 samples, 0.01%) - - - -tick_do_update_jiffies64 (1 samples, 0.01%) - - - -cpumask_next_wrap (1 samples, 0.01%) - - - -reweight_entity (4 samples, 0.05%) - - - -[unknown] (27 samples, 0.32%) - - - -begin_new_exec (1 samples, 0.01%) - - - -task_tick_fair (1 samples, 0.01%) - - - -irqentry_enter (3 samples, 0.04%) - - - -run_timer_softirq (1 samples, 0.01%) - - - -run_posix_cpu_timers (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (37 samples, 0.43%) - - - -__irq_exit_rcu (1 samples, 0.01%) - - - -rcu_note_context_switch (1 samples, 0.01%) - - - -delay_halt (1 samples, 0.01%) - - - -native_write_msr (6 samples, 0.07%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -asm_sysvec_apic_timer_interrupt (1 samples, 0.01%) - - - -sock_sendmsg (2 samples, 0.02%) - - - -__rdgsbase_inactive (1 samples, 0.01%) - - - -__GI___ioctl_time64 (209 samples, 2.44%) -__.. 
- - -[unknown] (3 samples, 0.04%) - - - -update_process_times (1 samples, 0.01%) - - - -swapper (564 samples, 6.59%) -swapper - - -schedule (1 samples, 0.01%) - - - -_int_malloc (1 samples, 0.01%) - - - -irq_enter_rcu (15 samples, 0.18%) - - - -tick_sched_timer (8 samples, 0.09%) - - - -__next_timer_interrupt (1 samples, 0.01%) - - - -__next_timer_interrupt (1 samples, 0.01%) - - - -psi_avgs_work (1 samples, 0.01%) - - - -sysvec_apic_timer_interrupt (1 samples, 0.01%) - - - -__cond_resched (1 samples, 0.01%) - - - -IsolateEnterStub__VmLocatorSymbol__vmLocatorSymbol__bec84cad1f8708102cd8814ef3e496531bf6ff5b__bbf2dbb2d6a07a8e1dae8a3072b01ad86ecc1a50 (5 samples, 0.06%) - - - -acpi_processor_ffh_cstate_enter (2 samples, 0.02%) - - - -bit_xfer (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (6 samples, 0.07%) - - - -set_cpus_allowed_common (1 samples, 0.01%) - - - -try_charge_memcg (1 samples, 0.01%) - - - -__update_load_avg_cfs_rq (1 samples, 0.01%) - - - -update_curr (1 samples, 0.01%) - - - -perf_sample_event_took (1 samples, 0.01%) - - - -[unknown] (8 samples, 0.09%) - - - -update_load_avg (1 samples, 0.01%) - - - -perf (306 samples, 3.58%) -perf - - -on_each_cpu_cond_mask (1 samples, 0.01%) - - - -load_balance (1 samples, 0.01%) - - - -__hrtimer_run_queues (41 samples, 0.48%) - - - -cpu_latency_qos_limit (1 samples, 0.01%) - - - -hrtimer_interrupt (81 samples, 0.95%) - - - -SIGINT_handler (2 samples, 0.02%) - - - -dequeue_entity (1 samples, 0.01%) - - - -irqentry_exit_to_user_mode (1 samples, 0.01%) - - - -update_dl_rq_load_avg (1 samples, 0.01%) - - - -update_load_avg (1 samples, 0.01%) - - - -ktime_get (1 samples, 0.01%) - - - -cpumask_next_and (1 samples, 0.01%) - - - -__schedule (1 samples, 0.01%) - - - -native_read_msr (1 samples, 0.01%) - - - -dequeue_entity (2 samples, 0.02%) - - - -IsolateEnterStub__VmLocatorSymbol__vmLocatorSymbol__bec84cad1f8708102cd8814ef3e496531bf6ff5b__bbf2dbb2d6a07a8e1dae8a3072b01ad86ecc1a50 (1 samples, 0.01%) - - - 
-tick_nohz_tick_stopped (1 samples, 0.01%) - - - -__schedule (1 samples, 0.01%) - - - -default_send_IPI_single_phys (1 samples, 0.01%) - - - -mem_cgroup_charge_statistics.constprop.0 (1 samples, 0.01%) - - - -do_user_addr_fault (1 samples, 0.01%) - - - -[unknown] (2 samples, 0.02%) - - - -schedule (1 samples, 0.01%) - - - -new_sync_write (2 samples, 0.02%) - - - -native_write_msr (3 samples, 0.04%) - - - -[unknown] (10 samples, 0.12%) - - - -sysvec_apic_timer_interrupt (60 samples, 0.70%) - - - -update_rq_clock (1 samples, 0.01%) - - - -update_load_avg (4 samples, 0.05%) - - - -is_cpu_allowed (1 samples, 0.01%) - - - -task_tick_idle (1 samples, 0.01%) - - - -native_write_msr (1 samples, 0.01%) - - - -__run_timers.part.0 (2 samples, 0.02%) - - - -scheduler_tick (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -schedule (1 samples, 0.01%) - - - -__netif_receive_skb_core.constprop.0 (1 samples, 0.01%) - - - -menu_select (2 samples, 0.02%) - - - -sysvec_apic_timer_interrupt (196 samples, 2.29%) -s.. 
- - -__wrgsbase_inactive (1 samples, 0.01%) - - - -set_next_entity (1 samples, 0.01%) - - - -asm_sysvec_apic_timer_interrupt (1 samples, 0.01%) - - - -native_write_msr (2 samples, 0.02%) - - - -[unknown] (9 samples, 0.11%) - - - -asm_exc_page_fault (1 samples, 0.01%) - - - -xfsaild (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (3 samples, 0.04%) - - - -tick_nohz_stop_tick (1 samples, 0.01%) - - - -poll_idle (6 samples, 0.07%) - - - -hrtimer_forward (2 samples, 0.02%) - - - -handle_mm_fault (1 samples, 0.01%) - - - -__rdgsbase_inactive (1 samples, 0.01%) - - - -__ip_local_out (1 samples, 0.01%) - - - -schedule (1 samples, 0.01%) - - - -[debugging-native-1.0.0-SNAPSHOT-runner] (1 samples, 0.01%) - - - -cpuidle_not_available (1 samples, 0.01%) - - - -update_load_avg (1 samples, 0.01%) - - - -_raw_spin_lock_irqsave (1 samples, 0.01%) - - - -__raw_callee_save___native_queued_spin_unlock (1 samples, 0.01%) - - - -__x64_sys_execve (1 samples, 0.01%) - - - -rcu_nmi_exit (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (2 samples, 0.02%) - - - -__GI___execve (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -__switch_to_asm (1 samples, 0.01%) - - - -enqueue_entity (1 samples, 0.01%) - - - -sched_clock_cpu (1 samples, 0.01%) - - - -__raw_callee_save___native_queued_spin_unlock (2 samples, 0.02%) - - - -[unknown] (2 samples, 0.02%) - - - -cpu_startup_entry (1 samples, 0.01%) - - - -pick_next_entity (1 samples, 0.01%) - - - -igb_rd32 (1 samples, 0.01%) - - - -balance_stop (1 samples, 0.01%) - - - -get_obj_cgroup_from_current (1 samples, 0.01%) - - - -do_syscall_64 (1 samples, 0.01%) - - - -cpuidle_enter_state (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -run_timer_softirq (1 samples, 0.01%) - - - -force_qs_rnp (2 samples, 0.02%) - - - -_dl_sysdep_start (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (5 samples, 0.06%) - - - -ntloop-thread-0 (1 samples, 0.01%) - - - -__do_sys_clone (1 samples, 
0.01%) - - - -start_kernel (5 samples, 0.06%) - - - -cpuidle_enter_state (5 samples, 0.06%) - - - -psi_group_change (5 samples, 0.06%) - - - -IsolateEnterStub__VmLocatorSymbol__vmLocatorSymbol__bec84cad1f8708102cd8814ef3e496531bf6ff5b__bbf2dbb2d6a07a8e1dae8a3072b01ad86ecc1a50 (3 samples, 0.04%) - - - -arch_perf_update_userpage (1 samples, 0.01%) - - - -process_one_work (2 samples, 0.02%) - - - -__switch_to (1 samples, 0.01%) - - - -readBytes (1 samples, 0.01%) - - - -calc_timer_values (1 samples, 0.01%) - - - -raw_spin_rq_lock_nested (1 samples, 0.01%) - - - -__lll_lock_wait_private (1 samples, 0.01%) - - - -secondary_startup_64_no_verify (565 samples, 6.61%) -secondary.. - - -schedule (1 samples, 0.01%) - - - -native_write_msr (3 samples, 0.04%) - - - -enqueue_task (2 samples, 0.02%) - - - -rwsem_spin_on_owner (1 samples, 0.01%) - - - -IsolateEnterStub__VmLocatorSymbol__vmLocatorSymbol__bec84cad1f8708102cd8814ef3e496531bf6ff5b__bbf2dbb2d6a07a8e1dae8a3072b01ad86ecc1a50 (1 samples, 0.01%) - - - -run_timer_softirq (2 samples, 0.02%) - - - -rcu_read_unlock_strict (1 samples, 0.01%) - - - -perf_event_idx_default (1 samples, 0.01%) - - - -rb_next (1 samples, 0.01%) - - - -tick_sched_timer (1 samples, 0.01%) - - - -osq_lock (2 samples, 0.02%) - - - -post_alloc_hook (1 samples, 0.01%) - - - -tick_sched_handle (1 samples, 0.01%) - - - -dl_task_check_affinity (1 samples, 0.01%) - - - -schedule (3 samples, 0.04%) - - - -ktime_get (1 samples, 0.01%) - - - -tick_sched_timer (2 samples, 0.02%) - - - -arch_perf_update_userpage (2 samples, 0.02%) - - - -secondary_startup_64_no_verify (381 samples, 4.45%) -secon.. 
- - -ktime_get_ts64 (2 samples, 0.02%) - - - -__schedule (1 samples, 0.01%) - - - -i2c_transfer (1 samples, 0.01%) - - - -wake_up_q (1 samples, 0.01%) - - - -kthread (7 samples, 0.08%) - - - -raw_spin_rq_lock_nested (1 samples, 0.01%) - - - -nvkm_timer_alarm (1 samples, 0.01%) - - - -load_elf_binary (1 samples, 0.01%) - - - -__handle_irq_event_percpu (1 samples, 0.01%) - - - -cpufreq_this_cpu_can_update (1 samples, 0.01%) - - - -set_next_entity (1 samples, 0.01%) - - - -cpu_startup_entry (380 samples, 4.44%) -cpu_s.. - - -futex_wake (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -memset (1 samples, 0.01%) - - - -acpi_idle_enter (1 samples, 0.01%) - - - -__napi_poll (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -do_user_addr_fault (1 samples, 0.01%) - - - -is_cpu_allowed (1 samples, 0.01%) - - - -dequeue_entity (2 samples, 0.02%) - - - -_raw_spin_lock_irqsave (1 samples, 0.01%) - - - -native_apic_mem_write (1 samples, 0.01%) - - - -debugging-nativ (28 samples, 0.33%) - - - -__softirqentry_text_start (8 samples, 0.09%) - - - -__hrtimer_run_queues (11 samples, 0.13%) - - - -clone3 (5 samples, 0.06%) - - - -tick_nohz_idle_exit (3 samples, 0.04%) - - - -pick_next_task_fair (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (2 samples, 0.02%) - - - -acpi_processor_ffh_cstate_enter (38 samples, 0.44%) - - - -entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -handle_irq_event (1 samples, 0.01%) - - - -_nohz_idle_balance.constprop.0.isra.0 (2 samples, 0.02%) - - - -entry_SYSCALL_64_after_hwframe (2 samples, 0.02%) - - - -do_syscall_64 (2 samples, 0.02%) - - - -newidle_balance (1 samples, 0.01%) - - - -__x64_sys_pselect6 (1 samples, 0.01%) - - - -rcu_nmi_exit (1 samples, 0.01%) - - - -_copy_to_iter (1 samples, 0.01%) - - - -enqueue_entity (1 samples, 0.01%) - - - -__hrtimer_run_queues (1 samples, 0.01%) - - - -__update_load_avg_se (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (2 
samples, 0.02%) - - - -enqueue_task (2 samples, 0.02%) - - - -try_to_wake_up (3 samples, 0.04%) - - - -__mprotect (3 samples, 0.04%) - - - -smp_call_function_single (12 samples, 0.14%) - - - -cpuidle_enter (552 samples, 6.45%) -cpuidle_.. - - -acpi_processor_ffh_cstate_enter (2 samples, 0.02%) - - - -tick_nohz_next_event (1 samples, 0.01%) - - - -__set_cpus_allowed_ptr_locked (12 samples, 0.14%) - - - -__softirqentry_text_start (4 samples, 0.05%) - - - -anon_vma_fork (1 samples, 0.01%) - - - -native_write_msr (2 samples, 0.02%) - - - -cpuidle_enter_state (4 samples, 0.05%) - - - -ttwu_do_activate (1 samples, 0.01%) - - - -avc_lookup (2 samples, 0.02%) - - - -timerqueue_add (1 samples, 0.01%) - - - -call_timer_fn (1 samples, 0.01%) - - - -rmqueue_bulk (1 samples, 0.01%) - - - -get_page_from_freelist (1 samples, 0.01%) - - - -_find_next_bit (1 samples, 0.01%) - - - -cpuidle_enter (5 samples, 0.06%) - - - -check_preempt_curr (1 samples, 0.01%) - - - -ktime_get_update_offsets_now (40 samples, 0.47%) - - - -acpi_processor_ffh_cstate_enter (8 samples, 0.09%) - - - -rwsem_down_write_slowpath (1 samples, 0.01%) - - - -__calc_delta (1 samples, 0.01%) - - - -tick_nohz_idle_stop_tick (1 samples, 0.01%) - - - -visit_groups_merge.constprop.0.isra.0 (9 samples, 0.11%) - - - -read_tsc (1 samples, 0.01%) - - - -psi_task_switch (2 samples, 0.02%) - - - -pick_next_task_fair (1 samples, 0.01%) - - - -nmi_handle (2 samples, 0.02%) - - - -down_write_killable (2 samples, 0.02%) - - - -menu_reflect (1 samples, 0.01%) - - - -__check_object_size (1 samples, 0.01%) - - - -syscall_enter_from_user_mode (1 samples, 0.01%) - - - -insert_vmap_area.constprop.0 (1 samples, 0.01%) - - - -nmi_handle (5 samples, 0.06%) - - - -[unknown] (25 samples, 0.29%) - - - -dequeue_task_fair (2 samples, 0.02%) - - - -[unknown] (10 samples, 0.12%) - - - -acpi_processor_ffh_cstate_enter (4 samples, 0.05%) - - - -__mmput (1 samples, 0.01%) - - - -_perf_ioctl (21 samples, 0.25%) - - - -remote_function (12 samples, 
0.14%) - - - -cpuidle_enter_state (361 samples, 4.22%) -cpuid.. - - -irqtime_account_irq (3 samples, 0.04%) - - - -native_write_msr (10 samples, 0.12%) - - - -perf_ioctl (30 samples, 0.35%) - - - -_raw_spin_unlock_irqrestore (1 samples, 0.01%) - - - -netif_receive_skb_list_internal (1 samples, 0.01%) - - - -check_preempt_curr (1 samples, 0.01%) - - - -[[vdso]] (1 samples, 0.01%) - - - -kernel_init_free_pages.part.0 (1 samples, 0.01%) - - - -try_to_wake_up (1 samples, 0.01%) - - - -common_interrupt (1 samples, 0.01%) - - - -__fcntl64_nocancel_adjusted (1 samples, 0.01%) - - - -native_write_msr (1 samples, 0.01%) - - - -do_syscall_64 (47 samples, 0.55%) - - - -__hrtimer_run_queues (1 samples, 0.01%) - - - -smp_call_function_many_cond (1 samples, 0.01%) - - - -woken_wake_function (1 samples, 0.01%) - - - -down_write_killable (1 samples, 0.01%) - - - -_dl_relocate_object (2 samples, 0.02%) - - - -[unknown] (45 samples, 0.53%) - - - -clockevents_program_event (1 samples, 0.01%) - - - -[perf] (9 samples, 0.11%) - - - -ceptor-thread-0 (1 samples, 0.01%) - - - -__sysvec_apic_timer_interrupt (1 samples, 0.01%) - - - -kernel_clone (1 samples, 0.01%) - - - -__i2c_transfer (1 samples, 0.01%) - - - -native_write_msr (1 samples, 0.01%) - - - -tick_nohz_idle_exit (1 samples, 0.01%) - - - -acpi_idle_enter (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -sched_clock_cpu (1 samples, 0.01%) - - - -schedule_idle (2 samples, 0.02%) - - - -tick_nohz_idle_enter (1 samples, 0.01%) - - - -__update_load_avg_cfs_rq (1 samples, 0.01%) - - - -do_idle (1 samples, 0.01%) - - - -acpi_idle_enter (353 samples, 4.13%) -acpi.. 
- - -ttwu_do_wakeup (1 samples, 0.01%) - - - -[unknown] (24 samples, 0.28%) - - - -menu_select (1 samples, 0.01%) - - - -nouveau_connector_detect (1 samples, 0.01%) - - - -hrtimer_interrupt (32 samples, 0.37%) - - - -sched_clock_cpu (1 samples, 0.01%) - - - -nf_hook_slow (1 samples, 0.01%) - - - -asm_sysvec_apic_timer_interrupt (197 samples, 2.30%) -a.. - - -raw_spin_rq_unlock (2 samples, 0.02%) - - - -perf_event_idx_default (1 samples, 0.01%) - - - -memchr_inv (1 samples, 0.01%) - - - -tcp_sendmsg_locked (2 samples, 0.02%) - - - -__cond_resched (1 samples, 0.01%) - - - -tick_sched_timer (1 samples, 0.01%) - - - -delay_halt (1 samples, 0.01%) - - - -iterate_groups (4 samples, 0.05%) - - - -native_read_msr (1 samples, 0.01%) - - - -__smp_call_single_queue (1 samples, 0.01%) - - - -tick_do_update_jiffies64 (2 samples, 0.02%) - - - -ret_from_fork (3 samples, 0.04%) - - - -native_write_msr (1 samples, 0.01%) - - - -__alloc_pages (1 samples, 0.01%) - - - -native_write_cr2 (1 samples, 0.01%) - - - -ret_from_fork (7 samples, 0.08%) - - - -clone3 (1 samples, 0.01%) - - - -asm_sysvec_apic_timer_interrupt (1 samples, 0.01%) - - - -output_poll_execute (1 samples, 0.01%) - - - -vfs_write (2 samples, 0.02%) - - - -default_do_nmi (2 samples, 0.02%) - - - -[unknown] (7 samples, 0.08%) - - - -[perf] (2 samples, 0.02%) - - - -enqueue_entity (5 samples, 0.06%) - - - -tloop-thread-58 (1 samples, 0.01%) - - - -rebalance_domains (1 samples, 0.01%) - - - -menu_select (4 samples, 0.05%) - - - -curl (928 samples, 10.85%) -curl - - -scheduler_tick (1 samples, 0.01%) - - - -do_epoll_wait (1 samples, 0.01%) - - - -handle_mm_fault (1 samples, 0.01%) - - - -native_write_cr2 (1 samples, 0.01%) - - - -sysvec_apic_timer_interrupt (1 samples, 0.01%) - - - -native_write_cr2 (1 samples, 0.01%) - - - -scheduler_tick (2 samples, 0.02%) - - - -tick_sched_handle (1 samples, 0.01%) - - - -update_curr (1 samples, 0.01%) - - - -perf_ibs_handle_irq (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter 
(1 samples, 0.01%) - - - -[unknown] (1 samples, 0.01%) - - - -vma_interval_tree_insert_after (1 samples, 0.01%) - - - -strchr (1 samples, 0.01%) - - - -[unknown] (78 samples, 0.91%) - - - -__x64_sys_epoll_pwait (3 samples, 0.04%) - - - -igb_rd32 (1 samples, 0.01%) - - - -unmap_vmas (1 samples, 0.01%) - - - -cap_safe_nice (1 samples, 0.01%) - - - -do_epoll_wait (3 samples, 0.04%) - - - -ptep_clear_flush (1 samples, 0.01%) - - - -hrtimer_interrupt (1 samples, 0.01%) - - - -try_to_wake_up (2 samples, 0.02%) - - - -__next_timer_interrupt (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -cpu_startup_entry (1 samples, 0.01%) - - - -[unknown] (9 samples, 0.11%) - - - -psi_group_change (1 samples, 0.01%) - - - -worker_thread (2 samples, 0.02%) - - - -cpu_stopper_thread (1 samples, 0.01%) - - - -_copy_from_user (1 samples, 0.01%) - - - -get_page_from_freelist (1 samples, 0.01%) - - - -pick_next_entity (1 samples, 0.01%) - - - -tick_irq_enter (43 samples, 0.50%) - - - -nvkm_pci_intr (1 samples, 0.01%) - - - -calc_load_nohz_start (1 samples, 0.01%) - - - -default_do_nmi (1 samples, 0.01%) - - - -tcp_sendmsg (2 samples, 0.02%) - - - -hrtimer_start_range_ns (1 samples, 0.01%) - - - -update_sd_lb_stats.constprop.0 (3 samples, 0.04%) - - - -exc_page_fault (2 samples, 0.02%) - - - -__run_timers.part.0 (1 samples, 0.01%) - - - -handle_edge_irq (1 samples, 0.01%) - - - -do_epoll_pwait.part.0 (3 samples, 0.04%) - - - -__hrtimer_init (1 samples, 0.01%) - - - -ksys_write (2 samples, 0.02%) - - - -read (1 samples, 0.01%) - - - -psi_task_change (1 samples, 0.01%) - - - -perf_sample_event_took (1 samples, 0.01%) - - - -unmap_page_range (1 samples, 0.01%) - - - -do_user_addr_fault (2 samples, 0.02%) - - - -pick_next_task_idle (1 samples, 0.01%) - - - -__list_add_valid (1 samples, 0.01%) - - - -nvkm_mc_intr (1 samples, 0.01%) - - - -copy_process (1 samples, 0.01%) - - - -acpi_cpufreq_fast_switch (2 samples, 0.02%) - - - -dequeue_entity (2 samples, 0.02%) - - - 
-IsolateEnterStub__VmLocatorSymbol__vmLocatorSymbol__bec84cad1f8708102cd8814ef3e496531bf6ff5b__bbf2dbb2d6a07a8e1dae8a3072b01ad86ecc1a50 (1 samples, 0.01%) - - - -try_to_wake_up (1 samples, 0.01%) - - - -tick_nohz_stop_tick (1 samples, 0.01%) - - - -rebalance_domains (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (39 samples, 0.46%) - - - -do_pselect.constprop.0 (1 samples, 0.01%) - - - -cpuidle_enter (363 samples, 4.24%) -cpuid.. - - -[unknown] (53 samples, 0.62%) - - - -native_write_cr2 (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -enqueue_entity (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (21 samples, 0.25%) - - - -schedule_timeout (1 samples, 0.01%) - - - -sock_write_iter (2 samples, 0.02%) - - - -update_sd_lb_stats.constprop.0 (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -native_write_msr (5 samples, 0.06%) - - - -pick_next_task_fair (2 samples, 0.02%) - - - -entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -__vm_munmap (1 samples, 0.01%) - - - -tick_irq_enter (1 samples, 0.01%) - - - -kthread_is_per_cpu (1 samples, 0.01%) - - - -enqueue_entity (1 samples, 0.01%) - - - -try_to_wake_up (1 samples, 0.01%) - - - -flush_tlb_mm_range (1 samples, 0.01%) - - - -schedule_timeout (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -[unknown] (48 samples, 0.56%) - - - -IsolateEnterStub__VmLocatorSymbol__vmLocatorSymbol__bec84cad1f8708102cd8814ef3e496531bf6ff5b__bbf2dbb2d6a07a8e1dae8a3072b01ad86ecc1a50 (3 samples, 0.04%) - - - -tick_nohz_get_sleep_length (4 samples, 0.05%) - - - -nf_nat_inet_fn (1 samples, 0.01%) - - - -syscall_exit_to_user_mode (1 samples, 0.01%) - - - -futex_wait (1 samples, 0.01%) - - - -generic_exec_single (12 samples, 0.14%) - - - -asm_sysvec_apic_timer_interrupt (60 samples, 0.70%) - - - -entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -dyntick_save_progress_counter (2 samples, 0.02%) - - - -load_balance (2 
samples, 0.02%) - - - -load_balance (1 samples, 0.01%) - - - -do_idle (380 samples, 4.44%) -do_idle - - - diff --git a/_versions/2.7/guides/images/native-reference-perf-flamegraph-symbols.svg b/_versions/2.7/guides/images/native-reference-perf-flamegraph-symbols.svg deleted file mode 100644 index 144dde2ea8e..00000000000 --- a/_versions/2.7/guides/images/native-reference-perf-flamegraph-symbols.svg +++ /dev/null @@ -1,4480 +0,0 @@ - - - - - - - - - - - - - - -Flame Graph - -Reset Zoom -Search -ic - - - -CLDRLocaleProviderAdapter_createLanguageTagSet_2405faf7aaba60dc13ae0a6f77133bb8c94147ed (1 samples, 0.01%) - - - -start_thread (1 samples, 0.01%) - - - -dequeue_entity (1 samples, 0.01%) - - - -enqueue_task_fair (2 samples, 0.03%) - - - -_find_next_bit (1 samples, 0.01%) - - - -run_timer_softirq (1 samples, 0.01%) - - - -Formatter_constructor_cca37bc44efd32578a070431aaecf8882def4adf (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -CgroupUtil_readStringValue_dec6c248a490a1cedf44ed3e120fda1a7ae3324b (1 samples, 0.01%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (2 samples, 0.03%) - - - -ResourceMethodInvoker$$Lambda$c45b2e67cba16984bafd1ca1519e47abdb6d0bda_get_ea5d553494144ab585002e79f36dbdb5df5ecd90 (5,203 samples, 75.65%) -ResourceMethodInvoker$$Lambda$c45b2e67cba16984bafd1ca1519e47abdb6d0bda_get_ea5d553494144ab585002e79f36dbdb5df5ecd90 - - -FastThreadLocalRunnable_run_0329ad2c5210a091812879bcecd155c58e561e60 (2 samples, 0.03%) - - - -EPollSelectorImpl_doSelect_dfc51bd26126a68b9bd5d631ba6e80694002b398 (1 samples, 0.01%) - - - -StringLatin1_charAt_63e028c5b786b8663c4a4aea1cc147a8c4714d9d (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -timerqueue_add (1 samples, 0.01%) - - - -tick_sched_handle (3 samples, 0.04%) - - - -calc_timer_values (1 samples, 0.01%) - - - -drm_helper_probe_detect_ctx (1 samples, 0.01%) - - - -i2c_outb.isra.0 (1 samples, 0.01%) - - - 
[Continuation of the same SVG flame graph residue — raw frame data omitted. Recoverable highlights: the hottest tower is FastThreadLocalRunnable_run → VertxRequestHandler_dispatch → ResourceMethodInvoker$$Lambda (5,203 samples, 75.65%), alongside smaller towers for acpi_idle_do_entry (289 samples, 4.20%), cpu_startup_entry (339 samples, 4.93%), and Vert.x/Netty startup frames such as VertxImpl constructor and NioEventLoopGroup constructor.]
- - -SingleThreadEventExecutor$4_run_1b47df7867e302a2fb7f28d7657a73e92f89d91f (1 samples, 0.01%) - - - -native_sched_clock (1 samples, 0.01%) - - - -perf_event_update_userpage (1 samples, 0.01%) - - - -perf_event_idx_default (2 samples, 0.03%) - - - -do_mprotect_pkey (2 samples, 0.03%) - - - -InternalThreadLocalMap_fastGet_b7e03d839ad6d18244b435086c3ab2d54326c976 (1 samples, 0.01%) - - - -Object_hashCode_3de91324fba0d30c45e0d29ba844909fb20c8ef3 (1 samples, 0.01%) - - - -MultithreadEventExecutorGroup_constructor_30993b7c05e555e173884314126cb8ebf8f0a765 (5 samples, 0.07%) - - - -_nohz_idle_balance.constprop.0.isra.0 (1 samples, 0.01%) - - - -init_conntrack.constprop.0 (1 samples, 0.01%) - - - -__GI___munmap (1 samples, 0.01%) - - - -DecimalFormatSymbols_initialize_3f00aa0c337680c52282acc76815fb318fec691e (1 samples, 0.01%) - - - -__vm_munmap (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -menu_select (2 samples, 0.03%) - - - -do_idle (3 samples, 0.04%) - - - -entry_SYSCALL_64_after_hwframe (21 samples, 0.31%) - - - -native_sched_clock (1 samples, 0.01%) - - - -__x64_sys_mprotect (2 samples, 0.03%) - - - -ReentrantLock_unlock_86cdca028e9dd52644b7822ba738ec004cf0c360 (1 samples, 0.01%) - - - -acpi_idle_do_entry (3 samples, 0.04%) - - - -osq_lock (1 samples, 0.01%) - - - -idle_cpu (2 samples, 0.03%) - - - -xfsaild (1 samples, 0.01%) - - - -SelectorImpl_lockAndDoSelect_de3ba179520b17b51a73f959a29e5a68bde086ce (1 samples, 0.01%) - - - -update_curr (1 samples, 0.01%) - - - -rwsem_down_read_slowpath (1 samples, 0.01%) - - - -tick_nohz_idle_exit (1 samples, 0.01%) - - - -selinux_task_setscheduler (3 samples, 0.04%) - - - -free_unref_page_commit.constprop.0 (1 samples, 0.01%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (1 samples, 0.01%) - - - -sched_clock (2 samples, 0.03%) - - - -AbstractEventExecutor_safeExecute_48c5811cdd8968be97028bc79c80e772e065c655 (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (55 samples, 
0.80%) - - - -rcu_note_context_switch (2 samples, 0.03%) - - - -tick_nohz_stop_tick (2 samples, 0.03%) - - - -entry_SYSCALL_64_after_hwframe (6 samples, 0.09%) - - - -__entry_text_start (1 samples, 0.01%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (2 samples, 0.03%) - - - -SmallRyeContextManager_constructor_110d63603f1eb31c28770237cae6de36db3d8bbb (1 samples, 0.01%) - - - -__handle_irq_event_percpu (1 samples, 0.01%) - - - -ClassInitializationInfo_initialize_2fab2a9469a0ef812c52b0ce6061de6c2c8b76f9 (1 samples, 0.01%) - - - -__clone3 (21 samples, 0.31%) - - - -tick_nohz_idle_exit (5 samples, 0.07%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (1 samples, 0.01%) - - - -net_rx_action (1 samples, 0.01%) - - - -nmi_restore (1 samples, 0.01%) - - - -perf_ioctl (42 samples, 0.61%) - - - -Util_jdk_internal_misc_Signal$DispatchThread_run_eeebea10b374a7031abb3bd32119e4d5872bb7de (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -start_thread (5,203 samples, 75.65%) -start_thread - - -__irq_exit_rcu (1 samples, 0.01%) - - - -ret_from_fork (2 samples, 0.03%) - - - -Composition_onSuccess_5ef14c9950c1a03bd26172ee39e0892d5d2b83df (1 samples, 0.01%) - - - -save_fpregs_to_fpstate (1 samples, 0.01%) - - - -kmem_cache_alloc_trace (1 samples, 0.01%) - - - -FutureBase_lambda$emitSuccess$0_a532cf7d5071724a34a836ef9024aeb54f27cd14 (1 samples, 0.01%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (9 samples, 0.13%) - - - -pick_next_task_fair (1 samples, 0.01%) - - - -__update_load_avg_se (2 samples, 0.03%) - - - -__irq_exit_rcu (1 samples, 0.01%) - - - -AbstractStringBuilder_append_5488923cc849183f0525c23695d6326c7735ac5a (1 samples, 0.01%) - - - -asm_common_interrupt (2 samples, 0.03%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (1 samples, 0.01%) - - - -dequeue_task_fair (9 samples, 
0.13%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -native_write_msr (13 samples, 0.19%) - - - -UnmanagedMemoryUtil_copyForward_82a74c216e7c9d84efc44e8a2463bf268babab5e (1 samples, 0.01%) - - - -native_sched_clock (2 samples, 0.03%) - - - -save_fpregs_to_fpstate (1 samples, 0.01%) - - - -perf_event_idx_default (1 samples, 0.01%) - - - -update_irq_load_avg (1 samples, 0.01%) - - - -poll_freewait (1 samples, 0.01%) - - - -iov_iter_fault_in_readable (1 samples, 0.01%) - - - -VertxHttpProcessor$openSocket1866188241_deploy_0_f62af8cc66423d57d1e40c5a1ec11136d1b717ee (3 samples, 0.04%) - - - -rcu_idle_enter (1 samples, 0.01%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (5,203 samples, 75.65%) -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 - - -alloc_cpumask_var_node (1 samples, 0.01%) - - - -VertxHttpRecorder_startServer_c11f0a68def0b12024624749d87e838bcfaba8d2 (3 samples, 0.04%) - - - -exc_nmi (2 samples, 0.03%) - - - -hrtimer_wakeup (1 samples, 0.01%) - - - -__GI___fcntl (1 samples, 0.01%) - - - -native_write_msr (2 samples, 0.03%) - - - -entry_SYSCALL_64_after_hwframe (2 samples, 0.03%) - - - -tloop-thread-59 (1 samples, 0.01%) - - - -rcu_read_unlock_strict (1 samples, 0.01%) - - - -collect_percpu_times (1 samples, 0.01%) - - - -GlobalEventExecutor_constructor_da0f4ea3ecb399e737bba37496ce8621738c3421 (1 samples, 0.01%) - - - -native_write_msr (1 samples, 0.01%) - - - -DecimalFormatSymbols_getInstance_8e8551d477fcd2628da25215aa60c28b1dda9ca8 (1 samples, 0.01%) - - - -try_charge_memcg (1 samples, 0.01%) - - - -do_syscall_64 (1 samples, 0.01%) - - - -_dl_lookup_symbol_x (1 samples, 0.01%) - - - -asm_sysvec_apic_timer_interrupt (81 samples, 1.18%) - - - -SingleThreadEventExecutor$4_run_1b47df7867e302a2fb7f28d7657a73e92f89d91f (1 samples, 0.01%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (1 samples, 
0.01%) - - - -Formatter_getZero_8b61958139301e105c9e63b49cb2c92f943ddad5 (1 samples, 0.01%) - - - -_raw_spin_lock_irqsave (1 samples, 0.01%) - - - -switch_mm_irqs_off (1 samples, 0.01%) - - - -Java_sun_nio_ch_EPoll_ctl (1 samples, 0.01%) - - - -AbstractStringBuilder_append_3f1d6796b1056fe35061fd00638930def6b1d957 (1 samples, 0.01%) - - - -CgroupUtil_lambda$readStringValue$0_a6d235bf5481a269733b731f50da0a272e1e2ada (1 samples, 0.01%) - - - -do_syscall_64 (1 samples, 0.01%) - - - -__calc_delta (1 samples, 0.01%) - - - -avc_has_perm (1 samples, 0.01%) - - - -NioEventLoopGroup_constructor_194be6972b9ebc23f435c4e558bfcee11ed151b1 (5 samples, 0.07%) - - - -osq_lock (1 samples, 0.01%) - - - -FileInputStream_constructor_aac741e2ef07c171301eb35c0deea3293fe4d747 (1 samples, 0.01%) - - - -tick_nohz_stop_tick (1 samples, 0.01%) - - - -end_repeat_nmi (1 samples, 0.01%) - - - -cpuidle_enter_state (3 samples, 0.04%) - - - -NioEventLoop_select_4400f85956c925748c40da4a81f574a360b028e5 (1 samples, 0.01%) - - - -epoll_wait (1 samples, 0.01%) - - - -__softirqentry_text_start (1 samples, 0.01%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (1 samples, 0.01%) - - - -_perf_ioctl (32 samples, 0.47%) - - - -sysvec_apic_timer_interrupt (81 samples, 1.18%) - - - -ThreadPerTaskExecutor_execute_9afc5d4473f674f08e02dd448b4e6a6247aa748d (3 samples, 0.04%) - - - -SingleThreadEventExecutor_doStartThread_f74c626e81f588e0747a3e04c8bdb98bee0cbdb6 (3 samples, 0.04%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (1 samples, 0.01%) - - - -__clone3 (1 samples, 0.01%) - - - -Java_sun_nio_ch_IOUtil_makePipe (1 samples, 0.01%) - - - -common_interrupt (1 samples, 0.01%) - - - -__calc_delta (1 samples, 0.01%) - - - -iomap_file_buffered_write (6 samples, 0.09%) - - - -__x86_indirect_thunk_rbp (1 samples, 0.01%) - - - 
-__clone3 (2 samples, 0.03%) - - - -AbstractStringBuilder_delete_58681f709e2653ae2d27c3a178d8de9a64d94ff7 (5,202 samples, 75.63%) -AbstractStringBuilder_delete_58681f709e2653ae2d27c3a178d8de9a64d94ff7 - - -sugov_update_single_freq (1 samples, 0.01%) - - - -__bitmap_equal (1 samples, 0.01%) - - - -perf_event_update_userpage (1 samples, 0.01%) - - - -__GI___sched_setaffinity_new (66 samples, 0.96%) - - - -__x86_indirect_thunk_r14 (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -page_memcg (1 samples, 0.01%) - - - -perf_ibs_handle_irq (1 samples, 0.01%) - - - -EPollSelectorImpl_processEvents_987d82f8c09d324f18009971cc77f5c9751c5856 (1 samples, 0.01%) - - - -poll_idle (1 samples, 0.01%) - - - -nouveau_connector_detect (1 samples, 0.01%) - - - -timer_clear_idle (1 samples, 0.01%) - - - -ApplicationImpl_doStart_e1afde9430e67b7c57499ed67ff5f64600d056ec (10 samples, 0.15%) - - - -_raw_spin_unlock_irqrestore (1 samples, 0.01%) - - - -do_syscall_64 (2 samples, 0.03%) - - - -update_min_vruntime (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (6 samples, 0.09%) - - - -ThreadExecutorMap$2_run_66c8943ee6536a10df07f979fb6cd278adcf96bc (1 samples, 0.01%) - - - -__sysvec_apic_timer_interrupt (38 samples, 0.55%) - - - -HttpServerImpl_listen_924344dca4d1462c93ec93b2d8a5621ac5568922 (2 samples, 0.03%) - - - -schedule_idle (1 samples, 0.01%) - - - -ktime_get_ts64 (1 samples, 0.01%) - - - -acpi_idle_enter (299 samples, 4.35%) -acpi_.. 
- - -rb_next (1 samples, 0.01%) - - - -dequeue_task_fair (1 samples, 0.01%) - - - -perf_event_idx_default (4 samples, 0.06%) - - - -ktime_get_update_offsets_now (28 samples, 0.41%) - - - -ktime_get (44 samples, 0.64%) - - - -psi_task_switch (1 samples, 0.01%) - - - -MultiThreadedMonitorSupport_monitorEnter_a853e48d8499fe94e7e0723447fc9d2060965e91 (1 samples, 0.01%) - - - -SynchronousDispatcher_preprocess_8f9789eb05b0c2ed2bfb4a45ed81798e9f4e1c2b (5,203 samples, 75.65%) -SynchronousDispatcher_preprocess_8f9789eb05b0c2ed2bfb4a45ed81798e9f4e1c2b - - -native_queued_spin_lock_slowpath (1 samples, 0.01%) - - - -__virt_addr_valid (1 samples, 0.01%) - - - -__GI_epoll_ctl (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -do_mprotect_pkey (1 samples, 0.01%) - - - -__GI___mmap64 (4 samples, 0.06%) - - - -StackOverflowCheckImpl_initialize_aaf8521db46ac9287b8d5950d505eca786aadb91 (1 samples, 0.01%) - - - -IsolateEnterStub_JavaMainWrapper_run_5087f5482cc9a6abc971913ece43acb471d2631b_a61fe6c26e84dd4037e4629852b5488bfcc16e7e (10 samples, 0.15%) - - - -do_idle (339 samples, 4.93%) -do_idle - - -PooledByteBufAllocator_constructor_3f22fd13fc11f1a9f2781e94e3855aa075af6d9f (2 samples, 0.03%) - - - -poll_idle (1 samples, 0.01%) - - - -Collections_addAll_df242f0b9a5d57257a9c98d0d21d96ea04c162c3 (1 samples, 0.01%) - - - -memcg_slab_post_alloc_hook (1 samples, 0.01%) - - - -schedule (1 samples, 0.01%) - - - -rebalance_domains (2 samples, 0.03%) - - - -new_sync_write (6 samples, 0.09%) - - - -NioEventLoop_run_be89580b4d16514bef6e948913d2ed21c5e4f679 (2 samples, 0.03%) - - - -down_write_killable (2 samples, 0.03%) - - - -JavaThreads_startThread_4a48623aeb6d5a9f3cf7f8dabdba7ffbb99828ba (3 samples, 0.04%) - - - -Integer_getChars_2437c44e6023372be22fabd9686065302ca92d3e (2 samples, 0.03%) - - - -menu_select (3 samples, 0.04%) - - - -do_idle (3 samples, 0.04%) - - - -sysvec_apic_timer_interrupt (148 
samples, 2.15%) -s.. - - -perf_event_for_each_child (26 samples, 0.38%) - - - -update_min_vruntime (1 samples, 0.01%) - - - -ConfigValueConfigSourceWrapper_getConfigValue_22547015811a89d3a8167bb59da895feec96784a (1 samples, 0.01%) - - - -NioEventLoopGroup_newChild_18cd34fd0de866436bd03197e567ec292a38961b (3 samples, 0.04%) - - - -ThreadLocalAllocation_slowPathNewArray_846db6d88ea2f5c90935fae3e872715327297019 (9 samples, 0.13%) - - - -HashMap_put_984cd2450422ab28c8fd057c8fadb18b9b383f84 (1 samples, 0.01%) - - - -raw_spin_rq_unlock (1 samples, 0.01%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (1 samples, 0.01%) - - - -down_write_killable (2 samples, 0.03%) - - - -output_poll_execute (1 samples, 0.01%) - - - -perf_event_update_userpage (2 samples, 0.03%) - - - -__x86_indirect_thunk_rax (3 samples, 0.04%) - - - -dequeue_entity (1 samples, 0.01%) - - - -flush_memcg_stats_dwork (1 samples, 0.01%) - - - -secondary_startup_64_no_verify (343 samples, 4.99%) -second.. 
- - -SmallRyeContextManager$Builder_build_989ade227f951d5ccbc6f57df6cbe552605870a2 (1 samples, 0.01%) - - - -down_write_killable (3 samples, 0.04%) - - - -__clone3 (1 samples, 0.01%) - - - -raw_spin_rq_unlock (3 samples, 0.04%) - - - -UnmanagedMemoryUtil_copyForward_82a74c216e7c9d84efc44e8a2463bf268babab5e (1 samples, 0.01%) - - - -PosixVirtualMemoryProvider_reserve_b6c76ffcfaac89204e3ddd5f1a5cd110a1860862 (3 samples, 0.04%) - - - -__schedule (1 samples, 0.01%) - - - -ret_from_fork (3 samples, 0.04%) - - - -__check_heap_object (1 samples, 0.01%) - - - -__hrtimer_run_queues (8 samples, 0.12%) - - - -xfs_inode_item_format (1 samples, 0.01%) - - - -tick_nohz_stop_tick (1 samples, 0.01%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (1 samples, 0.01%) - - - -native_write_msr (8 samples, 0.12%) - - - -find_busiest_group (1 samples, 0.01%) - - - -perf_event_update_userpage (1 samples, 0.01%) - - - -update_load_avg (2 samples, 0.03%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -SingleThreadEventExecutor_execute_b9fc33f6cf952ec696d6a219f6499740711801a6 (3 samples, 0.04%) - - - -__pthread_getattr_np (1 samples, 0.01%) - - - -VertxCoreRecorder$VertxSupplier_get_ad6de8dda214b81feb5c157bb64f41c2109a30fb (6 samples, 0.09%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (2 samples, 0.03%) - - - -tick_irq_enter (1 samples, 0.01%) - - - -vfs_read (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (2 samples, 0.03%) - - - -update_load_avg (1 samples, 0.01%) - - - -JavaMemoryUtil_copyPrimitiveArrayForward_04558096db261373c167eb1c8cc366001be76728 (5,202 samples, 75.63%) -JavaMemoryUtil_copyPrimitiveArrayForward_04558096db261373c167eb1c8cc366001be76728 - - -PreMatchContainerRequestContext_filter_ed53506a3bc6ae4769c8d7447ae589c1c45bb04a (5,203 samples, 75.65%) -PreMatchContainerRequestContext_filter_ed53506a3bc6ae4769c8d7447ae589c1c45bb04a - - -hrtimer_init_sleeper (1 samples, 0.01%) - - - -_find_next_bit (1 samples, 0.01%) - - 
- -flush_smp_call_function_queue (6 samples, 0.09%) - - - -do_sys_poll (1 samples, 0.01%) - - - -rb_erase (1 samples, 0.01%) - - - -VMError_guarantee_18caf46ef6d672f2c7aab3ad271ff5117b823ec1 (1 samples, 0.01%) - - - -EPollSelectorImpl_clearInterrupt_0b170fb8e31667827cb531f61c571c3a58fa5c9c (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -[perf] (333 samples, 4.84%) -[perf] - - -asm_sysvec_apic_timer_interrupt (3 samples, 0.04%) - - - -tloop-thread-79 (1 samples, 0.01%) - - - -generic_exec_single (24 samples, 0.35%) - - - -__hrtimer_next_event_base (1 samples, 0.01%) - - - -do_epoll_pwait.part.0 (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (3 samples, 0.04%) - - - -vm_mmap_pgoff (1 samples, 0.01%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (1 samples, 0.01%) - - - -FastThreadLocal_set_fff7f7440bd45eebb04f4f5e75aa2e886029c385 (1 samples, 0.01%) - - - -DefaultValues_constructor_42b1c927d5d211da15badbbf7e2cd36a524c60c5 (1 samples, 0.01%) - - - -HeapChunkProvider_produceAlignedChunk_151eeb69b2ff04e5a10d422de20e777d95b68672 (7 samples, 0.10%) - - - -do_syscall_64 (1 samples, 0.01%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (9 samples, 0.13%) - - - -Application_start_9a0b63742d6e66c1b5dc0121670fdf46106d2d88 (10 samples, 0.15%) - - - -acpi_processor_ffh_cstate_enter (3 samples, 0.04%) - - - -task_tick_fair (2 samples, 0.03%) - - - -acpi_processor_ffh_cstate_enter (44 samples, 0.64%) - - - -dl_task_check_affinity (1 samples, 0.01%) - - - -__remove_hrtimer (1 samples, 0.01%) - - - -ThreadExecutorMap$2_run_66c8943ee6536a10df07f979fb6cd278adcf96bc (1 samples, 0.01%) - - - -ResourceMethodInvoker_invoke_ecd15bd481d6b9ac845055c3b0868ec2d9d5db8b (5,203 samples, 75.65%) -ResourceMethodInvoker_invoke_ecd15bd481d6b9ac845055c3b0868ec2d9d5db8b - - -__hrtimer_run_queues (3 samples, 0.04%) - - - -String_format_b37619acc0ead67a05e6961119f206850dc1edf9 (1 
samples, 0.01%) - - - -Signal_dispatch_bcb0c1d8f443286c2e23cb169addad8f98e40c4f (1 samples, 0.01%) - - - -kthread (3 samples, 0.04%) - - - -VertxHttpProcessor$preinitializeRouter1141331088_deploy_0_04f518fcb19517993a4ab43510a8b1bf5082b981 (6 samples, 0.09%) - - - -SingleThreadEventExecutor_runAllTasks_1c632c8f112449f5c5cb92250f70fa224c43b8f9 (2 samples, 0.03%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -SmallRyeContextPropagationRecorder_configureRuntime_91af04ddc20f1337d1ec47d12d569da95f388696 (1 samples, 0.01%) - - - -affine_move_task (1 samples, 0.01%) - - - -perf_ibs_stop (1 samples, 0.01%) - - - -NioEventLoop_run_be89580b4d16514bef6e948913d2ed21c5e4f679 (1 samples, 0.01%) - - - -EnhancedQueueExecutor$ThreadBody_run_e70256a0a4fe9f6a77701bda112aef0436551de9 (5,203 samples, 75.65%) -EnhancedQueueExecutor$ThreadBody_run_e70256a0a4fe9f6a77701bda112aef0436551de9 - - -update_cfs_group (1 samples, 0.01%) - - - -read_tsc (1 samples, 0.01%) - - - -sched_clock_cpu (9 samples, 0.13%) - - - -StringBuilderResource_appendDelete_9e06d4817d0208a0cce97ebcc0952534cac45a19 (5,203 samples, 75.65%) -StringBuilderResource_appendDelete_9e06d4817d0208a0cce97ebcc0952534cac45a19 - - -cpumask_any_and_distribute (1 samples, 0.01%) - - - -EPoll_wait_924e0155f5e5b0f5871656887c69c84c66dabd03 (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -JavaThreads_threadStartRoutine_241bd8ce6d5858d439c83fac40308278d1b55d23 (1 samples, 0.01%) - - - -TCPServerBase_listen_0178883c054d7a1089dfb36af5cd1df5c11faa01 (2 samples, 0.03%) - - - -run_posix_cpu_timers (1 samples, 0.01%) - - - -tick_nohz_idle_got_tick (1 samples, 0.01%) - - - -arena_get2.part.0 (8 samples, 0.12%) - - - -FutureBase$$Lambda$1a242d9af289ab51236f34d2b5ce865d7385ac85_run_44090d6353a0b2a719d01b3b7549c66faeec19ac (1 samples, 0.01%) - - - -RequestDispatcher_service_21e693aae924b24e39e78ee32740e878fcf31c62 (5,203 samples, 75.65%) 
-RequestDispatcher_service_21e693aae924b24e39e78ee32740e878fcf31c62 - - -CgroupUtil$$Lambda$017b0cd0360754c055090b7d9521ad624f6920d8_run_77af995cd5a939af9f290a79416bc56d932802be (1 samples, 0.01%) - - - -native_sched_clock (2 samples, 0.03%) - - - -menu_reflect (1 samples, 0.01%) - - - -NioEventLoop_constructor_49df0e0d6cddf8f78e642a99ad82de56c1f0a39b (3 samples, 0.04%) - - - -security_task_setscheduler (1 samples, 0.01%) - - - -entry_SYSCALL_64_after_hwframe (4 samples, 0.06%) - - - -AbstractScheduledEventExecutor_scheduledTaskQueue_0845941b70faa6fbebbf559e17b92044930f7689 (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -LinuxStackOverflowSupport_getStackInformation_668d550a017f138111a811aa1691a8cdc4c01e07 (1 samples, 0.01%) - - - -PooledByteBufAllocator_<clinit>_21f234826cd9f3a4b36f743cf0eab78640b3f2fc (2 samples, 0.03%) - - - -perf (344 samples, 5.00%) -perf - - -vm_mmap_pgoff (2 samples, 0.03%) - - - -__GI___write (20 samples, 0.29%) - - - -i2c_transfer (1 samples, 0.01%) - - - -update_min_vruntime (1 samples, 0.01%) - - - -schedule (1 samples, 0.01%) - - - -start_kernel (3 samples, 0.04%) - - - -DeploymentManager_lambda$doDeploy$5_c3cad315748fdf4e2680e0fb8aa9d13885437e92 (2 samples, 0.03%) - - - -mod_objcg_state (1 samples, 0.01%) - - - -_perf_event_enable (1 samples, 0.01%) - - - -[perf] (311 samples, 4.52%) -[perf] - - -poll_freewait (1 samples, 0.01%) - - - -process_one_work (1 samples, 0.01%) - - - -__fcntl64_nocancel_adjusted (1 samples, 0.01%) - - - -set_cpus_allowed_common (2 samples, 0.03%) - - - -enqueue_task_fair (1 samples, 0.01%) - - - -cpuidle_enter (3 samples, 0.04%) - - - -worker_thread (2 samples, 0.03%) - - - -calc_timer_values (1 samples, 0.01%) - - - -hrtimer_forward (1 samples, 0.01%) - - - -enqueue_entity (1 samples, 0.01%) - - - -LocaleData$1_run_8e15394bebe96a4c95d0706e2864c07442a8d06d (1 samples, 0.01%) - - - -dequeue_task_fair (1 samples, 0.01%) - - - 
-VertxHttpRecorder$WebDeploymentVerticle_setupTcpHttpServer_9628d2555bc571703a294132046d9520baa719ce (2 samples, 0.03%) - - - -lapic_next_event (1 samples, 0.01%) - - - -Java_sun_nio_ch_IOUtil_write1 (1 samples, 0.01%) - - - -d_alloc_pseudo (1 samples, 0.01%) - - - -hrtimer_interrupt (37 samples, 0.54%) - - - -update_curr (5 samples, 0.07%) - - - -SIGINT_handler (1 samples, 0.01%) - - - -smpboot_thread_fn (1 samples, 0.01%) - - - -__common_interrupt (1 samples, 0.01%) - - - -do_syscall_64 (1 samples, 0.01%) - - - -QuarkusExecutorFactory_createExecutor_9729cf50c2d628de11b3a8042829183a7e7bf123 (1 samples, 0.01%) - - - -__x64_sys_munmap (4 samples, 0.06%) - - - -SynchronousDispatcher$$Lambda$272cbc239fe16868b5b9c4d18a415e65ad284626_get_066209accc18eab354e52a2a1e440e3711935b5b (5,203 samples, 75.65%) -SynchronousDispatcher$$Lambda$272cbc239fe16868b5b9c4d18a415e65ad284626_get_066209accc18eab354e52a2a1e440e3711935b5b - - -rwsem_down_write_slowpath (3 samples, 0.04%) - - - -DeploymentManager$$Lambda$2780e112bbbc503323919310b7eba5d4bd5972e7_handle_46ed4864dbed50147dc1770382bbc2cd6058e622 (2 samples, 0.03%) - - - -igb_poll (1 samples, 0.01%) - - - -NioEventLoop_run_be89580b4d16514bef6e948913d2ed21c5e4f679 (1 samples, 0.01%) - - - -VMThreads_findIsolateThreadForCurrentOSThread_92ae819b2eb5871e48575e78c4c13a4549a980b0 (1 samples, 0.01%) - - - -__sysvec_apic_timer_interrupt (90 samples, 1.31%) - - - -Runtime_availableProcessors_8c88a5035e0f76404bd388d31bc8ea17e15f987b (1 samples, 0.01%) - - - -set_next_task_idle (1 samples, 0.01%) - - - -[unknown] (1 samples, 0.01%) - - - -__GI___mmap64 (2 samples, 0.03%) - - - -GlobalEventExecutor_<clinit>_50a03cee518d54ec869344f03e9a33d49338fe89 (1 samples, 0.01%) - - - -perf_event_task_tick (1 samples, 0.01%) - - - -schedule_hrtimeout_range_clock (1 samples, 0.01%) - - - -__GI___mmap64 (3 samples, 0.04%) - - - -sched_clock_idle_wakeup_event (1 samples, 0.01%) - - - -__netif_receive_skb_list_core (1 samples, 0.01%) - - - 
-entry_SYSCALL_64_after_hwframe (1 samples, 0.01%) - - - -native_sched_clock (1 samples, 0.01%) - - - -nvkm_mc_intr (1 samples, 0.01%) - - - -__mod_lruvec_state (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -blk_finish_plug (1 samples, 0.01%) - - - -VMThreads_findIsolateThreadForCurrentOSThread_92ae819b2eb5871e48575e78c4c13a4549a980b0 (1 samples, 0.01%) - - - -select_task_rq_fair (2 samples, 0.03%) - - - -acpi_processor_ffh_cstate_enter (5 samples, 0.07%) - - - -update_sd_lb_stats.constprop.0 (1 samples, 0.01%) - - - -calc_wheel_index (1 samples, 0.01%) - - - -enqueue_task (1 samples, 0.01%) - - - -__remove_hrtimer (1 samples, 0.01%) - - - -cpuidle_enter_state (3 samples, 0.04%) - - - -cgroup_rstat_flush_irqsafe (1 samples, 0.01%) - - - -__x64_sys_mprotect (1 samples, 0.01%) - - - -hrtimer_interrupt (1 samples, 0.01%) - - - -native_sched_clock (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (96 samples, 1.40%) - - - -arch_perf_update_userpage (3 samples, 0.04%) - - - -_dl_sysdep_start (1 samples, 0.01%) - - - -nohz_run_idle_balance (1 samples, 0.01%) - - - -down_write_killable (1 samples, 0.01%) - - - -netif_receive_skb_list_internal (1 samples, 0.01%) - - - -lock_page_memcg (1 samples, 0.01%) - - - -VertxBuilder_vertx_96fbf1a3cb1f742947b0ca876c6065e325fb888f (6 samples, 0.09%) - - - -native_write_msr (3 samples, 0.04%) - - - -native_write_msr (3 samples, 0.04%) - - - -__mod_memcg_state.part.0 (1 samples, 0.01%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (1 samples, 0.01%) - - - -SubstrateArraycopySnippets_doArraycopy_f84b22127218a3e56e291fab3a848f043b4ef61f (1 samples, 0.01%) - - - -SubstrateAllocationSnippets_newMultiArrayRecursion_1d8dcff1021ab9ecc1a1f2b483cd1dd7943ba1e3 (1 samples, 0.01%) - - - -osq_lock (2 samples, 0.03%) - - - -down_write_killable (1 samples, 0.01%) - - - -update_load_avg (1 samples, 0.01%) - - - 
-MultiThreadedMonitorSupport_getOrCreateMonitor_2ecf5995a7a109dacede518d33424ec5ebddfde6 (1 samples, 0.01%) - - - -__irq_exit_rcu (1 samples, 0.01%) - - - -SmallRyeContextPropagationProcessor$build1300494616_deploy_0_5b2b050e73892cbed63182af376589f0f9196595 (1 samples, 0.01%) - - - -__clone3 (1 samples, 0.01%) - - - -IsolateEnterStub_PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df_06195ea7c1ac11d884862c6f069b026336aa4f8c (19 samples, 0.28%) - - - -do_user_addr_fault (1 samples, 0.01%) - - - -__GI___mmap64 (1 samples, 0.01%) - - - -Thread_run_857ee078f8137062fcf27275732adf5c4870652a (2 samples, 0.03%) - - - -get_cpu_device (1 samples, 0.01%) - - - -PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df (1 samples, 0.01%) - - - -menu_select (1 samples, 0.01%) - - - -mntput_no_expire (1 samples, 0.01%) - - - -native_write_msr (1 samples, 0.01%) - - - -acpi_processor_ffh_cstate_enter (1 samples, 0.01%) - - - -FileCleanable_register_ce5f929418f49c4036e4eca128104b8ed5fca8cd (1 samples, 0.01%) - - - -AccessController_doPrivileged_de67c881e9b98da6843ef32b423a3d8bdbb4a36a (1 samples, 0.01%) - - - -__x86_indirect_thunk_rax (1 samples, 0.01%) - - - -MethodInjectorImpl_invoke_9ffe1d0d644afc798fca21f38b93f6001d9d69e0 (5,203 samples, 75.65%) -MethodInjectorImpl_invoke_9ffe1d0d644afc798fca21f38b93f6001d9d69e0 - - -[unknown] (3 samples, 0.04%) - - - -menu_select (4 samples, 0.06%) - - - -JavaMemoryUtil_copyPrimitiveArrayForward_ba0fe8da18fa7f513fc003ad474ddce7ea34ac68 (5,201 samples, 75.62%) -JavaMemoryUtil_copyPrimitiveArrayForward_ba0fe8da18fa7f513fc003ad474ddce7ea34ac68 - - -ttwu_do_wakeup (1 samples, 0.01%) - - - -MultiThreadedMonitorSupport_slowPathMonitorEnter_5c2ec80c70301e1f54c9deef94b70b719d5a10f5 (1 samples, 0.01%) - - - -native_irq_return_iret (1 samples, 0.01%) - - - -__x64_sys_poll (1 samples, 0.01%) - - - -ClassInitializationInfo_invokeClassInitializer_bbe695b1135def8c02910dab61e8f305ee37d4f1 (1 samples, 0.01%) - - 
[Text residue of a deleted flame-graph SVG, condensed. The graph profiles a Quarkus native executable under curl load (6,878 samples total). The dominant stack, 5,203 samples (75.65%), runs from ecutor-thread-0 (5,216 samples, 75.84%) through the RESTEasy dispatch chain (SynchronousDispatcher_invoke -> ResourceMethodInvoker_invoke -> MethodInjectorImpl_invoke -> SubstrateMethodAccessor_invoke -> ReflectionAccessorHolder_StringBuilderResource_appendDelete) into SubstrateArraycopySnippets_doArraycopy (5,202 samples, 75.63%) -> UnmanagedMemoryUtil_copyForward (5,201 samples, 75.62%) -> UnmanagedMemoryUtil_copyLongsForward (5,199 samples, 75.59%). Remaining towers: curl (735 samples, 10.69%), swapper/idle (530 samples, 7.71%), and [perf] (339 samples, 4.93%).]
diff --git a/_versions/2.7/guides/images/openapi-swaggerui-guide-screenshot01.png b/_versions/2.7/guides/images/openapi-swaggerui-guide-screenshot01.png
deleted file mode 100644
index 1b590c7e847..00000000000
Binary files a/_versions/2.7/guides/images/openapi-swaggerui-guide-screenshot01.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/openapi-swaggerui-guide-screenshot02.png b/_versions/2.7/guides/images/openapi-swaggerui-guide-screenshot02.png
deleted file mode 100644
index 17558cb0f98..00000000000
Binary files a/_versions/2.7/guides/images/openapi-swaggerui-guide-screenshot02.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/optaplanner-time-table-app-screenshot.png b/_versions/2.7/guides/images/optaplanner-time-table-app-screenshot.png
deleted file mode 100644
index 8cd7af89288..00000000000
Binary files a/_versions/2.7/guides/images/optaplanner-time-table-app-screenshot.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/optaplanner-time-table-class-diagram-annotated.png b/_versions/2.7/guides/images/optaplanner-time-table-class-diagram-annotated.png
deleted file mode 100644
index c52029e1958..00000000000
Binary files a/_versions/2.7/guides/images/optaplanner-time-table-class-diagram-annotated.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/optaplanner-time-table-class-diagram-pure.png b/_versions/2.7/guides/images/optaplanner-time-table-class-diagram-pure.png
deleted file mode 100644
index 26be4604e44..00000000000
Binary files a/_versions/2.7/guides/images/optaplanner-time-table-class-diagram-pure.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/optaplanner-time-table-class-diagram.svg b/_versions/2.7/guides/images/optaplanner-time-table-class-diagram.svg
deleted file mode 100644
index 07482243f22..00000000000
---
a/_versions/2.7/guides/images/optaplanner-time-table-class-diagram.svg
+++ /dev/null
@@ -1,681 +0,0 @@
[Text residue of the deleted SVG, condensed: "Time table class diagram" (www.optaplanner.org). Three classes: Timeslot (dayOfWeek : DayOfWeek, startTime : LocalTime, endTime : LocalTime), Room (name : String), and the @PlanningEntity Lesson (subject : String, teacher : String, studentGroup : String), whose @PlanningVariable timeslot and room references (multiplicity 0..1 to *) are normally null before solving and non-null after solving.]
diff --git a/_versions/2.7/guides/images/proactor-pattern.png b/_versions/2.7/guides/images/proactor-pattern.png
deleted file mode 100644
index 2218c588da3..00000000000
Binary files a/_versions/2.7/guides/images/proactor-pattern.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/quarkus-reactive-core.png b/_versions/2.7/guides/images/quarkus-reactive-core.png
deleted file mode 100644
index 59dcae49af0..00000000000
Binary files a/_versions/2.7/guides/images/quarkus-reactive-core.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/quarkus-reactive-stack.png b/_versions/2.7/guides/images/quarkus-reactive-stack.png
deleted file mode 100644
index 22fdb382940..00000000000
Binary files a/_versions/2.7/guides/images/quarkus-reactive-stack.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/quarkus-vertx-guide-architecture.png b/_versions/2.7/guides/images/quarkus-vertx-guide-architecture.png
deleted file mode 100644
index b752e089161..00000000000
Binary files a/_versions/2.7/guides/images/quarkus-vertx-guide-architecture.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/rabbitmq-guide-architecture.png b/_versions/2.7/guides/images/rabbitmq-guide-architecture.png
deleted file mode 100644
index ea6682da1b6..00000000000
Binary files
a/_versions/2.7/guides/images/rabbitmq-guide-architecture.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/rabbitmq-qs-app-screenshot.png b/_versions/2.7/guides/images/rabbitmq-qs-app-screenshot.png
deleted file mode 100644
index 864b952a782..00000000000
Binary files a/_versions/2.7/guides/images/rabbitmq-qs-app-screenshot.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/rabbitmq-qs-architecture.png b/_versions/2.7/guides/images/rabbitmq-qs-architecture.png
deleted file mode 100644
index a6bad5bfbe5..00000000000
Binary files a/_versions/2.7/guides/images/rabbitmq-qs-architecture.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/reactive-guide-code.png b/_versions/2.7/guides/images/reactive-guide-code.png
deleted file mode 100644
index 84fb2bbe61d..00000000000
Binary files a/_versions/2.7/guides/images/reactive-guide-code.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/reactive-routes-guide-screenshot01.png b/_versions/2.7/guides/images/reactive-routes-guide-screenshot01.png
deleted file mode 100644
index f4a00fec58b..00000000000
Binary files a/_versions/2.7/guides/images/reactive-routes-guide-screenshot01.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/reactive-systems.png b/_versions/2.7/guides/images/reactive-systems.png
deleted file mode 100644
index 45c796a995b..00000000000
Binary files a/_versions/2.7/guides/images/reactive-systems.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/reactive-thread.png b/_versions/2.7/guides/images/reactive-thread.png
deleted file mode 100644
index f8fd7616195..00000000000
Binary files a/_versions/2.7/guides/images/reactive-thread.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/registry-nexus-repository.png b/_versions/2.7/guides/images/registry-nexus-repository.png
deleted file mode 100644
index b7fb7695f60..00000000000
Binary files a/_versions/2.7/guides/images/registry-nexus-repository.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/registry-nexus3-repository.png b/_versions/2.7/guides/images/registry-nexus3-repository.png
deleted file mode 100644
index 3f43bcf8fab..00000000000
Binary files a/_versions/2.7/guides/images/registry-nexus3-repository.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/scheduling-task-architecture.png b/_versions/2.7/guides/images/scheduling-task-architecture.png
deleted file mode 100644
index 34ec58a08d9..00000000000
Binary files a/_versions/2.7/guides/images/scheduling-task-architecture.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/spring-web-guide-screenshot01.png b/_versions/2.7/guides/images/spring-web-guide-screenshot01.png
deleted file mode 100644
index 13e131ca758..00000000000
Binary files a/_versions/2.7/guides/images/spring-web-guide-screenshot01.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/stork-getting-started-architecture.png b/_versions/2.7/guides/images/stork-getting-started-architecture.png
deleted file mode 100644
index a73610ccf71..00000000000
Binary files a/_versions/2.7/guides/images/stork-getting-started-architecture.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/stork-process.png b/_versions/2.7/guides/images/stork-process.png
deleted file mode 100644
index c9c80abb3bb..00000000000
Binary files a/_versions/2.7/guides/images/stork-process.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/validation-guide-architecture.png b/_versions/2.7/guides/images/validation-guide-architecture.png
deleted file mode 100644
index 8bd4b4876b1..00000000000
Binary files a/_versions/2.7/guides/images/validation-guide-architecture.png and /dev/null differ
diff --git a/_versions/2.7/guides/images/validation-guide-screenshot.png b/_versions/2.7/guides/images/validation-guide-screenshot.png
deleted file mode 100644
index d69550fb329..00000000000
Binary files a/_versions/2.7/guides/images/validation-guide-screenshot.png and /dev/null
differ diff --git a/_versions/2.7/guides/images/websocket-guide-architecture.png b/_versions/2.7/guides/images/websocket-guide-architecture.png deleted file mode 100644 index f306cf161d5..00000000000 Binary files a/_versions/2.7/guides/images/websocket-guide-architecture.png and /dev/null differ diff --git a/_versions/2.7/guides/images/websocket-guide-screenshot.png b/_versions/2.7/guides/images/websocket-guide-screenshot.png deleted file mode 100644 index 231b78bd41b..00000000000 Binary files a/_versions/2.7/guides/images/websocket-guide-screenshot.png and /dev/null differ diff --git a/_versions/2.7/guides/includes/devtools/build-native-container-parameters.adoc b/_versions/2.7/guides/includes/devtools/build-native-container-parameters.adoc deleted file mode 100644 index 1885434a18a..00000000000 --- a/_versions/2.7/guides/includes/devtools/build-native-container-parameters.adoc +++ /dev/null @@ -1,21 +0,0 @@ -[source, bash, subs=attributes+, role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -quarkus build --native -Dquarkus.native.container-build=true {build-additional-parameters} ----- -ifndef::devtools-no-maven[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -./mvnw package -Dnative -Dquarkus.native.container-build=true {build-additional-parameters} ----- -endif::[] -ifndef::devtools-no-gradle[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -./gradlew build -Dquarkus.package.type=native -Dquarkus.native.container-build=true {build-additional-parameters} ----- -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/build-native-container.adoc b/_versions/2.7/guides/includes/devtools/build-native-container.adoc deleted file mode 100644 index f382e8f9362..00000000000 --- a/_versions/2.7/guides/includes/devtools/build-native-container.adoc +++ /dev/null @@ -1,36 +0,0 @@ -[source, bash, 
subs=attributes+, role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -ifdef::build-additional-parameters[] -quarkus build --native -Dquarkus.native.container-build=true {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -quarkus build --native -Dquarkus.native.container-build=true -endif::[] ----- -ifndef::devtools-no-maven[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -ifdef::build-additional-parameters[] -./mvnw package -Dnative -Dquarkus.native.container-build=true {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -./mvnw package -Dnative -Dquarkus.native.container-build=true -endif::[] ----- -endif::[] -ifndef::devtools-no-gradle[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -ifdef::build-additional-parameters[] -./gradlew build -Dquarkus.package.type=native -Dquarkus.native.container-build=true {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -./gradlew build -Dquarkus.package.type=native -Dquarkus.native.container-build=true -endif::[] ----- -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/build-native.adoc b/_versions/2.7/guides/includes/devtools/build-native.adoc deleted file mode 100644 index dff6362c2ab..00000000000 --- a/_versions/2.7/guides/includes/devtools/build-native.adoc +++ /dev/null @@ -1,36 +0,0 @@ -[source, bash, subs=attributes+, role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -ifdef::build-additional-parameters[] -quarkus build --native {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -quarkus build --native -endif::[] ----- -ifndef::devtools-no-maven[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -ifdef::build-additional-parameters[] -./mvnw package 
-Dnative {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -./mvnw package -Dnative -endif::[] ----- -endif::[] -ifndef::devtools-no-gradle[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -ifdef::build-additional-parameters[] -./gradlew build -Dquarkus.package.type=native {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -./gradlew build -Dquarkus.package.type=native -endif::[] ----- -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/build.adoc b/_versions/2.7/guides/includes/devtools/build.adoc deleted file mode 100644 index 86e82319783..00000000000 --- a/_versions/2.7/guides/includes/devtools/build.adoc +++ /dev/null @@ -1,36 +0,0 @@ -[source, bash, subs=attributes+, role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -ifdef::build-additional-parameters[] -quarkus build {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -quarkus build -endif::[] ----- -ifndef::devtools-no-maven[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -ifdef::build-additional-parameters[] -./mvnw clean package {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -./mvnw clean package -endif::[] ----- -endif::[] -ifndef::devtools-no-gradle[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -ifdef::build-additional-parameters[] -./gradlew build {build-additional-parameters} -endif::[] -ifndef::build-additional-parameters[] -./gradlew build -endif::[] ----- -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/create-app.adoc b/_versions/2.7/guides/includes/devtools/create-app.adoc deleted file mode 100644 index 78c434112a5..00000000000 --- 
a/_versions/2.7/guides/includes/devtools/create-app.adoc +++ /dev/null @@ -1,105 +0,0 @@ -[role="primary asciidoc-tabs-sync-cli"] -.CLI -**** -[source,bash,subs=attributes+] ----- -ifdef::create-app-group-id[] -ifdef::create-app-extensions[] -quarkus create app {create-app-group-id}:{create-app-artifact-id} \ -endif::[] -ifndef::create-app-extensions[] -ifndef::create-app-code[] -quarkus create app {create-app-group-id}:{create-app-artifact-id} \ -endif::[] -ifdef::create-app-code[] -quarkus create app {create-app-group-id}:{create-app-artifact-id} -endif::[] -endif::[] -endif::[] -ifndef::create-app-group-id[] -ifdef::create-app-extensions[] -quarkus create app org.acme:{create-app-artifact-id} \ -endif::[] -ifndef::create-app-extensions[] -ifndef::create-app-code[] -quarkus create app org.acme:{create-app-artifact-id} \ -endif::[] -ifdef::create-app-code[] -quarkus create app org.acme:{create-app-artifact-id} -endif::[] -endif::[] -endif::[] -ifdef::create-app-extensions[] -ifndef::create-app-code[] - --extension={create-app-extensions} \ -endif::[] -ifdef::create-app-code[] - --extension={create-app-extensions} -endif::[] -endif::[] -ifndef::create-app-code[] - --no-code -endif::[] -ifdef::create-app-post-command[] -ifeval::["{create-app-post-command}" != ""] -{create-app-post-command} -endif::[] -endif::[] -ifndef::create-app-post-command[] -cd {create-app-artifact-id} -endif::[] ----- - -ifndef::devtools-no-gradle[] -To create a Gradle project, add the `--gradle` or `--gradle-kotlin-dsl` option. 
-endif::[] - -_For more information about how to install the Quarkus CLI and use it, please refer to xref:cli-tooling.adoc[the Quarkus CLI guide]._ -**** - -[role="secondary asciidoc-tabs-sync-maven"] -.Maven -**** -[source,bash,subs=attributes+] ----- -mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create \ -ifdef::create-app-group-id[] - -DprojectGroupId={create-app-group-id} \ -endif::[] -ifndef::create-app-group-id[] - -DprojectGroupId=org.acme \ -endif::[] -ifdef::create-app-extensions[] - -DprojectArtifactId={create-app-artifact-id} \ -endif::[] -ifndef::create-app-extensions[] -ifndef::create-app-code[] - -DprojectArtifactId={create-app-artifact-id} \ -endif::[] -ifdef::create-app-code[] - -DprojectArtifactId={create-app-artifact-id} -endif::[] -endif::[] -ifdef::create-app-extensions[] -ifndef::create-app-code[] - -Dextensions="{create-app-extensions}" \ -endif::[] -ifdef::create-app-code[] - -Dextensions="{create-app-extensions}" -endif::[] -endif::[] -ifndef::create-app-code[] - -DnoCode -endif::[] -ifdef::create-app-post-command[] -{create-app-post-command} -endif::[] -ifndef::create-app-post-command[] -cd {create-app-artifact-id} -endif::[] ----- - -ifndef::devtools-no-gradle[] -To create a Gradle project, add the `-DbuildTool=gradle` or `-DbuildTool=gradle-kotlin-dsl` option. 
-endif::[] -**** diff --git a/_versions/2.7/guides/includes/devtools/create-cli.adoc b/_versions/2.7/guides/includes/devtools/create-cli.adoc deleted file mode 100644 index 06fb5e672a5..00000000000 --- a/_versions/2.7/guides/includes/devtools/create-cli.adoc +++ /dev/null @@ -1,89 +0,0 @@ -[role="primary asciidoc-tabs-sync-cli"] -.CLI -**** -[source,bash,subs=attributes+] ----- -ifdef::create-cli-group-id[] -ifdef::create-cli-extensions[] -quarkus create cli {create-cli-group-id}:{create-cli-artifact-id} \ -endif::[] -ifndef::create-cli-extensions[] -ifndef::create-cli-code[] -quarkus create cli {create-cli-group-id}:{create-cli-artifact-id} \ -endif::[] -ifdef::create-cli-code[] -quarkus create cli {create-cli-group-id}:{create-cli-artifact-id} -endif::[] -endif::[] -endif::[] -ifndef::create-cli-group-id[] -ifdef::create-cli-extensions[] -quarkus create cli org.acme:{create-cli-artifact-id} \ -endif::[] -ifndef::create-cli-extensions[] -ifndef::create-cli-code[] -quarkus create cli org.acme:{create-cli-artifact-id} \ -endif::[] -ifdef::create-cli-code[] -quarkus create cli org.acme:{create-cli-artifact-id} -endif::[] -endif::[] -endif::[] -ifdef::create-cli-extensions[] -ifndef::create-cli-code[] - --extension={create-cli-extensions} \ -endif::[] -ifdef::create-cli-code[] - --extension={create-cli-extensions} -endif::[] -endif::[] -ifndef::create-cli-code[] - --no-code -endif::[] -ifdef::create-cli-post-command[] -ifeval::["{create-cli-post-command}" != ""] -{create-cli-post-command} -endif::[] -endif::[] -ifndef::create-cli-post-command[] -cd {create-cli-artifact-id} -endif::[] ----- - -To create a Gradle project, add the `--gradle` or `--gradle-kotlin-dsl` option. 
- -_For more information about how to install the Quarkus CLI and use it, please refer to xref:cli-tooling.adoc[the Quarkus CLI guide]._ -**** - -[role="secondary asciidoc-tabs-sync-maven"] -.Maven -**** -[source,bash,subs=attributes+] ----- -mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create \ -ifdef::create-cli-group-id[] - -DprojectGroupId={create-cli-group-id} \ -endif::[] -ifndef::create-cli-group-id[] - -DprojectGroupId=org.acme \ -endif::[] - -DprojectArtifactId={create-cli-artifact-id} \ -ifndef::create-cli-code[] - -DnoCode \ -endif::[] -ifdef::create-cli-extensions[] - -Dextensions="picocli,{create-cli-extensions}" -endif::[] -ifndef::create-cli-extensions[] - -Dextensions="picocli" -endif::[] -ifdef::create-cli-post-command[] -{create-cli-post-command} -endif::[] -ifndef::create-cli-post-command[] -cd {create-cli-artifact-id} -endif::[] ----- - -To create a Gradle project, add the `-DbuildTool=gradle` or `-DbuildTool=gradle-kotlin-dsl` option. -**** diff --git a/_versions/2.7/guides/includes/devtools/dev-parameters.adoc b/_versions/2.7/guides/includes/devtools/dev-parameters.adoc deleted file mode 100644 index 4773ee969f5..00000000000 --- a/_versions/2.7/guides/includes/devtools/dev-parameters.adoc +++ /dev/null @@ -1,21 +0,0 @@ -[source, bash, subs=attributes+, role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -quarkus dev {dev-additional-parameters} ----- -ifndef::devtools-no-maven[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -./mvnw quarkus:dev {dev-additional-parameters} ----- -endif::[] -ifndef::devtools-no-gradle[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -./gradlew --console=plain quarkusDev {dev-additional-parameters} ----- -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/dev.adoc b/_versions/2.7/guides/includes/devtools/dev.adoc 
deleted file mode 100644 index 86bbddb4894..00000000000 --- a/_versions/2.7/guides/includes/devtools/dev.adoc +++ /dev/null @@ -1,36 +0,0 @@ -[source, bash, subs=attributes+, role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -ifdef::dev-additional-parameters[] -quarkus dev {dev-additional-parameters} -endif::[] -ifndef::dev-additional-parameters[] -quarkus dev -endif::[] ----- -ifdef::devtools-wrapped[+] -ifndef::devtools-no-maven[] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -ifdef::dev-additional-parameters[] -./mvnw quarkus:dev {dev-additional-parameters} -endif::[] -ifndef::dev-additional-parameters[] -./mvnw quarkus:dev -endif::[] ----- -endif::[] -ifdef::devtools-wrapped[+] -ifndef::devtools-no-gradle[] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -ifdef::dev-additional-parameters[] -./gradlew --console=plain quarkusDev {dev-additional-parameters} -endif::[] -ifndef::dev-additional-parameters[] -./gradlew --console=plain quarkusDev -endif::[] ----- -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/extension-add.adoc b/_versions/2.7/guides/includes/devtools/extension-add.adoc deleted file mode 100644 index 159865c95d8..00000000000 --- a/_versions/2.7/guides/includes/devtools/extension-add.adoc +++ /dev/null @@ -1,21 +0,0 @@ -[source,bash,subs=attributes+,role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -quarkus extension add '{add-extension-extensions}' ----- -ifndef::devtools-no-maven[] -ifdef::devtools-wrapped[+] -[source,bash,subs=attributes+,role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -./mvnw quarkus:add-extension -Dextensions="{add-extension-extensions}" ----- -endif::[] -ifndef::devtools-no-gradle[] -ifdef::devtools-wrapped[+] -[source,bash,subs=attributes+,role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -./gradlew addExtension --extensions="{add-extension-extensions}" ----- -endif::[] \ No 
newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/extension-list.adoc b/_versions/2.7/guides/includes/devtools/extension-list.adoc deleted file mode 100644 index 525271a8fc7..00000000000 --- a/_versions/2.7/guides/includes/devtools/extension-list.adoc +++ /dev/null @@ -1,21 +0,0 @@ -[source, bash, subs=attributes+, role="primary asciidoc-tabs-sync-cli"] -.CLI ----- -quarkus extension ----- -ifndef::devtools-no-maven[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -./mvnw quarkus:list-extensions ----- -endif::[] -ifndef::devtools-no-gradle[] -ifdef::devtools-wrapped[+] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -./gradlew listExtensions ----- -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/prerequisites.adoc b/_versions/2.7/guides/includes/devtools/prerequisites.adoc deleted file mode 100644 index c64c71b5f95..00000000000 --- a/_versions/2.7/guides/includes/devtools/prerequisites.adoc +++ /dev/null @@ -1,31 +0,0 @@ -To complete this guide, you need: - -ifdef::prerequisites-time[] -* Roughly {prerequisites-time} -endif::[] -ifndef::prerequisites-time[] -* Roughly 15 minutes -endif::[] -* An IDE -ifdef::prerequisites-ide[{prerequisites-ide}] -* JDK 11+ installed with `JAVA_HOME` configured appropriately -ifndef::prerequisites-no-maven[] -* Apache Maven {maven-version} -endif::[] -ifdef::prerequisites-docker[] -* A working container runtime (Docker or Podman) -endif::[] -ifdef::prerequisites-docker-compose[] -* Docker and Docker Compose -endif::[] -ifndef::prerequisites-no-cli[] -* Optionally the xref:cli-tooling.adoc[Quarkus CLI] if you want to use it -endif::[] -ifndef::prerequisites-no-graalvm[] -ifndef::prerequisites-graalvm-mandatory[] -* Optionally Mandrel or GraalVM installed and xref:building-native-image.adoc#configuring-graalvm[configured appropriately] if you want to 
build a native executable (or Docker if you use a native container build) -endif::[] -ifdef::prerequisites-graalvm-mandatory[] -* Mandrel or GraalVM installed and xref:building-native-image.adoc#configuring-graalvm[configured appropriately] -endif::[] -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/includes/devtools/test.adoc b/_versions/2.7/guides/includes/devtools/test.adoc deleted file mode 100644 index c690c4a946e..00000000000 --- a/_versions/2.7/guides/includes/devtools/test.adoc +++ /dev/null @@ -1,15 +0,0 @@ -ifndef::devtools-no-maven[] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-maven"] -.Maven ----- -./mvnw test ----- -endif::[] -ifdef::devtools-wrapped[+] -ifndef::devtools-no-gradle[] -[source, bash, subs=attributes+, role="secondary asciidoc-tabs-sync-gradle"] -.Gradle ----- -./gradlew test ----- -endif::[] \ No newline at end of file diff --git a/_versions/2.7/guides/infinispan-client.adoc b/_versions/2.7/guides/infinispan-client.adoc deleted file mode 100644 index ca618d29860..00000000000 --- a/_versions/2.7/guides/infinispan-client.adoc +++ /dev/null @@ -1,549 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Infinispan Client - -include::./attributes.adoc[] - -Infinispan is a distributed, in-memory key/value store that provides Quarkus applications with a highly configurable -and independently scalable data layer. -This extension gives you client functionality that connects applications running on Quarkus with remote Infinispan clusters. - -To find out more about Infinispan, visit the https://infinispan.org/documentation[Infinispan documentation]. - -== Solution - -We recommend that you complete each step in the following sections to create the application. 
-However, you can proceed directly to the completed solution as follows: - -Clone the Git repository: `git clone {quickstarts-clone-url}` or download an {quickstarts-archive-url}[archive]. -Locate the solution in the `infinispan-client-quickstart` {quickstarts-tree-url}/infinispan-client-quickstart[directory]. - -== Adding the Infinispan client extension - -Run the following command in the base directory of your Quarkus project to add the `infinispan-client` extension: - -:add-extension-extensions: infinispan-client -include::includes/devtools/extension-add.adoc[] - -This command adds the following dependency to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-infinispan-client</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-infinispan-client") ----- - -== Configuring the Infinispan client - -Open the `application.properties` file in the `src/main/resources` directory with any text editor. - -Note that Infinispan documentation refers to a `hotrod-client.properties` file. -You can configure the Infinispan client with either properties file, but `application.properties` always takes -priority over `hotrod-client.properties`. - -Additionally, you cannot update configuration properties at runtime. -If you modify `application.properties` or `hotrod-client.properties`, you must rebuild the application before those changes take effect.
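
To illustrate the precedence rule, the same server address can be expressed in either file; when both are present, the `application.properties` entry wins. (This side-by-side fragment is illustrative; the `infinispan.client.hotrod.*` name follows the standard Infinispan client property conventions, and the address is a placeholder.)

```properties
# src/main/resources/application.properties -- always takes priority
quarkus.infinispan-client.server-list=localhost:11222

# src/main/resources/hotrod-client.properties -- equivalent setting,
# overridden by the application.properties entry above
infinispan.client.hotrod.server_list=localhost:11222
```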
- -== Connecting to Infinispan clusters - -Add the following properties to connect to Infinispan Server: - -[source,properties] ----- -# Infinispan Server address -quarkus.infinispan-client.server-list=localhost:11222 - -# Authentication -quarkus.infinispan-client.auth-username=admin -quarkus.infinispan-client.auth-password=password - -# Infinispan client intelligence -# Use BASIC as a Docker for Mac workaround -quarkus.infinispan-client.client-intelligence=BASIC ----- - -.Running Infinispan Server - -To use the Infinispan client extension, you need at least one running instance of Infinispan Server. - -Check out our 5-minute https://infinispan.org/get-started/[Getting started with Infinispan] tutorial to run Infinispan Server locally. - -Infinispan Server also enables authentication and security authorization by default, so you need to create a user with permissions. - -* If you run the Infinispan Server image, pass the `USER="admin"` and `PASS="password"` parameters. -* If you run the bare-metal distribution, use the Command Line Interface (CLI) as follows: -+ -[source,bash] ----- -$ ./bin/cli.sh user create admin -p password ----- - -=== Authentication mechanisms - -You can use the following authentication mechanisms with the Infinispan client: - -* DIGEST-MD5 -* PLAIN (recommended only in combination with TLS encryption) -* EXTERNAL - -Other authentication mechanisms, such as SCRAM and GSSAPI, are not yet verified with the Infinispan client. - -You can find more information on configuring authentication in https://infinispan.org/docs/stable/titles/hotrod_java/hotrod_java.html#hotrod_endpoint_auth-hotrod-client-configuration[Hot Rod Endpoint Authentication Mechanisms]. - -[NOTE] -==== -You must configure authentication in the `hotrod-client.properties` file if you use Dependency Injection. -==== - -== Serialization (Key Value types support) - -By default, the client supports keys and values of the following types: byte[], -primitive wrappers (e.g.
Integer, Long, Double, etc.), String, Date and Instant. User types require -some additional steps that are detailed here. Let's say we have the following user classes: - -.Author.java -[source,java] ----- -public class Author { - private final String name; - private final String surname; - - public Author(String name, String surname) { - this.name = Objects.requireNonNull(name); - this.surname = Objects.requireNonNull(surname); - } - // Getter/Setter/equals/hashCode/toString omitted -} ----- - -.Book.java -[source,java] ----- -public class Book { - private final String title; - private final String description; - private final int publicationYear; - private final Set<Author> authors; - private final BigDecimal price; - - public Book(String title, String description, int publicationYear, Set<Author> authors, BigDecimal price) { - this.title = Objects.requireNonNull(title); - this.description = Objects.requireNonNull(description); - this.publicationYear = publicationYear; - this.authors = Objects.requireNonNull(authors); - this.price = price; - } - // Getter/Setter/equals/hashCode/toString omitted -} ----- - -Serialization of user types uses a library based on protobuf, -called https://github.com/infinispan/protostream[Protostream]. - -[TIP] -==== -Infinispan caches can store keys and values in different encodings, but we recommend using https://developers.google.com/protocol-buffers[Protocol Buffers (Protobuf)]. - -For more information see our https://infinispan.org/docs/stable/titles/encoding/encoding.html[Cache Encoding and Marshalling] guide. -==== - - -=== Annotation-based Serialization - -This can be done automatically by adding Protostream annotations to your user classes. -In addition, a single annotated initializer interface is required, which controls how -the supporting classes are generated.
- -Here is an example of how the preceding classes should be changed: - -.Author.java -[source,java] ----- - @ProtoFactory - public Author(String name, String surname) { - this.name = Objects.requireNonNull(name); - this.surname = Objects.requireNonNull(surname); - } - - @ProtoField(number = 1) - public String getName() { - return name; - } - - @ProtoField(number = 2) - public String getSurname() { - return surname; - } ----- - -.Book.java -[source,java] ----- - @ProtoFactory - public Book(String title, String description, int publicationYear, Set<Author> authors) { - this.title = Objects.requireNonNull(title); - this.description = Objects.requireNonNull(description); - this.publicationYear = publicationYear; - this.authors = Objects.requireNonNull(authors); - } - - @ProtoField(number = 1) - public String getTitle() { - return title; - } - - @ProtoField(number = 2) - public String getDescription() { - return description; - } - - @ProtoField(number = 3, defaultValue = "-1") - public int getPublicationYear() { - return publicationYear; - } - - @ProtoField(number = 4) - public Set<Author> getAuthors() { - return authors; - } ----- - -If your classes have only mutable fields, then the `ProtoFactory` annotation -is not required, assuming your class has a no-arg constructor. - -Then all that is required is a very simple `GeneratedSchema` interface with an annotation -on it to specify configuration settings: - -.BookStoreSchema.java -[source,java] ----- -import org.infinispan.protostream.GeneratedSchema; -import org.infinispan.protostream.annotations.AutoProtoSchemaBuilder; -import org.infinispan.protostream.types.java.math.BigDecimalAdapter; - -@AutoProtoSchemaBuilder(includeClasses = { Book.class, Author.class, BigDecimalAdapter.class }, schemaPackageName = "book_sample") -interface BookStoreSchema extends GeneratedSchema { -} ----- - -[TIP] -Protostream provides default Protobuf mappers for commonly used types such as `BigDecimal`, included in the `org.infinispan.protostream.types` package.
- -In this case the marshallers and schemas for the included classes are generated automatically and -placed in the schema package. The package does not have to be provided, but if you use Infinispan query capabilities, you must know the generated package. - -NOTE: In Quarkus the `schemaFileName` and `schemaFilePath` attributes should NOT be set on the `AutoProtoSchemaBuilder` annotation. Setting either attribute causes native runtime errors. - -=== Custom serialization - -The previous method is recommended whenever you can annotate your classes. -Unfortunately, you may not be able to annotate all classes you will put in the -cache. In this case you must define the schema and create your own marshallers -yourself. - -Protobuf schema:: You can supply a protobuf schema in one of two ways. -. Proto File - + -You can put the `.proto` file in the `META-INF` directory of the project. These files will -automatically be picked up at initialization time. -+ -.library.proto ----- -package book_sample; - -message Book { - required string title = 1; - required string description = 2; - required int32 publicationYear = 3; // no native Date type available in Protobuf - repeated Author authors = 4; - required double price = 5; // no native BigDecimal type available in Protobuf -} - -message Author { - required string name = 1; - required string surname = 2; -} ----- -. In Code - + -Or you can define the proto schema directly in user code by defining a produced bean of type -`org.infinispan.protostream.FileDescriptorSource`.
-+ -[source,java] ----- - @Produces - FileDescriptorSource bookProtoDefinition() { - return FileDescriptorSource.fromString("library.proto", "package book_sample;\n" + - "\n" + - "message Book {\n" + - " required string title = 1;\n" + - " required string description = 2;\n" + - " required int32 publicationYear = 3; // no native Date type available in Protobuf\n" + - "\n" + - " repeated Author authors = 4;\n" + - "\n" + - " required double price = 5; // no native BigDecimal type available in Protobuf\n" + - "}\n" + - "\n" + - "message Author {\n" + - " required string name = 1;\n" + - " required string surname = 2;\n" + - "}"); - } ----- -User Marshaller:: -The last thing to do is to provide an `org.infinispan.protostream.MessageMarshaller` implementation -for each user class defined in the proto schema. This class is then provided via `@Produces` in a similar -fashion to the code-based proto schema definition above. -+ -Here are the marshaller classes for our Author and Book classes. -+ -NOTE: The type name must match the `<protobuf package>.<message name>` exactly!
-+ -.AuthorMarshaller.java -[source,java] ----- -public class AuthorMarshaller implements MessageMarshaller<Author> { - - @Override - public String getTypeName() { - return "book_sample.Author"; - } - - @Override - public Class<? extends Author> getJavaClass() { - return Author.class; - } - - @Override - public void writeTo(ProtoStreamWriter writer, Author author) throws IOException { - writer.writeString("name", author.getName()); - writer.writeString("surname", author.getSurname()); - } - - @Override - public Author readFrom(ProtoStreamReader reader) throws IOException { - String name = reader.readString("name"); - String surname = reader.readString("surname"); - return new Author(name, surname); - } -} ----- -+ -.BookMarshaller.java -[source,java] ----- -public class BookMarshaller implements MessageMarshaller<Book> { - - @Override - public String getTypeName() { - return "book_sample.Book"; - } - - @Override - public Class<? extends Book> getJavaClass() { - return Book.class; - } - - @Override - public void writeTo(ProtoStreamWriter writer, Book book) throws IOException { - writer.writeString("title", book.getTitle()); - writer.writeString("description", book.getDescription()); - writer.writeInt("publicationYear", book.getPublicationYear()); - writer.writeCollection("authors", book.getAuthors(), Author.class); - writer.writeDouble("price", book.getPrice().doubleValue()); - } - - @Override - public Book readFrom(ProtoStreamReader reader) throws IOException { - String title = reader.readString("title"); - String description = reader.readString("description"); - int publicationYear = reader.readInt("publicationYear"); - Set<Author> authors = reader.readCollection("authors", new HashSet<>(), Author.class); - BigDecimal price = BigDecimal.valueOf(reader.readDouble("price")); - return new Book(title, description, publicationYear, authors, price); - } -} ----- -+ -And you pass the marshaller by defining the following: -+ -[source,java] ----- - @Produces - MessageMarshaller authorMarshaller() { - return new
AuthorMarshaller(); - } - - @Produces - MessageMarshaller bookMarshaller() { - return new BookMarshaller(); - } ----- -NOTE: The above produced Marshaller method MUST return `MessageMarshaller` without types or else it will not be found. - -== Dependency Injection - -As you saw above we support the user injecting Marshaller configuration. You can do the inverse with -the Infinispan client extension providing injection for `RemoteCacheManager` and `RemoteCache` objects. -There is one global `RemoteCacheManager` that takes all of the configuration -parameters setup in the above sections. - -It is very simple to inject these components. All you need to do is to add the Inject annotation to -the field, constructor or method. In the below code we utilize field and constructor injection. - -.SomeClass.java -[source,java] ----- - @Inject SomeClass(RemoteCacheManager remoteCacheManager) { - this.remoteCacheManager = remoteCacheManager; - } - - @Inject @Remote("myCache") - RemoteCache cache; - - RemoteCacheManager remoteCacheManager; ----- - -If you notice the `RemoteCache` declaration has an additional optional annotation named `Remote`. -This is a qualifier annotation allowing you to specify which named cache that will be injected. This -annotation is not required and if it is not supplied, the default cache will be injected. - -NOTE: Other types may be supported for injection, please see other sections for more information - -=== Registering Protobuf Schemas with Infinispan Server -You need to register the generated Protobuf schemas with Infinispan Server to perform queries or convert from -`Protobuf` to other media types such as `JSON`. - -[TIP] -You can check the schemas that exist under the `Schemas` tab by logging into -Infinispan Console at `http://localhost:11222` - -By default Protobuf schemas generated this way will be registered by this extension when the client first connects. 
-However, you might need to handle registration manually, since a schema may evolve over time when used in
-production. You can therefore disable automatic registration by setting
-`quarkus.infinispan-client.use-schema-registration` to `false`.
-
-To register the schemas manually,
-use the https://infinispan.org/docs/infinispan-operator/master/operator.html[Infinispan Operator]
-for Kubernetes deployments, the Infinispan Console, the
-https://infinispan.org/docs/stable/titles/rest/rest.html#rest_v2_protobuf_schemas[REST API] or the
-https://infinispan.org/docs/stable/titles/encoding/encoding.html#registering-sci-remote-caches_marshalling[Hot Rod Java Client].
-
-
-== Querying
-
-The Infinispan client supports both indexed and non-indexed querying, as long as the
-`ProtoStreamMarshaller` is configured as described above. This allows you to query based on the
-properties defined in the proto schema.
-
-Querying builds upon the proto definitions you configure when setting up the `ProtoStreamMarshaller`.
-Either method of serialization above automatically registers the schema with the server at
-startup, meaning that you automatically gain the ability to query objects stored in the
-remote Infinispan Server.
-
-You can read more about https://infinispan.org/docs/stable/titles/developing/developing.html#creating_ickle_queries-querying[querying] in the Infinispan documentation.
-
-You can use either the Query DSL or the Ickle Query language with the Quarkus Infinispan client
-extension.
-
-== Counters
-
-Infinispan also has a notion of counters, and the Quarkus Infinispan client supports them out of
-the box.
-
-The Quarkus Infinispan client extension allows the `CounterManager` to be injected
-directly. All you need to do is annotate your field, constructor or method,
-and you can then use counters as you would normally.
-
-[source,java]
-----
-@Inject
-CounterManager counterManager;
-----
-
-You can read more about https://infinispan.org/docs/stable/titles/developing/developing.html#clustered_counters[clustered counters] in the Infinispan documentation.
-
-== Near Caching
-
-Near caching is disabled by default, but you can enable it by setting the
-`quarkus.infinispan-client.near-cache-max-entries` configuration property to a value greater than 0. You can also configure
-a regular expression so that only a subset of caches have near caching applied, through the
-`quarkus.infinispan-client.near-cache-name-pattern` attribute.
-
-== Encryption
-
-Encryption at this point requires additional steps to get working.
-
-The first step is to configure the `hotrod-client.properties` file to point to your truststore
-and/or keystore. This is further detailed https://infinispan.org/docs/stable/titles/hotrod_java/hotrod_java.html#hotrod_encryption[here].
-
-The Infinispan client extension enables SSL/TLS by default. You can read more about this
-at xref:native-and-ssl.adoc[Using SSL With Native Executables].
-
-== Additional Features
-
-The Infinispan client has additional features that are not mentioned here. Such features have not been
-tested in a Quarkus environment, so they may or may not work. Please let us
-know if you need them added!
-
-[[dev-services]]
-== Dev Services for Infinispan
-
-When you use the infinispan-client extension in dev mode or in tests, Quarkus automatically starts an Infinispan server and configures your application.
-
-=== Enabling / Disabling Dev Services for Infinispan
-
-Dev Services for Infinispan is automatically enabled unless:
-
-- `quarkus.infinispan-client.devservices.enabled` is set to `false`
-- the `quarkus.infinispan-client.server-list` is configured
-
-Dev Services for Infinispan relies on Docker to start the server.
-If your environment does not support Docker, you will need to start the server manually, or connect to an already running server.
-You can configure the server address using `quarkus.infinispan-client.server-list`.
-
-== Shared server
-
-Quarkus will share the Infinispan server if you have multiple applications running in dev mode.
-Dev Services for Infinispan implements a _service discovery_ mechanism so that your multiple Quarkus applications running in _dev_ mode can share a single server.
-
-NOTE: Dev Services for Infinispan starts the container with the `quarkus-dev-service-infinispan` label, which is used to identify the container.
-
-If you need multiple (shared) Infinispan servers, you can configure the `quarkus.infinispan-client.devservices.service-name` attribute and indicate the server name.
-It looks for a container with the same value, or starts a new one if none can be found.
-The default service name is `infinispan`.
-
-Sharing is enabled by default in dev mode, but disabled in test mode.
-You can disable sharing with `quarkus.infinispan-client.devservices.shared=false`.
-
-== Setting the port
-
-By default, Dev Services for Infinispan picks a random port and configures the application.
-You can set the port by configuring the `quarkus.infinispan-client.devservices.port` property.
-
-== Testing helpers
-
-To start an Infinispan server for your unit tests, Quarkus provides a `QuarkusTestResourceLifecycleManager` that relies on the link:https://infinispan.org/docs/stable/titles/hotrod_java/hotrod_java.html#junit-testing[Infinispan Server Test Container].
-
-- `io.quarkus.test.infinispan.client.InfinispanTestResource` will start a single instance on port 11222 with user 'admin' and password 'password'.
-
-To use it, you need to add the `io.quarkus:quarkus-test-infinispan-client` dependency to your pom.xml.
-
-For more information about the usage of a `QuarkusTestResourceLifecycleManager`, please read xref:getting-started-testing.adoc#quarkus-test-resource[Quarkus test resource].
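-To put the pieces together, the dependency mentioned above looks like this in the `pom.xml` (the coordinates come from this section; the `test` scope is a reasonable default):
-
-[source,xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-test-infinispan-client</artifactId>
-    <scope>test</scope>
-</dependency>
-----
-
-A test class can then opt in with `@QuarkusTestResource(InfinispanTestResource.class)` and connect to `localhost:11222` with the credentials listed above.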
- -== Configuration Reference - -include::{generated-dir}/config/quarkus-infinispan-client.adoc[opts=optional, leveloffset=+1] diff --git a/_versions/2.7/guides/javascript/asciidoc-tabs.js b/_versions/2.7/guides/javascript/asciidoc-tabs.js deleted file mode 100644 index ce75a82927d..00000000000 --- a/_versions/2.7/guides/javascript/asciidoc-tabs.js +++ /dev/null @@ -1,86 +0,0 @@ -// code originally coming from: -// https://github.com/bmuschko/asciidocj-tabbed-code-extension -// adapted to work with jQuery - -$(document).ready(function() { - function addBlockSwitches() { - $('.listingblock.primary, .sidebarblock.primary').each(function() { - var primary = $(this); - createSwitchItem(primary, createBlockSwitch(primary)).item.addClass("selected"); - if (primary.children('.title').length) { - primary.children('.title').remove(); - } else { - primary.children('.content').first().children('.title').remove(); - } - getAllSyncClasses(primary).forEach(className => primary.removeClass(className)); - }); - - $('.listingblock.secondary, .sidebarblock.secondary').each(function(idx, node) { - var secondary = $(node); - var primary = findPrimary(secondary); - var switchItem = createSwitchItem(secondary, primary.children('.asciidoc-tabs-switch')); - switchItem.content.addClass('asciidoc-tabs-hidden'); - findPrimary(secondary).append(switchItem.content); - secondary.remove(); - }); - } - - function createBlockSwitch(primary) { - var blockSwitch = $('
<div class="asciidoc-tabs-switch"></div>');
-        primary.prepend(blockSwitch);
-        return blockSwitch;
-    }
-
-    function findPrimary(secondary) {
-        return secondary.prev('.primary');
-    }
-
-    function getSyncClasses(element) {
-        return element.attr('class').replaceAll(/\s+/g, ' ').split(' ').filter(className => className.startsWith('asciidoc-tabs-sync'));
-    }
-
-    function getTargetSyncClasses(element) {
-        return element.attr('class').replaceAll(/\s+/g, ' ').split(' ').filter(className => className.startsWith('asciidoc-tabs-target-sync'));
-    }
-
-    function getAllSyncClasses(element) {
-        return element.attr('class').replaceAll(/\s+/g, ' ').split(' ').filter(className => className.startsWith('asciidoc-tabs-sync') || className.startsWith('asciidoc-tabs-target-sync'));
-    }
-
-    function triggerSyncEvent(element) {
-        var syncClasses = getSyncClasses(element);
-        if (syncClasses.length > 0) {
-            $('.asciidoc-tabs-switch--item.' + syncClasses[0] + ':not(.selected)').not(element).click();
-            $('.asciidoc-tabs-switch--item.' + syncClasses[0].replace('asciidoc-tabs-sync', 'asciidoc-tabs-target-sync') + ':not(.selected)').not(element).click();
-        }
-        var targetSyncClasses = getTargetSyncClasses(element);
-        for (const targetSyncClass of targetSyncClasses) {
-            $('.asciidoc-tabs-switch--item.' + targetSyncClass + ':not(.selected)').not(element).click();
-        }
-    }
-
-    function createSwitchItem(block, blockSwitch) {
-        var blockName;
-        if (block.children('.title').length) {
-            blockName = block.children('.title').text();
-        } else {
-            blockName = block.children('.content').first().children('.title').text();
-            block.children('.content').first().children('.title').remove();
-        }
-        var allSyncClasses = getAllSyncClasses(block);
-        var content = block.children('.content').first().append(block.next('.colist'));
-        var item = $('
<div class="asciidoc-tabs-switch--item ' + allSyncClasses.join(' ') + '">' + blockName + '</div>
'); - item.on('click', '', content, function(e) { - $(this).addClass('selected'); - $(this).siblings().removeClass('selected'); - e.data.siblings('.content').addClass('asciidoc-tabs-hidden'); - e.data.removeClass('asciidoc-tabs-hidden'); - - triggerSyncEvent($(this)); - }); - blockSwitch.append(item); - return {'item': item, 'content': content}; - } - - addBlockSwitches(); -}); diff --git a/_versions/2.7/guides/javascript/config.js b/_versions/2.7/guides/javascript/config.js deleted file mode 100644 index 22e453b5b85..00000000000 --- a/_versions/2.7/guides/javascript/config.js +++ /dev/null @@ -1,314 +0,0 @@ -jQuery(function(){ -/* - * SEARCH - */ -var inputs = {}; -var tables = document.querySelectorAll("table.configuration-reference"); -var typingTimer; - -if(tables){ - var idx = 0; - for (var table of tables) { - var caption = table.previousElementSibling; - if (table.classList.contains('searchable')) { // activate search engine only when needed - var input = document.createElement("input"); - input.setAttribute("type", "search"); - input.setAttribute("placeholder", "FILTER CONFIGURATION"); - input.id = "config-search-"+(idx++); - caption.children.item(0).appendChild(input); - input.addEventListener("keyup", initiateSearch); - input.addEventListener("input", initiateSearch); - var descriptions = table.querySelectorAll(".description"); - if(descriptions){ - var heights = new Array(descriptions.length); - var h = 0; - for (description of descriptions){ - heights[h++] = description.offsetHeight; - } - var shadowTable = table.cloneNode(true); - var shadowDescriptions = shadowTable.querySelectorAll(".description"); - h = 0; - for (shadowDescription of shadowDescriptions){ - makeCollapsible(shadowDescription, heights[h++]); - } - table.parentNode.replaceChild(shadowTable, table); - table = shadowTable; - } - inputs[input.id] = {"table": table}; - } - - var rowIdx = 0; - for (var row of table.querySelectorAll("table.configuration-reference > tbody > tr")) { - var heads 
= row.querySelectorAll("table.configuration-reference > tbody > tr > th"); - if(!heads || heads.length == 0){ - // mark even rows - if(++rowIdx % 2){ - row.classList.add("odd"); - }else{ - row.classList.remove("odd"); - } - }else{ - // reset count at each section - rowIdx = 0; - } - } - } -} - -function initiateSearch(event){ - // only start searching after the user stopped typing for 300ms, since we can't abort - // running tasks, we don't want to search three times for "foo" (one letter at a time) - if(typingTimer) - clearTimeout(typingTimer); - typingTimer = setTimeout(() => search(event.target), 300) -} - -function highlight(element, text){ - var iter = document.createNodeIterator(element, NodeFilter.SHOW_TEXT, null); - - while (n = iter.nextNode()){ - var parent = n.parentNode; - var elementText = n.nodeValue; - if(elementText == undefined) - continue; - var elementTextLC = elementText.toLowerCase(); - var index = elementTextLC.indexOf(text); - if(index != -1 - && acceptTextForSearch(n)){ - var start = 0; - var fragment = document.createDocumentFragment() - // we use the DOM here to avoid < and such being parsed as elements by jQuery when replacing content - do{ - // text before - fragment.appendChild(document.createTextNode(elementText.substring(start, index))); - // highlighted text - start = index + text.length; - var hlText = document.createTextNode(elementText.substring(index, start)); - var hl = document.createElement("span"); - hl.appendChild(hlText); - hl.setAttribute("class", "configuration-highlight"); - fragment.appendChild(hl); - }while((index = elementTextLC.indexOf(text, start)) != -1); - // text after - n.nodeValue = elementText.substring(start); - // replace - parent.insertBefore(fragment, n); - } - } - iter.detach(); -} - -function clearHighlights(table){ - for (var span of table.querySelectorAll("span.configuration-highlight")) { - var parent = span.parentNode; - var prev = span.previousSibling; - var next = span.nextSibling; - var target; - 
if(prev && prev.nodeType == Node.TEXT_NODE){ - target = prev; - } - var text = span.childNodes.item(0).nodeValue; - if(next && next.nodeType == Node.TEXT_NODE){ - text += next.nodeValue; - parent.removeChild(next); - } - if(target){ - target.nodeValue += text; - }else{ - target = document.createTextNode(text); - parent.insertBefore(target, span); - } - parent.removeChild(span); - } -} - -function findText(row, search){ - var iter = document.createNodeIterator(row, NodeFilter.SHOW_TEXT, null); - - while (n = iter.nextNode()){ - var elementText = n.nodeValue; - if(elementText == undefined) - continue; - if(elementText.toLowerCase().indexOf(search) != -1 - // check that it's not decoration - && acceptTextForSearch(n)){ - iter.detach(); - return true; - } - } - iter.detach(); - return false; -} - -function acceptTextForSearch(n){ - var classes = n.parentNode.classList; - return !classes.contains("link-collapsible") - && !classes.contains("description-label"); -} - -function getShadowTable(input){ - if(!inputs[input.id].shadowTable){ - inputs[input.id].shadowTable = inputs[input.id].table.cloneNode(true); - reinstallClickHandlers(inputs[input.id].shadowTable); - } - return inputs[input.id].shadowTable; -} - -function reinstallClickHandlers(table){ - var descriptions = table.querySelectorAll(".description"); - if(descriptions){ - for (descDiv of descriptions){ - if(!descDiv.classList.contains("description-collapsed")) - continue; - var content = descDiv.parentNode; - var td = getAncestor(descDiv, "td"); - var row = td.parentNode; - var decoration = content.lastElementChild; - var iconDecoration = decoration.children.item(0); - var collapsibleSpan = decoration.children.item(1); - var collapsibleHandler = makeCollapsibleHandler(descDiv, td, row, - collapsibleSpan, - iconDecoration); - - row.addEventListener("click", collapsibleHandler); - } - } -} - -function swapShadowTable(input){ - var currentTable = inputs[input.id].table; - var shadowTable = 
inputs[input.id].shadowTable; - currentTable.parentNode.replaceChild(shadowTable, currentTable); - inputs[input.id].table = shadowTable; - inputs[input.id].shadowTable = currentTable; -} - -function search(input){ - var search = input.value.trim().toLowerCase(); - var lastSearch = inputs[input.id].lastSearch; - if(search == lastSearch) - return; - // work on shadow table - var table = getShadowTable(input); - - applySearch(table, search, true); - - inputs[input.id].lastSearch = search; - // swap tables - swapShadowTable(input); -} - -function applySearch(table, search, autoExpand){ - // clear highlights - clearHighlights(table); - var lastSectionHeader = null; - var idx = 0; - for (var row of table.querySelectorAll("table.configuration-reference > tbody > tr")) { - var heads = row.querySelectorAll("table.configuration-reference > tbody > tr > th"); - if(!heads || heads.length == 0){ - // mark even rows - if(++idx % 2){ - row.classList.add("odd"); - }else{ - row.classList.remove("odd"); - } - }else{ - // reset count at each section - idx = 0; - } - if(!search){ - row.style.removeProperty("display"); - // recollapse when searching is over - if(autoExpand - && row.classList.contains("row-collapsible") - && !row.classList.contains("row-collapsed")) - row.click(); - }else{ - if(heads && heads.length > 0){ - // keep the column header with no highlight, but start hidden - lastSectionHeader = row; - row.style.display = "none"; - }else if(findText(row, search)){ - row.style.removeProperty("display"); - // expand if shown - if(autoExpand && row.classList.contains("row-collapsed")) - row.click(); - highlight(row, search); - if(lastSectionHeader){ - lastSectionHeader.style.removeProperty("display"); - // avoid showing it more than once - lastSectionHeader = null; - } - }else{ - row.style.display = "none"; - } - } - } -} - -function getAncestor(element, name){ - for ( ; element && element !== document; element = element.parentNode ) { - if ( element.localName == name ) - return 
element; - } - return null; -} - -/* - * COLLAPSIBLE DESCRIPTION - */ -function makeCollapsible(descDiv, descHeightLong){ - if (descHeightLong > 25) { - var td = getAncestor(descDiv, "td"); - var row = td.parentNode; - var iconDecoration = document.createElement("i"); - descDiv.classList.add('description-collapsed'); - iconDecoration.classList.add('fa', 'fa-chevron-down'); - - var descDecoration = document.createElement("div"); - descDecoration.classList.add('description-decoration'); - descDecoration.appendChild(iconDecoration); - - var collapsibleSpan = document.createElement("span"); - collapsibleSpan.appendChild(document.createTextNode("Show more")); - descDecoration.appendChild(collapsibleSpan); - - var collapsibleHandler = makeCollapsibleHandler(descDiv, td, row, - collapsibleSpan, - iconDecoration); - - var parent = descDiv.parentNode; - - parent.appendChild(descDecoration); - row.classList.add("row-collapsible", "row-collapsed"); - row.addEventListener("click", collapsibleHandler); - } - -}; - -function makeCollapsibleHandler(descDiv, td, row, - collapsibleSpan, - iconDecoration) { - - return function(event) { - var target = event.target; - if( (target.localName == 'a' || getAncestor(target, "a"))) { - return; - } - - var isCollapsed = descDiv.classList.contains('description-collapsed'); - if( isCollapsed ) { - collapsibleSpan.childNodes.item(0).nodeValue = 'Show less'; - iconDecoration.classList.replace('fa-chevron-down', 'fa-chevron-up'); - } - else { - collapsibleSpan.childNodes.item(0).nodeValue = 'Show more'; - iconDecoration.classList.replace('fa-chevron-up', 'fa-chevron-down'); - } - descDiv.classList.toggle('description-collapsed'); - descDiv.classList.toggle('description-expanded'); - row.classList.toggle('row-collapsed'); - }; -} - -}); diff --git a/_versions/2.7/guides/jms.adoc b/_versions/2.7/guides/jms.adoc deleted file mode 100644 index b25a8e65f45..00000000000 --- a/_versions/2.7/guides/jms.adoc +++ /dev/null @@ -1,391 +0,0 @@ -//// -This 
guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using JMS
-include::./attributes.adoc[]
-:extension-status: preview
-
-
-This guide demonstrates how your Quarkus application can use JMS messaging via the
-Apache Qpid JMS AMQP client, or alternatively the Apache ActiveMQ Artemis JMS client.
-
-include::./status-include.adoc[]
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-* A running Artemis server, or Docker to start one
-
-== Architecture
-
-In this guide, we are going to generate (random) prices in one component.
-These prices are written to a queue (`prices`) using a JMS client.
-Another component reads from the `prices` queue and stores the latest price.
-The data can be fetched from a browser via a JAX-RS resource.
-
-
-The guide can be used either via the Apache Qpid JMS AMQP client, as detailed immediately below, or
-alternatively with the Apache ActiveMQ Artemis JMS client, given some different configuration
-as <<artemis-jms,detailed at the end of this guide>>.
-
-[#qpid-jms-amqp]
-== Qpid JMS - AMQP
-
-In the detailed steps below we will use the https://qpid.apache.org/components/jms/[Apache Qpid JMS]
-client via the https://github.com/amqphub/quarkus-qpid-jms/[Quarkus Qpid JMS extension]. Qpid JMS
-uses the AMQP 1.0 ISO standard as its wire protocol, allowing it to be used with a variety of
-AMQP 1.0 servers and services such as ActiveMQ Artemis, ActiveMQ 5, Qpid Broker-J, Qpid Dispatch router,
-Azure Service Bus, and more.
-
-=== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone https://github.com/amqphub/quarkus-qpid-jms-quickstart.git`,
-or download an https://github.com/amqphub/quarkus-qpid-jms-quickstart/archive/main.zip[archive].
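-If you only want to try the finished application, the whole sequence boils down to the following commands (this assumes the quickstart ships the Maven wrapper, as Quarkus quickstarts usually do, and that a broker is already running as described in the next sections):
-
-[source,bash]
-----
-git clone https://github.com/amqphub/quarkus-qpid-jms-quickstart.git
-cd quarkus-qpid-jms-quickstart
-./mvnw quarkus:dev
-----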
-
-=== Creating the Maven Project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: jms-quickstart
-:create-app-extensions: resteasy,qpid-jms
-include::includes/devtools/create-app.adoc[]
-
-This command generates a new project importing the quarkus-qpid-jms extension:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>org.amqphub.quarkus</groupId>
-    <artifactId>quarkus-qpid-jms</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("org.amqphub.quarkus:quarkus-qpid-jms")
-----
-
-[#starting-the-broker]
-=== Starting the broker
-
-Then, we need an AMQP broker. In this case we will use an Apache ActiveMQ Artemis server.
-You can follow the instructions from the https://activemq.apache.org/components/artemis/[Apache Artemis web site] or start a broker via Docker using the https://artemiscloud.io/[ArtemisCloud] container image:
-
-[source,bash]
-----
-docker run -it --rm -p 8161:8161 -p 61616:61616 -p 5672:5672 -e AMQ_USER=quarkus -e AMQ_PASSWORD=quarkus quay.io/artemiscloud/activemq-artemis-broker:0.1.4
-----
-
-=== The price producer
-
-Create the `src/main/java/org/acme/jms/PriceProducer.java` file, with the following content:
-
-[source, java]
-----
-package org.acme.jms;
-
-import java.util.Random;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.enterprise.event.Observes;
-import javax.inject.Inject;
-import javax.jms.ConnectionFactory;
-import javax.jms.JMSContext;
-
-import io.quarkus.runtime.ShutdownEvent;
-import io.quarkus.runtime.StartupEvent;
-
-/**
- * A bean producing random prices every 5 seconds and sending them to the prices JMS queue.
- */ -@ApplicationScoped -public class PriceProducer implements Runnable { - - @Inject - ConnectionFactory connectionFactory; - - private final Random random = new Random(); - private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(); - - void onStart(@Observes StartupEvent ev) { - scheduler.scheduleWithFixedDelay(this, 0L, 5L, TimeUnit.SECONDS); - } - - void onStop(@Observes ShutdownEvent ev) { - scheduler.shutdown(); - } - - @Override - public void run() { - try (JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE)) { - context.createProducer().send(context.createQueue("prices"), Integer.toString(random.nextInt(100))); - } - } -} ----- - -=== The price consumer - -The price consumer reads the prices from JMS, and stores the last one. -Create the `src/main/java/org/acme/jms/PriceConsumer.java` file with the following content: - -[source, java] ----- -package org.acme.jms; - -import java.util.concurrent.ExecutorService; -import java.util.concurrent.Executors; - -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.event.Observes; -import javax.inject.Inject; -import javax.jms.ConnectionFactory; -import javax.jms.JMSConsumer; -import javax.jms.JMSContext; -import javax.jms.JMSException; -import javax.jms.Message; - -import io.quarkus.runtime.ShutdownEvent; -import io.quarkus.runtime.StartupEvent; - -/** - * A bean consuming prices from the JMS queue. 
- */
-@ApplicationScoped
-public class PriceConsumer implements Runnable {
-
-    @Inject
-    ConnectionFactory connectionFactory;
-
-    private final ExecutorService scheduler = Executors.newSingleThreadExecutor();
-
-    private volatile String lastPrice;
-
-    public String getLastPrice() {
-        return lastPrice;
-    }
-
-    void onStart(@Observes StartupEvent ev) {
-        scheduler.submit(this);
-    }
-
-    void onStop(@Observes ShutdownEvent ev) {
-        scheduler.shutdown();
-    }
-
-    @Override
-    public void run() {
-        try (JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE)) {
-            JMSConsumer consumer = context.createConsumer(context.createQueue("prices"));
-            while (true) {
-                Message message = consumer.receive();
-                if (message == null) return;
-                lastPrice = message.getBody(String.class);
-            }
-        } catch (JMSException e) {
-            throw new RuntimeException(e);
-        }
-    }
-}
-----
-
-=== The price resource
-
-Finally, let's create a simple JAX-RS resource to show the last price.
-Create the `src/main/java/org/acme/jms/PriceResource.java` file with the following content:
-
-[source, java]
-----
-package org.acme.jms;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-/**
- * A simple resource showing the last price.
- */
-@Path("/prices")
-public class PriceResource {
-
-    @Inject
-    PriceConsumer consumer;
-
-    @GET
-    @Path("last")
-    @Produces(MediaType.TEXT_PLAIN)
-    public String last() {
-        return consumer.getLastPrice();
-    }
-}
-----
-
-=== The HTML page
-
-Final touch, the HTML page that periodically fetches the last price and displays it.
-
-Create the `src/main/resources/META-INF/resources/prices.html` file, with the following content:
-
-[source, html]
-----
-<!DOCTYPE html>
-<html lang="en">
-<head>
-    <meta charset="UTF-8">
-    <title>Prices</title>
-</head>
-<body>
-<div>
-    <h2>Last price</h2>
-    <p>The last price is <strong id="last-price">N/A</strong> €.</p>
-</div>
-<script>
-    // Poll the JAX-RS resource every two seconds and update the page.
-    setInterval(function () {
-        fetch("/prices/last")
-            .then(function (response) { return response.text(); })
-            .then(function (price) {
-                document.getElementById("last-price").textContent = price ? price : "N/A";
-            });
-    }, 2000);
-</script>
-</body>
-</html>
-----
-
-Nothing spectacular here. On each fetch, it updates the page.
-
-=== Configure the Qpid JMS properties
-
-We need to configure the Qpid JMS properties used by the extension when
-injecting the ConnectionFactory.
-
-This is done in the `src/main/resources/application.properties` file.
-
-[source,properties]
-----
-# Configures the Qpid JMS properties.
-quarkus.qpid-jms.url=amqp://localhost:5672
-quarkus.qpid-jms.username=quarkus
-quarkus.qpid-jms.password=quarkus
-----
-
-More detail about the configuration is available in the https://github.com/amqphub/quarkus-qpid-jms#configuration[Quarkus Qpid JMS] documentation.
-
-[#get-it-running]
-=== Get it running
-
-If you followed the instructions, you should have the Artemis server running.
-Then, you just need to run the application using:
-
-include::includes/devtools/dev.adoc[]
-
-Open `http://localhost:8080/prices.html` in your browser.
-
-=== Running Native
-
-You can build the native executable with:
-
-include::includes/devtools/build-native.adoc[]
-
-Or, if you don't have GraalVM installed, you can instead use Docker to build the native executable using:
-
-include::includes/devtools/build-native-container.adoc[]
-
-and then run with:
-
-[source,bash]
-----
-./target/jms-quickstart-1.0.0-SNAPSHOT-runner
-----
-
-Open `http://localhost:8080/prices.html` in your browser.
-
-'''
-
-
-[#artemis-jms]
-== Artemis JMS
-
-The steps above detailed using the Qpid JMS AMQP client; however, the guide can also be used
-with the Artemis JMS client. Many of the individual steps are exactly as previously
-<<qpid-jms-amqp,detailed above>>. The individual component code is the same.
-The only differences are in the dependency for the initial project creation, and the
-configuration properties used. These changes are detailed below and should be substituted
-for the equivalent step during the sequence above.
-
-=== Solution
-
-You can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The Artemis JMS solution is located in the `jms-quickstart` {quickstarts-tree-url}/jms-quickstart[directory].
-
-=== Creating the Maven Project
-
-Create a new project with the following command:
-
-:create-app-artifact-id: jms-quickstart
-:create-app-extensions: resteasy,artemis-jms
-include::includes/devtools/create-app.adoc[]
-
-This creates a new project importing the quarkus-artemis-jms extension:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-artemis-jms</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-artemis-jms")
-----
-
-With the project created, you can resume from <<starting-the-broker,Starting the broker>> in the detailed steps above
-and proceed until configuring the `application.properties` file, when you should use the Artemis
-configuration below instead.
-
-=== Configure the Artemis properties
-
-We need to configure the Artemis connection properties.
-This is done in the `src/main/resources/application.properties` file.
-
-[source,properties]
-----
-# Configures the Artemis properties.
-quarkus.artemis.url=tcp://localhost:61616
-quarkus.artemis.username=quarkus
-quarkus.artemis.password=quarkus
-----
-
-With the Artemis properties configured, you can resume the steps above from <<get-it-running,Get it running>>.
-
-=== Configuration Reference
-
-To know more about how to configure the Artemis JMS client, have a look at https://quarkiverse.github.io/quarkiverse-docs/quarkus-artemis/dev/index.html[the documentation of the extension].
-
diff --git a/_versions/2.7/guides/jreleaser.adoc b/_versions/2.7/guides/jreleaser.adoc
deleted file mode 100644
index bc93ce6095c..00000000000
--- a/_versions/2.7/guides/jreleaser.adoc
+++ /dev/null
@@ -1,824 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Packaging And Releasing With JReleaser
-
-include::./attributes.adoc[]
-:jreleaser-version: 0.9.1
-
-:numbered:
-:sectnums:
-:sectnumlevels: 4
-
-
-This guide covers packaging and releasing CLI applications using the link:https://jreleaser.org[JReleaser] tool.
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-* a GitHub account and a GitHub Personal Access token
-
-== Bootstrapping the project
-
-First, we need a project that defines a CLI application. We recommend using the xref:picocli.adoc[PicoCLI] extension.
-This can be done using the following command:
-
-:create-cli-artifact-id: app
-:create-cli-code:
-include::includes/devtools/create-cli.adoc[]
-
-This command initializes the file structure and the minimum set of required files in the project:
-
-[source]
-----
-.
-├── README.md
-├── mvnw
-├── mvnw.cmd
-├── pom.xml
-└── src
-    └── main
-        ├── docker
-        │   ├── Dockerfile.jvm
-        │   ├── Dockerfile.legacy-jar
-        │   └── Dockerfile.native
-        ├── java
-        │   └── org
-        │       └── acme
-        │           └── GreetingCommand.java
-        └── resources
-            └── application.properties
-----
-
-It will also configure the picocli extension in the `pom.xml`:
-
-[source,xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-picocli</artifactId>
-</dependency>
-----
-
-== Preparing the project for GitHub releases
-
-The project must be hosted at a GitHub repository before we continue. This task can be completed by logging into your
-GitHub account, creating a new repository, and adding the newly created sources to said repository.
Choose the `main` -branch as default to take advantage of conventions and thus configure less in your `pom.xml`. - -You also need a GitHub Personal Access token to be able to post a release to the repository you just created. Follow -the official documentation for -link:https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token[creating a personal access token]. -Store the newly created token at a safe place for future reference. Next, you have the choice of configuring the token -as an environment variable named `JRELEASER_GITHUB_TOKEN` so that the tool can read it. Alternatively you may store -the token at a secure location of your choosing, using a `.yml`, `.toml`, `.json`, or `.properties` file. The default -location is `~/.jreleaser/config[format]`. For example, using the `.yml` format this file could look like: - -[source,yaml] -.~/.jreleaser/config.yml ----- -JRELEASER_GITHUB_TOKEN: ----- - -Alright. Add all sources and create a first commit. You can choose your own conventions for commit messages however you -can get more bang for your buck when using JReleaser if you follow the -link:https://www.conventionalcommits.org/en/v1.0.0/[Conventional Commits] specification. Make your first commit with the -following message "build: Add initial sources". - -== Packaging as a Native Image distribution - -Quarkus already knows how to create a native executable using GraalVM Native Image. The default setup will create a -single executable file following a naming convention. However the JReleaser tool expects a distribution that is, a -conventional file structure packaged as a Zip or Tar file. The file structure must follow this layout: - -[source] ----- -. -├── LICENSE -├── README -└── bin - └── executable ----- - -This structure lets you add all kinds of supporting files required by the executable, such as configuration files, -shell completion scripts, man pages, license, readme, and more. 
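Before wiring this layout into Maven, it is easy to sanity-check by hand. A hedged shell sketch (file names are illustrative, and the native runner binary is assumed to already exist under `target/`):

```shell
# Stage the conventional layout, then package it as a tarball
mkdir -p app-dist/bin
cp LICENSE README app-dist/
cp target/app-1.0.0-runner app-dist/bin/app
tar -czf app-dist.tar.gz app-dist
```

The Maven setup in the next section automates exactly this kind of staging and archiving.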
- -== Creating the distribution - -We can leverage the link:http://maven.apache.org/plugins/maven-assembly-plugin/[maven-assembly-plugin] to create such -a distribution. We'll also make use of the link:https://github.com/trustin/os-maven-plugin[os-maven-plugin] to properly -identify the platform on which this executable can run, adding said platform to the distribution's filename. - -First, let's add the os-maven-plugin to the `pom.xml`. This plugin works as a Maven extension and as such must be added -to the `` section of the file: - -[source,xml] ----- - - - - kr.motd.maven - os-maven-plugin - 1.7.0 - - - ----- - -Next, native executables on Linux and macOS platforms typically do not have a file extension but Windows executables do, so -we need to take care of this when renaming the generated executable. We can also place the generated distributions into -their own directory to avoid cluttering the `target` directory. Thus, let's add a couple of properties to the existing -`` section in the `pom.xml`: - -[source,xml] ----- - -${project.build.directory}/distributions ----- - -Now we configure the maven-assembly-plugin to create a Zip and a Tar file containing the executable and any supporting files -it may need to perform its job. Take special note of the name of the distribution; this is where we make use of the platform -properties detected by the os-maven-plugin. This plugin is configured in its own profile with the `single` goal bound to -the `package` phase. It's done this way to avoid rebuilding the distribution every single time the build is invoked, as we -only need it when we're ready for a release.
- -[source,xml] ----- - - dist - - - - org.apache.maven.plugins - maven-assembly-plugin - 3.3.0 - - false - false - ${project.artifactId}-${project.version}-${os.detected.classifier} - ${distribution.directory} - ${project.build.directory}/assembly/work - - src/main/assembly/assembly.xml - - - - - make-distribution - package - - single - - - - - - - - - dist-windows - - - windows - - - - .exe - - ----- - -Note that two profiles are configured. The `dist` profile configures the assembly plugin, and it's configured in such a way that -it must be activated explicitly by passing `-Pdist` as a command flag. On the other hand the `dist-windows` profile becomes -active automatically when the build is run on a Windows platform. This second profile takes care of setting the value for the -`executable-suffix` property which is required by the assembly descriptor, as shown next: - -[source,xml,subs=macros+] -.src/main/assembly/assembly.xml ----- - - dist - - tar.gz - zip - dir - - - - ${project.build.directory}/${project.artifactId}-${project.version}-runner${executable-suffix} - ./bin - ${project.artifactId}${executable-suffix} - - - ----- - -These are the files created by the assembly plugin when invoking `./mvnw -Pdist package` on macOS: - -[source] ----- -$ tree target/distributions/ -target/distributions/ -├── app-1.0.0-SNAPSHOT-osx-x86_64 -│ └── app-1.0.0-SNAPSHOT-osx-x86_64 -│ └── bin -│ └── app -├── app-1.0.0-SNAPSHOT-osx-x86_64.tar.gz -└── app-1.0.0-SNAPSHOT-osx-x86_64.zip ----- - -Feel free to update the assembly descriptor to include additional files such as LICENSE, readme, or anything else needed by -the consumers of the executable. Make another commit right here with "build: Configure distribution assembly". - -We're ready to go to the next phase: configuring the release. - -== Adding JReleaser - -The JReleaser tool can be invoked in many ways: as a CLI tool, as a Docker image, or as a Maven plugin. 
The last option is very -convenient given that we are already working with Maven. Let's add yet another profile that contains the release configuration, -as once again we don't require this behavior to be active all the time, only when we're ready to post a release: - -[source,xml,subs=attributes+] ----- - - release - - - - org.jreleaser - jreleaser-maven-plugin - {jreleaser-version} - - - - ----- - -There are a few goals we can invoke at this point. For example, we can ask JReleaser to print out its current configuration by -invoking the `./mvnw -Prelease jreleaser:config` command. The tool will output everything that it knows about the project. We -can also generate the changelog by invoking `./mvnw -Prelease jreleaser:changelog`. A file containing the changelog will be -placed at `target/jreleaser/release/CHANGELOG.md`, which at this point should look like this: - -[source,markdown] -.target/jreleaser/release/CHANGELOG.md ----- -## Changelog - -8ef3307 build: Configure distribution assembly -5215200 build: Add initial sources ----- - -Not very exciting. But we can change this by instructing JReleaser to format the changelog according to our own conventions. You -can manually specify patterns to categorize commits; however, if you choose to follow Conventional Commits, we can instruct JReleaser -to do the same. Add the following to the JReleaser plugin configuration section: - -[source,xml] ----- - - - - - - ALWAYS - conventional-commits - - - - - ----- - -Run the previous Maven command once again and inspect the generated changelog; it should now look like this: - -[source,markdown] -.target/jreleaser/release/CHANGELOG.md ----- -## Changelog - -## 🛠 Build -- 8ef3307 Configure distribution assembly (Andres Almiray) -- 5215200 Add initial sources (Andres Almiray) - - -## Contributors -We'd like to thank the following people for their contributions: -Andres Almiray ----- - -There are more formatting options you may apply but for now these will suffice.
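With the `conventional-commits` preset, the commit message prefix is what places an entry under a given changelog section. A few hedged examples (the messages themselves are illustrative):

```shell
git commit -m "feat: Add greeting subcommand"       # grouped under Features
git commit -m "fix: Handle empty input gracefully"  # grouped under Bug Fixes
git commit -m "build: Configure JReleaser plugin"   # grouped under Build
```

Sticking to these prefixes costs nothing day to day and makes every future changelog self-organizing.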
Let's make yet another commit right now, with -"build: Configure JReleaser plugin" as a commit message. If you want you can generate the changelog once again and see this -latest commit added to the file. - -== Adding distributions to the release - -We've reached the point where we can configure the binary distributions. If you run the `./mvnw -Prelease jreleaser:config` -command you'll notice there's no mention of any distribution files that we configured in previous steps. This is because -the tool has no implicit knowledge of them, we must tell JReleaser which files we'd like to release. This decouples creation -of distributions from release assets as you might like to add or remove files at your leisure. For this particular case we'll -configure Zip files for both macOS and Windows, and a Tar file for Linux. These files must be added to the JReleaser plugin -configuration section, like so: - -[source,xml] ----- - - - - - - ALWAYS - conventional-commits - - - - - - NATIVE_IMAGE - - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-linux-x86_64.tar.gz - linux-x86_64 - - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-windows-x86_64.zip - windows-x86_64 - - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-osx-x86_64.zip - osx-x86_64 - - - - - - ----- - -We can appreciate a distribution named `app` (same as the project's artifactId for convenience) with 3 configured artifacts. -Note the use of Maven properties and Mustache templates to define the paths. You may use explicit values if you want or rely -on properties to parameterize the configuration. Maven properties resolve eagerly during build validation while Mustache -templates resolve lazily during the execution of the JReleaser plugin goals. Each artifact must define a `platform` -property that uniquely identifies them. 
If we run the `./mvnw -Prelease jreleaser:config` command we'll quickly get an error, as now -that there's a configured distribution, the plugin expects more metadata to be provided by the project: - -[source] ----- -[WARNING] [validation] project.copyright must not be blank since 0.4.0. This warning will become an error in a future release. -[ERROR] == JReleaser == -[ERROR] project.description must not be blank -[ERROR] project.website must not be blank -[ERROR] project.docsUrl must not be blank -[ERROR] project.license must not be blank -[ERROR] project.authors must not be blank ----- - -This metadata can be provided in two ways: either as part of the JReleaser plugin's configuration or using standard -POM elements. If you choose the former option, then the plugin's configuration may look like this: - -[source,xml,subs=macros+] ----- - - - - app -- Sample Quarkus CLI application - pass:[https://github.com/aalmiray/app] - pass:[https://github.com/aalmiray/app] - APACHE-2.0 - Andres Almiray - 2021 Kordamp - - ----- - -If you choose to use standard POM elements, then your `pom.xml` must contain these entries at the very least, of course -adapting the values to your own: - -[source,xml,subs=macros+] ----- - app - app -- Sample Quarkus CLI application - 2021 - pass:[https://github.com/aalmiray/app] - - - aalmiray - Andres Almiray - - - - - Apache-2.0 - pass:[http://www.apache.org/licenses/LICENSE-2.0.txt] - repo - - ----- - -Yet, we're still not out of the woods, as invoking the `./mvnw -Prelease jreleaser:config` once more will still result in -another error; this time the failure relates to missing artifacts. This is because we did not assemble all required -artifacts, yet the plugin expects them to be readily available. Here you have the choice to build the required artifacts -on other nodes, then copy them to their expected locations -- a task that can be performed running a GitHub Actions -workflow on multiple nodes.
Or you can instruct JReleaser to ignore some artifacts and select only those that match your -current platform. Previously we showed how the distribution would look when created on macOS; assuming we're still on -that platform, we have the correct artifact. - -We can instruct JReleaser to select only artifacts that match macOS at this point by invoking the `jreleaser:config` goal -with an additional flag: `./mvnw -Prelease jreleaser:config -Djreleaser.select.current.platform`. This time the command -will succeed and print out the model. Note that only the path for the macOS artifact has been fully resolved, leaving the -other 2 paths untouched. - -Let's make one more commit here with "build: Configure distribution artifacts" as the message. We can create a release right -now by invoking a different goal: `./mvnw -Prelease jreleaser:release -Djreleaser.select.current.platform`. This will -create a Git release at the chosen repository, which includes tagging the repository, uploading the changelog, all -distribution artifacts and their checksum as release assets. - -But before we do that, let's add one additional feature: a Homebrew formula that will make it easy for macOS -users to consume the binary distribution, shall we? - -== Configuring Homebrew as a packager - -link:https://brew.sh/[Homebrew] is a popular choice among macOS users to install and manage binaries. Homebrew packages -are at their core a Ruby file (known as a formula) that's executed on the target environment to install or upgrade a -particular binary. JReleaser can create formulae from binary distributions such as the one we already have configured.
- -For this to work we simply have to enable Homebrew in the JReleaser plugin configuration like so: - -[source,xml] ----- - - - NATIVE_IMAGE - - ALWAYS - - - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-linux-x86_64.tar.gz - linux-x86_64 - - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-windows-x86_64.zip - windows-x86_64 - - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-osx-x86_64.zip - osx-x86_64 - - - - ----- - -One last thing, it's a good practice to publish Homebrew formulae for non-snapshot releases thus change the project's version -from `1.0.0-SNAPSHOT` to say `1.0.0.Alpha1` or go directly with `1.0.0` as you feel like doing. One last commit and we're done, -say "feat: Add Homebrew packager configuration" as commit message. Alright, we're finally ready, let's post a release! - -== Creating a release - -It's been quite the whirlwind tour of adding configuration to the `pom.xml` but that's just for getting the project ready for -its first release; subsequent release require less tampering with configuration. We can create a git release and the -Homebrew formula with the `jreleaser:full-release` goal but if you still have some doubts on how things may play out then -you can invoke the goal in dry-run mode that is, let JReleaser perform all local operations as needed without affecting -remote resources such as Git repositories. 
This is how it would look like: - -[source,subs=attributes+] ----- -# because we changed the project's version -./mvnw -Pnative,dist package -./mvnw -Prelease jreleaser:full-release -Djreleaser.select.current.platform -Djreleaser.dryrun - -[INFO] --- jreleaser-maven-plugin:{jreleaser-version}:full-release (default-cli) @ app --- -[INFO] JReleaser {jreleaser-version} -[INFO] - basedir set to /tmp/app -[WARNING] Platform selection is in effect -[WARNING] Artifacts will be filtered by platform matching: [osx-x86_64] -[INFO] Loading variables from /Users/aalmiray/.jreleaser/config.toml -[INFO] Validating configuration -[INFO] Project version set to 1.0.0.Alpha1 -[INFO] Release is not snapshot -[INFO] Timestamp is 2021-12-16T13:31:12.163687+01:00 -[INFO] HEAD is at a21f3f2 -[INFO] Platform is osx-x86_64 -[INFO] dryrun set to true -[INFO] Generating changelog: target/jreleaser/release/CHANGELOG.md -[INFO] Calculating checksums -[INFO] [checksum] target/distributions/app-1.0.0.Alpha1-osx-x86_64.zip.sha256 -[INFO] Signing files -[INFO] Signing is not enabled. Skipping -[INFO] Uploading is not enabled. Skipping -[INFO] Releasing to https://github.com/aalmiray/app -[INFO] - uploading app-1.0.0.Alpha1-osx-x86_64.zip -[INFO] - uploading checksums_sha256.txt -[INFO] Preparing distributions -[INFO] - Preparing app distribution -[INFO] [brew] preparing app distribution -[INFO] Packaging distributions -[INFO] - Packaging app distribution -[INFO] [brew] packaging app distribution -[INFO] Publishing distributions -[INFO] - Publishing app distribution -[INFO] [brew] publishing app distribution -[INFO] [brew] setting up repository aalmiray/homebrew-tap -[INFO] Announcing release -[INFO] Announcing is not enabled. 
Skipping -[INFO] Writing output properties to target/jreleaser/output.properties -[INFO] JReleaser succeeded after 1.335 s ----- - -JReleaser will perform the following tasks for us: - -* Generate a changelog based on all commits from the last tag (if any) to the latest commit. -* Calculate SHA256 (default) checksums for all input files. -* Sign all files with GPG. In our case we did not configure this step, thus it's skipped. -* Upload assets to JFrog Artifactory or AWS S3. We also skip this step as it's not configured. -* Create a Git release at the chosen repository, tagging it. -* Upload all assets, including checksums. -* Create a Homebrew formula, publishing to pass:[https://github.com/aalmiray/homebrew-tap]. - -Of course, no remote repository was affected, as the `-Djreleaser.dryrun` property was in effect. If you're -so inclined, inspect the contents of `target/jreleaser/package/app/brew/Formula/app.rb`, which defines the Homebrew formula -to be published. It should look something like this: - -[source,ruby,subs=macros+] -.app.rb ----- -class App < Formula - desc "app -- Sample Quarkus CLI application" - homepage "pass:[https://github.com/aalmiray/app]" - url "pass:[https://github.com/aalmiray/app/releases/download/v1.0.0.Alpha1/app-1.0.0.Alpha1-osx-x86_64.zip]" - version "1.0.0.Alpha1" - sha256 "a7e8df6eef3c4c5df7357e678b3c4bc6945b926cec4178a0239660de5dba0fc4" - license "Apache-2.0" - - - def install - libexec.install Dir["*"] - bin.install_symlink "#{libexec}/bin/app" - end - - test do - output = shell_output("#{bin}/app --version") - assert_match "1.0.0.Alpha1", output - end -end ----- - -When ready, create a release for real this time by simply removing the `-Djreleaser.dryrun` flag from the command line, then -browse to your repository and look at the freshly created release. - -== Further reading - -* link:https://jreleaser.org/guide/latest/index.html[JReleaser] documentation.
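The `Calculating checksums` step seen in the dry-run log produces plain SHA-256 sums, so consumers can verify a downloaded asset locally. A hedged sketch (file names are illustrative; `sha256sum` is assumed to be available — on macOS use `shasum -a 256` instead):

```shell
# Recompute the digest and compare against the published checksum file
sha256sum app-1.0.0.Alpha1-osx-x86_64.zip
sha256sum -c checksums_sha256.txt
```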
- -== Reference - -As a reference, these are the full contents of the `pom.xml`: - -[source,xml,subs=attributes+,macros+] ----- - - - 4.0.0 - org.acme - app - 1.0.0.Alpha1 - app - app -- Sample Quarkus CLI application - 2021 - https://github.com/aalmiray/app - - - aalmiray - Andres Almiray - - - - - Apache-2.0 - http://www.apache.org/licenses/LICENSE-2.0.txt - repo - - - - - ${project.build.directory}/distributions - 3.8.1 - true - 11 - 11 - UTF-8 - UTF-8 - quarkus-bom - io.quarkus.platform - {quarkus-version} - 3.0.0-M5 - false - - - - - ${quarkus.platform.group-id} - ${quarkus.platform.artifact-id} - ${quarkus.platform.version} - pom - import - - - - - - io.quarkus - quarkus-picocli - - - io.quarkus - quarkus-arc - - - io.quarkus - quarkus-junit5 - test - - - - - - kr.motd.maven - os-maven-plugin - 1.7.0 - - - - - ${quarkus.platform.group-id} - quarkus-maven-plugin - ${quarkus.platform.version} - true - - - - build - generate-code - generate-code-tests - - - - - - maven-compiler-plugin - ${compiler-plugin.version} - - ${maven.compiler.parameters} - - - - maven-surefire-plugin - ${surefire-plugin.version} - - - org.jboss.logmanager.LogManager - ${maven.home} - - - - - - - - native - - - native - - - - - - maven-failsafe-plugin - ${surefire-plugin.version} - - - - integration-test - verify - - - - ${project.build.directory}/${project.build.finalName}-runner - org.jboss.logmanager.LogManager - ${maven.home} - - - - - - - - - native - - - - dist - - - - org.apache.maven.plugins - maven-assembly-plugin - 3.3.0 - - false - false - ${project.artifactId}-${project.version}-${os.detected.classifier} - ${distribution.directory} - ${project.build.directory}/assembly/work - - src/main/assembly/assembly.xml - - - - - make-distribution - package - - single - - - - - - - - - dist-windows - - - windows - - - - .exe - - - - release - - - - org.jreleaser - jreleaser-maven-plugin - {jreleaser-version} - - - - - - - ALWAYS - conventional-commits - - - - - - NATIVE_IMAGE - - ALWAYS - 
- - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-linux-x86_64.tar.gz - linux-x86_64 - - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-windows-x86_64.zip - windows-x86_64 - - - ${distribution.directory}/{{distributionName}}-{{projectVersion}}-osx-x86_64.zip - osx-x86_64 - - - - - - - - - - - - ----- diff --git a/_versions/2.7/guides/kafka-dev-services.adoc b/_versions/2.7/guides/kafka-dev-services.adoc deleted file mode 100644 index 8704c7744fe..00000000000 --- a/_versions/2.7/guides/kafka-dev-services.adoc +++ /dev/null @@ -1,89 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Dev Services for Kafka - -include::./attributes.adoc[] - -If any Kafka-related extension is present (e.g. `quarkus-smallrye-reactive-messaging-kafka`), Dev Services for Kafka automatically starts a Kafka broker in dev mode and when running tests. -So, you don't have to start a broker manually. -The application is configured automatically. - -IMPORTANT: Because starting a Kafka broker can be long, Dev Services for Kafka uses https://vectorized.io/redpanda[Redpanda], a Kafka compatible broker which starts in ~1 second. - -== Enabling / Disabling Dev Services for Kafka - -Dev Services for Kafka is automatically enabled unless: - -- `quarkus.kafka.devservices.enabled` is set to `false` -- the `kafka.bootstrap.servers` is configured -- all the Reactive Messaging Kafka channels have the `bootstrap.servers` attribute set - -Dev Services for Kafka relies on Docker to start the broker. -If your environment does not support Docker, you will need to start the broker manually, or connect to an already running broker. -You can configure the broker address using `kafka.bootstrap.servers`. - -== Shared broker - -Most of the time you need to share the broker between applications. 
-Dev Services for Kafka implements a _service discovery_ mechanism for your multiple Quarkus applications running in _dev_ mode to share a single broker. - -NOTE: Dev Services for Kafka starts the container with the `quarkus-dev-service-kafka` label, which is used to identify the container. - -If you need multiple (shared) brokers, you can configure the `quarkus.kafka.devservices.service-name` attribute and indicate the broker name. -It looks for a container with the same value, or starts a new one if none can be found. -The default service name is `kafka`. - -Sharing is enabled by default in dev mode, but disabled in test mode. -You can disable the sharing with `quarkus.kafka.devservices.shared=false`. - -== Setting the port - -By default, Dev Services for Kafka picks a random port and configures the application. -You can set the port by configuring the `quarkus.kafka.devservices.port` property. - -Note that the Kafka advertised address is automatically configured with the chosen port. - -== Configuring the image - -Dev Services for Kafka uses `vectorized/redpanda` images. -You can select any version from https://hub.docker.com/r/vectorized/redpanda: - -[source, properties] ----- -quarkus.kafka.devservices.image-name=vectorized/redpanda:latest ----- - -IMPORTANT: Dev Services for Kafka only supports Redpanda. - -== Configuring Kafka topics - -You can configure Dev Services for Kafka to create topics once the broker is started. -Topics are created with the given number of partitions and 1 replica. - -The following example creates a topic named `test` with 3 partitions, and a second topic named `messages` with 2 partitions. - -[source, properties] ----- -quarkus.kafka.devservices.topic-partitions.test=3 -quarkus.kafka.devservices.topic-partitions.messages=2 ----- - -If a topic already exists with the given name, the creation is skipped, -without trying to re-partition the existing topic to a different number of partitions.
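Putting the knobs from this section together, a hedged `application.properties` sketch (all values are illustrative):

```properties
# Pin the broker to a fixed port instead of a random one
quarkus.kafka.devservices.port=32769
# Share one broker across dev-mode applications under a custom name
quarkus.kafka.devservices.service-name=my-kafka
# Pre-create a topic with 3 partitions on startup
quarkus.kafka.devservices.topic-partitions.orders=3
```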
- -You can configure the timeout for the Kafka admin client calls used in topic creation with `quarkus.kafka.devservices.topic-partitions-timeout`; it defaults to 2 seconds. - -== Enabling transactions - -By default, the Redpanda broker does not act as a transaction coordinator. -To enable transactions, set: - -[source, properties] ----- -quarkus.kafka.devservices.redpanda.transaction-enabled=true ----- - -NOTE: It also enables producer idempotence support. \ No newline at end of file diff --git a/_versions/2.7/guides/kafka-reactive-getting-started.adoc b/_versions/2.7/guides/kafka-reactive-getting-started.adoc deleted file mode 100644 index 422e610fc4b..00000000000 --- a/_versions/2.7/guides/kafka-reactive-getting-started.adoc +++ /dev/null @@ -1,509 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Getting Started to SmallRye Reactive Messaging with Apache Kafka - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can utilize SmallRye Reactive Messaging to interact with Apache Kafka. - -== Prerequisites - -:prerequisites-docker-compose: -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this guide, we are going to develop two applications communicating with Kafka. -The first application sends a _quote request_ to Kafka and consumes Kafka messages from the _quote_ topic. -The second application receives the _quote request_ and sends a _quote_ back. - -image::kafka-qs-architecture.png[alt=Architecture, align=center] - -The first application, the _producer_, will let the user request some quotes over an HTTP endpoint. -For each quote request, a random identifier is generated and returned to the user, to mark the quote request as _pending_. -At the same time, the generated request id is sent over the Kafka topic `quote-requests`.
- -image::kafka-qs-app-screenshot.png[alt=Producer App UI, align=center] - -The second application, the _processor_, will read from the `quote-requests` topic, put a random price to the quote, and send it to a Kafka topic named `quotes`. - -Lastly, the _producer_ will read the quotes and send them to the browser using server-sent events. -The user will therefore see the quote price updated from _pending_ to the received price in real-time. - -== Solution - -We recommend that you follow the instructions in the next sections and create applications step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `kafka-quickstart` {quickstarts-tree-url}/kafka-quickstart[directory]. - -== Creating the Maven Project - -First, we need to create two projects: the _producer_ and the _processor_. - -To create the _producer_ project, in a terminal run: - -:create-app-artifact-id: kafka-quickstart-producer -:create-app-extensions: resteasy-reactive-jackson,smallrye-reactive-messaging-kafka -:create-app-post-command: -include::includes/devtools/create-app.adoc[] - -This command creates the project structure and selects two Quarkus extensions we will be using: - -1. RESTEasy Reactive and its Jackson support (to handle JSON) to serve the HTTP endpoint. -2. The Kafka connector for Reactive Messaging - -To create the _processor_ project, from the same directory, run: - -:create-app-artifact-id: kafka-quickstart-processor -:create-app-extensions: smallrye-reactive-messaging-kafka -:create-app-post-command: -include::includes/devtools/create-app.adoc[] - -At that point, you should have the following structure: - -[source, text] ----- -. 
-├── kafka-quickstart-processor -│ ├── README.md -│ ├── mvnw -│ ├── mvnw.cmd -│ ├── pom.xml -│ └── src -│ └── main -│ ├── docker -│ ├── java -│ └── resources -│ └── application.properties -└── kafka-quickstart-producer - ├── README.md - ├── mvnw - ├── mvnw.cmd - ├── pom.xml - └── src - └── main - ├── docker - ├── java - └── resources - └── application.properties ----- - -Open the two projects in your favorite IDE. - -[TIP] -.Dev Services -==== -No need to start a Kafka broker when using the dev mode or for tests. -Quarkus starts a broker for you automatically. -See xref:kafka-dev-services.adoc[Dev Services for Kafka] for details. -==== - -== The Quote object - -The `Quote` class will be used in both _producer_ and _processor_ projects. -For the sake of simplicity, we will duplicate the class. -In both projects, create the `src/main/java/org/acme/kafka/model/Quote.java` file, with the following content: - -[source,java] ----- -package org.acme.kafka.model; - -public class Quote { - - public String id; - public int price; - - /** - * Default constructor required for Jackson serializer - */ - public Quote() { } - - public Quote(String id, int price) { - this.id = id; - this.price = price; - } - - @Override - public String toString() { - return "Quote{" + - "id='" + id + '\'' + - ", price=" + price + - '}'; - } -} ----- - -JSON representation of `Quote` objects will be used in messages sent to the Kafka topic -and also in the server-sent events sent to web browsers. - -Quarkus has built-in capabilities to deal with JSON Kafka messages. -In a following section, we will create serializer/deserializer classes for Jackson. 
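Since the `Quote` fields are public and simple, the document that ends up on the wire is just `id` and `price`. A dependency-free sketch of the expected shape (the helper below is hypothetical, shown only to make the wire format concrete — Jackson produces an equivalent document, though field order is not guaranteed):

```java
public class QuoteJsonDemo {

    // Hypothetical helper mirroring the Quote fields; not part of the quickstart.
    static String toJson(String id, int price) {
        return String.format("{\"id\":\"%s\",\"price\":%d}", id, price);
    }

    public static void main(String[] args) {
        // A quote for request "a1b2" priced at 42
        System.out.println(toJson("a1b2", 42));
    }
}
```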
- -== Sending quote request - -Inside the _producer_ project, create the `src/main/java/org/acme/kafka/producer/QuotesResource.java` file and add the following content: - -[source,java] ----- -package org.acme.kafka.producer; - -import java.util.UUID; - -import javax.ws.rs.GET; -import javax.ws.rs.POST; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import org.acme.kafka.model.Quote; -import org.eclipse.microprofile.reactive.messaging.Channel; -import org.eclipse.microprofile.reactive.messaging.Emitter; - -@Path("/quotes") -public class QuotesResource { - - @Channel("quote-requests") - Emitter quoteRequestEmitter; // <1> - - /** - * Endpoint to generate a new quote request id and send it to "quote-requests" Kafka topic using the emitter. - */ - @POST - @Path("/request") - @Produces(MediaType.TEXT_PLAIN) - public String createRequest() { - UUID uuid = UUID.randomUUID(); - quoteRequestEmitter.send(uuid.toString()); // <2> - return uuid.toString(); // <3> - } -} ----- -<1> Inject a Reactive Messaging `Emitter` to send messages to the `quote-requests` channel. -<2> On a post request, generate a random UUID and send it to the Kafka topic using the emitter. -<3> Return the same UUID to the client. - - -The `quote-requests` channel is going to be managed as a Kafka topic, as that's the only connector on the classpath. -If not indicated otherwise, like in this example, Quarkus uses the channel name as topic name. -So, in this example, the application writes into the `quote-requests` topic. -Quarkus also configures the serializer automatically, because it finds that the `Emitter` produces `String` values. - -TIP: When you have multiple connectors, you would need to indicate which connector you want to use in the application configuration. - -== Processing quote requests - -Now let's consume the quote request and give out a price. 
-Inside the _processor_ project, create the `src/main/java/org/acme/kafka/processor/QuotesProcessor.java` file and add the following content: - -[source, java] ----- -package org.acme.kafka.processor; - -import java.util.Random; - -import javax.enterprise.context.ApplicationScoped; - -import org.acme.kafka.model.Quote; - -import org.eclipse.microprofile.reactive.messaging.Incoming; -import org.eclipse.microprofile.reactive.messaging.Outgoing; - -import io.smallrye.reactive.messaging.annotations.Blocking; - -/** - * A bean consuming data from the "quote-requests" Kafka topic (mapped to "requests" channel) and giving out a random quote. - * The result is pushed to the "quotes" Kafka topic. - */ -@ApplicationScoped -public class QuotesProcessor { - - private Random random = new Random(); - - @Incoming("requests") // <1> - @Outgoing("quotes") // <2> - @Blocking // <3> - public Quote process(String quoteRequest) throws InterruptedException { - // simulate some hard working task - Thread.sleep(200); - return new Quote(quoteRequest, random.nextInt(100)); - } -} - ----- -<1> Indicates that the method consumes the items from the `requests` channel. -<2> Indicates that the objects returned by the method are sent to the `quotes` channel. -<3> Indicates that the processing is _blocking_ and cannot be run on the caller thread. - -For every Kafka _record_ from the `quote-requests` topic, Reactive Messaging calls the `process` method, and sends the returned `Quote` object to the `quotes` channel. -In this case, we need to configure the channel in the `application.properties` file, to configure the `requests` and `quotes` channels: - -[source, properties] ----- -%dev.quarkus.http.port=8081 - -# Configure the incoming `quote-requests` Kafka topic -mp.messaging.incoming.requests.topic=quote-requests -mp.messaging.incoming.requests.auto.offset.reset=earliest ----- - -Note that in this case we have one incoming and one outgoing connector configuration, each one distinctly named.
The configuration keys are structured as follows:

`mp.messaging.[outgoing|incoming].{channel-name}.property=value`

The `channel-name` segment must match the value set in the `@Incoming` and `@Outgoing` annotations:

* `quote-requests` -> Kafka topic from which we read the quote requests
* `quotes` -> Kafka topic in which we write the quotes

[NOTE]
====
More details about this configuration are available in the https://kafka.apache.org/documentation/#producerconfigs[Producer configuration] and https://kafka.apache.org/documentation/#consumerconfigs[Consumer configuration] sections of the Kafka documentation. These properties are configured with the prefix `kafka`.
An exhaustive list of configuration properties is available in xref:kafka.adoc#kafka-configuration[Kafka Reference Guide - Configuration].
====

`mp.messaging.incoming.requests.auto.offset.reset=earliest` instructs the application to start reading the topic from the first offset when there is no committed offset for the consumer group.
In other words, it will also process messages sent before we started the processor application.

There is no need to set serializers or deserializers.
Quarkus detects them, and if none are found, generates them using JSON serialization.

== Receiving quotes

Back to our _producer_ project.
Let's modify the `QuotesResource` to consume quotes from Kafka and send them back to the client via Server-Sent Events:

[source,java]
----
import io.smallrye.mutiny.Multi;

...

@Channel("quotes")
Multi<Quote> quotes; // <1>

/**
 * Endpoint retrieving the "quotes" Kafka topic and sending the items via Server-Sent Events.
 */
@GET
@Produces(MediaType.SERVER_SENT_EVENTS) // <2>
public Multi<Quote> stream() {
    return quotes; // <3>
}
----
<1> Injects the `quotes` channel using the `@Channel` qualifier.
<2> Indicates that the content is sent using Server-Sent Events.
<3> Returns the stream (_Reactive Stream_).

No need to configure anything, as Quarkus will automatically associate the `quotes` channel to the `quotes` Kafka topic.
It will also generate a deserializer for the `Quote` class.

[TIP]
====
.Message serialization in Kafka
In this example we used Jackson to serialize/deserialize Kafka messages.
For more options on message serialization, see xref:kafka.adoc#kafka-serialization[Kafka Reference Guide - Serialization].

We strongly suggest adopting a contract-first approach using a schema registry.
To learn more about how to use Apache Kafka with the schema registry and Avro, follow the
xref:kafka-schema-registry-avro.adoc[Using Apache Kafka with Schema Registry and Avro] guide.
====

== The HTML page

Final touch, the HTML page requesting quotes and displaying the prices obtained over SSE.

Inside the _producer_ project, create the `src/main/resources/META-INF/resources/quotes.html` file.
The original markup was garbled in this revision; the minimal sketch below reconstructs a page with the same behavior (a button requesting quotes, and a list of quotes updated over SSE). Element ids and styling are illustrative, not the original ones:

[source,html]
----
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Prices</title>
</head>
<body>
<h2>Quotes</h2>
<button id="request-quote">Request Quote</button>
<div id="quotes"></div>
<script>
    // On click, request a new quote and add a pending entry to the list.
    document.getElementById("request-quote").addEventListener("click", () => {
        fetch("/quotes/request", {method: "POST"})
            .then(res => res.text())
            .then(qid => {
                const quote = document.createElement("div");
                quote.id = qid;
                quote.innerText = "Quote " + qid + ": pending";
                document.getElementById("quotes").appendChild(quote);
            });
    });

    // On each quote received over SSE, update the corresponding item in the list.
    const source = new EventSource("/quotes");
    source.onmessage = (event) => {
        const json = JSON.parse(event.data);
        const quote = document.getElementById(json.id);
        if (quote) {
            quote.innerText = "Quote " + json.id + ": " + json.price;
        }
    };
</script>
</body>
</html>
----

Nothing spectacular here.
When the user clicks the button, an HTTP request is made to request a quote, and a pending quote is added to the list.
On each quote received over SSE, the corresponding item in the list is updated.

== Get it running

You just need to run both applications.
In one terminal, run:

[source,bash]
----
mvn -f kafka-quickstart-producer quarkus:dev
----

In another terminal, run:

[source,bash]
----
mvn -f kafka-quickstart-processor quarkus:dev
----

Quarkus starts a Kafka broker automatically, configures the application and shares the Kafka broker instance between different applications.
See xref:kafka-dev-services.adoc[Dev Services for Kafka] for more details.

Open `http://localhost:8080/quotes.html` in your browser and request some quotes by clicking the button.

== Running in JVM or Native mode

When not running in dev or test mode, you will need to start your Kafka broker.
You can follow the instructions from the https://kafka.apache.org/quickstart[Apache Kafka website] or create a `docker-compose.yaml` file with the following content:

[source,yaml]
----
version: '3.5'

services:

  zookeeper:
    image: quay.io/strimzi/kafka:0.23.0-kafka-2.8.0
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    environment:
      LOG_DIR: /tmp/logs
    networks:
      - kafka-quickstart-network

  kafka:
    image: quay.io/strimzi/kafka:0.23.0-kafka-2.8.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    networks:
      - kafka-quickstart-network

  producer:
    image: quarkus-quickstarts/kafka-quickstart-producer:1.0-${QUARKUS_MODE:-jvm}
    build:
      context: producer
      dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm}
    depends_on:
      - kafka
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
    ports:
      - "8080:8080"
    networks:
      - kafka-quickstart-network

  processor:
    image: quarkus-quickstarts/kafka-quickstart-processor:1.0-${QUARKUS_MODE:-jvm}
    build:
      context: processor
      dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm}
    depends_on:
      - kafka
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
    networks:
      - kafka-quickstart-network

networks:
  kafka-quickstart-network:
    name: kafkaquickstart
----

Make sure you first build both applications in JVM mode with:

[source,bash]
----
mvn -f kafka-quickstart-producer package
mvn -f kafka-quickstart-processor package
----

Once packaged, run `docker-compose up`.

NOTE: This is a development cluster; do not use it in production.

You can also build and run the applications as native executables.
First, compile both applications as native:

[source,bash]
----
mvn -f kafka-quickstart-producer package -Dnative -Dquarkus.native.container-build=true
mvn -f kafka-quickstart-processor package -Dnative -Dquarkus.native.container-build=true
----

Run the system with:

[source,bash]
----
export QUARKUS_MODE=native
docker-compose up --build
----

== Going further

This guide has shown how you can interact with Kafka using Quarkus.
It utilizes https://smallrye.io/smallrye-reactive-messaging[SmallRye Reactive Messaging] to build data streaming applications.

For the exhaustive list of features and configuration options, check the xref:kafka.adoc[Reference guide for Apache Kafka Extension].

[NOTE]
====
In this guide we explored the SmallRye Reactive Messaging framework to interact with Apache Kafka.
The Quarkus extension for Kafka also allows
xref:kafka.adoc#kafka-bare-clients[using Kafka clients directly].
====

diff --git a/_versions/2.7/guides/kafka-schema-registry-avro.adoc b/_versions/2.7/guides/kafka-schema-registry-avro.adoc
deleted file mode 100644
index bce053c9c05..00000000000
--- a/_versions/2.7/guides/kafka-schema-registry-avro.adoc
+++ /dev/null
@@ -1,705 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Using Apache Kafka with Schema Registry and Avro

include::./attributes.adoc[]

This guide shows how your Quarkus application can use Apache Kafka, http://avro.apache.org/docs/current/[Avro] serialized
records, and connect to a schema registry (such as the https://docs.confluent.io/platform/current/schema-registry/index.html[Confluent Schema Registry] or https://www.apicur.io/registry/[Apicurio Registry]).

If you are not familiar with Kafka, and Kafka in Quarkus in particular, consider
first going through the xref:kafka.adoc[Using Apache Kafka with Reactive Messaging] guide.

== Prerequisites

:prerequisites-time: 30 minutes
:prerequisites-docker-compose:
include::includes/devtools/prerequisites.adoc[]

== Architecture

In this guide we are going to implement a REST resource, namely `MovieResource`, that
will consume movie DTOs and put them in a Kafka topic.

Then, we will implement a consumer that will consume and collect messages from the same topic.
The collected messages will then be exposed by another resource, `ConsumedMovieResource`, via
https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events[Server-Sent Events].

The _Movies_ will be serialized and deserialized using Avro.
The schema describing the _Movie_ is stored in Apicurio Registry.
The same concept applies if you are using the Confluent Avro _serde_ and Confluent Schema Registry.

== Solution

We recommend that you follow the instructions in the next sections and create the application step by step.
However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `kafka-avro-schema-quickstart` {quickstarts-tree-url}/kafka-avro-schema-quickstart[directory].

== Creating the Maven Project

First, we need a new project. Create a new project with the following command:

:create-app-artifact-id: kafka-avro-schema-quickstart
:create-app-extensions: resteasy-reactive-jackson,smallrye-reactive-messaging-kafka,apicurio-registry-avro
include::includes/devtools/create-app.adoc[]

[TIP]
====
If you use Confluent Schema Registry, you don't need the `quarkus-apicurio-registry-avro` extension.
Instead, you need the following dependencies and the Confluent Maven repository added
to your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<project>
    ...
    <dependencies>
        <!-- Quarkus extension for generating Java code from Avro schemas -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-avro</artifactId>
        </dependency>
        <!-- Confluent registry libraries use the JAX-RS client -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest-client-reactive</artifactId>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-avro-serializer</artifactId>
            <version>6.1.1</version>
            <exclusions>
                <exclusion>
                    <groupId>jakarta.ws.rs</groupId>
                    <artifactId>jakarta.ws.rs-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>

    <repositories>
        <repository>
            <id>confluent</id>
            <url>https://packages.confluent.io/maven/</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>
</project>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
repositories {
    ...

    maven {
        url "https://packages.confluent.io/maven/"
    }
}

dependencies {
    ...
    // Quarkus extension for generating Java code from Avro schemas
    implementation("io.quarkus:quarkus-avro")

    // Confluent registry libraries use the JAX-RS client
    implementation("io.quarkus:quarkus-rest-client-reactive")

    implementation("io.confluent:kafka-avro-serializer:6.1.1") {
        exclude group: "jakarta.ws.rs", module: "jakarta.ws.rs-api"
    }
}
----
====

== Avro schema

Apache Avro is a data serialization system. Data structures are described using schemas.
The first thing we need to do is to create a schema describing the `Movie` structure.
Create a file called `src/main/avro/movie.avsc` with the schema for our record (Kafka message):

[source,json]
----
{
  "namespace": "org.acme.kafka.quarkus",
  "type": "record",
  "name": "Movie",
  "fields": [
    {
      "name": "title",
      "type": "string"
    },
    {
      "name": "year",
      "type": "int"
    }
  ]
}
----

If you build the project with:

include::includes/devtools/build.adoc[]

the `movie.avsc` schema will get compiled to a `Movie.java` file
placed in the `target/generated-sources/avsc` directory.

Take a look at the https://avro.apache.org/docs/current/spec.html#schemas[Avro specification] to learn more about
the Avro syntax and supported types.

TIP: With Quarkus, there's no need to use a specific Maven plugin to process the Avro schema; this is all done for you by the `quarkus-avro` extension!

If you run the project with:

include::includes/devtools/dev.adoc[]

the changes you make to the schema file will be
automatically applied to the generated Java files.

== The `Movie` producer

Having defined the schema, we can now jump to implementing the `MovieResource`.
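Before moving on, note that Avro schemas evolve by adding fields with default values, which keeps old records readable with the new schema. For example, a backward-compatible extension of the schema above might look like this (the `rating` field is purely illustrative, not part of this guide's quickstart):

```json
{
  "namespace": "org.acme.kafka.quarkus",
  "type": "record",
  "name": "Movie",
  "fields": [
    { "name": "title", "type": "string" },
    { "name": "year", "type": "int" },
    { "name": "rating", "type": ["null", "double"], "default": null }
  ]
}
```

The union with `"null"` plus a `null` default means readers using the new schema can still decode records written before the field existed.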
Let's open the `MovieResource`, inject an https://quarkus.io/blog/reactive-messaging-emitter/[`Emitter`] of `Movie` DTOs and implement a `@POST` method
that consumes `Movie` and sends it through the `Emitter`:

[source,java]
----
package org.acme.kafka;

import org.acme.kafka.quarkus.Movie;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;
import org.jboss.logging.Logger;

import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/movies")
public class MovieResource {
    private static final Logger LOGGER = Logger.getLogger(MovieResource.class);

    @Channel("movies")
    Emitter<Movie> emitter;

    @POST
    public Response enqueueMovie(Movie movie) {
        LOGGER.infof("Sending movie %s to Kafka", movie.getTitle());
        emitter.send(movie);
        return Response.accepted().build();
    }

}
----

Now, we need to _map_ the `movies` channel (the `Emitter` emits to this channel) to a Kafka topic.
To achieve this, edit the `application.properties` file, and add the following content:

[source,properties]
----
# set the connector for the outgoing channel to `smallrye-kafka`
mp.messaging.outgoing.movies.connector=smallrye-kafka

# set the topic name for the channel to `movies`
mp.messaging.outgoing.movies.topic=movies

# automatically register the schema with the registry, if not present
mp.messaging.outgoing.movies.apicurio.registry.auto-register=true
----

[TIP]
====
You might have noticed that we didn't define the `value.serializer`.
That's because Quarkus can xref:kafka.adoc#serialization-autodetection[autodetect] that `io.apicurio.registry.serde.avro.AvroKafkaSerializer` is appropriate here, based on the `@Channel` declaration, the structure of the `Movie` type, and the presence of the Apicurio Registry libraries.
We still have to define the `apicurio.registry.auto-register` property.
If you use Confluent Schema Registry, you don't have to configure `value.serializer` either.
It is also detected automatically.
The Confluent Schema Registry analogue of `apicurio.registry.auto-register` is called `auto.register.schemas`.
It defaults to `true`, so it doesn't have to be configured in this example.
It can be explicitly set to `false` if you want to disable automatic schema registration.
====

== The `Movie` consumer

So, we can write records into Kafka containing our `Movie` data.
That data is serialized using Avro.
Now, it's time to implement a consumer for them.

Let's create `ConsumedMovieResource` that will consume `Movie` messages
from the `movies-from-kafka` channel and will expose them via Server-Sent Events:

[source,java]
----
package org.acme.kafka;

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.acme.kafka.quarkus.Movie;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.jboss.resteasy.reactive.RestSseElementType;

import io.smallrye.mutiny.Multi;

@ApplicationScoped
@Path("/consumed-movies")
public class ConsumedMovieResource {

    @Channel("movies-from-kafka")
    Multi<Movie> movies;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @RestSseElementType(MediaType.TEXT_PLAIN)
    public Multi<String> stream() {
        return movies.map(movie -> String.format("'%s' from %s", movie.getTitle(), movie.getYear()));
    }
}
----

The last bit of the application's code is the configuration of the `movies-from-kafka` channel in
`application.properties`:

[source,properties]
----
# set the connector for the incoming channel to `smallrye-kafka`
mp.messaging.incoming.movies-from-kafka.connector=smallrye-kafka

# set the topic name for the channel to `movies`
mp.messaging.incoming.movies-from-kafka.topic=movies

# disable auto-commit, Reactive Messaging handles it itself
-mp.messaging.incoming.movies-from-kafka.enable.auto.commit=false - -mp.messaging.incoming.movies-from-kafka.auto.offset.reset=earliest ----- - -[TIP] -==== -You might have noticed that we didn't define the `value.deserializer`. -That's because Quarkus can xref:kafka.adoc#serialization-autodetection[autodetect] that `io.apicurio.registry.serde.avro.AvroKafkaDeserializer` is appropriate here, based on the `@Channel` declaration, structure of the `Movie` type, and presence of the Apicurio Registry libraries. -We don't have to define the `apicurio.registry.use-specific-avro-reader` property either, that is also configured automatically. - -If you use Confluent Schema Registry, you don't have to configure `value.deserializer` or `specific.avro.reader` either. -They are both detected automatically. -==== - -== Running the application - -Start the application in dev mode: - -include::includes/devtools/dev.adoc[] - -Kafka broker and Apicurio Registry instance are started automatically thanks to Dev Services. -See xref:kafka-dev-services.adoc[Dev Services for Kafka] and xref:apicurio-registry-dev-services.adoc[Dev Services for Apicurio Registry] for more details. - -[TIP] -==== -You might have noticed that we didn't configure the schema registry URL anywhere. -This is because Dev Services for Apicurio Registry configures all Kafka channels in SmallRye Reactive Messaging to use the automatically started registry instance. - -There's no Dev Services support for Confluent Schema Registry. 
-If you want to use a running instance of Confluent Schema Registry, configure its URL, together with the URL of a Kafka broker: - -[source,properties] ----- -kafka.bootstrap.servers=PLAINTEXT://localhost:9092 -mp.messaging.connector.smallrye-kafka.schema.registry.url=http://localhost:8081 ----- -==== - -In the second terminal, query the `ConsumedMovieResource` resource with `curl`: - -[source,bash] ----- -curl -N http://localhost:8080/consumed-movies ----- - -In the third one, post a few movies: - -[source,bash] ----- -curl --header "Content-Type: application/json" \ - --request POST \ - --data '{"title":"The Shawshank Redemption","year":1994}' \ - http://localhost:8080/movies - -curl --header "Content-Type: application/json" \ - --request POST \ - --data '{"title":"The Godfather","year":1972}' \ - http://localhost:8080/movies - -curl --header "Content-Type: application/json" \ - --request POST \ - --data '{"title":"The Dark Knight","year":2008}' \ - http://localhost:8080/movies - -curl --header "Content-Type: application/json" \ - --request POST \ - --data '{"title":"12 Angry Men","year":1957}' \ - http://localhost:8080/movies ----- - -Observe what is printed in the second terminal. You should see something along the lines of: - -[source] ----- -data:'The Shawshank Redemption' from 1994 - -data:'The Godfather' from 1972 - -data:'The Dark Knight' from 2008 - -data:'12 Angry Men' from 1957 ----- - -== Running in JVM or Native mode - -When not running in dev or test mode, you will need to start your own Kafka broker and Apicurio Registry. -The easiest way to get them running is to use `docker-compose` to start the appropriate containers. - -TIP: If you use Confluent Schema Registry, you already have a Kafka broker and Confluent Schema Registry instance running and configured. -You can ignore the `docker-compose` instructions here, as well as the Apicurio Registry configuration. 
- -Create a `docker-compose.yaml` file at the root of the project with the following content: - -[source,yaml] ----- -version: '2' - -services: - - zookeeper: - image: quay.io/strimzi/kafka:0.22.1-kafka-2.7.0 - command: [ - "sh", "-c", - "bin/zookeeper-server-start.sh config/zookeeper.properties" - ] - ports: - - "2181:2181" - environment: - LOG_DIR: /tmp/logs - - kafka: - image: quay.io/strimzi/kafka:0.22.1-kafka-2.7.0 - command: [ - "sh", "-c", - "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}" - ] - depends_on: - - zookeeper - ports: - - "9092:9092" - environment: - LOG_DIR: "/tmp/logs" - KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092 - KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092 - KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 - - schema-registry: - image: apicurio/apicurio-registry-mem:2.1.5.Final - ports: - - 8081:8080 - depends_on: - - kafka - environment: - QUARKUS_PROFILE: prod ----- - -Before starting the application, let's first start the Kafka broker and Apicurio Registry: - -[source,bash] ----- -docker-compose up ----- - -NOTE: To stop the containers, use `docker-compose down`. You can also clean up -the containers with `docker-compose rm` - -You can build the application with: - -include::includes/devtools/build.adoc[] - -And run it in JVM mode with: - -[source, bash] ----- -java -Dmp.messaging.connector.smallrye-kafka.apicurio.registry.url=http://localhost:8081/apis/registry/v2 -jar target/quarkus-app/quarkus-run.jar ----- - -NOTE: By default, the application tries to connect to a Kafka broker listening at `localhost:9092`. -You can configure the bootstrap server using: `java -Dkafka.bootstrap.servers=\... 
-jar target/quarkus-app/quarkus-run.jar` - -Specifying the registry URL on the command line is not very convenient, so you can add a configuration property only for the `prod` profile: - -[source,properties] ----- -%prod.mp.messaging.connector.smallrye-kafka.apicurio.registry.url=http://localhost:8081/apis/registry/v2 ----- - -You can build a native executable with: - -include::includes/devtools/build-native.adoc[] - -and run it with: - -[source,bash] ----- -./target/kafka-avro-schema-quickstart-1.0.0-SNAPSHOT-runner -Dkafka.bootstrap.servers=localhost:9092 ----- - -== Testing the application - -As mentioned above, Dev Services for Kafka and Apicurio Registry automatically start and configure a Kafka broker and Apicurio Registry instance in dev mode and for tests. -Hence, we don't have to set up Kafka and Apicurio Registry ourselves. -We can just focus on writing the test. - -First, let's add test dependencies on REST Client and Awaitility to the build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - - io.quarkus - quarkus-rest-client-reactive - test - - - org.awaitility - awaitility - test - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -testImplementation("io.quarkus:quarkus-rest-client-reactive") -testImplementation("org.awaitility:awaitility") ----- - -In the test, we will send movies in a loop and check if the `ConsumedMovieResource` returns -what we send. 
[source,java]
----
package org.acme.kafka;

import io.quarkus.test.common.QuarkusTestResource;
import io.quarkus.test.common.http.TestHTTPResource;
import io.quarkus.test.junit.QuarkusTest;
import io.restassured.http.ContentType;
import org.hamcrest.Matchers;
import org.junit.jupiter.api.Test;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.sse.SseEventSource;
import java.net.URI;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static io.restassured.RestAssured.given;
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;
import static org.hamcrest.MatcherAssert.assertThat;

@QuarkusTest
public class MovieResourceTest {

    @TestHTTPResource("/consumed-movies")
    URI consumedMovies;

    @Test
    public void testHelloEndpoint() throws InterruptedException {
        // create a client for `ConsumedMovieResource` and collect the consumed resources in a list
        Client client = ClientBuilder.newClient();
        WebTarget target = client.target(consumedMovies);

        List<String> received = new CopyOnWriteArrayList<>();

        SseEventSource source = SseEventSource.target(target).build();
        source.register(inboundSseEvent -> received.add(inboundSseEvent.readData()));

        // in a separate thread, feed the `MovieResource`
        ExecutorService movieSender = startSendingMovies();

        source.open();

        // check if, after at most 5 seconds, we have at least 2 items collected, and they are what we expect
        await().atMost(5, SECONDS).until(() -> received.size() >= 2);
        assertThat(received, Matchers.hasItems("'The Shawshank Redemption' from 1994",
                "'12 Angry Men' from 1957"));
        source.close();

        // shutdown the executor that is feeding the `MovieResource`
movieSender.shutdownNow(); - movieSender.awaitTermination(5, SECONDS); - } - - private ExecutorService startSendingMovies() { - ExecutorService executorService = Executors.newSingleThreadExecutor(); - executorService.execute(() -> { - while (true) { - given() - .contentType(ContentType.JSON) - .body("{\"title\":\"The Shawshank Redemption\",\"year\":1994}") - .when() - .post("/movies") - .then() - .statusCode(202); - - given() - .contentType(ContentType.JSON) - .body("{\"title\":\"12 Angry Men\",\"year\":1957}") - .when() - .post("/movies") - .then() - .statusCode(202); - - try { - Thread.sleep(200L); - } catch (InterruptedException e) { - break; - } - } - }); - return executorService; - } - -} ----- - -NOTE: We modified the `MovieResourceTest` that was generated together with the project. This test class has a -subclass, `NativeMovieResourceIT`, that runs the same test against the native executable. -To run it, execute: - -include::includes/devtools/build-native.adoc[] - -=== Manual setup - -If we couldn't use Dev Services and wanted to start a Kafka broker and Apicurio Registry instance manually, we would define a xref:getting-started-testing.adoc#quarkus-test-resource[QuarkusTestResourceLifecycleManager]. 
[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.strimzi</groupId>
    <artifactId>strimzi-test-container</artifactId>
    <version>0.22.1</version>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
testImplementation("io.strimzi:strimzi-test-container:0.22.1") {
    exclude group: "org.apache.logging.log4j", module: "log4j-core"
}
----

[source,java]
----
package org.acme.kafka;

import java.util.HashMap;
import java.util.Map;

import org.testcontainers.containers.GenericContainer;

import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
import io.strimzi.StrimziKafkaContainer;

public class KafkaAndSchemaRegistryTestResource implements QuarkusTestResourceLifecycleManager {

    private final StrimziKafkaContainer kafka = new StrimziKafkaContainer();

    private GenericContainer<?> registry;

    @Override
    public Map<String, String> start() {
        kafka.start();
        registry = new GenericContainer<>("apicurio/apicurio-registry-mem:2.1.5.Final")
                .withExposedPorts(8080)
                .withEnv("QUARKUS_PROFILE", "prod");
        registry.start();
        Map<String, String> properties = new HashMap<>();
        properties.put("mp.messaging.connector.smallrye-kafka.apicurio.registry.url",
                "http://" + registry.getContainerIpAddress() + ":" + registry.getMappedPort(8080) + "/apis/registry/v2");
        properties.put("kafka.bootstrap.servers", kafka.getBootstrapServers());
        return properties;
    }

    @Override
    public void stop() {
        registry.stop();
        kafka.stop();
    }
}
----

[source,java]
----
@QuarkusTest
@QuarkusTestResource(KafkaAndSchemaRegistryTestResource.class)
public class MovieResourceTest {
    ...
}
----

== Avro code generation details

In this guide we used the Quarkus code generation mechanism to generate Java files
from Avro schemas.

Under the hood, the mechanism uses `org.apache.avro:avro-compiler`.
- -You can use the following configuration properties to alter how it works: - -- `avro.codegen.[avsc|avdl|avpr].imports` - a list of files or directories that should be compiled first thus making them -importable by subsequently compiled schemas. Note that imported files should not reference each other. All paths should be relative -to the `src/[main|test]/avro` directory. Passed as a comma-separated list. -- `avro.codegen.stringType` - the Java type to use for Avro strings. May be one of `CharSequence`, `String` or -`Utf8`. Defaults to `String` -- `avro.codegen.createOptionalGetters` - enables generating the `getOptional...` -methods that return an Optional of the requested type. Defaults to `false` -- `avro.codegen.enableDecimalLogicalType` - determines whether to use Java classes for decimal types, defaults to `false` -- `avro.codegen.createSetters` - determines whether to create setters for the fields of the record. -Defaults to `false` -- `avro.codegen.gettersReturnOptional` - enables generating `get...` methods that -return an Optional of the requested type. Defaults to `false` -- `avro.codegen.optionalGettersForNullableFieldsOnly`, works in conjunction with `gettersReturnOptional` option. -If it is set, `Optional` getters will be generated only for fields that are nullable. If the field is mandatory, -regular getter will be generated. Defaults to `false` - -== Further reading - -* link:https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/2.9/kafka/kafka.html[SmallRye Reactive Messaging Kafka] documentation -* link:https://quarkus.io/blog/kafka-avro/[How to Use Kafka, Schema Registry and Avro with Quarkus] - a blog post on which -the guide is based. 
It gives a good introduction to Avro and the concept of schema registry.

diff --git a/_versions/2.7/guides/kafka-streams.adoc b/_versions/2.7/guides/kafka-streams.adoc
deleted file mode 100644
index 84d9f09bab0..00000000000
--- a/_versions/2.7/guides/kafka-streams.adoc
+++ /dev/null
@@ -1,1252 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Using Apache Kafka Streams

include::./attributes.adoc[]

This guide demonstrates how your Quarkus application can utilize the Apache Kafka Streams API to implement stream processing applications based on Apache Kafka.

== Prerequisites

:prerequisites-time: 30 minutes
:prerequisites-docker-compose:
include::includes/devtools/prerequisites.adoc[]

It is recommended that you have read the {quickstarts-tree-url}/kafka-quickstart[Kafka quickstart] before.

[NOTE]
====
The Quarkus extension for Kafka Streams allows for very fast turnaround times during development by supporting the Quarkus Dev Mode (e.g. via `./mvnw compile quarkus:dev`).
After changing the code of your Kafka Streams topology, the application will automatically be reloaded when the next input message arrives.

A recommended development set-up is to have some producer that creates test messages on the processed topic(s) in fixed intervals (e.g. every second), and to observe the streaming application's output topic(s) using a tool such as `kafkacat`.
Using dev mode, you'll instantly see messages on the output topic(s) as produced by the latest version of your streaming application when saving.
- -For the best development experience, we recommend applying the following configuration settings to your Kafka broker: - -[source,properties,subs=attributes+] ----- -group.min.session.timeout.ms=250 ----- - -Also specify the following settings in your Quarkus `application.properties`: - -[source,properties,subs=attributes+] ----- -kafka-streams.consumer.session.timeout.ms=250 -kafka-streams.consumer.heartbeat.interval.ms=200 ----- - -Together, these settings will ensure that the application can very quickly reconnect to the broker after being restarted in dev mode. -==== - -== Architecture - -In this guide, we are going to generate (random) temperature values in one component (named `generator`). -These values are associated to given weather stations and are written in a Kafka topic (`temperature-values`). -Another topic (`weather-stations`) contains just the main data about the weather stations themselves (id and name). - -A second component (`aggregator`) reads from the two Kafka topics and processes them in a streaming pipeline: - -* the two topics are joined on weather station id -* per weather station the min, max and average temperature is determined -* this aggregated data is written out to a third topic (`temperatures-aggregated`) - -The data can be examined by inspecting the output topic. -By exposing a Kafka Streams https://kafka.apache.org/22/documentation/streams/developer-guide/interactive-queries.html[interactive query], -the latest result for each weather station can alternatively be obtained via a simple REST query. - -The overall architecture looks like so: - -image::kafka-streams-guide-architecture.png[alt=Architecture, align=center, width=90%] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. 
- -The solution is located in the `kafka-streams-quickstart` {quickstarts-tree-url}/kafka-streams-quickstart[directory]. - -== Creating the Producer Maven Project - -First, we need a new project with the temperature value producer. -Create a new project with the following command: - -:create-app-artifact-id: kafka-streams-quickstart-producer -:create-app-extensions: kafka -:create-app-post-command: mv kafka-streams-quickstart-producer producer -include::includes/devtools/create-app.adoc[] - -This command generates a Maven project, importing the Reactive Messaging and Kafka connector extensions. - -If you already have your Quarkus project configured, you can add the `smallrye-reactive-messaging-kafka` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: quarkus-smallrye-reactive-messaging-kafka -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-smallrye-reactive-messaging-kafka</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-smallrye-reactive-messaging-kafka") ----- - -=== The Temperature Value Producer - -Create the `producer/src/main/java/org/acme/kafka/streams/producer/generator/ValuesGenerator.java` file, -with the following content: - -[source, java] ----- -package org.acme.kafka.streams.producer.generator; - -import java.math.BigDecimal; -import java.math.RoundingMode; -import java.time.Duration; -import java.time.Instant; -import java.util.Arrays; -import java.util.Collections; -import java.util.List; -import java.util.Random; - -import javax.enterprise.context.ApplicationScoped; - -import io.smallrye.mutiny.Multi; -import io.smallrye.reactive.messaging.kafka.Record; -import org.eclipse.microprofile.reactive.messaging.Outgoing;
-import org.jboss.logging.Logger; - -/** - * A bean producing random temperature data every second. - * The values are written to a Kafka topic (temperature-values). - * Another topic contains the name of weather stations (weather-stations). - * The Kafka configuration is specified in the application configuration. - */ -@ApplicationScoped -public class ValuesGenerator { - - private static final Logger LOG = Logger.getLogger(ValuesGenerator.class); - - private Random random = new Random(); - - private List<WeatherStation> stations = List.of( - new WeatherStation(1, "Hamburg", 13), - new WeatherStation(2, "Snowdonia", 5), - new WeatherStation(3, "Boston", 11), - new WeatherStation(4, "Tokio", 16), - new WeatherStation(5, "Cusco", 12), - new WeatherStation(6, "Svalbard", -7), - new WeatherStation(7, "Porthsmouth", 11), - new WeatherStation(8, "Oslo", 7), - new WeatherStation(9, "Marrakesh", 20)); - - @Outgoing("temperature-values") // <1> - public Multi<Record<Integer, String>> generate() { - return Multi.createFrom().ticks().every(Duration.ofMillis(500)) // <2> - .onOverflow().drop() - .map(tick -> { - WeatherStation station = stations.get(random.nextInt(stations.size())); - double temperature = BigDecimal.valueOf(random.nextGaussian() * 15 + station.averageTemperature) - .setScale(1, RoundingMode.HALF_UP) - .doubleValue(); - - LOG.infov("station: {0}, temperature: {1}", station.name, temperature); - return Record.of(station.id, Instant.now() + ";" + temperature); - }); - } - - @Outgoing("weather-stations") // <3> - public Multi<Record<Integer, String>> weatherStations() { - return Multi.createFrom().items(stations.stream() - .map(s -> Record.of( - s.id, - "{ \"id\" : " + s.id + - ", \"name\" : \"" + s.name + "\" }")) - ); - } - - private static class WeatherStation { - - int id; - String name; - int averageTemperature; - - public WeatherStation(int id, String name, int averageTemperature) { - this.id = id; - this.name = name; - this.averageTemperature = averageTemperature; - } - } -} ----- -<1> Instruct Reactive Messaging to
dispatch the items from the returned `Multi` to `temperature-values`. -<2> The method returns a Mutiny _stream_ (`Multi`) emitting a random temperature value every 0.5 seconds. -<3> Instruct Reactive Messaging to dispatch the items from the returned `Multi` (static list of weather stations) to `weather-stations`. - -The two methods each return a _reactive stream_ whose items are sent to the channels named `temperature-values` and `weather-stations`, respectively. - -=== Topic Configuration - -The two channels are mapped to Kafka topics using the Quarkus configuration file `application.properties`. -For that, add the following to the file `producer/src/main/resources/application.properties`: - -[source,properties] ----- -# Configure the Kafka broker location -kafka.bootstrap.servers=localhost:9092 - -mp.messaging.outgoing.temperature-values.connector=smallrye-kafka -mp.messaging.outgoing.temperature-values.key.serializer=org.apache.kafka.common.serialization.IntegerSerializer -mp.messaging.outgoing.temperature-values.value.serializer=org.apache.kafka.common.serialization.StringSerializer - -mp.messaging.outgoing.weather-stations.connector=smallrye-kafka -mp.messaging.outgoing.weather-stations.key.serializer=org.apache.kafka.common.serialization.IntegerSerializer -mp.messaging.outgoing.weather-stations.value.serializer=org.apache.kafka.common.serialization.StringSerializer ----- - -This configures the Kafka bootstrap server, the two topics and the corresponding serializers. -More details about the different configuration options are available in the https://kafka.apache.org/documentation/#producerconfigs[Producer configuration] and https://kafka.apache.org/documentation/#consumerconfigs[Consumer configuration] sections of the Kafka documentation. - -== Creating the Aggregator Maven Project - -With the producer application in place, it's time to implement the actual aggregator application, -which will run the Kafka Streams pipeline.
-Create another project like so: - -:create-app-artifact-id: kafka-streams-quickstart-aggregator -:create-app-extensions: kafka-streams,resteasy-jackson -:create-app-post-command: mv kafka-streams-quickstart-aggregator aggregator -include::includes/devtools/create-app.adoc[] - -This creates the `aggregator` project with the Quarkus extension for Kafka Streams and with RESTEasy support for Jackson. - -If you already have your Quarkus project configured, you can add the `kafka-streams` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: kafka-streams -include::includes/devtools/extension-add.adoc[] - -This will add the following to your `pom.xml`: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-kafka-streams</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-kafka-streams") ----- - -=== The Pipeline Implementation - -Let's begin the implementation of the stream processing application by creating -a few value objects for representing temperature measurements, weather stations and for keeping track of aggregated values. - -First, create the file `aggregator/src/main/java/org/acme/kafka/streams/aggregator/model/WeatherStation.java`, -representing a weather station, with the following content: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.model; - -import io.quarkus.runtime.annotations.RegisterForReflection; - -@RegisterForReflection // <1> -public class WeatherStation { - - public int id; - public String name; -} ----- -<1> The `@RegisterForReflection` annotation instructs Quarkus to keep the class and its members during the native compilation.
More details about the `@RegisterForReflection` annotation can be found on the xref:writing-native-applications-tips.adoc#registerForReflection[native application tips] page. - -Then the file `aggregator/src/main/java/org/acme/kafka/streams/aggregator/model/TemperatureMeasurement.java`, -representing temperature measurements for a given station: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.model; - -import java.time.Instant; - -public class TemperatureMeasurement { - - public int stationId; - public String stationName; - public Instant timestamp; - public double value; - - public TemperatureMeasurement(int stationId, String stationName, Instant timestamp, - double value) { - this.stationId = stationId; - this.stationName = stationName; - this.timestamp = timestamp; - this.value = value; - } -} ----- - -And finally `aggregator/src/main/java/org/acme/kafka/streams/aggregator/model/Aggregation.java`, -which will be used to keep track of the aggregated values while the events are processed in the streaming pipeline: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.model; - -import java.math.BigDecimal; -import java.math.RoundingMode; - -import io.quarkus.runtime.annotations.RegisterForReflection; - -@RegisterForReflection -public class Aggregation { - - public int stationId; - public String stationName; - public double min = Double.MAX_VALUE; - public double max = -Double.MAX_VALUE; - public int count; - public double sum; - public double avg; - - public Aggregation updateFrom(TemperatureMeasurement measurement) { - stationId = measurement.stationId; - stationName = measurement.stationName; - - count++; - sum += measurement.value; - avg = BigDecimal.valueOf(sum / count) - .setScale(1, RoundingMode.HALF_UP).doubleValue(); - - min = Math.min(min, measurement.value); - max = Math.max(max, measurement.value); - - return this; - } -} ----- - -Next, let's create the actual streaming query implementation itself in the
`aggregator/src/main/java/org/acme/kafka/streams/aggregator/streams/TopologyProducer.java` file. -All we need to do for that is to declare a CDI producer method which returns the Kafka Streams `Topology`; -the Quarkus extension will take care of configuring, starting and stopping the actual Kafka Streams engine. - -[source, java] ----- -package org.acme.kafka.streams.aggregator.streams; - -import java.time.Instant; - -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.inject.Produces; - -import org.acme.kafka.streams.aggregator.model.Aggregation; -import org.acme.kafka.streams.aggregator.model.TemperatureMeasurement; -import org.acme.kafka.streams.aggregator.model.WeatherStation; -import org.apache.kafka.common.serialization.Serdes; -import org.apache.kafka.streams.StreamsBuilder; -import org.apache.kafka.streams.Topology; -import org.apache.kafka.streams.kstream.Consumed; -import org.apache.kafka.streams.kstream.GlobalKTable; -import org.apache.kafka.streams.kstream.Materialized; -import org.apache.kafka.streams.kstream.Produced; -import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier; -import org.apache.kafka.streams.state.Stores; - -import io.quarkus.kafka.client.serialization.ObjectMapperSerde; - -@ApplicationScoped -public class TopologyProducer { - - static final String WEATHER_STATIONS_STORE = "weather-stations-store"; - - private static final String WEATHER_STATIONS_TOPIC = "weather-stations"; - private static final String TEMPERATURE_VALUES_TOPIC = "temperature-values"; - private static final String TEMPERATURES_AGGREGATED_TOPIC = "temperatures-aggregated"; - - @Produces - public Topology buildTopology() { - StreamsBuilder builder = new StreamsBuilder(); - - ObjectMapperSerde<WeatherStation> weatherStationSerde = new ObjectMapperSerde<>( - WeatherStation.class); - ObjectMapperSerde<Aggregation> aggregationSerde = new ObjectMapperSerde<>(Aggregation.class); - - KeyValueBytesStoreSupplier storeSupplier = Stores.persistentKeyValueStore(
WEATHER_STATIONS_STORE); - - GlobalKTable<Integer, WeatherStation> stations = builder.globalTable( // <1> - WEATHER_STATIONS_TOPIC, - Consumed.with(Serdes.Integer(), weatherStationSerde)); - - builder.stream( // <2> - TEMPERATURE_VALUES_TOPIC, - Consumed.with(Serdes.Integer(), Serdes.String()) - ) - .join( // <3> - stations, - (stationId, timestampAndValue) -> stationId, - (timestampAndValue, station) -> { - String[] parts = timestampAndValue.split(";"); - return new TemperatureMeasurement(station.id, station.name, - Instant.parse(parts[0]), Double.valueOf(parts[1])); - } - ) - .groupByKey() // <4> - .aggregate( // <5> - Aggregation::new, - (stationId, value, aggregation) -> aggregation.updateFrom(value), - Materialized.<Integer, Aggregation> as(storeSupplier) - .withKeySerde(Serdes.Integer()) - .withValueSerde(aggregationSerde) - ) - .toStream() - .to( // <6> - TEMPERATURES_AGGREGATED_TOPIC, - Produced.with(Serdes.Integer(), aggregationSerde) - ); - - return builder.build(); - } -} ----- -<1> The `weather-stations` table is read into a `GlobalKTable`, representing the current state of each weather station -<2> The `temperature-values` topic is read into a `KStream`; whenever a new message arrives on this topic, the pipeline will process this measurement -<3> The message from the `temperature-values` topic is joined with the corresponding weather station, using the topic's key (weather station id); the join result contains the data from the measurement and associated weather station message -<4> The values are grouped by message key (the weather station id) -<5> Within each group, all the measurements of that station are aggregated by keeping track of minimum and maximum values and calculating the average value of all measurements of that station (see the `Aggregation` type) -<6> The results of the pipeline are written out to the `temperatures-aggregated` topic - -The Kafka Streams extension is configured via the Quarkus configuration file `application.properties`.
-Create the file `aggregator/src/main/resources/application.properties` with the following contents: - -[source] ----- -quarkus.kafka-streams.bootstrap-servers=localhost:9092 -quarkus.kafka-streams.application-server=${hostname}:8080 -quarkus.kafka-streams.topics=weather-stations,temperature-values - -# pass-through options -kafka-streams.cache.max.bytes.buffering=10240 -kafka-streams.commit.interval.ms=1000 -kafka-streams.metadata.max.age.ms=500 -kafka-streams.auto.offset.reset=earliest -kafka-streams.metrics.recording.level=DEBUG ----- - -The options with the `quarkus.kafka-streams` prefix can be changed dynamically at application startup, -e.g. via environment variables or system properties. -`bootstrap-servers` and `application-server` are mapped to the Kafka Streams properties `bootstrap.servers` and `application.server`, respectively. -`topics` is specific to Quarkus: the application will wait for all the given topics to exist before launching the Kafka Streams engine. -This is done to gracefully await the creation of topics that don't yet exist at application startup time. - -TIP: Alternatively, you can use `kafka.bootstrap.servers` instead of `quarkus.kafka-streams.bootstrap-servers` as you did in the _producer_ project above. - -All the properties within the `kafka-streams` namespace are passed through as-is to the Kafka Streams engine. -Changing their values requires a rebuild of the application. - -== Building and Running the Applications - -We can now build the `producer` and `aggregator` applications: - -[source,bash,subs=attributes+] ----- -./mvnw clean package -f producer/pom.xml -./mvnw clean package -f aggregator/pom.xml ----- - -Instead of running them directly on the host machine using the Quarkus dev mode, -we're going to package them into container images and launch them via Docker Compose. -This is done in order to demonstrate scaling the `aggregator` application to multiple nodes later on.
- -The `Dockerfile` created by Quarkus by default needs one adjustment for the `aggregator` application in order to run the Kafka Streams pipeline. -To do so, edit the file `aggregator/src/main/docker/Dockerfile.jvm` and replace the line `FROM fabric8/java-alpine-openjdk8-jre` with `FROM fabric8/java-centos-openjdk8-jdk`. - -Next create a Docker Compose file (`docker-compose.yaml`) for spinning up the two applications as well as Apache Kafka and ZooKeeper like so: - -[source, yaml] ----- -version: '3.5' - -services: - zookeeper: - image: strimzi/kafka:0.19.0-kafka-2.5.0 - command: [ - "sh", "-c", - "bin/zookeeper-server-start.sh config/zookeeper.properties" - ] - ports: - - "2181:2181" - environment: - LOG_DIR: /tmp/logs - networks: - - kafkastreams-network - kafka: - image: strimzi/kafka:0.19.0-kafka-2.5.0 - command: [ - "sh", "-c", - "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT} --override num.partitions=$${KAFKA_NUM_PARTITIONS}" - ] - depends_on: - - zookeeper - ports: - - "9092:9092" - environment: - LOG_DIR: "/tmp/logs" - KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 - KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092 - KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 - KAFKA_NUM_PARTITIONS: 3 - networks: - - kafkastreams-network - - producer: - image: quarkus-quickstarts/kafka-streams-producer:1.0 - build: - context: producer - dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm} - environment: - KAFKA_BOOTSTRAP_SERVERS: kafka:9092 - networks: - - kafkastreams-network - - aggregator: - image: quarkus-quickstarts/kafka-streams-aggregator:1.0 - build: - context: aggregator - dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm} - environment: - QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS: kafka:9092 - networks: - - kafkastreams-network - -networks: - kafkastreams-network: - name: ks ----- - -To 
launch all the containers, building the `producer` and `aggregator` container images, -run `docker-compose up --build`. - -TIP: Instead of `QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS`, you can use `KAFKA_BOOTSTRAP_SERVERS`. - -You should see log statements from the `producer` application about messages being sent to the "temperature-values" topic. - -Now run an instance of the _debezium/tooling_ image, attaching it to the same network as all the other containers. -This image provides several useful tools such as _kafkacat_ and _httpie_: - -[source,bash,subs=attributes+] ----- -docker run --tty --rm -i --network ks debezium/tooling:1.1 ----- - -Within the tooling container, run _kafkacat_ to examine the results of the streaming pipeline: - -[source,subs=attributes+] ----- -kafkacat -b kafka:9092 -C -o beginning -q -t temperatures-aggregated - -{"avg":34.7,"count":4,"max":49.4,"min":16.8,"stationId":9,"stationName":"Marrakesh","sum":138.8} -{"avg":15.7,"count":1,"max":15.7,"min":15.7,"stationId":2,"stationName":"Snowdonia","sum":15.7} -{"avg":12.8,"count":7,"max":25.5,"min":-13.8,"stationId":7,"stationName":"Porthsmouth","sum":89.7} -... ----- - -You should see new values arrive as the producer continues to emit temperature measurements, -each value on the outbound topic showing the minimum, maximum and average temperature values of the represented weather station. - -== Interactive Queries - -Subscribing to the `temperatures-aggregated` topic is a great way to react to any new temperature values. -It's a bit wasteful though if you're just interested in the latest aggregated value for a given weather station. -This is where Kafka Streams interactive queries shine: -they let you directly query the underlying state store of the pipeline for the value associated with a given key. -By exposing a simple REST endpoint which queries the state store, -the latest aggregation result can be retrieved without having to subscribe to any Kafka topic.
- -Let's begin by creating a new class `InteractiveQueries` in the file `aggregator/src/main/java/org/acme/kafka/streams/aggregator/streams/InteractiveQueries.java`, -with a method which obtains the current state for a given key: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.streams; - -import javax.enterprise.context.ApplicationScoped; -import javax.inject.Inject; - -import org.acme.kafka.streams.aggregator.model.Aggregation; -import org.acme.kafka.streams.aggregator.model.WeatherStationData; -import org.apache.kafka.streams.KafkaStreams; -import org.apache.kafka.streams.errors.InvalidStateStoreException; -import org.apache.kafka.streams.state.QueryableStoreTypes; -import org.apache.kafka.streams.state.ReadOnlyKeyValueStore; - -@ApplicationScoped -public class InteractiveQueries { - - @Inject - KafkaStreams streams; - - public GetWeatherStationDataResult getWeatherStationData(int id) { - Aggregation result = getWeatherStationStore().get(id); - - if (result != null) { - return GetWeatherStationDataResult.found(WeatherStationData.from(result)); // <1> - } - else { - return GetWeatherStationDataResult.notFound(); // <2> - } - } - - private ReadOnlyKeyValueStore<Integer, Aggregation> getWeatherStationStore() { - while (true) { - try { - return streams.store(TopologyProducer.WEATHER_STATIONS_STORE, QueryableStoreTypes.keyValueStore()); - } catch (InvalidStateStoreException e) { - // ignore, store not ready yet - } - } - } -} ----- -<1> A value for the given station id was found, so that value will be returned -<2> No value was found, either because a non-existing station was queried or no measurement exists yet for the given station - -Also create the method's return type in the file `aggregator/src/main/java/org/acme/kafka/streams/aggregator/streams/GetWeatherStationDataResult.java`: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.streams; - -import java.util.Optional; -import java.util.OptionalInt; - -import
 org.acme.kafka.streams.aggregator.model.WeatherStationData; - -public class GetWeatherStationDataResult { - - private static GetWeatherStationDataResult NOT_FOUND = - new GetWeatherStationDataResult(null); - - private final WeatherStationData result; - - private GetWeatherStationDataResult(WeatherStationData result) { - this.result = result; - } - - public static GetWeatherStationDataResult found(WeatherStationData data) { - return new GetWeatherStationDataResult(data); - } - - public static GetWeatherStationDataResult notFound() { - return NOT_FOUND; - } - - public Optional<WeatherStationData> getResult() { - return Optional.ofNullable(result); - } -} ----- - -Also create `aggregator/src/main/java/org/acme/kafka/streams/aggregator/model/WeatherStationData.java`, -which represents the actual aggregation result for a weather station: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.model; - -import io.quarkus.runtime.annotations.RegisterForReflection; - -@RegisterForReflection -public class WeatherStationData { - - public int stationId; - public String stationName; - public double min = Double.MAX_VALUE; - public double max = -Double.MAX_VALUE; - public int count; - public double avg; - - private WeatherStationData(int stationId, String stationName, double min, double max, - int count, double avg) { - this.stationId = stationId; - this.stationName = stationName; - this.min = min; - this.max = max; - this.count = count; - this.avg = avg; - } - - public static WeatherStationData from(Aggregation aggregation) { - return new WeatherStationData( - aggregation.stationId, - aggregation.stationName, - aggregation.min, - aggregation.max, - aggregation.count, - aggregation.avg); - } -} ----- - -We can now add a simple REST endpoint (`aggregator/src/main/java/org/acme/kafka/streams/aggregator/rest/WeatherStationEndpoint.java`), -which invokes `getWeatherStationData()` and returns the data to the client: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.rest; -
-import java.net.URI; -import java.net.URISyntaxException; -import java.util.List; - -import javax.enterprise.context.ApplicationScoped; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.PathParam; -import javax.ws.rs.core.MediaType; -import javax.ws.rs.core.Response; -import javax.ws.rs.core.Response.Status; - -import org.acme.kafka.streams.aggregator.streams.GetWeatherStationDataResult; -import org.acme.kafka.streams.aggregator.streams.InteractiveQueries; - -@ApplicationScoped -@Path("/weather-stations") -public class WeatherStationEndpoint { - - @Inject - InteractiveQueries interactiveQueries; - - @GET - @Path("/data/{id}") - public Response getWeatherStationData(@PathParam("id") int id) { - GetWeatherStationDataResult result = interactiveQueries.getWeatherStationData(id); - - if (result.getResult().isPresent()) { // <1> - return Response.ok(result.getResult().get()).build(); - } - else { - return Response.status(Status.NOT_FOUND.getStatusCode(), - "No data found for weather station " + id).build(); - } - } -} ----- -<1> Depending on whether a value was obtained, either return that value or a 404 response - -With this code in place, it's time to rebuild the application and the `aggregator` service in Docker Compose: - -[source,bash,subs=attributes+] ----- -./mvnw clean package -f aggregator/pom.xml -docker-compose stop aggregator -docker-compose up --build -d ----- - -This will rebuild the `aggregator` container and restart its service. -Once that's done, you can invoke the service's REST API to obtain the temperature data for one of the existing stations.
-To do so, you can use `httpie` in the tooling container launched before: - -[source, subs=attributes+] ----- -http aggregator:8080/weather-stations/data/1 - -HTTP/1.1 200 OK -Connection: keep-alive -Content-Length: 85 -Content-Type: application/json -Date: Tue, 18 Jun 2019 19:29:16 GMT - -{ - "avg": 12.9, - "count": 146, - "max": 41.0, - "min": -25.6, - "stationId": 1, - "stationName": "Hamburg" -} ----- - -== Scaling Out - -A very interesting trait of Kafka Streams applications is that they can be scaled out, -i.e. the load and state can be distributed amongst multiple application instances running the same pipeline. -Each node will then contain a subset of the aggregation results, -but Kafka Streams provides you with https://kafka.apache.org/22/documentation/streams/developer-guide/interactive-queries.html#querying-remote-state-stores-for-the-entire-app[an API] to find out which node is hosting a given key. -The application can then either fetch the data directly from the other instance, or simply point the client to the location of that other node.
- -Launching multiple instances of the `aggregator` application will make the overall architecture look like this: - -image::kafka-streams-guide-architecture-distributed.png[alt=Architecture with multiple aggregator nodes, align=center, width=90%] - -The `InteractiveQueries` class must be adjusted slightly for this distributed architecture: - -[source, java] ----- -public GetWeatherStationDataResult getWeatherStationData(int id) { - StreamsMetadata metadata = streams.metadataForKey( // <1> - TopologyProducer.WEATHER_STATIONS_STORE, - id, - Serdes.Integer().serializer() - ); - - if (metadata == null || metadata == StreamsMetadata.NOT_AVAILABLE) { - LOG.warn("Found no metadata for key {}", id); - return GetWeatherStationDataResult.notFound(); - } - else if (metadata.host().equals(host)) { // <2> - LOG.info("Found data for key {} locally", id); - Aggregation result = getWeatherStationStore().get(id); - - if (result != null) { - return GetWeatherStationDataResult.found(WeatherStationData.from(result)); - } - else { - return GetWeatherStationDataResult.notFound(); - } - } - else { // <3> - LOG.info( - "Found data for key {} on remote host {}:{}", - id, - metadata.host(), - metadata.port() - ); - return GetWeatherStationDataResult.foundRemotely(metadata.host(), metadata.port()); - } -} - -public List<PipelineMetadata> getMetaData() { // <4> - return streams.allMetadataForStore(TopologyProducer.WEATHER_STATIONS_STORE) - .stream() - .map(m -> new PipelineMetadata( - m.hostInfo().host() + ":" + m.hostInfo().port(), - m.topicPartitions() - .stream() - .map(TopicPartition::toString) - .collect(Collectors.toSet())) - ) - .collect(Collectors.toList()); -} ----- -<1> The streams metadata for the given weather station id is obtained -<2> The given key (weather station id) is maintained by the local application node, i.e.
it can answer the query itself -<3> The given key is maintained by another application node; in this case the information about that node (host and port) will be returned -<4> The `getMetaData()` method is added to provide callers with a list of all the nodes in the application cluster. - -The `GetWeatherStationDataResult` type must be adjusted accordingly: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.streams; - -import java.util.Optional; -import java.util.OptionalInt; - -import org.acme.kafka.streams.aggregator.model.WeatherStationData; - -public class GetWeatherStationDataResult { - - private static GetWeatherStationDataResult NOT_FOUND = - new GetWeatherStationDataResult(null, null, null); - - private final WeatherStationData result; - private final String host; - private final Integer port; - - private GetWeatherStationDataResult(WeatherStationData result, String host, - Integer port) { - this.result = result; - this.host = host; - this.port = port; - } - - public static GetWeatherStationDataResult found(WeatherStationData data) { - return new GetWeatherStationDataResult(data, null, null); - } - - public static GetWeatherStationDataResult foundRemotely(String host, int port) { - return new GetWeatherStationDataResult(null, host, port); - } - - public static GetWeatherStationDataResult notFound() { - return NOT_FOUND; - } - - public Optional<WeatherStationData> getResult() { - return Optional.ofNullable(result); - } - - public Optional<String> getHost() { - return Optional.ofNullable(host); - } - - public OptionalInt getPort() { - return port != null ?
OptionalInt.of(port) : OptionalInt.empty(); - } -} ----- - -Also the return type for `getMetaData()` must be defined -(`aggregator/src/main/java/org/acme/kafka/streams/aggregator/streams/PipelineMetadata.java`): - -[source, java] ----- -package org.acme.kafka.streams.aggregator.streams; - -import java.util.Set; - -public class PipelineMetadata { - - public String host; - public Set<String> partitions; - - public PipelineMetadata(String host, Set<String> partitions) { - this.host = host; - this.partitions = partitions; - } -} ----- - -Lastly, the REST endpoint class must be updated: - -[source, java] ----- -package org.acme.kafka.streams.aggregator.rest; - -import java.net.URI; -import java.net.URISyntaxException; -import java.util.List; - -import javax.enterprise.context.ApplicationScoped; -import javax.inject.Inject; -import javax.ws.rs.Consumes; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.PathParam; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; -import javax.ws.rs.core.Response; -import javax.ws.rs.core.Response.Status; - -import org.acme.kafka.streams.aggregator.streams.GetWeatherStationDataResult; -import org.acme.kafka.streams.aggregator.streams.InteractiveQueries; -import org.acme.kafka.streams.aggregator.streams.PipelineMetadata; - -@ApplicationScoped -@Path("/weather-stations") -public class WeatherStationEndpoint { - - @Inject - InteractiveQueries interactiveQueries; - - @GET - @Path("/data/{id}") - @Consumes(MediaType.APPLICATION_JSON) - @Produces(MediaType.APPLICATION_JSON) - public Response getWeatherStationData(@PathParam("id") int id) { - GetWeatherStationDataResult result = interactiveQueries.getWeatherStationData(id); - - if (result.getResult().isPresent()) { // <1> - return Response.ok(result.getResult().get()).build(); - } - else if (result.getHost().isPresent()) { // <2> - URI otherUri = getOtherUri(result.getHost().get(), result.getPort().getAsInt(), - id); - return Response.seeOther(otherUri).build(); - } - else
{ // <3> - return Response.status(Status.NOT_FOUND.getStatusCode(), - "No data found for weather station " + id).build(); - } - } - - @GET - @Path("/meta-data") - @Produces(MediaType.APPLICATION_JSON) - public List<PipelineMetadata> getMetaData() { // <4> - return interactiveQueries.getMetaData(); - } - - private URI getOtherUri(String host, int port, int id) { - try { - return new URI("http://" + host + ":" + port + "/weather-stations/data/" + id); - } - catch (URISyntaxException e) { - throw new RuntimeException(e); - } - } -} ----- -<1> The data was found locally, so return it -<2> The data is maintained by another node, so reply with a redirect (HTTP status code 303) pointing to that node -<3> No data was found for the given weather station id -<4> Exposes information about all the hosts forming the application cluster - -Now stop the `aggregator` service again and rebuild it. -Then let's spin up three instances of it: - -[source,bash,subs=attributes+] ----- -./mvnw clean package -f aggregator/pom.xml -docker-compose stop aggregator -docker-compose up --build -d --scale aggregator=3 ----- - -When invoking the REST API on any of the three instances, it might either be -that the aggregation for the requested weather station id is stored locally on the node receiving the query, -or it could be stored on one of the other two nodes. - -As the load balancer of Docker Compose will distribute requests to the `aggregator` service in a round-robin fashion, -we'll invoke the actual nodes directly.

The application exposes information about all the host names via REST:

[source, subs=attributes+]
----
http aggregator:8080/weather-stations/meta-data

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 202
Content-Type: application/json
Date: Tue, 18 Jun 2019 20:00:23 GMT

[
    {
        "host": "2af13fe516a9:8080",
        "partitions": [
            "temperature-values-2"
        ]
    },
    {
        "host": "32cc8309611b:8080",
        "partitions": [
            "temperature-values-1"
        ]
    },
    {
        "host": "1eb39af8d587:8080",
        "partitions": [
            "temperature-values-0"
        ]
    }
]
----

Retrieve the data from one of the three hosts shown in the response
(your actual host names will differ):

[source, subs=attributes+]
----
http 2af13fe516a9:8080/weather-stations/data/1
----

If that node holds the data for key "1", you'll get a response like this:

[source]
----
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 74
Content-Type: application/json
Date: Tue, 11 Jun 2019 19:16:31 GMT

{
    "avg": 11.9,
    "count": 259,
    "max": 50.0,
    "min": -30.1,
    "stationId": 1,
    "stationName": "Hamburg"
}
----

Otherwise, the service will send a redirect:

[source]
----
HTTP/1.1 303 See Other
Connection: keep-alive
Content-Length: 0
Date: Tue, 18 Jun 2019 20:01:03 GMT
Location: http://1eb39af8d587:8080/weather-stations/data/1
----

You can also have _httpie_ automatically follow the redirect by passing the `--follow` option:

[source,bash]
----
http --follow 2af13fe516a9:8080/weather-stations/data/1
----

== Running Natively

The Quarkus extension for Kafka Streams enables the execution of stream processing applications
natively via GraalVM without further configuration.

To run both the `producer` and `aggregator` applications in native mode,
the Maven builds can be executed using `-Dnative`:

[source,bash]
----
./mvnw clean package -f producer/pom.xml -Dnative -Dnative-image.container-runtime=docker
./mvnw clean package -f aggregator/pom.xml -Dnative -Dnative-image.container-runtime=docker
----

Now create an environment variable named `QUARKUS_MODE` with its value set to "native":

[source,bash]
----
export QUARKUS_MODE=native
----

This is used by the Docker Compose file to select the correct `Dockerfile` when building the `producer` and `aggregator` images.
The Kafka Streams application can work with less than 50 MB RSS in native mode.
To do so, add the `-Xmx` option to the program invocation in `aggregator/src/main/docker/Dockerfile.native`:

[source,dockerfile]
----
CMD ["./application", "-Dquarkus.http.host=0.0.0.0", "-Xmx32m"]
----

Now start Docker Compose as described above
(don't forget to rebuild the container images).

== Kafka Streams Health Checks

If you are using the `quarkus-smallrye-health` extension, `quarkus-kafka-streams` will automatically add:

* a readiness health check to validate that all topics declared in the `quarkus.kafka-streams.topics` property are created,
* a liveness health check based on the Kafka Streams state.

So when you access the `/q/health` endpoint of your application, you will have information about the state of the Kafka Streams instance and the available and/or missing topics.

This is an example of when the status is `DOWN`:
[source, subs=attributes+]
----
curl -i http://aggregator:8080/q/health

HTTP/1.1 503 Service Unavailable
content-type: application/json; charset=UTF-8
content-length: 454

{
    "status": "DOWN",
    "checks": [
        {
            "name": "Kafka Streams state health check", <1>
            "status": "DOWN",
            "data": {
                "state": "CREATED"
            }
        },
        {
            "name": "Kafka Streams topics health check", <2>
            "status": "DOWN",
            "data": {
                "available_topics": "weather-stations,temperature-values",
                "missing_topics": "hygrometry-values"
            }
        }
    ]
}
----
<1> Liveness health check. Also available at the `/q/health/live` endpoint.
<2> Readiness health check. Also available at the `/q/health/ready` endpoint.

As you can see, the status is `DOWN` as soon as one of the `quarkus.kafka-streams.topics` is missing or the Kafka Streams `state` is not `RUNNING`.

If no topics are available, the `available_topics` key will not be present in the `data` field of the `Kafka Streams topics health check`.
Similarly, if no topics are missing, the `missing_topics` key will not be present in the `data` field of the `Kafka Streams topics health check`.

You can, of course, disable the health checks of the `quarkus-kafka-streams` extension by setting the `quarkus.kafka-streams.health.enabled` property to `false` in your `application.properties`.

You can also create your liveness and readiness probes based on the respective endpoints `/q/health/live` and `/q/health/ready`.

=== Liveness health check

Here is an example of the liveness check:

[source]
----
curl -i http://aggregator:8080/q/health/live

HTTP/1.1 503 Service Unavailable
content-type: application/json; charset=UTF-8
content-length: 225

{
    "status": "DOWN",
    "checks": [
        {
            "name": "Kafka Streams state health check",
            "status": "DOWN",
            "data": {
                "state": "CREATED"
            }
        }
    ]
}
----
The `state` is coming from the `KafkaStreams.State` enum.
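Putting the properties mentioned above together, the health-check behavior can be tuned in `application.properties`. This is a minimal sketch using the topic names from this guide's example; adjust them to your own topology:

[source, properties]
----
# Topics whose existence the readiness check verifies
quarkus.kafka-streams.topics=weather-stations,temperature-values

# Set to false to disable the Kafka Streams liveness and readiness checks
quarkus.kafka-streams.health.enabled=true
----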
- -=== Readiness health check - -Here is an example of the readiness check: - -[source] ----- -curl -i http://aggregator:8080/q/health/ready - -HTTP/1.1 503 Service Unavailable -content-type: application/json; charset=UTF-8 -content-length: 265 - -{ - "status": "DOWN", - "checks": [ - { - "name": "Kafka Streams topics health check", - "status": "DOWN", - "data": { - "missing_topics": "weather-stations,temperature-values" - } - } - ] -} ----- - -== Going Further - -This guide has shown how you can build stream processing applications using Quarkus and the Kafka Streams APIs, -both in JVM and native modes. -For running your KStreams application in production, you could also add health checks and metrics for the data pipeline. -Refer to the Quarkus guides on xref:micrometer.adoc[Micrometer], xref:smallrye-metrics.adoc[SmallRye Metrics], and xref:smallrye-health.adoc[SmallRye Health] to learn more. - -== Configuration Reference - -include::{generated-dir}/config/quarkus-kafka-streams.adoc[opts=optional, leveloffset=+1] diff --git a/_versions/2.7/guides/kafka.adoc b/_versions/2.7/guides/kafka.adoc deleted file mode 100644 index 0978fd0fbae..00000000000 --- a/_versions/2.7/guides/kafka.adoc +++ /dev/null @@ -1,2295 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Apache Kafka Reference Guide - -include::./attributes.adoc[] - -:numbered: -:sectnums: -:sectnumlevels: 4 -:toc: - -This reference guide demonstrates how your Quarkus application can utilize SmallRye Reactive Messaging to interact with Apache Kafka. - -== Introduction - -https://kafka.apache.org[Apache Kafka] is a popular open-source distributed event streaming platform. -It is used commonly for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. 

Similar to a message queue or an enterprise messaging platform, it lets you:

- *publish* (write) and *subscribe* to (read) streams of events, called _records_.
- *store* streams of records durably and reliably inside _topics_.
- *process* streams of records as they occur or retrospectively.

All this functionality is provided in a distributed, highly scalable, elastic, fault-tolerant, and secure manner.

== Quarkus Extension for Apache Kafka

Quarkus provides support for Apache Kafka through the https://smallrye.io/smallrye-reactive-messaging/[SmallRye Reactive Messaging] framework.
Based on the Eclipse MicroProfile Reactive Messaging specification 2.0, it proposes a flexible programming model bridging CDI and event-driven programming.

[NOTE]
====
This guide provides an in-depth look at Apache Kafka and the SmallRye Reactive Messaging framework.
For a quick start, take a look at xref:kafka-reactive-getting-started.adoc[Getting Started to SmallRye Reactive Messaging with Apache Kafka].
====

You can add the `smallrye-reactive-messaging-kafka` extension to your project by running the following command in your project base directory:

:add-extension-extensions: smallrye-reactive-messaging-kafka
include::includes/devtools/extension-add.adoc[]

This will add the following to your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-reactive-messaging-kafka</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-smallrye-reactive-messaging-kafka")
----

[NOTE]
====
The extension includes `kafka-clients` version 3.1.0 as a transitive dependency and is compatible with Kafka brokers version 2.x.
====

== Configuring the SmallRye Kafka Connector

Because the SmallRye Reactive Messaging framework supports different messaging backends like Apache Kafka, AMQP, Apache Camel, JMS, MQTT, etc., it employs a generic vocabulary:

- Applications send and receive *messages*. A message wraps a _payload_ and can be extended with some _metadata_. With the Kafka connector, a _message_ corresponds to a Kafka _record_.
- Messages transit on *channels*. Application components connect to channels to publish and consume messages. The Kafka connector maps _channels_ to Kafka _topics_.
- Channels are connected to message backends using *connectors*. Connectors are configured to map incoming messages to a specific channel (consumed by the application) and collect outgoing messages sent to a specific channel. Each connector is dedicated to a specific messaging technology. For example, the connector dealing with Kafka is named `smallrye-kafka`.

A minimal configuration for the Kafka connector with an incoming channel looks like the following:

[source, properties]
----
%prod.kafka.bootstrap.servers=kafka:9092 <1>
mp.messaging.incoming.prices.connector=smallrye-kafka <2>
----
<1> Configure the broker location for the production profile. You can configure it globally or per channel using the `mp.messaging.incoming.$channel.bootstrap.servers` property.
In dev mode and when running tests, Dev Services for Kafka automatically starts a Kafka broker.
When not provided, this property defaults to `localhost:9092`.
<2> Configure the connector to manage the `prices` channel. By default, the topic name is the same as the channel name. You can configure the `topic` attribute to override it.

NOTE: The `%prod` prefix indicates that the property is only used when the application runs in prod mode (so not in dev or test). Refer to the xref:config-reference.adoc#profiles[Profile documentation] for further details.
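For example, both the topic name and the broker location can be overridden per channel. This sketch continues with the `prices` channel above; the topic name `price-updates` and the host `kafka-prices` are hypothetical:

[source, properties]
----
# Consume from the "price-updates" topic instead of the default "prices" topic
mp.messaging.incoming.prices.topic=price-updates

# Override the broker location for this channel only (hypothetical host)
mp.messaging.incoming.prices.bootstrap.servers=kafka-prices:9092
----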

[TIP]
.Connector auto-attachment
====
If you have a single connector on your classpath, you can omit the `connector` attribute configuration.
Quarkus automatically associates _orphan_ channels to the (unique) connector found on the classpath.
_Orphan_ channels are outgoing channels without a downstream consumer or incoming channels without an upstream producer.

This auto-attachment can be disabled using:

[source, properties]
----
quarkus.reactive-messaging.auto-connector-attachment=false
----
====

== Receiving messages from Kafka

Continuing from the previous minimal configuration, your Quarkus application can receive the message payload directly:

[source, java]
----
import org.eclipse.microprofile.reactive.messaging.Incoming;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class PriceConsumer {

    @Incoming("prices")
    public void consume(double price) {
        // process your price.
    }

}
----

There are several other ways your application can consume incoming messages:

.Message
[source, java]
----
@Incoming("prices")
public CompletionStage<Void> consume(Message<Double> msg) {
    // access record metadata
    var metadata = msg.getMetadata(IncomingKafkaRecordMetadata.class).orElseThrow();
    // process the message payload.
    double price = msg.getPayload();
    // Acknowledge the incoming message (commit the offset)
    return msg.ack();
}
----

The `Message` type lets the consuming method access the incoming message metadata and handle the acknowledgment manually.
We'll explore the different acknowledgment strategies below.

If you want to access the Kafka record objects directly, use:

.ConsumerRecord
[source, java]
----
@Incoming("prices")
public void consume(ConsumerRecord<String, String> record) {
    String key = record.key(); // Can be `null` if the incoming record has no key
    String value = record.value(); // Can be `null` if the incoming record has no value
    String topic = record.topic();
    int partition = record.partition();
    // ...
}
----

`ConsumerRecord` is provided by the underlying Kafka client and can be injected directly into the consumer method.
Another, simpler approach consists in using `Record`:

.Record
[source, java]
----
@Incoming("prices")
public void consume(Record<String, String> record) {
    String key = record.key(); // Can be `null` if the incoming record has no key
    String value = record.value(); // Can be `null` if the incoming record has no value
}
----

`Record` is a simple wrapper around the key and payload of the incoming Kafka record.

.@Channel

Alternatively, your application can inject a `Multi` in your bean and subscribe to its events as in the following example:

[source, java]
----
import io.smallrye.mutiny.Multi;
import io.smallrye.reactive.messaging.annotations.Channel;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.jboss.resteasy.annotations.SseElementType;

@Path("/prices")
public class PriceResource {

    @Inject
    @Channel("prices")
    Multi<Double> prices;

    @GET
    @Path("/prices")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType("text/plain")
    public Multi<Double> stream() {
        return prices;
    }
}
----

This is a good example of how to integrate a Kafka consumer with another downstream,
in this example exposing it as a Server-Sent Events endpoint.

[NOTE]
====
When consuming messages with `@Channel`, the application code is responsible for the subscription.
In the example above, the RESTEasy endpoint handles that for you.
====

The following types can be injected as channels:

[source, java]
----
@Inject @Channel("prices") Multi<Double> streamOfPayloads;

@Inject @Channel("prices") Multi<Message<Double>> streamOfMessages;

@Inject @Channel("prices") Publisher<Double> publisherOfPayloads;

@Inject @Channel("prices") Publisher<Message<Double>> publisherOfMessages;
----

As with the previous `Message` example, if your injected channel receives payloads (`Multi<Double>`), it acknowledges the message automatically and supports multiple subscribers.
If your injected channel receives messages (`Multi<Message<Double>>`), you will be responsible for the acknowledgment and broadcasting.
We will explore sending broadcast messages later in this guide.

[IMPORTANT]
====
Injecting `@Channel("prices")` or having `@Incoming("prices")` does not automatically configure the application to consume messages from Kafka.
You need to configure an inbound connector with `mp.messaging.incoming.prices\...` or have an `@Outgoing("prices")` method somewhere in your application (in which case, `prices` will be an in-memory channel).
====

[#blocking-processing]
=== Blocking processing

Reactive Messaging invokes your method on an I/O thread.
See the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture documentation] for further details on this topic.
However, you often need to combine Reactive Messaging with blocking processing such as database interactions.
For this, you need to use the `@Blocking` annotation, indicating that the processing is _blocking_ and should not be run on the caller thread.

For example, the following code illustrates how you can store incoming payloads to a database using Hibernate with Panache:

[source,java]
----
import io.smallrye.reactive.messaging.annotations.Blocking;
import org.eclipse.microprofile.reactive.messaging.Incoming;

import javax.enterprise.context.ApplicationScoped;
import javax.transaction.Transactional;

@ApplicationScoped
public class PriceStorage {

    @Incoming("prices")
    @Transactional
    public void store(int priceInUsd) {
        Price price = new Price();
        price.value = priceInUsd;
        price.persist();
    }

}
----

The complete example is available in the `kafka-panache-quickstart` {quickstarts-tree-url}/kafka-panache-quickstart[directory].

[NOTE]
====
There are two `@Blocking` annotations:

1. `io.smallrye.reactive.messaging.annotations.Blocking`
2. `io.smallrye.common.annotation.Blocking`

They have the same effect, so you can use either.
The first one provides more fine-grained tuning, such as the worker pool to use and whether it preserves the order.
The second one, also used with other reactive features of Quarkus, uses the default worker pool and preserves the order.

Detailed information on the usage of the `@Blocking` annotation can be found in https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.1/advanced/blocking.html[SmallRye Reactive Messaging – Handling blocking execution].
====

[TIP]
.@Transactional
====
If your method is annotated with `@Transactional`, it will be considered _blocking_ automatically, even if the method is not annotated with `@Blocking`.
====

=== Acknowledgment Strategies

All messages received by a consumer must be acknowledged.
In the absence of acknowledgment, the processing is considered in error.
If the consumer method receives a `Record` or a payload, the message will be acked on method return, also known as `Strategy.POST_PROCESSING`.
If the consumer method returns another reactive stream or a `CompletionStage`, the message will be acked when the downstream message is acked.
You can override the default behavior to ack the message on arrival (`Strategy.PRE_PROCESSING`),
or not ack the message at all (`Strategy.NONE`) on the consumer method, as in the following example:

[source, java]
----
@Incoming("prices")
@Acknowledgment(Acknowledgment.Strategy.PRE_PROCESSING)
public void process(double price) {
    // process price
}
----

If the consumer method receives a `Message`, the acknowledgment strategy is `Strategy.MANUAL`,
and the consumer method is in charge of acking/nacking the message.

[source, java]
----
@Incoming("prices")
public CompletionStage<Void> process(Message<Double> msg) {
    // process price
    return msg.ack();
}
----

As mentioned above, the method can also override the acknowledgment strategy to `PRE_PROCESSING` or `NONE`.

[[commit-strategies]]
=== Commit Strategies

When a message produced from a Kafka record is acknowledged, the connector invokes a commit strategy.
These strategies decide when the consumer offset for a specific topic/partition is committed.
Committing an offset indicates that all previous records have been processed.
It is also the position where the application would restart the processing after a crash recovery or a restart.

Committing every offset has performance penalties as Kafka offset management can be slow.
However, not committing the offset often enough may lead to message duplication if the application crashes between two commits.

The Kafka connector supports three strategies:

- `throttled` keeps track of received messages and commits an offset of the latest acked message in sequence (meaning, all previous messages were also acked).
This strategy guarantees at-least-once delivery even if the channel performs asynchronous processing.
The connector tracks the received records and periodically (period specified by `auto.commit.interval.ms`, default: 5000 ms) commits the highest consecutive offset.
The connector will be marked as unhealthy if a message associated with a record is not acknowledged in `throttled.unprocessed-record-max-age.ms` (default: 60000 ms).
Indeed, this strategy cannot commit the offset as soon as a single record processing fails (see <<error-handling>> to configure what happens on failing processing).
If `throttled.unprocessed-record-max-age.ms` is set to less than or equal to `0`, it does not perform any health check verification.
Such a setting might lead to running out of memory if there are "poison pill" messages (that are never acked).
This strategy is the default if `enable.auto.commit` is not explicitly set to `true`.

- `latest` commits the record offset received by the Kafka consumer as soon as the associated message is acknowledged (if the offset is higher than the previously committed offset).
This strategy provides at-least-once delivery if the channel processes the message without performing any asynchronous processing.
This strategy should not be used in high-load environments, as offset commits are expensive. However, it reduces the risk of duplicates.

- `ignore` performs no commit. This strategy is the default strategy when the consumer is explicitly configured with `enable.auto.commit` set to `true`.
It delegates the offset commit to the underlying Kafka client.
When `enable.auto.commit` is `true`, this strategy **DOES NOT** guarantee at-least-once delivery.
SmallRye Reactive Messaging processes records asynchronously, so offsets may be committed for records that have been polled but not yet processed.
In case of a failure, only records that were not committed yet will be re-processed.

[IMPORTANT]
====
The Kafka connector disables the Kafka auto-commit when it is not explicitly enabled. This behavior differs from the traditional Kafka consumer.

If high throughput is important for you, and you are not limited by the downstream, we recommend either:

- using the `throttled` policy,
- or setting `enable.auto.commit` to `true` and annotating the consuming method with `@Acknowledgment(Acknowledgment.Strategy.NONE)`.
====

[[error-handling]]
=== Error Handling Strategies

If a message produced from a Kafka record is nacked, a failure strategy is applied. The Kafka connector supports three strategies:

- `fail`: fail the application; no more records will be processed (default strategy). The offset of the record that has not been processed correctly is not committed.
- `ignore`: the failure is logged, but the processing continues. The offset of the record that has not been processed correctly is committed.
- `dead-letter-queue`: the offset of the record that has not been processed correctly is committed, but the record is written to a Kafka dead letter topic.

The strategy is selected using the `failure-strategy` attribute.

In the case of `dead-letter-queue`, you can configure the following attributes:

- `dead-letter-queue.topic`: the topic to use to write the records not processed correctly; the default is `dead-letter-topic-$channel`, with `$channel` being the name of the channel.
- `dead-letter-queue.key.serializer`: the serializer used to write the record key on the dead letter queue. By default, it deduces the serializer from the key deserializer.
- `dead-letter-queue.value.serializer`: the serializer used to write the record value on the dead letter queue. By default, it deduces the serializer from the value deserializer.
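Putting these attributes together for the `prices` channel used throughout this guide, a dead-letter-queue setup might look like the following sketch (the topic name `prices-dead-letter` is illustrative):

[source, properties]
----
# Write records that fail processing to a dead letter topic instead of failing the application
mp.messaging.incoming.prices.failure-strategy=dead-letter-queue

# Override the default topic name (dead-letter-topic-prices)
mp.messaging.incoming.prices.dead-letter-queue.topic=prices-dead-letter
----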

The record written on the dead letter queue contains a set of additional headers about the original record:

- *dead-letter-reason*: the reason for the failure
- *dead-letter-cause*: the cause of the failure, if any
- *dead-letter-topic*: the original topic of the record
- *dead-letter-partition*: the original partition of the record (integer mapped to String)
- *dead-letter-offset*: the original offset of the record (long mapped to String)

==== Retrying processing

You can combine Reactive Messaging with https://github.com/smallrye/smallrye-fault-tolerance[SmallRye Fault Tolerance], and retry processing if it failed:

[source, java]
----
@Incoming("kafka")
@Retry(delay = 10, maxRetries = 5)
public void consume(String v) {
    // ... retry if this method throws an exception
}
----

You can configure the delay, the number of retries, the jitter, etc.

If your method returns a `Uni` or `CompletionStage`, you need to add the `@NonBlocking` annotation:

[source,java]
----
@Incoming("kafka")
@Retry(delay = 10, maxRetries = 5)
@NonBlocking
public Uni<String> consume(String v) {
    // ... retry if this method throws an exception or the returned Uni produces a failure
}
----

NOTE: The `@NonBlocking` annotation is only required with SmallRye Fault Tolerance 5.1.0 and earlier.
Starting with SmallRye Fault Tolerance 5.2.0 (available since Quarkus 2.1.0.Final), it is not necessary.
See the https://smallrye.io/docs/smallrye-fault-tolerance/5.2.0/usage/extra.html#_non_compatible_mode[SmallRye Fault Tolerance documentation] for more information.

The incoming messages are acknowledged only once the processing completes successfully.
So, the connector commits the offset after the successful processing.
If the processing still fails, even after all retries, the message is _nacked_ and the failure strategy is applied.

==== Handling Deserialization Failures

When a deserialization failure occurs, you can intercept it and provide a failure strategy.
To achieve this, you need to create a bean implementing the `DeserializationFailureHandler<T>` interface:

[source, java]
----
@ApplicationScoped
@Identifier("failure-retry") // Set the name of the failure handler
public class MyDeserializationFailureHandler
        implements DeserializationFailureHandler<JsonObject> { // Specify the expected type

    @Override
    public JsonObject decorateDeserialization(Uni<JsonObject> deserialization, String topic, boolean isKey,
            String deserializer, byte[] data, Headers headers) {
        return deserialization
                .onFailure().retry().atMost(3)
                .await().atMost(Duration.ofMillis(200));
    }
}
----

To use this failure handler, the bean must be exposed with the `@Identifier` qualifier, and the connector configuration must specify the attribute `mp.messaging.incoming.$channel.[key|value]-deserialization-failure-handler` (for key or value deserializers).

The handler is called with details of the deserialization, including the action represented as a `Uni`.
Failure strategies like retry, providing a fallback value, or applying a timeout can be implemented on the deserialization `Uni`.

=== Consumer Groups

In Kafka, a consumer group is a set of consumers which cooperate to consume data from a topic.
A topic is divided into a set of partitions.
The partitions of a topic are assigned among the consumers in the group, effectively allowing consumption throughput to be scaled.
Note that each partition is assigned to a single consumer from a group.
However, a consumer can be assigned multiple partitions if the number of partitions is greater than the number of consumers in the group.

Let's briefly explore the different producer/consumer patterns and how to implement them using Quarkus:

. *Single consumer thread inside a consumer group*
+
This is the default behavior of an application subscribing to a Kafka topic: each Kafka connector will create a single consumer thread and place it inside a single consumer group.
The consumer group id defaults to the application name as set by the `quarkus.application.name` configuration property.
It can also be set using the `kafka.group.id` property.
+
image::kafka-one-app-one-consumer.png[alt=Architecture, width=60%, align=center]

. *Multiple consumer threads inside a consumer group*
+
For a given application instance, the number of consumers inside the consumer group can be configured using the `mp.messaging.incoming.$channel.partitions` property.
The partitions of the subscribed topic will be divided among the consumer threads.
Note that if the `partitions` value exceeds the number of partitions of the topic, some consumer threads won't be assigned any partitions.
+
image::kafka-one-app-two-consumers.png[alt=Architecture, width=60%, align=center]

. *Multiple consumer applications inside a consumer group*
+
Similar to the previous example, multiple instances of an application can subscribe to a single consumer group, configured via the `mp.messaging.incoming.$channel.group.id` property, or left to default to the application name.
This in turn will divide the partitions of the topic among the application instances.
+
image::kafka-two-app-one-consumer-group.png[alt=Architecture, width=60%, align=center]

. *Pub/Sub: Multiple consumer groups subscribed to a topic*
+
Lastly, different applications can subscribe independently to the same topics using different *consumer group ids*.
For example, messages published to a topic called _orders_ can be consumed independently by two consumer applications, one with `mp.messaging.incoming.orders.group.id=invoicing` and a second with `mp.messaging.incoming.orders.group.id=shipping`.
Different consumer groups can thus scale independently according to the message consumption requirements.
+
image::kafka-two-app-two-consumer-groups.png[alt=Architecture, width=60%, align=center]

==== Consumer Rebalance Listener

Inside a consumer group, as new group members arrive and old members leave, the partitions are re-assigned so that each member receives a proportional share of the partitions.
This is known as rebalancing the group.
To handle offset commits and assigned partitions yourself, you can provide a consumer rebalance listener.
To achieve this, implement the `io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener` interface and expose it as a CDI bean with the `@Identifier` qualifier.
A common use case is to store offsets in a separate data store to implement exactly-once semantics, or to start the processing at a specific offset.

The listener is invoked every time the consumer topic/partition assignment changes.
For example, when the application starts, it invokes the `partitionsAssigned` callback with the initial set of topics/partitions associated with the consumer.
If, later, this set changes, it calls the `partitionsRevoked` and `partitionsAssigned` callbacks again, so you can implement custom logic.

Note that the rebalance listener methods are called from the Kafka polling thread and **will** block the caller thread until completion.
That's because the rebalance protocol has synchronization barriers, and asynchronous code in a rebalance listener may run after the synchronization barrier.

When topics/partitions are assigned or revoked from a consumer, it pauses the message delivery and resumes once the rebalance completes.

If the rebalance listener handles offset commits on behalf of the user (using the `NONE` commit strategy),
the rebalance listener must commit the offsets synchronously in the `partitionsRevoked` callback.
We also recommend applying the same logic when the application stops.

Unlike the `ConsumerRebalanceListener` from Apache Kafka, the `io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener` methods pass the Kafka `Consumer` and the set of topics/partitions.

In the following example, we set up a consumer that always starts on messages from at most 10 minutes ago (or offset 0).
First, we need to provide a bean that implements `io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener` and is annotated with `io.smallrye.common.annotation.Identifier`.
We then must configure our inbound connector to use this bean.

[source, java]
----
package inbound;

import io.smallrye.common.annotation.Identifier;
import io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

import javax.enterprise.context.ApplicationScoped;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;

@ApplicationScoped
@Identifier("rebalanced-example.rebalancer")
public class KafkaRebalancedConsumerRebalanceListener implements KafkaConsumerRebalanceListener {

    private static final Logger LOGGER = Logger.getLogger(KafkaRebalancedConsumerRebalanceListener.class.getName());

    /**
     * When receiving a list of partitions, will search for the earliest offset within 10 minutes
     * and seek the consumer to it.
- * - * @param consumer underlying consumer - * @param partitions set of assigned topic partitions - */ - @Override - public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) { - long now = System.currentTimeMillis(); - long shouldStartAt = now - 600_000L; // 10 minutes ago - - Map<TopicPartition, Long> request = new HashMap<>(); - for (TopicPartition partition : partitions) { - LOGGER.info("Assigned " + partition); - request.put(partition, shouldStartAt); - } - Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(request); - for (Map.Entry<TopicPartition, OffsetAndTimestamp> position : offsets.entrySet()) { - long target = position.getValue() == null ? 0L : position.getValue().offset(); - LOGGER.info("Seeking position " + target + " for " + position.getKey()); - consumer.seek(position.getKey(), target); - } - } - -} ----- - -[source, java] ----- -package inbound; - -import io.smallrye.reactive.messaging.kafka.IncomingKafkaRecord; -import org.eclipse.microprofile.reactive.messaging.Acknowledgment; -import org.eclipse.microprofile.reactive.messaging.Incoming; - -import javax.enterprise.context.ApplicationScoped; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.CompletionStage; - -@ApplicationScoped -public class KafkaRebalancedConsumer { - - @Incoming("rebalanced-example") - @Acknowledgment(Acknowledgment.Strategy.NONE) - public CompletionStage<Void> consume(IncomingKafkaRecord<Integer, String> message) { - // We don't need to ACK messages because in this example, - // we set the offset during the consumer rebalance - return CompletableFuture.completedFuture(null); - } - -} ----- - -To configure the inbound connector to use the provided listener, we either set the consumer rebalance listener’s identifier: -`mp.messaging.incoming.rebalanced-example.consumer-rebalance-listener.name=rebalanced-example.rebalancer` - -Or have the listener’s name be the same as the group id: - -`mp.messaging.incoming.rebalanced-example.group.id=rebalanced-example.rebalancer` - -Setting the consumer rebalance listener’s name takes precedence over
using the group id. - -==== Using unique consumer groups - -If you want to process all the records from a topic (from its beginning), you need: - -1. to set `auto.offset.reset = earliest` -2. to assign your consumer to a consumer group not used by any other application. - -Quarkus generates a UUID that changes between two executions (including in dev mode). -So, you are sure no other consumer uses it, and you receive a new unique group id every time your application starts. - -You can use that generated UUID as the consumer group as follows: - -[source, properties] ----- -mp.messaging.incoming.your-channel.auto.offset.reset=earliest -mp.messaging.incoming.your-channel.group.id=${quarkus.uuid} ----- - -IMPORTANT: If the `group.id` attribute is not set, it defaults to the `quarkus.application.name` configuration property. - -=== Receiving Kafka Records in Batches - -By default, incoming methods receive each Kafka record individually. -Under the hood, Kafka consumer clients poll the broker constantly and receive records in batches, presented inside the `ConsumerRecords` container. - -In *batch* mode, your application can receive all the records returned by the consumer *poll* in one go. - -To achieve this you need to specify a compatible container type to receive all the data: - -[source, java] ----- -@Incoming("prices") -public void consume(List<Double> prices) { - for (double price : prices) { - // process price - } -} ----- - -The incoming method can also receive `Message<List<Payload>>`, `KafkaRecordBatch<Key, Payload>` and `ConsumerRecords<Key, Payload>` types. -They give access to record details such as offset or timestamp: - -[source, java] ----- -@Incoming("prices") -public CompletionStage<Void> consumeMessage(KafkaRecordBatch<String, String> records) { - for (KafkaRecord<String, String> record : records) { - String payload = record.getPayload(); - String topic = record.getTopic(); - // process messages - } - // ack will commit the latest offsets (per partition) of the batch.
- return records.ack(); -} - ----- - -Note that the successful processing of the incoming record batch will commit the latest offsets for each partition received inside the batch. -The configured commit strategy will be applied for these records only. - -Conversely, if the processing throws an exception, all messages are _nacked_, applying the failure strategy for all the records inside the batch. - -[NOTE] -==== -Quarkus autodetects batch types for incoming channels and sets batch configuration automatically. -You can configure batch mode explicitly with the `mp.messaging.incoming.$channel.batch` property. -==== - -== Sending messages to Kafka - -Configuration for the Kafka connector outgoing channels is similar to that of incoming: - -[source, properties] ----- -%prod.kafka.bootstrap.servers=kafka:9092 <1> -mp.messaging.outgoing.prices-out.connector=smallrye-kafka <2> -mp.messaging.outgoing.prices-out.topic=prices <3> ----- - -<1> Configure the broker location for the production profile. You can configure it globally or per channel using the `mp.messaging.outgoing.$channel.bootstrap.servers` property. -In dev mode and when running tests, <> automatically starts a Kafka broker. -When not provided, this property defaults to `localhost:9092`. -<2> Configure the connector to manage the `prices-out` channel. -<3> By default, the topic name is the same as the channel name. You can configure the `topic` attribute to override it. - -[IMPORTANT] -==== -Inside application configuration, channel names are unique. -Therefore, if you'd like to configure an incoming and outgoing channel on the same topic, you will need to name channels differently (like in the examples of this guide, `mp.messaging.incoming.prices` and `mp.messaging.outgoing.prices-out`). -==== - -Then, your application can generate messages and publish them to the `prices-out` channel.
-It can use `double` payloads as in the following snippet: - -[source, java] ----- -import io.smallrye.mutiny.Multi; -import org.eclipse.microprofile.reactive.messaging.Outgoing; - -import javax.enterprise.context.ApplicationScoped; -import java.time.Duration; -import java.util.Random; - -@ApplicationScoped -public class KafkaPriceProducer { - - private final Random random = new Random(); - - @Outgoing("prices-out") - public Multi<Double> generate() { - // Build an infinite stream of random prices - // It emits a price every second - return Multi.createFrom().ticks().every(Duration.ofSeconds(1)) - .map(x -> random.nextDouble()); - } - -} ----- - -[IMPORTANT] -==== -You should not call methods annotated with `@Incoming` and/or `@Outgoing` directly from your code. They are invoked by the framework. Having user code invoking them would not have the expected outcome. -==== - -Note that the `generate` method returns a `Multi<Double>`, which implements the Reactive Streams `Publisher` interface. -This publisher will be used by the framework to generate messages and send them to the configured Kafka topic.
- -Instead of returning a payload, you can return an `io.smallrye.reactive.messaging.kafka.Record` to send key/value pairs: - -[source, java] ----- -@Outgoing("out") -public Multi<Record<String, Double>> generate() { - return Multi.createFrom().ticks().every(Duration.ofSeconds(1)) - .map(x -> Record.of("my-key", random.nextDouble())); -} ----- - -The payload can be wrapped inside `org.eclipse.microprofile.reactive.messaging.Message` to have more control over the written records: - -[source, java] ----- -@Outgoing("generated-price") -public Multi<Message<Double>> generate() { - return Multi.createFrom().ticks().every(Duration.ofSeconds(1)) - .map(x -> Message.of(random.nextDouble()) - .addMetadata(OutgoingKafkaRecordMetadata.builder() - .withKey("my-key") - .withTopic("my-key-prices") - .withHeaders(new RecordHeaders().add("my-header", "value".getBytes())) - .build())); -} ----- - -`OutgoingKafkaRecordMetadata` allows you to set metadata attributes of the Kafka record, such as `key`, `topic`, `partition` or `timestamp`. -One use case is to dynamically select the destination topic of a message. -In this case, instead of configuring the topic inside your application configuration file, you need to use the outgoing metadata to set the name of the topic. - -Other than method signatures returning a Reactive Streams `Publisher` (`Multi` being an implementation of `Publisher`), outgoing methods can also return a single message. -In this case, the producer uses this method as a generator to create an infinite stream. - -[source, java] ----- -@Outgoing("prices-out") T generate(); // T excluding void - -@Outgoing("prices-out") Message<T> generate(); - -@Outgoing("prices-out") Uni<T> generate(); - -@Outgoing("prices-out") Uni<Message<T>> generate(); - -@Outgoing("prices-out") CompletionStage<T> generate(); - -@Outgoing("prices-out") CompletionStage<Message<T>> generate(); ----- - -=== Sending messages with @Emitter - -Sometimes, you need to have an imperative way of sending messages.
- -For example, you may need to send a message to a stream when receiving a POST request inside a REST endpoint. -In this case, you cannot use `@Outgoing` because your method has parameters. - -For this, you can use an `Emitter`. - -[source, java] ----- -import org.eclipse.microprofile.reactive.messaging.Channel; -import org.eclipse.microprofile.reactive.messaging.Emitter; - -import javax.inject.Inject; -import javax.ws.rs.POST; -import javax.ws.rs.Path; -import javax.ws.rs.Consumes; -import javax.ws.rs.core.MediaType; - -import java.util.concurrent.CompletionStage; - -@Path("/prices") -public class PriceResource { - - @Inject - @Channel("price-create") - Emitter<Double> priceEmitter; - - @POST - @Consumes(MediaType.TEXT_PLAIN) - public void addPrice(Double price) { - CompletionStage<Void> ack = priceEmitter.send(price); - } -} ----- - -Sending a payload returns a `CompletionStage<Void>`, completed when the message is acked. If the message transmission fails, the `CompletionStage` is completed exceptionally with the reason of the nack. - -[NOTE] -==== -The `Emitter` configuration is done the same way as the other stream configuration used by `@Incoming` and `@Outgoing`. -==== - -[IMPORTANT] -==== -Using the `Emitter`, you are sending messages from your imperative code to reactive messaging. -These messages are stored in a queue until they are sent. -If the Kafka producer client can't keep up with messages trying to be sent over to Kafka, this queue can become a memory hog and you may even run out of memory. -You can use `@OnOverflow` to configure the back-pressure strategy. -It lets you configure the size of the queue (default is 256) and the strategy to apply when the buffer size is reached. Available strategies are `DROP`, `LATEST`, `FAIL`, `BUFFER`, `UNBOUNDED_BUFFER` and `NONE`. -==== - -With the `Emitter` API, you can also encapsulate the outgoing payload inside `Message`. As with the previous examples, `Message` lets you handle the ack/nack cases differently.
- -[source,java] ----- -import java.util.concurrent.CompletableFuture; -import org.eclipse.microprofile.reactive.messaging.Channel; -import org.eclipse.microprofile.reactive.messaging.Emitter; -import org.eclipse.microprofile.reactive.messaging.Message; - -import javax.inject.Inject; -import javax.ws.rs.POST; -import javax.ws.rs.Path; -import javax.ws.rs.Consumes; -import javax.ws.rs.core.MediaType; - -@Path("/prices") -public class PriceResource { - - @Inject @Channel("price-create") Emitter<Double> priceEmitter; - - @POST - @Consumes(MediaType.TEXT_PLAIN) - public void addPrice(Double price) { - priceEmitter.send(Message.of(price) - .withAck(() -> { - // Called when the message is acked - return CompletableFuture.completedFuture(null); - }) - .withNack(throwable -> { - // Called when the message is nacked - return CompletableFuture.completedFuture(null); - })); - } -} ----- - -If you prefer using Reactive Stream APIs, you can use `MutinyEmitter`, which returns `Uni<Void>` from the `send` method. -You can therefore use Mutiny APIs for handling downstream messages and errors. - -[source,java] ----- -import org.eclipse.microprofile.reactive.messaging.Channel; - -import javax.inject.Inject; -import javax.ws.rs.POST; -import javax.ws.rs.Path; -import javax.ws.rs.Consumes; -import javax.ws.rs.core.MediaType; - -import io.smallrye.mutiny.Uni; -import io.smallrye.reactive.messaging.MutinyEmitter; - -@Path("/prices") -public class PriceResource { - - @Inject - @Channel("price-create") - MutinyEmitter<Double> priceEmitter; - - @POST - @Consumes(MediaType.TEXT_PLAIN) - public Uni<String> addPrice(Double price) { - return priceEmitter.send(price) - .map(x -> "ok") - .onFailure().recoverWithItem("ko"); - } -} ----- - -It is also possible to block on sending the event to the emitter with the `sendAndAwait` method. -It will only return from the method when the event is acked or nacked by the receiver.
- -[NOTE] -.Deprecation -==== -The `io.smallrye.reactive.messaging.annotations.Emitter`, `io.smallrye.reactive.messaging.annotations.Channel` and `io.smallrye.reactive.messaging.annotations.OnOverflow` classes are now deprecated and replaced by: - -* `org.eclipse.microprofile.reactive.messaging.Emitter` -* `org.eclipse.microprofile.reactive.messaging.Channel` -* `org.eclipse.microprofile.reactive.messaging.OnOverflow` - -The new `Emitter.send` method returns a `CompletionStage` completed when the produced message is acknowledged. -==== - -More information on how to use `Emitter` can be found in https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.1/emitter/emitter.html#_emitter_and_channel[SmallRye Reactive Messaging – Emitters and Channels] - -=== Write Acknowledgement - -When the Kafka broker receives a record, its acknowledgement can take time depending on the configuration. -Also, it stores in memory the records that cannot be written. - -By default, the connector waits for Kafka to acknowledge the record before continuing the processing (acknowledging the received Message). -You can disable this by setting the `waitForWriteCompletion` attribute to `false`. - -Note that the `acks` attribute has a huge impact on the record acknowledgement. - -If a record cannot be written, the message is nacked. - -=== Backpressure - -The Kafka outbound connector handles back-pressure, monitoring the number of in-flight messages waiting to be written to the Kafka broker. -The number of in-flight messages is configured using the `max-inflight-messages` attribute and defaults to 1024. - -The connector only sends that amount of messages concurrently. -No other messages will be sent until at least one in-flight message gets acknowledged by the broker. -Then, the connector writes a new message to Kafka when one of the broker’s in-flight messages gets acknowledged. -Be sure to configure Kafka’s `batch.size` and `linger.ms` accordingly.
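As a sketch, tuning the connector and producer attributes together might look like the following (the channel name comes from the earlier examples; the values are illustrative, not recommendations — unknown channel attributes such as `batch.size` are passed through to the underlying Kafka producer):

[source, properties]
----
# Allow up to 2048 messages waiting for broker acknowledgement
mp.messaging.outgoing.prices-out.max-inflight-messages=2048
# Let the producer batch more aggressively to absorb the in-flight volume
mp.messaging.outgoing.prices-out.batch.size=32768
mp.messaging.outgoing.prices-out.linger.ms=20
----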
- -You can also remove the limit of in-flight messages by setting `max-inflight-messages` to `0`. -However, note that the Kafka producer may block if the number of requests reaches `max.in.flight.requests.per.connection`. - -=== Retrying message dispatch - -When the Kafka producer receives an error from the server, if it is a transient, recoverable error, the client will retry sending the batch of messages. -This behavior is controlled by the `retries` and `retry.backoff.ms` parameters. -In addition to this, SmallRye Reactive Messaging will retry individual messages on recoverable errors, depending on the `retries` and `delivery.timeout.ms` parameters. - -Note that while having retries in a reliable system is a best practice, the `max.in.flight.requests.per.connection` parameter defaults to `5`, meaning that the order of the messages is not guaranteed. -If the message order is a must for your use case, setting `max.in.flight.requests.per.connection` to `1` will make sure a single batch of messages is sent at a time, at the expense of limiting the throughput of the producer. - -For applying a retry mechanism on processing errors, see the section on <>. - -=== Handling Serialization Failures - -For the Kafka producer client, serialization failures are not recoverable, thus the message dispatch is not retried. In these cases, you may need to apply a failure strategy for the serializer.
-To achieve this, you need to create a bean implementing the `SerializationFailureHandler` interface: - -[source, java] ----- -@ApplicationScoped -@Identifier("failure-fallback") // Set the name of the failure handler -public class MySerializationFailureHandler - implements SerializationFailureHandler<Object> { // Specify the expected type - - @Override - public byte[] decorateSerialization(Uni<byte[]> serialization, String topic, boolean isKey, - String serializer, Object data, Headers headers) { - return serialization - .onFailure().retry().atMost(3) - .await().indefinitely(); - } -} ----- - -To use this failure handler, the bean must be exposed with the `@Identifier` qualifier and the connector configuration must specify the attribute `mp.messaging.outgoing.$channel.[key|value]-serialization-failure-handler` (for key or value serializers). - -The handler is called with details of the serialization, including the action represented as `Uni<byte[]>`. -Note that the method must await on the result and return the serialized byte array. - -=== In-memory channels - -In some use cases, it is convenient to use the messaging patterns to transfer messages inside the same application. -When you don't connect a channel to a messaging backend like Kafka, everything happens in-memory, and the streams are created by chaining methods together. -Each chain is still a reactive stream and enforces the back-pressure protocol. - -The framework verifies that the producer/consumer chain is complete, -meaning that if the application writes messages into an in-memory channel (using a method with only `@Outgoing`, or an `Emitter`), -it must also consume the messages from within the application (using a method with only `@Incoming` or using an unmanaged stream). - -[[broadcasting-messages-on-multiple-consumers]] -=== Broadcasting messages on multiple consumers - -By default, a channel can be linked to a single consumer, using an `@Incoming` method or a `@Channel` reactive stream.
-At application startup, channels are verified to form a chain of consumers and producers with a single consumer and producer. -You can override this behavior by setting `mp.messaging.$channel.broadcast=true` on a channel. - -In the case of in-memory channels, the `@Broadcast` annotation can be used on the `@Outgoing` method. For example, - -[source, java] ----- -import java.util.Random; - -import javax.enterprise.context.ApplicationScoped; - -import org.eclipse.microprofile.reactive.messaging.Incoming; -import org.eclipse.microprofile.reactive.messaging.Outgoing; - -import io.smallrye.reactive.messaging.annotations.Broadcast; - -@ApplicationScoped -public class MultipleConsumer { - - private final Random random = new Random(); - - @Outgoing("in-memory-channel") - @Broadcast - double generate() { - return random.nextDouble(); - } - - @Incoming("in-memory-channel") - void consumeAndLog(double price) { - System.out.println(price); - } - - @Incoming("in-memory-channel") - @Outgoing("prices2") - double consumeAndSend(double price) { - return price; - } -} ----- - -[NOTE] -==== -Reciprocally, multiple producers on the same channel can be merged by setting `mp.messaging.incoming.$channel.merge=true`. -On the `@Incoming` methods, you can control how multiple channels are merged using the `@Merge` annotation. -==== - -== Processing Messages - -Applications streaming data often need to consume some events from a topic, process them and publish the result to a different topic.
-A processor method can be simply implemented using both the `@Incoming` and `@Outgoing` annotations: - -[source, java] ----- -import org.eclipse.microprofile.reactive.messaging.Incoming; -import org.eclipse.microprofile.reactive.messaging.Outgoing; - -import javax.enterprise.context.ApplicationScoped; - -@ApplicationScoped -public class PriceProcessor { - - private static final double CONVERSION_RATE = 0.88; - - @Incoming("price-in") - @Outgoing("price-out") - public double process(double price) { - return price * CONVERSION_RATE; - } - -} ----- - -The parameter of the `process` method is the incoming message payload, whereas the return value will be used as the outgoing message payload. -Previously mentioned signatures for parameter and return types are also supported, such as `Message`, `Record`, etc. - -You can apply asynchronous stream processing by consuming and returning the reactive stream `Multi` type: - -[source,java] ----- -import javax.enterprise.context.ApplicationScoped; - -import org.eclipse.microprofile.reactive.messaging.Incoming; -import org.eclipse.microprofile.reactive.messaging.Outgoing; - -import io.smallrye.mutiny.Multi; - -@ApplicationScoped -public class PriceProcessor { - - private static final double CONVERSION_RATE = 0.88; - - @Incoming("price-in") - @Outgoing("price-out") - public Multi<Double> process(Multi<Double> prices) { - return prices.filter(p -> p > 100).map(p -> p * CONVERSION_RATE); - } - -} ----- - -=== Propagating Record Key - -When processing messages, you can propagate the incoming record key to the outgoing record. - -Enabled with the `mp.messaging.outgoing.$channel.propagate-record-key=true` configuration, -record key propagation produces the outgoing record with the same _key_ as the incoming record. - -If the outgoing record already contains a _key_, it *won't be overridden* by the incoming record key. -If the incoming record does have a _null_ key, the `mp.messaging.outgoing.$channel.key` property is used.
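As a sketch, enabling key propagation for a processor like the one above might look like this (channel name taken from the earlier examples; the fallback key value is purely illustrative):

[source, properties]
----
mp.messaging.outgoing.price-out.propagate-record-key=true
# Fallback key used when the incoming record key is null (illustrative value)
mp.messaging.outgoing.price-out.key=default-key
----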
- -[[kafka-bare-clients]] -== Accessing Kafka clients directly - -In rare cases, you may need to access the underlying Kafka clients. -`KafkaClientService` provides thread-safe access to `Producer` and `Consumer`. - -[source,java] ----- -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.event.Observes; -import javax.inject.Inject; - -import org.apache.kafka.clients.producer.ProducerRecord; - -import io.quarkus.runtime.StartupEvent; -import io.smallrye.reactive.messaging.kafka.KafkaClientService; -import io.smallrye.reactive.messaging.kafka.KafkaConsumer; -import io.smallrye.reactive.messaging.kafka.KafkaProducer; - -@ApplicationScoped -public class PriceSender { - - @Inject - KafkaClientService clientService; - - void onStartup(@Observes StartupEvent startupEvent) { - KafkaProducer<String, Double> producer = clientService.getProducer("generated-price"); - producer.runOnSendingThread(client -> client.send(new ProducerRecord<>("prices", 2.4))) - .await().indefinitely(); - } -} ----- - -[IMPORTANT] -==== -The `KafkaClientService` is an experimental API and can change in the future.
-==== - -You can also get the Kafka configuration injected into your application and create Kafka producer, consumer and admin clients directly: - -[source,java] ----- -import io.smallrye.common.annotation.Identifier; -import org.apache.kafka.clients.admin.AdminClient; -import org.apache.kafka.clients.admin.AdminClientConfig; -import org.apache.kafka.clients.admin.KafkaAdminClient; - -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.inject.Produces; -import javax.inject.Inject; -import java.util.HashMap; -import java.util.Map; - -@ApplicationScoped -public class KafkaClients { - - @Inject - @Identifier("default-kafka-broker") - Map<String, Object> config; - - @Produces - AdminClient getAdmin() { - Map<String, Object> copy = new HashMap<>(); - for (Map.Entry<String, Object> entry : config.entrySet()) { - if (AdminClientConfig.configNames().contains(entry.getKey())) { - copy.put(entry.getKey(), entry.getValue()); - } - } - return KafkaAdminClient.create(copy); - } - -} - ----- - -The `default-kafka-broker` configuration map contains all application properties prefixed with `kafka.` or `KAFKA_`. -For more configuration options, check out <>. - -[[kafka-serialization]] -== JSON serialization - -Quarkus has built-in capabilities to deal with JSON Kafka messages. - -Imagine we have a `Fruit` data class as follows: - -[source,java] ----- -public class Fruit { - - public String name; - public int price; - - public Fruit() { - } - - public Fruit(String name, int price) { - this.name = name; - this.price = price; - } -} ----- - -And we want to use it to receive messages from Kafka, make some price transformation, and send messages back to Kafka. - -[source,java] ----- -import io.smallrye.reactive.messaging.annotations.Broadcast; -import org.eclipse.microprofile.reactive.messaging.Incoming; -import org.eclipse.microprofile.reactive.messaging.Outgoing; - -import javax.enterprise.context.ApplicationScoped; - -/** -* A bean consuming data from the "fruit-in" channel and applying some price conversion.
-* The result is pushed to the "fruit-out" channel. -*/ -@ApplicationScoped -public class FruitProcessor { - - private static final double CONVERSION_RATE = 0.88; - - @Incoming("fruit-in") - @Outgoing("fruit-out") - @Broadcast - public Fruit process(Fruit fruit) { - fruit.price = (int) (fruit.price * CONVERSION_RATE); // cast back to the int field type - return fruit; - } - -} ----- - -To do this, we will need to set up JSON serialization with Jackson or JSON-B. - -NOTE: With JSON serialization correctly configured, you can also use `Publisher<Fruit>` and `Emitter<Fruit>`. - -[[jackson-serialization]] -=== Serializing via Jackson - -Quarkus has built-in support for JSON serialization and deserialization based on Jackson. -It will also <> the serializer and deserializer for you, so you do not have to configure anything. -When generation is disabled, you can use the provided `ObjectMapperSerializer` and `ObjectMapperDeserializer` as explained below. - -There is an existing `ObjectMapperSerializer` that can be used to serialize all data objects via Jackson. -You may create an empty subclass if you want to use <>. - -NOTE: By default, the `ObjectMapperSerializer` serializes null as the `"null"` String; this can be customized by setting the Kafka configuration -property `json.serialize.null-as-null=true`, which will serialize null as `null`. -This is handy when using a compacted topic, as `null` is used as a tombstone to know which messages to delete during the compaction phase. - -The corresponding deserializer class needs to be subclassed. -So, let's create a `FruitDeserializer` that extends the `ObjectMapperDeserializer`. - -[source,java] ----- -package com.acme.fruit.jackson; - -import io.quarkus.kafka.client.serialization.ObjectMapperDeserializer; - -public class FruitDeserializer extends ObjectMapperDeserializer<Fruit> { - public FruitDeserializer() { - super(Fruit.class); - } -} ----- - -Finally, configure your channels to use the Jackson serializer and deserializer.
- -[source,properties] ----- -# Configure the Kafka source (we read from it) -mp.messaging.incoming.fruit-in.topic=fruit-in -mp.messaging.incoming.fruit-in.value.deserializer=com.acme.fruit.jackson.FruitDeserializer - -# Configure the Kafka sink (we write to it) -mp.messaging.outgoing.fruit-out.topic=fruit-out -mp.messaging.outgoing.fruit-out.value.serializer=io.quarkus.kafka.client.serialization.ObjectMapperSerializer ----- - -Now, your Kafka messages will contain a Jackson serialized representation of your `Fruit` data object. -In this case, the `deserializer` configuration is not necessary as the <> is enabled by default. - -If you want to deserialize a list of fruits, you need to create a deserializer with a Jackson `TypeReference` denoting the generic collection used. - -[source,java] ----- -package com.acme.fruit.jackson; - -import java.util.List; -import com.fasterxml.jackson.core.type.TypeReference; -import io.quarkus.kafka.client.serialization.ObjectMapperDeserializer; - -public class ListOfFruitDeserializer extends ObjectMapperDeserializer<List<Fruit>> { - public ListOfFruitDeserializer() { - super(new TypeReference<List<Fruit>>() {}); - } -} ----- - -[[jsonb-serialization]] -=== Serializing via JSON-B - -First, you need to include the `quarkus-jsonb` extension. - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-jsonb</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-jsonb") ----- - -There is an existing `JsonbSerializer` that can be used to serialize all data objects via JSON-B. -You may create an empty subclass if you want to use <>. - -NOTE: By default, the `JsonbSerializer` serializes null as the `"null"` String; this can be customized by setting the Kafka configuration -property `json.serialize.null-as-null=true`, which will serialize null as `null`.
-This is handy when using a compacted topic, as `null` is used as a tombstone to know which messages to delete during the compaction phase. - -The corresponding deserializer class needs to be subclassed. -So, let's create a `FruitDeserializer` that extends the generic `JsonbDeserializer`. - -[source,java] ----- -package com.acme.fruit.jsonb; - -import io.quarkus.kafka.client.serialization.JsonbDeserializer; - -public class FruitDeserializer extends JsonbDeserializer<Fruit> { - public FruitDeserializer() { - super(Fruit.class); - } -} ----- - -Finally, configure your channels to use the JSON-B serializer and deserializer. - -[source,properties] ----- -# Configure the Kafka source (we read from it) -mp.messaging.incoming.fruit-in.connector=smallrye-kafka -mp.messaging.incoming.fruit-in.topic=fruit-in -mp.messaging.incoming.fruit-in.value.deserializer=com.acme.fruit.jsonb.FruitDeserializer - -# Configure the Kafka sink (we write to it) -mp.messaging.outgoing.fruit-out.connector=smallrye-kafka -mp.messaging.outgoing.fruit-out.topic=fruit-out -mp.messaging.outgoing.fruit-out.value.serializer=io.quarkus.kafka.client.serialization.JsonbSerializer ----- - -Now, your Kafka messages will contain a JSON-B serialized representation of your `Fruit` data object. - -If you want to deserialize a list of fruits, you need to create a deserializer with a `Type` denoting the generic collection used. - -[source,java] ----- -package com.acme.fruit.jsonb; -import java.lang.reflect.Type; -import java.util.ArrayList; -import java.util.List; -import io.quarkus.kafka.client.serialization.JsonbDeserializer; - -public class ListOfFruitDeserializer extends JsonbDeserializer<List<Fruit>> { - public ListOfFruitDeserializer() { - super(new ArrayList<Fruit>() {}.getClass().getGenericSuperclass()); - } -} ----- - -NOTE: If you don't want to create a deserializer for each data object, you can use the generic `io.vertx.kafka.client.serialization.JsonObjectDeserializer` -that will deserialize to a `io.vertx.core.json.JsonObject`.
The corresponding serializer can also be used: `io.vertx.kafka.client.serialization.JsonObjectSerializer`. - -== Avro Serialization - -This is described in a dedicated guide: xref:kafka-schema-registry-avro.adoc[Using Apache Kafka with Schema Registry and Avro]. - -[[serialization-autodetection]] -== Serializer/deserializer autodetection - -When using SmallRye Reactive Messaging with Kafka (`io.quarkus:quarkus-smallrye-reactive-messaging-kafka`), Quarkus can often automatically detect the correct serializer and deserializer class. -This autodetection is based on declarations of `@Incoming` and `@Outgoing` methods, as well as injected ``@Channel``s. - -For example, if you declare - -[source,java] ----- -@Outgoing("generated-price") -public Multi<Integer> generate() { - ... -} ----- - -and your configuration indicates that the `generated-price` channel uses the `smallrye-kafka` connector, then Quarkus will automatically set the `value.serializer` to Kafka's built-in `IntegerSerializer`. - -Similarly, if you declare - -[source,java] ----- -@Incoming("my-kafka-records") -public void consume(KafkaRecord<Long, byte[]> record) { - ... -} ----- - -and your configuration indicates that the `my-kafka-records` channel uses the `smallrye-kafka` connector, then Quarkus will automatically set the `key.deserializer` to Kafka's built-in `LongDeserializer`, as well as the `value.deserializer` to `ByteArrayDeserializer`. - -Finally, if you declare - -[source,java] ----- -@Inject -@Channel("price-create") -Emitter<Double> priceEmitter; ----- - -and your configuration indicates that the `price-create` channel uses the `smallrye-kafka` connector, then Quarkus will automatically set the `value.serializer` to Kafka's built-in `DoubleSerializer`.
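For reference, the autodetected setup in the last example is equivalent to configuring the serializer explicitly (channel name taken from the example above; explicit configuration always takes precedence over autodetection):

[source, properties]
----
mp.messaging.outgoing.price-create.connector=smallrye-kafka
mp.messaging.outgoing.price-create.value.serializer=org.apache.kafka.common.serialization.DoubleSerializer
----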
-
-The full set of types supported by the serializer/deserializer autodetection is:
-
-* `short` and `java.lang.Short`
-* `int` and `java.lang.Integer`
-* `long` and `java.lang.Long`
-* `float` and `java.lang.Float`
-* `double` and `java.lang.Double`
-* `byte[]`
-* `java.lang.String`
-* `java.util.UUID`
-* `java.nio.ByteBuffer`
-* `org.apache.kafka.common.utils.Bytes`
-* `io.vertx.core.buffer.Buffer`
-* `io.vertx.core.json.JsonObject`
-* `io.vertx.core.json.JsonArray`
-* classes generated from Avro schemas, as well as Avro `GenericRecord`, if a Confluent or Apicurio Registry _serde_ is present
-** see xref:kafka-schema-registry-avro.adoc[Using Apache Kafka with Schema Registry and Avro] for more information about using Confluent or Apicurio Registry libraries
-* classes for which a subclass of `ObjectMapperSerializer` / `ObjectMapperDeserializer` is present, as described in <>
-** it is technically not needed to subclass `ObjectMapperSerializer`, but in that case, autodetection isn't possible
-* classes for which a subclass of `JsonbSerializer` / `JsonbDeserializer` is present, as described in <>
-** it is technically not needed to subclass `JsonbSerializer`, but in that case, autodetection isn't possible
-
-If a serializer/deserializer is set by configuration, it won't be replaced by the autodetection.
-
-In case you have any issues with serializer autodetection, you can switch it off completely by setting `quarkus.reactive-messaging.kafka.serializer-autodetection.enabled=false`.
-If you find you need to do this, please file a bug in the link:https://github.com/quarkusio/quarkus/issues[Quarkus issue tracker] so we can fix whatever problem you have.
-
-[[serialization-generation]]
-== JSON Serializer/deserializer generation
-Quarkus automatically generates serializers and deserializers for channels where:
-
-1. the serializer/deserializer is not configured
-2. the auto-detection did not find a matching serializer/deserializer
-
-It uses Jackson underneath.
-
-This generation can be disabled using:
-
-[source, properties]
-----
-quarkus.reactive-messaging.kafka.serializer-generation.enabled=false
-----
-
-IMPORTANT: Generation does not support collections such as `List`.
-Refer to <> to write your own serializer/deserializer for this case.
-
-== Using Schema Registry
-
-This is described in a dedicated guide: xref:kafka-schema-registry-avro.adoc[Using Apache Kafka with Schema Registry and Avro].
-
-[[kafka-health-check]]
-== Health Checks
-
-Quarkus provides several health checks for Kafka.
-These checks are used in combination with the `quarkus-smallrye-health` extension.
-
-=== Kafka Broker Readiness Check
-When using the `quarkus-kafka-client` extension, you can enable the _readiness_ health check by setting the `quarkus.kafka.health.enabled` property to `true` in your `application.properties`.
-This check reports the status of the interaction with a _default_ Kafka broker (configured using `kafka.bootstrap.servers`).
-It requires an _admin connection_ with the Kafka broker, and it is disabled by default.
-If enabled, when you access the `/q/health/ready` endpoint of your application, you will get information about the connection validation status.
-
-=== Kafka Reactive Messaging Health Checks
-When using Reactive Messaging and the Kafka connector, each configured channel (incoming or outgoing) provides _startup_, _liveness_ and _readiness_ checks.
-
-- The _startup_ check verifies that communication with the Kafka cluster is established.
-- The _liveness_ check captures any unrecoverable failure happening during communication with Kafka.
-- The _readiness_ check verifies that the Kafka connector is ready to consume/produce messages to the configured Kafka topics.
-
-For each channel, you can disable the checks using:
-
-[source, properties]
-----
-# Disable both liveness and readiness checks with `health-enabled=false`:
-
-# Incoming channel (receiving records from Kafka)
-mp.messaging.incoming.your-channel.health-enabled=false
-# Outgoing channel (writing records to Kafka)
-mp.messaging.outgoing.your-channel.health-enabled=false
-
-# Disable only the readiness check with `health-readiness-enabled=false`:
-
-mp.messaging.incoming.your-channel.health-readiness-enabled=false
-mp.messaging.outgoing.your-channel.health-readiness-enabled=false
-----
-
-NOTE: You can configure the `bootstrap.servers` for each channel using the `mp.messaging.incoming|outgoing.$channel.bootstrap.servers` property.
-The default is `kafka.bootstrap.servers`.
-
-Reactive Messaging _startup_ and _readiness_ checks offer two strategies.
-The default strategy verifies that an active connection is established with the broker.
-This approach is not intrusive as it's based on built-in Kafka client metrics.
-
-With the `health-topic-verification-enabled=true` attribute, the _startup_ probe uses an _admin client_ to check for the list of topics,
-while the _readiness_ probe for an incoming channel checks that at least one partition is assigned for consumption,
-and for an outgoing channel checks that the topic used by the producer exists in the broker.
-
-Note that to achieve this, an _admin connection_ is required.
-You can adjust the timeout for topic verification calls to the broker using the `health-topic-verification-timeout` configuration.
-
-== Kafka Streams
-
-This is described in a dedicated guide: xref:kafka-streams.adoc[Using Apache Kafka Streams].
-
-== Using Snappy for message compression
-
-On _outgoing_ channels, you can enable Snappy compression by setting the `compression.type` attribute to `snappy`:
-
-[source, properties]
-----
-mp.messaging.outgoing.fruit-out.compression.type=snappy
-----
-
-In JVM mode, it will work out of the box.
-However, to compile your application to a native executable, you need to:
-
-1. Use GraalVM 21+
-2. Add `quarkus.kafka.snappy.enabled=true` to your `application.properties`
-
-In native mode, Snappy is disabled by default, as the use of Snappy requires embedding a native library and unpacking it when the application starts.
-
-== Authentication with OAuth
-
-If your Kafka broker uses OAuth as its authentication mechanism, you need to configure the Kafka consumer to enable this authentication process.
-First, add the following dependency to your application:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.strimzi</groupId>
-    <artifactId>kafka-oauth-client</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.strimzi:kafka-oauth-client")
-----
-
-This dependency provides the callback handler required to handle the OAuth workflow.
-Then, in the `application.properties`, add:
-
-[source, properties]
-----
-mp.messaging.connector.smallrye-kafka.security.protocol=SASL_PLAINTEXT
-mp.messaging.connector.smallrye-kafka.sasl.mechanism=OAUTHBEARER
-mp.messaging.connector.smallrye-kafka.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
-  oauth.client.id="team-a-client" \
-  oauth.client.secret="team-a-client-secret" \
-  oauth.token.endpoint.uri="http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token" ;
-mp.messaging.connector.smallrye-kafka.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
-
-quarkus.ssl.native=true
-----
-
-Update the `oauth.client.id`, `oauth.client.secret` and `oauth.token.endpoint.uri` values.
-
-OAuth authentication works for both JVM and native modes. Since SSL is not enabled by default in native mode, `quarkus.ssl.native=true` must be added to support `JaasClientOauthLoginCallbackHandler`, which uses SSL.
(See the xref:native-and-ssl.adoc[Using SSL with Native Executables] guide for more details.)
-
-== Testing a Kafka application
-
-=== Testing without a broker
-
-It can be useful to test the application without having to start a Kafka broker.
-To achieve this, you can _switch_ the channels managed by the Kafka connector to _in-memory_.
-
-IMPORTANT: This approach only works for JVM tests. It cannot be used for native tests (because they do not support injection).
-
-Let's say we want to test the following processor application:
-
-[source, java]
-----
-@ApplicationScoped
-public class BeverageProcessor {
-
-    @Incoming("orders")
-    @Outgoing("beverages")
-    Beverage process(Order order) {
-        System.out.println("Order received " + order.getProduct());
-        Beverage beverage = new Beverage();
-        beverage.setBeverage(order.getProduct());
-        beverage.setCustomer(order.getCustomer());
-        beverage.setOrderId(order.getOrderId());
-        beverage.setPreparationState("RECEIVED");
-        return beverage;
-    }
-
-}
-----
-
-First, add the following test dependency to your application:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.smallrye.reactive</groupId>
-    <artifactId>smallrye-reactive-messaging-in-memory</artifactId>
-    <scope>test</scope>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-testImplementation("io.smallrye.reactive:smallrye-reactive-messaging-in-memory")
-----
-
-Then, create a Quarkus Test Resource as follows:
-
-[source, java]
-----
-public class KafkaTestResourceLifecycleManager implements QuarkusTestResourceLifecycleManager {
-
-    @Override
-    public Map<String, String> start() {
-        Map<String, String> env = new HashMap<>();
-        Map<String, String> props1 = InMemoryConnector.switchIncomingChannelsToInMemory("orders");     // <1>
-        Map<String, String> props2 = InMemoryConnector.switchOutgoingChannelsToInMemory("beverages");  // <2>
-        env.putAll(props1);
-        env.putAll(props2);
-        return env;  // <3>
-    }
-
-    @Override
-    public void stop() {
-        InMemoryConnector.clear();  // <4>
-    }
-}
----
-<1> Switch the incoming channel `orders` (expecting messages from Kafka) to in-memory.
-<2> Switch the outgoing channel `beverages` (writing messages to Kafka) to in-memory.
-<3> Builds and returns a `Map` containing all the properties required to configure the application to use in-memory channels.
-<4> When the test stops, clear the `InMemoryConnector` (discard all the received and sent messages)
-
-Create a Quarkus Test using the test resource created above:
-
-[source, java]
-----
-@QuarkusTest
-@QuarkusTestResource(KafkaTestResourceLifecycleManager.class)
-class BaristaTest {
-
-    @Inject
-    InMemoryConnector connector; // <1>
-
-    @Test
-    void testProcessOrder() {
-        InMemorySource<Order> ordersIn = connector.source("orders");       // <2>
-        InMemorySink<Beverage> beveragesOut = connector.sink("beverages"); // <3>
-
-        Order order = new Order();
-        order.setProduct("coffee");
-        order.setName("Coffee lover");
-        order.setOrderId("1234");
-
-        ordersIn.send(order);  // <4>
-
-        await().<List<? extends Message<Beverage>>>until(beveragesOut::received, t -> t.size() == 1);  // <5>
-
-        Beverage queuedBeverage = beveragesOut.received().get(0).getPayload();
-        Assertions.assertEquals("RECEIVED", queuedBeverage.getPreparationState());
-        Assertions.assertEquals("coffee", queuedBeverage.getBeverage());
-        Assertions.assertEquals("Coffee lover", queuedBeverage.getCustomer());
-        Assertions.assertEquals("1234", queuedBeverage.getOrderId());
-    }
-
-}
-----
-<1> Inject the in-memory connector in your test class.
-<2> Retrieve the incoming channel (`orders`) - the channel must have been switched to in-memory in the test resource.
-<3> Retrieve the outgoing channel (`beverages`) - the channel must have been switched to in-memory in the test resource.
-<4> Use the `send` method to send a message to the `orders` channel.
-The application will process this message and send a message to the `beverages` channel.
-<5> Use the `received` method on the `beverages` channel to check the messages produced by the application.
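-
-For reference, outside tests the `orders` and `beverages` channels would be bound to Kafka in `application.properties`.
-A minimal sketch of such a configuration (the topic names and deserializer class are assumptions), which the test resource above replaces with in-memory channels:
-
-[source, properties]
-----
-# Hypothetical production configuration, overridden by the test resource
-mp.messaging.incoming.orders.connector=smallrye-kafka
-mp.messaging.incoming.orders.topic=orders
-mp.messaging.incoming.orders.value.deserializer=org.acme.OrderDeserializer
-
-mp.messaging.outgoing.beverages.connector=smallrye-kafka
-mp.messaging.outgoing.beverages.topic=beverages
-----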
-
-[IMPORTANT]
-====
-With in-memory channels we were able to test application code processing messages without starting a Kafka broker.
-Note that different in-memory channels are independent, and switching a channel's connector to in-memory does not simulate message delivery between channels configured to the same Kafka topic.
-====
-
-=== Starting Kafka in a test resource
-
-Alternatively, you can start a Kafka broker in a test resource.
-The following snippet shows a test resource starting a Kafka broker using https://www.testcontainers.org/modules/kafka/[Testcontainers]:
-
-[source, java]
-----
-public class KafkaResource implements QuarkusTestResourceLifecycleManager {
-
-    private final KafkaContainer kafka = new KafkaContainer();
-
-    @Override
-    public Map<String, String> start() {
-        kafka.start();
-        return Collections.singletonMap("kafka.bootstrap.servers", kafka.getBootstrapServers());  // <1>
-    }
-
-    @Override
-    public void stop() {
-        kafka.close();
-    }
-}
-----
-<1> Configure the Kafka bootstrap location, so the application connects to this broker.
-
-[[kafka-dev-services]]
-include::kafka-dev-services.adoc[leveloffset=+1]
-
-== Kubernetes Service Bindings
-
-The Quarkus Kafka extension supports the
-xref:deploying-to-kubernetes.adoc[Service Binding Specification for Kubernetes].
-You can enable this by adding the `quarkus-kubernetes-service-binding` extension to your application.
-
-When running in appropriately configured Kubernetes clusters, the Kafka extension will pull its Kafka broker connection configuration from the service binding available inside the cluster, without the need for user configuration.
-
-== Execution model
-
-Reactive Messaging invokes user methods on an I/O thread.
-Thus, by default, the methods must not block.
-As described in <>, you need to add the `@Blocking` annotation on the method if this method will block the caller thread.
-
-See the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture documentation] for further details on this topic.
-
-[[kafka-configuration]]
-== Configuration Reference
-
-More details about the SmallRye Reactive Messaging configuration can be found in the https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.1/kafka/kafka.html[SmallRye Reactive Messaging - Kafka Connector Documentation].
-
-The most important attributes are listed in the tables below:
-
-=== Incoming channel configuration (polling from Kafka)
-
-The following attributes are configured using:
-
-[source, properties]
-----
-mp.messaging.incoming.your-channel-name.attribute=value
-----
-
-Some properties have aliases which can be configured globally:
-
-[source, properties]
-----
-kafka.bootstrap.servers=...
-----
-
-You can also pass any property supported by the underlying https://kafka.apache.org/documentation/#consumerconfigs[Kafka consumer].
-
-For example, to configure the `max.poll.records` property, use:
-
-[source,properties]
-----
-mp.messaging.incoming.[channel].max.poll.records=1000
-----
-
-Some consumer client properties are configured to sensible default values:
-
-If not set, `reconnect.backoff.max.ms` is set to `10000` to avoid high load on disconnection.
-
-If not set, `key.deserializer` is set to `org.apache.kafka.common.serialization.StringDeserializer`.
-
-The consumer `client.id` is configured according to the number of clients to create, using the `mp.messaging.incoming.[channel].partitions` property.
-
-- If a `client.id` is provided, it is used as-is or suffixed with the client index if the `partitions` property is set.
-- If a `client.id` is not provided, it is generated as `kafka-consumer-[channel][-index]`.
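-
-Putting these attributes together, a typical incoming channel configuration might look like the following sketch (the channel and topic names are assumptions):
-
-[source, properties]
-----
-# Global alias for the bootstrap servers
-kafka.bootstrap.servers=localhost:9092
-
-# Hypothetical channel reading from the `prices` topic
-mp.messaging.incoming.prices.connector=smallrye-kafka
-mp.messaging.incoming.prices.topic=prices
-mp.messaging.incoming.prices.max.poll.records=500
-mp.messaging.incoming.prices.value.deserializer=org.apache.kafka.common.serialization.DoubleDeserializer
-----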
-
-
-include::smallrye-kafka-incoming.adoc[]
-
-=== Outgoing channel configuration (writing to Kafka)
-
-The following attributes are configured using:
-
-[source, properties]
-----
-mp.messaging.outgoing.your-channel-name.attribute=value
-----
-
-Some properties have aliases which can be configured globally:
-
-[source, properties]
-----
-kafka.bootstrap.servers=...
-----
-
-Some producer client properties are configured to sensible default values:
-
-If not set, `reconnect.backoff.max.ms` is set to `10000` to avoid high load on disconnection.
-
-If not set, `key.serializer` is set to `org.apache.kafka.common.serialization.StringSerializer`.
-
-If not set, producer `client.id` is generated as `kafka-producer-[channel]`.
-
-include::smallrye-kafka-outgoing.adoc[]
-
-[[kafka-configuration-resolution]]
-=== Kafka Configuration Resolution
-
-Quarkus exposes all Kafka-related application properties, prefixed with `kafka.` or `KAFKA_`, inside a configuration map with the `default-kafka-broker` name.
-This configuration is used to establish the connection with the Kafka broker.
-
-In addition to this default configuration, you can configure the name of the `Map<String, Object>` producer using the `kafka-configuration` attribute:
-
-[source, properties]
-----
-mp.messaging.incoming.my-channel.connector=smallrye-kafka
-mp.messaging.incoming.my-channel.kafka-configuration=my-configuration
-----
-
-In this case, the connector looks for the `Map<String, Object>` associated with the `my-configuration` name.
-If `kafka-configuration` is not set, an optional lookup for a `Map<String, Object>` exposed with the channel name (`my-channel` in the previous example) is done.
-
-[source, java]
-----
-@Produces
-@ApplicationScoped
-@Identifier("my-configuration")
-Map<String, Object> outgoing() {
-    return Map.ofEntries(
-            Map.entry("value.serializer", ObjectMapperSerializer.class.getName())
-    );
-}
-----
-
-IMPORTANT: If `kafka-configuration` is set and no `Map<String, Object>` can be found, the deployment fails.
-
-Attribute values are resolved as follows:
-
-1.
the attribute is set directly on the channel configuration (`mp.messaging.incoming.my-channel.attribute=value`),
-2. if not set, the connector looks for a `Map` with the channel name or the configured `kafka-configuration` (if set), and the value is retrieved from that `Map`,
-3. if the resolved `Map` does not contain the value, the default `Map` is used (exposed with the `default-kafka-broker` name).
-
-== Integrating with Kafka - Common patterns
-
-=== Writing to Kafka from an HTTP endpoint
-
-To send messages to Kafka from an HTTP endpoint, inject an `Emitter` (or a `MutinyEmitter`) in your endpoint:
-
-[source, java]
-----
-package org.acme;
-
-import java.util.concurrent.CompletionStage;
-
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-import org.eclipse.microprofile.reactive.messaging.Channel;
-import org.eclipse.microprofile.reactive.messaging.Emitter;
-
-@Path("/")
-public class ResourceSendingToKafka {
-
-    @Channel("kafka") Emitter<String> emitter;          // <1>
-
-    @POST
-    @Produces(MediaType.TEXT_PLAIN)
-    public CompletionStage<Void> send(String payload) { // <2>
-        return emitter.send(payload);                   // <3>
-    }
-}
-----
-<1> Inject an `Emitter<String>`
-<2> The HTTP method receives the payload and returns a `CompletionStage` completed when the message is written to Kafka
-<3> Send the message to Kafka, the `send` method returns a `CompletionStage`
-
-The endpoint sends the passed payload (from a `POST` HTTP request) to the emitter.
-The emitter's channel is mapped to a Kafka topic in the `application.properties` file:
-
-[source, properties]
-----
-mp.messaging.outgoing.kafka.connector=smallrye-kafka
-mp.messaging.outgoing.kafka.topic=my-topic
-----
-
-The endpoint returns a `CompletionStage`, indicating the asynchronous nature of the method.
-The `emitter.send` method returns a `CompletionStage<Void>`.
-The returned future is completed when the message has been written to Kafka.
-If the writing fails, the returned `CompletionStage` is completed exceptionally.
-
-If the endpoint does not return a `CompletionStage`, the HTTP response may be written before the message is sent to Kafka, and so failures won't be reported to the user.
-
-If you need to send a Kafka record, use:
-
-[source, java]
-----
-package org.acme;
-
-import java.util.concurrent.CompletionStage;
-
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-import org.eclipse.microprofile.reactive.messaging.Channel;
-import org.eclipse.microprofile.reactive.messaging.Emitter;
-
-import io.smallrye.reactive.messaging.kafka.Record;
-
-@Path("/")
-public class ResourceSendingToKafka {
-
-    @Channel("kafka") Emitter<Record<String, String>> emitter;  // <1>
-
-    @POST
-    @Produces(MediaType.TEXT_PLAIN)
-    public CompletionStage<Void> send(String payload) {
-        return emitter.send(Record.of("my-key", payload));      // <2>
-    }
-}
-----
-<1> Note the usage of an `Emitter<Record<String, String>>`
-<2> Create the record using `Record.of(k, v)`
-
-=== Persisting Kafka messages with Hibernate and Panache
-
-To persist objects received from Kafka into a database, you can use Hibernate with Panache.
-
-NOTE: If you use Hibernate Reactive, look at <>.
-
-Let's imagine you receive `Fruit` objects.
-For simplicity, our `Fruit` class is pretty simple:
-
-[source, java]
-----
-package org.acme;
-
-import javax.persistence.Entity;
-
-import io.quarkus.hibernate.orm.panache.PanacheEntity;
-
-@Entity
-public class Fruit extends PanacheEntity {
-
-    public String name;
-
-}
-----
-
-To consume `Fruit` instances stored on a Kafka topic, and persist them into a database, you can use the following approach:
-
-[source, java]
-----
-package org.acme;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.transaction.Transactional;
-
-import org.eclipse.microprofile.reactive.messaging.Incoming;
-
-import io.smallrye.common.annotation.Blocking;
-
-@ApplicationScoped
-public class FruitConsumer {
-
-    @Incoming("fruits")                      // <1>
-    @Transactional                           // <2>
-    public void persistFruits(Fruit fruit) { // <3>
-        fruit.persist();                     // <4>
-    }
-}
-----
-<1> Configuring the incoming channel. This channel reads from Kafka.
-<2> As we are writing to a database, we must be in a transaction. This annotation starts a new transaction and commits it when the method returns.
-Quarkus automatically considers the method as _blocking_. Indeed, writing to a database using classic Hibernate is blocking. So, Quarkus calls the method on a worker thread you can block (and not an I/O thread).
-<3> The method receives each `Fruit`. Note that you would need a deserializer to reconstruct the `Fruit` instances from the Kafka records.
-<4> Persist the received `fruit` object.
-
-As mentioned in <3>, you need a deserializer that can create a `Fruit` from the record.
-This can be done using a Jackson deserializer:
-
-[source, java]
-----
-package org.acme;
-
-import io.quarkus.kafka.client.serialization.ObjectMapperDeserializer;
-
-public class FruitDeserializer extends ObjectMapperDeserializer<Fruit> {
-    public FruitDeserializer() {
-        super(Fruit.class);
-    }
-}
-----
-
-The associated configuration would be:
-
-[source, properties]
-----
-mp.messaging.incoming.fruits.connector=smallrye-kafka
-mp.messaging.incoming.fruits.value.deserializer=org.acme.FruitDeserializer
-----
-
-Check <> for more detail about the usage of Jackson with Kafka.
-You can also use Avro.
-
-[#persisting-kafka-messages-with-hibernate-reactive]
-=== Persisting Kafka messages with Hibernate Reactive
-
-To persist objects received from Kafka into a database, you can use Hibernate Reactive with Panache.
-
-Let's imagine you receive `Fruit` objects.
-For simplicity, our `Fruit` class is pretty simple:
-
-[source, java]
-----
-package org.acme;
-
-import javax.persistence.Entity;
-
-import io.quarkus.hibernate.reactive.panache.PanacheEntity;  // <1>
-
-@Entity
-public class Fruit extends PanacheEntity {
-
-    public String name;
-
-}
-----
-<1> Make sure to use the reactive variant
-
-To consume `Fruit` instances stored on a Kafka topic, and persist them into a database, you can use the following approach:
-
-[source, java]
-----
-package org.acme;
-
-import javax.enterprise.context.ApplicationScoped;
-
-import org.eclipse.microprofile.reactive.messaging.Incoming;
-
-import io.quarkus.hibernate.reactive.panache.Panache;
-import io.smallrye.mutiny.Uni;
-
-@ApplicationScoped
-public class FruitStore {
-
-    @Incoming("fruits")
-    public Uni<Void> persist(Fruit fruit) {
-        return Panache.withTransaction(() ->    // <1>
-                fruit.persist()                 // <2>
-                        .map(persisted -> null) // <3>
-        );
-    }
-
-}
-----
-<1> Instruct Panache to run the given (asynchronous) action in a transaction. The transaction completes when the action completes.
-<2> Persist the entity. It returns a `Uni<Fruit>`.
-<3> Switch back to a `Uni<Void>`.
-
-Unlike with _classic_ Hibernate, you can't use `@Transactional`.
-Instead, we use `Panache.withTransaction` and persist our entity.
-The `map` is used to return a `Uni<Void>` and not a `Uni<Fruit>`.
-
-You need a deserializer that can create a `Fruit` from the record.
-This can be done using a Jackson deserializer:
-
-[source, java]
-----
-package org.acme;
-
-import io.quarkus.kafka.client.serialization.ObjectMapperDeserializer;
-
-public class FruitDeserializer extends ObjectMapperDeserializer<Fruit> {
-    public FruitDeserializer() {
-        super(Fruit.class);
-    }
-}
-----
-
-The associated configuration would be:
-
-[source, properties]
-----
-mp.messaging.incoming.fruits.connector=smallrye-kafka
-mp.messaging.incoming.fruits.value.deserializer=org.acme.FruitDeserializer
-----
-
-Check <> for more detail about the usage of Jackson with Kafka.
-You can also use Avro.
-
-=== Writing entities managed by Hibernate to Kafka
-
-Let's imagine the following process:
-
-1. You receive an HTTP request with a payload,
-2. You create a Hibernate entity instance from this payload,
-3. You persist that entity into a database,
-4. You send the entity to a Kafka topic
-
-NOTE: If you use Hibernate Reactive, look at <>.
-
-Because we write to a database, we must run this method in a transaction.
-Yet, sending the entity to Kafka happens asynchronously.
-The operation returns a `CompletionStage` (or a `Uni` if you use a `MutinyEmitter`) reporting when the operation completes.
-We must be sure that the transaction is still running until the object is written.
-Otherwise, you may access the object outside the transaction, which is not allowed.
-
-To implement this process, you need the following approach:
-
-[source, java]
-----
-package org.acme;
-
-import java.util.concurrent.CompletionStage;
-
-import javax.transaction.Transactional;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.reactive.messaging.Channel;
-import org.eclipse.microprofile.reactive.messaging.Emitter;
-
-@Path("/")
-public class ResourceSendingToKafka {
-
-    @Channel("kafka") Emitter<Fruit> emitter;
-
-    @POST
-    @Path("/fruits")
-    @Transactional                                                   // <1>
-    public CompletionStage<Void> storeAndSendToKafka(Fruit fruit) {  // <2>
-        fruit.persist();
-        return emitter.send(fruit);                                  // <3>
-    }
-}
-----
-<1> As we are writing to the database, make sure we run inside a transaction
-<2> The method receives the fruit instance to persist. It returns a `CompletionStage` which is used for the transaction demarcation. The transaction is committed when the returned `CompletionStage` completes. In our case, it's when the message is written to Kafka.
-<3> Send the managed instance to Kafka. Make sure we wait for the message to complete before closing the transaction.
-
-[#writing-entities-managed-by-hibernate-reactive-to-kafka]
-=== Writing entities managed by Hibernate Reactive to Kafka
-
-To send entities managed by Hibernate Reactive to Kafka, we recommend using:
-
-* RESTEasy Reactive to serve HTTP requests
-* A `MutinyEmitter` to send messages to a channel, so it can be easily integrated with the Mutiny API exposed by Hibernate Reactive or Hibernate Reactive with Panache.
-
-The following example demonstrates how to receive a payload, store it in the database using Hibernate Reactive with Panache, and send the persisted entity to Kafka:
-
-[source, java]
-----
-package org.acme;
-
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.reactive.messaging.Channel;
-
-import io.quarkus.hibernate.reactive.panache.Panache;
-import io.smallrye.mutiny.Uni;
-import io.smallrye.reactive.messaging.MutinyEmitter;
-
-@Path("/")
-public class ReactiveGreetingResource {
-
-    @Channel("kafka") MutinyEmitter<Fruit> emitter;  // <1>
-
-    @POST
-    @Path("/fruits")
-    public Uni<Void> sendToKafka(Fruit fruit) {      // <2>
-        return Panache.withTransaction(() ->         // <3>
-                fruit.persist()
-        )
-                .chain(f -> emitter.send(f));        // <4>
-    }
-}
-----
-<1> Inject a `MutinyEmitter` which exposes a Mutiny API. It simplifies the integration with the Mutiny API exposed by Hibernate Reactive with Panache.
-<2> The HTTP method receiving the payload returns a `Uni<Void>`. The HTTP response is written when the operation completes (the entity is persisted and written to Kafka).
-<3> We need to write the entity into the database in a transaction.
-<4> Once the persist operation completes, we send the entity to Kafka. The `send` method returns a `Uni<Void>`.
-
-
-=== Streaming Kafka topics as server-sent events
-
-Streaming a Kafka topic as server-sent events (SSE) is straightforward:
-
-1. You inject the channel representing the Kafka topic in your HTTP endpoint
-2. You return that channel as a `Publisher` or a `Multi` from the HTTP method
-
-The following code provides an example:
-
-[source, java]
-----
-@Channel("fruits")
-Multi<Fruit> fruits;
-
-@GET
-@Produces(MediaType.SERVER_SENT_EVENTS)
-public Multi<Fruit> stream() {
-    return fruits;
-}
-----
-
-Some environments cut the SSE connection when there is not enough activity.
-The workaround consists of sending _ping_ messages (or empty objects) periodically.
-
-[source, java]
-----
-@Channel("fruits")
-Multi<Fruit> fruits;
-
-@Inject
-ObjectMapper mapper;
-
-@GET
-@Produces(MediaType.SERVER_SENT_EVENTS)
-public Multi<String> stream() {
-    return Multi.createBy().merging()
-            .streams(
-                    fruits.map(this::toJson),
-                    emitAPeriodicPing()
-            );
-}
-
-Multi<String> emitAPeriodicPing() {
-    return Multi.createFrom().ticks().every(Duration.ofSeconds(10))
-            .onItem().transform(x -> "{}");
-}
-
-private String toJson(Fruit f) {
-    try {
-        return mapper.writeValueAsString(f);
-    } catch (JsonProcessingException e) {
-        throw new RuntimeException(e);
-    }
-}
-----
-
-The workaround is a bit more complex as, besides sending the fruits coming from Kafka, we need to send pings periodically.
-To achieve this, we merge the stream coming from Kafka and a periodic stream emitting `{}` every 10 seconds.
-
-== Logging
-
-To reduce the amount of log messages written by the Kafka client, Quarkus sets the level of the following log categories to `WARNING`:
-
-- `org.apache.kafka.clients`
-- `org.apache.kafka.common.utils`
-- `org.apache.kafka.common.metrics`
-
-You can override the configuration by adding the following lines to the `application.properties`:
-
-[source, properties]
-----
-quarkus.log.category."org.apache.kafka.clients".level=INFO
-quarkus.log.category."org.apache.kafka.common.utils".level=INFO
-quarkus.log.category."org.apache.kafka.common.metrics".level=INFO
-----
-
-== Going further
-
-This guide has shown how you can interact with Kafka using Quarkus.
-It utilizes SmallRye Reactive Messaging to build data streaming applications.
-
-If you want to go further, check the documentation of https://smallrye.io/smallrye-reactive-messaging[SmallRye Reactive Messaging], the implementation used in Quarkus.
diff --git a/_versions/2.7/guides/kogito-dev-services-build-time-config.adoc b/_versions/2.7/guides/kogito-dev-services-build-time-config.adoc deleted file mode 100644 index 5132a4afd63..00000000000 --- a/_versions/2.7/guides/kogito-dev-services-build-time-config.adoc +++ /dev/null @@ -1,54 +0,0 @@ -[.configuration-legend] -icon:lock[title=Fixed at build time] Configuration property fixed at build time - All other configuration properties are overridable at runtime -[.configuration-reference, cols="80,.^10,.^10"] -|=== - -h|[[quarkus-kogito-dev-services-build-time-config_configuration]]link:#quarkus-kogito-dev-services-build-time-config_configuration[Configuration property] - -h|Type -h|Default - -a|icon:lock[title=Fixed at build time] [[quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.enabled]]`link:#quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.enabled[quarkus.kogito.devservices.enabled]` - -[.description] --- -If DevServices has been explicitly enabled or disabled. DevServices is generally enabled by default, unless there is an existing configuration present. When DevServices is enabled Quarkus will attempt to automatically configure and start a Data Index when running in Dev mode. ---|boolean -|true - - -a|icon:lock[title=Fixed at build time] [[quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.image-name]]`link:#quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.image-name[quarkus.kogito.devservices.image-name]` - -[.description] --- -The container image name to use. ---|string -|quay.io/kiegroup/kogito-data-index-ephemeral - - -a|icon:lock[title=Fixed at build time] [[quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.port]]`link:#quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.port[quarkus.kogito.devservices.port]` - -[.description] --- -Optional fixed port the dev service will listen to. 
-If not defined, the port will be chosen randomly. ---|int -|8180 - -a|icon:lock[title=Fixed at build time] [[quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.shared]]`link:#quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.shared[quarkus.kogito.devservices.shared]` - -[.description] --- -Indicates if the Data Index instance managed by Quarkus Dev Services is shared. When shared, Quarkus looks for running containers using label-based service discovery. If a matching container is found, it is used, and so a second one is not started. Otherwise, Dev Services for Kogito starts a new container. The discovery uses the `kogito-dev-service-data-index` label. The value is configured using the service-name property. Container sharing is only used in dev mode. ---|boolean -|true - -a|icon:lock[title=Fixed at build time] [[quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.service-name]]`link:#quarkus-kogito-dev-services-build-time-config_quarkus.kogito.devservices.service-name[quarkus.kogito.devservices.service-name]` - -[.description] --- -The value of the `kogito-dev-service-data-index` label attached to the started container. This property is used when shared is set to true. In this case, before starting a container, Dev Services for Kogito looks for a container with the `kogito-dev-service-data-index` label set to the configured value. If found, it will use this container instead of starting a new one. Otherwise it starts a new container with the `kogito-dev-service-data-index` label set to the specified value. This property is used when you need multiple shared Data Index instances. 
---|string -|kogito-data-index - -|=== \ No newline at end of file diff --git a/_versions/2.7/guides/kogito-dev-services.adoc b/_versions/2.7/guides/kogito-dev-services.adoc deleted file mode 100644 index 361286d8cce..00000000000 --- a/_versions/2.7/guides/kogito-dev-services.adoc +++ /dev/null @@ -1,65 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Dev Services for Kogito - -include::./attributes.adoc[] - -If any Kogito process-related extension is present (e.g. `kogito-quarkus` or `kogito-quarkus-processes`), Dev Services for Kogito automatically starts a Data Index in dev mode. -So, you don't have to start it manually or set up any other service yourself. -The application is configured automatically, meaning that it will replicate any -Kogito messaging events related to Process Instances and User Tasks into the provisioned Data Index instance. - -Additionally, the xref:dev-ui.adoc[Dev UI] available at http://localhost:8080/q/dev[/q/dev] complements this feature with a Dev UI page which helps you query the Data Index via its GraphiQL UI. - -image::dev-ui-kogito-data-index-card.png[alt=Dev UI Kogito,role="center"] - -image::dev-ui-kogito-data-index.png[alt=Dev UI Kogito Data Index GraphiQL,role="center"] - -For more details about how to query data about processes and user tasks, please visit the https://docs.kogito.kie.org/latest/html_single/#ref-data-index-service-queries_kogito-configuring[Kogito Data Index documentation]. - -== Enabling / Disabling Dev Services for Kogito - -Dev Services for Kogito is automatically enabled unless: - -- `quarkus.kogito.devservices.enabled` is set to `false` - -Dev Services for Kogito relies on Docker to start the Data Index. -If your environment does not support Docker, you will need to start the Data Index manually, or connect to an already running instance.
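For example, to turn the Data Index Dev Service off entirely (e.g. on a machine without Docker), set the flag above in `application.properties`; the property name comes from this guide, the snippet itself is just an illustration:

```properties
# Disable Dev Services for Kogito (no Data Index container will be started)
quarkus.kogito.devservices.enabled=false
```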
- -== Shared Data Index - -In case you would like to share the Data Index instance between applications, -Dev Services for Kogito implements a _service discovery_ mechanism that lets multiple Quarkus applications running in _dev_ mode share a single instance. - -NOTE: Dev Services for Kogito starts the container with the `kogito-dev-service-data-index` label which is used to identify the container. - -If you need multiple (shared) Data Index instances, you can configure the `quarkus.kogito.devservices.service-name` attribute and indicate the instance name. -It looks for a container with the same value, or starts a new one if none can be found. -The default service name is `kogito-data-index`. - -Sharing is enabled by default in dev mode. -You can disable the sharing with `quarkus.kogito.devservices.shared=false`. - -== Setting the port - -By default, Dev Services for Kogito starts a Data Index using port 8180. -You can set the port by configuring the `quarkus.kogito.devservices.port` property. - -== Configuring the image - -Dev Services for Kogito uses the `kiegroup/kogito-data-index-ephemeral` image. -You can select any version from https://quay.io/repository/kiegroup/kogito-data-index-ephemeral?tab=tags.
- -[source, properties] ----- -quarkus.kogito.devservices.image-name=quay.io/kiegroup/kogito-data-index-ephemeral ----- - -== References - -* xref:dev-ui.adoc[Dev UI] -* https://docs.kogito.kie.org/latest/html_single/[Kogito Documentation] -* xref:kogito.adoc[Quarkus - Kogito] \ No newline at end of file diff --git a/_versions/2.7/guides/kogito-dmn.adoc b/_versions/2.7/guides/kogito-dmn.adoc deleted file mode 100644 index 7ae91e4b184..00000000000 --- a/_versions/2.7/guides/kogito-dmn.adoc +++ /dev/null @@ -1,260 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Kogito DMN support to add decision automation capabilities to an application - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use Kogito to add business automation -and power it up with DMN decision support. - -Kogito is a next generation business automation toolkit that originates from well known Open Source projects -Drools (for business rules) and jBPM (for business processes). Kogito aims at providing a newer approach -to business automation where the main message is to expose your business knowledge (processes, rules, decisions, predictions) -in a domain specific way. - -== Prerequisites - -:prerequisites-docker: -:prerequisites-ide: (VSCode is preferred, with the Red Hat DMN Editor VSCode Extension) -include::includes/devtools/prerequisites.adoc[] - -=== DMN Editor - -Kogito Tooling is currently supported via VSCode, web browsers and on other platforms: - -VSCode:: - - Download and install the https://marketplace.visualstudio.com/items?itemName=redhat.vscode-extension-dmn-editor[Red Hat DMN Editor VSCode Extension] to edit and model process definitions from VSCode IDE. 
- -Online:: - - To avoid any modeler installation, you can directly use https://dmn.new[DMN.new] to author your DMN model through your favorite web browser. - -Other platforms:: - - You can refer to the https://kiegroup.github.io/kogito-online/#/download[Business Modeler Hub] to download the latest platforms supported for the https://github.com/kiegroup/kogito-tooling/releases[Kogito tooling releases]. - - -// leave the double space above -== Architecture - -In this example, we build a very simple microservice which offers one REST endpoint: - -* `/pricing` - -This endpoint will be automatically generated based on the defined DMN model. - -=== Decision rules as a DMN model - -A DMN model is an open standard for visual and semantic execution of declarative logic; DMN allows you to externalise decision logic into reusable pieces that can easily be used in a declarative way. There are multiple ways of writing rules other than DMN, such as decision tables, decision trees, plain rules, etc. - -For this example we focus on using the https://drools.org/learn/dmn.html[DMN (Decision Model and Notation)] open standard to describe the decision rules. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the complete example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `kogito-dmn-quickstart` {quickstarts-tree-url}/kogito-dmn-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project.
Create a new project with the following command: - -:create-app-artifact-id: kogito-dmn-quickstart -:create-app-extensions: dmn,resteasy-jackson -include::includes/devtools/create-app.adoc[] - -This command generates a Maven project, importing the `kogito-quarkus-decisions` extension -that comes with all needed dependencies and configuration to equip your application -with business automation. -It also imports the `resteasy-jackson` extension that is needed for Kogito to expose REST services. - -The `kogito-quarkus-decisions` extension is a specialized extension of the Kogito project focusing only on DMN support; if you want to -make use of the full capabilities offered by the Kogito platform, you can reference the generic Kogito extension of Quarkus. - -If you already have your Quarkus project configured, you can add the `kogito-quarkus-decisions` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: dmn -include::includes/devtools/extension-add.adoc[] - -or alternatively: - -:add-extension-extensions: kogito-quarkus-decisions -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>org.kie.kogito</groupId> -    <artifactId>kogito-quarkus-decisions</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("org.kie.kogito:kogito-quarkus-decisions") ---- - -== Authoring the DMN model - -We will author a DMN model that will provide a base price quotation based on some criteria. -Create a new file `pricing.dmn` inside the `src/main/resources/` directory of the generated project.
- -This model should consist of: - -* `Age` (InputData element, of type `number`) -* `Previous incidents?` (InputData element, of type `boolean`) -* `Base price` (Decision element, of type `number`) - -And the Decision Requirement Graph (DRG) should look like: - -image:kogito-DMN-guide-screenshot-DRG.png[alt=DMN model definition] - -To get started quickly, you may copy the DMN model definition file from the -{quickstarts-tree-url}/kogito-dmn-quickstart/src/main/resources/pricing.dmn[quickstart]. - -The decision logic for the `Base price` Decision node shall be a DMN Decision Table with the following entries: - -image:kogito-DMN-guide-screenshot-DT.png[alt=DMN Decision Table definition] - -To author the DMN model yourself, just follow these steps: - -* drag an InputData node from the palette, name it `Age` and assign it type `number` using the Properties panel. -* drag an InputData node from the palette, name it `Previous incidents?` and assign it type `boolean` using the Properties panel. -* drag a Decision node from the palette, name it `Base price` and assign it type `number` using the Properties panel. -* establish an `InformationRequirement` edge from the `Age` node to the `Base price` node, using the node palette that appears when hovering the mouse near the element in the graph. -* establish an `InformationRequirement` edge from the `Previous incidents?` node to the `Base price` node, using the node palette that appears when hovering the mouse near the element in the graph. -* select the Edit decision logic option for the node `Base price`. -** select Decision Table as the decision logic for the node. -** create the relevant rules (rows) entries as per the above screenshot. -* save the file - -For more information about DMN, you can reference the Kogito documentation at the links below.
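To make the decision table concrete, here is a plain-Java restatement of the kind of logic it encodes. Only the `500` outcome for age 47 with no previous incidents is confirmed by this guide's sample request; the age threshold and the other branch value are invented for illustration:

```java
// Illustrative only: a hand-written approximation of the "Base price" DMN
// decision table. The 500 outcome matches the guide's sample request
// {"Age": 47, "Previous incidents?": false}; everything else is assumed.
public class BasePriceSketch {

    static int basePrice(int age, boolean previousIncidents) {
        if (age >= 18 && !previousIncidents) {
            return 500;   // confirmed by the guide's sample request/response
        }
        return 1000;      // hypothetical surcharge for other cases
    }

    public static void main(String[] args) {
        System.out.println(basePrice(47, false)); // prints 500
    }
}
```

The real evaluation is generated from `pricing.dmn`; this sketch only illustrates what the table's rows express.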
- -== Running and Using the Application - -=== Running in Dev Mode - -To run the microservice in dev mode, use: - -include::includes/devtools/dev.adoc[] - -=== Running in JVM Mode - -When you're done playing with dev mode, you can run it as a standard Java application. - -First compile it: - -include::includes/devtools/build.adoc[] - -Then run it: - -[source,bash] ---- -java -jar target/quarkus-app/quarkus-run.jar ---- - -=== Running in Native Mode - -This same demo can be compiled into native code: no modifications required. - -This implies that you no longer need to install a JVM on your -production environment, as the runtime technology is included in -the produced binary, and optimized to run with minimal resource overhead. - -Compilation will take a bit longer, so this step is disabled by default; -let's build a native executable with the following command: - -include::includes/devtools/build-native.adoc[] - -Native compilation will always take some time to complete; then, you'll be able to run this binary directly: - -[source,bash] ---- -./target/kogito-dmn-quickstart-1.0.0-SNAPSHOT-runner ---- - -== Testing the Application - -To test your final decision service application, just send a request to the endpoint, supplying the expected inputs as JSON -payload: - -[source,bash] ---- -curl -X POST 'http://localhost:8080/pricing' \ --H 'Accept: application/json' \ --H 'Content-Type: application/json' \ --d '{ "Age": 47, "Previous incidents?": false }' ---- - -In the response, the `Base price` will be quoted, according to the defined DMN model, for a total amount of `500`; this is visible in the response payload: - -[source,JSON] ---- -{"Previous incidents?":false,"Age":47,"Base price":500} ---- - -== Using Test Scenario tool - -Kogito allows you to define test scenarios visually and execute them as JUnit tests as part of the normal build of the Quarkus application.
- -To be able to use Test Scenario assets in your application, an additional dependency is required: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>org.kie.kogito</groupId> -    <artifactId>kogito-scenario-simulation</artifactId> -    <scope>test</scope> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -testImplementation("org.kie.kogito:kogito-scenario-simulation") ---- - -You can now create a `KogitoScenarioJunitActivatorTest.java` class file in the `src/test/java/testscenario` directory: -[source,java] ---- -package testscenario; -@org.junit.runner.RunWith(org.kogito.scenariosimulation.runner.KogitoJunitActivator.class) -public class KogitoScenarioJunitActivatorTest { -} ---- - -This activator class is a custom JUnit runner that enables the execution of test scenario files in your application. - -You can now create a `PricingTest.scesim` file in the `src/test/resources` directory: - -image:kogito-DMN-guide-screenshot-scesim.png[alt=DMN Test scenario] - -The test scenarios will be run as part of the JUnit test suite. - -For more information about the Test Scenario tool, you can reference the Kogito documentation at the links below. - -== Where to go from here - -This was a minimal example using DMN modeling; as you can see, the Kogito framework allows you to quickly define decision logic using a visual and standard notation, such as DMN, and create a fully functioning microservice on top of Quarkus! - -To see additional capabilities of the Kogito platform, you can reference the Kogito documentation at the links below. -This includes more detailed guides about integrating with Processes (BPMN2), Rules (Drools' DRL), Prediction (PMML), Test Scenario (visual notation for testing), assisted deployment to OpenShift, and many more.
- -== References - -* https://kogito.kie.org[Kogito Website] -* https://drools.org/learn/dmn.html[What is DMN] -* https://docs.jboss.org/kogito/release/latest/html_single[Kogito Documentation] diff --git a/_versions/2.7/guides/kogito-drl.adoc b/_versions/2.7/guides/kogito-drl.adoc deleted file mode 100644 index 5e0310df346..00000000000 --- a/_versions/2.7/guides/kogito-drl.adoc +++ /dev/null @@ -1,352 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Kogito to add rule engine capabilities to an application - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use Kogito to add rule processing based on DRL files. - -Kogito is a next generation business automation toolkit that originates from well known Open Source projects -Drools (for business rules) and jBPM (for business processes). Kogito aims at providing another approach -to business automation where the main message is to expose your business knowledge (processes, rules and decisions) -in a domain specific way. - -== Prerequisites - -:prerequisites-docker: -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this example, we build a very simple microservice which offers one REST endpoint: - -* `/find-approved` - -This endpoint will be automatically generated based on the query inserted in the Rule Unit of the DRL file. -It's an example of a stateless invocation (also called "pure function invocation") in which the execution of our business rules doesn't have any side effects. -The output value returned is based solely on the input provided. - -=== Business rule - -A business rule allows you to externalise decision logic into reusable pieces that can easily be -used in a declarative way.
There are multiple ways of writing rules like https://drools.org/learn/dmn.html[DMN models], -decision tables, decision trees, rules, etc. For this example we focus on the rule format backed by DRL -(Drools Rule Language). - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the complete example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `kogito-drl-quickstart` {quickstarts-tree-url}/kogito-drl-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: kogito-drl-quickstart -:create-app-extensions: kogito-quarkus-rules,resteasy-jackson -include::includes/devtools/create-app.adoc[] - -This command generates a Maven project, importing the `kogito-quarkus-rules` extension -that comes with all needed dependencies and configuration to equip your application -with business automation. -It also imports the `resteasy-jackson` extension that is needed for Kogito to expose REST services. - -If you already have your Quarkus project configured, you can add the `kogito-quarkus-rules` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: kogito-quarkus-rules -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>org.kie.kogito</groupId> -    <artifactId>kogito-quarkus-rules</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("org.kie.kogito:kogito-quarkus-rules") ---- - -== Writing the application - -Let's start from the application domain model.
-This application will approve loan applications, so we have a class with all the details of the requested Loan: - -[source,java] ---- -package org.acme.kogito.model; - -public class LoanApplication { - - private String id; - private Applicant applicant; - private int amount; - private int deposit; - private boolean approved = false; - - public LoanApplication() { - - } - - public LoanApplication(String id, Applicant applicant, - int amount, int deposit) { - this.id = id; - this.applicant = applicant; - this.amount = amount; - this.deposit = deposit; - } - - public String getId() { - return id; - } - - public void setId(String id) { - this.id = id; - } - - public Applicant getApplicant() { - return applicant; - } - - public void setApplicant(Applicant applicant) { - this.applicant = applicant; - } - - public int getAmount() { - return amount; - } - - public void setAmount(int amount) { - this.amount = amount; - } - - public int getDeposit() { - return deposit; - } - - public void setDeposit(int deposit) { - this.deposit = deposit; - } - - public boolean isApproved() { - return approved; - } - - public void setApproved(boolean approved) { - this.approved = approved; - } -} ---- - -And another class with the details of the Applicant: - -[source,java] ---- -package org.acme.kogito.model; - -public class Applicant { - - private String name; - private int age; - - public Applicant() { - } - - public Applicant(String name, int age) { - this.name = name; - this.age = age; - } - - public String getName() { - return name; - } - - public void setName(String name) { - this.name = name; - } - - public int getAge() { - return age; - } - - public void setAge(int age) { - this.age = age; - } -} ---- - -Next, we create a rule file `loan-rules.drl` inside the `src/main/resources/org/acme/kogito/queries` folder of -the generated project.
- -[source,plain] ---- -package org.acme.kogito.queries; - -unit LoanUnit; // no need to use globals, all variables and facts are stored in the rule unit - -import org.acme.kogito.model.Applicant; -import org.acme.kogito.model.LoanApplication; - -rule LargeDepositApprove when - $l: /loanApplications[ applicant.age >= 20, deposit >= 1000, amount <= maxAmount ] // oopath style -then - modify($l) { setApproved(true) }; -end - -rule LargeDepositReject when - $l: /loanApplications[ applicant.age >= 20, deposit >= 1000, amount > maxAmount ] -then - modify($l) { setApproved(false) }; -end - -// ... more loans approval/rejections business rules ... - -// approved loan applications are now retrieved through a query -query FindApproved - $l: /loanApplications[ approved ] -end - ---- - -In this file there are some example rules to decide whether the Loan should be approved or not. The service requires the Applicant to be at least 20 years old and to have a deposit of at least 1000 on their bank account. -The amount of the Loan shouldn't be more than the `maxAmount`. - -This example uses Rule Units, a new concept introduced in Kogito that helps users encapsulate the set of rules and the facts against which those rules will be matched. - -The facts will be inserted into a `DataStore`, a type-safe entry point. To make everything work, we need to define both the RuleUnit and the DataStore.
- -[source,java] ---- -package org.acme.kogito.queries; - -import org.acme.kogito.model.LoanApplication; -import org.kie.kogito.rules.DataSource; -import org.kie.kogito.rules.DataStore; -import org.kie.kogito.rules.RuleUnitData; - -public class LoanUnit implements RuleUnitData { - - private int maxAmount; - private DataStore<LoanApplication> loanApplications; - - public LoanUnit() { - this(DataSource.createStore(), 0); - } - - public LoanUnit(DataStore<LoanApplication> loanApplications, int maxAmount) { - this.loanApplications = loanApplications; - this.maxAmount = maxAmount; - } - - public DataStore<LoanApplication> getLoanApplications() { return loanApplications; } - - public void setLoanApplications(DataStore<LoanApplication> loanApplications) { - this.loanApplications = loanApplications; - } - - public int getMaxAmount() { return maxAmount; } - public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } -} ---- - -And that's it: the REST endpoint to validate Loan Applications will be automatically generated from this Rule Unit. - - -== Running and Using the Application - -=== Running in Dev Mode - -To run the microservice in dev mode, use: - -include::includes/devtools/dev.adoc[] - -=== Running in JVM Mode - -When you're done playing with dev mode, you can run it as a standard Java application. - -First compile it: - -include::includes/devtools/build.adoc[] - -Then run it: - -[source,bash] ---- -java -jar target/quarkus-app/quarkus-run.jar ---- - -=== Running in Native Mode - -This same demo can be compiled into native code: no modifications required. - -This implies that you no longer need to install a JVM on your -production environment, as the runtime technology is included in -the produced binary, and optimized to run with minimal resource overhead.
- -Compilation will take a bit longer, so this step is disabled by default; -let's build a native executable with the following command: - -include::includes/devtools/build-native.adoc[] - -After getting a cup of coffee, you'll be able to run this binary directly: - -[source,bash] ---- -./target/kogito-drl-quickstart-1.0.0-SNAPSHOT-runner ---- - -== Testing the Application - -To test your application, just send a request to the service, giving the loan applications as JSON -payload. - -[source,bash] ---- - -curl -X POST http://localhost:8080/find-approved \ - -H 'Content-Type: application/json'\ - -H 'Accept: application/json' \ - -d '{"maxAmount":5000, - "loanApplications":[ - {"id":"ABC10001","amount":2000,"deposit":1000, - "applicant":{"age":45,"name":"John"}}, - {"id":"ABC10002","amount":5000,"deposit":100, - "applicant":{"age":25,"name":"Paul"}}, - {"id":"ABC10015","amount":1000,"deposit":100, - "applicant":{"age":12,"name":"George"}} -]}' ---- - -In the response, the list of the approved applications will be returned: - - -[source,JSON] ---- -[{"id":"ABC10001", - "applicant":{"name":"John","age":45}, - "amount":2000,"deposit":1000,"approved":true}] ---- - -== References - -* https://kogito.kie.org[Kogito Website] -* https://docs.jboss.org/kogito/release/latest/html_single[Kogito Documentation] diff --git a/_versions/2.7/guides/kogito-pmml.adoc b/_versions/2.7/guides/kogito-pmml.adoc deleted file mode 100644 index fa528e22a81..00000000000 --- a/_versions/2.7/guides/kogito-pmml.adoc +++ /dev/null @@ -1,243 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Kogito to add prediction capabilities to an application - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use Kogito to add business automation -to power it up with predictions.
- -Kogito is a next generation business automation toolkit that originates from the well known Open Source project -Drools (for predictions). Kogito aims at providing another approach -to business automation where the main message is to expose your business knowledge (processes, rules, decisions, predictions) -in a domain specific way. - - -== Prerequisites - -:prerequisites-docker: -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this example, we build a very simple microservice which offers one REST endpoint: - -* `/LogisticRegressionIrisData` - -This endpoint will be automatically generated based on the given PMML file, which in turn will -make use of generated code to make certain predictions based on the data being processed. - -=== PMML file - -The PMML file describes the prediction logic of our microservice. -It should provide the actual model (Regression, Tree, Scorecard, Clustering, etc.) needed to make the prediction. - -=== Prediction endpoints - -These are the entry points to the service that can be consumed by clients. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the complete example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `kogito-pmml-quickstart` {quickstarts-tree-url}/kogito-pmml-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: kogito-pmml-quickstart -:create-app-extensions: kogito,resteasy-jackson -include::includes/devtools/create-app.adoc[] - -This command generates a Maven project, importing the `kogito` extension -that comes with all needed dependencies and configuration to equip your application -with business automation.
-It also imports the `resteasy-jackson` extension that is needed for Kogito to expose REST services. - -If you already have your Quarkus project configured, you can add the `kogito` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: kogito -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>org.kie.kogito</groupId> -    <artifactId>kogito-quarkus</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("org.kie.kogito:kogito-quarkus") ---- - -== Writing the application - -Predictions are evaluated based on a PMML model, whose standard and specifications may be read http://dmg.org/pmml/v4-4-1/GeneralStructure.html[here]. -Let's start by adding a simple PMML file: `LogisticRegressionIrisData.pmml`. It contains a _Regression_ model named `LogisticRegressionIrisData`, and it uses a regression function to predict plant species from sepal and petal dimensions: - -[source,xml] ----
-<!-- PMML model omitted for brevity: a DataDictionary with the four iris measurement fields (Sepal.Length, Sepal.Width, Petal.Length, Petal.Width), the Species output field, and the RegressionModel named "LogisticRegressionIrisData" with its regression tables. Copy the complete file from the quickstart linked below. --> ---- - -During project compilation, Kogito will read the file and generate the classes needed for the evaluation, together with a couple of REST endpoints. - -To get started quickly, copy the PMML file from the -{quickstarts-tree-url}/kogito-pmml-quickstart/src/main/resources/LogisticRegressionIrisData.pmml[quickstart]. - -== Running and Using the Application - -=== Running in Dev Mode - -To run the microservice in dev mode, use: - -include::includes/devtools/dev.adoc[] - -=== Running in JVM Mode - -When you're done playing with dev mode, you can run it as a standard Java application. - -First compile it: - -include::includes/devtools/build.adoc[] - -Then run it: - -[source,bash] ---- -java -jar target/quarkus-app/quarkus-run.jar ---- - -=== Running in Native Mode - -This same demo can be compiled into native code: no modifications required. - -This implies that you no longer need to install a JVM on your -production environment, as the runtime technology is included in -the produced binary, and optimized to run with minimal resource overhead. - -Compilation will take a bit longer, so this step is disabled by default; -let's build a native executable with the following command: - -include::includes/devtools/build-native.adoc[] - -After getting a cup of coffee, you'll be able to run this binary directly: - -[source,bash] ---- -./target/kogito-pmml-quickstart-1.0.0-SNAPSHOT-runner ---- - -== Testing the Application - -To test your application, just send a request to the service, giving the iris measurements as JSON -payload.
- -[source,bash] ---- -curl -X POST http://localhost:8080/LogisticRegressionIrisData \ - -H 'content-type: application/json' \ - -H 'accept: application/json' \ - -d '{ "Sepal.Length": 6.9, "Sepal.Width": 3.1, "Petal.Length": 5.1, "Petal.Width": 2.3 }' ---- - -In the response, you should see the prediction, which should be _virginica_: - -[source,JSON] ---- -{ - "Species": "virginica" -} ---- - -You can also invoke the _descriptive_ endpoint, which will also provide the evaluated _OutputField_ values: - -[source,bash] ---- -curl -X POST http://localhost:8080/LogisticRegressionIrisData/descriptive \ - -H 'content-type: application/json' \ - -H 'accept: application/json' \ - -d '{ "Sepal.Length": 6.9, "Sepal.Width": 3.1, "Petal.Length": 5.1, "Petal.Width": 2.3 }' ---- - -[source,JSON] ---- -{ - "correlationId": null, - "segmentationId": null, - "segmentId": null, - "segmentIndex": 0, - "resultCode": "OK", - "resultObjectName": "Species", - "resultVariables": { - "Probability_setosa": 0.04871813160275851, - "Probability_versicolor": 0.04509592640753013, - "Probability_virginica": 0.9061859419897114, - "Species": "virginica" - } -} ---- - -== References - -* https://kogito.kie.org[Kogito Website] -* https://docs.jboss.org/kogito/release/latest/html_single[Kogito Documentation] diff --git a/_versions/2.7/guides/kogito.adoc b/_versions/2.7/guides/kogito.adoc deleted file mode 100644 index 6220a9c5607..00000000000 --- a/_versions/2.7/guides/kogito.adoc +++ /dev/null @@ -1,487 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Kogito to add business automation capabilities to an application - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use Kogito to add business automation -to power it up with business processes and rules.
- -Kogito is a next-generation business automation toolkit that originates from the well-known open source projects -Drools (for business rules) and jBPM (for business processes). Kogito aims to provide another approach -to business automation, one that exposes your business knowledge (processes, rules, decisions, predictions) -in a domain-specific way. - -== Prerequisites - -:prerequisites-docker: -:prerequisites-ide: (VSCode is preferred with the Red Hat BPMN Editor VSCode Extension) -include::includes/devtools/prerequisites.adoc[] - -=== Install modelling plugins in your IDE - -Kogito Tooling is currently supported in VSCode, online, and on other platforms: - -VSCode:: - - Download and install the https://marketplace.visualstudio.com/items?itemName=redhat.vscode-extension-bpmn-editor[Red Hat BPMN Editor VSCode Extension] to edit and model process definitions from the VSCode IDE. - -Online:: - - To avoid any modeler installation, you can directly use https://bpmn.new[BPMN.new] to design and model your process through your favorite web browser. - -Eclipse:: - - To model your processes visually, download the Eclipse IDE and - install the Eclipse BPMN2 Modeller plugin (with the jBPM Runtime Extension) from the Marketplace. - -Other platforms:: - - You can go to the https://kiegroup.github.io/kogito-online/#/download[Business Modeler Hub] to download the latest supported platforms for the https://github.com/kiegroup/kogito-tooling/releases[Kogito tooling releases]. - -== Architecture - -In this example, we build a very simple microservice which offers one REST endpoint: - -* `/persons` - -This endpoint will be automatically generated based on the business process, which in turn will -make use of business rules to make certain decisions based on the data being processed. - -=== Business process - -The business process will be responsible for encapsulating the business logic of our microservice.
-It should provide a complete set of steps to achieve a business goal. -At the same time, this is the entry point to the service that can be consumed by clients. - -=== Business rule - -A business rule allows you to externalise decision logic into reusable pieces that can easily be -used in a declarative way. There are multiple ways of writing rules, such as https://drools.org/learn/dmn.html[DMN models], -decision tables, decision trees, rules, etc. - -For this example we focus on the rule format backed by DRL (Drools Rule Language), -but the same business logic may be expressed with other supported Kogito knowledge formats as well. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the complete example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `kogito-quickstart` {quickstarts-tree-url}/kogito-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: kogito-quickstart -:create-app-extensions: kogito,resteasy-jackson -include::includes/devtools/create-app.adoc[] - -This command generates a Maven project, importing the `kogito` extension -that comes with all needed dependencies and configuration to equip your application -with business automation. -It also imports the `resteasy-jackson` extension that is needed for Kogito to expose REST services.
- -If you already have your Quarkus project configured, you can add the `kogito` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: kogito -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>org.kie.kogito</groupId> -    <artifactId>kogito-quarkus</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("org.kie.kogito:kogito-quarkus") ---- - -== Writing the application - -Let's start by implementing the simple data object `Person`. As you can see from the source code below, it is just a POJO: - -[source,java] ---- -package org.acme.kogito.model; - -public class Person { - - private String name; - private int age; - private boolean adult; - - public String getName() { - return name; - } - - public void setName(String name) { - this.name = name; - } - - public int getAge() { - return age; - } - - public void setAge(int age) { - this.age = age; - } - - public boolean isAdult() { - return adult; - } - - public void setAdult(boolean adult) { - this.adult = adult; - } - - @Override - public String toString() { - return "Person [name=" + name + ", age=" + age + ", adult=" + adult + "]"; - } - -} - ---- - -Next, we create a rule file `person-rules.drl` inside the `src/main/resources/org/acme/kogito` folder of -the generated project. - -[source,plain] ---- -package org.acme.kogito; - -unit PersonUnit; - -import org.acme.kogito.model.Person; - -rule "Is adult" -when - $person: /person[age > 18] -then - modify($person) { - setAdult(true) - }; -end ---- - -This is really a simple rule that marks a person who is older than 18 years as an adult.
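For intuition, the effect of this rule can be sketched in plain Java (illustration only; Kogito compiles and executes the DRL itself, so none of the names below are generated code):

```java
// Plain-Java sketch of the "Is adult" DRL rule above (illustrative only).
// The guard mirrors the constraint /person[age > 18]; the body mirrors
// modify($person) { setAdult(true) }.
public class IsAdultRuleSketch {

    // Simplified stand-in for the Person POJO from the guide.
    static class Person {
        String name;
        int age;
        boolean adult;

        Person(String name, int age) {
            this.name = name;
            this.age = age;
        }
    }

    static void applyIsAdultRule(Person person) {
        if (person.age > 18) {
            person.adult = true;
        }
    }

    public static void main(String[] args) {
        Person john = new Person("John Quark", 20);
        applyIsAdultRule(john);
        System.out.println(john.adult); // prints "true"
    }
}
```

Note that the constraint is strictly greater than 18, so an 18-year-old would not be marked as an adult.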
- -This example rule uses Rule Units, a new concept introduced in Kogito that helps users encapsulate the set of rules and the facts against which those rules will be matched. Facts are inserted into a `DataStore`, a type-safe entry point. To make everything work, we need to define both the RuleUnit and the DataStore, by creating a new class `PersonUnit` inside the `src/main/java/org/acme/kogito` directory: - -[source,java] ---- -package org.acme.kogito; - -import org.acme.kogito.model.Person; -import org.kie.kogito.rules.DataSource; -import org.kie.kogito.rules.RuleUnitData; -import org.kie.kogito.rules.SingletonStore; - -public class PersonUnit implements RuleUnitData { - - private SingletonStore<Person> person; - - public PersonUnit() { - this(DataSource.createSingleton()); - } - - public PersonUnit(SingletonStore<Person> person) { - this.person = person; - } - - public SingletonStore<Person> getPerson() { - return person; - } - - public void setPerson(SingletonStore<Person> person) { - this.person = person; - } -} ---- - -Finally, we create a business process that will make use of this rule and some other -activities to approve a given person. Using the new item wizard (File -> New -> Other -> BPMN2 Model), -create `persons.bpmn` inside the `src/main/resources/org/acme/kogito` folder of the generated project.
- -This process should consist of - -* start event -* business rule task -* exclusive gateway -* user task -* end events - -And should look like - -image:kogito-guide-screenshot.png[alt=Process definition] - -To get started quickly, copy the process definition from the -{quickstarts-tree-url}/kogito-quickstart/src/main/resources/org/acme/kogito/persons.bpmn2[quickstart] - -To model this process yourself, just follow these steps (the start event should be added automatically) - -* define a process variable with name `person` of type `org.acme.kogito.model.Person` -* drag the Tasks -> Business Rule Task from the palette and drop it next to the start event, link it with the start event -** double click on the business rule task -*** on tab I/O Parameters, set data input and output (map the `person` process variable to input data with name `person` and same for data output) -*** on tab Business Rule Task, set the rule flow group to the FQCN value of the RuleUnit using the `unit:` prefix (`unit:org.acme.kogito.PersonUnit`) -* drag the Gateways -> XOR gateway from the palette and drop it next to the business rule task, link it with the rule task -* drag the Tasks -> User Task from the palette and drop it next to the gateway, link it with the gateway -** double click on the user task -*** on tab User Task, set the task name to `ChildrenHandling` -*** on tab I/O Parameters, set data input (map the `person` process variable to input data with name `person`) -* drag the End Events -> End from the palette and drop it next to the user task, link it with the user task -* drag the End Events -> End from the palette and drop it next to the gateway, link it with the gateway -* double click on the gateway -** on tab Gateway, set the diverging direction for the gateway -** on tab Gateway, set conditions on the sequence flow list -*** -> going to the end event `return person.isAdult() == true;` with language `Java` -*** -> going to the user task `return person.isAdult() == false;` with language `Java` -* save the file - -== 
Running and Using the Application - -=== Running in Dev Mode - -To run the microservice in dev mode, use: - -include::includes/devtools/dev.adoc[] - -=== Running in JVM Mode - -When you're done playing with "dev-mode", you can run it as a standard Java application. - -First compile it: - -include::includes/devtools/build.adoc[] - -Then run it: - -[source,bash] ---- -java -jar target/quarkus-app/quarkus-run.jar ---- - -=== Running in Native Mode - -This same demo can be compiled into native code: no modifications required. - -This implies that you no longer need to install a JVM on your -production environment, as the runtime technology is included in -the produced binary, and optimized to run with minimal resource overhead. - -Compilation will take a bit longer, so this step is disabled by default; -let's build a native executable with the following command: - -include::includes/devtools/build-native.adoc[] - -After getting a cup of coffee, you'll be able to run this binary directly: - -[source,bash] ---- -./target/kogito-quickstart-1.0.0-SNAPSHOT-runner ---- - -== Testing the Application - -To test your application, just send a request to the service with the person as a JSON -payload. - -[source,bash] ---- -curl -X POST http://localhost:8080/persons \ - -H 'content-type: application/json' \ - -H 'accept: application/json' \ - -d '{"person": {"name":"John Quark", "age": 20}}' ---- - -In the response, the person should be approved as an adult and that should also be visible in the response payload.
- -[source,JSON] ---- -{"id":"dace1d6a-a5fa-429d-b253-d6b66e265bbc","person":{"adult":true,"age":20,"name":"John Quark"}} ---- - -You can also verify that there are no more active instances - -[source,bash] ---- -curl -X GET http://localhost:8080/persons \ - -H 'content-type: application/json' \ - -H 'accept: application/json' ---- - -To verify the non-adult case, send another request with the age set to less than 18 - -[source,bash] ---- -curl -X POST http://localhost:8080/persons \ - -H 'content-type: application/json' \ - -H 'accept: application/json' \ - -d '{"person": {"name":"Jenny Quark", "age": 15}}' ---- - -This time there should be one active instance; replace `{uuid}` with the id attribute taken from the response - -[source,bash] ---- -curl -X GET http://localhost:8080/persons/{uuid}/tasks \ - -H 'content-type: application/json' \ - -H 'accept: application/json' ---- - -You can get the details of the task by calling another endpoint; replace the `uuids` with the values taken from -the responses (`uuid-1` is the process instance id and `uuid-2` is the task instance id). - -[source,bash] ---- -curl -X GET http://localhost:8080/persons/{uuid-1}/ChildrenHandling/{uuid-2} \ - -H 'content-type: application/json' \ - -H 'accept: application/json' ---- - -You can complete this person evaluation process instance by calling the same endpoint but with POST; -replace the `uuids` with the values taken from the responses (`uuid-1` is the process instance id and `uuid-2` is the task instance id). - -[source,bash] ---- -curl -X POST http://localhost:8080/persons/{uuid-1}/ChildrenHandling/{uuid-2} \ - -H 'content-type: application/json' \ - -H 'accept: application/json' \ - -d '{}' ---- - -== Enabling persistence - -Since 0.3.0 of Kogito, there is an option to enable persistence to preserve process instance state -across application restarts.
This supports long-running process instances that can be resumed at any -point in time. - -=== Prerequisites - -Kogito uses Infinispan as the persistence service, so you need to have an Infinispan server installed and running. -The Infinispan version is aligned with the Quarkus BOM, so make sure the right version is installed. - -=== Add dependencies to project - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-infinispan-client</artifactId> -</dependency> -<dependency> -    <groupId>org.kie.kogito</groupId> -    <artifactId>infinispan-persistence-addon</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.quarkus:quarkus-infinispan-client") -implementation("org.kie.kogito:infinispan-persistence-addon") ---- - -=== Configure connection with Infinispan server - -Add the following to the `src/main/resources/application.properties` file (create the file if it does not exist) - -[source,plain] ---- -quarkus.infinispan-client.server-list=localhost:11222 ---- - -NOTE: Adjust the host and port number according to your Infinispan server installation. - -=== Test with enabled persistence - -After configuring persistence on the project level, you can test and verify that the process instance -state is preserved across application restarts. - -* start the Infinispan server -* build and run your project -* execute the non-adult use case - -[source,bash] ---- -curl -X POST http://localhost:8080/persons \ - -H 'content-type: application/json' \ - -H 'accept: application/json' \ - -d '{"person": {"name":"Jenny Quark", "age": 15}}' ---- - -You can also verify that there is an active instance - -[source,bash] ---- -curl -X GET http://localhost:8080/persons \ - -H 'content-type: application/json' \ - -H 'accept: application/json' ---- - -Restart your application while keeping the Infinispan server up and running.
- -Check that you can see the active instance, which should have exactly the same id - -[source,bash] ---- -curl -X GET http://localhost:8080/persons \ - -H 'content-type: application/json' \ - -H 'accept: application/json' ---- - - -To learn more about persistence in Kogito, visit https://github.com/kiegroup/kogito-runtimes/wiki/Persistence[this page] - -== Using DMN decision tables - -Kogito, like Drools, offers support for the https://drools.org/learn/dmn.html[DMN open standard] for visual and semantic execution of declarative logic. -The business rules in this example may also be expressed using DMN decision tables or other visual paradigms of DMN, instead of DRL and RuleUnits. - -For more information about DMN support in Kogito, you may refer to the companion Quarkus guide specific to xref:kogito-dmn.adoc[Kogito DMN support on Quarkus], or the Kogito documentation in the links below. - -== Using legacy decision tables - -Kogito allows you to define DRL rules as decision tables using the Microsoft Excel file formats. -To be able to use such assets in your application, an additional dependency is required: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>org.kie.kogito</groupId> -    <artifactId>drools-decisiontables</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("org.kie.kogito:drools-decisiontables") ---- - -Once the dependency is added to the project, decision tables in `xls` or `xlsx` format can be properly handled.
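Conceptually, each row of such a decision table pairs a condition with an action, exactly like a generated DRL rule. A hypothetical plain-Java rendering of that idea (the `Row` and `evaluate` names are illustrative, not Kogito API):

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Sketch of decision-table semantics: every row whose condition matches
// the fact has its action applied, like the DRL rules generated from the sheet.
public class DecisionTableSketch {

    static class Row<T> {
        final Predicate<T> condition;
        final Consumer<T> action;

        Row(Predicate<T> condition, Consumer<T> action) {
            this.condition = condition;
            this.action = action;
        }
    }

    static <T> void evaluate(List<Row<T>> rows, T fact) {
        for (Row<T> row : rows) {
            if (row.condition.test(fact)) {
                row.action.accept(fact);
            }
        }
    }
}
```

A spreadsheet simply makes these condition and action columns editable by business users; the runtime behaviour is the same match-then-apply evaluation.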
- -== References - -* https://kogito.kie.org[Kogito Website] -* https://docs.jboss.org/kogito/release/latest/html_single[Kogito Documentation] -* xref:kogito-dev-services.adoc[Kogito Dev Services] diff --git a/_versions/2.7/guides/kotlin.adoc b/_versions/2.7/guides/kotlin.adoc deleted file mode 100644 index f7fd551e5fd..00000000000 --- a/_versions/2.7/guides/kotlin.adoc +++ /dev/null @@ -1,498 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Kotlin - -:extension-status: preview -include::./attributes.adoc[] - -https://kotlinlang.org/[Kotlin] is a very popular programming language that targets the JVM (amongst other environments). Kotlin has experienced a surge in popularity in the last few years, making it the most popular JVM language, except for Java of course. - -Quarkus provides first-class support for using Kotlin, as will be explained in this guide. - -include::./status-include.adoc[] - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -NB: For Gradle project setup, please see below, and for further reference consult the guide in the xref:gradle-tooling.adoc[Gradle setup page]. - -== Creating the Maven project - -First, we need a new Kotlin project. This can be done using the following command: - -:create-app-artifact-id: rest-kotlin-quickstart -:create-app-extensions: kotlin,resteasy-reactive-jackson -:create-app-code: -include::includes/devtools/create-app.adoc[] - -When adding `kotlin` to the extensions list, the Maven plugin will generate a project that is properly -configured to work with Kotlin. Furthermore, the `org.acme.ReactiveGreetingResource` class is implemented as Kotlin source code (as is the case with the generated tests). -The addition of `resteasy-reactive-jackson` to the extension list results in importing the RESTEasy Reactive and Jackson extensions.
- -`ReactiveGreetingResource.kt` looks like this: - -[source,kotlin] ---- -package org.acme - -import javax.ws.rs.GET -import javax.ws.rs.Path -import javax.ws.rs.Produces -import javax.ws.rs.core.MediaType - -@Path("/hello") -class ReactiveGreetingResource { - - @GET - @Produces(MediaType.TEXT_PLAIN) - fun hello() = "Hello RESTEasy Reactive" -} ---- - -=== Update code - -In order to show a more practical example of Kotlin usage, we will add a simple link:https://kotlinlang.org/docs/reference/data-classes.html[data class] called `Greeting.kt` like so: - -[source,kotlin] ---- -package org.acme.rest - -data class Greeting(val message: String = "") ---- - -We also update `ReactiveGreetingResource.kt` like so: - -[source,kotlin] ---- -package org.acme - -import org.acme.rest.Greeting - -import javax.ws.rs.GET -import javax.ws.rs.Path -import javax.ws.rs.core.MediaType - -@Path("/hello") -class ReactiveGreetingResource { - - @GET - fun hello() = Greeting("hello") -} ---- - -With these changes in place, the `/hello` endpoint will reply with a JSON object instead of a simple String. - -To make the test pass, we also need to update `ReactiveGreetingResourceTest.kt` like so: - -[source,kotlin] ---- -package org.acme - -import io.quarkus.test.junit.QuarkusTest -import io.restassured.RestAssured.given -import org.hamcrest.Matchers.equalTo -import org.junit.jupiter.api.Test - -@QuarkusTest -class ReactiveGreetingResourceTest { - - @Test - fun testHelloEndpoint() { - given() - .`when`().get("/hello") - .then() - .statusCode(200) - .body("message", equalTo("hello")) - } - -} ---- - -== Important Maven configuration points - -The generated `pom.xml` contains the following modifications compared to its counterpart when Kotlin is not selected: - -* The `quarkus-kotlin` artifact is added to the dependencies. This artifact provides support for Kotlin in the live reload mode (more about this later on) -* The `kotlin-stdlib-jdk8` is also added as a dependency.
-* Maven's `sourceDirectory` and `testSourceDirectory` build properties are configured to point to Kotlin sources (`src/main/kotlin` and `src/test/kotlin` respectively) -* The `kotlin-maven-plugin` is configured as follows: - -[source,xml] ---- -<plugin> -    <artifactId>kotlin-maven-plugin</artifactId> -    <groupId>org.jetbrains.kotlin</groupId> -    <version>${kotlin.version}</version> -    <executions> -        <execution> -            <id>compile</id> -            <goals> -                <goal>compile</goal> -            </goals> -        </execution> -        <execution> -            <id>test-compile</id> -            <goals> -                <goal>test-compile</goal> -            </goals> -        </execution> -    </executions> -    <configuration> -        <compilerPlugins> -            <plugin>all-open</plugin> -        </compilerPlugins> -        <pluginOptions> -            <option>all-open:annotation=javax.ws.rs.Path</option> -        </pluginOptions> -    </configuration> -    <dependencies> -        <dependency> -            <groupId>org.jetbrains.kotlin</groupId> -            <artifactId>kotlin-maven-allopen</artifactId> -            <version>${kotlin.version}</version> -        </dependency> -    </dependencies> -</plugin> ---- - -The important thing to note is the use of the https://kotlinlang.org/docs/reference/compiler-plugins.html#all-open-compiler-plugin[all-open] Kotlin compiler plugin. -In order to understand why this plugin is needed, first we need to note that by default all the classes generated by the Kotlin compiler are marked as `final`. - -Having `final` classes however does not work well with various frameworks that need to create https://docs.oracle.com/javase/8/docs/technotes/guides/reflection/proxy.html[Dynamic Proxies]. - -Thus, the `all-open` Kotlin compiler plugin allows us to configure the compiler to *not* mark as `final` classes that have certain annotations. In the snippet above, -we have specified that classes annotated with `javax.ws.rs.Path` should not be `final`. - -If your application contains classes annotated with `javax.enterprise.context.ApplicationScoped` -for example, then `<option>all-open:annotation=javax.enterprise.context.ApplicationScoped</option>` needs to be added as well. The same goes for any class that needs to have a dynamic proxy created at runtime. - -Future versions of Quarkus will configure the Kotlin compiler plugin in a way that will make it unnecessary to alter this configuration. - -== Important Gradle configuration points - -Similar to the Maven configuration, when using Gradle, the following modifications are required when Kotlin is selected: - -* The `quarkus-kotlin` artifact is added to the dependencies.
This artifact provides support for Kotlin in the live reload mode (more about this later on) -* The `kotlin-stdlib-jdk8` is also added as a dependency. -* The Kotlin plugin is activated, which implicitly adds `sourceDirectory` and `testSourceDirectory` build properties to point to Kotlin sources (`src/main/kotlin` and `src/test/kotlin` respectively) -* The all-open Kotlin plugin tells the compiler not to mark as final, those classes with the annotations highlighted (customize as required) -* When using native-image, the use of http (or https) protocol(s) must be declared -* An example configuration follows: - -[source,groovy,subs=attributes+] ----- -plugins { - id 'java' - id 'io.quarkus' - - id "org.jetbrains.kotlin.jvm" version "{kotlin-version}" // <1> - id "org.jetbrains.kotlin.plugin.allopen" version "{kotlin-version}" // <1> -} - -repositories { - mavenLocal() - mavenCentral() -} - -dependencies { - implementation 'org.jetbrains.kotlin:kotlin-stdlib-jdk8:{kotlin-version}' - - implementation enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}") - - implementation 'io.quarkus:quarkus-resteasy-reactive' - implementation 'io.quarkus:quarkus-resteasy-reactive-jackson' - implementation 'io.quarkus:quarkus-kotlin' - - testImplementation 'io.quarkus:quarkus-junit5' - testImplementation 'io.rest-assured:rest-assured' -} - -group = '...' // set your group -version = '1.0.0-SNAPSHOT' - -java { - sourceCompatibility = JavaVersion.VERSION_11 - targetCompatibility = JavaVersion.VERSION_11 -} - -allOpen { // <2> - annotation("javax.ws.rs.Path") - annotation("javax.enterprise.context.ApplicationScoped") - annotation("io.quarkus.test.junit.QuarkusTest") -} - -compileKotlin { - kotlinOptions.jvmTarget = JavaVersion.VERSION_11 - kotlinOptions.javaParameters = true -} - -compileTestKotlin { - kotlinOptions.jvmTarget = JavaVersion.VERSION_11 -} ----- - -<1> The Kotlin plugin version needs to be specified. 
<2> The all-open configuration required, as per the Maven guide above - -or, if you use the Gradle Kotlin DSL: - -[source,kotlin,subs=attributes+] ---- -plugins { - kotlin("jvm") version "{kotlin-version}" // <1> - kotlin("plugin.allopen") version "{kotlin-version}" - id("io.quarkus") -} - -repositories { - mavenLocal() - mavenCentral() -} - -val quarkusPlatformGroupId: String by project -val quarkusPlatformArtifactId: String by project -val quarkusPlatformVersion: String by project - -group = "..." // set your group -version = "1.0.0-SNAPSHOT" - -dependencies { - implementation(kotlin("stdlib-jdk8")) - - implementation(enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}")) - - implementation("io.quarkus:quarkus-kotlin") - implementation("io.quarkus:quarkus-resteasy-reactive") - implementation("io.quarkus:quarkus-resteasy-reactive-jackson") - - testImplementation("io.quarkus:quarkus-junit5") - testImplementation("io.rest-assured:rest-assured") -} - -java { - sourceCompatibility = JavaVersion.VERSION_11 - targetCompatibility = JavaVersion.VERSION_11 -} - -allOpen { // <2> - annotation("javax.ws.rs.Path") - annotation("javax.enterprise.context.ApplicationScoped") - annotation("io.quarkus.test.junit.QuarkusTest") -} - -tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> { - kotlinOptions.jvmTarget = JavaVersion.VERSION_11.toString() - kotlinOptions.javaParameters = true -} - ---- - -<1> The Kotlin plugin version needs to be specified. -<2> The all-open configuration required, as per the Maven guide above - - - -== Live reload - -Quarkus provides support for live reloading changes made to source code. This support is also available to Kotlin, meaning that developers can update their Kotlin source -code and immediately see their changes reflected.
- -To see this feature in action, first execute: - -include::includes/devtools/dev.adoc[] - -When executing an HTTP GET request against `http://localhost:8080/hello`, you see a JSON message with the value `hello` as its `message` field. - -Now using your favorite editor or IDE, update `ReactiveGreetingResource.kt` and change the `hello` method to the following: - -[source,kotlin] ----- -fun hello() = Greeting("hi") ----- - -When you now execute an HTTP GET request against `http://localhost:8080/hello`, you should see a JSON message with the value `hi` as its `message` field. - -One thing to note is that the live reload feature is not available when making changes to both Java and Kotlin source that have dependencies on each other. We hope to alleviate this limitation in the future. - -== Packaging the application - -As usual, the application can be packaged using: - -include::includes/devtools/build.adoc[] - -and executed with `java -jar target/quarkus-app/quarkus-run.jar`. - -You can also build the native executable using: - -include::includes/devtools/build-native.adoc[] - -== Kotlin and Jackson - -If the `com.fasterxml.jackson.module:jackson-module-kotlin` dependency and the `quarkus-jackson` extension (or one of the `quarkus-resteasy-jackson` or `quarkus-resteasy-reactive-jackson` extensions) have been added to the project, -then Quarkus automatically registers the `KotlinModule` to the `ObjectMapper` bean (see xref:rest-json.adoc#jackson[this] guide for more details). - -When using Kotlin data classes with `native-image` you may experience serialization errors that do not occur with the `JVM` version, despite the Kotlin Jackson Module being registered. This is especially so if you have a more complex JSON hierarchy, where an issue on a lower node causes a serialization failure. The error message displayed is a catch-all and typically displays an issue with the root object, which may not necessarily be the case. 
- -[source] ---- -com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of `Address` (no Creators, like default constructor, exist): cannot deserialize from Object value (no delegate- or property-based Creator) ---- - -To ensure full compatibility with `native-image`, it is recommended to apply the Jackson `@field:JsonProperty("fieldName")` annotation and set a nullable default, as illustrated below. You can automate the generation of Kotlin data classes for your sample JSON using IntelliJ plugins (such as JSON to Kotlin Class), and easily enable the Jackson annotations and select nullable parameters as part of the auto-code generation. - -[source,kotlin] ---- -import com.fasterxml.jackson.annotation.JsonProperty - -data class Response( - @field:JsonProperty("chart") - val chart: ChartData? = null -) - -data class ChartData( - @field:JsonProperty("result") - val result: List<ResultItem>? = null, - - @field:JsonProperty("error") - val error: Any? = null -) - -data class ResultItem( - @field:JsonProperty("meta") - val meta: Meta? = null, - - @field:JsonProperty("indicators") - val indicators: IndicatorItems? = null, - - @field:JsonProperty("timestamp") - val timestamp: List<Int>? = null -) - -... ---- - -== Kotlin and the Kubernetes Client - -When working with the `quarkus-kubernetes` extension and binding Kotlin classes to CustomResource definitions (as you do when building operators), you need to be aware that the underlying Fabric8 Kubernetes Client uses its own static Jackson `ObjectMapper`s, which can be configured as follows with the `KotlinModule`: - -[source,kotlin] ---- -import io.fabric8.kubernetes.client.utils.Serialization -import com.fasterxml.jackson.module.kotlin.KotlinModule - -...
- -Serialization.jsonMapper().registerModule(KotlinModule()) -Serialization.yamlMapper().registerModule(KotlinModule()) ---- - -_Please test this carefully on compilation to native images and fall back to Java-compatible Jackson bindings if you experience problems._ - -== Kotlin coroutines and Mutiny - -Kotlin coroutines provide an imperative programming model that actually gets executed in an asynchronous, reactive manner. -To simplify the interoperation between Mutiny and Kotlin, there is the module `io.smallrye.reactive:mutiny-kotlin`, described link:https://smallrye.io/smallrye-mutiny/guides/kotlin[here]. - -== RESTEasy Reactive and Coroutines - -The `quarkus-resteasy-reactive` extension supports Kotlin `suspend` functions in combination with `quarkus-kotlin` as well: - -[source,kotlin] ---- -@Path("/hello") -class ReactiveGreetingResource { - - @GET - @Produces(MediaType.TEXT_PLAIN) - suspend fun hello() = "Hello RESTEasy Reactive with Coroutines" -} ---- - -== CDI @Inject with Kotlin - -Kotlin reflection annotation processing differs from Java. You may experience an error when using CDI @Inject such as: -"kotlin.UninitializedPropertyAccessException: lateinit property xxx has not been initialized" - -In the example below, this can be easily solved by adapting the annotation, adding `@field: Default`, to handle the lack of a `@Target` on the Kotlin reflection annotation definition.
- -[source,kotlin] ----- -import javax.inject.Inject -import javax.enterprise.inject.Default -import javax.enterprise.context.ApplicationScoped - -import javax.ws.rs.GET -import javax.ws.rs.Path -import javax.ws.rs.PathParam -import javax.ws.rs.Produces -import javax.ws.rs.core.MediaType - - - -@ApplicationScoped -class GreetingService { - - fun greeting(name: String): String { - return "hello $name" - } - -} - -@Path("/") -class ReactiveGreetingResource { - - @Inject - @field: Default // <1> - lateinit var service: GreetingService - - @GET - @Produces(MediaType.TEXT_PLAIN) - @Path("/hello/{name}") - fun greeting(@PathParam("name") name: String): String { - return service.greeting(name) - } - -} ----- -<1> Kotlin requires a @field: xxx qualifier as it has no @Target on the annotation definition. Add @field: xxx in this example. @Default is used as the qualifier, explicitly specifying the use of the default bean. - -Alternatively, prefer the use of constructor injection which works without modification of the Java examples, increases testability and complies best to a Kotlin programming style. 
- -[source,kotlin] ----- -import javax.enterprise.context.ApplicationScoped - -import javax.ws.rs.GET -import javax.ws.rs.Path -import javax.ws.rs.PathParam -import javax.ws.rs.Produces -import javax.ws.rs.core.MediaType - -@ApplicationScoped -class GreetingService { - fun greeting(name: String): String { - return "hello $name" - } -} - -@Path("/") -class ReactiveGreetingResource( - private val service: GreetingService -) { - - @GET - @Produces(MediaType.TEXT_PLAIN) - @Path("/hello/{name}") - fun greeting(@PathParam("name") name: String): String { - return service.greeting(name) - } - -} ----- diff --git a/_versions/2.7/guides/kubernetes-client.adoc b/_versions/2.7/guides/kubernetes-client.adoc deleted file mode 100644 index d14d2bea2e6..00000000000 --- a/_versions/2.7/guides/kubernetes-client.adoc +++ /dev/null @@ -1,470 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Kubernetes Client - -include::./attributes.adoc[] - - -Quarkus includes the `kubernetes-client` extension which enables the use of the https://github.com/fabric8io/kubernetes-client[Fabric8 Kubernetes Client] -in native mode while also making it easier to work with. - -Having a Kubernetes Client extension in Quarkus is very useful in order to unlock the power of Kubernetes Operators. -Kubernetes Operators are quickly emerging as a new class of Cloud Native applications. -These applications essentially watch the Kubernetes API and react to changes on various resources and can be used to manage the lifecycle of all kinds of complex systems like databases, messaging systems and much much more. -Being able to write such operators in Java with the very low footprint that native images provide is a great match. 
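The watch-and-react control loop such operators implement can be sketched without any Kubernetes dependency (the event types and names below are illustrative, not the Fabric8 client API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Sketch of the operator control loop: consume watch events for a resource
// kind and keep an in-memory view of the actual state in sync with them.
public class OperatorLoopSketch {

    record ResourceEvent(String type, String name) {}

    // Applies a stream of watch events, the way an operator's event
    // handlers reconcile observed state; returns the surviving resources.
    static List<String> reconcile(List<ResourceEvent> events) {
        TreeSet<String> actual = new TreeSet<>();
        for (ResourceEvent event : events) {
            switch (event.type()) {
                case "ADDED" -> actual.add(event.name());
                case "DELETED" -> actual.remove(event.name());
                default -> { } // MODIFIED etc. omitted from this sketch
            }
        }
        return new ArrayList<>(actual);
    }
}
```

A real operator would receive these events from the Kubernetes API server via the client's watch support and react by creating, updating, or deleting dependent resources rather than just tracking names.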
-
-== Configuration
-
-Once you have your Quarkus project configured you can add the `kubernetes-client` extension
-to your project by running the following command in your project base directory.
-
-:add-extension-extensions: kubernetes-client
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-kubernetes-client</artifactId>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-implementation("io.quarkus:quarkus-kubernetes-client")
----
-
-== Usage
-
-Quarkus configures a Bean of type `KubernetesClient` which can be injected into application code using the well-known CDI methods.
-This client can be configured using various properties as can be seen in the following example:
-
-[source,properties]
----
-quarkus.kubernetes-client.trust-certs=false
-quarkus.kubernetes-client.namespace=default
----
-
-Note that the full list of properties is available in the https://github.com/quarkusio/quarkus/blob/main/extensions/kubernetes-client/runtime-internal/src/main/java/io/quarkus/kubernetes/client/runtime/KubernetesClientBuildConfig.java[KubernetesClientBuildConfig] class.
-
-=== Overriding
-
-The extension also allows application code to override either of `io.fabric8.kubernetes.client.Config` or `io.fabric8.kubernetes.client.KubernetesClient`, which are
-normally provided by the extension, by simply declaring custom versions of those beans.
-
-An example of this can be seen in the following snippet:
-
-[source,java]
----
-@Singleton
-public class KubernetesClientProducer {
-
-    @Produces
-    public KubernetesClient kubernetesClient() {
-        // here you would create a custom client
        return new DefaultKubernetesClient();
-    }
-}
----
-
-== Testing
-
-To make testing against a mock Kubernetes API extremely simple, Quarkus provides the `@WithKubernetesTestServer` annotation which automatically launches
-a mock of the Kubernetes API server and sets the proper environment variables needed so that the Kubernetes Client configures itself to use said mock.
-Tests can inject the mock server and set it up in any way necessary for the particular test using the `@KubernetesTestServer` annotation.
-
-Let's assume we have a REST endpoint defined like so:
-
-[source%nowrap,java]
----
-@Path("/pod")
-public class Pods {
-
-    private final KubernetesClient kubernetesClient;
-
-    public Pods(KubernetesClient kubernetesClient) {
-        this.kubernetesClient = kubernetesClient;
-    }
-
-    @GET
-    @Path("/{namespace}")
-    public List<Pod> pods(@PathParam("namespace") String namespace) {
-        return kubernetesClient.pods().inNamespace(namespace).list().getItems();
-    }
-}
----
-
-We could write a test for this endpoint very easily like so:
-
-[source%nowrap,java]
----
-// you can even configure aspects like crud, https and port on this annotation
-@WithKubernetesTestServer
-@QuarkusTest
-public class KubernetesClientTest {
-
-    @KubernetesTestServer
-    KubernetesServer mockServer;
-
-    @BeforeEach
-    public void before() {
-        final Pod pod1 = new PodBuilder().withNewMetadata().withName("pod1").withNamespace("test").and().build();
-        final Pod pod2 = new PodBuilder().withNewMetadata().withName("pod2").withNamespace("test").and().build();
-
-        // Set up Kubernetes so that our "pretend" pods are created
-        mockServer.getClient().pods().create(pod1);
-        mockServer.getClient().pods().create(pod2);
-    }
-
-    @Test
-    public void testInteractionWithAPIServer() {
-        RestAssured.when().get("/pod/test").then()
-                .body("size()", is(2));
-    }
-
-}
----
-
-Note that to take advantage of these features, the `quarkus-test-kubernetes-client` dependency needs to be added, for example like so:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-test-kubernetes-client</artifactId>
-    <scope>test</scope>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-testImplementation("io.quarkus:quarkus-test-kubernetes-client")
----
-
-By default, the mock server will be in CRUD mode, so you have to use the client to build your state before your application can retrieve it,
-but you can also set it up in non-CRUD mode and mock all HTTP requests made to Kubernetes:
-
-[source%nowrap,java]
----
-// you can even configure aspects like crud, https and port on this annotation
-@WithKubernetesTestServer(crud = false)
-@QuarkusTest
-public class KubernetesClientTest {
-
-    @KubernetesTestServer
-    KubernetesServer mockServer;
-
-    @BeforeEach
-    public void before() {
-        final Pod pod1 = new PodBuilder().withNewMetadata().withName("pod1").withNamespace("test").and().build();
-        final Pod pod2 = new PodBuilder().withNewMetadata().withName("pod2").withNamespace("test").and().build();
-
-        // Mock any HTTP request to Kubernetes pods so that our pods are returned
-        mockServer.expect().get().withPath("/api/v1/namespaces/test/pods")
-                .andReturn(200,
-                        new PodListBuilder().withNewMetadata().withResourceVersion("1").endMetadata().withItems(pod1, pod2)
-                                .build())
-                .always();
-    }
-
-    @Test
-    public void testInteractionWithAPIServer() {
-        RestAssured.when().get("/pod/test").then()
-                .body("size()", is(2));
-    }
-
-}
----
-
-You can also use the `setup` attribute on the `@WithKubernetesTestServer` annotation to provide a class that will configure the `KubernetesServer` instance:
-
-[source%nowrap,java]
----
-@WithKubernetesTestServer(setup = MyTest.Setup.class)
-@QuarkusTest
-public class MyTest {
-
-    public static class Setup implements Consumer<KubernetesServer> {
-
-        @Override
-        public void accept(KubernetesServer server) {
-            server.expect().get().withPath("/api/v1/namespaces/test/pods")
-                    .andReturn(200, new PodList()).always();
-        }
-    }
-
-    // tests
-}
----
-
-Alternately, you can create an extension of the `KubernetesServerTestResource` class to ensure all your `@QuarkusTest` enabled test classes share the same mock server setup via the `QuarkusTestResource` annotation:
-
-[source%nowrap,java]
----
-public class CustomKubernetesMockServerTestResource extends KubernetesServerTestResource {
-
-    @Override
-    protected void configureServer() {
-        super.configureServer();
-        server.expect().get().withPath("/api/v1/namespaces/test/pods")
-                .andReturn(200, new PodList()).always();
-    }
-}
----
-
-and use this in your other test classes as follows:
-
-[source%nowrap,java]
----
-@QuarkusTestResource(CustomKubernetesMockServerTestResource.class)
-@QuarkusTest
-public class KubernetesClientTest {
-
-    //tests will now use the configured server...
-}
----
-
-[#note-on-generic-types]
-== Note on implementing or extending generic types
-
-Due to the restrictions imposed by GraalVM, extra care needs to be taken when implementing or extending generic types provided by the client if the application is intended to work in native mode.
-Essentially every implementation or extension of generic classes such as `Watcher`, `ResourceHandler` or `CustomResource` needs to specify their associated Kubernetes model class (or, in the case of `CustomResource`, regular Java types) at class definition time.
-To better understand this, suppose we want to watch for changes to Kubernetes `Pod` resources.
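The class-definition-time requirement can be illustrated with plain JDK reflection, which is the same kind of information a build-time analysis has available. The `Watcher` and `Pod` types below are toy stand-ins for illustration, not the real fabric8 API:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

// Toy stand-ins for the client's generic interface and a model class.
interface Watcher<T> { }
class Pod { }

// The model type is fixed in the class declaration, so it is recorded in the
// class file and can be recovered reflectively.
class PodWatcher implements Watcher<Pod> { }

// The type argument is a type variable here; nothing concrete is recorded.
class ResourceWatcher<T> implements Watcher<T> { }

public class GenericTypeDemo {

    // Returns the concrete model class bound to Watcher, or null if unresolved.
    static Class<?> modelTypeOf(Class<?> watcherClass) {
        for (Type iface : watcherClass.getGenericInterfaces()) {
            if (iface instanceof ParameterizedType) {
                ParameterizedType pt = (ParameterizedType) iface;
                if (pt.getRawType() == Watcher.class) {
                    Type arg = pt.getActualTypeArguments()[0];
                    if (arg instanceof Class) {
                        return (Class<?>) arg;
                    }
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(modelTypeOf(PodWatcher.class));      // resolved to Pod
        System.out.println(modelTypeOf(ResourceWatcher.class)); // null: unresolved
    }
}
```

A type argument fixed anywhere in the class hierarchy survives compilation and is reflectively recoverable, while a type variable is not — which is exactly what the native-image analysis runs into in the warning example below.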
-There are a couple of ways to write such a `Watcher` that are guaranteed to work in native mode:
-
-[source%nowrap,java]
----
-client.pods().watch(new Watcher<Pod>() {
-    @Override
-    public void eventReceived(Action action, Pod pod) {
-        // do something
-    }
-
-    @Override
-    public void onClose(KubernetesClientException e) {
-        // do something
-    }
-});
----
-
-or
-
-[source%nowrap,java]
----
-public class PodResourceWatcher implements Watcher<Pod> {
-    @Override
-    public void eventReceived(Action action, Pod pod) {
-        // do something
-    }
-
-    @Override
-    public void onClose(KubernetesClientException e) {
-        // do something
-    }
-}
-
-...
-
-
-client.pods().watch(new PodResourceWatcher());
----
-
-Note that defining the generic type via a class hierarchy similar to the following example will also work correctly:
-
-[source%nowrap,java]
----
-public abstract class MyWatcher implements Watcher<Pod> {
-}
-
-...
-
-
-client.pods().watch(new MyWatcher() {
-    @Override
-    public void eventReceived(Action action, Pod pod) {
-        // do something
-    }
-});
----
-
-WARNING: The following example will **not** work in native mode because the generic type of the watcher cannot be determined by looking at the class and method definitions,
-thus making Quarkus unable to properly determine the Kubernetes model class for which reflection registration is needed:
-
-[source%nowrap,java]
----
-public class ResourceWatcher<T> implements Watcher<T> {
-    @Override
-    public void eventReceived(Action action, T resource) {
-        // do something
-    }
-
-    @Override
-    public void onClose(KubernetesClientException e) {
-        // do something
-    }
-}
-
-client.pods().watch(new ResourceWatcher<>());
----
-
-[#note-on-ec-keys]
-== Note on using Elliptic Curve keys
-
-Please note that if you would like to use Elliptic Curve keys with the Kubernetes Client then adding a BouncyCastle PKIX dependency is required:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>org.bouncycastle</groupId>
-    <artifactId>bcpkix-jdk15on</artifactId>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-implementation("org.bouncycastle:bcpkix-jdk15on")
----
-
-Note that internally an `org.bouncycastle.jce.provider.BouncyCastleProvider` provider will be registered if it has not already been registered.
-
-You can have this provider registered as described in the xref:security-customization.adoc#bouncy-castle[BouncyCastle] or xref:security-customization.adoc#bouncy-castle-fips[BouncyCastle FIPS] sections.
-
-== Access to the Kubernetes API
-
-In many cases, in order to access the Kubernetes API server, a `ServiceAccount`, `Role` and `RoleBinding` will be necessary.
-An example that allows listing all pods could look something like this:
-
-[source,yaml]
----
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: <name>
-  namespace: <namespace>
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: <name>
-  namespace: <namespace>
-rules:
-  - apiGroups: [""]
-    resources: ["pods"]
-    verbs: ["list"]
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: <name>
-  namespace: <namespace>
-roleRef:
-  kind: Role
-  name: <name>
-  apiGroup: rbac.authorization.k8s.io
-subjects:
-  - kind: ServiceAccount
-    name: <name>
-    namespace: <namespace>
----
-
-Replace `<name>` and `<namespace>` with your values.
-Have a look at https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/[Configure Service Accounts for Pods] to get further information.
-
-== OpenShift Client
-
-If the targeted Kubernetes cluster is an OpenShift cluster, it is possible to access it through
-the `openshift-client` extension, in a similar way. This leverages the dedicated fabric8
-openshift client, and provides access to `OpenShift` proprietary objects (e.g. `Route`, `ProjectRequest`, `BuildConfig` ...)
-
-Note that the configuration properties are shared with the `kubernetes-client` extension. In
-particular, they have the same `quarkus.kubernetes-client` prefix.
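For example, the client settings shown earlier in this guide configure the OpenShift client as well (the values below are just examples):

```properties
# Shared between the kubernetes-client and openshift-client extensions,
# since both use the quarkus.kubernetes-client prefix
quarkus.kubernetes-client.trust-certs=false
quarkus.kubernetes-client.namespace=default
```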
-
-Add the extension with:
-
-:add-extension-extensions: openshift-client
-include::includes/devtools/extension-add.adoc[]
-
-Note that the `openshift-client` extension has a dependency on the `kubernetes-client` extension.
-
-To use the client, inject an `OpenShiftClient` instead of the `KubernetesClient`:
-
-[source, java]
----
-@Inject
-private OpenShiftClient openshiftClient;
----
-
-If you need to override the default `OpenShiftClient`, provide a producer such as:
-
-[source, java]
----
-@Singleton
-public class OpenShiftClientProducer {
-
-    @Produces
-    public OpenShiftClient openshiftClient() {
-        // here you would create a custom client
-        return new DefaultOpenShiftClient();
-    }
-}
----
-
-Mock support is also provided in a similar fashion:
-
-[source, java]
----
-@QuarkusTestResource(OpenShiftMockServerTestResource.class)
-@QuarkusTest
-public class OpenShiftClientTest {
-
-    @MockServer
-    private OpenShiftMockServer mockServer;
-...
----
-
-Or by using the `@WithOpenShiftTestServer` annotation, similar to the `@WithKubernetesTestServer` explained in the
-previous section:
-
-[source, java]
----
-@WithOpenShiftTestServer
-@QuarkusTest
-public class OpenShiftClientTest {
-
-    @OpenShiftTestServer
-    private OpenShiftServer mockOpenShiftServer;
-...
----
-
-To use this feature, you have to add a dependency on `quarkus-test-openshift-client`:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-test-openshift-client</artifactId>
-    <scope>test</scope>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-testImplementation("io.quarkus:quarkus-test-openshift-client")
----
-
-== Configuration Reference
-
-include::{generated-dir}/config/quarkus-kubernetes-client.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/kubernetes-config.adoc b/_versions/2.7/guides/kubernetes-config.adoc
deleted file mode 100644
index 0955de08f68..00000000000
--- a/_versions/2.7/guides/kubernetes-config.adoc
+++ /dev/null
@@ -1,150 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Kubernetes Config
-
-include::./attributes.adoc[]
-
-
-Quarkus includes the `kubernetes-config` extension which allows developers to use Kubernetes https://cloud.google.com/kubernetes-engine/docs/concepts/configmap[ConfigMaps] and https://cloud.google.com/kubernetes-engine/docs/concepts/secret[Secrets] as a configuration source, without having to mount them into the https://kubernetes.io/docs/concepts/workloads/pods/pod/[Pod] running the Quarkus application or make any other modifications to their Kubernetes `Deployment` (or OpenShift `DeploymentConfig`).
-
-
-== Configuration
-
-Once you have your Quarkus project configured you can add the `kubernetes-config` extension
-by running the following command in your project base directory.
-
-:add-extension-extensions: kubernetes-config
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-kubernetes-config</artifactId>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-implementation("io.quarkus:quarkus-kubernetes-config")
----
-
-== Usage
-
-The extension works by reading ConfigMaps and Secrets directly from the Kubernetes API server using the xref:kubernetes-client.adoc[Kubernetes Client].
-
-The extension understands the following types of ConfigMaps and Secrets as input sources:
-
-* ConfigMaps and Secrets that contain literal data (see https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-literal-values[this] for an example on how to create one)
-* ConfigMaps and Secrets created from files named `application.properties`, `application.yaml` or `application.yml` (see https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files[this] for an example on how to create one).
-
-The extension is disabled by default in order to prevent the application from making API calls when it is not running in a Kubernetes environment. To enable it, set `quarkus.kubernetes-config.enabled=true` (for example using a specific xref:config-reference.adoc#profiles[profile]).
-
-The values of `quarkus.kubernetes-config.config-maps` and `quarkus.kubernetes-config.secrets` determine which ConfigMaps and/or Secrets will be used as configuration sources. Keep in mind that these ConfigMaps and Secrets must be in the same Kubernetes `Namespace`
-as the running application. If they are to be found in a different namespace, then `quarkus.kubernetes-config.namespace` must be set to the proper value.
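Putting those properties together, a production-only configuration could look like this (the ConfigMap and Secret names here are hypothetical examples):

```properties
# Only query the API server when running in production
%prod.quarkus.kubernetes-config.enabled=true
%prod.quarkus.kubernetes-config.config-maps=app-config,feature-flags
%prod.quarkus.kubernetes-config.secrets=app-secrets
# Only needed if the ConfigMaps/Secrets live in a different namespace
#%prod.quarkus.kubernetes-config.namespace=my-namespace
```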
-
-=== Priority of obtained properties
-
-The properties obtained from the ConfigMaps and Secrets have a higher priority than (i.e. they override) any properties of the same name that are found in `application.properties` (or the YAML equivalents), but they have lower priority than properties set via Environment Variables or Java System Properties.
-
-Furthermore, when multiple ConfigMaps (or Secrets) are used, ConfigMaps (or Secrets) defined later in the list have a higher priority than ConfigMaps defined earlier in the list.
-
-Finally, when both ConfigMaps and Secrets are used, the latter always has a higher priority than the former.
-
-=== Kubernetes Permissions
-
-Since reading ConfigMaps involves interacting with the Kubernetes API Server, when https://kubernetes.io/docs/reference/access-authn-authz/rbac/[RBAC] is enabled on the cluster, the https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/[ServiceAccount]
-that is used to run the application needs to have the proper permissions for such access.
-
-Thankfully, when using the `kubernetes-config` extension along with the xref:deploying-to-kubernetes.adoc[Kubernetes] extension, all the necessary Kubernetes resources to make that happen are automatically generated.
-
-==== Secrets
-
-By default, the xref:deploying-to-kubernetes.adoc[Kubernetes] extension doesn't generate the necessary resources to allow accessing secrets.
-Set `quarkus.kubernetes-config.secrets.enabled=true` to generate the necessary role and corresponding role binding.
-
-== Example configuration
-
-A very common use case is to deploy a Quarkus application that needs to access a relational database which has itself already been deployed on Kubernetes. Using the `quarkus-kubernetes-config` extension makes this use case very simple to handle.
-
-Let's assume that our Quarkus application needs to talk to PostgreSQL and that when PostgreSQL was deployed on our Kubernetes cluster, a `Secret` named `postgresql` was created as part of that deployment and contains the following entries:
-
-* `database-name`
-* `database-user`
-* `database-password`
-
-One possible way to make Quarkus use these entries to connect to the database is to use the following configuration:
-
-[source,properties]
----
-%prod.quarkus.kubernetes-config.secrets.enabled=true <1>
-quarkus.kubernetes-config.secrets=postgresql <2>
-
-%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://somehost:5432/${database-name} <3>
-%prod.quarkus.datasource.username=${database-user} <4>
-%prod.quarkus.datasource.password=${database-password} <5>
----
-<1> Enable reading of secrets. Note the use of the `%prod` profile as we only want this setting applied when the application is running in production.
-<2> Configure the name of the secret that will be used. This doesn't need to be prefixed with the `%prod` profile as it won't have any effect if secret reading is disabled.
-<3> Quarkus will substitute `${database-name}` with the value obtained from the entry named `database-name` of the `postgresql` Secret. `somehost` is the name of the Kubernetes `Service` that was created when PostgreSQL was deployed to Kubernetes.
-<4> Quarkus will substitute `${database-user}` with the value obtained from the entry named `database-user` of the `postgresql` Secret.
-<5> Quarkus will substitute `${database-password}` with the value obtained from the entry named `database-password` of the `postgresql` Secret.
-
-The values above allow the application to be completely agnostic of the actual database configuration used in production while also not inhibiting the usability of the application at development time.
-
-=== Alternatives
-
-The use of the `quarkus-kubernetes-config` extension is completely optional as there are other ways an application can be configured to use ConfigMaps or Secrets.
-
-One common alternative is to map each entry of the ConfigMap and / or Secret to an environment variable on the Kubernetes `Deployment` - see link:https://kubernetes.io/docs/concepts/configuration/secret/#use-case-as-container-environment-variables[this] for more details.
-To achieve that in Quarkus, we could use the `quarkus-kubernetes` extension (which is responsible for creating the Kubernetes manifests) and configure it as follows:
-
-[source,properties]
----
-quarkus.kubernetes.env.secrets=postgresql
-quarkus.kubernetes.env.mapping.database-name.from-secret=postgresql
-quarkus.kubernetes.env.mapping.database-name.with-key=database-name
-quarkus.kubernetes.env.mapping.database-user.from-secret=postgresql
-quarkus.kubernetes.env.mapping.database-user.with-key=database-user
-quarkus.kubernetes.env.mapping.database-password.from-secret=postgresql
-quarkus.kubernetes.env.mapping.database-password.with-key=database-password
-
-%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://somehost:5432/${database-name}
-%prod.quarkus.datasource.username=${database-user}
-%prod.quarkus.datasource.password=${database-password}
----
-
-The end result of the above configuration would be the following `env` part being applied to the generated `Deployment`:
-
-[source,yaml]
----
-        env:
-          - name: DATABASE_NAME
-            valueFrom:
-              secretKeyRef:
-                key: database-name
-                name: postgresql
-          - name: DATABASE_USER
-            valueFrom:
-              secretKeyRef:
-                key: database-user
-                name: postgresql
-          - name: DATABASE_PASSWORD
-            valueFrom:
-              secretKeyRef:
-                key: database-password
-                name: postgresql
----
-
-See xref:deploying-to-kubernetes.adoc#secret-mapping[this] for more details.
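The generated `DATABASE_NAME`-style env entries still satisfy the `${database-name}` placeholders because of the MicroProfile Config environment-variable name mapping: each non-alphanumeric character of the property name is replaced with `_` and the result is upper-cased. A minimal sketch of that rule (an illustration, not the actual SmallRye Config code):

```java
public class EnvNameMapping {

    // MicroProfile Config rule (sketch): a property name such as
    // "database-name" can be satisfied by the env var "DATABASE_NAME"
    static String toEnvName(String propertyName) {
        return propertyName.replaceAll("[^A-Za-z0-9]", "_").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toEnvName("database-name"));     // DATABASE_NAME
        System.out.println(toEnvName("database-password")); // DATABASE_PASSWORD
    }
}
```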
- -== Configuration Reference - -include::{generated-dir}/config/quarkus-kubernetes-config.adoc[opts=optional, leveloffset=+1] diff --git a/_versions/2.7/guides/lifecycle.adoc b/_versions/2.7/guides/lifecycle.adoc deleted file mode 100644 index 41c42ba7775..00000000000 --- a/_versions/2.7/guides/lifecycle.adoc +++ /dev/null @@ -1,248 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Application Initialization and Termination - -include::./attributes.adoc[] - -You often need to execute custom actions when the application starts and clean up everything when the application stops. -This guide explains how to: - -* Write a Quarkus application with a main method -* Write command mode applications that run a task and then terminate -* Be notified when the application starts -* Be notified when the application stops - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `lifecycle-quickstart` {quickstarts-tree-url}/lifecycle-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: lifecycle-quickstart -include::includes/devtools/create-app.adoc[] - -It generates: - -* the Maven structure -* example `Dockerfile` files for both `native` and `jvm` modes -* the application configuration file - -== The main method - -By default Quarkus will automatically generate a main method, that will bootstrap Quarkus and then just wait for -shutdown to be initiated. 
Let's provide our own main method:
-[source,java]
----
-package com.acme;
-
-import io.quarkus.runtime.annotations.QuarkusMain;
-import io.quarkus.runtime.Quarkus;
-
-@QuarkusMain  <1>
-public class Main {
-
-    public static void main(String ... args) {
-        System.out.println("Running main method");
-        Quarkus.run(args); <2>
-    }
-}
----
-<1> This annotation tells Quarkus to use this as the main method, unless it is overridden in the config
-<2> This launches Quarkus
-
-This main class will bootstrap Quarkus and run it until it stops. This is no different to the automatically
-generated main class, but has the advantage that you can just launch it directly from the IDE without needing
-to run a Maven or Gradle command.
-
-WARNING: It is not recommended to do any business logic in this main method, as Quarkus has not been set up yet,
-and Quarkus may run in a different ClassLoader. If you want to perform logic on startup use an `io.quarkus.runtime.QuarkusApplication`
-as described below.
-
-If we want to actually perform business logic on startup (or write applications that complete a task and then exit)
-we need to supply an `io.quarkus.runtime.QuarkusApplication` class to the run method. After Quarkus has been started,
-the `run` method of the application will be invoked. When this method returns, the Quarkus application will exit.
-
-If you want to perform logic on startup, you should call `Quarkus.waitForExit()`; this method will wait until a shutdown
-is requested (either from an external signal like when you press `Ctrl+C` or because a thread has called `Quarkus.asyncExit()`).
-
-An example of what this looks like is below:
-
-[source,java]
----
-package com.acme;
-
-import io.quarkus.runtime.Quarkus;
-import io.quarkus.runtime.QuarkusApplication;
-import io.quarkus.runtime.annotations.QuarkusMain;
-
-@QuarkusMain
-public class Main {
-    public static void main(String... 
args) {
-        Quarkus.run(MyApp.class, args);
-    }
-
-    public static class MyApp implements QuarkusApplication {
-
-        @Override
-        public int run(String... args) throws Exception {
-            System.out.println("Do startup logic here");
-            Quarkus.waitForExit();
-            return 0;
-        }
-    }
-}
----
-
-=== Injecting the command line arguments
-
-It is possible to inject the arguments that were passed in on the command line:
-
-[source,java]
----
-@Inject
-@CommandLineArguments
-String[] args;
----
-
-Command line arguments can be passed to the application through the `-D` flag with the property `quarkus.args`:
-
-:devtools-wrapped:
-// TODO: use <cmd-args> once Asciidoctor escaping bug is fixed
-:dev-additional-parameters: -Dquarkus.args=cmd-args
-
-* For Quarkus dev mode:
-+
-include::includes/devtools/dev-parameters.adoc[]
-
-* For a runner jar: `java -Dquarkus.args=<cmd-args> -jar target/quarkus-app/quarkus-run.jar`
-* For a native executable: `./target/lifecycle-quickstart-1.0-SNAPSHOT-runner -Dquarkus.args=<cmd-args>`
-
-:!dev-additional-parameters:
-:!devtools-wrapped:
-
-== Listening for startup and shutdown events
-
-Create a new class named `AppLifecycleBean` (or pick another name) in the `org.acme.lifecycle` package, and copy the
-following content:
-
-[source,java]
----
-package org.acme.lifecycle;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.enterprise.event.Observes;
-
-import io.quarkus.runtime.ShutdownEvent;
-import io.quarkus.runtime.StartupEvent;
-import org.jboss.logging.Logger;
-
-@ApplicationScoped
-public class AppLifecycleBean {
-
-    private static final Logger LOGGER = Logger.getLogger("ListenerBean");
-
-    void onStart(@Observes StartupEvent ev) { // <1>
-        LOGGER.info("The application is starting...");
-    }
-
-    void onStop(@Observes ShutdownEvent ev) { // <2>
-        LOGGER.info("The application is stopping...");
-    }
-
-}
----
-<1> Method called when the application is starting
-<2> Method called when the application is terminating
-
-TIP: The events are also called in _dev mode_ 
between each redeployment.
-
-NOTE: The methods can access injected beans. Check the {quickstarts-blob-url}/lifecycle-quickstart/src/main/java/org/acme/lifecycle/AppLifecycleBean.java[AppLifecycleBean.java] class for details.
-
-=== What is the difference between `@Initialized(ApplicationScoped.class)` and `@Destroyed(ApplicationScoped.class)`
-
-In JVM mode, there is no real difference, except that `StartupEvent` is always fired *after* `@Initialized(ApplicationScoped.class)` and `ShutdownEvent` is fired *before* `@Destroyed(ApplicationScoped.class)`.
-For a native executable build, however, `@Initialized(ApplicationScoped.class)` is fired as *part of the native build process*, whereas `StartupEvent` is fired when the native image is executed.
-See xref:writing-extensions.adoc#bootstrap-three-phases[Three Phases of Bootstrap and Quarkus Philosophy] for more details.
-
-NOTE: In CDI applications, an event with qualifier `@Initialized(ApplicationScoped.class)` is fired when the application context is initialized. See https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#application_context[the spec, window="_blank"] for more info.
-
-[[startup_annotation]]
-=== Using `@Startup` to initialize a CDI bean at application startup
-
-A bean represented by a class, producer method or field annotated with `@Startup` is initialized at application startup:
-
-[source,java]
----
-package org.acme.lifecycle;
-
-import javax.enterprise.context.ApplicationScoped;
-
-import io.quarkus.runtime.Startup;
-
-@Startup // <1>
-@ApplicationScoped
-public class EagerAppBean {
-
-    private final String name;
-
-    EagerAppBean(NameGenerator generator) { // <2>
-        this.name = generator.createName();
-    }
-}
----
-<1> For each bean annotated with `@Startup` a synthetic observer of `StartupEvent` is generated. The default priority is used.
-<2> The bean constructor is called when the application starts and the resulting contextual instance is stored in the application context.
-
-NOTE: `@Dependent` beans are destroyed immediately afterwards to follow the behavior of observers declared on `@Dependent` beans.
-
-TIP: If a class is annotated with `@Startup` but with no scope annotation then `@ApplicationScoped` is added automatically.
-
-== Package and run the application
-
-Run the application with:
-
-include::includes/devtools/dev.adoc[]
-
-The logged message is printed.
-When the application is stopped, the second log message is printed.
-
-As usual, the application can be packaged using:
-
-include::includes/devtools/build.adoc[]
-
-and executed using `java -jar target/quarkus-app/quarkus-run.jar`.
-
-You can also generate the native executable using:
-
-include::includes/devtools/build-native.adoc[]
-
-== Launch Modes
-
-Quarkus has 3 different launch modes: `NORMAL` (i.e. production), `DEVELOPMENT` and `TEST`. If you are running `quarkus:dev`
-then the mode will be `DEVELOPMENT`, if you are running a JUnit test it will be `TEST`, otherwise it will be `NORMAL`.
-
-Your application can get the launch mode by injecting the `io.quarkus.runtime.LaunchMode` enum into a CDI bean,
-or by invoking the static method `io.quarkus.runtime.LaunchMode.current()`.
-
-== Graceful Shutdown
-
-Quarkus includes support for graceful shutdown, which allows Quarkus to wait for running requests to finish, up
-to a set timeout. By default this is disabled; however, you can configure it by setting the `quarkus.shutdown.timeout`
-config property. When this is set, shutdown will not happen until all running requests have completed, or until
-this timeout has elapsed. This config property is a duration, and can be set using the standard
-`java.time.Duration` format; if only a number is specified it is interpreted as seconds.
-
-Extensions that accept requests need to add support for this on an individual basis. At the moment only the
-HTTP extension supports this, so shutdown may still happen when messaging requests are active.
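The duration handling described above can be sketched as follows. This is an approximation of how a duration-typed config value such as `quarkus.shutdown.timeout` is interpreted, not Quarkus' actual converter code: a bare number means seconds, and other values are parsed with `java.time.Duration` (prepending `PT` when the value starts with a digit):

```java
import java.time.Duration;

public class ShutdownTimeout {

    // Approximate interpretation of a duration-typed config value
    static Duration parse(String value) {
        if (value.matches("\\d+")) {
            // a bare number is interpreted as seconds
            return Duration.ofSeconds(Long.parseLong(value));
        }
        if (Character.isDigit(value.charAt(0))) {
            // e.g. "30s" becomes the ISO-8601 form "PT30s"
            return Duration.parse("PT" + value);
        }
        // already ISO-8601, e.g. "PT2M"
        return Duration.parse(value);
    }

    public static void main(String[] args) {
        System.out.println(parse("30"));   // PT30S
        System.out.println(parse("PT2M")); // PT2M
    }
}
```

So `quarkus.shutdown.timeout=30` and `quarkus.shutdown.timeout=PT30S` configure the same 30-second grace period.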
\ No newline at end of file
diff --git a/_versions/2.7/guides/liquibase-mongodb.adoc b/_versions/2.7/guides/liquibase-mongodb.adoc
deleted file mode 100644
index 78080204ac7..00000000000
--- a/_versions/2.7/guides/liquibase-mongodb.adoc
+++ /dev/null
@@ -1,156 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using Liquibase MongoDB

-include::./attributes.adoc[]
-:change-log: src/main/resources/db/changeLog.xml
-:config-file: application.properties
-
-https://www.liquibase.org/[Liquibase] is an open source tool for database schema change management;
-it allows managing MongoDB databases via its https://github.com/liquibase/liquibase-mongodb[MongoDB Extension].
-
-Quarkus provides first class support for using the Liquibase MongoDB Extension, as will be explained in this guide.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `liquibase-mongodb-quickstart` {quickstarts-tree-url}/liquibase-mongodb-quickstart[directory].
-
-== Setting up support for Liquibase
-
-To start using the Liquibase MongoDB Extension with your project, you just need to:
-
-* add your changeLog to the `{change-log}` file as you usually do with Liquibase
-* activate the `migrate-at-start` option to migrate the schema automatically, or inject the `Liquibase` object and run
-your migration as you normally do.
- -In your `pom.xml`, add the following dependencies: - -* the Liquibase MongoDB extension -* the MongoDB client extension - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - - io.quarkus - quarkus-liquibase-mongodb - - - - - io.quarkus - quarkus-mongodb-client - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -// Liquibase MongoDB -implementation("io.quarkus:quarkus-liquibase-mongodb") - -// MongoDB client dependency -implementation("io.quarkus:quarkus-mongodb-client") ----- - -Liquibase MongoDB extension support relies on the Quarkus MongoDB client config. -For the time being, it does not support multiple clients. -First, you need to add the MongoDB config to the `{config-file}` file -in order to allow Liquibase to manage the schema. - -The following is an example for the `{config-file}` file: - -[source,properties] ----- -# configure MongoDB -quarkus.mongodb.connection-string = mongodb://localhost:27017 - -# Liquibase MongoDB minimal config properties -quarkus.liquibase-mongodb.migrate-at-start=true - -# Liquibase MongoDB optional config properties -# quarkus.liquibase-mongodb.change-log=db/changeLog.xml -# quarkus.liquibase-mongodb.validate-on-migrate=true -# quarkus.liquibase-mongodb.clean-at-start=false -# quarkus.liquibase-mongodb.contexts=Context1,Context2 -# quarkus.liquibase-mongodb.labels=Label1,Label2 -# quarkus.liquibase-mongodb.default-catalog-name=DefaultCatalog -# quarkus.liquibase-mongodb.default-schema-name=DefaultSchema ----- - -Add a changeLog file to the default folder following the Liquibase naming conventions: `{change-log}` -YAML, JSON and XML formats are supported for the changeLog. - -[source,xml] ----- - - - - - - - - {color: 1} - {name: "colorIdx"} - - - - {"name":"orange", "color": "orange"} - - - - ----- - -Now you can start your application and Quarkus will run the Liquibase's update method according to your config. 
- -== Using the Liquibase object - -In case you are interested in using the `Liquibase` object directly, you can inject it as follows: - -NOTE: If you enabled the `quarkus.liquibase-mongodb.migrate-at-start` property, by the time you use the Liquibase instance, Quarkus will already have run the migrate operation. - -[source,java] ----- -import io.quarkus.liquibase.mongodb.LiquibaseMongodbFactory; - -@ApplicationScoped -public class MigrationService { - // You can inject the object if you want to use it manually - @Inject - LiquibaseMongodbFactory liquibaseMongodbFactory; <1> - - public void checkMigration() { - // Use the liquibase instance manually - try (Liquibase liquibase = liquibaseMongodbFactory.createLiquibase()) { - liquibase.dropAll(); <2> - liquibase.validate(); - liquibase.update(liquibaseMongodbFactory.createContexts(), liquibaseMongodbFactory.createLabels()); - // Get the list of liquibase change set statuses - List<ChangeSetStatus> status = liquibase.getChangeSetStatuses(liquibaseMongodbFactory.createContexts(), liquibaseMongodbFactory.createLabels()); <3> - } - } -} ----- -<1> Inject the LiquibaseMongodbFactory object -<2> Use the Liquibase instance directly -<3> List of applied or not applied liquibase ChangeSets - -== Configuration Reference - -include::{generated-dir}/config/quarkus-liquibase-mongodb.adoc[opts=optional, leveloffset=+2] \ No newline at end of file diff --git a/_versions/2.7/guides/liquibase.adoc b/_versions/2.7/guides/liquibase.adoc deleted file mode 100644 index c4d82205d20..00000000000 --- a/_versions/2.7/guides/liquibase.adoc +++ /dev/null @@ -1,233 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Liquibase - -include::./attributes.adoc[] -:change-log: src/main/resources/db/changeLog.xml -:config-file: application.properties - -https://www.liquibase.org/[Liquibase] is an open source tool for database schema change management.
- -Quarkus provides first class support for using Liquibase as will be explained in this guide. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `liquibase-quickstart` {quickstarts-tree-url}/liquibase-quickstart[directory]. - -== Setting up support for Liquibase - -To start using Liquibase with your project, you just need to: - -* add your changeLog to the `{change-log}` file as you usually do with Liquibase -* activate the `migrate-at-start` option to migrate the schema automatically or inject the `Liquibase` object and run -your migration as you normally do. - -In your `pom.xml`, add the following dependencies: - -* the Liquibase extension -* your JDBC driver extension (`quarkus-jdbc-postgresql`, `quarkus-jdbc-h2`, `quarkus-jdbc-mariadb`, ...) - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - - io.quarkus - quarkus-liquibase - - - - - io.quarkus - quarkus-jdbc-postgresql - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -// Liquibase specific dependencies -implementation("io.quarkus:quarkus-liquibase") - -// JDBC driver dependencies -implementation("io.quarkus:quarkus-jdbc-postgresql") ----- - -Liquibase support relies on the Quarkus datasource config. -It can be customized for the default datasource as well as for every <>. -First, you need to add the datasource config to the `{config-file}` file -in order to allow Liquibase to manage the schema. 
- -The following is an example for the `{config-file}` file: - -[source,properties] ----- -# configure your datasource -quarkus.datasource.db-kind=postgresql -quarkus.datasource.username=sarah -quarkus.datasource.password=connor -quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/mydatabase - -# Liquibase minimal config properties -quarkus.liquibase.migrate-at-start=true - -# Liquibase optional config properties -# quarkus.liquibase.change-log=db/changeLog.xml -# quarkus.liquibase.validate-on-migrate=true -# quarkus.liquibase.clean-at-start=false -# quarkus.liquibase.database-change-log-lock-table-name=DATABASECHANGELOGLOCK -# quarkus.liquibase.database-change-log-table-name=DATABASECHANGELOG -# quarkus.liquibase.contexts=Context1,Context2 -# quarkus.liquibase.labels=Label1,Label2 -# quarkus.liquibase.default-catalog-name=DefaultCatalog -# quarkus.liquibase.default-schema-name=DefaultSchema -# quarkus.liquibase.liquibase-catalog-name=liquibaseCatalog -# quarkus.liquibase.liquibase-schema-name=liquibaseSchema -# quarkus.liquibase.liquibase-tablespace-name=liquibaseSpace ----- - -Add a changeLog file to the default folder following the Liquibase naming conventions: `{change-log}` -The yaml, json, xml and sql changeLog file formats are also supported. 
- -[source,xml] ----- - - - - - - - - - - - - ----- - -Now you can start your application and Quarkus will run the Liquibase's update method according to your config: - -[source,java] ----- -import org.quarkus.liquibase.LiquibaseFactory; <1> - -@ApplicationScoped -public class MigrationService { - // You can Inject the object if you want to use it manually - @Inject - LiquibaseFactory liquibaseFactory; <2> - - public void checkMigration() { - // Get the list of liquibase change set statuses - try (Liquibase liquibase = liquibaseFactory.createLiquibase()) { - List status = liquibase.getChangeSetStatuses(liquibaseFactory.createContexts(), liquibaseFactory.createLabels()); - } - } -} ----- -<1> The Quarkus extension provides a factory to initialize a Liquibase instance -<2> Inject the Quarkus liquibase factory if you want to use the liquibase methods directly - -== Multiple datasources - -Liquibase can be configured for multiple datasources. -The Liquibase properties are prefixed exactly the same way as the named datasources, for example: - -[source,properties] ----- -quarkus.datasource.db-kind=h2 -quarkus.datasource.username=username-default -quarkus.datasource.jdbc.url=jdbc:h2:tcp://localhost/mem:default -quarkus.datasource.jdbc.max-size=13 - -quarkus.datasource.users.db-kind=h2 -quarkus.datasource.users.username=username1 -quarkus.datasource.users.jdbc.url=jdbc:h2:tcp://localhost/mem:users -quarkus.datasource.users.jdbc.max-size=11 - -quarkus.datasource.inventory.db-kind=h2 -quarkus.datasource.inventory.username=username2 -quarkus.datasource.inventory.jdbc.url=jdbc:h2:tcp://localhost/mem:inventory -quarkus.datasource.inventory.jdbc.max-size=12 - -# Liquibase configuration for the default datasource -quarkus.liquibase.schemas=DEFAULT_TEST_SCHEMA -quarkus.liquibase.change-log=db/changeLog.xml -quarkus.liquibase.migrate-at-start=true - -# Liquibase configuration for the "users" datasource -quarkus.liquibase.users.schemas=USERS_TEST_SCHEMA 
-quarkus.liquibase.users.change-log=db/users.xml -quarkus.liquibase.users.migrate-at-start=true - -# Liquibase configuration for the "inventory" datasource -quarkus.liquibase.inventory.schemas=INVENTORY_TEST_SCHEMA -quarkus.liquibase.inventory.change-log=db/inventory.xml -quarkus.liquibase.inventory.migrate-at-start=true ----- - -Notice there's an extra bit in the key. -The syntax is as follows: `quarkus.liquibase.[optional name.][datasource property]`. - -NOTE: Without configuration, Liquibase is set up for every datasource using the default settings. - -== Using the Liquibase object - -In case you are interested in using the `Liquibase` object directly, you can inject it as follows: - -NOTE: If you enabled the `quarkus.liquibase.migrate-at-start` property, by the time you use the Liquibase instance, -Quarkus will already have run the migrate operation. - -[source,java] ----- -import org.quarkus.liquibase.LiquibaseFactory; - -@ApplicationScoped -public class MigrationService { - // You can Inject the object if you want to use it manually - @Inject - LiquibaseFactory liquibaseFactory; <1> - - @Inject - @LiquibaseDataSource("inventory") <2> - LiquibaseFactory liquibaseFactoryForInventory; - - @Inject - @Named("liquibase_users") <3> - LiquibaseFactory liquibaseFactoryForUsers; - - public void checkMigration() { - // Use the liquibase instance manually - try (Liquibase liquibase = liquibaseFactory.createLiquibase()) { - liquibase.dropAll(); <4> - liquibase.validate(); - liquibase.update(liquibaseFactory.createContexts(), liquibaseFactory.createLabels()); - // Get the list of liquibase change set statuses - List status = liquibase.getChangeSetStatuses(liquibaseFactory.createContexts(), liquibaseFactory.createLabels()); <5> - } - } -} ----- -<1> Inject the LiquibaseFactory object -<2> Inject Liquibase for named datasources using the Quarkus `LiquibaseDataSource` qualifier -<3> Inject Liquibase for named datasources -<4> Use the Liquibase instance directly -<5> List of 
applied or not applied liquibase ChangeSets - -== Configuration Reference - -include::{generated-dir}/config/quarkus-liquibase.adoc[opts=optional, leveloffset=+2] diff --git a/_versions/2.7/guides/logging.adoc b/_versions/2.7/guides/logging.adoc deleted file mode 100644 index e536a29f4c1..00000000000 --- a/_versions/2.7/guides/logging.adoc +++ /dev/null @@ -1,517 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Configuring Logging - -include::./attributes.adoc[] - -This guide explains logging and how to configure it. - -Internally, Quarkus uses JBoss Log Manager and the JBoss Logging facade. -You can use the JBoss Logging facade inside your code as it's already provided, -or any of the supported Logging API listed in the next chapter as Quarkus will send them to JBoss Log Manager. - -All the logging configuration will then be done inside your `application.properties`. - -== Supported Logging APIs - -Applications and components may use any of the following APIs for logging, and the logs will be merged: - -* JDK `java.util.logging` (also called JUL) -* https://github.com/jboss-logging/jboss-logging[JBoss Logging] -* https://www.slf4j.org/[SLF4J] -* https://commons.apache.org/proper/commons-logging/[Apache Commons Logging] - -Internally Quarkus uses JBoss Logging; you can also use it inside your application so that no other dependencies should be added for your logs. 
- -[source,java] ----- -import org.jboss.logging.Logger; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/hello") -public class ExampleResource { - - private static final Logger LOG = Logger.getLogger(ExampleResource.class); - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - LOG.info("Hello"); - return "hello"; - } -} ----- - -NOTE: If you use JBoss Logging but one of your libraries uses a different logging API, you may need to configure a <>. - -=== Logging with Panache - -Instead of declaring a `Logger` field, you can use the simplified logging API: - -[source,java] ----- -package com.example; - -import io.quarkus.logging.Log; // <1> - -class MyService { // <2> - public void doSomething() { - Log.info("Simple!"); // <3> - } -} ----- -<1> The `io.quarkus.logging.Log` class mirrors the JBoss Logging API, except all methods are `static`. -<2> Note that the class doesn't declare a logger field. -This is because during application build, a `private static final org.jboss.logging.Logger` field is created automatically, in each class that uses the `Log` API. -The fully qualified name of the class that calls the `Log` methods is used as a logger name. -In this example, the logger name would be `com.example.MyService`. -<3> Finally, during application build, all calls to `Log` methods are rewritten to regular JBoss Logging calls on the logger field. - -WARNING: Only use the `Log` API in application classes, not in external dependencies. -`Log` method calls that are not processed by Quarkus at build time will throw an exception. - -=== Injecting a Logger - -You can also inject a configured `org.jboss.logging.Logger` instance in your beans and resource classes. 
- -[source, java] ----- -import org.jboss.logging.Logger; - -@ApplicationScoped -class SimpleBean { - - @Inject - Logger log; <1> - - @LoggerName("foo") - Logger fooLog; <2> - - public void ping() { - log.info("Simple!"); - fooLog.info("Goes to _foo_ logger!"); - } -} ----- -<1> The FQCN of the declaring class is used as a logger name, i.e. `org.jboss.logging.Logger.getLogger(SimpleBean.class)` will be used. -<2> In this case, the name _foo_ is used as a logger name, i.e. `org.jboss.logging.Logger.getLogger("foo")` will be used. - -NOTE: The logger instances are cached internally. Therefore, a logger injected e.g. into a `@RequestScoped` bean is shared for all bean instances to avoid possible performance penalty associated with logger instantiation. - -=== What about Apache Log4j ? - -link:https://logging.apache.org/log4j/2.x/[Log4j] is a logging implementation: it contains a logging backend and a logging facade. -Quarkus uses the JBoss Log Manager backend, so you will need to include the `log4j2-jboss-logmanager` library to route Log4j logs to JBoss Log Manager. - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - org.jboss.logmanager - log4j2-jboss-logmanager <1> - ----- -<1> This is the library needed for Log4j version 2; if you use the legacy Log4j version 1 you need to use `log4j-jboss-logmanager` instead. - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.jboss.logmanager:log4j2-jboss-logmanager") <1> ----- -<1> This is the library needed for Log4j version 2; if you use the legacy Log4j version 1 you need to use `log4j-jboss-logmanager` instead. - -You can then use the Log4j API inside your application. - -WARNING: Do not include any Log4j dependencies. The `log4j2-jboss-logmanager` library includes what's needed to use Log4j as a logging facade. 
- -== Logging levels - -These are the log levels used by Quarkus: - -[horizontal] -OFF:: Special level to turn off logging. -FATAL:: A critical service failure/complete inability to service requests of any kind. -ERROR:: A significant disruption in a request or the inability to service a request. -WARN:: A non-critical service error or problem that may not require immediate correction. -INFO:: Service lifecycle events or important related very-low-frequency information. -DEBUG:: Messages that convey extra information regarding lifecycle or non-request-bound events which may be helpful for debugging. -TRACE:: Messages that convey extra per-request debugging information that may be very high frequency. -ALL:: Special level for all messages including custom levels. - -In addition, the following levels may be configured for applications and libraries using link:https://docs.oracle.com/javase/8/docs/api/java/util/logging/Level.html[`java.util.logging`]: - -[horizontal] -SEVERE:: Same as **ERROR**. -WARNING:: Same as **WARN**. -CONFIG:: Service configuration information. -FINE:: Same as **DEBUG**. -FINER:: Same as **TRACE**. -FINEST:: Even more debugging information than `TRACE`, maybe with even higher frequency. - -== Runtime configuration - -Runtime logging is configured in the `application.properties` file. For example, to set the default log level to `INFO` and include Hibernate `DEBUG` logs: - -[source, properties] ----- -quarkus.log.level=INFO -quarkus.log.category."org.hibernate".level=DEBUG ----- - -Setting a log level below `DEBUG` requires adjusting the minimum log level as well, either globally via the `quarkus.log.min-level` property or per category as shown in the example above, in addition to adjusting the log level itself. - -The minimum logging level sets a floor on the log levels that Quarkus may potentially need to generate, opening the door to optimization opportunities.
-As an example, in native execution the minimum level enables lower-level checks (e.g. `isTraceEnabled`) to be folded to `false`, resulting in dead code elimination for code that will never be executed. - -All possible properties are listed in <<loggingConfigurationReference>>. - -NOTE: If you are adding these properties via command line make sure `"` is escaped. For example `-Dquarkus.log.category.\"org.hibernate\".level=TRACE`. - -=== Logging categories - -Logging is done on a per-category basis. Each category can be independently configured. A configuration which applies to a category will also apply to all sub-categories of that category, unless there is a more specific matching sub-category configuration. For every category, the same settings configured for the top-level handlers (console / file / syslog) apply. These can also be overridden by attaching one or more named handlers to a category. See the example in <<category-named-handlers-example>>. - -[cols="3,1,5", options="header"] -|=== -|Property Name|Default|Description -|quarkus.log.category."<category-name>".level|INFO footnote:[Some extensions may define customized default log levels for certain categories, in order to reduce log noise by default. Setting the log level in configuration will override any extension-defined log levels.]|The level to use to configure the category named `<category-name>`. The quotes are necessary. -|quarkus.log.category."<category-name>".min-level|DEBUG |The minimum logging level to use to configure the category named `<category-name>`. The quotes are necessary. -|quarkus.log.category."<category-name>".use-parent-handlers|true|Specify whether or not this logger should send its output to its parent logger. -|quarkus.log.category."<category-name>".handlers=[]|empty footnote:[By default the configured category gets the same handlers attached as the one on the root logger.]|The names of the handlers that you want to attach to a specific category. -|=== - -NOTE: The quotes shown in the property name are required as categories normally contain '.' which must be escaped. An example is shown in <<category-example>>.
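The category rule above — the most specific matching sub-category wins, falling back through dotted parents to the root logger — can be modeled in a few lines of plain Java. This is a sketch of the lookup behavior described in the text, not Quarkus internals:

```java
import java.util.Map;

public class CategoryLevels {
    // Walk from the logger name up through its dotted parent categories;
    // the first configured category wins, otherwise the root level applies.
    public static String effectiveLevel(Map<String, String> categories,
                                        String loggerName, String rootLevel) {
        for (String name = loggerName; !name.isEmpty();
                name = name.contains(".") ? name.substring(0, name.lastIndexOf('.')) : "") {
            String level = categories.get(name);
            if (level != null) {
                return level;
            }
        }
        return rootLevel;
    }

    public static void main(String[] args) {
        Map<String, String> categories = Map.of("org.hibernate", "DEBUG");
        // a sub-category inherits the configured parent category
        System.out.println(effectiveLevel(categories, "org.hibernate.SQL", "INFO"));
        // no matching category: falls back to the root level
        System.out.println(effectiveLevel(categories, "com.example.MyBean", "INFO"));
    }
}
```

Under this model, configuring `quarkus.log.category."org.hibernate".level=DEBUG` also affects `org.hibernate.SQL`, unless a more specific `org.hibernate.SQL` category is configured.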
- -=== Root logger configuration - -The root logger category is handled separately, and is configured via the following properties: - -[cols="}|Time zone|Set the time zone of the output to ``. -|%X{}|Mapped Diagnostics Context Value|Renders the value from Mapped Diagnostics Context -|%X|Mapped Diagnostics Context Values|Renders all the values from Mapped Diagnostics Context in format {property.key=property.value} -|%x|Nested Diagnostics context values|Renders all the values from Nested Diagnostics Context in format {value1.value2} -|=== - -[id="alt-console-format"] -=== Alternative Console Logging Formats - -It is possible to change the output format of the console log. This can be useful in environments where the output -of the Quarkus application is captured by a service which can, for example, process and store the log information for -later analysis. - -[id="json-logging"] -==== JSON Logging Format - -In order to configure the JSON logging format, the `quarkus-logging-json` extension may be employed. -Add this extension to your build file as the following snippet illustrates: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - io.quarkus - quarkus-logging-json - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-logging-json") ----- - -The presence of this extension will, by default, replace the output format configuration from the console configuration. -This means that the format string and the color settings (if any) will be ignored. The other console configuration items -(including those controlling asynchronous logging and the log level) will continue to be applied. - -For some, it will make sense to use logging that is humanly readable (unstructured) in dev mode and JSON logging (structured) in production mode. This can be achieved using different profiles, as shown in the following configuration. 
- -.Disable JSON logging in application.properties for dev and test mode -[source, properties] ----- -%dev.quarkus.log.console.json=false -%test.quarkus.log.console.json=false ----- - -===== Configuration - -The JSON logging extension can be configured in various ways. The following properties are supported: - -include::{generated-dir}/config/quarkus-logging-json.adoc[opts=optional, leveloffset=+1] - -WARNING: Enabling pretty printing might cause certain processors and JSON parsers to fail. - -NOTE: Printing the details can be expensive as the values are retrieved from the caller. The details include the -source class name, source file name, source method name and source line number. - -== Log Handlers - -A log handler is a logging component responsible for the emission of log events to a recipient. -Quarkus comes with three different log handlers: **console**, **file** and **syslog**. - -=== Console log handler - -The console log handler is enabled by default. It outputs all log events to the console of your application (typically to the system's `stdout`). - -For details of its configuration options, see link:#quarkus-log-logging-log-config_quarkus.log.console-console-logging[the Console Logging configuration reference]. - -=== File log handler - -The file log handler is disabled by default. It outputs all log events to a file on the application's host. -It supports log file rotation. - -For details of its configuration options, see link:#quarkus-log-logging-log-config_quarkus.log.file-file-logging[the File Logging configuration reference]. - -=== Syslog log handler - -link:https://en.wikipedia.org/wiki/Syslog[Syslog] is a protocol for sending log messages on Unix-like systems using a protocol defined by link:https://tools.ietf.org/html/rfc5424[RFC 5424]. - -The syslog handler sends all log events to a syslog server (by default, the syslog server that is local to the application). -It is disabled by default. 
- -For details of its configuration options, see link:#quarkus-log-logging-log-config_quarkus.log.syslog-syslog-logging[the Syslog Logging configuration reference]. - -== Examples - -.Console DEBUG Logging except for Quarkus logs (INFO), No color, Shortened Time, Shortened Category Prefixes -[source, properties] ----- -quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n -quarkus.log.console.level=DEBUG -quarkus.log.console.color=false - -quarkus.log.category."io.quarkus".level=INFO ----- - -NOTE: If you are adding these properties via command line make sure `"` is escaped. -For example `-Dquarkus.log.category.\"io.quarkus\".level=DEBUG`. - -[#category-example] -.File TRACE Logging Configuration -[source, properties] ----- -quarkus.log.file.enable=true -# Send output to a trace.log file under the /tmp directory -quarkus.log.file.path=/tmp/trace.log -quarkus.log.file.level=TRACE -quarkus.log.file.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n -# Set 2 categories (io.quarkus.smallrye.jwt, io.undertow.request.security) to TRACE level -quarkus.log.min-level=TRACE -quarkus.log.category."io.quarkus.smallrye.jwt".level=TRACE -quarkus.log.category."io.undertow.request.security".level=TRACE ----- - -NOTE: As we don't change the root logger, console log will only contain `INFO` or higher order logs. 
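The format strings in the examples above use printf-like tokens: `%d` for the timestamp, `%p` for the level, `%c` for the category, `%t` for the thread and `%s` for the message. A toy renderer for a subset of these tokens, purely to illustrate how a pattern such as `%-5p [%c] (%t) %s%n` expands — this is not the JBoss Log Manager implementation, and it skips `%d` and the `{2.}` category abbreviation:

```java
public class ToyFormatter {
    // Expand a tiny subset of the pattern tokens used in the examples:
    // %-5p = level left-justified to 5 chars, %c = category,
    // %t = thread name, %s = message, %n = platform newline.
    public static String render(String level, String category,
                                String thread, String message) {
        return String.format("%-5s [%s] (%s) %s%n", level, category, thread, message);
    }

    public static void main(String[] args) {
        System.out.print(render("INFO", "io.quarkus.category", "main", "Hello"));
    }
}
```

Note how `%-5p` pads short level names such as `INFO` so the columns of consecutive log lines stay aligned.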
- -[#category-named-handlers-example] -.Named handlers attached to a category -[source, properties] ----- -# Send output to a trace.log file under the /tmp directory -quarkus.log.file.path=/tmp/trace.log -quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n -# Configure a named handler that logs to console -quarkus.log.handler.console."STRUCTURED_LOGGING".format=%e%n -# Configure a named handler that logs to file -quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".enable=true -quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".format=%e%n -# Configure the category and link the two named handlers to it -quarkus.log.category."io.quarkus.category".level=INFO -quarkus.log.category."io.quarkus.category".handlers=STRUCTURED_LOGGING,STRUCTURED_LOGGING_FILE ----- - -== Centralized Log Management - -If you want to send your logs to a centralized tool like Graylog, Logstash or Fluentd, you can follow the xref:centralized-log-management.adoc[Centralized log management guide]. - -== How to Configure Logging for `@QuarkusTest` - -If you want to configure logging for your `@QuarkusTest`, don't forget to set up the `maven-surefire-plugin` accordingly. -In particular, you need to set the appropriate `LogManager` using the `java.util.logging.manager` system property. - -.Example Configuration -[source, xml] ----- - - - - maven-surefire-plugin - ${surefire-plugin.version} - - - org.jboss.logmanager.LogManager <1> - DEBUG <2> - ${maven.home} - - - - - ----- -<1> Make sure the `org.jboss.logmanager.LogManager` is used. -<2> Enable debug logging for all logging categories. - -If you are using Gradle, add this to your `build.gradle`: - -[source, groovy] ----- -test { - systemProperty "java.util.logging.manager", "org.jboss.logmanager.LogManager" -} ----- - -See also: <> - -[[logging-adapters]] -== Logging Adapters - -Quarkus relies on the JBoss Logging library for all the logging requirements. 
- -If you are using libraries that have dependencies on other logging libraries such as Apache Commons Logging, Log4j or SLF4J, you need to exclude them from the dependencies and use one of the adapters provided by JBoss Logging. - -This is especially important when building native executables as you could encounter issues similar to the following when compiling the native executable: - -[source] ----- -Caused by java.lang.ClassNotFoundException: org.apache.commons.logging.impl.LogFactoryImpl ----- - -This is due to the logging implementation not being included in the native executable. -Using the JBoss Logging adapters will solve this problem. - -These adapters are available for most of the common Open Source logging components. - -Apache Commons Logging: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - org.jboss.logging - commons-logging-jboss-logging - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.jboss.logging:commons-logging-jboss-logging") ----- - -Log4j: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - org.jboss.logmanager - log4j-jboss-logmanager - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.jboss.logmanager:log4j-jboss-logmanager") ----- - -Log4j 2: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - org.jboss.logmanager - log4j2-jboss-logmanager - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.jboss.logmanager:log4j2-jboss-logmanager") ----- - -And SLF4J: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - org.jboss.slf4j - slf4j-jboss-logmanager - ----- - -[source,gradle,role="secondary 
asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.jboss.slf4j:slf4j-jboss-logmanager") ----- - -NOTE: This is not needed for libraries that are dependencies of a Quarkus extension as the extension will take care of this for you. - -[[loggingConfigurationReference]] -== Logging configuration reference - -include::{generated-dir}/config/quarkus-log-logging-log-config.adoc[opts=optional, leveloffset=+1] diff --git a/_versions/2.7/guides/lra.adoc b/_versions/2.7/guides/lra.adoc deleted file mode 100644 index 247871497a7..00000000000 --- a/_versions/2.7/guides/lra.adoc +++ /dev/null @@ -1,218 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Narayana LRA Participant Support - -include::./attributes.adoc[] - -== Introduction - -The LRA (short for Long Running Action) participant extension is useful in microservice based -designs where different services can benefit from a relaxed notion of distributed consistency. - -The idea is for multiple services to perform different computations/actions in concert, whilst -retaining the option to compensate for any actions performed during the computation. -This kind of loose coupling of services bridges the gap between strong consistency models -such as JTA/XA and "home grown" ad hoc consistency solutions. - -The model is based on the https://github.com/eclipse/microprofile-lra/blob/master/spec/src/main/asciidoc/microprofile-lra-spec.adoc#eclipse-microprofile-lra[Eclipse MicroProfile LRA specification]. -The approach is for the developer to annotate a business method with a Java annotation -(https://download.eclipse.org/microprofile/microprofile-lra-1.0/apidocs/[`@LRA`]). 
-When such a method is called, an LRA context is created (if one is not already present) and is passed along with subsequent JAX-RS invocations until a method is reached which also contains an `@LRA` annotation with an attribute indicating that the LRA should be closed or cancelled. The default is for the LRA to be closed in the same method that started the LRA (which itself may have propagated the context during method execution). - -The JAX-RS resource indicates that it wishes to participate in the interaction by, minimally, marking one of its methods with an https://download.eclipse.org/microprofile/microprofile-lra-1.0/apidocs/[`@Compensate`] annotation. If the context is later cancelled, this compensate action is guaranteed to be called even in the presence of failures, and is the trigger for the resource to compensate for any activities it performed in the context of the LRA. This guarantee enables services to operate reliably with the assurance of eventual consistency (when all compensation activities have run to completion). - -The participant can ask to be reliably notified when the LRA it is participating in is closed by marking one of its methods with an https://download.eclipse.org/microprofile/microprofile-lra-1.0/apidocs/[`@Complete`] annotation. In this way, cancelling an LRA causes all participants to be notified via their Compensate callback, and closing an LRA causes all participants to be notified via their Complete callback (if they have one). Other annotations for controlling participants are documented in the https://download.eclipse.org/microprofile/microprofile-lra-1.0/apidocs/[MicroProfile LRA API v1.0 javadoc].
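The close/Complete and cancel/Compensate pairing described above can be pictured with a toy model in plain Java. This is not the MicroProfile LRA API — the names `ToyLra` and `Participant` are invented for illustration — it only shows the callback semantics: closing notifies every participant's complete callback, cancelling notifies every compensate callback:

```java
import java.util.ArrayList;
import java.util.List;

public class ToyLra {
    // Toy participant contract mirroring the @Complete / @Compensate roles.
    public interface Participant {
        void complete();   // invoked when the LRA is closed
        void compensate(); // invoked when the LRA is cancelled
    }

    private final List<Participant> participants = new ArrayList<>();

    public void enlist(Participant p) { participants.add(p); }

    // Closing notifies every participant's Complete callback;
    // cancelling notifies every participant's Compensate callback.
    public void close()  { participants.forEach(Participant::complete); }
    public void cancel() { participants.forEach(Participant::compensate); }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        ToyLra lra = new ToyLra();
        lra.enlist(new Participant() {
            public void complete()   { log.add("order-service: completed"); }
            public void compensate() { log.add("order-service: compensated"); }
        });
        lra.cancel(); // every enlisted participant gets its Compensate callback
        System.out.println(log);
    }
}
```

In the real extension the coordinator invokes these callbacks over JAX-RS endpoints, and delivery is retried until it succeeds, as discussed under "Handling failures" below.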
- -== Configuration - -Once you have your Quarkus Maven project configured you can add the `narayana-lra` extension -by running the following command in your project base directory: - -:add-extension-extensions: narayana-lra,resteasy-jackson,rest-client-jackson -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml -----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-narayana-lra</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-client-jackson</artifactId>
</dependency>
---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-narayana-lra") -implementation("io.quarkus:quarkus-resteasy-jackson") -implementation("io.quarkus:quarkus-rest-client-jackson") ----- - -[IMPORTANT] -==== -`quarkus-narayana-lra` needs to be complemented with a server JAX-RS implementation and a REST Client implementation in order to work. -This means that users should also have either `quarkus-resteasy-jackson` and `quarkus-rest-client-jackson` or `quarkus-resteasy-reactive-jackson` and `quarkus-rest-client-reactive-jackson` dependencies in their application. -==== - -If there is a running coordinator then this is all you need in order to create -new LRAs and to enlist participants with them. - -The LRA extension can be configured by updating an `application.properties` file -in the `src/main/resources` directory. The only LRA-specific property is -`quarkus.lra.coordinator-url=`, which specifies the HTTP endpoint of an external -coordinator, for example: - -[source,properties] ----- -quarkus.lra.coordinator-url=http://localhost:8080/lra-coordinator ---- - -For a Narayana coordinator the path component of the URL is normally `lra-coordinator`.
-Coordinators can be obtained from `https://hub.docker.com/r/jbosstm/lra-coordinator` -or you can build your own coordinator using a Maven POM that includes the appropriate -dependencies. A Quarkus quickstart will be provided to show how to do this, or you can -take a look at one of the https://github.com/jbosstm/quickstart/tree/master/rts/lra-examples/lra-coordinator[Narayana quickstarts]. -Another option would be to run it managed inside a WildFly application server. - -== Handling failures - -When an LRA is told to finish, i.e. when a method annotated with `@LRA(end = true, ...)` -is invoked, the coordinator will instruct all services involved in the interaction to -finish. If a service is unavailable (or still finishing) then the coordinator will retry -periodically. It is the user's responsibility to restart failed services on the same -endpoint that they used when they first joined the LRA, or to tell the coordinator that -they wish to be notified on new endpoints. An LRA is not deemed finished until *all* -participants have acknowledged that they have finished. - -The coordinator is responsible for reliably creating and ending LRAs and for managing -participant enlistment, and it therefore must be available (for example, if it or the -network fails then something in the environment is responsible for restarting -the coordinator or for repairing the network, respectively). To fulfill this task the -coordinator must have access to durable storage for its logs (via a filesystem or in -a database). At the time of writing, managing coordinators is the responsibility of -the user. An "out-of-the-box" solution will be forthcoming.
- -== Examples - -The following is a simple example of how to start an LRA and how to receive a notification -when the LRA is later cancelled (the `@Compensate` annotated method is called) or closed -(`@Complete` is called): - -[source,java] ----- -@Path("/") -@ApplicationScoped -public class SimpleLRAParticipant -{ - @LRA(LRA.Type.REQUIRES_NEW) // a new LRA is created on method entry - @Path("/work") - @PUT - public void doInNewLongRunningAction(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) - { - /* - * Perform business actions in the context of the LRA identified by the - * value in the injected JAX-RS header. This LRA was started just before - * the method was entered (REQUIRES_NEW) and will be closed when the - * method finishes at which point the completeWork method below will be - * invoked. - */ - } - - @org.eclipse.microprofile.lra.annotation.Complete - @Path("/complete") - @PUT - public Response completeWork(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId, - String userData) - { - /* - * Free up resources allocated in the context of the LRA identified by the - * value in the injected JAX-RS header. - * - * Since there is no @Status method in this class, completeWork MUST be - * idempotent and MUST return the status. - */ - return Response.ok(ParticipantStatus.Completed.name()).build(); - } - - @org.eclipse.microprofile.lra.annotation.Compensate - @Path("/compensate") - @PUT - public Response compensateWork(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId, - String userData) - { - /* - * The LRA identified by the value in the injected JAX-RS header was - * cancelled so the business logic should compensate for any actions - * that have been performed while running in its context. 
- * - * Since there is no @Status method in this class, compensateWork MUST be - * idempotent and MUST return the status - */ - return Response.ok(ParticipantStatus.Compensated.name()).build(); - } -} ----- - -The example also shows that when an LRA is present its identifier can be obtained -by reading the request headers via the `@HeaderParam` JAX-RS annotation type. - -And here's an example of how to start an LRA in one resource method and close it in -a different resource method using the `end` element of the `LRA` annotation. It also -shows how to configure the LRA to be automatically cancelled if the business method -returns the particular HTTP status codes identified in the `cancelOn` and -`cancelOnFamily` elements: - -[source,java] ----- - @LRA(value = LRA.Type.REQUIRED, // if there is no incoming context a new one is created - cancelOn = { - Response.Status.INTERNAL_SERVER_ERROR // cancel on a 500 code - }, - cancelOnFamily = { - Response.Status.Family.CLIENT_ERROR // cancel on any 4xx code - }, - end = false) // the LRA will continue to run when the method finishes - @Path("/book") - @POST - public Response bookTrip(...) { ... } - - @LRA(value = LRA.Type.MANDATORY, // requires an active context before method can be executed - end = true) // end the LRA started by the bookTrip method - @Path("/confirm") - @PUT - public Booking confirmTrip(Booking booking) throws BookingException { ... } ----- - -The `end = false` element on the bookTrip method forces the LRA to continue running when -the method finishes and the `end = true` element on the confirmTrip method forces the LRA -(started by the bookTrip method) to be closed when the method finishes. Note that this -end element can be placed on any JAX-RS resource (ie one service can start the LRA whilst -a different service ends it). 
There are many more examples in the -https://github.com/eclipse/microprofile-lra/blob/master/spec/src/main/asciidoc/microprofile-lra-spec.adoc#java-annotations[MicroProfile LRA specification document] and in the https://github.com/eclipse/microprofile-lra/tree/master/tck/src/main/java/org/eclipse/microprofile/lra/tck[MicroProfile LRA TCK]. diff --git a/_versions/2.7/guides/mailer-reference.adoc b/_versions/2.7/guides/mailer-reference.adoc deleted file mode 100644 index 692e476c9aa..00000000000 --- a/_versions/2.7/guides/mailer-reference.adoc +++ /dev/null @@ -1,384 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Mailer Reference Guide - -include::./attributes.adoc[] - -This guide is the companion to the xref:mailer.adoc[Mailer Getting Started Guide]. -It explains in more detail the configuration and usage of the Quarkus Mailer. - -== Mailer extension - -To use the mailer, you need to add the `quarkus-mailer` extension. - -You can add the extension to your project using: - -[source, bash] ----- -> ./mvnw quarkus:add-extensions -Dextensions="mailer" ----- - -Or just add the following dependency to your project: - -[source, xml] -----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-mailer</artifactId>
</dependency>
---- - - -== Accessing the mailer - -You can inject the mailer in your application using: - -[source, java] ----- -@Inject -Mailer mailer; - -@Inject -ReactiveMailer reactiveMailer; ----- - -There are 2 APIs: - -* `io.quarkus.mailer.Mailer` provides the imperative (blocking and synchronous) API; -* `io.quarkus.mailer.reactive.ReactiveMailer` provides the reactive (non-blocking and asynchronous) API - -NOTE: The two APIs are equivalent feature-wise. Actually, the `Mailer` implementation is built on top of the `ReactiveMailer` implementation.
- -[NOTE] -.Deprecation -==== -`io.quarkus.mailer.ReactiveMailer` is deprecated in favor of `io.quarkus.mailer.reactive.ReactiveMailer`. -==== - -To send a simple email, proceed as follows: - -[source, java] ----- -// Imperative API: -mailer.send(Mail.withText("to@acme.org", "A simple email from quarkus", "This is my body.")); -// Reactive API: -Uni<Void> stage = reactiveMailer.send(Mail.withText("to@acme.org", "A reactive email from quarkus", "This is my body.")); ----- - -For example, you can use the `Mailer` in an HTTP endpoint as follows: - -[source, java] ----- -@GET -@Path("/imperative") -public void sendASimpleEmail() { - mailer.send(Mail.withText("to@acme.org", "A simple email from quarkus", "This is my body")); -} - -@GET -@Path("/reactive") -public Uni<Void> sendASimpleEmailAsync() { - return reactiveMailer.send( - Mail.withText("to@acme.org", "A reactive email from quarkus", "This is my body")); -} ----- - -== Creating Mail objects - -The mailer lets you send `io.quarkus.mailer.Mail` objects. -You can create new `io.quarkus.mailer.Mail` instances from the constructor or from the `Mail.withText` and -`Mail.withHtml` helper methods. -The `Mail` instance lets you add recipients (to, cc, or bcc), set the subject, headers, sender (from) address... - -You can also send several `Mail` objects in one call: - -[source, java] ----- -mailer.send(mail1, mail2, mail3); ----- - -[[attachments]] -== Sending attachments - -To send attachments, just use the `addAttachment` methods on the `io.quarkus.mailer.Mail` instance: - -[source,java] ----- -@GET -@Path("/attachment") -public void sendEmailWithAttachment() { - mailer.send(Mail.withText("clement.escoffier@gmail.com", "An email from quarkus with attachment", - "This is my body") - .addAttachment("my-file-1.txt", - "content of my file".getBytes(), "text/plain") - .addAttachment("my-file-2.txt", - new File("my-file.txt"), "text/plain") - ); -} ----- - -Attachments can be created from raw bytes (as shown in the snippet) or files.
-Note that files are resolved from the working directory of the application. - -[[html]] -== Sending HTML emails with inlined attachments - -When sending HTML emails, you can add inlined attachments. -For example, you can send an image with your email, and this image will be displayed in the mail content. -If you put the image file into the `META-INF/resources` folder, you should specify the full path to the file, _e.g._ `META-INF/resources/quarkus-logo.png`; otherwise Quarkus will look for the file in the root directory. - -[source, java] ----- -@GET -@Path("/html") -public void sendingHTML() {
    String body = "<strong>Hello!</strong>" + "\n" +
        "<p>Here is an image for you: <img src=\"cid:my-image@quarkus.io\"/></p>" +
        "<p>Regards</p>";
    mailer.send(Mail.withHtml("to@acme.org", "An email in HTML", body)
        .addInlineAttachment("quarkus-logo.png",
            new File("quarkus-logo.png"),
            "image/png", "<my-image@quarkus.io>"));
} ----- - -Note the _content-id_ format and reference. -By spec, when you create the inline attachment, the content-id must be structured as follows: `<id@domain>`. -If you don't wrap your content-id between `<>`, it is automatically wrapped for you. -When you want to reference your attachment, for instance in the `src` attribute, use `cid:id@domain` (without the `<` and `>`). - -[[templates]] -== Message Body Based on Qute Templates - -It's possible to inject a mail template, where the message body is created automatically using xref:qute.adoc[Qute templates]. - -[source, java] ----- -@Path("") -public class MailingResource { - - @CheckedTemplate - static class Templates { - public static native MailTemplateInstance hello(String name); // <1> - } - - @GET - @Path("/mail") - public Uni<Void> send() { - // the template looks like: Hello {name}! // <2> - return Templates.hello("John") - .to("to@acme.org") // <3> - .subject("Hello from Qute template") - .send(); // <4> - } -} ----- -<1> By convention, the enclosing class name and method names are used to locate the template. In this particular case, -we will use the `src/main/resources/templates/MailingResource/hello.html` and `src/main/resources/templates/MailingResource/hello.txt` templates to create the message body. -<2> Set the data used in the template. -<3> Create a mail template instance and set the recipient. -<4> `MailTemplate.send()` triggers the rendering and, once finished, sends the e-mail via a `Mailer` instance. - -TIP: Injected mail templates are validated during build. -The build fails if there is no matching template in `src/main/resources/templates`.
- -You can also do this without type-safe templates: - -[source, java] ----- -@Inject -@Location("hello") -MailTemplate hello; // <1> - -@GET -@Path("/mail") -public Uni<Void> send() { - return hello.to("to@acme.org") // <2> - .subject("Hello from Qute template") - .data("name", "John") // <3> - .send(); // <4> -} ----- -<1> If there is no `@Location` qualifier provided, the field name is used to locate the template. -Otherwise, the template is looked up at the specified location. In this particular case, we will use the `src/main/resources/templates/hello.html` and `src/main/resources/templates/hello.txt` templates to create the message body. -<2> Create a mail template instance and set the recipient. -<3> Set the data used in the template. -<4> `MailTemplate.send()` triggers the rendering and, once finished, sends the e-mail via a `Mailer` instance. - -[[execution-model]] -== Execution model - -The reactive mailer is non-blocking, and the results are provided on an I/O thread. -See the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture documentation] for further details on this topic. - -The non-reactive mailer blocks until the messages are sent to the SMTP server. -Note that this does not mean that the message is delivered, just that it's been sent successfully to the SMTP server, which will be responsible for the delivery. - -[[testing]] -== Testing email sending - -Because it is very inconvenient to send emails during development and testing, you can set the `quarkus.mailer.mock` boolean -configuration to `true` to prevent the actual sending of emails; instead, they are printed on stdout and collected in a `MockMailbox` bean. -This is the default if you are running Quarkus in `DEV` or `TEST` mode.
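The mock behaviour can also be pinned down per configuration profile rather than relying on the defaults. A small sketch using the standard `%dev`, `%test` and `%prod` profile prefixes (the values shown are only illustrative):

```properties
# dev and test already default to the mock mailbox; stating it explicitly does no harm
%dev.quarkus.mailer.mock=true
%test.quarkus.mailer.mock=true
# make sure real emails go out in production
%prod.quarkus.mailer.mock=false
```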
- -You can then write tests to verify that your emails were sent, for example, by a REST endpoint: - -[source, java] ----- -@QuarkusTest -class MailTest { - - private static final String TO = "foo@quarkus.io"; - - @Inject - MockMailbox mailbox; - - @BeforeEach - void init() { - mailbox.clear(); - } - - @Test - void testTextMail() throws MessagingException, IOException { - // call a REST endpoint that sends email - given() - .when() - .get("/send-email") - .then() - .statusCode(202) - .body(is("OK")); - - // verify that it was sent - List<Mail> sent = mailbox.getMessagesSentTo(TO); - assertThat(sent).hasSize(1); - Mail actual = sent.get(0); - assertThat(actual.getText()).contains("Wake up!"); - assertThat(actual.getSubject()).isEqualTo("Alarm!"); - - assertThat(mailbox.getTotalMessagesSent()).isEqualTo(6); - } -} ----- - -== Using the underlying Vert.x Mail Client - -The Quarkus Mailer is implemented on top of the https://vertx.io/docs/vertx-mail-client/java/[Vert.x Mail Client], providing an asynchronous and non-blocking way to send emails. -If you need fine-grained control over how the mail is sent, for instance if you need to retrieve the message ids, you can inject the underlying client, and use it directly: - -[source, java] ----- -@Inject MailClient client; ----- - -Two API flavors are exposed: - -* the Mutiny client (`io.vertx.mutiny.ext.mail.MailClient`) -* the bare client (`io.vertx.ext.mail.MailClient`) - -Check the xref:vertx.adoc[Using Vert.x guide] for further details about these different APIs and how to select the most suitable for you. - -The retrieved `MailClient` is configured using the configuration keys presented above. -You can also create your own instance, and pass your own configuration. - -[#gmail-specific-configuration] -== Gmail specific configuration - -If you want to use the Gmail SMTP server, first create a dedicated password in `Google Account > Security > App passwords` or go to https://myaccount.google.com/apppasswords.
- -[NOTE] -==== -You need to switch on 2-Step Verification at https://myaccount.google.com/security in order to access the App passwords page. -==== - -When done, you can configure your Quarkus application by adding the following properties to your `application.properties`: - -With TLS: - -[source,properties] ----- -quarkus.mailer.auth-methods=DIGEST-MD5 CRAM-SHA256 CRAM-SHA1 CRAM-MD5 PLAIN LOGIN -quarkus.mailer.from=YOUREMAIL@gmail.com -quarkus.mailer.host=smtp.gmail.com -quarkus.mailer.port=587 -quarkus.mailer.start-tls=REQUIRED -quarkus.mailer.username=YOUREMAIL@gmail.com -quarkus.mailer.password=YOURGENERATEDAPPLICATIONPASSWORD ----- - -Or with SSL: - -[source,properties] ----- -quarkus.mailer.auth-methods=DIGEST-MD5 CRAM-SHA256 CRAM-SHA1 CRAM-MD5 PLAIN LOGIN -quarkus.mailer.from=YOUREMAIL@gmail.com -quarkus.mailer.host=smtp.gmail.com -quarkus.mailer.port=465 -quarkus.mailer.ssl=true -quarkus.mailer.username=YOUREMAIL@gmail.com -quarkus.mailer.password=YOURGENERATEDAPPLICATIONPASSWORD ----- - -[NOTE] -==== -The `quarkus.mailer.auth-methods` configuration option is needed for the Quarkus mailer to support password authentication with Gmail. -By default both the mailer and Gmail default to `XOAUTH2` which requires registering an application, getting tokens, etc. -==== - -== Using SSL with native executables - -Note that if you enable SSL for the mailer and you want to build a native executable, you will need to enable the SSL support. -Please refer to the xref:native-and-ssl.adoc[Using SSL With Native Executables] guide for more information. - -== Configuring the SMTP credentials - -It is recommended to encrypt any sensitive data, such as the `quarkus.mailer.password`. -One approach is to save the value into a secure store like HashiCorp Vault, and refer to it from the configuration. 
-Assuming for instance that Vault contains key `mail-password` at path `myapps/myapp/myconfig`, then the mailer -extension can simply be configured as: - -[source,properties] ----- -... -# path within the kv secret engine where the application's sensitive configuration is located -quarkus.vault.secret-config-kv-path=myapps/myapp/myconfig - -... -quarkus.mailer.password=${mail-password} ----- -Please note that the password value is evaluated only once, at startup time. If `mail-password` was changed in Vault, -the only way to get the new value would be to restart the application. - -[TIP] -For more information about the Mailer configuration please refer to the <<configuration-reference>>. - -== Configuring a trust store - -If your SMTP server requires a trust store, you can configure it as follows: - -[source, properties] ----- -quarkus.mailer.host=... -quarkus.mailer.port=... -quarkus.mailer.ssl=true -quarkus.mailer.trust-store.paths=truststore.jks # the path to your trust store -quarkus.mailer.trust-store.password=secret # the trust store password if any -quarkus.mailer.trust-store.type=JKS # the type of trust store if it can't be deduced from the file extension ----- - -The Quarkus mailer supports JKS, PKCS#12 and PEM trust stores. -For PEM, you can configure multiple files. -For JKS and PKCS#12, you can configure the password if any. - -`quarkus.mailer.trust-store.type` is optional and allows configuring the type of trust store (among `JKS`, `PEM` and `PKCS#12`). -When not set, Quarkus tries to deduce the type from the file name. - -NOTE: You can also configure `quarkus.mailer.trust-all=true` to bypass the verification.
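As a concrete sketch of the PEM case mentioned above (the host name and certificate file names are hypothetical), a trust store made of several CA files could look like this:

```properties
quarkus.mailer.host=smtp.example.com
quarkus.mailer.port=465
quarkus.mailer.ssl=true
# PEM allows several files; the type is deduced from the .pem extension
quarkus.mailer.trust-store.paths=root-ca.pem,intermediate-ca.pem
```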
- -[[configuration-reference]] -== Mailer Configuration Reference - -include::{generated-dir}/config/quarkus-mailer.adoc[opts=optional, leveloffset=+1] - diff --git a/_versions/2.7/guides/mailer.adoc b/_versions/2.7/guides/mailer.adoc deleted file mode 100644 index 65570b53fe9..00000000000 --- a/_versions/2.7/guides/mailer.adoc +++ /dev/null @@ -1,220 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Sending emails using SMTP - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can send emails using an SMTP server. -This is a getting started guide. -Check the xref:mailer-reference.adoc[Quarkus Mailer Reference documentation] for more complete explanation about the mailer and its usage. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] -* The SMTP hostname, port and credentials, and an email address -* cURL - -== Architecture - -In this guide, we will build an application: - -1. exposing an HTTP endpoint, -2. sending email when the endpoint receives an HTTP request. - -The application will demonstrate how to send emails using the _imperative_ and _reactive_ mailer APIs. - -Attachments, inlined attachments, templating, testing and more advanced configuration are covered in the xref:mailer-reference.adoc[Mailer Reference documentation]. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `mailer-quickstart` {quickstarts-tree-url}/mailer-quickstart[directory]. - -== Creating the Maven Project - -First, we need a project. 
-Open your browser to https://code.quarkus.io and select the following extensions: - -1. RESTEasy Reactive - we use it to expose our HTTP endpoint -2. Mailer - which offers the ability to send emails -3. Qute - the Quarkus template engine - -Alternatively, this https://code.quarkus.io/?a=quarkus-mailer-getting-started&nc=true&e=resteasy-reactive&e=qute&e=mailer&extension-search=mail[link] pre-configures the application. -Click on "Generate your application", download the zip file and unzip it on your file system. -Open the generated project in your IDE. -In a terminal, navigate to the project and start dev mode: - -include::includes/devtools/dev.adoc[] - -=== Implement the HTTP endpoint - -First, create the `src/main/java/org/acme/MailResource.java` file, with the following content: - -[source, java] ----- -package org.acme; - -import io.quarkus.mailer.Mail; -import io.quarkus.mailer.Mailer; -import io.smallrye.common.annotation.Blocking; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -@Path("/mail") // <1> -public class MailResource { - - @Inject Mailer mailer; // <2> - - @GET // <3> - @Blocking // <4> - public void sendEmail() { - mailer.send( - Mail.withText("quarkus@quarkus.io", // <5> - "Ahoy from Quarkus", - "A simple email sent from a Quarkus application." - ) - ); - } - -} ----- -<1> Configure the root path of our HTTP endpoint -<2> Inject the `Mailer` object managed by Quarkus -<3> Create a method that will handle the HTTP GET request on `/mail` -<4> Because we are using RESTEasy Reactive and the _imperative_ mailer, we need to add the `@Blocking` annotation. We will see later the reactive variant. -<5> Create a `Mail` object by configuring the _to_ recipient, the subject and body - -The `MailResource` class implements the HTTP API exposed by our application. -It handles `GET` requests on `http://localhost:8080/mail`.
- -So, if in another terminal, you run: - -[source, bash] ----- -> curl http://localhost:8080/mail ----- - -You should see in the application log something like: - -[source, text] ----- -INFO [quarkus-mailer] (executor-thread-0) Sending email Ahoy from Quarkus from null to [quarkus@quarkus.io], text body: -A simple email sent from a Quarkus application. -html body: - ----- - -As the application runs in _dev mode_, it simulates the sending of the emails. -It prints the message in the log, so you can check what was about to be sent. - -NOTE: This section used the _imperative_ mailer API. -It blocks the caller thread until the mail is sent. - -== Using the reactive mailer - -The last section used the _imperative_ mailer. -Quarkus also offers a reactive API. - - -[TIP] -.Mutiny -==== -The reactive mailer uses Mutiny reactive types. -If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library]. -==== - -In the same class, add: - -[source, java] ----- -@Inject -ReactiveMailer reactiveMailer; // <1> - -@GET -@Path("/reactive") // <2> -public Uni<Void> sendEmailUsingReactiveMailer() { // <3> - return reactiveMailer.send( // <4> - Mail.withText("quarkus@quarkus.io", - "Ahoy from Quarkus", - "A simple email sent from a Quarkus application using the reactive API." - ) - ); -} ----- -<1> Inject the reactive mailer. The class to import is `io.quarkus.mailer.reactive.ReactiveMailer`. -<2> Configure the path to handle GET request on `/mail/reactive`. Note that because we are using the reactive API, we don't need `@Blocking` -<3> The method returns a `Uni<Void>` which completes when the mail is sent. It does not block the caller thread. -<4> The API is similar to the _imperative_ one except that the `send` method returns a `Uni<Void>`.
- -Now, in your terminal, run: - -[source, bash] ----- -> curl http://localhost:8080/mail/reactive ----- - -You should see in the application log something like: - -[source, text] ----- -INFO [quarkus-mailer] (vert.x-eventloop-thread-11) Sending email Ahoy from Quarkus from null to [quarkus@quarkus.io], text body: -A simple email sent from a Quarkus application using the reactive API. -html body: - ----- - -== Configuring the mailer - -It's time to configure the mailer so that it no longer simulates the sending of the emails. -The Quarkus mailer uses SMTP, so make sure you have access to an SMTP server. - -In the `src/main/resources/application.properties` file, you need to configure the host, port, username and password, as well as the other configuration aspects. -Note that the password can also be configured using system properties and environment variables. -See the xref:config-reference.adoc[configuration reference guide] for details. - -Here is an example using _sendgrid_: - -[source,properties] ----- -# Your email address you send from - must match the "from" address from sendgrid. -quarkus.mailer.from=test@quarkus.io - -# The SMTP host -quarkus.mailer.host=smtp.sendgrid.net -# The SMTP port -quarkus.mailer.port=465 -# If the SMTP connection requires SSL/TLS -quarkus.mailer.ssl=true -# Your username -quarkus.mailer.username=.... -# Your password -quarkus.mailer.password=.... - -# By default, in dev mode, the mailer is a mock. This disables the mock and uses the configured mailer. -quarkus.mailer.mock=false ----- - -Once you have configured the mailer, if you call the HTTP endpoint as shown above, you will send emails. - -== Conclusion - -This guide has shown how to send emails from your Quarkus application.
-The xref:mailer-reference.adoc[mailer reference guide] provides more details about the mailer usage and configuration such as: - -* xref:mailer-reference.adoc#attachments[how to add attachments] -* xref:mailer-reference.adoc#html[how to format the email as HTML and use inline attachments] -* xref:mailer-reference.adoc#templates[how to use Qute templates] -* xref:mailer-reference.adoc#testing[how to test applications sending emails] -* xref:mailer-reference.adoc#gmail-specific-configuration[how to configure the mailer to send emails with GMAIL] - - - diff --git a/_versions/2.7/guides/maven-tooling.adoc b/_versions/2.7/guides/maven-tooling.adoc deleted file mode 100644 index d923411998c..00000000000 --- a/_versions/2.7/guides/maven-tooling.adoc +++ /dev/null @@ -1,1099 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Building applications with Maven - -include::./attributes.adoc[] -:devtools-no-gradle: - -[[project-creation]] -== Creating a new project - -You can scaffold a new Maven project with: - -:create-app-group-id: my-groupId -:create-app-artifact-id: my-artifactId -:create-app-code: -:create-app-post-command: -include::includes/devtools/create-app.adoc[] - -If you are using the CLI, you can get the list of available options with: - -[source,bash] ----- -quarkus create app --help ----- - -If you are using the Maven command, the following table lists the attributes you can pass to the `create` command: - -[cols=3*,options="header"] -|=== -| Attribute -| Default Value -| Description - -| `projectGroupId` -| `org.acme.sample` -| The group id of the created project - -| `projectArtifactId` -| _mandatory_ -| The artifact id of the created project. Not passing it triggers the interactive mode. 
- -| `projectVersion` -| `1.0.0-SNAPSHOT` -| The version of the created project - -| `platformGroupId` -| `io.quarkus.platform` -| The group id of the target platform. - -| `platformArtifactId` -| `quarkus-bom` -| The artifact id of the target platform BOM. - -| `platformVersion` -| The version currently recommended by the https://quarkus.io/guides/extension-registry-user[Quarkus Extension Registry] -| The version of the platform you want the project to use. It can also accept a version range, in which case the latest from the specified range will be used. - -| `className` -| _Not created if omitted_ -| The fully qualified name of the generated resource - -| `path` -| `/hello` -| The resource path, only relevant if `className` is set. - -| `extensions` -| _[]_ -| The list of extensions to add to the project (comma-separated) - -| `quarkusRegistryClient` -| `true` -| Whether or not Quarkus should use the online registry to resolve extension catalogs. If this is set to false, the extension catalog will be narrowed to the defined (or default) platform BOM. - -|=== - -By default, the command will target the `io.quarkus.platform:quarkus-bom:{quarkus-version}` platform release (unless the coordinates of the desired platform release have been specified). - -The project is generated in a directory named after the passed artifactId. -If the directory already exists, the generation fails. - -A pair of Dockerfiles for native and jvm mode are also generated in `src/main/docker`. -Instructions to build the image and run the container are written in those Dockerfiles. - -== Dealing with extensions - -From inside a Quarkus project, you can obtain a list of the available extensions with: - -include::includes/devtools/extension-list.adoc[] - -You can add an extension using: - -:add-extension-extensions: hibernate-validator -include::includes/devtools/extension-add.adoc[] - -Extensions are passed using a comma-separated list. 
- -The extension name is the GAV name of the extension: e.g. `io.quarkus:quarkus-agroal`. -But you can pass a partial name and Quarkus will do its best to find the right extension. -For example, `agroal`, `Agroal` or `agro` will expand to `io.quarkus:quarkus-agroal`. -If no extension is found or if more than one extension matches, you will see a red check mark ❌ in the command result. - -[source,shell] ----- -$ ./mvnw quarkus:add-extensions -Dextensions=jdbc,agroal,non-exist-ent -[...] -❌ Multiple extensions matching 'jdbc' - * io.quarkus:quarkus-jdbc-h2 - * io.quarkus:quarkus-jdbc-mariadb - * io.quarkus:quarkus-jdbc-postgresql - Be more specific e.g using the exact name or the full gav. -✅ Adding extension io.quarkus:quarkus-agroal -❌ Cannot find a dependency matching 'non-exist-ent', maybe a typo? -[...] ----- - -You can install all extensions which match a globbing pattern: - -:add-extension-extensions: smallrye-* -include::includes/devtools/extension-add.adoc[] - -[[dev-mode]] -== Development mode - -Quarkus comes with a built-in development mode. -Run your application with: - -include::includes/devtools/dev.adoc[] - -You can then update the application sources, resources and configurations. -The changes are automatically reflected in your running application. -This is great for development spanning UI and database work, as you see changes reflected immediately. - -Dev mode enables hot deployment with background compilation, which means -that when you modify your Java files or your resource files and refresh your browser these changes will automatically take effect. -This works too for resource files like the configuration property file. -The act of -refreshing the browser triggers a scan of the workspace, and if any changes are detected the Java files are compiled, -and the application is redeployed; then your request is serviced by the redeployed application. If there are any issues -with compilation or deployment an error page will let you know.

Hit `CTRL+C` to stop the application.

[NOTE]
====
By default, `quarkus:dev` sets the debug host to `localhost` (for security reasons). If you need to change this, for example to enable debugging on all hosts, you can use the `-DdebugHost` option like so:

:dev-additional-parameters: -DdebugHost=0.0.0.0
include::includes/devtools/dev-parameters.adoc[]
:!dev-additional-parameters:
====

=== Remote Development Mode

It is possible to use development mode remotely, so that you can run Quarkus in a container environment (such as OpenShift)
and have changes made to your local files become immediately visible.

This allows you to develop in the same environment you will actually run your app in, and with access to the same services.

WARNING: Do not use this in production. This should only be used in a development environment. You should not run production applications in dev mode.

To do this you must build a mutable application, using the `mutable-jar` format. Set the following properties in `application.properties`:

[source,properties]
----
quarkus.package.type=mutable-jar <1>
quarkus.live-reload.password=changeit <2>
quarkus.live-reload.url=http://my.cluster.host.com:8080 <3>
----
<1> This tells Quarkus to use the mutable-jar format. Mutable applications also include the deployment time parts of Quarkus,
so they take up a bit more disk space. If run normally, they start just as fast and use the same memory as an immutable application;
however, they can also be started in dev mode.
<2> The password that is used to secure communication between the remote side and the local side.
<3> The URL at which your app is going to be running in dev mode. This is only needed on the local side, so you
may want to leave it out of the properties file and specify it as a system property on the command line.

The `mutable-jar` is then built in the same way that a regular Quarkus jar is built, i.e.
by issuing:

include::includes/devtools/build.adoc[]

Before you start Quarkus on the remote host, set the environment variable `QUARKUS_LAUNCH_DEVMODE=true`. If you are
on bare metal, you can set it via the `export QUARKUS_LAUNCH_DEVMODE=true` command and then run the application with the proper `java -jar ...` command.

If you plan on running the application via Docker, then you'll need to add `-e QUARKUS_LAUNCH_DEVMODE=true` to the `docker run` command.
When the application starts, you should see the following line in the logs: `Profile dev activated. Live Coding activated`.

NOTE: The remote side does not need to include Maven or any other development tools. The normal `fast-jar` Dockerfile
that is generated with a new Quarkus application is all you need. If you are using bare metal, launch the Quarkus runner
jar; do not attempt to run normal dev mode.

Now you need to connect your local agent to the remote host, using the `remote-dev` command:

[source,bash]
----
./mvnw quarkus:remote-dev -Dquarkus.live-reload.url=http://my-remote-host:8080
----

Now every time you refresh the browser, you should see any changes you have made locally immediately visible in the remote
app. This is done via an HTTP-based long-polling transport that synchronizes your local workspace and the remote
application via HTTP calls.

If you do not want to use the HTTP feature, you can simply run the `remote-dev` command without specifying the URL.
In this mode the command will continuously rebuild the local application, so you can use an external tool such as odo or
rsync to sync to the remote application.

All the config options are shown below:

include::{generated-dir}/config/quarkus-live-reload-live-reload-config.adoc[opts=optional, leveloffset=+1]

NOTE: It is recommended to use SSL when using remote dev mode; however, even if you are using an unencrypted connection,
your password is never sent directly over the wire.
For the initial connection request, the password is hashed with the
initial state data; subsequent requests hash it with a random session id generated by the server and any body contents
for POST requests, the path for DELETE requests, and an incrementing counter to prevent replay attacks.

=== Configuring Development Mode

By default, the Maven plugin picks up compiler flags to pass to
`javac` from `maven-compiler-plugin`.

If you need to customize the compiler flags used in development mode,
add a `configuration` section to the `plugin` block and set the
`compilerArgs` property just as you would when configuring
`maven-compiler-plugin`. You can also set `source`, `target`, and
`jvmArgs`. For example, to pass `--enable-preview` to both the JVM
and `javac`:

[source,xml]
----
<plugin>
  <groupId>${quarkus.platform.group-id}</groupId>
  <artifactId>quarkus-maven-plugin</artifactId>
  <version>${quarkus.platform.version}</version>

  <configuration>
    <source>${maven.compiler.source}</source>
    <target>${maven.compiler.target}</target>
    <compilerArgs>
      <arg>--enable-preview</arg>
    </compilerArgs>
    <jvmArgs>--enable-preview</jvmArgs>
  </configuration>

  ...
</plugin>
----


== Debugging

In development mode, Quarkus starts by default with debug mode enabled, listening on port `5005` without suspending the JVM.

This behavior can be changed by giving the `debug` system property one of the following values:

* `false` - the JVM will start with debug mode disabled
* `true` - the JVM is started in debug mode and will be listening on port `5005`
* `client` - the JVM will start in client mode and attempt to connect to `localhost:5005`
* `{port}` - the JVM is started in debug mode and will be listening on `{port}`

An additional system property `suspend` can be used to suspend the JVM, when launched in debug mode.
`suspend` supports the following values:

* `y` or `true` - the debug mode JVM launch is suspended
* `n` or `false` - the debug mode JVM is started without suspending


[TIP]
====
You can also run a Quarkus application in debug mode with a suspended JVM using:

:dev-additional-parameters: -Dsuspend -Ddebug
include::includes/devtools/dev-parameters.adoc[]
:!dev-additional-parameters:

Then, attach your debugger to `localhost:5005`.
====

== Import in your IDE

Once you have created a project, you can import it in your favorite IDE.
The only requirement is the ability to import a Maven project.

**Eclipse**

In Eclipse, click on: `File -> Import`.
In the wizard, select: `Maven -> Existing Maven Project`.
On the next screen, select the root location of the project.
The next screen lists the found modules; select the generated project and click on `Finish`. Done!

In a separate terminal, run:

include::includes/devtools/dev.adoc[]

and enjoy a highly productive environment.

**IntelliJ**

In IntelliJ:

1. From inside IntelliJ select `File -> New -> Project From Existing Sources...` or, if you are on the welcome dialog, select `Import project`.
2. Select the project root
3. Select `Import project from external model` and `Maven`
4. Click `Next` a few times (review the different options if needed)
5. On the last screen click on `Finish`

In a separate terminal or in the embedded terminal, run:

include::includes/devtools/dev.adoc[]

Enjoy!

**Apache NetBeans**

In NetBeans:

1. Select `File -> Open Project`
2. Select the project root
3. Click on `Open Project`

In a separate terminal or the embedded terminal, go to the project root and run:

include::includes/devtools/dev.adoc[]

Enjoy!

**Visual Studio Code**

Open the project directory in VS Code. If you have installed the Java Extension Pack (grouping a set of Java extensions), the project is loaded as a Maven project.

== Logging Quarkus application build classpath tree

Usually, the dependencies of an application (which is a Maven project) can be displayed using the `mvn dependency:tree` command. In the case of a Quarkus application, however, this command will list only the runtime dependencies of the application.
Given that the Quarkus build process adds deployment dependencies of the extensions used in the application to the original application classpath, it could be useful to know which dependencies and which versions end up on the build classpath.
Luckily, the `quarkus` Maven plugin includes the `dependency-tree` goal, which displays the build dependency tree for the application.

Executing `./mvnw quarkus:dependency-tree` on your project should result in an output similar to:

[source,text,subs=attributes+]
----
[INFO] --- quarkus-maven-plugin:{quarkus-version}:dependency-tree (default-cli) @ getting-started ---
[INFO] org.acme:getting-started:jar:1.0.0-SNAPSHOT
[INFO] └─ io.quarkus:quarkus-resteasy-deployment:jar:{quarkus-version} (compile)
[INFO]    ├─ io.quarkus:quarkus-resteasy-server-common-deployment:jar:{quarkus-version} (compile)
[INFO]    │  ├─ io.quarkus:quarkus-core-deployment:jar:{quarkus-version} (compile)
[INFO]    │  │  ├─ commons-beanutils:commons-beanutils:jar:1.9.3 (compile)
[INFO]    │  │  │  ├─ commons-logging:commons-logging:jar:1.2 (compile)
[INFO]    │  │  │  └─ commons-collections:commons-collections:jar:3.2.2 (compile)
...
----

The goal accepts the following optional parameters:

* `mode` - the default value is `prod`, i.e. the production build dependency tree. Alternatively, it accepts the values `test` to display the test dependency tree and `dev` to display the dev mode dependency tree;
* `outputFile` - specifies the file to persist the dependency tree to;
* `appendOutput` - the default value is `false`; indicates whether the output of the command should be appended to the file specified with the `outputFile` parameter or whether that file should be overwritten.
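
For instance, the parameters above can be combined as follows to persist the test dependency tree to a file (the file name here is arbitrary):

[source,bash]
----
./mvnw quarkus:dependency-tree -Dmode=test -DoutputFile=target/deps.txt
----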

== Downloading Maven artifact dependencies for offline development and testing

Quarkus extension dependencies are divided into the runtime extension dependencies that end up on the application runtime classpath and the deployment (or build time) extension dependencies that are resolved by Quarkus only at application build time to create
the build classpath. Application developers are expected to express dependencies only on the runtime artifacts of Quarkus extensions. As a consequence, the deployment extension dependencies aren't visible to Maven plugins that aren't aware of the Quarkus
extension dependency model, such as the `maven-dependency-plugin`, `go-offline-maven-plugin`, etc. That means those plugins cannot be used to pre-download all the application dependencies in order to build and test the application later in offline mode.

To enable the use case of building and testing a Quarkus application offline, the `quarkus-maven-plugin` includes the `go-offline` goal, which can be invoked from the command line like this:

[source,bash]
----
./mvnw quarkus:go-offline
----

This goal will resolve all the runtime, build time, test and dev mode dependencies of the application, downloading them to the configured local Maven repository.

== Building a native executable

Native executables make Quarkus applications ideal for containers and serverless workloads.

Make sure to have `GRAALVM_HOME` configured and pointing to GraalVM version {graalvm-version} (make sure to use a Java 11 version of GraalVM).
Verify that your `pom.xml` has the proper `native` profile (see <<build-tool-maven>>).

Create a native executable using:

include::includes/devtools/build-native.adoc[]

A native executable will be present in `target/`.

To run Integration Tests on the native executable, make sure to have the proper Maven plugin configured (see <<build-tool-maven>>) and launch the `verify` goal.

[source,shell]
----
$ ./mvnw verify -Pnative
...
-[quarkus-quickstart-runner:50955] universe: 391.96 ms -[quarkus-quickstart-runner:50955] (parse): 904.37 ms -[quarkus-quickstart-runner:50955] (inline): 1,143.32 ms -[quarkus-quickstart-runner:50955] (compile): 6,228.44 ms -[quarkus-quickstart-runner:50955] compile: 9,130.58 ms -[quarkus-quickstart-runner:50955] image: 2,101.42 ms -[quarkus-quickstart-runner:50955] write: 803.18 ms -[quarkus-quickstart-runner:50955] [total]: 33,520.15 ms -[INFO] -[INFO] --- maven-failsafe-plugin:2.22.0:integration-test (default) @ quarkus-quickstart-native --- -[INFO] -[INFO] ------------------------------------------------------- -[INFO] T E S T S -[INFO] ------------------------------------------------------- -[INFO] Running org.acme.quickstart.GreetingResourceIT -Executing [/Users/starksm/Dev/JBoss/Quarkus/starksm64-quarkus-quickstarts/getting-started-native/target/quarkus-quickstart-runner, -Dquarkus.http.port=8081, -Dtest.url=http://localhost:8081, -Dquarkus.log.file.path=target/quarkus.log] -2019-02-28 16:52:42,020 INFO [io.quarkus] (main) Quarkus started in 0.007s. Listening on: http://localhost:8080 -2019-02-28 16:52:42,021 INFO [io.quarkus] (main) Installed features: [cdi, resteasy] -[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.081 s - in org.acme.quickstart.GreetingResourceIT -[INFO] -[INFO] Results: -[INFO] -[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 - -... ----- - -=== Build a container friendly executable - -The native executable will be specific to your operating system. -To create an executable that will run in a container, use the following: - -:build-additional-parameters: -Dquarkus.native.container-build=true -include::includes/devtools/build-native.adoc[] -:!build-additional-parameters: - -The produced executable will be a 64 bit Linux executable, so depending on your operating system it may no longer be runnable. -However, it's not an issue as we are going to copy it to a Docker container. 
Note that in this case the build itself runs in a Docker container too, so you don't need to have GraalVM installed locally.

[TIP]
====
By default, the native executable will be generated using the `quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor}` Docker image.

If you want to build a native executable with a different Docker image (for instance to use a different GraalVM version),
use the `-Dquarkus.native.builder-image=` build argument.

The list of the available Docker images can be found on https://quay.io/repository/quarkus/ubi-quarkus-native-image?tab=tags[quay.io].
Be aware that a given Quarkus version might not be compatible with all the images available.
====

You can follow the xref:building-native-image.adoc[Build a native executable guide] as well as xref:deploying-to-kubernetes.adoc[Deploying Application to Kubernetes and OpenShift] for more information.

[[build-tool-maven]]
== Maven configuration

If you have not used project scaffolding, add the following elements in your `pom.xml`

[source,xml,subs=attributes+]
----
<dependencyManagement>
    <dependencies>
        <dependency> <1>
            <groupId>${quarkus.platform.group-id}</groupId>
            <artifactId>quarkus-bom</artifactId>
            <version>${quarkus.platform.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<build>
    <plugins>
        <plugin> <2>
            <groupId>${quarkus.platform.group-id}</groupId>
            <artifactId>quarkus-maven-plugin</artifactId>
            <version>${quarkus.platform.version}</version>
            <extensions>true</extensions> <3>
            <executions>
                <execution>
                    <goals>
                        <goal>build</goal>
                        <goal>generate-code</goal>
                        <goal>generate-code-tests</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin> <4>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>${surefire-plugin.version}</version>
            <configuration>
                <systemPropertyVariables>
                    <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
                    <maven.home>${maven.home}</maven.home>
                </systemPropertyVariables>
            </configuration>
        </plugin>
    </plugins>
</build>

<profiles>
    <profile> <5>
        <id>native</id>
        <properties> <6>
            <quarkus.package.type>native</quarkus.package.type>
        </properties>
        <build>
            <plugins>
                <plugin> <7>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-failsafe-plugin</artifactId>
                    <version>${surefire-plugin.version}</version>
                    <executions>
                        <execution>
                            <goals>
                                <goal>integration-test</goal>
                                <goal>verify</goal>
                            </goals>
                            <configuration>
                                <systemPropertyVariables>
                                    <native.image.path>${project.build.directory}/${project.build.finalName}-runner</native.image.path>
                                    <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
                                    <maven.home>${maven.home}</maven.home>
                                </systemPropertyVariables>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>
----

<1> Optionally use a BOM file to omit the version of the different Quarkus dependencies.
<2> Use the Quarkus Maven plugin that will hook into the build process.
<3> Enabling Maven plugin extensions will register a Quarkus `MavenLifecycleParticipant` which will make sure the Quarkus classloaders used during the build are properly closed. During the `generate-code` and `generate-code-tests` goals the Quarkus application bootstrap is initialized and re-used in the `build` goal (which actually builds and packages a production application). The Quarkus classloaders will be properly closed in the `build` goal of the `quarkus-maven-plugin`. However, if the build fails between the `generate-code` or `generate-code-tests` goal and the `build` goal, then the Quarkus augmentation classloader won't be properly closed, which may lead to locking of JAR files that happened to be on the classpath on Windows.
<4> Add system properties to `maven-surefire-plugin`. +
`maven.home` is only required if you have custom configuration in `${maven.home}/conf/settings.xml`.
<5> Use a specific `native` profile for native executable building.
<6> Enable the `native` package type. The build will therefore produce a native executable.
<7> If you want to test your native executable with Integration Tests, add the following plugin configuration. Tests named `*IT` and tests annotated with `@NativeImageTest` will be run against the native executable. See the xref:building-native-image.adoc[Native executable guide] for more info.

[[fast-jar]]
=== Using fast-jar

`fast-jar` is the default Quarkus package type.

The result of the build is a directory under `target` named `quarkus-app`.

You can run the application using: `java -jar target/quarkus-app/quarkus-run.jar`.

WARNING: In order to successfully run the produced jar, you need to have the entire contents of the `quarkus-app` directory. If any of the files are missing, the application will not start or
might not function correctly.
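
For reference, the `quarkus-app` directory produced by a `fast-jar` build has a layout along these lines (the exact contents may vary between Quarkus versions):

[source,text]
----
target/quarkus-app/
├── app/                          <- application classes and resources
├── lib/                          <- runtime dependency jars
├── quarkus/                      <- Quarkus-generated artifacts
├── quarkus-app-dependencies.txt
└── quarkus-run.jar               <- the jar to launch
----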

TIP: The `fast-jar` packaging results in an artifact that starts a little faster and consumes slightly less memory than a legacy Quarkus jar
because it has indexed information about which dependency jar contains classes and resources. It can thus avoid the lookup into potentially every jar
on the classpath that the legacy jar necessitates, when loading a class or resource.

[[uber-jar-maven]]
=== Uber-Jar Creation

The Quarkus Maven plugin supports the generation of uber-jars by specifying a `quarkus.package.type=uber-jar` configuration option in your `application.properties`
(or the `quarkus.package.type` property set to `uber-jar` in your `pom.xml`).

The original jar will still be present in the `target` directory, but it will be renamed to contain the `.original` suffix.

When building an uber-jar you can specify entries that you want to exclude from the generated jar by using the `quarkus.package.ignored-entries` configuration
option, which takes a comma-separated list of entries to ignore.

Uber-jar creation by default excludes link:https://docs.oracle.com/javase/tutorial/deployment/jar/intro.html[signature files] that might be present in the dependencies of the application.

The uber-jar's final name is configurable via Maven's build `finalName` setting.

[[multi-module-maven]]
=== Working with multi-module projects

By default, Quarkus will not discover CDI beans inside another module.

The best way to enable CDI bean discovery for a module in a multi-module project is to include the `jandex-maven-plugin`,
unless it is the main application module already configured with the quarkus-maven-plugin, in which case it will be indexed automatically.

[source,xml,subs="attributes+"]
----
<build>
    <plugins>
        <plugin>
            <groupId>org.jboss.jandex</groupId>
            <artifactId>jandex-maven-plugin</artifactId>
            <version>{jandex-maven-plugin-version}</version>
            <executions>
                <execution>
                    <id>make-index</id>
                    <goals>
                        <goal>jandex</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
----

More information on this topic can be found in the xref:cdi-reference.adoc#bean_discovery[Bean Discovery] section of the CDI guide.

[[maven-configuration-profile]]
=== Building with a specific configuration profile

Quarkus supports xref:config-reference.adoc#profiles[configuration profiles] in order to provide a specific configuration according to the target environment.

The profile can be provided directly on the Maven command line via the `quarkus.profile` system property:

// TODO switch to once escaping issue is fixed in Asciidoctor
:build-additional-parameters: -Dquarkus.profile=profile-name-here
include::includes/devtools/build.adoc[]
:!build-additional-parameters:

However, it is also possible to specify the profile directly in the POM file of the project using project properties, the Quarkus Maven plugin configuration properties, or system properties set in the Quarkus Maven plugin configuration.

In order of precedence (greater precedence first):

.1. System properties set in the Quarkus Maven plugin configuration

[source,xml]
----
<project>
    ...
    <build>
        <plugins>
            ...
            <plugin>
                <groupId>${quarkus.platform.group-id}</groupId>
                <artifactId>quarkus-maven-plugin</artifactId>
                <version>${quarkus.platform.version}</version>
                <extensions>true</extensions>
                <configuration>
                    <systemProperties>
                        <quarkus.profile>prod-aws</quarkus.profile> <1>
                    </systemProperties>
                </configuration>
            </plugin>
            ...
        </plugins>
    </build>
...
</project>
----
<1> The default configuration profile of this project is `prod-aws`.

.2. Quarkus Maven plugin configuration properties

[source,xml]
----
<project>
    ...
    <build>
        <plugins>
            ...
            <plugin>
                <groupId>${quarkus.platform.group-id}</groupId>
                <artifactId>quarkus-maven-plugin</artifactId>
                <version>${quarkus.platform.version}</version>
                <extensions>true</extensions>
                <configuration>
                    <properties>
                        <quarkus.profile>prod-aws</quarkus.profile> <1>
                    </properties>
                </configuration>
            </plugin>
            ...
        </plugins>
    </build>
...
</project>
----
<1> The default configuration profile of this project is `prod-aws`.

.3. Project properties

[source,xml]
----
<project>
    ...
    <properties>
        <quarkus.profile>prod-aws</quarkus.profile> <1>
        ...
    </properties>
...
</project>
----
<1> The default configuration profile of this project is `prod-aws`.

NOTE: Whichever approach is chosen, the profile can still be overridden with the `quarkus.profile` system property or the `QUARKUS_PROFILE` environment variable.
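
As an illustration, a default profile set in the POM (`prod-aws` in the examples above) can still be overridden at build time in either of these ways:

[source,bash]
----
./mvnw package -Dquarkus.profile=prod-aws
# or, via the environment:
QUARKUS_PROFILE=prod-aws ./mvnw package
----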
- -[[maven-multi-build]] -=== Building several artifacts from a single module - -In some particular use cases, it can be interesting to build several artifacts of your application from the same module. -A typical example is when you want to build your application with different configuration profiles. - -In that case, it is possible to add as many executions as needed to the Quarkus Maven plugin configuration. - -Below is an example of a Quarkus Maven plugin configuration that will produce two builds of the same application: one using the `prod-oracle` profile and the other one using the `prod-postgresql` profile. - -[source,xml] ----- - - ... - - - ... - - ${quarkus.platform.group-id} - quarkus-maven-plugin - ${quarkus.platform.version} - true - - - oracle - - build - - - - prod-oracle <1> - oracle-quarkus-app <2> - - - - - postgresql - - build - - - - prod-postgresql <3> - postgresql-quarkus-app <4> - - - - - - ... - - -... - ----- -<1> The default configuration profile of the first execution of the plugin is `prod-oracle`. -<2> The output directory of the first execution of the plugin is set to `oracle-quarkus-app` instead of `quarkus-app` to have a dedicated directory. -<3> The default configuration profile of the second execution of the plugin is `prod-postgresql`. -<4> The output directory of the second execution of the plugin is set to `postgresql-quarkus-app` instead of `quarkus-app` to have a dedicated directory. - -NOTE: With the configuration above, both profile builds will be using the same dependencies, so if we added dependencies on the Oracle and PostgreSQL drivers to the application, both of the drivers will appear in both builds. - - -To isolate profile-specific dependencies from other profiles, the JDBC drivers could be added as optional dependencies to the application but configured to be included in each profile that requires them, e.g.: - -[source,xml] ----- - - ... - - ... 
- - org.postgresql - postgresql - ${postgresql.driver.version} - true <1> - - - - - ... - - ${quarkus.platform.group-id} - quarkus-maven-plugin - ${quarkus.platform.version} - true - - ... - - postgresql - - build - - - - prod-postgresql - postgresql-quarkus-app - true <2> - org.postgresql:postgresql::jar <3> - - - - - - ... - - -... - ----- -<1> The JDBC driver of PostgreSQL is defined as an optional dependency -<2> For backward compatibility reasons, it is necessary to explicitly indicate that the optional dependencies need to be filtered. -<3> Only the optional dependency corresponding to the JDBC driver of PostgreSQL is expected in the final artifact. - -[[configuration-reference]] -== Configuring the Project Output - -There are a several configuration options that will define what the output of your project build will be. -These are provided in `application.properties` the same as any other config property. - -The properties are shown below: - -include::{generated-dir}/config/quarkus-package-pkg-package-config.adoc[opts=optional] - -[[custom-test-configuration-profile]] -=== Custom test configuration profile in JVM mode - -By default, Quarkus tests in JVM mode are run using the `test` configuration profile. If you are not familiar with Quarkus -configuration profiles, everything you need to know is explained in the -xref:config.adoc#configuration-profiles[Configuration Profiles Documentation]. - -It is however possible to use a custom configuration profile for your tests with the Maven Surefire and Maven Failsafe -configurations shown below. This can be useful if you need for example to run some tests using a specific database which is not -your default testing database. - -[source,xml,subs=attributes+] ----- - - [...] - - - - org.apache.maven.plugins - maven-surefire-plugin - ${surefire-plugin.version} - - - foo <1> - ${project.build.directory} - [...] 
- - - - - org.apache.maven.plugins - maven-failsafe-plugin - ${failsafe-plugin.version} - - - foo <1> - ${project.build.directory} - [...] - - - - - - [...] - ----- - -<1> The `foo` configuration profile will be used to run the tests. - -[WARNING] -==== -It is not possible to use a custom test configuration profile in native mode for now. Native tests are always run using the -`prod` profile. -==== - -[[bootstrap-maven-properties]] -=== Bootstrap Maven properties - -Quarkus bootstrap includes a Maven resolver implementation that is used to resolve application runtime and build time dependencies. The Quarkus Maven resolver is initialized from the same Maven command line that launched the build, test or dev mode. Typically, there is no need to add any extra configuration for it. However, there could be cases where an extra configuration option may be necessary to properly resolve application dependencies in test or dev modes, or IDEs. - -Maven test plugins (such as `surefire` and `failsafe`), for example, are not propagating build system properties to the running tests by default. Which means some of the system properties set by the Maven CLI aren't available for the Quarkus Maven resolver initialized for the tests, which may result in test dependencies being resolved using different settings than the main Maven build. - -Here is a list of system properties the Quarkus bootstrap Maven resolver checks during its initialization. 

[cols=3*,options="header"]
|===
| Property name
| Default Value
| Description

| `maven.home`
| `MAVEN_HOME` environment variable
| The Maven home dir is used to resolve the global settings file unless it was explicitly provided on the command line with the `-gs` argument

| `maven.settings`
| `~/.m2/settings.xml`
| Unless a custom settings file has been provided with the `-s` argument, this property can be used to point the resolver to a custom Maven settings file

| `maven.repo.local`
| `~/.m2/repository`
| This property can be used to configure a custom local Maven repository directory, if it is different from the default one and the one specified in `settings.xml`

| `maven.top-level-basedir`
| none
| This property may be useful to help the Maven resolver identify the top-level Maven project in the workspace. By default, the Maven resolver discovers a project's workspace by navigating the parent-module POM relationship. However, some project layouts use an aggregator module that does not appear as the parent for its modules. In this case, this property will help the Quarkus Maven resolver properly discover the workspace.

| `quarkus.bootstrap.effective-model-builder`
| `false`
| By default, the Quarkus Maven resolver reads the project's POMs directly when discovering the project's layout. While in most cases this works well enough and is relatively fast, reading raw POMs has its limitations; e.g. if a POM includes modules in a profile, these modules will not be discovered. This system property enables project layout discovery based on the effective POM models, which are properly interpolated, instead of the raw ones. The reason this option is not enabled by default is that it may be significantly more time consuming, which could increase, e.g., CI testing times. Until a better approach is found that could be used by default, projects that require it should enable this option.

|===

The system properties above can be added to, e.g., a `surefire` and/or `failsafe` plugin configuration as follows:

[source,xml,subs=attributes+]
----
<project>
    [...]
    <build>
        <plugins>
            [...]
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>${surefire-plugin.version}</version>
                <configuration>
                    <systemPropertyVariables>
                        <maven.home>${maven.home}</maven.home> <1>
                        <maven.repo.local>${settings.localRepository}</maven.repo.local> <2>
                        <maven.settings>${session.request.userSettingsFile.path}</maven.settings> <3>
                        <maven.top-level-basedir>${session.topLevelProject.basedir.absolutePath}</maven.top-level-basedir> <4>
                        <quarkus.bootstrap.effective-model-builder>true</quarkus.bootstrap.effective-model-builder> <5>
                    </systemPropertyVariables>
                </configuration>
            </plugin>
            [...]
        </plugins>
    </build>
    [...]
</project>
----

<1> Propagate the `maven.home` system property set by the Maven CLI to the tests
<2> Set the Maven local repository directory for the tests
<3> Set the Maven settings file for the tests
<4> Point to the top-level project directory for the tests
<5> Enable effective POM-based project layout discovery

==== Top-level vs Multi-module project directory

In Maven there appears to be a notion of the top-level project (that is exposed as a project property `${session.topLevelProject.basedir.absolutePath}`)
and the multi-module project directory (that is available as the property `${maven.multiModuleProjectDirectory}`). These directories might not always match!

IMPORTANT: `maven.multiModuleProjectDirectory` is meant to be consulted by the Maven code itself and not something to be relied upon by user code. So, if you find it useful, use it at your own risk!

The `${maven.multiModuleProjectDirectory}` will be resolved to the first directory that contains a `.mvn` directory as its child, going up the workspace file system tree
starting from the current directory (or the one specified with the `-f` argument) from which the `mvn` command was launched. If no `.mvn` directory was found, however,
the `${maven.multiModuleProjectDirectory}` will be pointing to the directory from which the `mvn` command was launched (or the one targeted with the `-f` argument).

The `${session.topLevelProject.basedir.absolutePath}` will be pointing either to the directory from which the `mvn` command was launched or to the directory targeted with
the `-f` argument, if it was specified.

[[project-info]]
== Quarkus project info

NOTE: This goal was introduced in Quarkus Maven plugin 2.7.0.Final and can be used in projects that are based on Quarkus version 2.0.0.Final or later.

The Quarkus Maven plugin includes a goal called `info` (currently marked as 'experimental') that logs Quarkus-specific information about the project, such as the imported Quarkus platform BOMs and the Quarkus extensions found among the project dependencies.
In a multi-module project, `quarkus:info` will assume that the current module, in which it is executed, is the main module of the application.

NOTE: The report generated by `quarkus:info` does not currently include the Quarkus Maven plugin information; however, it is planned to be added in future releases.

Here is an example `info` output for a simple project:
[source,text,subs=attributes+]
----
[aloubyansky@localhost code-with-quarkus]$ mvn quarkus:info
[INFO] Scanning for projects...
[INFO]
[INFO] ---------------------< org.acme:code-with-quarkus >---------------------
[INFO] Building code-with-quarkus 1.0.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- quarkus-maven-plugin:{quarkus-version}:info (default-cli) @ code-with-quarkus ---
[WARNING] quarkus:info goal is experimental, its options and output may change in future versions
[INFO] Quarkus platform BOMs: <1>
[INFO]   io.quarkus.platform:quarkus-bom:pom:{quarkus-version}
[INFO]   io.quarkus.platform:quarkus-kogito-bom:pom:{quarkus-version}
[INFO]   io.quarkus.platform:quarkus-camel-bom:pom:{quarkus-version}
[INFO]
[INFO] Extensions from io.quarkus.platform:quarkus-bom: <2>
[INFO]   io.quarkus:quarkus-resteasy-reactive
[INFO]
[INFO] Extensions from io.quarkus.platform:quarkus-kogito-bom: <3>
[INFO]   org.kie.kogito:kogito-quarkus-decisions
[INFO]
[INFO] Extensions from io.quarkus.platform:quarkus-camel-bom: <4>
[INFO]   org.apache.camel.quarkus:camel-quarkus-rabbitmq
[INFO]
[INFO] Extensions from registry.quarkus.io: <5>
[INFO]   io.quarkiverse.prettytime:quarkus-prettytime:0.2.1
----

<1> Quarkus platform BOMs imported in the project (BOMs imported by parent POMs will also be reported)
<2> Direct Quarkus extension dependencies managed by the `quarkus-bom`
<3> Direct Quarkus extension dependencies managed by the `quarkus-kogito-bom`
<4> Direct Quarkus extension dependencies managed by the `quarkus-camel-bom`
<5> Direct Quarkus extension dependencies that aren't managed by Quarkus BOMs but are found in the Quarkus extension registry

NOTE: `quarkus:info` will also report Quarkus extensions that aren't found in the Quarkus extension registries if those are present among the project dependencies, indicating they have an 'unknown origin'.

[[project-info-misaligned]]
=== Highlighting misaligned versions

`quarkus:info` will also highlight basic Quarkus dependency version misalignments, in case they are detected.
For example, if we modify the project mentioned above by removing the `kogito-quarkus-decisions` extension from the dependencies and adding a `2.6.3.Final` `<version>` element to the `quarkus-resteasy-reactive` dependency that is managed by the `quarkus-bom` and then run `quarkus:info` again, we'll see something like:
-
-[source,text,subs=attributes+]
-----
-[INFO] --- quarkus-maven-plugin:{quarkus-version}:info (default-cli) @ code-with-quarkus ---
-[WARNING] quarkus:info goal is experimental, its options and output may change in future versions
-[INFO] Quarkus platform BOMs:
-[INFO] io.quarkus.platform:quarkus-bom:pom:{quarkus-version}
-[INFO] io.quarkus.platform:quarkus-kogito-bom:pom:{quarkus-version} | unnecessary <1>
-[INFO] io.quarkus.platform:quarkus-camel-bom:pom:{quarkus-version}
-[INFO]
-[INFO] Extensions from io.quarkus.platform:quarkus-bom:
-[INFO] io.quarkus:quarkus-resteasy-reactive:2.6.3.Final | misaligned <2>
-[INFO]
-[INFO] Extensions from io.quarkus.platform:quarkus-camel-bom:
-[INFO] org.apache.camel.quarkus:camel-quarkus-rabbitmq
-[INFO]
-[INFO] Extensions from registry.quarkus.io:
-[INFO] io.quarkiverse.prettytime:quarkus-prettytime:0.2.1
-[INFO]
-[WARNING] Non-recommended Quarkus platform BOM and/or extension versions were found. For more details, please, execute 'mvn quarkus:update -Drectify'
-----
-
-<1> The `quarkus-kogito-bom` import is now reported as 'unnecessary' since none of the Quarkus extensions it includes are found among the project dependencies
-<2> The version `2.6.3.Final` of the `quarkus-resteasy-reactive` extension is now reported as misaligned with the version managed by the Quarkus platform BOM imported in the project, which is {quarkus-version}
-
-[[project-update]]
-== Quarkus project update
-
-NOTE: This goal was introduced in Quarkus Maven plugin 2.7.0.Final and can be used in projects that are based on Quarkus version 2.0.0.Final or later.
-
-The `quarkus:update` goal (currently marked as 'experimental') provided by the Quarkus Maven plugin can be used to check whether there are Quarkus-related updates available for a project, such as new releases of the relevant Quarkus platform BOMs and of the non-platform Quarkus extensions present in the project. In a multi-module project, the `update` goal is meant to be executed from the main Quarkus application module.
-
-IMPORTANT: At this point, the `quarkus:update` goal does not actually apply the recommended updates but simply reports what they are and how to apply them manually.
-
-NOTE: The Quarkus Maven plugin version isn't currently included in the update report; it is planned to be added in a future release.
-
-Here is how `quarkus:update` works. First, all the direct Quarkus extension dependencies of the project are collected (both those managed by the Quarkus platform BOMs and those that aren't but are found in the Quarkus extension registries). Then the configured Quarkus extension registries (typically `registry.quarkus.io`) are queried for the latest recommended/supported Quarkus platform versions and the non-platform Quarkus extensions compatible with them. The algorithm then selects the latest compatible combination of all the extensions found in the project, assuming such a combination exists. Otherwise, no updates are suggested.
-
-Assuming we have a project including Kogito, Camel, and core Quarkus extensions available in the Quarkus platform based on Quarkus `2.7.1.Final`, the output of `quarkus:update` would look like:
-[source,text,subs=attributes+]
-----
-[aloubyansky@localhost code-with-quarkus]$ mvn quarkus:update
-[INFO] Scanning for projects...
-[INFO] -[INFO] ---------------------< org.acme:code-with-quarkus >--------------------- -[INFO] Building code-with-quarkus 1.0.0-SNAPSHOT -[INFO] --------------------------------[ jar ]--------------------------------- -[INFO] -[INFO] --- quarkus-maven-plugin:{quarkus-version}:update (default-cli) @ code-with-quarkus --- -[WARNING] quarkus:update goal is experimental, its options and output might change in future versions -[INFO] -[INFO] Recommended Quarkus platform BOM updates: <1> -[INFO] Update: io.quarkus.platform:quarkus-bom:pom:2.7.1.Final -> {quarkus-version} -[INFO] Update: io.quarkus.platform:quarkus-kogito-bom:pom:2.7.1.Final -> {quarkus-version} -[INFO] Update: io.quarkus.platform:quarkus-camel-bom:pom:2.7.1.Final -> {quarkus-version} ----- - -<1> A list of currently recommended Quarkus platform BOM updates - -NOTE: Typically, a single project property will be used to manage all the Quarkus platform BOMs but the implementation isn't currently smart enough to point that out and will report updates for each BOM individually. 
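The selection step described above, picking the newest platform release that every extension in the project supports, can be sketched as follows. This is an illustration of the idea only, not the plugin's actual resolution code, and all names are made up:

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Toy sketch of the quarkus:update selection idea (NOT the real implementation):
// each extension supports some set of platform releases, and the recommendation
// is the newest release supported by all of them, if such a release exists.
public class UpdateSelectionSketch {
    static Optional<String> latestCompatible(List<String> releasesNewestFirst,
                                             List<Set<String>> perExtensionSupport) {
        for (String release : releasesNewestFirst) {
            // keep the first (newest) release that every extension supports
            if (perExtensionSupport.stream().allMatch(s -> s.contains(release))) {
                return Optional.of(release);
            }
        }
        return Optional.empty(); // no compatible combination: suggest no updates
    }

    public static void main(String[] args) {
        List<String> releases = List.of("2.7.2.Final", "2.7.1.Final", "2.7.0.Final");
        List<Set<String>> support = List.of(
                Set.of("2.7.2.Final", "2.7.1.Final"),   // a core extension
                Set.of("2.7.1.Final", "2.7.0.Final"));  // a slower-moving extension
        System.out.println(latestCompatible(releases, support)); // Optional[2.7.1.Final]
    }
}
```

If any extension supports none of the candidate releases, the sketch returns an empty result, mirroring the "no updates will be suggested" behavior described above.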
- -If we modify the project to remove all the Kogito extensions from the project, change the version of the `quarkus-resteasy-reactive` extension to `2.6.3.Final` and downgrade `quarkus-prettytime` which is not included in the Quarkus platform BOMs to `0.2.0`, `quarkus:update` will report something like: - -[source,text,subs=attributes+] ----- -[INFO] Recommended Quarkus platform BOM updates: <1> -[INFO] Update: io.quarkus.platform:quarkus-bom:pom:2.7.1.Final -> {quarkus-version} -[INFO] Update: io.quarkus.platform:quarkus-camel-bom:pom:2.7.1.Final -> {quarkus-version} -[INFO] Remove: io.quarkus.platform:quarkus-kogito-bom:pom:2.7.1.Final <2> -[INFO] -[INFO] Extensions from io.quarkus.platform:quarkus-bom: -[INFO] Update: io.quarkus:quarkus-resteasy-reactive:2.6.3.Final -> remove version (managed) <3> -[INFO] -[INFO] Extensions from registry.quarkus.io: -[INFO] Update: io.quarkiverse.prettytime:quarkus-prettytime:0.2.0 -> 0.2.1 <4> ----- - -<1> A list of the currently recommended Quarkus platform BOM updates for the project -<2> Given that the project does not include any Kogito extensions, the BOM import is recommended to be removed -<3> An outdated version of the `quarkus-resteasy-reactive` is recommended to be removed in favor of the one managed by the `quarkus-bom` -<4> The latest compatible version of the `quarkus-prettytime` extension - -=== Quarkus project rectify - -As was mentioned above, `quarkus:info`, besides reporting Quarkus platform and extension versions, performs a quick version alignment check, to make sure the extension versions used in the project are compatible with the imported Quarkus platform BOMs. If misalignments are detected, the following warning message will be logged: - -[source,text,subs=attributes+] ----- -[WARNING] Non-recommended Quarkus platform BOM and/or extension versions were found. 
For more details, please, execute 'mvn quarkus:update -Drectify' ----- - -When the `rectify` option is enabled, `quarkus:update`, instead of suggesting the latest recommended Quarkus version updates, will log update instructions to simply align the extension dependency versions found in the project with the currently imported Quarkus platform BOMs. diff --git a/_versions/2.7/guides/micrometer.adoc b/_versions/2.7/guides/micrometer.adoc deleted file mode 100644 index 78037278b60..00000000000 --- a/_versions/2.7/guides/micrometer.adoc +++ /dev/null @@ -1,598 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Micrometer Metrics - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can utilize the Micrometer metrics library for runtime and -application metrics. - -Apart from application-specific metrics, which are described in this guide, you may also utilize built-in metrics -exposed by various Quarkus extensions. These are described in the guide for each particular extension that supports -built-in metrics. - -IMPORTANT: Micrometer is the recommended approach to metrics for Quarkus. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -Micrometer defines a core library providing a registration mechanism for Metrics, and core metric types (Counters, -Gauges, Timers, Distribution Summaries, etc.). These core types provide an abstraction layer that can be adapted to -different backend monitoring systems. In essence, your application (or a library) can `register` a `Counter`, -`Gauge`, `Timer`, or `DistributionSummary` with a `MeterRegistry`. Micrometer will then delegate that registration to -one or more implementations, where each implementation handles the unique considerations for the associated -monitoring stack. 
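The delegation described above can be pictured with a toy model: the application registers a meter once with a facade, and the facade forwards the registration to each configured backend. This is a sketch of the concept only, not Micrometer's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy model of the abstraction layer (NOT Micrometer's real API): a single
// registration call is delegated to every configured backend, and each backend
// then handles its own monitoring-system conventions.
public class CompositeRegistrySketch {
    private final List<Consumer<String>> backends = new ArrayList<>();

    void addBackend(Consumer<String> backend) {
        backends.add(backend);
    }

    /** Register a counter once; every backend sees the registration. */
    void counter(String name) {
        backends.forEach(b -> b.accept(name));
    }

    public static void main(String[] args) {
        CompositeRegistrySketch registry = new CompositeRegistrySketch();
        List<String> prometheusSide = new ArrayList<>();
        List<String> statsdSide = new ArrayList<>();
        registry.addBackend(prometheusSide::add);
        registry.addBackend(statsdSide::add);
        registry.counter("http.requests");
        System.out.println(prometheusSide); // [http.requests]
        System.out.println(statsdSide);     // [http.requests]
    }
}
```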
-
-Micrometer uses naming conventions to translate between registered Meters and the conventions used by various backend
-registries. Meter names, for example, should be created and named using dots to separate segments, `a.name.like.this`.
-Micrometer then translates that name into the format that the selected registry prefers. Prometheus
-uses underscores, which means the previous name will appear as `a_name_like_this` in Prometheus-formatted metrics
-output.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-You can skip right to the solution if you prefer. Either:
-
-* Clone the git repository: `git clone {quickstarts-clone-url}`, or
-* Download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `micrometer-quickstart` {quickstarts-tree-url}/micrometer-quickstart[directory].
-
-== Creating the Maven Project
-
-Quarkus Micrometer extensions are structured similarly to Micrometer itself: `quarkus-micrometer` provides core
-Micrometer support and runtime integration, and other Quarkus and Quarkiverse extensions bring in additional
-dependencies and requirements to support specific monitoring systems.
-
-For this example, we'll use the Prometheus registry.
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: micrometer-quickstart
-:create-app-extensions: resteasy,micrometer-registry-prometheus
-include::includes/devtools/create-app.adoc[]
-
-This command generates a Maven project that imports the `micrometer-registry-prometheus` extension as a dependency.
-This extension will load the core `micrometer` extension as well as additional library dependencies required to support
-Prometheus.
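The dot-to-underscore translation mentioned in the naming-conventions section earlier can be sketched as below. Micrometer's real `NamingConvention` handles more cases, such as escaping characters Prometheus does not allow; this shows only the segment-separator rule:

```java
// Sketch of the naming translation described earlier: meter names are registered
// with dot-separated segments, and the Prometheus output format replaces the
// dots with underscores. (Illustration only, not Micrometer's implementation.)
public class NamingSketch {
    static String toPrometheus(String meterName) {
        return meterName.replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(toPrometheus("a.name.like.this")); // a_name_like_this
    }
}
```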
-
-If you already have your Quarkus project configured, you can add the `micrometer-registry-prometheus` extension
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: micrometer-registry-prometheus
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-micrometer-registry-prometheus</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-micrometer-registry-prometheus")
-----
-
-== Writing the application
-
-Micrometer provides an API that allows you to construct your own custom metrics. The most common types of
-meters supported by monitoring systems are gauges, counters, and summaries. The following sections build
-an example endpoint and observe its behavior using these basic meter types.
-
-To register meters, you need a reference to a `MeterRegistry`, which is configured and maintained by the Micrometer
-extension. The `MeterRegistry` can be injected into your application as follows:
-
-[source,java]
-----
-package org.acme.micrometer;
-
-import io.micrometer.core.instrument.MeterRegistry;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.Produces;
-
-@Path("/example")
-@Produces("text/plain")
-public class ExampleResource {
-
-    private final MeterRegistry registry;
-
-    ExampleResource(MeterRegistry registry) {
-        this.registry = registry;
-    }
-}
-----
-
-Micrometer maintains an internal mapping between unique identifier-and-tag combinations and specific meter
-instances. Using `register`, `counter`, or other such methods creates a new meter instance only when that
-combination of identifier and tag/label values hasn't been seen before.
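The identity rule in the previous paragraph can be sketched as a get-or-create lookup keyed by the metric name plus its tags. This is a simplified model, not Micrometer's internal data structure:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the identity rule described above (NOT Micrometer's internals):
// a meter is keyed by its name plus its full set of tags, and repeated lookups
// with the same combination return the same instance.
public class MeterIdentitySketch {
    record MeterId(String name, Map<String, String> tags) {}

    private final Map<MeterId, AtomicLong> counters = new HashMap<>();

    /** Get-or-create: a new counter exists only for an unseen name+tags combination. */
    AtomicLong counter(String name, Map<String, String> tags) {
        return counters.computeIfAbsent(new MeterId(name, tags), id -> new AtomicLong());
    }

    public static void main(String[] args) {
        MeterIdentitySketch registry = new MeterIdentitySketch();
        AtomicLong a = registry.counter("example.prime.number", Map.of("type", "even"));
        AtomicLong b = registry.counter("example.prime.number", Map.of("type", "even"));
        AtomicLong c = registry.counter("example.prime.number", Map.of("type", "odd"));
        System.out.println(a == b); // true: same identifier and tags, same meter
        System.out.println(a == c); // false: a different tag value is a new meter
    }
}
```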
-
-=== Gauges
-
-Gauges measure a value that can increase or decrease over time, like the speedometer on a car. Gauges can be
-useful when monitoring the statistics for a cache or collection. Consider the following simple example that
-observes the size of a list:
-
-[source,java]
-----
-    LinkedList<Long> list = new LinkedList<>();
-
-    // Update the constructor to create the gauge
-    ExampleResource(MeterRegistry registry) {
-        this.registry = registry;
-        registry.gaugeCollectionSize("example.list.size", Tags.empty(), list);
-    }
-
-    @GET
-    @Path("gauge/{number}")
-    public Long checkListSize(@PathParam("number") long number) {
-        if (number % 2 == 0) {
-            // add even numbers to the list
-            list.add(number);
-        } else {
-            // remove items from the list for odd numbers
-            try {
-                number = list.removeFirst();
-            } catch (NoSuchElementException nse) {
-                number = 0;
-            }
-        }
-        return number;
-    }
-----
-
-Note that even numbers are added to the list, and odd numbers remove an element from the list.
-
-Start your application in dev mode:
-
-include::includes/devtools/dev.adoc[]
-
-Then try the following sequence and look for `example_list_size` in the plain text output:
-
-[source,shell]
-----
-curl http://localhost:8080/example/gauge/1
-curl http://localhost:8080/example/gauge/2
-curl http://localhost:8080/example/gauge/4
-curl http://localhost:8080/q/metrics
-curl http://localhost:8080/example/gauge/6
-curl http://localhost:8080/example/gauge/5
-curl http://localhost:8080/example/gauge/7
-curl http://localhost:8080/q/metrics
-----
-
-It is important to note that gauges are sampled rather than set; there is no record of how the value associated with a
-gauge might have changed between measurements. In this example, the size of the list is observed when the Prometheus
-endpoint is visited.
-
-Micrometer provides a few additional mechanisms for creating gauges. Note that Micrometer does not create strong
-references to the objects it observes by default.
Depending on the registry, Micrometer either omits gauges that observe
-objects that have been garbage-collected entirely or uses `NaN` (not a number) as the observed value.
-
-When should you use a gauge? Only if you can't use something else. Never gauge something you can count. Gauges can be
-less straightforward to use than counters. If what you are measuring can be counted (because the value always
-increments), use a counter instead.
-
-=== Counters
-
-Counters are used to measure values that only increase. In the example below, you will count the number of times you
-test a number to see if it is prime:
-
-[source,java]
-----
-    @GET
-    @Path("prime/{number}")
-    public String checkIfPrime(@PathParam("number") long number) {
-        if (number < 1) {
-            return "Only natural numbers can be prime numbers.";
-        }
-        if (number == 2) {
-            return number + " is prime.";
-        }
-        if (number == 1 || number % 2 == 0) {
-            return number + " is not prime.";
-        }
-
-        if (testPrimeNumber(number)) {
-            return number + " is prime.";
-        } else {
-            return number + " is not prime.";
-        }
-    }
-
-    protected boolean testPrimeNumber(long number) {
-        // Count the number of times we test for a prime number
-        registry.counter("example.prime.number").increment();
-        for (int i = 3; i < Math.floor(Math.sqrt(number)) + 1; i = i + 2) {
-            if (number % i == 0) {
-                return false;
-            }
-        }
-        return true;
-    }
-----
-
-It might be tempting to add a label or tag to the counter indicating what value was checked, but remember that each
-unique combination of metric name (`example.prime.number`) and label value produces a unique time series. Using an
-unbounded set of data as label values can lead to a "cardinality explosion", an exponential increase in the creation
-of new time series.
-
-[NOTE]
-====
-Label and tag can be used interchangeably. You may also see "attribute" used in this context in some documentation.
-The gist is that each label, tag, or attribute defines an additional bit of information associated with the
-single numerical measurement that helps you classify, group, or aggregate the measured value later. The Micrometer API
-uses `Tag` as the mechanism for specifying this additional data.
-====
-
-It is possible, however, to add a tag that conveys a little more information. Let's adjust our code and move
-the counter to add some tags conveying additional information.
-
-[source,java]
-----
-    @GET
-    @Path("prime/{number}")
-    public String checkIfPrime(@PathParam("number") long number) {
-        if (number < 1) {
-            registry.counter("example.prime.number", "type", "not-natural").increment();
-            return "Only natural numbers can be prime numbers.";
-        }
-        if (number == 1) {
-            registry.counter("example.prime.number", "type", "one").increment();
-            return number + " is not prime.";
-        }
-        if (number == 2) {
-            registry.counter("example.prime.number", "type", "prime").increment();
-            return number + " is prime.";
-        }
-        if (number % 2 == 0) {
-            registry.counter("example.prime.number", "type", "even").increment();
-            return number + " is not prime.";
-        }
-
-        if (testPrimeNumber(number)) {
-            registry.counter("example.prime.number", "type", "prime").increment();
-            return number + " is prime.";
-        } else {
-            registry.counter("example.prime.number", "type", "not-prime").increment();
-            return number + " is not prime.";
-        }
-    }
-
-    protected boolean testPrimeNumber(long number) {
-        for (int i = 3; i < Math.floor(Math.sqrt(number)) + 1; i = i + 2) {
-            if (number % i == 0) {
-                return false;
-            }
-        }
-        return true;
-    }
-----
-
-Looking at the data produced by this counter, you can tell how often a negative number was checked, or the number one,
-or an even number, and so on. Try the following sequence and look for `example_prime_number_total` in the plain text
-output. Note that the `_total` suffix is added when Micrometer applies Prometheus naming conventions to
-`example.prime.number`, the originally specified counter name.
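The cardinality concern mentioned above can be made concrete with a small sketch: the number of time series equals the number of distinct name-plus-tag-value combinations, so a small fixed set of tag values stays bounded, while tagging with the checked number itself grows with every new input. The metric and tag names below are illustrative:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the cardinality concern discussed above: each distinct tag value on
// a metric name creates a separate time series in the backend.
public class CardinalitySketch {
    static int seriesCount(Iterable<String> tagValues) {
        Set<String> series = new HashSet<>();
        for (String value : tagValues) {
            series.add("example.prime.number|type=" + value);
        }
        return series.size();
    }

    public static void main(String[] args) {
        // A bounded set of tag values: at most a handful of series, however many requests arrive.
        System.out.println(seriesCount(java.util.List.of(
                "even", "odd", "prime", "not-prime", "even", "one"))); // 5
        // Tagging with the checked number itself: one new series per distinct input.
        System.out.println(seriesCount(java.util.List.of("2", "3", "15", "7919"))); // 4
    }
}
```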
-
-If you did not leave Quarkus running in dev mode, start it again:
-
-include::includes/devtools/dev.adoc[]
-
-Then execute the following sequence:
-
-[source,shell]
-----
-curl http://localhost:8080/example/prime/-1
-curl http://localhost:8080/example/prime/0
-curl http://localhost:8080/example/prime/1
-curl http://localhost:8080/example/prime/2
-curl http://localhost:8080/example/prime/3
-curl http://localhost:8080/example/prime/15
-curl http://localhost:8080/q/metrics
-----
-
-When should you use a counter? Only when you are measuring something that cannot be timed or summarized.
-Counters only record a count, which may be all that is needed. However, if you want to understand more about how a
-value is changing, a timer (when the base unit of measurement is time) or a distribution summary might be
-more appropriate.
-
-=== Summaries and Timers
-
-Timers and distribution summaries in Micrometer are very similar. Both allow you to record an observed value, which
-will be aggregated with other recorded values and stored as a sum. Micrometer also increments a counter to indicate the
-number of measurements that have been recorded and tracks the maximum observed value (within a decaying interval).
-
-Distribution summaries are populated by calling the `record` method to record observed values, while timers provide
-additional capabilities specific to working with time and measuring durations. For example, we can use a timer to
-measure how long it takes to calculate prime numbers using one of the `record` methods that wraps the invocation of a
-`Supplier` function:
-
-[source,java]
-----
-    protected boolean testPrimeNumber(long number) {
-        Timer timer = registry.timer("example.prime.number.test");
-        return timer.record(() -> {
-            for (int i = 3; i < Math.floor(Math.sqrt(number)) + 1; i = i + 2) {
-                if (number % i == 0) {
-                    return false;
-                }
-            }
-            return true;
-        });
-    }
-----
-
-Micrometer will apply Prometheus conventions when emitting metrics for this timer.
Prometheus measures time in seconds. -Micrometer converts measured durations into seconds and includes the unit in the metric name, per convention. After -visiting the prime endpoint a few more times, look in the plain text output for the following three entries: -`example_prime_number_test_seconds_count`, `example_prime_number_test_seconds_sum`, and -`example_prime_number_test_seconds_max`. - -If you did not leave Quarkus running in dev mode, start it again: - -include::includes/devtools/dev.adoc[] - -Then execute the following sequence: - -[source,shell] ----- -curl http://localhost:8080/example/prime/256 -curl http://localhost:8080/q/metrics -curl http://localhost:8080/example/prime/7919 -curl http://localhost:8080/q/metrics ----- - -Both timers and distribution summaries can be configured to emit additional statistics, like histogram data, -precomputed percentiles, or service level objective (SLO) boundaries. Note that the count, sum, and histogram data -can be re-aggregated across dimensions (or across a series of instances), while precomputed percentile values cannot. - -=== Review automatically generated metrics - -To view metrics, execute `curl localhost:8080/q/metrics/` - -The Micrometer extension automatically times HTTP server requests. Following Prometheus naming conventions for -timers, look for `http_server_requests_seconds_count`, `http_server_requests_seconds_sum`, and -`http_server_requests_seconds_max`. Dimensional labels have been added for the requested uri, the HTTP method -(GET, POST, etc.), the status code (200, 302, 404, etc.), and a more general outcome field. 
-
-[source,text]
-----
-# HELP http_server_requests_seconds
-# TYPE http_server_requests_seconds summary
-http_server_requests_seconds_count{method="GET",outcome="SUCCESS",status="200",uri="/example/prime/{number}",} 1.0
-http_server_requests_seconds_sum{method="GET",outcome="SUCCESS",status="200",uri="/example/prime/{number}",} 0.017385896
-# HELP http_server_requests_seconds_max
-# TYPE http_server_requests_seconds_max gauge
-http_server_requests_seconds_max{method="GET",outcome="SUCCESS",status="200",uri="/example/prime/{number}",} 0.017385896
-#
-----
-
-Note that metrics appear lazily: you often won't see any data for an endpoint until something accesses it.
-
-.Ignoring endpoints
-
-You can disable measurement of HTTP endpoints using the `quarkus.micrometer.binder.http-server.ignore-patterns`
-property. This property accepts a comma-separated list of simple regex match patterns identifying URI paths that should
-be ignored. For example, setting `quarkus.micrometer.binder.http-server.ignore-patterns=/example/prime/[0-9]+` will
-ignore a request to `http://localhost:8080/example/prime/7919`. A request to `http://localhost:8080/example/gauge/7919`
-would still be measured.
-
-.URI templates
-
-The Micrometer extension will make a best effort at representing URIs containing path parameters in templated form.
-Using examples from above, a request to `http://localhost:8080/example/prime/7919` should appear as an attribute of
-`http_server_requests_seconds_*` metrics with a value of `uri=/example/prime/{number}`.
-
-Use the `quarkus.micrometer.binder.http-server.match-patterns` property if the correct URL cannot be determined. This
-property accepts a comma-separated list defining an association between a simple regex match pattern and a replacement
-string.
For example, setting -`quarkus.micrometer.binder.http-server.match-patterns=/example/prime/[0-9]+=/example/{jellybeans}` would use the value -`/example/{jellybeans}` for the uri attribute any time the requested uri matches `/example/prime/[0-9]+`. - -== Using MeterFilter to configure metrics - -Micrometer uses `MeterFilter` instances to customize the metrics emitted by `MeterRegistry` instances. -The Micrometer extension will detect `MeterFilter` CDI beans and use them when initializing `MeterRegistry` -instances. - -[source,java] ----- -@Singleton -public class CustomConfiguration { - - @ConfigProperty(name = "deployment.env") - String deploymentEnv; - - /** Define common tags that apply only to a Prometheus Registry */ - @Produces - @Singleton - @MeterFilterConstraint(applyTo = PrometheusMeterRegistry.class) - public MeterFilter configurePrometheusRegistries() { - return MeterFilter.commonTags(Arrays.asList( - Tag.of("registry", "prometheus"))); - } - - /** Define common tags that apply globally */ - @Produces - @Singleton - public MeterFilter configureAllRegistries() { - return MeterFilter.commonTags(Arrays.asList( - Tag.of("env", deploymentEnv))); - } - - /** Enable histogram buckets for a specific timer */ - @Produces - @Singleton - public MeterFilter enableHistogram() { - return new MeterFilter() { - @Override - public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) { - if(id.getName().startsWith("myservice")) { - return DistributionStatisticConfig.builder() - .percentiles(0.5, 0.95) // median and 95th percentile, not aggregable - .percentilesHistogram(true) // histogram buckets (e.g. prometheus histogram_quantile) - .build() - .merge(config); - } - return config; - } - }; - } -} ----- - -In this example, a singleton CDI bean will produce two different `MeterFilter` beans. 
One will be applied only to -Prometheus `MeterRegistry` instances (using the `@MeterFilterConstraint` qualifier), and another will be applied -to all `MeterRegistry` instances. An application configuration property is also injected and used as a tag value. -Additional examples of MeterFilters can be found in the -link:https://micrometer.io/docs/concepts[official documentation]. - -== Does Micrometer support annotations? - -Micrometer does define two annotations, `@Counted` and `@Timed`, that can be added to methods. The `@Timed` annotation -will wrap the execution of a method and will emit the following tags in addition to any tags defined on the -annotation itself: class, method, and exception (either "none" or the simple class name of a detected exception). - -Using annotations is limited, as you can't dynamically assign meaningful tag values. Also note that many methods, e.g. -REST endpoint methods or Vert.x Routes, are counted and timed by the micrometer extension out of the box. - -== Using other Registry implementations - -If you aren't using Prometheus, you have a few options. Some Micrometer registry implementations -have been wrapped in -https://github.com/quarkiverse/quarkiverse-micrometer-registry[Quarkiverse extensions]. 
-To use the Micrometer StackDriver MeterRegistry, for example, you would use the
-`quarkus-micrometer-registry-stackdriver` extension:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-micrometer-registry-stackdriver</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-micrometer-registry-stackdriver")
-----
-
-If the Micrometer registry you would like to use does not yet have an associated extension,
-use the `quarkus-micrometer` extension and bring in the packaged MeterRegistry dependency directly:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-micrometer</artifactId>
-</dependency>
-<dependency>
-    <groupId>com.acme</groupId>
-    <artifactId>custom-micrometer-registry</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-micrometer")
-implementation("com.acme:custom-micrometer-registry")
-----
-
-You will then need to specify your own provider to configure and initialize the
-MeterRegistry, as discussed in the next section.
-
-== Creating a customized MeterRegistry
-
-Use a custom `@Produces` method to create and configure a customized `MeterRegistry` if you need to.
-
-The following example customizes the line format used for StatsD:
-
-[source,java]
-----
-@Produces
-@Singleton
-public StatsdMeterRegistry createStatsdMeterRegistry(StatsdConfig statsdConfig, Clock clock) {
-    // define what to do with lines
-    Consumer<String> lineLogger = line -> logger.info(line);
-
-    // inject a configuration object, and then customize the line builder
-    return StatsdMeterRegistry.builder(statsdConfig)
-            .clock(clock)
-            .lineSink(lineLogger)
-            .build();
-}
-----
-
-This example corresponds to the following instructions in the Micrometer documentation:
-https://micrometer.io/docs/registry/statsD#_customizing_the_metrics_sink
-
-Note that the method returns the specific type of `MeterRegistry` as a `@Singleton`. Use MicroProfile Config
-to inject any configuration attributes you need to configure the registry. Most Micrometer registry extensions,
-like `quarkus-micrometer-registry-statsd`, define a producer for registry-specific configuration objects
-that are integrated with the Quarkus configuration model.
-
-== Support for the MicroProfile Metrics API
-
-If you use the MicroProfile Metrics API in your application, the Micrometer extension will create an adaptive
-layer to map those metrics into the Micrometer registry. Note that the naming conventions between the two
-systems are different, so the metrics that are emitted when using MP Metrics with Micrometer will change.
-You can use a `MeterFilter` to remap names or tags according to your conventions.
-
-[source,java]
-----
-@Produces
-@Singleton
-public MeterFilter renameApplicationMeters() {
-    final String targetMetric = MPResourceClass.class.getName() + ".mpAnnotatedMethodName";
-
-    return new MeterFilter() {
-        @Override
-        public Meter.Id map(Meter.Id id) {
-            if (id.getName().equals(targetMetric)) {
-                // Drop the scope tag (MP Registry type: application, vendor, base)
-                List<Tag> tags = id.getTags().stream().filter(x -> !"scope".equals(x.getKey()))
-                        .collect(Collectors.toList());
-                // rename the metric
-                return id.withName("my.metric.name").replaceTags(tags);
-            }
-            return id;
-        }
-    };
-}
-----
-
-Ensure the following dependency is present in your build file if you require the MicroProfile Metrics API:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>org.eclipse.microprofile.metrics</groupId>
-    <artifactId>microprofile-metrics-api</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("org.eclipse.microprofile.metrics:microprofile-metrics-api")
-----
-
-NOTE: The MP Metrics API compatibility layer will be moved to a different extension in the future.
-
-== Configuration Reference
-
-include::{generated-dir}/config/quarkus-micrometer.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/mongodb-panache-kotlin.adoc b/_versions/2.7/guides/mongodb-panache-kotlin.adoc
deleted file mode 100644
index aff813ab22d..00000000000
--- a/_versions/2.7/guides/mongodb-panache-kotlin.adoc
+++ /dev/null
@@ -1,200 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Simplified MongoDB with Panache and Kotlin
-
-include::./attributes.adoc[]
-:config-file: application.properties
-
-MongoDB is a well-known and widely used NoSQL database. MongoDB with Panache offers a
-new layer atop this familiar framework.
This guide will not dive into the specifics of either, as those are already
-covered in the xref:mongodb-panache.adoc[MongoDB with Panache guide]. In this guide, we'll cover the Kotlin-specific changes
-needed to use MongoDB with Panache in your Kotlin-based Quarkus applications.
-
-== First: an example
-
-As we saw in the MongoDB with Panache guide, Panache allows us to extend the functionality of our entities and repositories (also known as DAOs) with some automatically
-provided functionality. When using Kotlin, the approach is very similar to what we see in the Java version, with a slight
-change or two. To Panache-enable your entity, you would define it something like:
-
-[source,kotlin]
-----
-class Person : PanacheMongoEntity() {
-    lateinit var name: String
-    lateinit var birth: LocalDate
-    lateinit var status: Status
-}
-----
-
-As you can see, our entities remain simple. There is, however, a slight difference from the Java version. The Kotlin
-language doesn't support the notion of static methods in quite the same way as Java does. Instead, we must use a
-https://kotlinlang.org/docs/tutorials/kotlin-for-py/objects-and-companion-objects.html#companion-objects[companion object]:
-
-[source,kotlin]
-----
-class Person : PanacheMongoEntity() {
-    companion object : PanacheMongoCompanion<Person> { // <1>
-        fun findByName(name: String) = find("name", name).firstResult()
-        fun findAlive() = list("status", Status.Alive)
-        fun deleteStefs() = delete("name", "Stef")
-    }
-
-    lateinit var name: String // <2>
-    lateinit var birth: LocalDate
-    lateinit var status: Status
-}
-----
-<1> The companion object holds all the methods not related to a specific instance, allowing for general management and
-querying bound to a specific type.
-<2> Here there are options, but we've chosen the `lateinit` approach. This allows us to declare these fields as non-null,
-knowing they will be properly assigned either by the constructor (not shown) or by the MongoDB POJO codec loading data from the
-database.
NOTE: These types differ from the Java types mentioned in those tutorials. For Kotlin support, all the Panache
types will be found in the `io.quarkus.mongodb.panache.kotlin` package. This subpackage allows for the distinction
between the Java and Kotlin variants and allows for both to be used unambiguously in a single project.

In the Kotlin version, we've simply moved the bulk of the link:https://www.martinfowler.com/eaaCatalog/activeRecord.html[`active record pattern`]
functionality to the `companion object`. Apart from this slight change, we can then work with our types in ways that map easily
from the Java side of the world.

== Using the repository pattern

=== Defining your entity

When using the repository pattern, you can define your entities as regular POJOs.

[source,kotlin]
----
class Person {
    var id: ObjectId? = null // used by MongoDB for the _id field
    lateinit var name: String
    lateinit var birth: LocalDate
    lateinit var status: Status
}
----

=== Defining your repository

When using repositories, you get the exact same convenient methods as with the active record pattern, injected into your repository,
by making it implement `PanacheMongoRepository`:

[source,kotlin]
----
@ApplicationScoped
class PersonRepository : PanacheMongoRepository<Person> {
    fun findByName(name: String) = find("name", name).firstResult()
    fun findAlive() = list("status", Status.Alive)
    fun deleteStefs() = delete("name", "Stef")
}
----

All the operations that are defined on `PanacheMongoEntityBase` are available on your repository, so using it
is exactly the same as using the active record pattern, except you need to inject it:

[source,kotlin]
----
@Inject
lateinit var personRepository: PersonRepository

@GET
fun count() = personRepository.count()
----

=== Most useful operations

Once you have written your repository, here are the most common operations you will be able to perform:

[source,kotlin]
----
// creating a person
var person = Person()
person.name = "Stef"
person.birth = LocalDate.of(1910, Month.FEBRUARY, 1)
person.status = Status.Alive

// persist it: if you keep the default ObjectId ID field, it will be populated by the MongoDB driver
personRepository.persist(person)

person.status = Status.Dead

// You must call update() in order to send your entity modifications to MongoDB
personRepository.update(person)

// delete it
personRepository.delete(person)

// getting a list of all Person entities
val allPersons = personRepository.listAll()

// finding a specific person by ID
// here we build a new ObjectId but you can also retrieve it from the existing entity after being persisted
val personId = ObjectId(idAsString)
person = personRepository.findById(personId) ?: throw Exception("No person with that ID")

// finding all living persons
val livingPersons = personRepository.list("status", Status.Alive)

// counting all persons
val countAll = personRepository.count()

// counting all living persons
val countAlive = personRepository.count("status", Status.Alive)

// delete all living persons
personRepository.delete("status", Status.Alive)

// delete all persons
personRepository.deleteAll()

// delete by id
val deleted = personRepository.deleteById(personId)

// set the name of all living persons to 'Mortal'
personRepository.update("name = 'Mortal' where status = ?1", Status.Alive)
----

All `list` methods have equivalent `stream` versions.

[source,kotlin]
----
val persons = personRepository.streamAll()
val namesButEmmanuels = persons
    .map { it.name.toLowerCase() }
    .filter { it != "emmanuel" }
----

For more examples, please consult the xref:mongodb-panache.adoc[Java version] for complete details. Both APIs
are the same and work identically except for some Kotlin-specific tweaks to make things feel more natural to
Kotlin developers.
These tweaks include things like better use of nullability and the lack of `Optional` on API
methods.

== Setting up and configuring MongoDB with Panache

To get started using MongoDB with Panache with Kotlin, you can generally follow the steps laid out in the Java tutorial. The biggest
change to configuring your project is the Quarkus artifact to include. You can, of course, keep the Java version if you
need it, but if all you need are the Kotlin APIs then include the following dependency instead:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-mongodb-panache-kotlin</artifactId> <1>
</dependency>
----
<1> Note the addition of `-kotlin` on the end. Generally you'll only need this version, but if your project will be using
both Java and Kotlin code, you can safely include both artifacts.

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-mongodb-panache-kotlin") <1>
----
<1> Note the addition of `-kotlin` on the end. Generally you'll only need this version, but if your project will be using
both Java and Kotlin code, you can safely include both artifacts.
diff --git a/_versions/2.7/guides/mongodb-panache.adoc b/_versions/2.7/guides/mongodb-panache.adoc
deleted file mode 100644
index 23f55e8082a..00000000000
--- a/_versions/2.7/guides/mongodb-panache.adoc
+++ /dev/null
@@ -1,1203 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Simplified MongoDB with Panache

include::./attributes.adoc[]
:config-file: application.properties
:mongodb-doc-root-url: https://mongodb.github.io/mongo-java-driver/4.2

MongoDB is a well-known and widely used NoSQL database, but using its raw API can be cumbersome as you need to express your entities and your queries as a MongoDB link:{mongodb-doc-root-url}/bson/documents/#document[`Document`].

MongoDB with Panache provides active record style entities (and repositories) like you have in xref:hibernate-orm-panache.adoc[Hibernate ORM with Panache] and focuses on making your entities trivial and fun to write in Quarkus.

It is built on top of the xref:mongodb.adoc[MongoDB Client] extension.

== First: an example

Panache allows you to write your MongoDB entities like this:

[source,java]
----
public class Person extends PanacheMongoEntity {
    public String name;
    public LocalDate birth;
    public Status status;

    public static Person findByName(String name){
        return find("name", name).firstResult();
    }

    public static List<Person> findAlive(){
        return list("status", Status.Alive);
    }

    public static void deleteLoics(){
        delete("name", "Loïc");
    }
}
----

Have you noticed how much more compact and readable the code is compared to using the raw MongoDB API?
Does this look interesting? Read on!

NOTE: The `list()` method might be surprising at first. It takes fragments of PanacheQL queries (a subset of JPQL) and contextualizes the rest.
That makes for very concise yet readable code.
MongoDB native queries are also supported.

NOTE: What was described above is essentially the link:https://www.martinfowler.com/eaaCatalog/activeRecord.html[active record pattern], sometimes just called the entity pattern.
MongoDB with Panache also allows for the use of the more classical link:https://martinfowler.com/eaaCatalog/repository.html[repository pattern] via `PanacheMongoRepository`.

== Solution

We recommend that you follow the instructions in the next sections and create the application step by step.
However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `mongodb-panache-quickstart` {quickstarts-tree-url}/mongodb-panache-quickstart[directory].

== Creating the Maven project

First, we need a new project. Create a new project with the following command:

:create-app-artifact-id: mongodb-panache-quickstart
:create-app-extensions: resteasy-reactive-jackson,mongodb-panache
include::includes/devtools/create-app.adoc[]

This command generates a Maven structure importing the RESTEasy Reactive Jackson and MongoDB with Panache extensions.
After this, the `quarkus-mongodb-panache` extension has been added to your build file.
If you don't want to generate a new project, add the dependency in your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-mongodb-panache</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-mongodb-panache")
----

== Setting up and configuring MongoDB with Panache

To get started:

* Add your settings in `{config-file}`.
* Make your entities extend `PanacheMongoEntity` (optional if you are using the repository pattern).
* Optionally, use the `@MongoEntity` annotation to specify the name of the collection, the name of the database, or the name of the client.

Then add the relevant configuration properties in `{config-file}`:

[source,properties]
----
# configure the MongoDB client for a replica set of two nodes
quarkus.mongodb.connection-string = mongodb://mongo1:27017,mongo2:27017
# mandatory if you don't specify the name of the database using @MongoEntity
quarkus.mongodb.database = person
----

The `quarkus.mongodb.database` property will be used by MongoDB with Panache to determine the name of the database where your entities will be persisted (if not overridden by `@MongoEntity`).

The `@MongoEntity` annotation allows configuring:

* the name of the client for multi-tenant applications, see xref:mongodb.adoc#multiple-mongodb-clients[Multiple MongoDB Clients]; otherwise, the default client will be used.
* the name of the database; otherwise, the `quarkus.mongodb.database` property will be used.
* the name of the collection; otherwise, the simple name of the class will be used.

For advanced configuration of the MongoDB client, you can follow the xref:mongodb.adoc#configuring-the-mongodb-database[Configuring the MongoDB database guide].
== Solution 1: using the active record pattern

=== Defining your entity

To define a Panache entity, simply extend `PanacheMongoEntity` and add your columns as public fields.
You can add the `@MongoEntity` annotation to your entity if you need to customize the name of the collection, the database, or the client.

[source,java]
----
@MongoEntity(collection="ThePerson")
public class Person extends PanacheMongoEntity {
    public String name;

    // will be persisted as a 'birth' field in MongoDB
    @BsonProperty("birth")
    public LocalDate birthDate;

    public Status status;
}
----

NOTE: Annotating with `@MongoEntity` is optional. Here the entity will be stored in the `ThePerson` collection instead of the default `Person` collection.

MongoDB with Panache uses the link:{mongodb-doc-root-url}/bson/pojos/[PojoCodecProvider] to convert your entities to a MongoDB `Document`.

You can use the following annotations to customize this mapping:

- `@BsonId`: allows you to customize the ID field, see the Custom IDs section below.
- `@BsonProperty`: customize the serialized name of the field.
- `@BsonIgnore`: ignore a field during the serialization.

If you need to write accessors, you can:

[source,java]
----
public class Person extends PanacheMongoEntity {

    @JsonProperty
    public String name;
    public LocalDate birth;
    public Status status;

    // return name as uppercase in the model
    public String getName(){
        return name.toUpperCase();
    }

    // store all names in lowercase in the DB
    public void setName(String name){
        this.name = name.toLowerCase();
    }
}
----

And thanks to our field access rewrite, when your users read `person.name` they will actually call your `getName()` accessor, and similarly for field writes and the setter.
This allows for proper encapsulation at runtime, as all field accesses will be replaced by the corresponding getter/setter calls.
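The getter/setter pair above already enforces its invariants in plain Java, which is what the field-access rewrite preserves. A minimal standalone sketch of that behavior (the `AccessorDemo` wrapper class is only for this illustration; no Panache bytecode rewriting is involved here):

```java
public class AccessorDemo {
    static class Person {
        private String name;

        // return name as uppercase in the model
        public String getName() {
            return name.toUpperCase();
        }

        // store all names in lowercase in the DB
        public void setName(String name) {
            this.name = name.toLowerCase();
        }
    }

    public static void main(String[] args) {
        Person person = new Person();
        person.setName("Stef");
        // the stored value is lowercase, the value read back is uppercase
        System.out.println(person.getName());
    }
}
```

With Panache entities, a plain `person.name` read or write goes through the same accessors thanks to the rewrite, so the invariants hold even for direct field access.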
=== Most useful operations

Once you have written your entity, here are the most common operations you will be able to perform:

[source,java]
----
// creating a person
Person person = new Person();
person.name = "Loïc";
person.birth = LocalDate.of(1910, Month.FEBRUARY, 1);
person.status = Status.Alive;

// persist it: if you keep the default ObjectId ID field, it will be populated by the MongoDB driver
person.persist();

person.status = Status.Dead;

// You must call update() in order to send your entity modifications to MongoDB
person.update();

// delete it
person.delete();

// getting a list of all Person entities
List<Person> allPersons = Person.listAll();

// finding a specific person by ID
// here we build a new ObjectId but you can also retrieve it from the existing entity after being persisted
ObjectId personId = new ObjectId(idAsString);
person = Person.findById(personId);

// finding a specific person by ID via an Optional
Optional<Person> optional = Person.findByIdOptional(personId);
person = optional.orElseThrow(() -> new NotFoundException());

// finding all living persons
List<Person> livingPersons = Person.list("status", Status.Alive);

// counting all persons
long countAll = Person.count();

// counting all living persons
long countAlive = Person.count("status", Status.Alive);

// delete all living persons
Person.delete("status", Status.Alive);

// delete all persons
Person.deleteAll();

// delete by id
boolean deleted = Person.deleteById(personId);

// set the name of all living persons to 'Mortal'
long updated = Person.update("name", "Mortal").where("status", Status.Alive);
----

All `list` methods have equivalent `stream` versions.

[source,java]
----
Stream<Person> persons = Person.streamAll();
List<String> namesButEmmanuels = persons
    .map(p -> p.name.toLowerCase() )
    .filter( n -> !
"emmanuel".equals(n) )
    .collect(Collectors.toList());
----

NOTE: A `persistOrUpdate()` method exists that persists or updates an entity in the database; it uses the __upsert__ capability of MongoDB to do it in a single query.

=== Adding entity methods

Add custom queries on your entities inside the entities themselves.
That way, you and your co-workers can find them easily, and queries are co-located with the object they operate on.
Adding them as static methods in your entity class is the Panache Active Record way.

[source,java]
----
public class Person extends PanacheMongoEntity {
    public String name;
    public LocalDate birth;
    public Status status;

    public static Person findByName(String name){
        return find("name", name).firstResult();
    }

    public static List<Person> findAlive(){
        return list("status", Status.Alive);
    }

    public static void deleteLoics(){
        delete("name", "Loïc");
    }
}
----

== Solution 2: using the repository pattern

=== Defining your entity

You can define your entity as a regular POJO.
You can add the `@MongoEntity` annotation to your entity if you need to customize the name of the collection, the database, or the client.

[source,java]
----
@MongoEntity(collection="ThePerson")
public class Person {
    public ObjectId id; // used by MongoDB for the _id field
    public String name;
    public LocalDate birth;
    public Status status;
}
----

NOTE: Annotating with `@MongoEntity` is optional. Here the entity will be stored in the `ThePerson` collection instead of the default `Person` collection.

MongoDB with Panache uses the link:{mongodb-doc-root-url}/bson/pojos/[PojoCodecProvider] to convert your entities to a MongoDB `Document`.

You can use the following annotations to customize this mapping:

- `@BsonId`: allows you to customize the ID field, see the Custom IDs section below.
- `@BsonProperty`: customize the serialized name of the field.
- `@BsonIgnore`: ignore a field during the serialization.
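Conceptually, those annotations control how entity fields end up as document fields: `@BsonProperty` renames, `@BsonIgnore` drops. A toy sketch of that idea in plain Java (no BSON involved; `toDocument`, the rename map, and the ignore set are hypothetical stand-ins for what the codec does with the annotations):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class MappingDemo {
    // Hypothetical stand-in for the codec: renames fields (@BsonProperty)
    // and drops ignored ones (@BsonIgnore) when building the document.
    static Map<String, Object> toDocument(Map<String, Object> fields,
                                          Map<String, String> renames,
                                          Set<String> ignored) {
        Map<String, Object> doc = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (ignored.contains(e.getKey())) continue;
            doc.put(renames.getOrDefault(e.getKey(), e.getKey()), e.getValue());
        }
        return doc;
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("name", "Loic");
        fields.put("birthDate", "1910-02-01");
        fields.put("transientNote", "not stored");
        // birthDate is persisted as 'birth', transientNote is dropped
        System.out.println(toDocument(fields,
                Map.of("birthDate", "birth"), Set.of("transientNote")));
    }
}
```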
TIP: You can use public fields or private fields with getters/setters.
If you don't want to manage the ID by yourself, you can make your entity extend `PanacheMongoEntity`.

=== Defining your repository

When using repositories, you get the exact same convenient methods as with the active record pattern, injected into your repository,
by making it implement `PanacheMongoRepository`:

[source,java]
----
@ApplicationScoped
public class PersonRepository implements PanacheMongoRepository<Person> {

    // put your custom logic here as instance methods

    public Person findByName(String name){
        return find("name", name).firstResult();
    }

    public List<Person> findAlive(){
        return list("status", Status.Alive);
    }

    public void deleteLoics(){
        delete("name", "Loïc");
    }
}
----

All the operations that are defined on `PanacheMongoEntityBase` are available on your repository, so using it
is exactly the same as using the active record pattern, except you need to inject it:

[source,java]
----
@Inject
PersonRepository personRepository;

@GET
public long count(){
    return personRepository.count();
}
----

=== Most useful operations

Once you have written your repository, here are the most common operations you will be able to perform:

[source,java]
----
// creating a person
Person person = new Person();
person.name = "Loïc";
person.birth = LocalDate.of(1910, Month.FEBRUARY, 1);
person.status = Status.Alive;

// persist it: if you keep the default ObjectId ID field, it will be populated by the MongoDB driver
personRepository.persist(person);

person.status = Status.Dead;

// You must call update() in order to send your entity modifications to MongoDB
personRepository.update(person);

// delete it
personRepository.delete(person);

// getting a list of all Person entities
List<Person> allPersons = personRepository.listAll();

// finding a specific person by ID
// here we build a new ObjectId but you can also retrieve it from the
// existing entity after being persisted
ObjectId personId = new ObjectId(idAsString);
person = personRepository.findById(personId);

// finding a specific person by ID via an Optional
Optional<Person> optional = personRepository.findByIdOptional(personId);
person = optional.orElseThrow(() -> new NotFoundException());

// finding all living persons
List<Person> livingPersons = personRepository.list("status", Status.Alive);

// counting all persons
long countAll = personRepository.count();

// counting all living persons
long countAlive = personRepository.count("status", Status.Alive);

// delete all living persons
personRepository.delete("status", Status.Alive);

// delete all persons
personRepository.deleteAll();

// delete by id
boolean deleted = personRepository.deleteById(personId);

// set the name of all living persons to 'Mortal'
long updated = personRepository.update("name", "Mortal").where("status", Status.Alive);
----

All `list` methods have equivalent `stream` versions.

[source,java]
----
Stream<Person> persons = personRepository.streamAll();
List<String> namesButEmmanuels = persons
    .map(p -> p.name.toLowerCase() )
    .filter( n -> ! "emmanuel".equals(n) )
    .collect(Collectors.toList());
----

NOTE: A `persistOrUpdate()` method exists that persists or updates an entity in the database; it uses the __upsert__ capability of MongoDB to do it in a single query.

NOTE: The rest of the documentation shows usages based on the active record pattern only,
but keep in mind that they can be performed with the repository pattern as well.
The repository pattern examples have been omitted for brevity.

== Writing a JAX-RS resource

First, include one of the RESTEasy extensions to enable JAX-RS endpoints, for example, add the `io.quarkus:quarkus-resteasy-reactive-jackson` dependency for JAX-RS and JSON support.
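With Maven, that dependency (the `io.quarkus:quarkus-resteasy-reactive-jackson` artifact named above) is declared in your `pom.xml` as:

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-reactive-jackson</artifactId>
</dependency>
```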
Then, you can create the following resource to create/read/update/delete your Person entity:

[source,java]
----
@Path("/persons")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public class PersonResource {

    @GET
    public List<Person> list() {
        return Person.listAll();
    }

    @GET
    @Path("/{id}")
    public Person get(@PathParam("id") String id) {
        return Person.findById(new ObjectId(id));
    }

    @POST
    public Response create(Person person) {
        person.persist();
        return Response.created(URI.create("/persons/" + person.id)).build();
    }

    @PUT
    @Path("/{id}")
    public void update(@PathParam("id") String id, Person person) {
        person.update();
    }

    @DELETE
    @Path("/{id}")
    public void delete(@PathParam("id") String id) {
        Person person = Person.findById(new ObjectId(id));
        if(person == null) {
            throw new NotFoundException();
        }
        person.delete();
    }

    @GET
    @Path("/search/{name}")
    public Person search(@PathParam("name") String name) {
        return Person.findByName(name);
    }

    @GET
    @Path("/count")
    public Long count() {
        return Person.count();
    }
}
----

== Advanced Query

=== Paging

You should only use `list` and `stream` methods if your collection contains small enough data sets.
For larger data sets you can use the `find` method equivalents, which return a `PanacheQuery` on which you can do paging:

[source,java]
----
// create a query for all living persons
PanacheQuery<Person> livingPersons = Person.find("status", Status.Alive);

// make it use pages of 25 entries at a time
livingPersons.page(Page.ofSize(25));

// get the first page
List<Person> firstPage = livingPersons.list();

// get the second page
List<Person> secondPage = livingPersons.nextPage().list();

// get page 7
List<Person> page7 = livingPersons.page(Page.of(7, 25)).list();

// get the number of pages
int numberOfPages = livingPersons.pageCount();

// get the total number of entities returned by this query without paging
long count = livingPersons.count();

// and you can chain methods of course
return Person.find("status", Status.Alive)
    .page(Page.ofSize(25))
    .nextPage()
    .stream();
----

The `PanacheQuery` type has many other methods to deal with paging and returning streams.

=== Using a range instead of pages

`PanacheQuery` also allows range-based queries.

[source,java]
----
// create a query for all living persons
PanacheQuery<Person> livingPersons = Person.find("status", Status.Alive);

// make it use a range: start at index 0 until index 24 (inclusive).
livingPersons.range(0, 24);

// get the range
List<Person> firstRange = livingPersons.list();

// to get the next range, you need to call range again
List<Person> secondRange = livingPersons.range(25, 49).list();
----

[WARNING]
====
You cannot mix ranges and pages: if you use a range, all methods that depend on having a current page will throw an `UnsupportedOperationException`;
you can switch back to paging using `page(Page)` or `page(int, int)`.
====

=== Sorting

All methods accepting a query string also accept an optional `Sort` parameter, which allows you to abstract your sorting:

[source,java]
----
List<Person> persons = Person.list(Sort.by("name").and("birth"));

// and with more restrictions
List<Person> persons = Person.list("status", Sort.by("name").and("birth"), Status.Alive);
----

The `Sort` class has plenty of methods for adding columns and specifying sort direction.

=== Simplified queries

Normally, MongoDB queries are of this form: `{'firstname': 'John', 'lastname':'Doe'}`; this is what we call MongoDB native queries.

You can use them if you want, but we also support what we call **PanacheQL**, which can be seen as a subset of link:https://docs.oracle.com/javaee/7/tutorial/persistence-querylanguage.htm#BNBTG[JPQL] (or link:https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#hql[HQL]) and allows you to easily express a query.
MongoDB with Panache will then map it to a MongoDB native query.

If your query does not start with `{`, we will consider it a PanacheQL query:

- `<singleColumnName>` (and a single parameter) will expand to `{'singleColumnName': '?1'}`
- `<query>` will expand to `{<query>}` where we will map the PanacheQL query to a MongoDB native query form. We support the following operators that will be mapped to the corresponding MongoDB operators: 'and', 'or' (mixing 'and' and 'or' is not currently supported), '=', '>', '>=', '<', '<=', '!=', 'is null', 'is not null', and 'like', which is mapped to the MongoDB `$regex` operator (both String and JavaScript patterns are supported).

Here are some query examples:

- `firstname = ?1 and status = ?2` will be mapped to `{'firstname': ?1, 'status': ?2}`
- `amount > ?1 and firstname != ?2` will be mapped to `{'amount': {'$gt': ?1}, 'firstname': {'$ne': ?2}}`
- `lastname like ?1` will be mapped to `{'lastname': {'$regex': ?1}}`.
Be careful that this uses link:https://docs.mongodb.com/manual/reference/operator/query/regex/#op._S_regex[MongoDB regex] support and not SQL `LIKE` patterns.
- `lastname is not null` will be mapped to `{'lastname':{'$exists': true}}`
- `status in ?1` will be mapped to `{'status':{$in: [?1]}}`

WARNING: MongoDB queries must be valid JSON documents;
using the same field multiple times in a query is not allowed using PanacheQL as it would generate an invalid JSON
(see link:https://github.com/quarkusio/quarkus/issues/12086[this issue on GitHub]).

We also handle some basic date type transformations: all fields of type `Date`, `LocalDate`, `LocalDateTime` or `Instant` will be mapped to the
link:https://docs.mongodb.com/manual/reference/bson-types/#date[BSON Date] using the `ISODate` type (UTC datetime).
The MongoDB POJO codec doesn't support `ZonedDateTime` and `OffsetDateTime`, so you should convert them prior to usage.

MongoDB with Panache also supports extended MongoDB queries by providing a `Document` query; this is supported by the find/list/stream/count/delete methods.

MongoDB with Panache offers operations to update multiple documents based on an update document and a query:
`Person.update("foo = ?1, bar = ?2", fooName, barName).where("name = ?1", name)`.
For these operations, you can express the update document the same way you express your queries. Here are some examples:

- `<singleColumnName>` (and a single parameter) will expand to the update document `{'$set' : {'singleColumnName': '?1'}}`
- `firstname = ?1, status = ?2` will be mapped to the update document `{'$set' : {'firstname': ?1, 'status': ?2}}`
- `firstname = :firstname, status = :status` will be mapped to the update document `{'$set' : {'firstname': :firstname, 'status': :status}}`
- `{'firstname' : ?1, 'status' : ?2}` will be mapped to the update document `{'$set' : {'firstname': ?1, 'status': ?2}}`
- `{'firstname' : :firstname, 'status' : :status}` will be mapped to the update document `{'$set' : {'firstname': :firstname, 'status': :status}}`

=== Query parameters

You can pass query parameters, for both native and PanacheQL queries, by index (1-based) as shown below:

[source,java]
----
Person.find("name = ?1 and status = ?2", "Loïc", Status.Alive);
Person.find("{'name': ?1, 'status': ?2}", "Loïc", Status.Alive);
----

Or by name using a `Map`:

[source,java]
----
Map<String, Object> params = new HashMap<>();
params.put("name", "Loïc");
params.put("status", Status.Alive);
Person.find("name = :name and status = :status", params);
Person.find("{'name': :name, 'status': :status}", params);
----

Or using the convenience class `Parameters` either as is or to build a `Map`:

[source,java]
----
// generate a Map
Person.find("name = :name and status = :status",
    Parameters.with("name", "Loïc").and("status", Status.Alive).map());

// use it as-is
Person.find("{'name': :name, 'status': :status}",
    Parameters.with("name", "Loïc").and("status", Status.Alive));
----

Every query operation accepts passing parameters by index (`Object...`), or by name (`Map<String, Object>` or `Parameters`).

When you use query parameters, be careful: PanacheQL queries will refer to the entity field names, while native queries will refer to the MongoDB field names.
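As a toy illustration of how named parameters conceptually expand into a native query document (this is not Panache's actual implementation; `bindNamed` is a hypothetical helper written only for this sketch):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NamedParamsDemo {
    // Hypothetical helper: substitutes :name style placeholders in a native
    // query template with quoted values from the parameter map.
    static String bindNamed(String template, Map<String, Object> params) {
        String result = template;
        for (Map.Entry<String, Object> e : params.entrySet()) {
            result = result.replace(":" + e.getKey(), "'" + e.getValue() + "'");
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> params = new LinkedHashMap<>();
        params.put("name", "Loic");
        params.put("status", "Alive");
        // prints {'name': 'Loic', 'status': 'Alive'}
        System.out.println(bindNamed("{'name': :name, 'status': :status}", params));
    }
}
```

In the real API, this substitution is handled for you by passing a `Map` or a `Parameters` instance, and values are bound as typed BSON values rather than interpolated strings.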
Imagine the following entity:

[source,java]
----
public class Person extends PanacheMongoEntity {
    @BsonProperty("lastname")
    public String name;
    public LocalDate birth;
    public Status status;

    public static Person findByNameWithPanacheQLQuery(String name){
        return find("name", name).firstResult();
    }

    public static Person findByNameWithNativeQuery(String name){
        return find("{'lastname': ?1}", name).firstResult();
    }
}
----

Both the `findByNameWithPanacheQLQuery()` and `findByNameWithNativeQuery()` methods will return the same result, but the query written in PanacheQL
uses the entity field name `name`, while the native query uses the MongoDB field name `lastname`.

=== Query projection

Query projection can be done with the `project(Class)` method on the `PanacheQuery` object that is returned by the `find()` methods.

You can use it to restrict which fields will be returned by the database;
the ID field will always be returned, but it's not mandatory to include it inside the projection class.

For this, you need to create a class (a POJO) that will only contain the projected fields.
This POJO needs to be annotated with `@ProjectionFor(Entity.class)` where `Entity` is the name of your entity class.
The field names, or getters, of the projection class will be used to restrict which properties will be loaded from the database.

Projection can be done for both PanacheQL and native queries.
[source,java]
----
import io.quarkus.mongodb.panache.common.ProjectionFor;
import org.bson.codecs.pojo.annotations.BsonProperty;

// using public fields
@ProjectionFor(Person.class)
public class PersonName {
    public String name;
}

// using getters
@ProjectionFor(Person.class)
public class PersonNameWithGetter {
    private String name;

    public String getName(){
        return name;
    }

    public void setName(String name){
        this.name = name;
    }
}

// only 'name' will be loaded from the database
PanacheQuery<PersonName> shortQuery = Person.find("status", Status.Alive).project(PersonName.class);
PanacheQuery<PersonNameWithGetter> query = Person.find("'status': ?1", Status.Alive).project(PersonNameWithGetter.class);
PanacheQuery<PersonName> nativeQuery = Person.find("{'status': 'ALIVE'}", Status.Alive).project(PersonName.class);
----

TIP: Using `@BsonProperty` is not needed to define custom column mappings, as the mappings from the entity class will be used.

TIP: You can have your projection class extend another class. In this case, the parent class also needs to have the `@ProjectionFor` annotation.

== Query debugging

As MongoDB with Panache allows writing simplified queries, it is sometimes handy to log the generated native queries for debugging purposes.

This can be achieved by setting the following log category to DEBUG inside your `application.properties`:

[source,properties]
----
quarkus.log.category."io.quarkus.mongodb.panache.runtime".level=DEBUG
----

== The PojoCodecProvider: easy object to BSON document conversion

MongoDB with Panache uses the link:{mongodb-doc-root-url}/bson/pojos[PojoCodecProvider], with link:{mongodb-doc-root-url}/pojos/#pojo-support[automatic POJO support],
to automatically convert your object to a BSON document.

In case you encounter the `org.bson.codecs.configuration.CodecConfigurationException` exception, it means the codec is not able to
automatically convert your object.
This codec obeys the Java Bean standard, so it will successfully convert a POJO using public fields or getters/setters.
You can use `@BsonIgnore` to make a field, or a getter/setter, ignored by the codec.

If your class doesn't obey these rules (for example by including a method that starts with `get` but is not a getter),
you could provide a custom codec for it.
Your custom codec will be automatically discovered and registered inside the codec registry.
See xref:mongodb.adoc#simplifying-mongodb-client-usage-using-bson-codec[Using BSON codec].

== Transactions

MongoDB offers ACID transactions since version 4.0.

To use them with MongoDB with Panache, you need to annotate the method that starts the transaction with the `@Transactional` annotation.

WARNING: Transaction support inside MongoDB with Panache is still experimental.

== Custom IDs

IDs are often a touchy subject. In MongoDB, they are usually auto-generated by the database with an `ObjectId` type.
In MongoDB with Panache, the ID is defined by a field named `id` of the `org.bson.types.ObjectId` type,
but if you want to customize it, once again we have you covered.

You can specify your own ID strategy by extending `PanacheMongoEntityBase` instead of `PanacheMongoEntity`. Then
you just declare whatever ID you want as a public field and annotate it with `@BsonId`:

[source,java]
----
@MongoEntity
public class Person extends PanacheMongoEntityBase {

    @BsonId
    public Integer myId;

    //...
}
----

If you're using repositories, then you will want to extend `PanacheMongoRepositoryBase` instead of `PanacheMongoRepository`
and specify your ID type as an extra type parameter:

[source,java]
----
@ApplicationScoped
public class PersonRepository implements PanacheMongoRepositoryBase<Person, Integer> {
    //...
}
----

[NOTE]
====
When using `ObjectId`, MongoDB will automatically provide a value for you, but if you use a custom field type,
you need to provide the value by yourself.
-====
-
-`ObjectId` can be difficult to use if you want to expose its value in your REST service.
-So we created Jackson and JSON-B providers to serialize/deserialize them as a `String`; they are automatically registered if your project depends on either the RESTEasy Jackson extension or the RESTEasy JSON-B extension.
-
-[IMPORTANT]
-====
-If you use the standard `ObjectId` ID type, don't forget to retrieve your entity by creating a new `ObjectId` when the identifier comes from a path parameter. For example:
-
-[source,java]
----
-@GET
-@Path("/{id}")
-public Person findById(@PathParam("id") String id) {
-    return Person.findById(new ObjectId(id));
-}
----
-====
-
-== Working with Kotlin Data classes
-
-Kotlin data classes are a very convenient way of defining data carrier classes, making them a great match to define an entity class.
-
-But this type of class comes with some limitations: all fields need to be initialized at construction time or be marked as nullable,
-and the generated constructor needs to have as parameters all the fields of the data class.
-
-MongoDB with Panache uses the link:{mongodb-doc-root-url}/bson/pojos[PojoCodecProvider], a MongoDB codec which mandates the presence of a parameterless constructor.
-
-Therefore, if you want to use a data class as an entity class, you need a way to make Kotlin generate an empty constructor.
-To do so, you need to provide default values for all the fields of your classes.
-The following sentence from the Kotlin documentation explains it:
-
-__On the JVM, if the generated class needs to have a parameterless constructor, default values for all properties have to be specified (see Constructors).__
-
-If, for whatever reason, the aforementioned solution is deemed unacceptable, there are alternatives.
-
-First, you can create a BSON Codec which will be automatically registered by Quarkus and will be used instead of the `PojoCodecProvider`.
-See this part of the documentation: xref:mongodb.adoc#simplifying-mongodb-client-usage-using-bson-codec[Using BSON codec].
-
-Another option is to use the `@BsonCreator` annotation to tell the `PojoCodecProvider` to use the Kotlin data class default constructor,
-in which case all constructor parameters have to be annotated with `@BsonProperty`: see link:{mongodb-doc-root-url}/bson/pojos/#supporting-pojos-without-no-args-constructors[Supporting pojos without no args constructor].
-
-This will only work when the entity extends `PanacheMongoEntityBase` and not `PanacheMongoEntity`, as the ID field also needs to be included in the constructor.
-
-An example of a `Person` class defined as a Kotlin data class would look like:
-
-[source,kotlin]
----
-data class Person @BsonCreator constructor (
-    @BsonId var id: ObjectId,
-    @BsonProperty("name") var name: String,
-    @BsonProperty("birth") var birth: LocalDate,
-    @BsonProperty("status") var status: Status
-): PanacheMongoEntityBase()
----
-
-[TIP]
-====
-Here we use `var` but note that `val` can also be used.
-
-The `@BsonId` annotation is used instead of `@BsonProperty("_id")` for brevity's sake, but use of either is valid.
-====
-
-The last option is to use the link:https://kotlinlang.org/docs/reference/compiler-plugins.html#no-arg-compiler-plugin[no-arg] compiler plugin.
-This plugin is configured with a list of annotations, and the end result is the generation of a no-arg constructor for each class annotated with them.
-
-For MongoDB with Panache, you could use the `@MongoEntity` annotation on your data class for this:
-
-[source,kotlin]
----
-@MongoEntity
-data class Person (
-    var name: String,
-    var birth: LocalDate,
-    var status: Status
-): PanacheMongoEntity()
----
-
-[[reactive]]
-== Reactive Entities and Repositories
-
-MongoDB with Panache allows using a reactive style implementation for both entities and repositories.
-For this, you need to use the reactive variants when defining your entities: `ReactivePanacheMongoEntity` or `ReactivePanacheMongoEntityBase`,
-and when defining your repositories: `ReactivePanacheMongoRepository` or `ReactivePanacheMongoRepositoryBase`.
-
-[TIP]
-.Mutiny
-====
-The reactive API of MongoDB with Panache uses Mutiny reactive types.
-If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
-====
-
-The reactive variant of the `Person` class will be:
-
-[source,java]
----
-public class ReactivePerson extends ReactivePanacheMongoEntity {
-    public String name;
-    public LocalDate birth;
-    public Status status;
-
-    // return name as uppercase in the model
-    public String getName(){
-        return name.toUpperCase();
-    }
-
-    // store all names in lowercase in the DB
-    public void setName(String name){
-        this.name = name.toLowerCase();
-    }
-}
----
-
-You will have access to the same functionalities as the _imperative_ variant inside the reactive one: BSON annotations, custom IDs, PanacheQL, ...
-But the methods on your entities or repositories will all return reactive types.
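If you prefer the repository pattern, a reactive repository can be sketched the same way. The `ReactivePersonRepository` below is a hypothetical example (not part of the guide's quickstart), assuming the `ReactivePerson` entity and `Status` enum shown above:

```java
import io.quarkus.mongodb.panache.reactive.ReactivePanacheMongoRepository;
import io.smallrye.mutiny.Uni;

import javax.enterprise.context.ApplicationScoped;
import java.util.List;

@ApplicationScoped
public class ReactivePersonRepository implements ReactivePanacheMongoRepository<ReactivePerson> {

    // custom query methods return Mutiny types instead of blocking values
    public Uni<List<ReactivePerson>> findAlive() {
        return list("status", Status.Alive);
    }
}
```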
-
-See the equivalent methods from the imperative example with the reactive variant:
-
-[source,java]
----
-// creating a person
-ReactivePerson person = new ReactivePerson();
-person.name = "Loïc";
-person.birth = LocalDate.of(1910, Month.FEBRUARY, 1);
-person.status = Status.Alive;
-
-// persist it: if you keep the default ObjectId ID field, it will be populated by the MongoDB driver,
-// and accessible when uni1 will be resolved
-Uni<Void> uni1 = person.persist();
-
-person.status = Status.Dead;
-
-// You must call update() in order to send your entity modifications to MongoDB
-Uni<Void> uni2 = person.update();
-
-// delete it
-Uni<Void> uni3 = person.delete();
-
-// getting a list of all persons
-Uni<List<ReactivePerson>> allPersons = ReactivePerson.listAll();
-
-// finding a specific person by ID
-// here we build a new ObjectId but you can also retrieve it from the existing entity after being persisted
-ObjectId personId = new ObjectId(idAsString);
-Uni<ReactivePerson> personById = ReactivePerson.findById(personId);
-
-// finding a specific person by ID via an Optional
-Uni<Optional<ReactivePerson>> optional = ReactivePerson.findByIdOptional(personId);
-personById = optional.map(o -> o.orElseThrow(() -> new NotFoundException()));
-
-// finding all living persons
-Uni<List<ReactivePerson>> livingPersons = ReactivePerson.list("status", Status.Alive);
-
-// counting all persons
-Uni<Long> countAll = ReactivePerson.count();
-
-// counting all living persons
-Uni<Long> countAlive = ReactivePerson.count("status", Status.Alive);
-
-// delete all living persons
-Uni<Long> deleteCount = ReactivePerson.delete("status", Status.Alive);
-
-// delete all persons
-deleteCount = ReactivePerson.deleteAll();
-
-// delete by id
-Uni<Boolean> deleted = ReactivePerson.deleteById(personId);
-
-// set the name of all living persons to 'Mortal'
-Uni<Long> updated = ReactivePerson.update("name", "Mortal").where("status", Status.Alive);
----
-
-TIP: If you use MongoDB with Panache in conjunction with RESTEasy, you can directly return a reactive type inside your JAX-RS resource endpoint as long as you include the
`quarkus-resteasy-mutiny` extension.
-
-The same query facility exists for the reactive types, but the `stream()` methods act differently: they return a `Multi` (which implements a reactive stream `Publisher`) instead of a `Stream`.
-
-This allows more advanced reactive use cases. For example, you can use it to send server-sent events (SSE) via RESTEasy:
-
-[source,java]
----
-import io.smallrye.mutiny.Multi;
-import org.jboss.resteasy.annotations.SseElementType;
-import org.reactivestreams.Publisher;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-@GET
-@Path("/stream")
-@Produces(MediaType.SERVER_SENT_EVENTS)
-@SseElementType(MediaType.APPLICATION_JSON)
-public Multi<ReactivePerson> streamPersons() {
-    return ReactivePerson.streamAll();
-}
----
-
-TIP: `@SseElementType(MediaType.APPLICATION_JSON)` tells RESTEasy to serialize the objects as JSON.
-
-WARNING: Transactions are not supported for Reactive Entities and Repositories.
-
-== Mocking
-
-=== Using the active-record pattern
-
-If you are using the active-record pattern, you cannot use Mockito directly as it does not support mocking static methods,
-but you can use the `quarkus-panache-mock` module which allows you to use Mockito to mock all provided static
-methods, including your own.
-
-Add this dependency to your `pom.xml`:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-panache-mock</artifactId>
-    <scope>test</scope>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-testImplementation("io.quarkus:quarkus-panache-mock")
----
-
-Given this simple entity:
-
-[source,java]
----
-public class Person extends PanacheMongoEntity {
-
-    public String name;
-
-    public static List<Person> findOrdered() {
-        return findAll(Sort.by("lastname", "firstname")).list();
-    }
-}
----
-
-You can write your mocking test like this:
-
-[source,java]
----
-@QuarkusTest
-public class PanacheFunctionalityTest {
-
-    @Test
-    public void testPanacheMocking() {
-        PanacheMock.mock(Person.class);
-
-        // Mocked classes always return a default value
-        Assertions.assertEquals(0, Person.count());
-
-        // Now let's specify the return value
-        Mockito.when(Person.count()).thenReturn(23L);
-        Assertions.assertEquals(23, Person.count());
-
-        // Now let's change the return value
-        Mockito.when(Person.count()).thenReturn(42L);
-        Assertions.assertEquals(42, Person.count());
-
-        // Now let's call the original method
-        Mockito.when(Person.count()).thenCallRealMethod();
-        Assertions.assertEquals(0, Person.count());
-
-        // Check that we called it 4 times
-        PanacheMock.verify(Person.class, Mockito.times(4)).count();// <1>
-
-        // Mock only with specific parameters
-        Person p = new Person();
-        Mockito.when(Person.findById(12L)).thenReturn(p);
-        Assertions.assertSame(p, Person.findById(12L));
-        Assertions.assertNull(Person.findById(42L));
-
-        // Mock throwing
-        Mockito.when(Person.findById(12L)).thenThrow(new WebApplicationException());
-        Assertions.assertThrows(WebApplicationException.class, () -> Person.findById(12L));
-
-        // We can even mock your custom methods
-        Mockito.when(Person.findOrdered()).thenReturn(Collections.emptyList());
-        Assertions.assertTrue(Person.findOrdered().isEmpty());
-
-
-        PanacheMock.verify(Person.class).findOrdered();
-        PanacheMock.verify(Person.class, Mockito.atLeastOnce()).findById(Mockito.any());
-        PanacheMock.verifyNoMoreInteractions(Person.class);
-    }
-}
----
-<1> Be sure to call your `verify` methods on `PanacheMock` rather than `Mockito`, otherwise you won't know
-what mock object to pass.
-
-=== Using the repository pattern
-
-If you are using the repository pattern, you can use Mockito directly with the `quarkus-junit5-mockito` module,
-which makes mocking beans much easier:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-junit5-mockito</artifactId>
-    <scope>test</scope>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-testImplementation("io.quarkus:quarkus-junit5-mockito")
----
-
-Given this simple entity:
-
-[source,java]
----
-public class Person {
-
-    @BsonId
-    public Long id;
-
-    public String name;
-}
----
-
-And this repository:
-
-[source,java]
----
-@ApplicationScoped
-public class PersonRepository implements PanacheMongoRepository<Person> {
-    public List<Person> findOrdered() {
-        return findAll(Sort.by("lastname", "firstname")).list();
-    }
-}
----
-
-You can write your mocking test like this:
-
-[source,java]
----
-@QuarkusTest
-public class PanacheFunctionalityTest {
-    @InjectMock
-    PersonRepository personRepository;
-
-    @Test
-    public void testPanacheRepositoryMocking() throws Throwable {
-        // Mocked classes always return a default value
-        Assertions.assertEquals(0, personRepository.count());
-
-        // Now let's specify the return value
-        Mockito.when(personRepository.count()).thenReturn(23L);
-        Assertions.assertEquals(23, personRepository.count());
-
-        // Now let's change the return value
-        Mockito.when(personRepository.count()).thenReturn(42L);
-        Assertions.assertEquals(42, personRepository.count());
-
-        // Now let's call the original method
-        Mockito.when(personRepository.count()).thenCallRealMethod();
-
Assertions.assertEquals(0, personRepository.count());
-
-        // Check that we called it 4 times
-        Mockito.verify(personRepository, Mockito.times(4)).count();
-
-        // Mock only with specific parameters
-        Person p = new Person();
-        Mockito.when(personRepository.findById(12L)).thenReturn(p);
-        Assertions.assertSame(p, personRepository.findById(12L));
-        Assertions.assertNull(personRepository.findById(42L));
-
-        // Mock throwing
-        Mockito.when(personRepository.findById(12L)).thenThrow(new WebApplicationException());
-        Assertions.assertThrows(WebApplicationException.class, () -> personRepository.findById(12L));
-
-        // We can even mock your custom methods
-        Mockito.when(personRepository.findOrdered()).thenReturn(Collections.emptyList());
-        Assertions.assertTrue(personRepository.findOrdered().isEmpty());
-
-        Mockito.verify(personRepository).findOrdered();
-        Mockito.verify(personRepository, Mockito.atLeastOnce()).findById(Mockito.any());
-        Mockito.verifyNoMoreInteractions(personRepository);
-    }
-}
----
-
-
-== How and why we simplify MongoDB API
-
-When it comes to writing MongoDB entities, there are a number of annoying things that users have reluctantly grown
-used to dealing with, such as:
-
-- Duplicating ID logic: most entities need an ID, most people don't care how it's set, because it's not really
-relevant to your model.
-- Dumb getters and setters: since Java lacks support for properties in the language, we have to create fields,
-then generate getters and setters for those fields, even if they don't actually do anything more than read/write
-the fields.
-- Traditional EE patterns advise splitting the entity definition (the model) from the operations you can do on them
-(DAOs, Repositories), but really that requires an unnatural split between the state and its operations even though
-we would never do something like that for regular objects in the Object Oriented architecture, where state and methods are in the same class.
Moreover, this requires two classes per entity, and requires injection of the DAO or Repository where you need to do entity operations, which breaks your edit flow and requires you to get out of the code you're writing to set up an injection point before coming back to use it. -- MongoDB queries are super powerful, but overly verbose for common operations, requiring you to write queries even -when you don't need all the parts. -- MongoDB queries are JSON based, so you will need some String manipulation or using the `Document` type and it will need a lot of boilerplate code. - -With Panache, we took an opinionated approach to tackle all these problems: - -- Make your entities extend `PanacheMongoEntity`: it has an ID field that is auto-generated. If you require -a custom ID strategy, you can extend `PanacheMongoEntityBase` instead and handle the ID yourself. -- Use public fields. Get rid of dumb getter and setters. Under the hood, we will generate all getters and setters -that are missing, and rewrite every access to these fields to use the accessor methods. This way you can still -write _useful_ accessors when you need them, which will be used even though your entity users still use field accesses. -- With the active record pattern: put all your entity logic in static methods in your entity class and don't create DAOs. -Your entity superclass comes with lots of super useful static methods, and you can add your own in your entity class. -Users can just start using your entity `Person` by typing `Person.` and getting completion for all the operations in a single place. -- Don't write parts of the query that you don't need: write `Person.find("order by name")` or -`Person.find("name = ?1 and status = ?2", "Loïc", Status.Alive)` or even better `Person.find("name", "Loïc")`. - -That's all there is to it: with Panache, MongoDB has never looked so trim and neat. 
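To make the accessor-rewriting point above concrete, here is a plain-Java sketch (a hypothetical `Person` class, shown without the Panache bytecode enhancement itself): the entity exposes a public field plus one hand-written, genuinely useful accessor; with the enhancement in place, reads of the field would be rewritten to go through that getter.

```java
// Hypothetical sketch: only the "useful" accessor is written by hand.
public class Person /* extends PanacheMongoEntity in a real application */ {
    public String name;

    // Adds behavior, so it is worth writing; the trivial getters/setters
    // for other fields would be generated by Panache under the hood.
    public String getName() {
        return name == null ? null : name.toUpperCase();
    }
}
```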
-
-== Defining entities in external projects or jars
-
-MongoDB with Panache relies on compile-time bytecode enhancements to your entities.
-
-It attempts to identify archives with Panache entities (and consumers of Panache entities)
-by the presence of the marker file `META-INF/panache-archive.marker`. Panache includes an
-annotation processor that will automatically create this file in archives that depend on
-Panache (even indirectly). If you have disabled annotation processors, you may need to create
-this file manually in some cases.
diff --git a/_versions/2.7/guides/mongodb.adoc b/_versions/2.7/guides/mongodb.adoc
deleted file mode 100644
index 84e9e8bb352..00000000000
--- a/_versions/2.7/guides/mongodb.adoc
+++ /dev/null
@@ -1,722 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using the MongoDB Client
-include::./attributes.adoc[]
-
-MongoDB is a well-known NoSQL database that is widely used.
-
-In this guide, we will see how you can get your REST services to use the MongoDB database.
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-* MongoDB installed or Docker installed
-
-== Architecture
-
-The application built in this guide is quite simple: the user can add elements to a list using a form, and the list is updated.
-
-All the information between the browser and the server is formatted as JSON.
-
-The elements are stored in MongoDB.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `mongodb-quickstart` {quickstarts-tree-url}/mongodb-quickstart[directory].
-
-== Creating the Maven project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: mongodb-quickstart
-:create-app-extensions: resteasy-reactive-jackson,mongodb-client
-include::includes/devtools/create-app.adoc[]
-
-This command generates a Maven structure importing the RESTEasy Reactive Jackson and MongoDB Client extensions.
-After this, the `quarkus-mongodb-client` extension has been added to your build file.
-
-If you already have your Quarkus project configured, you can add the `mongodb-client` extension
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: mongodb-client
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your `pom.xml`:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-mongodb-client</artifactId>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-implementation("io.quarkus:quarkus-mongodb-client")
----
-
-== Creating your first JSON REST service
-
-In this example, we will create an application to manage a list of fruits.
- -First, let's create the `Fruit` bean as follows: - -[source,java] ----- -package org.acme.mongodb; - -import java.util.Objects; - -public class Fruit { - - private String name; - private String description; - private String id; - - public Fruit() { - } - - public Fruit(String name, String description) { - this.name = name; - this.description = description; - } - - public String getName() { - return name; - } - - public void setName(String name) { - this.name = name; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - @Override - public boolean equals(Object obj) { - if (!(obj instanceof Fruit)) { - return false; - } - - Fruit other = (Fruit) obj; - - return Objects.equals(other.name, this.name); - } - - @Override - public int hashCode() { - return Objects.hash(this.name); - } - - public void setId(String id) { - this.id = id; - } - - public String getId() { - return id; - } -} ----- - -Nothing fancy. One important thing to note is that having a default constructor is required by the JSON serialization layer. - -Now create a `org.acme.mongodb.FruitService` that will be the business layer of our application and store/load the fruits from the mongoDB database. 
-
-[source,java]
----
-package org.acme.mongodb;
-
-import com.mongodb.client.MongoClient;
-import com.mongodb.client.MongoCollection;
-import com.mongodb.client.MongoCursor;
-import org.bson.Document;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.inject.Inject;
-import java.util.ArrayList;
-import java.util.List;
-
-@ApplicationScoped
-public class FruitService {
-
-    @Inject MongoClient mongoClient;
-
-    public List<Fruit> list(){
-        List<Fruit> list = new ArrayList<>();
-        MongoCursor<Document> cursor = getCollection().find().iterator();
-
-        try {
-            while (cursor.hasNext()) {
-                Document document = cursor.next();
-                Fruit fruit = new Fruit();
-                fruit.setName(document.getString("name"));
-                fruit.setDescription(document.getString("description"));
-                list.add(fruit);
-            }
-        } finally {
-            cursor.close();
-        }
-        return list;
-    }
-
-    public void add(Fruit fruit){
-        Document document = new Document()
-                .append("name", fruit.getName())
-                .append("description", fruit.getDescription());
-        getCollection().insertOne(document);
-    }
-
-    private MongoCollection<Document> getCollection(){
-        return mongoClient.getDatabase("fruit").getCollection("fruit");
-    }
-}
----
-
-Now, create the `org.acme.mongodb.FruitResource` class as follows:
-
-[source,java]
----
-@Path("/fruits")
-public class FruitResource {
-
-    @Inject FruitService fruitService;
-
-    @GET
-    public List<Fruit> list() {
-        return fruitService.list();
-    }
-
-    @POST
-    public List<Fruit> add(Fruit fruit) {
-        fruitService.add(fruit);
-        return list();
-    }
-}
----
-
-The implementation is pretty straightforward: you just need to define your endpoints using the JAX-RS annotations and use the `FruitService` to list/add new fruits.
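To exercise these endpoints, you could write a quick integration test with REST Assured (the JSON body and assertions below are illustrative, not part of the quickstart; the test assumes a running Quarkus test instance with a reachable MongoDB, e.g. via Dev Services):

```java
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.hasItem;

@QuarkusTest
public class FruitResourceTest {

    @Test
    public void testAddThenList() {
        given()
            .contentType("application/json")
            .body("{\"name\": \"apple\", \"description\": \"red and crunchy\"}")
        .when()
            .post("/fruits")
        .then()
            .statusCode(200)
            // the POST endpoint returns the updated list, so the new fruit should appear
            .body("name", hasItem("apple"));
    }
}
```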
-
-== Configuring the MongoDB database
-The main property to configure is the URL to access MongoDB. Almost all of the configuration can be included in the connection URI, so we advise you to do so. You can find more information in the MongoDB documentation: https://docs.mongodb.com/manual/reference/connection-string/
-
-A sample configuration should look like this:
-
-[source,properties]
----
-# configure the mongoDB client for a replica set of two nodes
-quarkus.mongodb.connection-string = mongodb://mongo1:27017,mongo2:27017
----
-
-In this example, we are using a single instance running on localhost:
-
-[source,properties]
----
-# configure the mongoDB client for a single instance on localhost
-quarkus.mongodb.connection-string = mongodb://localhost:27017
----
-
-If you need more configuration properties, there is a full list at the end of this guide.
-
-WARNING: By default Quarkus will restrict the use of JNDI within an application, as a precaution to try and mitigate any future vulnerabilities similar to Log4Shell.
-Because the `mongodb+srv` protocol often used to connect to MongoDB requires JNDI, this protection is automatically disabled when using the MongoDB client extension.
-
-[[dev-services]]
-=== Dev Services (Configuration Free Databases)
-
-Quarkus supports a feature called Dev Services that allows you to create various datasources without any config. In the case of MongoDB this support extends to the default MongoDB connection.
-What that means practically is that if you have not configured `quarkus.mongodb.connection-string`, Quarkus will automatically start a MongoDB container when running tests or dev mode,
-and automatically configure the connection.
-
-When running the production version of the application, the MongoDB connection needs to be configured as normal, so if you want to include a production database config in your
-`application.properties` and continue to use Dev Services, we recommend that you use the `%prod.` profile to define your MongoDB settings.
-
-include::{generated-dir}/config/quarkus-mongodb-config-group-dev-services-build-time-config.adoc[opts=optional, leveloffset=+1]
-
-== Multiple MongoDB Clients
-
-MongoDB allows you to configure multiple clients.
-Using several clients works the same way as having a single client.
-
-[source,properties]
----
-quarkus.mongodb.connection-string = mongodb://login:pass@mongo1:27017/database
-
-quarkus.mongodb.users.connection-string = mongodb://mongo2:27017/userdb
-quarkus.mongodb.inventory.connection-string = mongodb://mongo3:27017/invdb,mongo4:27017/invdb
----
-
-Notice there's an extra bit in the key (the `users` and `inventory` segments).
-The syntax is as follows: `quarkus.mongodb.[optional name.][mongo connection property]`.
-If the name is omitted, it configures the default client.
-
-[NOTE]
-====
-The use of multiple MongoDB clients enables multi-tenancy for MongoDB by allowing connections to multiple MongoDB clusters. +
-If you want to connect to multiple databases inside the same cluster,
-multiple clients are **not** necessary as a single client is able to access all databases in the same cluster
-(like a JDBC connection is able to access multiple schemas inside the same database).
-====
-
-=== Named Mongo client Injection
-
-When using multiple clients, you can select the client to inject using the `io.quarkus.mongodb.MongoClientName` qualifier.
-Using the above properties to configure three different clients, you can inject each one as follows:
-
-[source,java,indent=0]
----
-@Inject
-MongoClient defaultMongoClient;
-
-@Inject
-@MongoClientName("users")
-MongoClient mongoClient1;
-
-@Inject
-@MongoClientName("inventory")
-ReactiveMongoClient mongoClient2;
----
-
-== Running a MongoDB Database
-By default, `MongoClient` is configured to access a local MongoDB database on port 27017 (the default MongoDB port). If you have a database running locally on this port, there is nothing more to do before being able to test it!
-
-If you want to use Docker to run a MongoDB database, you can use the following command to launch one:
-[source,bash]
----
-docker run -ti --rm -p 27017:27017 mongo:4.0
----
-
-[NOTE]
-====
-If you use <<dev-services>>, launching the container manually is not necessary!
-====
-
-
-== Creating a frontend
-
-Now let's add a simple web page to interact with our `FruitResource`.
-Quarkus automatically serves static resources located under the `META-INF/resources` directory.
-In the `src/main/resources/META-INF/resources` directory, add a `fruits.html` file with the content from this {quickstarts-blob-url}/mongodb-quickstart/src/main/resources/META-INF/resources/fruits.html[fruits.html] file in it.
-
-You can now interact with your REST service:
-
-:devtools-wrapped:
- * start Quarkus with:
-+
-include::includes/devtools/dev.adoc[]
- * open a browser to `http://localhost:8080/fruits.html`
- * add new fruits to the list via the form
-:!devtools-wrapped:
-
-[[reactive]]
-== Reactive MongoDB Client
-A reactive MongoDB Client is included in Quarkus.
-Using it is as easy as using the classic MongoDB Client.
-You can rewrite the previous example to use it like the following.
-
-[NOTE]
-.Deprecation
-====
-The `io.quarkus.mongodb.ReactiveMongoClient` client is deprecated and will be removed in the future.
-It is recommended to switch to the `io.quarkus.mongodb.reactive.ReactiveMongoClient` client providing the `Mutiny` API.
-====
-
-[TIP]
-.Mutiny
-====
-The MongoDB reactive client uses Mutiny reactive types.
-If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
-====
-
-[source,java]
----
-package org.acme.mongodb;
-
-import io.quarkus.mongodb.reactive.ReactiveMongoClient;
-import io.quarkus.mongodb.reactive.ReactiveMongoCollection;
-import io.smallrye.mutiny.Uni;
-import org.bson.Document;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.inject.Inject;
-import java.util.List;
-
-@ApplicationScoped
-public class ReactiveFruitService {
-
-    @Inject
-    ReactiveMongoClient mongoClient;
-
-    public Uni<List<Fruit>> list() {
-        return getCollection().find()
-                .map(doc -> {
-                    Fruit fruit = new Fruit();
-                    fruit.setName(doc.getString("name"));
-                    fruit.setDescription(doc.getString("description"));
-                    return fruit;
-                }).collect().asList();
-    }
-
-    public Uni<Void> add(Fruit fruit) {
-        Document document = new Document()
-                .append("name", fruit.getName())
-                .append("description", fruit.getDescription());
-        return getCollection().insertOne(document)
-                .onItem().ignore().andContinueWithNull();
-    }
-
-    private ReactiveMongoCollection<Document> getCollection() {
-        return mongoClient.getDatabase("fruit").getCollection("fruit");
-    }
-}
----
-
-
-[source,java]
----
-package org.acme.mongodb;
-
-import io.smallrye.mutiny.Uni;
-
-import java.util.List;
-
-import javax.inject.Inject;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.Consumes;
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.core.MediaType;
-
-@Path("/reactive_fruits")
-@Produces(MediaType.APPLICATION_JSON)
-@Consumes(MediaType.APPLICATION_JSON)
-public class ReactiveFruitResource {
-
-    @Inject
-    ReactiveFruitService fruitService;
-
-    @GET
-    public Uni<List<Fruit>> list() {
-        return fruitService.list();
-    }
-
-    @POST
-    public Uni<List<Fruit>>
add(Fruit fruit) {
-        return fruitService.add(fruit)
-                .onItem().ignore().andSwitchTo(this::list);
-    }
-}
----
-
-== Simplifying MongoDB Client usage using BSON codec
-
-By using a Bson `Codec`, the MongoDB Client will take care of the transformation of your domain object to/from a MongoDB `Document` automatically.
-
-First you need to create a Bson `Codec` that will tell Bson how to transform your entity to/from a MongoDB `Document`.
-Here we use a `CollectibleCodec` as our object is retrievable from the database (it has a MongoDB identifier); if not, we would have used a `Codec` instead.
-More information can be found in the codec documentation: https://mongodb.github.io/mongo-java-driver/3.10/bson/codecs.
-
-[source,java]
----
-package org.acme.mongodb.codec;
-
-import com.mongodb.MongoClientSettings;
-import org.acme.mongodb.Fruit;
-import org.bson.Document;
-import org.bson.BsonWriter;
-import org.bson.BsonValue;
-import org.bson.BsonReader;
-import org.bson.BsonString;
-import org.bson.codecs.Codec;
-import org.bson.codecs.CollectibleCodec;
-import org.bson.codecs.DecoderContext;
-import org.bson.codecs.EncoderContext;
-
-import java.util.UUID;
-
-public class FruitCodec implements CollectibleCodec<Fruit> {
-
-    private final Codec<Document> documentCodec;
-
-    public FruitCodec() {
-        this.documentCodec = MongoClientSettings.getDefaultCodecRegistry().get(Document.class);
-    }
-
-    @Override
-    public void encode(BsonWriter writer, Fruit fruit, EncoderContext encoderContext) {
-        Document doc = new Document();
-        doc.put("name", fruit.getName());
-        doc.put("description", fruit.getDescription());
-        documentCodec.encode(writer, doc, encoderContext);
-    }
-
-    @Override
-    public Class<Fruit> getEncoderClass() {
-        return Fruit.class;
-    }
-
-    @Override
-    public Fruit generateIdIfAbsentFromDocument(Fruit document) {
-        if (!documentHasId(document)) {
-            document.setId(UUID.randomUUID().toString());
-        }
-        return document;
-    }
-
-    @Override
-    public boolean documentHasId(Fruit document) {
-        return document.getId()
!= null;
-    }
-
-    @Override
-    public BsonValue getDocumentId(Fruit document) {
-        return new BsonString(document.getId());
-    }
-
-    @Override
-    public Fruit decode(BsonReader reader, DecoderContext decoderContext) {
-        Document document = documentCodec.decode(reader, decoderContext);
-        Fruit fruit = new Fruit();
-        if (document.getString("id") != null) {
-            fruit.setId(document.getString("id"));
-        }
-        fruit.setName(document.getString("name"));
-        fruit.setDescription(document.getString("description"));
-        return fruit;
-    }
-}
----
-
-
-Then you need to create a `CodecProvider` to link this `Codec` to the `Fruit` class.
-
-[source,java]
----
-package org.acme.mongodb.codec;
-
-import org.acme.mongodb.Fruit;
-import org.bson.codecs.Codec;
-import org.bson.codecs.configuration.CodecProvider;
-import org.bson.codecs.configuration.CodecRegistry;
-
-public class FruitCodecProvider implements CodecProvider {
-    @Override
-    public <T> Codec<T> get(Class<T> clazz, CodecRegistry registry) {
-        if (clazz.equals(Fruit.class)) {
-            return (Codec<T>) new FruitCodec();
-        }
-        return null;
-    }
-
-}
----
-
-Quarkus takes care of registering the `CodecProvider` for you as a CDI bean of `@Singleton` scope.
-
-Finally, when getting the `MongoCollection` from the database, you can use the `Fruit` class directly instead of the `Document` one; the codec will automatically map the `Document` to/from your `Fruit` class.
-
-Here is an example of using a `MongoCollection` with the `FruitCodec`.

[source,java]
----
package org.acme.mongodb;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import java.util.ArrayList;
import java.util.List;

@ApplicationScoped
public class CodecFruitService {

    @Inject MongoClient mongoClient;

    public List<Fruit> list() {
        List<Fruit> list = new ArrayList<>();
        MongoCursor<Fruit> cursor = getCollection().find().iterator();

        try {
            while (cursor.hasNext()) {
                list.add(cursor.next());
            }
        } finally {
            cursor.close();
        }
        return list;
    }

    public void add(Fruit fruit) {
        getCollection().insertOne(fruit);
    }

    private MongoCollection<Fruit> getCollection() {
        return mongoClient.getDatabase("fruit").getCollection("fruit", Fruit.class);
    }
}
----

== The POJO Codec

The link:http://mongodb.github.io/mongo-java-driver/3.12/bson/pojos[POJO Codec] provides a set of annotations that enable the customization of the way a POJO is mapped to a MongoDB collection; this codec is initialized automatically by Quarkus.

One of these annotations is `@BsonDiscriminator`, which allows storing multiple Java types in a single MongoDB collection by adding a discriminator field inside the document. It can be useful when working with abstract types or interfaces.

Quarkus will automatically register all the classes annotated with `@BsonDiscriminator` with the POJO codec.

The POJO Codec has enhanced generic support via `PropertyCodecProvider`; Quarkus will automatically register any `PropertyCodecProvider` with the POJO Codec (these classes are automatically made CDI beans of `@Singleton` scope).
When building native executables and using generic types, you might need to register the type arguments for reflection.
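To illustrate `@BsonDiscriminator`, here is a minimal sketch (not taken from this guide; the `Shape` and `Circle` class names are hypothetical) of an abstract type and a concrete subtype stored in the same collection:

```java
import org.bson.codecs.pojo.annotations.BsonDiscriminator;

// The discriminator field records which concrete type each document
// represents, so the POJO codec can decode it back to the right class.
@BsonDiscriminator
public abstract class Shape {
    public String name;
}

@BsonDiscriminator("Circle") // stored as the discriminator value
class Circle extends Shape {
    public double radius;
}
```

As described above, Quarkus detects the `@BsonDiscriminator` annotation at build time and registers these classes with the POJO codec automatically; no manual registration is needed.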
== Simplifying MongoDB with Panache

The xref:mongodb-panache.adoc[MongoDB with Panache] extension facilitates the usage of MongoDB by providing active record style entities (and repositories) like you have in xref:hibernate-orm-panache.adoc[Hibernate ORM with Panache], and focuses on making your entities trivial and fun to write in Quarkus.

== Connection Health Check

If you are using the `quarkus-smallrye-health` extension, `quarkus-mongodb-client` will automatically add a readiness health check to validate the connection to the cluster.

So when you access the `/q/health/ready` endpoint of your application, you will have information about the connection validation status.

This behavior can be disabled by setting the `quarkus.mongodb.health.enabled` property to `false` in your `application.properties`.

== Metrics

If you are using the `quarkus-micrometer` or `quarkus-smallrye-metrics` extension, `quarkus-mongodb-client` can provide metrics about the connection pools.
This behavior must first be enabled by setting the `quarkus.mongodb.metrics.enabled` property to `true` in your `application.properties`.

So when you access the `/q/metrics` endpoint of your application, you will have information about the connection pool status.
When using xref:smallrye-metrics.adoc[SmallRye Metrics], connection pool metrics will be available under the `vendor` scope.

== Tracing

If you are using the `quarkus-smallrye-opentracing` extension, `quarkus-mongodb-client` can register traces about the commands executed.
This behavior must be enabled by setting the `quarkus.mongodb.tracing.enabled` property to `true` in your `application.properties` and adding the dependency `io.opentracing.contrib:opentracing-mongo-common` to your pom.xml (for more info read the xref:opentracing.adoc#mongodb-client[OpenTracing - MongoDB client] section).

Read the xref:opentracing.adoc[OpenTracing] guide for how to configure OpenTracing and how to use the Jaeger tracer.
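Putting the observability settings from the sections above together, a minimal `application.properties` sketch could look like this (each line only takes effect when the corresponding extension, and for tracing the extra dependency, is present):

```properties
# readiness health check (enabled by default, shown here for completeness)
quarkus.mongodb.health.enabled=true
# connection pool metrics (disabled by default)
quarkus.mongodb.metrics.enabled=true
# command tracing (disabled by default, also needs opentracing-mongo-common)
quarkus.mongodb.tracing.enabled=true
```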
== Testing helpers

To start a MongoDB database for your unit tests, Quarkus provides two `QuarkusTestResourceLifecycleManager` implementations that rely on link:https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo[Flapdoodle embedded MongoDB]:

- `io.quarkus.test.mongodb.MongoTestResource` will start a single instance on port 27017.
- `io.quarkus.test.mongodb.MongoReplicaSetTestResource` will start a replica set with two instances, one on port 27017 and the other on port 27018.

To use them, you need to add the `io.quarkus:quarkus-test-mongodb` dependency to your pom.xml.

For more information about the usage of a `QuarkusTestResourceLifecycleManager`, please read xref:getting-started-testing.adoc#quarkus-test-resource[Quarkus test resource].

== The legacy client

We don't include the legacy MongoDB client by default. It contains the now retired MongoDB Java API (`DB`, `DBCollection`, ...) and the `com.mongodb.MongoClient` that is now superseded by `com.mongodb.client.MongoClient`.

If you want to use the legacy API, you need to add the following dependency to your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver-legacy</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("org.mongodb:mongodb-driver-legacy")
----

== Building a native executable

You can use the MongoDB client in a native executable.

If you want to use SSL/TLS encryption, you need to add these properties in your `application.properties`:

[source,properties]
----
quarkus.mongodb.tls=true
quarkus.mongodb.tls-insecure=true # only if TLS certificate cannot be validated
----

You can then build a native executable with the usual command:

include::includes/devtools/build-native.adoc[]

Running it is as simple as executing `./target/mongodb-quickstart-1.0.0-SNAPSHOT-runner`.
You can then point your browser to `http://localhost:8080/fruits.html` and use your application.

[WARNING]
====
Currently, Quarkus doesn't support link:https://docs.mongodb.com/manual/core/security-client-side-encryption/[Client-Side Field Level Encryption] in native mode.
====

[TIP]
====
If you encounter the following error when running your application in native mode: +
`Failed to encode 'MyObject'. Encoding 'myVariable' errored with: Can't find a codec for class org.acme.MyVariable.` +
This means that the `org.acme.MyVariable` class is not known to GraalVM; the remedy is to add the `@RegisterForReflection` annotation to your `MyVariable` class.
More details about the `@RegisterForReflection` annotation can be found on the xref:writing-native-applications-tips.adoc#registerForReflection[native application tips] page.
====

== Using mongo+srv:// URLs

`mongo+srv://` URLs are supported out of the box in JVM mode.
However, the default DNS resolver provided by the MongoDB client uses JNDI, which does not work in native mode.

If you need to use `mongo+srv://` in native mode, you can configure an alternative DNS resolver.
This feature is **experimental** and may introduce a difference between JVM applications and native applications.

To enable the alternative DNS resolver, use:

[source, properties]
----
quarkus.mongodb.native.dns.use-vertx-dns-resolver=true
----

As indicated in the property name, it uses Vert.x to retrieve the DNS records.
By default, it tries to read the first `nameserver` from `/etc/resolv.conf`, if this file exists.
You can also configure your DNS server:

[source,properties]
----
quarkus.mongodb.native.dns.use-vertx-dns-resolver=true
quarkus.mongodb.native.dns.server-host=10.0.0.1
quarkus.mongodb.native.dns.server-port=53 # 53 is the default port
----

Also, you can configure the lookup timeout using:

[source,properties]
----
quarkus.mongodb.native.dns.use-vertx-dns-resolver=true
quarkus.mongodb.native.dns.lookup-timeout=10s # the default is 5s
----

== Configuration Reference

include::{generated-dir}/config/quarkus-mongodb.adoc[opts=optional, leveloffset=+1]

diff --git a/_versions/2.7/guides/mutiny-primer.adoc b/_versions/2.7/guides/mutiny-primer.adoc
deleted file mode 100644
index 95428db3488..00000000000
--- a/_versions/2.7/guides/mutiny-primer.adoc
+++ /dev/null
@@ -1,340 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Mutiny - Async for mere mortals

include::./attributes.adoc[]

https://smallrye.io/smallrye-mutiny[Mutiny] is an intuitive, reactive programming library.
It is the primary model to write reactive applications with Quarkus.

== An event-driven reactive programming API

Mutiny is very different from the other reactive programming libraries.
It takes a different approach to designing your program.
With Mutiny everything is event-driven: you receive events, and you react to them.
This event-driven aspect embraces the asynchronous nature of distributed systems and provides an elegant and precise way to express continuation.

Mutiny offers two types that are both event-driven and lazy:

* A `Uni` emits a single event (an item or a failure).
Unis are convenient to represent asynchronous actions that return 0 or 1 result.
A good example is the result of sending a message to a message broker queue.
* A `Multi` emits multiple events (n items, 1 failure or 1 completion).
Multis can represent streams of items, potentially unbounded.
A good example is receiving messages from a message broker queue.

These two types allow representing any type of interaction.
They are event sources.
You observe them (_subscription_) and you get notified when they emit an item, a failure, or, in the case of a bounded Multi, a completion event.
When you (the subscriber) receive the event, you can process it (e.g., transform it, filter it).
With Mutiny, you are going to write code like _onX().action()_, which reads as "on item X do action".

If you want to know more about Mutiny, and the concepts behind it, check https://smallrye.io/smallrye-mutiny/pages/philosophy[the Mutiny Philosophy].

== Mutiny in Quarkus

Mutiny is the primary API when dealing with the reactive features from Quarkus.
It means that most extensions support Mutiny, either by exposing an API returning Unis and Multis (such as reactive data sources or REST clients) or by understanding when your methods return a Uni or a Multi (such as RESTEasy Reactive or Reactive Messaging).

These integrations make Mutiny a prominent and cohesive model for every reactive application developed with Quarkus.
In addition, Mutiny's architecture allows fine-grained dead-code elimination, which improves memory usage when compiled to native (such as with Quarkus native mode or the GraalVM native image compiler).

== Why another reactive programming API?

Seasoned reactive developers may wonder why Quarkus introduced yet another reactive programming API while there are existing ones.
Mutiny is taking a very different angle:

**Event-Driven** -
Mutiny places events at the core of its design.
With Mutiny, you observe events, react to them, and create elegant and readable processing pipelines.
A Ph.D. in functional programming is not required.

**Navigable** - Even with intelligent code completion, classes with hundreds of methods are confusing.
Mutiny provides a navigable and explicit API driving you towards the operator you need.

**Non-Blocking I/O** - Mutiny is the perfect companion to tame the asynchronous nature of applications with non-blocking I/O (which powers xref:quarkus-reactive-architecture.adoc[Quarkus]).
Declaratively compose operations, transform data, enforce progress, recover from failures, and more.

**Made for an asynchronous world** - Mutiny can be used in any asynchronous application such as event-driven microservices, message-based applications, network utilities, data stream processing, and of course... reactive applications!

**Reactive Streams and Converters Built-In** - Mutiny is based on the https://www.reactive-streams.org/[Reactive Streams] specification, and so it can be integrated with any other reactive programming library.
In addition, it proposes converters to interact with other popular libraries.

== Mutiny and the I/O Threads

Quarkus is powered by a xref:quarkus-reactive-architecture.adoc#engine[reactive engine], and when developing a reactive application, your code is executed on one of the few I/O threads.
Remember, you must never block these threads, and the model would collapse if you do.
So, you can't use blocking I/O.
Instead, you need to schedule the I/O operation and pass a continuation.

image::reactive-thread.png[alt=Reactive Execution Model and I/O Threads,width=50%, align=center]

The Mutiny event-driven paradigm is tailored for this.
When the I/O operation completes successfully, the Uni that represents it emits an item event.
When it fails, it emits a failure event.
The continuation is simply and naturally expressed using the event-driven API.

== Mutiny through Examples

Many Quarkus extensions expose Mutiny APIs. In this section, we use the MongoDB extension to illustrate how to use Mutiny.
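Before diving into the MongoDB example, the schedule-then-continue idea can be sketched with plain JDK types. This is only an analogy using `CompletableFuture`, not the Mutiny API; the class and method names are illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class ContinuationSketch {

    // Simulates a non-blocking operation: the result is produced later,
    // on another thread, instead of blocking the caller.
    static CompletableFuture<String> fetchSymbol(String elementName) {
        return CompletableFuture.supplyAsync(() -> "H");
    }

    // Attach a continuation instead of waiting: the same idea as
    // Mutiny's uni.onItem().transform(...) followed by a subscription.
    static String findAndDescribe() {
        return fetchSymbol("hydrogen")
                .thenApply(symbol -> "symbol=" + symbol) // ~ onItem().transform(...)
                .join(); // join() only to keep this demo synchronous
    }

    public static void main(String[] args) {
        System.out.println(findAndDescribe()); // prints "symbol=H"
    }
}
```

The key difference with Mutiny is laziness: a `CompletableFuture` is already running, whereas a Uni does nothing until it is subscribed to, as the sections below emphasize.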
Let's imagine a simple structure representing an element from the https://en.wikipedia.org/wiki/Periodic_table[periodic table]:

[source, java]
----
public class Element {

    public String name;
    public String symbol;
    public int position;

    public Element(String name, String symbol, int position) {
        this.name = name;
        this.symbol = symbol;
        this.position = position;
    }

    public Element() {
        // Used by JSON mappers
    }
}
----

This structure contains the name, symbol, and position of the element.
To retrieve and store elements into a Mongo collection, you can use the following code:

[source, java]
----
@ApplicationScoped
public class ElementService {

    final ReactiveMongoCollection<Element> collection;

    @Inject
    ElementService(ReactiveMongoClient client) {
        collection = client.getDatabase("quarkus")
                .getCollection("elements", Element.class);
    }

    public void add(Element element) {
        Uni<InsertOneResult> insertion = collection.insertOne(element);
        insertion
            .onItem().transform(r -> r.getInsertedId().asString())
            .subscribe().with(
                result -> System.out.println("inserted " + result),
                failure -> System.out.println("D'oh! " + failure));
    }

    public void getAll() {
        collection.find()
            .subscribe().with(
                element -> System.out.println("Element: " + element),
                failure -> System.out.println("D'oh! " + failure),
                () -> System.out.println("No more elements")
            );
    }

}
----

First, the Mongo client is injected.
Note that it uses the reactive variant (`io.quarkus.mongodb.reactive.ReactiveMongoClient`).
In the constructor, we retrieve and store the collection in which elements will be inserted.

The `add` method inserts an element in the collection.
It receives the element as a parameter and uses the reactive API of the collection.
Interacting with the database involves I/O.
The reactive principles forbid blocking while waiting for the interaction to complete.
Instead, we schedule the operation and pass a continuation.
The `insertOne` method returns a Uni, i.e., an asynchronous operation.
That's the scheduled I/O. We now need to express the continuation, which is done using the `.onItem()` method.
`.onItem()` allows configuring what needs to happen when the observed Uni emits an item, in our case when the scheduled I/O completes.
In this example, we extract the inserted document id.
The final step is the subscription.
Without it, nothing would ever happen. Subscribing triggers the operation.
The subscription method can also define handlers: one receiving the `id` value on success, and one receiving the failure when the insertion fails.

Let's now look at the second method.
It retrieves all the stored elements.
In this case, it returns multiple items (one per stored element), so we are using a `Multi`.
As for the insertion, getting the stored elements involves I/O.
The `find` method is our operation.
As for a Uni, you need to subscribe to trigger the operation.
The subscriber receives item events, a failure event, or a completion event when all the elements have been received.

Subscribing to a Uni or a Multi is essential, as without it, the operation is never executed.
In Quarkus some extensions deal with the subscription for you.
For example, in RESTEasy Reactive your HTTP methods can return a Uni or a Multi, and RESTEasy Reactive handles the subscription.

== Mutiny Patterns

The example from the last section was simplistic on purpose.
Let's have a look at a few common patterns.

=== Observing events

You can observe the various kinds of events using:

`on{event}().invoke(ev -> System.out.println(ev));`

For example, for items use:
`onItem().invoke(item -> ...);`

For failure, use:
`onFailure().invoke(failure -> ...)`

The `invoke` method is synchronous.
Sometimes you need to execute an asynchronous action.
In this case, use `call`, as in: `onItem().call(item -> someAsyncAction(item))`.
Note that `call` does not change the item; it just calls an asynchronous action, and when this one completes, it emits the original item downstream.

=== Transforming items

The first instrumental pattern consists of transforming the item events you receive.
As we have seen in the previous section, it could indicate the successful insertion, or the elements stored in the database.

Transforming an item is done using: `onItem().transform(item -> ....)`.

More details about transformation can be found in the https://smallrye.io/smallrye-mutiny/getting-started/transforming-items[Mutiny documentation].

=== Sequential composition

Sequential composition allows chaining dependent asynchronous operations. This is achieved using `onItem().transformToUni(item -> ...)`.
Unlike `transform`, the function passed to `transformToUni` returns a Uni.

[source, java]
----
Uni<String> uni1 = …
uni1
    .onItem().transformToUni(item -> anotherAsynchronousAction(item));
----

More details about asynchronous transformation can be found in the https://smallrye.io/smallrye-mutiny/getting-started/transforming-items-async[Mutiny documentation].

=== Failure handling

So far we have only handled the item events, but handling failures is essential. You can handle failures using `onFailure()`.

For example, you can recover with a fallback item using `onFailure().recoverWithItem(fallback)`:

[source, java]
----
Uni<String> uni1 = …
uni1
    .onFailure().recoverWithItem("my fallback value");
----

You can also retry the operation, such as in:

[source, java]
----
Uni<String> uni1 = …
uni1
    .onFailure().retry().atMost(5);
----

More info about failure recovery can be found on https://smallrye.io/smallrye-mutiny/getting-started/handling-failures[the handling failure documentation] and https://smallrye.io/smallrye-mutiny/getting-started/retry[the retry documentation].

== Events and Actions

The following tables list the events that you can receive for Uni and Multi.
Each of them is associated with its own group (onX). The second table lists the classic actions you can perform upon an event. Note that some groups offer more possibilities.

|===
| |Events from the upstream |Events from the downstream

|Uni
|Subscription (1), Item (0..1), failure (0..1)
|Cancellation

|Multi
|Subscription (1), Item (0..n), failure (0..1), completion (0..1)
|Cancellation, Request
|===

Check the full list of events on https://smallrye.io/smallrye-mutiny/getting-started/observing-events[the event documentation].

|===
| Action |API |Comment

|transform | `onItem().transform(Function<I, O> function);` | Transform the event into another event using a synchronous function.
The downstream receives the result of the function (or a failure if the transformation failed).
|transformToUni | `onItem().transformToUni(Function<I, Uni<O>> function);` | Transform the event into another event using an asynchronous function. The downstream receives the item emitted by the produced Uni (or a failure if the transformation failed). If the produced Uni emits a failure, that failure is passed to the downstream.
|invoke | `onItem().invoke(Consumer<I> consumer)` | Invokes the synchronous consumer. This is particularly convenient to execute side-effect actions. The downstream receives the original event, or a failure if the consumer throws an exception.
| call | `onItem().call(Function<I, Uni<T>>)` | Invokes the asynchronous function. This is particularly convenient to execute asynchronous side-effect actions. The downstream receives the original event, or a failure if the function throws an exception or if the produced Uni emits a failure.
| fail | `onItem().failWith(Function<I, Throwable>)` | Emits a failure when it receives the event.
| complete (Multi only) | `onItem().complete()` | Emits the completion event when it receives the event.
|===

=== Other patterns

Mutiny provides lots of other features.
Head over to the https://smallrye.io/smallrye-mutiny[Mutiny documentation] to see many more examples, including the whole list of events and how to handle them.

Some frequently asked questions are addressed by the following guides:

1. merge vs. concatenation - https://smallrye.io/smallrye-mutiny/guides/merge-concat
2. controlling the emission thread - https://smallrye.io/smallrye-mutiny/guides/emit-subscription
3. explicit blocking - https://smallrye.io/smallrye-mutiny/guides/imperative-to-reactive

== Shortcuts

When using Uni, having to write `onItem()` can be cumbersome.
Fortunately, Mutiny provides a set of shortcuts to make your code more concise:

|===
|Shortcut |Equivalent

| uni.map(x → y) | uni.onItem().transform(x → y)
| uni.flatMap(x → uni2) | uni.onItem().transformToUni(x → uni2)
| uni.chain(x → uni2) | uni.onItem().transformToUni(x → uni2)
| uni.then(() → uni2) | uni.onItem().transformToUni(ignored → uni2)
| uni.invoke(x → System.out.println(x)) | uni.onItem().invoke(x → System.out.println(x))
| uni.call(x → uni2) | uni.onItem().call(x → uni2)
| uni.eventually(() → System.out.println("eventually")) | uni.onItemOrFailure().invoke((ignoredItem, ignoredException) → System.out.println("eventually"))
| uni.eventually(() → uni2) | uni.onItemOrFailure().call((ignoredItem, ignoredException) → uni2)
|===

== Reactive Streams

Mutiny uses https://www.reactive-streams.org/[Reactive Streams].
`Multi` implements `Publisher` and enforces the back-pressure protocol.
Emissions are constrained by the requests emitted by the downstream subscribers.
Thus, it does not overload the subscribers.
Note that in some cases, you can't follow this protocol (because the Multi emits events that can't be controlled, such as clock ticks or sensor measurements).
In this case, Mutiny provides a way to control the overflow using `onOverflow()`.

`Uni` does not implement Reactive Streams `Publisher`.
A `Uni` can only emit one event, so subscribing to the `Uni` is enough to express your intent to use the result and does not need the request protocol ceremony.

== Mutiny and Vert.x

Vert.x is a toolkit to build reactive applications and systems.
It provides a huge ecosystem of libraries following the reactive principles (i.e., non-blocking and asynchronous).
Vert.x is an essential part of Quarkus, as it provides its reactive capabilities.

In addition, the whole Vert.x API can be used with Quarkus.
To provide a cohesive experience, the Vert.x API is also available using a Mutiny variant, i.e., returning Uni and Multi.

More details about this API can be found on: https://quarkus.io/blog/mutiny-vertx/.

== Mutiny Integration in Quarkus

The integration of Mutiny in Quarkus goes beyond just the library.
Mutiny exposes hooks that allow Quarkus and Mutiny to be closely integrated:

* Calling `await` or `toIterable` fails if you are running on an I/O thread, preventing blocking the I/O thread;
* The `log()` operator uses the Quarkus logger;
* The default Mutiny thread pool is the Quarkus worker thread pool;
* Context Propagation is enabled by default when using Mutiny Uni and Multi.

More details about the infrastructure integration can be found on https://smallrye.io/smallrye-mutiny/guides/infrastructure.

diff --git a/_versions/2.7/guides/native-and-ssl.adoc b/_versions/2.7/guides/native-and-ssl.adoc
deleted file mode 100644
index 8410b64e85e..00000000000
--- a/_versions/2.7/guides/native-and-ssl.adoc
+++ /dev/null
@@ -1,264 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Using SSL With Native Executables

include::./attributes.adoc[]
:devtools-no-gradle:

We are quickly moving to an SSL-everywhere world, so being able to use SSL is crucial.
In this guide, we will discuss how you can get your native executables to support SSL, as native executables don't support it out of the box.

NOTE: If you don't plan on using native executables, you can skip this guide: in JVM mode, SSL is supported without further manipulation.

== Prerequisites

To complete this guide, you need:

* less than 20 minutes
* an IDE
* GraalVM (Java 11) installed with `JAVA_HOME` and `GRAALVM_HOME` configured appropriately
* Apache Maven {maven-version}

This guide is based on the REST client guide, so you should get this Maven project first.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The project is located in the `rest-client-quickstart` {quickstarts-tree-url}/rest-client-quickstart[directory].

== Looks like it works out of the box?!?

If you open the application's configuration file (`src/main/resources/application.properties`), you can see the following line:

[source,properties]
----
quarkus.rest-client."org.acme.rest.client.ExtensionsService".url=https://stage.code.quarkus.io/api
----

which configures our REST client to connect to an SSL REST service.

For the purposes of this guide, we also need to remove the configuration that starts the embedded WireMock server that stubs REST client responses, so the tests actually propagate calls to https://stage.code.quarkus.io/api. Update the test file `src/test/java/org/acme/rest/client/ExtensionsResourceTest.java` and remove the line:

[source,java]
----
@QuarkusTestResource(WireMockExtensions.class)
----

from the `ExtensionsResourceTest` class.
Now let's build the application as a native executable and run the tests:

include::includes/devtools/build-native.adoc[]

And we obtain the following result:

[source]
----
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
----

So, yes, it appears it works out of the box and this guide is pretty useless.

It's not. The magic happens when building the native executable:

[source]
----
[INFO] [io.quarkus.creator.phase.nativeimage.NativeImagePhase] /opt/graalvm/bin/native-image -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dcom.sun.xml.internal.bind.v2.bytecode.ClassTailor.noOptimize=true -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -jar rest-client-1.0.0-SNAPSHOT-runner.jar -J-Djava.util.concurrent.ForkJoinPool.common.parallelism=1 -H:+PrintAnalysisCallTree -H:EnableURLProtocols=http,https -H:-SpawnIsolates -H:+JNI --no-server -H:-UseServiceLoaderFeature -H:+StackTrace
----

The important elements are these two options that were automatically added by Quarkus:

[source,bash]
----
-H:EnableURLProtocols=http,https
-H:+JNI
----

They enable the native SSL support for your native executable.
But you should not set them manually; we have a nice configuration property for this purpose, as described below.
As SSL is de facto the standard nowadays, we decided to enable its support automatically for some of our extensions:

 * the Agroal connection pooling extension (`quarkus-agroal`),
 * the Amazon Services extension (`quarkus-amazon-*`),
 * the Consul Config extension (`quarkus-config-consul`),
 * the Elasticsearch client extensions (`quarkus-elasticsearch-rest-client` and `quarkus-elasticsearch-rest-high-level-client`) and thus the Hibernate Search Elasticsearch extension (`quarkus-hibernate-search-orm-elasticsearch`),
 * the Elytron Security OAuth2 extension (`quarkus-elytron-security-oauth2`),
 * the gRPC extension (`quarkus-grpc`),
 * the Infinispan Client extension (`quarkus-infinispan-client`),
 * the Jaeger extension (`quarkus-jaeger`),
 * the JGit extension (`quarkus-jgit`),
 * the JSch extension (`quarkus-jsch`),
 * the Kafka Client extension (`quarkus-kafka-client`), if the Apicurio Registry 2.x Avro library is used,
 * the Keycloak Authorization extension (`quarkus-keycloak-authorization`),
 * the Kubernetes client extension (`quarkus-kubernetes-client`),
 * the Logging Sentry extension (`quarkus-logging-sentry`),
 * the Mailer extension (`quarkus-mailer`),
 * the MongoDB client extension (`quarkus-mongodb-client`),
 * the Neo4j extension (`quarkus-neo4j`),
 * the OIDC and OIDC client extensions (`quarkus-oidc` and `quarkus-oidc-client`),
 * the Reactive client for IBM DB2 extension (`quarkus-reactive-db2-client`),
 * the Reactive client for PostgreSQL extension (`quarkus-reactive-pg-client`),
 * the Reactive client for MySQL extension (`quarkus-reactive-mysql-client`),
 * the Reactive client for Microsoft SQL Server extension (`quarkus-reactive-mssql-client`),
 * the Redis client extension (`quarkus-redis-client`),
 * the REST Client extension (`quarkus-rest-client`),
 * the REST Client Reactive extension (`quarkus-rest-client-reactive`),
 * the Spring Cloud Config client extension (`quarkus-spring-cloud-config-client`),
 * the
Vault extension (`quarkus-vault`),
 * the Cassandra client extension (`cassandra-quarkus-client`).

As long as you have one of these extensions in your project, the SSL support will be enabled by default.

If you are not using any of them and you want to enable SSL support anyway, please add the following to your configuration:

[source,properties]
----
quarkus.ssl.native=true
----

Now, let's just check the size of our native executable, as it will be useful later:

[source,shell]
----
$ ls -lh target/rest-client-quickstart-1.0.0-SNAPSHOT-runner
-rwxrwxr-x. 1 gandrian gandrian 46M Jun 11 13:01 target/rest-client-quickstart-1.0.0-SNAPSHOT-runner
----

== Let's disable SSL and see how it goes

Quarkus has an option to disable the SSL support entirely.
Why? Because it comes at a certain cost.
So if you are sure you don't need it, you can disable it entirely.

First, let's disable it without changing the REST service URL and see how it goes.

Open `src/main/resources/application.properties` and add the following line:

[source,properties]
----
quarkus.ssl.native=false
----

And let's try to build again:

include::includes/devtools/build-native.adoc[]

The native executable tests will fail with the following error:

[source]
----
Caused by: java.net.MalformedURLException: Accessing an URL protocol that was not enabled. The URL protocol https is supported but not enabled by default. It must be enabled by adding the --enable-url-protocols=https option to the native-image command.
----

This error is the one you obtain when trying to use SSL while it was not explicitly enabled in your native executable.
Now, let's change the REST service URL to **not** use SSL in `src/main/resources/application.properties`:

[source,properties]
----
quarkus.rest-client."org.acme.rest.client.ExtensionsService".url=http://stage.code.quarkus.io/api
----

and since http://stage.code.quarkus.io/api responds with a 302 status code, we need to also skip the tests with `-DskipTests`.

Now we can build again:

:build-additional-parameters: -DskipTests
include::includes/devtools/build-native.adoc[]
:!build-additional-parameters:

If you look carefully at the native executable build options, you can see that the SSL-related options are gone:

[source]
----
[INFO] [io.quarkus.creator.phase.nativeimage.NativeImagePhase] /opt/graalvm/bin/native-image -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dcom.sun.xml.internal.bind.v2.bytecode.ClassTailor.noOptimize=true -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -jar rest-client-1.0.0-SNAPSHOT-runner.jar -J-Djava.util.concurrent.ForkJoinPool.common.parallelism=1 -H:+PrintAnalysisCallTree -H:EnableURLProtocols=http -H:-SpawnIsolates -H:+JNI --no-server -H:-UseServiceLoaderFeature -H:+StackTrace
----

And we end up with:

[source]
----
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
----

Remember we checked the size of the native executable with SSL enabled?
Let's check again with SSL support entirely disabled:

[source,shell]
----
$ ls -lh target/rest-client-quickstart-1.0.0-SNAPSHOT-runner
-rwxrwxr-x. 1 gandrian gandrian 35M Jun 11 13:06 target/rest-client-quickstart-1.0.0-SNAPSHOT-runner
----

Yes, it is now **35 MB** whereas it used to be **46 MB**. SSL comes with an 11 MB overhead in native executable size.

And there's more to it.
-
-== Let's start again with a clean slate
-
-Let's revert the changes we made to the configuration file and go back to SSL with the following command:
-
-[source,bash]
-----
-git checkout -- src/main/resources/application.properties
-----
-
-And let's build the native executable again:
-
-include::includes/devtools/build-native.adoc[]
-
-[#the-truststore-path]
-== The TrustStore path
-
-[WARNING]
-====
-This behavior is new to GraalVM 21.3+.
-====
-
-GraalVM supports both build time and runtime certificate configuration.
-
-=== Build time configuration
-
-The build time approach favors the principle of "immutable security" where the appropriate certificates are added at build time, and can never be changed afterward.
-This guarantees that the list of valid certificates cannot be tampered with when the application gets deployed in production.
-
-However, this comes with a few drawbacks:
-
- * If you use the same executable in all environments, and a certificate expires, the application needs to be rebuilt and redeployed into production with the new certificate, which is an inconvenience.
- * Even worse, if a certificate gets revoked because of a security breach, all applications that embed this certificate need to be rebuilt and redeployed in a timely manner.
- * It also requires adding all certificates for all environments (e.g. DEV, TEST, PROD) into the application, which means that a certificate required for DEV, but not meant to be used elsewhere, will nevertheless make its way into production.
- * Providing all certificates at build time complicates the CI, specifically in dynamic environments such as Kubernetes where valid certificates are provided by the platform in the `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` PEM file.
- * Lastly, this does not play well with third-party software that does not provide a dedicated build for each customer environment.
-
-Creating a native executable using build time certificates essentially means that the root certificates are fixed at image build time, based on the certificate configuration used at build time (which for Quarkus means when you perform a build having `quarkus.package.type=native` set).
-This avoids shipping a `cacerts` file or requiring a system property to be set in order to set up root certificates that are provided by the OS where the binary runs.
-
-In this situation, system properties such as `javax.net.ssl.trustStore` do not have an effect at
-run time, so when the defaults need to be changed, these system properties must be provided at image build time.
-The easiest way to do so is by setting `quarkus.native.additional-build-args`. For example:
-
-[source,bash]
-----
-quarkus.native.additional-build-args=-J-Djavax.net.ssl.trustStore=/tmp/mycerts,-J-Djavax.net.ssl.trustStorePassword=changeit
-----
-
-will ensure that the certificates of `/tmp/mycerts` are baked into the native binary and used *in addition* to the default cacerts.
-The file containing the custom TrustStore does *not* (and probably should not) have to be present at runtime, as its content has been baked into the native binary.
-
-=== Run time configuration
-
-Using the runtime certificate configuration, supported by GraalVM since 21.3, does not require any special or additional configuration compared to regular Java programs or Quarkus in JVM mode. See the https://www.graalvm.org/reference-manual/native-image/CertificateManagement/#run-time-options[GraalVM documentation] for more information.
-
-[#working-with-containers]
-=== Working with containers
-
-No special action needs to be taken when running the native binary in a container. If the native binary was properly built with the custom TrustStore
-as described in the previous section, it will work properly in a container as well.
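Whether the trust anchors were baked in at build time or picked up at run time, you can verify what the runtime actually trusts by asking the default `TrustManager` for its accepted issuers. A small sketch using only standard JDK APIs (the printed count reflects whichever store is in effect):

```java
import java.security.KeyStore;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class TrustStoreInfo {

    // Counts the CA certificates the default trust manager accepts.
    // In a native binary this reflects whatever was baked in at build time
    // (or, with GraalVM 21.3+ runtime configuration, the store selected via
    // -Djavax.net.ssl.trustStore at run time).
    static int trustedCertificateCount() throws Exception {
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null selects the default trust store
        for (TrustManager tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager) {
                return ((X509TrustManager) tm).getAcceptedIssuers().length;
            }
        }
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Trusted CA certificates: " + trustedCertificateCount());
    }
}
```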
-
-== Conclusion
-
-We make building native executables that use SSL easy, and provide several options to cope well with different types of security requirements.
diff --git a/_versions/2.7/guides/native-reference.adoc b/_versions/2.7/guides/native-reference.adoc
deleted file mode 100644
index e8d0fdb7f4d..00000000000
--- a/_versions/2.7/guides/native-reference.adoc
+++ /dev/null
@@ -1,1565 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Native Reference Guide
-
-include::./attributes.adoc[]
-
-This guide is a companion to the
-xref:building-native-image.adoc[Building a Native Executable],
-xref:native-and-ssl.adoc[Using SSL With Native Images],
-and xref:writing-native-applications-tips.adoc[Writing Native Applications]
-guides.
-It provides further details on debugging issues in Quarkus native executables that might arise during development or production.
-
-This reference guide takes as input the application developed in the xref:getting-started.adoc[Getting Started Guide].
-You can find instructions on how to quickly set up this application in this guide.
-
-== Requirements and Assumptions
-
-This guide has the following requirements:
-
-* JDK 11 installed with `JAVA_HOME` configured appropriately
-* Apache Maven {maven-version}
-* A working container runtime (Docker, podman)
-
-This guide builds and executes Quarkus native executables within a Linux environment.
-To offer a homogeneous experience across all environments,
-the guide relies on a container runtime environment to build and run the native executables.
-The instructions below use Docker as an example, but very similar commands should work on alternative container runtimes, e.g. podman.
-
-[IMPORTANT]
-====
-Building native executables is an expensive process,
-so make sure the container runtime has enough CPU and memory to do this.
-A minimum of 4 CPUs and 4GB of memory is required.
-====
-
-Finally, this guide assumes the use of the link:https://github.com/graalvm/mandrel[Mandrel distribution] of GraalVM for building native executables,
-and these are built within a container so there is no need for installing Mandrel on the host.
-
-== Bootstrapping the project
-
-Start by creating a new Quarkus project.
-Open a terminal and run the following command:
-
-For Linux & macOS users
-
-:create-app-artifact-id: debugging-native
-:create-app-extensions: resteasy,container-image-docker
-:create-app-code:
-include::includes/devtools/create-app.adoc[]
-
-For Windows users
-
-- If using cmd, don't use the backslash `\` and put everything on the same line
-- If using PowerShell, wrap `-D` parameters in double quotes, e.g. `"-DprojectArtifactId=debugging-native"`
-
-== Configure Quarkus properties
-
-Some Quarkus configuration options will be used constantly throughout this guide,
-so to help declutter command line invocations,
-it's recommended to add these options to the `application.properties` file.
-So, go ahead and add the following options to that file:
-
-[source,properties,subs=attributes+]
-----
-quarkus.native.container-build=true
-quarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-mandrel:{mandrel-flavor}
-quarkus.container-image.build=true
-quarkus.container-image.group=test
-----
-
-== First Debugging Steps
-
-As a first step, change to the project directory and build the native executable for the application:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative
-----
-
-Run the application to verify it works as expected. In one terminal:
-
-[source,bash]
-----
-docker run -i --rm -p 8080:8080 test/debugging-native:1.0.0-SNAPSHOT
-----
-
-In another:
-
-[source,bash]
-----
-curl -w '\n' http://localhost:8080/hello
-----
-
-The rest of this section explores ways to build the native executable with extra information,
-but first, stop the running application.
-We can obtain this information while building the native executable by adding additional native-image build options using `-Dquarkus.native.additional-build-args`, e.g. - -[source,bash,subs=attributes+] ----- -./mvnw package -DskipTests -Pnative \ - -Dquarkus.native.additional-build-args=--native-image-info ----- - -Executing that will produce additional output lines like this: - -[source,bash] ----- -... -# Printing compilation-target information to: /project/reports/target_info_20220223_100915.txt -… -# Printing native-library information to: /project/reports/native_library_info_20220223_100925.txt ----- - -The target info file contains information such as the target platform, -the toolchain used to compile the executable, -and the C library in use: - -[source,bash] ----- -$ cat target/*/reports/target_info_*.txt -Building image for target platform: org.graalvm.nativeimage.Platform$LINUX_AMD64 -Using native toolchain: - Name: GNU project C and C++ compiler (gcc) - Vendor: redhat - Version: 8.5.0 - Target architecture: x86_64 - Path: /usr/bin/gcc -Using CLibrary: com.oracle.svm.core.posix.linux.libc.GLib ----- - -The native library info file contains information on the static libraries added to the binary and the other libraries dynamically linked to the executable: - -[source,bash] ----- -$ cat target/*/reports/native_library_info_*.txt -Static libraries: - ../opt/mandrel/lib/svm/clibraries/linux-amd64/liblibchelper.a - ../opt/mandrel/lib/static/linux-amd64/glibc/libnet.a - ../opt/mandrel/lib/static/linux-amd64/glibc/libextnet.a - ../opt/mandrel/lib/static/linux-amd64/glibc/libnio.a - ../opt/mandrel/lib/static/linux-amd64/glibc/libjava.a - ../opt/mandrel/lib/static/linux-amd64/glibc/libfdlibm.a - ../opt/mandrel/lib/static/linux-amd64/glibc/libsunec.a - ../opt/mandrel/lib/static/linux-amd64/glibc/libzip.a - ../opt/mandrel/lib/svm/clibraries/linux-amd64/libjvm.a -Other libraries: stdc++,pthread,dl,z,rt ----- - -Even more detail can be obtained by passing in 
`--verbose` as an additional native-image build argument.
-This option can be very useful in detecting whether the options that you pass at a high level via Quarkus are being passed down to the native executable production,
-or whether some third-party jar has native-image configuration embedded in it that is reaching the native-image invocation:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.additional-build-args=--verbose
-----
-
-Running with `--verbose` demonstrates how the native-image building process is two sequential Java processes:
-
-* The first is a very short Java process that does some basic validation and builds the arguments for the second process
-(in a stock GraalVM distribution, this is executed as native code).
-* The second Java process is where the main part of the native executable production happens.
-The `--verbose` option shows the actual Java process executed.
-You could take the output and run it yourself.
-
-One may also combine multiple native build options by separating them with a comma, e.g.:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.additional-build-args=--native-image-info,--verbose
-----
-
-[TIP]
-====
-Remember that if an argument for `-Dquarkus.native.additional-build-args` includes the `,` symbol,
-it needs to be escaped to be processed correctly, e.g. `\\,`.
-====
-
-== Inspecting Native Executables
-
-Given a native executable, various Linux tools can be used to inspect it.
-To allow supporting a variety of environments,
-inspections will be done from within a Linux container.
-Let's create a Linux container image with all the tools required for this guide:
-
-[source,dockerfile]
-----
-FROM fedora:35
-
-RUN dnf install -y \
-binutils \
-gdb \
-git \
-perf \
-perl-open
-
-ENV FG_HOME /opt/FlameGraph
-
-RUN git clone https://github.com/brendangregg/FlameGraph $FG_HOME
-
-WORKDIR /data
-
-ENTRYPOINT /bin/bash
-----
-
-If you are using Docker on a non-Linux environment, you can create an image using this Dockerfile via:
-
-[source,bash]
-----
-docker build -t fedora-tools:v1 .
-----
-
-Then, go to the root of the project and run the Docker container we have just created as:
-
-[source,bash]
-----
-docker run -t -i --rm -v ${PWD}:/data -p 8080:8080 fedora-tools:v1
-----
-
-`ldd` shows the shared library dependencies of an executable:
-
-[source,bash]
-----
-ldd ./target/debugging-native-1.0.0-SNAPSHOT-runner
-----
-
-`strings` can be used to look for text messages inside the binary:
-
-[source,bash]
-----
-strings ./target/debugging-native-1.0.0-SNAPSHOT-runner | grep Hello
-----
-
-Using `strings` you can also get Mandrel information given the binary:
-
-[source,bash]
-----
-strings ./target/debugging-native-1.0.0-SNAPSHOT-runner | grep core.VM
-----
-
-Finally, using `readelf` we can inspect different sections of the binary.
-For example, we can see how the heap and text sections take up most of the binary:
-
-[source,bash]
-----
-readelf -SW ./target/debugging-native-1.0.0-SNAPSHOT-runner
-----
-
-== Native Reports
-
-Optionally, the native build process can generate reports that show what goes into the binary:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.enable-reports
-----
-
-The reports will be created under `target/debugging-native-1.0.0-SNAPSHOT-native-image-source-jar/reports/`.
-These reports are some of the most useful resources when encountering issues with missing methods/classes, or methods forbidden by Mandrel.
-
-=== Call Tree Reports
-
-The `call_tree` text file report is one of the default reports generated when the `-Dquarkus.native.enable-reports` option is passed in.
-This is useful for getting an approximation of why a method/class is included in the binary.
-However, the text format makes it very difficult to read and can take up a lot of space.
-
-Since Mandrel 21.3.0.0, the call tree is also reported as a group of CSV files.
-The CSV output can be enabled by adding `-H:PrintAnalysisCallTreeType=CSV` to the additional native arguments, e.g.:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.enable-reports \
-    -Dquarkus.native.additional-build-args=-H:PrintAnalysisCallTreeType=CSV
-----
-
-These can in turn be imported into a graph database, such as Neo4j,
-to inspect them more easily and run queries against the call tree.
-Let’s see this in action.
-
-First, start a Neo4j instance:
-
-[source,bash]
-----
-export NEO_PASS=...
-docker run \
-    --detach \
-    --rm \
-    --name testneo4j \
-    -p7474:7474 -p7687:7687 \
-    --env NEO4J_AUTH=neo4j/${NEO_PASS} \
-    neo4j:latest
-----
-
-Once the container is running,
-you can access the link:http://localhost:7474[Neo4j browser].
-Use `neo4j` as the username and the value of `NEO_PASS` as the password to log in.
- -To import the CSV files, -we need the following cypher script which will import the data within the CSV files and create graph database nodes and edges: - -[source,cypher] ----- -CREATE CONSTRAINT unique_vm_id ON (v:VM) ASSERT v.vmId IS UNIQUE; -CREATE CONSTRAINT unique_method_id ON (m:Method) ASSERT m.methodId IS UNIQUE; - -LOAD CSV WITH HEADERS FROM 'file:///reports/call_tree_vm.csv' AS row -MERGE (v:VM {vmId: row.Id, name: row.Name}) -RETURN count(v); - -LOAD CSV WITH HEADERS FROM 'file:///reports/call_tree_methods.csv' AS row -MERGE (m:Method {methodId: row.Id, name: row.Name, type: row.Type, parameters: row.Parameters, return: row.Return, display: row.Display}) -RETURN count(m); - -LOAD CSV WITH HEADERS FROM 'file:///reports/call_tree_virtual_methods.csv' AS row -MERGE (m:Method {methodId: row.Id, name: row.Name, type: row.Type, parameters: row.Parameters, return: row.Return, display: row.Display}) -RETURN count(m); - -LOAD CSV WITH HEADERS FROM 'file:///reports/call_tree_entry_points.csv' AS row -MATCH (m:Method {methodId: row.Id}) -MATCH (v:VM {vmId: '0'}) -MERGE (v)-[:ENTRY]->(m) -RETURN count(*); - -LOAD CSV WITH HEADERS FROM 'file:///reports/call_tree_direct_edges.csv' AS row -MATCH (m1:Method {methodId: row.StartId}) -MATCH (m2:Method {methodId: row.EndId}) -MERGE (m1)-[:DIRECT {bci: row.BytecodeIndexes}]->(m2) -RETURN count(*); - -LOAD CSV WITH HEADERS FROM 'file:///reports/call_tree_override_by_edges.csv' AS row -MATCH (m1:Method {methodId: row.StartId}) -MATCH (m2:Method {methodId: row.EndId}) -MERGE (m1)-[:OVERRIDEN_BY]->(m2) -RETURN count(*); - -LOAD CSV WITH HEADERS FROM 'file:///reports/call_tree_virtual_edges.csv' AS row -MATCH (m1:Method {methodId: row.StartId}) -MATCH (m2:Method {methodId: row.EndId}) -MERGE (m1)-[:VIRTUAL {bci: row.BytecodeIndexes}]->(m2) -RETURN count(*); ----- - -Copy and paste the contents of the script into a file called `import.cypher`. 
- -[WARNING] -==== -Mandrel 22.0.0 contains a bug where the symbolic links used by the import cypher file are not correctly set when generating reports within a container -(for more details see link:https://github.com/oracle/graal/issues/4355[here]). -This can be worked around by copying the following script into a file and executing it: - -[source,bash] ----- -set -e - -project="debugging-native" - -pushd target/*-native-image-source-jar/reports - -rm -f call_tree_vm.csv -ln -s call_tree_vm_${project}-* call_tree_vm.csv - -rm -f call_tree_direct_edges.csv -ln -s call_tree_direct_edges_${project}-* call_tree_direct_edges.csv - -rm -f call_tree_entry_points.csv -ln -s call_tree_entry_points_${project}-* call_tree_entry_points.csv - -rm -f call_tree_methods.csv -ln -s call_tree_methods_${project}-* call_tree_methods.csv - -rm -f call_tree_virtual_edges.csv -ln -s call_tree_virtual_edges_${project}-* call_tree_virtual_edges.csv - -rm -f call_tree_virtual_methods.csv -ln -s call_tree_virtual_methods_${project}-* call_tree_virtual_methods.csv - -rm -f call_tree_override_by_edges.csv -ln -s call_tree_override_by_edges_${project}-* call_tree_override_by_edges.csv - -popd ----- -==== - -Next, copy the import cypher script and CSV files into Neo4j's import folder: - -[source,bash] ----- -docker cp \ - target/*-native-image-source-jar/reports \ - testneo4j:/var/lib/neo4j/import - -docker cp import.cypher testneo4j:/var/lib/neo4j ----- - -After copying all the files, invoke the import script: - -[source,bash] ----- -docker exec testneo4j bin/cypher-shell -u neo4j -p ${NEO_PASS} -f import.cypher ----- - -Once the import completes (shouldn't take more than a couple of minutes), go to the link:http://localhost:7474[Neo4j browser], -and you'll be able to observe a small summary of the data in the graph: - -image::native-reference-neo4j-db-info.png[Neo4j database information after import] - -The data above shows that there are ~60000 methods, and just over ~200000 edges between 
them. -The Quarkus application demonstrated here is very basic, so there’s not a lot we can explore, but here are some example queries you can run to explore the graph in more detail. -Typically, you’d start by looking for a given method: - -[source,cypher] ----- -match (m:Method) where m.name = "hello" return * ----- - -From there, you can narrow down to a given method on a specific type: - -[source,cypher] ----- -match (m:Method) where m.name = "hello" and m.type =~ ".*GreetingResource" return * ----- - -Once you’ve located the node for the specific method you’re after, a typical question you’d want to get an answer for is: -why does this method get included in the call tree? -To do that, start from the method and look for incoming connections at a given depth, -starting from the end method. -For example, methods that directly call a method can be located via: - -[source,cypher] ----- -match (m:Method) <- [*1..1] - (o) where m.name = "hello" return * ----- - -Then you can look for direct calls at depth of 2, -so you’d search for methods that call methods that call into the target method: - -[source,cypher] ----- -match (m:Method) <- [*1..2] - (o) where m.name = "hello" return * ----- - -You can continue going up layers, -but unfortunately if you reach a depth with too many nodes, -the Neo4j browser will be unable to visualize them all. -When that happens, you can alternatively run the queries directly against the cypher shell: - -[source,bash] ----- -docker exec testneo4j bin/cypher-shell -u neo4j -p ${NEO_PASS} \ - "match (m:Method) <- [*1..10] - (o) where m.name = 'hello' return *" ----- - -=== Used Packages/Classes/Methods Reports - -`used_packages`, `used_classes` and `used_methods` text file reports come in handy when comparing different versions of the application, -e.g. why does the image take longer to build? Or why is the image bigger now? 
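Since these reports are plain text files with one entry per line, comparing two builds boils down to a set difference. A minimal sketch, assuming you have saved the `used_methods` report from two builds under the hypothetical names `used_methods_old.txt` and `used_methods_new.txt`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

public class ReportDiff {

    // Returns the entries present in the new report but not in the old one,
    // e.g. methods that started being included in the native image.
    static Set<String> newlyAdded(Path oldReport, Path newReport) throws IOException {
        Set<String> result = new HashSet<>(Files.readAllLines(newReport));
        result.removeAll(Files.readAllLines(oldReport));
        return result;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical file names; adjust to the actual report paths under
        // target/*-native-image-source-jar/reports/.
        Path oldReport = Path.of("used_methods_old.txt");
        Path newReport = Path.of("used_methods_new.txt");
        if (Files.exists(oldReport) && Files.exists(newReport)) {
            newlyAdded(oldReport, newReport).forEach(System.out::println);
        }
    }
}
```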
-
-=== Further Reports
-
-Mandrel can produce further reports beyond the ones that are enabled with the `-Dquarkus.native.enable-reports` option.
-These are called expert options and you can learn more about them by running:
-
-[source,bash,subs=attributes+]
-----
-docker run quay.io/quarkus/ubi-quarkus-mandrel:{mandrel-flavor} --expert-options-all
-----
-
-To use these expert options, add them comma-separated to the `-Dquarkus.native.additional-build-args` parameter.
-
-== Build-time vs Run-time Initialization
-
-Quarkus instructs Mandrel to initialize as much as possible at build time,
-so that runtime startup can be as fast as possible.
-This is important in containerized environments where the startup speed has a big impact on how quickly an application is ready to do work.
-Build time initialization also minimizes the risk of runtime failures due to unsupported features becoming reachable through runtime initialization,
-thus making Quarkus more reliable.
-
-The most common examples of build-time initialized code are static variables and blocks.
-Although Mandrel executes those at run-time by default,
-Quarkus instructs Mandrel to run them at build-time for the reasons given.
-
-This means that any static variables initialized inline, or initialized in a static block,
-will keep the same value even if the application is restarted.
-This differs from the behaviour you would see when running the application on the JVM.
-
-To see this in action with a very basic example,
-add a new `TimestampResource` to the application that looks like this:
-
-[source,java]
-----
-package org.acme;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-@Path("/timestamp")
-public class TimestampResource {
-
-    static long firstAccess = System.currentTimeMillis();
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public String timestamp() {
-        return "First access " + firstAccess;
-    }
-}
-----
-
-Rebuild the binary using:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative
-----
-
-Run the application in one terminal
-(make sure you stop any other native executable container runs before executing this):
-
-[source,bash]
-----
-docker run -i --rm -p 8080:8080 test/debugging-native:1.0.0-SNAPSHOT
-----
-
-Send a `GET` request multiple times from another terminal:
-
-[source,bash]
-----
-curl -w '\n' http://localhost:8080/timestamp # run this multiple times
-----
-
-to see how the current time has been baked into the binary.
-This time was calculated when the binary was being built,
-hence application restarts have no effect.
-
-In some situations, build-time initialization can lead to errors when building native executables.
-One example is when a value gets computed at build time which is forbidden to reside in the heap of the JVM that gets baked into the binary.
-To see this in action, add this REST resource: - -[source,java] ----- -package org.acme; - -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -import javax.crypto.Cipher; -import javax.crypto.NoSuchPaddingException; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import java.nio.charset.StandardCharsets; -import java.security.KeyPair; -import java.security.KeyPairGenerator; -import java.security.NoSuchAlgorithmException; - -@Path("/encrypt-decrypt") -public class EncryptDecryptResource { - - static final KeyPairGenerator KEY_PAIR_GEN; - static final Cipher CIPHER; - - static { - try { - KEY_PAIR_GEN = KeyPairGenerator.getInstance("RSA"); - KEY_PAIR_GEN.initialize(1024); - - CIPHER = Cipher.getInstance("RSA"); - } catch (NoSuchAlgorithmException | NoSuchPaddingException e) { - throw new RuntimeException(e); - } - } - - @GET - @Path("/{message}") - public String encryptDecrypt(@PathParam String message) throws Exception { - KeyPair keyPair = KEY_PAIR_GEN.generateKeyPair(); - - byte[] text = message.getBytes(StandardCharsets.UTF_8); - - // Encrypt with private key - CIPHER.init(Cipher.ENCRYPT_MODE, keyPair.getPrivate()); - byte[] encrypted = CIPHER.doFinal(text); - - // Decrypt with public key - CIPHER.init(Cipher.DECRYPT_MODE, keyPair.getPublic()); - byte[] unencrypted = CIPHER.doFinal(encrypted); - - return new String(unencrypted, StandardCharsets.UTF_8); - } -} ----- - -When trying to rebuild the application, you’ll encounter an error: - -[source,bash,subs=attributes+] ----- -./mvnw package -DskipTests -Pnative -... -Error: Unsupported features in 2 methods -Detailed message: -Error: Detected an instance of Random/SplittableRandom class in the image heap. Instances created during image generation have cached seed values and don't behave as expected. To see how this object got instantiated use --trace-object-instantiation=java.security.SecureRandom. The object was probably created by a class initializer and is reachable from a static field. 
You can request class initialization at image runtime by using the option --initialize-at-run-time=<culprit>. Or you can write your own initialization methods and call them explicitly from your main entry point.
-Trace: Object was reached by
-	reading field java.security.KeyPairGenerator$Delegate.initRandom of
-	constant java.security.KeyPairGenerator$Delegate@58b0fe1b reached by
-	reading field org.acme.EncryptDecryptResource.KEY_PAIR_GEN
-Error: Detected an instance of Random/SplittableRandom class in the image heap. Instances created during image generation have cached seed values and don't behave as expected. To see how this object got instantiated use --trace-object-instantiation=java.security.SecureRandom. The object was probably created by a class initializer and is reachable from a static field. You can request class initialization at image runtime by using the option --initialize-at-run-time=<culprit>. Or you can write your own initialization methods and call them explicitly from your main entry point.
-Trace: Object was reached by
-	reading field sun.security.rsa.RSAKeyPairGenerator.random of
-	constant sun.security.rsa.RSAKeyPairGenerator$Legacy@3248a092 reached by
-	reading field java.security.KeyPairGenerator$Delegate.spi of
-	constant java.security.KeyPairGenerator$Delegate@58b0fe1b reached by
-	reading field org.acme.EncryptDecryptResource.KEY_PAIR_GEN
-----
-
-So, what the message above is telling us is that our application caches a value that is supposed to be random as a constant.
-This is not desirable because something that's supposed to be random is no longer so,
-because the seed is baked into the image.
-The message above makes it quite clear what is causing this,
-but in other situations the cause might be more obscure.
-As a next step, we'll add some extra flags to the native executable generation to get more information.
-
-As suggested by the message, let's start by adding an option to track object instantiation:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.additional-build-args="--trace-object-instantiation=java.security.SecureRandom"
-...
-Error: Unsupported features in 2 methods
-Detailed message:
-Error: Detected an instance of Random/SplittableRandom class in the image heap. Instances created during image generation have cached seed values and don't behave as expected. Object has been initialized by the com.sun.jndi.dns.DnsClient class initializer with a trace:
-  at java.security.SecureRandom.<init>(SecureRandom.java:218)
-  at sun.security.jca.JCAUtil$CachedSecureRandomHolder.<clinit>(JCAUtil.java:59)
-  at sun.security.jca.JCAUtil.getSecureRandom(JCAUtil.java:69)
-  at com.sun.jndi.dns.DnsClient.<clinit>(DnsClient.java:82)
-. Try avoiding to initialize the class that caused initialization of the object. The object was probably created by a class initializer and is reachable from a static field. You can request class initialization at image runtime by using the option --initialize-at-run-time=<culprit>. Or you can write your own initialization methods and call them explicitly from your main entry point.
-Trace: Object was reached by
-	reading field java.security.KeyPairGenerator$Delegate.initRandom of
-	constant java.security.KeyPairGenerator$Delegate@4a5058f9 reached by
-	reading field org.acme.EncryptDecryptResource.KEY_PAIR_GEN
-Error: Detected an instance of Random/SplittableRandom class in the image heap. Instances created during image generation have cached seed values and don't behave as expected. Object has been initialized by the com.sun.jndi.dns.DnsClient class initializer with a trace:
-  at java.security.SecureRandom.<init>(SecureRandom.java:218)
-  at sun.security.jca.JCAUtil$CachedSecureRandomHolder.<clinit>(JCAUtil.java:59)
-  at sun.security.jca.JCAUtil.getSecureRandom(JCAUtil.java:69)
-  at com.sun.jndi.dns.DnsClient.<clinit>(DnsClient.java:82)
-.
Try avoiding to initialize the class that caused initialization of the object. The object was probably created by a class initializer and is reachable from a static field. You can request class initialization at image runtime by using the option --initialize-at-run-time=<culprit>. Or you can write your own initialization methods and call them explicitly from your main entry point.
-Trace: Object was reached by
-	reading field sun.security.rsa.RSAKeyPairGenerator.random of
-	constant sun.security.rsa.RSAKeyPairGenerator$Legacy@71880cf1 reached by
-	reading field java.security.KeyPairGenerator$Delegate.spi of
-	constant java.security.KeyPairGenerator$Delegate@4a5058f9 reached by
-	reading field org.acme.EncryptDecryptResource.KEY_PAIR_GEN
-----
-
-The error messages point to the code in the example,
-but it can be surprising that a reference to `DnsClient` appears.
-Why is that?
-The key is in what happens inside the `KeyPairGenerator.initialize()` method call.
-It uses `JCAUtil.getSecureRandom()`, which is why this is problematic,
-but the tracing options can sometimes show stack traces that do not represent what happens in reality.
-The best option is to dig through the source code and use the tracing output as guidance, but not as the full truth.
-
-Moving the `KEY_PAIR_GEN.initialize(1024);` call to the run-time executed method `encryptDecrypt` is enough to solve this particular issue.
-
-Additional information on which classes are initialized and why can be obtained by passing in the `-H:+PrintClassInitialization` flag via `-Dquarkus.native.additional-build-args`.
-
-== Profile Runtime Behaviour
-
-=== Single Thread
-
-In this exercise, we profile the runtime behaviour of a Quarkus application that was compiled to a native executable to determine where the bottleneck is.
-Assume that you’re in a scenario where profiling the pure Java version is not possible, maybe because the issue only occurs with the native version of the application.
-
-Add a REST resource with the following code
-(example courtesy of link:https://github.com/apangin/java-profiling-presentation/blob/master/src/demo1/StringBuilderTest.java[Andrei Pangin's Java Profiling presentation]):
-
-[source,java]
-----
-package org.acme;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-@Path("/string-builder")
-public class StringBuilderResource {
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public String appendDelete() {
-        StringBuilder sb = new StringBuilder();
-        sb.append(new char[1_000_000]);
-
-        do
-        {
-            sb.append(12345);
-            sb.delete(0, 5);
-        } while (Thread.currentThread().isAlive());
-
-        return "Never happens";
-    }
-}
-----
-
-Recompile the application, rebuild the binary and run it. Attempting a simple curl will never complete, as expected:
-
-[source,bash,subs=attributes+]
-----
-$ ./mvnw package -DskipTests -Pnative
-...
-$ docker run -i --rm -p 8080:8080 test/debugging-native:1.0.0-SNAPSHOT
-...
-$ curl http://localhost:8080/string-builder # this will never complete
-----
-
-However, the question we’re trying to answer here is:
-what would be the bottleneck of such code?
-Is it appending the characters? Is it deleting them? Is it checking whether the thread is alive?
-
-Since we're dealing with a Linux native executable,
-we can use tools like `perf` directly.
-To use `perf`,
-go to the root of the project and start the tools container created earlier as a privileged user:
-
-[source,bash]
-----
-docker run --privileged -t -i --rm -v ${PWD}:/data -p 8080:8080 fedora-tools:v1
-----
-
-[NOTE]
-====
-Note that in order to use `perf` to profile the native executables in the guide,
-the container needs to run as privileged, or with `--cap-add sys_admin`.
-Please note that privileged containers are **NOT** recommended in production, so use this flag with caution!
-====
-
-Once the container is running, you need to ensure that the kernel is ready for the profiling exercises:
-
-[source,bash]
-----
-echo -1 | sudo tee /proc/sys/kernel/perf_event_paranoid
-echo 0 | sudo tee /proc/sys/kernel/kptr_restrict
-----
-
-[TIP]
-====
-The kernel modifications above also apply to Linux virtual machines.
-If running on a bare metal Linux machine,
-tweaking only `perf_event_paranoid` is enough.
-====
-
-Then, from inside the tools container we execute:
-
-[source,bash]
-----
-perf record -F 1009 -g -a ./target/debugging-native-1.0.0-SNAPSHOT-runner
-----
-
-While `perf record` is running, open another window and access the endpoint:
-
-[source,bash]
-----
-curl http://localhost:8080/string-builder # this will never complete
-----
-
-After a few seconds, halt the `perf record` process.
-This will generate a `perf.data` file.
-We could use `perf report` to inspect the perf data,
-but you can often get a better picture by showing that data as a flame graph.
-To generate flame graphs, we will use the scripts from the
-https://github.com/brendangregg/FlameGraph[FlameGraph repository],
-which have already been installed inside the tools container.
-
-Next, generate a flame graph using the data captured via `perf record`:
-
-[source,bash]
-----
-$ perf script -i perf.data | ${FG_HOME}/stackcollapse-perf.pl > out.perf-folded
-$ ${FG_HOME}/flamegraph.pl out.perf-folded > flamegraph.svg
-----
-
-The flame graph is an SVG file that a web browser, such as Firefox, can easily display.
-After the above two commands complete, you can open `flamegraph.svg` in your browser:
-
-image::native-reference-perf-flamegraph-no-symbols.svg[Perf flamegraph without symbols]
-
-We see the big majority of time spent in what is supposed to be our main method,
-but we see no trace of the `StringBuilderResource` class,
-nor of the `StringBuilder` class we're calling.
-We should look at the symbol table of the binary:
-can we find symbols for our class and `StringBuilder`?
-We need those in order to get meaningful data.
-From within the tools container, query the symbol table:
-
-[source,bash]
-----
-objdump -t ./target/debugging-native-1.0.0-SNAPSHOT-runner | grep StringBuilder
-[no output]
-----
-
-No output appears when querying the symbol table.
-This is why we don't see any call graphs in the flame graphs.
-This is a deliberate decision that native-image makes:
-by default, it removes symbols from the binary.
-
-To regain the symbols, we need to rebuild the binary, instructing GraalVM not to delete them.
-On top of that, enable DWARF debug info so that the stack traces can be populated with that information.
-From outside the tools container, execute:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.debug.enabled \
-    -Dquarkus.native.additional-build-args=-H:-DeleteLocalSymbols
-----
-
-Next, re-enter the tools container if you exited it,
-and inspect the native executable with `objdump` to see that the symbols are now present:
-
-[source,bash]
-----
-$ objdump -t ./target/debugging-native-1.0.0-SNAPSHOT-runner | grep StringBuilder
-000000000050a940 l F .text 0000000000000091 .hidden ReflectionAccessorHolder_StringBuilderResource_appendDelete_9e06d4817d0208a0cce97ebcc0952534cac45a19_e22addf7d3eaa3ad14013ce01941dc25beba7621
-000000000050a9e0 l F .text 00000000000000bb .hidden ReflectionAccessorHolder_StringBuilderResource_constructor_0f8140ea801718b80c05b979a515d8a67b8f3208_12baae06bcd6a1ef9432189004ae4e4e176dd5a4
-...
-----
-
-You should see a long list of symbols that match that pattern.
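-
-To see the effect of symbol deletion in isolation, the following self-contained sketch reproduces it with a tiny C program that is compiled normally and then stripped (a hypothetical stand-in, not the Quarkus binary itself; it assumes a C compiler (`cc`) and `binutils` are available, as in the tools container):
-
-[source,bash]
-----
-# Build the same toy program twice: once normally, once stripped (-s).
-cat > /tmp/symdemo.c <<'EOF'
-static int hidden_helper(int x) { return x + 1; }
-int main(void) { return hidden_helper(41) - 42; }
-EOF
-cc -o /tmp/symdemo /tmp/symdemo.c
-cc -s -o /tmp/symdemo-stripped /tmp/symdemo.c
-# The helper appears in the first symbol table (count >= 1) but not in
-# the second (count 0), mirroring the empty grep on the native binary.
-objdump -t /tmp/symdemo | grep -c hidden_helper
-objdump -t /tmp/symdemo-stripped | grep -c hidden_helper || true
-----
-
-Just like the stripped toy binary, a default native executable yields no matches, which is why the flame graph frames stay anonymous.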
-
-Then, run the executable through perf,
-*indicating that the call graph is dwarf*:
-
-[source,bash]
-----
-perf record -F 1009 --call-graph dwarf -a ./target/debugging-native-1.0.0-SNAPSHOT-runner
-----
-
-Run the curl command once again, stop the binary, generate the flame graph and open it:
-
-[source,bash]
-----
-perf script -i perf.data | ${FG_HOME}/stackcollapse-perf.pl > out.perf-folded
-${FG_HOME}/flamegraph.pl out.perf-folded > flamegraph.svg
-----
-
-The flame graph now shows where the bottleneck is.
-It is the call to `StringBuilder.delete()`, which in turn calls `System.arraycopy()`.
-The issue is that 1 million characters need to be shifted in very small increments:
-
-image::native-reference-perf-flamegraph-symbols.svg[Perf flamegraph with symbols]
-
-=== Multi-Thread
-
-Multi-threaded programs might require special attention when trying to understand their runtime behaviour.
-To demonstrate this, add this `MulticastResource` code to your project
-(example courtesy of link:https://github.com/apangin/java-profiling-presentation/blob/master/src/demo6/DatagramTest.java[Andrei Pangin's Java Profiling presentation]):
-
-[source,java]
-----
-package org.acme;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-import java.net.InetSocketAddress;
-import java.nio.ByteBuffer;
-import java.nio.channels.DatagramChannel;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ThreadFactory;
-import java.util.concurrent.atomic.AtomicInteger;
-
-@Path("/multicast")
-public class MulticastResource
-{
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public String send() throws Exception {
-        sendMulticasts();
-        return "Multicast packets sent";
-    }
-
-    static void sendMulticasts() throws Exception {
-        DatagramChannel ch = DatagramChannel.open();
-        ch.bind(new InetSocketAddress(5555));
-        ch.configureBlocking(false);
-
-        ExecutorService pool =
Executors.newCachedThreadPool(new ShortNameThreadFactory());
-        for (int i = 0; i < 10; i++) {
-            pool.submit(() -> {
-                final ByteBuffer buf = ByteBuffer.allocateDirect(1000);
-                final InetSocketAddress remoteAddr =
-                    new InetSocketAddress("127.0.0.1", 5556);
-
-                while (true) {
-                    buf.clear();
-                    ch.send(buf, remoteAddr);
-                }
-            });
-        }
-
-        System.out.println("Warming up...");
-        Thread.sleep(3000);
-
-        System.out.println("Benchmarking...");
-        Thread.sleep(5000);
-    }
-
-    private static final class ShortNameThreadFactory implements ThreadFactory {
-
-        private final AtomicInteger threadNumber = new AtomicInteger(1);
-        private final String namePrefix = "thread-";
-
-        public Thread newThread(Runnable r) {
-            return new Thread(r, namePrefix + threadNumber.getAndIncrement());
-        }
-    }
-}
-----
-
-Build the native executable with debug info:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.debug.enabled \
-    -Dquarkus.native.additional-build-args=-H:-DeleteLocalSymbols
-----
-
-From inside the tools container (as a privileged user) run the native executable through `perf`:
-
-[source,bash]
-----
-perf record -F 1009 --call-graph dwarf -a ./target/debugging-native-1.0.0-SNAPSHOT-runner
-----
-
-Invoke the endpoint to send the multicast packets:
-
-[source,bash]
-----
-curl -w '\n' http://localhost:8080/multicast
-----
-
-Make and open a flame graph:
-
-[source,bash]
-----
-perf script -i perf.data | ${FG_HOME}/stackcollapse-perf.pl > out.perf-folded
-${FG_HOME}/flamegraph.pl out.perf-folded > flamegraph.svg
-----
-
-image::native-reference-multi-flamegraph-separate-threads.svg[Multi-thread perf flamegraph with separate threads]
-
-The flame graph produced looks odd. Each thread is treated independently even though they all do the same work.
-This makes it difficult to have a clear picture of the bottlenecks in the program.
-
-This is happening because, from a `perf` perspective, each thread is a different command.
-We can see that if we inspect `perf report`:
-
-[source,bash]
-----
-perf report --stdio
-# Children      Self  Command    Shared Object                           Symbol
-# ........  ........  .........  ......................................  ......................................................................................
-...
-     6.95%     0.03%  thread-2   debugging-native-1.0.0-SNAPSHOT-runner  [.] MulticastResource_lambda$sendMulticasts$0_cb1f7b5dcaed7dd4e3f90d18bad517d67eae4d88
-...
-     4.60%     0.02%  thread-10  debugging-native-1.0.0-SNAPSHOT-runner  [.] MulticastResource_lambda$sendMulticasts$0_cb1f7b5dcaed7dd4e3f90d18bad517d67eae4d88
-...
-----
-
-This can be worked around by applying some modifications to the perf output,
-in order to make all threads have the same name, e.g.:
-
-[source,bash]
-----
-perf script | sed -E "s/thread-[0-9]*/thread/" | ${FG_HOME}/stackcollapse-perf.pl > out.perf-folded
-${FG_HOME}/flamegraph.pl out.perf-folded > flamegraph.svg
-----
-
-image::native-reference-multi-flamegraph-joined-threads.svg[Multi-thread perf flamegraph with joined threads]
-
-When you open the flame graph, you will see all threads' work collapsed into a single area.
-Then, you can clearly see that there's some locking that could affect performance.
-
-== Debugging Native Crashes
-
-One of the drawbacks of using native executables is that they cannot be debugged using the standard Java debuggers;
-instead, we need to debug them using `gdb`, the GNU Project debugger.
-To demonstrate how to do this,
-we are going to generate a native Quarkus application that crashes due to a Segmentation Fault when accessing http://localhost:8080/crash.
-To achieve this, add the following REST resource to the project:
-
-[source,java]
-----
-package org.acme;
-
-import sun.misc.Unsafe;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-import java.lang.reflect.Field;
-
-@Path("/crash")
-public class CrashResource {
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public String hello() {
-        Field theUnsafe = null;
-        try {
-            theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
-            theUnsafe.setAccessible(true);
-            Unsafe unsafe = (Unsafe) theUnsafe.get(null);
-            unsafe.copyMemory(0, 128, 256);
-        } catch (NoSuchFieldException | IllegalAccessException e) {
-            e.printStackTrace();
-        }
-        return "Never happens";
-    }
-}
-----
-
-This code will try to copy 256 bytes from address `0x0` to `0x80`, resulting in a Segmentation Fault.
-To verify this, compile and run the example application:
-
-[source,bash,subs=attributes+]
-----
-$ ./mvnw package -DskipTests -Pnative
-...
-$ docker run -i --rm -p 8080:8080 test/debugging-native:1.0.0-SNAPSHOT
-...
-$ curl http://localhost:8080/crash
-----
-
-This will result in the following output:
-
-[source,bash]
-----
-$ docker run -i --rm -p 8080:8080 test/debugging-native:1.0.0-SNAPSHOT
-...
-Segfault detected, aborting process. Use runtime option -R:-InstallSegfaultHandler if you don't want to use SubstrateSegfaultHandler.
-...
-----
-
-The omitted output above contains clues to what caused the issue,
-but in this exercise we are going to assume that no information was provided.
-Let’s try to debug the segmentation fault using `gdb`.
-To do that, go to the root of the project and enter the tools container:
-
-[source,bash]
-----
-docker run -t -i --rm -v ${PWD}:/data -p 8080:8080 fedora-tools:v1 /bin/bash
-----
-
-Then start the application in `gdb` and execute `run`.
-
-[source,bash]
-----
-gdb ./target/debugging-native-1.0.0-SNAPSHOT-runner
-...
-Reading symbols from ./target/debugging-native-1.0.0-SNAPSHOT-runner...
-(No debugging symbols found in ./target/debugging-native-1.0.0-SNAPSHOT-runner)
-(gdb) run
-Starting program: /data/target/debugging-native-1.0.0-SNAPSHOT-runner
-----
-
-Next, try to access http://localhost:8080/crash:
-[source,bash]
-----
-curl http://localhost:8080/crash
-----
-
-This will result in the following message in `gdb`:
-
-[source,bash]
-----
-Thread 4 "ecutor-thread-0" received signal SIGSEGV, Segmentation fault.
-[Switching to Thread 0x7fe103dff640 (LWP 190)]
-0x0000000000461f6e in ?? ()
-----
-
-If we try to get more info about the backtrace that led to this crash, we will see that there is not enough information available.
-
-[source,bash]
-----
-(gdb) bt
-#0  0x0000000000418b5e in ?? ()
-#1  0x00007ffff6f2d328 in ?? ()
-#2  0x0000000000418a04 in ?? ()
-#3  0x00007ffff44062a0 in ?? ()
-#4  0x00000000010c3dd3 in ?? ()
-#5  0x0000000000000100 in ?? ()
-#6  0x0000000000000000 in ?? ()
-----
-
-This is because we didn’t compile the Quarkus application with `-Dquarkus.native.debug.enabled`,
-so `gdb` cannot find debugging symbols for our native executable,
-as indicated by the "_No debugging symbols found in ./target/debugging-native-1.0.0-SNAPSHOT-runner_" message at the beginning of the `gdb` session.
-
-After recompiling the Quarkus application with `-Dquarkus.native.debug.enabled` and rerunning it through `gdb`, we are able to get a backtrace that makes clear what caused the crash.
-On top of that, add the `-H:-OmitInlinedMethodDebugLineInfo` option to avoid inlined methods being omitted from the backtrace:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.debug.enabled \
-    -Dquarkus.native.additional-build-args=-H:-OmitInlinedMethodDebugLineInfo
-...
-$ gdb ./target/debugging-native-1.0.0-SNAPSHOT-runner
-Reading symbols from ./target/debugging-native-1.0.0-SNAPSHOT-runner...
-(gdb) run
-Starting program: /data/target/debugging-native-1.0.0-SNAPSHOT-runner
-...
-$ curl http://localhost:8080/crash ----- - -This will result in the following message in `gdb`: - -[source,bash] ----- -Thread 4 "ecutor-thread-0" received signal SIGSEGV, Segmentation fault. -[Switching to Thread 0x7fffeffff640 (LWP 362984)] -com.oracle.svm.core.UnmanagedMemoryUtil::copyLongsBackward(org.graalvm.word.Pointer *, org.graalvm.word.Pointer *, org.graalvm.word.UnsignedWord *) () - at com/oracle/svm/core/UnmanagedMemoryUtil.java:169 -169 com/oracle/svm/core/UnmanagedMemoryUtil.java: No such file or directory. ----- - -We already see that `gdb` is able to tell us which method caused the crash and where it’s located in the source code. -We can also get a backtrace of the call graph that led us to this state: - -[source,bash] ----- -(gdb) bt -#0 com.oracle.svm.core.UnmanagedMemoryUtil::copyLongsBackward(org.graalvm.word.Pointer *, org.graalvm.word.Pointer *, org.graalvm.word.UnsignedWord *) () at com/oracle/svm/core/UnmanagedMemoryUtil.java:169 -#1 0x0000000000461e14 in com.oracle.svm.core.UnmanagedMemoryUtil::copyBackward(org.graalvm.word.Pointer *, org.graalvm.word.Pointer *, org.graalvm.word.UnsignedWord *) () at com/oracle/svm/core/UnmanagedMemoryUtil.java:110 -#2 0x0000000000461dc8 in com.oracle.svm.core.UnmanagedMemoryUtil::copy(org.graalvm.word.Pointer *, org.graalvm.word.Pointer *, org.graalvm.word.UnsignedWord *) () at com/oracle/svm/core/UnmanagedMemoryUtil.java:67 -#3 0x000000000045d3c0 in com.oracle.svm.core.JavaMemoryUtil::unsafeCopyMemory(java.lang.Object *, long, java.lang.Object *, long, long) () at com/oracle/svm/core/JavaMemoryUtil.java:276 -#4 0x00000000013277de in jdk.internal.misc.Unsafe::copyMemory0 () at com/oracle/svm/core/jdk/SunMiscSubstitutions.java:125 -#5 jdk.internal.misc.Unsafe::copyMemory(java.lang.Object *, long, java.lang.Object *, long, long) () at jdk/internal/misc/Unsafe.java:788 -#6 0x00000000013b1a3f in jdk.internal.misc.Unsafe::copyMemory () at jdk/internal/misc/Unsafe.java:799 -#7 sun.misc.Unsafe::copyMemory () at 
sun/misc/Unsafe.java:585
-#8  org.acme.CrashResource::hello(void) () at org/acme/CrashResource.java:22
-----
-
-Similarly, we can get a backtrace of the call graph of other threads.
-
-1. First, we can list the available threads with:
-+
-[source,bash]
-----
-(gdb) info threads
-  Id   Target Id                                          Frame
-  1    Thread 0x7fcc62a07d00 (LWP 322) "debugging-nativ" 0x00007fcc62b8b77a in __futex_abstimed_wait_common () from /lib64/libc.so.6
-  2    Thread 0x7fcc60eff640 (LWP 326) "gnal Dispatcher" 0x00007fcc62b8b77a in __futex_abstimed_wait_common () from /lib64/libc.so.6
-* 4    Thread 0x7fcc5b7fe640 (LWP 328) "ecutor-thread-0" com.oracle.svm.core.UnmanagedMemoryUtil::copyLongsBackward(org.graalvm.word.Pointer *, org.graalvm.word.Pointer *, org.graalvm.word.UnsignedWord *) () at com/oracle/svm/core/UnmanagedMemoryUtil.java:169
-  5    Thread 0x7fcc5abff640 (LWP 329) "-thread-checker" 0x00007fcc62b8b77a in __futex_abstimed_wait_common () from /lib64/libc.so.6
-  6    Thread 0x7fcc59dff640 (LWP 330) "ntloop-thread-0" 0x00007fcc62c12c9e in epoll_wait () from /lib64/libc.so.6
-...
-----
-+
-2. Select the thread we want to inspect, e.g. thread 1:
-+
-[source,bash]
-----
-(gdb) thread 1
-[Switching to thread 1 (Thread 0x7ffff7a58d00 (LWP 1028851))]
-#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x2cd7adc) at futex-internal.c:57
-57      return INTERNAL_SYSCALL_CANCEL (futex_time64, futex_word, op, expected,
-----
-+
-3.
Finally, print the stack trace:
-+
-[source,bash]
-----
-(gdb) bt
-#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x2cd7adc) at futex-internal.c:57
-#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x2cd7adc, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0,
-    cancel=cancel@entry=true) at futex-internal.c:87
-#2  0x00007ffff7bdd79f in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x2cd7adc, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0,
-    private=private@entry=0) at futex-internal.c:139
-#3  0x00007ffff7bdfeb0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x2ca07b0, cond=0x2cd7ab0) at pthread_cond_wait.c:504
-#4  ___pthread_cond_wait (cond=0x2cd7ab0, mutex=0x2ca07b0) at pthread_cond_wait.c:619
-#5  0x00000000004e2014 in com.oracle.svm.core.posix.headers.Pthread::pthread_cond_wait () at com/oracle/svm/core/posix/thread/PosixJavaThreads.java:252
-#6  com.oracle.svm.core.posix.thread.PosixParkEvent::condWait(void) () at com/oracle/svm/core/posix/thread/PosixJavaThreads.java:252
-#7  0x0000000000547070 in com.oracle.svm.core.thread.JavaThreads::park(void) () at com/oracle/svm/core/thread/JavaThreads.java:764
-#8  0x0000000000fc5f44 in jdk.internal.misc.Unsafe::park(boolean, long) () at com/oracle/svm/core/thread/Target_jdk_internal_misc_Unsafe_JavaThreads.java:49
-#9  0x0000000000eac1ad in java.util.concurrent.locks.LockSupport::park(java.lang.Object *) () at java/util/concurrent/locks/LockSupport.java:194
-#10 0x0000000000ea5d68 in java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject::awaitUninterruptibly(void) ()
-    at java/util/concurrent/locks/AbstractQueuedSynchronizer.java:2018
-#11 0x00000000008b6b30 in io.quarkus.runtime.ApplicationLifecycleManager::run(io.quarkus.runtime.Application *, java.lang.Class *, java.util.function.BiConsumer *, java.lang.String[] *)
() - at io/quarkus/runtime/ApplicationLifecycleManager.java:144 -#12 0x00000000008bc055 in io.quarkus.runtime.Quarkus::run(java.lang.Class *, java.util.function.BiConsumer *, java.lang.String[] *) () at io/quarkus/runtime/Quarkus.java:67 -#13 0x000000000045c88b in io.quarkus.runtime.Quarkus::run () at io/quarkus/runtime/Quarkus.java:41 -#14 io.quarkus.runtime.Quarkus::run () at io/quarkus/runtime/Quarkus.java:120 -#15 0x000000000045c88b in io.quarkus.runner.GeneratedMain::main () -#16 com.oracle.svm.core.JavaMainWrapper::runCore () at com/oracle/svm/core/JavaMainWrapper.java:150 -#17 com.oracle.svm.core.JavaMainWrapper::run(int, org.graalvm.nativeimage.c.type.CCharPointerPointer *) () at com/oracle/svm/core/JavaMainWrapper.java:186 -#18 0x000000000048084d in com.oracle.svm.core.code.IsolateEnterStub::JavaMainWrapper_run_5087f5482cc9a6abc971913ece43acb471d2631b(int, org.graalvm.nativeimage.c.type.CCharPointerPointer *) - () at com/oracle/svm/core/JavaMainWrapper.java:280 ----- - -Alternatively, we can list the backtraces of all threads with a single command: - -[source,bash] ----- -(gdb) thread apply all backtrace - -Thread 22 (Thread 0x7fffc8dff640 (LWP 1028872) "tloop-thread-15"): -#0 0x00007ffff7c64c2e in epoll_wait (epfd=8, events=0x2ca3880, maxevents=1024, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 -#1 0x000000000166e01c in Java_sun_nio_ch_EPoll_wait () -#2 0x00000000011bfece in sun.nio.ch.EPoll::wait(int, long, int, int) () at com/oracle/svm/core/stack/JavaFrameAnchors.java:42 -#3 0x00000000011c08d2 in sun.nio.ch.EPollSelectorImpl::doSelect(java.util.function.Consumer *, long) () at sun/nio/ch/EPollSelectorImpl.java:120 -#4 0x00000000011d8977 in sun.nio.ch.SelectorImpl::lockAndDoSelect(java.util.function.Consumer *, long) () at sun/nio/ch/SelectorImpl.java:124 -#5 0x0000000000705720 in sun.nio.ch.SelectorImpl::select () at sun/nio/ch/SelectorImpl.java:141 -#6 io.netty.channel.nio.SelectedSelectionKeySetSelector::select(void) () at 
io/netty/channel/nio/SelectedSelectionKeySetSelector.java:68 -#7 0x0000000000703c2e in io.netty.channel.nio.NioEventLoop::select(long) () at io/netty/channel/nio/NioEventLoop.java:813 -#8 0x0000000000701a5f in io.netty.channel.nio.NioEventLoop::run(void) () at io/netty/channel/nio/NioEventLoop.java:460 -#9 0x00000000008496df in io.netty.util.concurrent.SingleThreadEventExecutor$4::run(void) () at io/netty/util/concurrent/SingleThreadEventExecutor.java:986 -#10 0x0000000000860762 in io.netty.util.internal.ThreadExecutorMap$2::run(void) () at io/netty/util/internal/ThreadExecutorMap.java:74 -#11 0x0000000000840da4 in io.netty.util.concurrent.FastThreadLocalRunnable::run(void) () at io/netty/util/concurrent/FastThreadLocalRunnable.java:30 -#12 0x0000000000b7dd04 in java.lang.Thread::run(void) () at java/lang/Thread.java:829 -#13 0x0000000000547dcc in com.oracle.svm.core.thread.JavaThreads::threadStartRoutine(org.graalvm.nativeimage.ObjectHandle *) () at com/oracle/svm/core/thread/JavaThreads.java:597 -#14 0x00000000004e15b1 in com.oracle.svm.core.posix.thread.PosixJavaThreads::pthreadStartRoutine(com.oracle.svm.core.thread.JavaThreads$ThreadStartData *) () at com/oracle/svm/core/posix/thread/PosixJavaThreads.java:194 -#15 0x0000000000480984 in com.oracle.svm.core.code.IsolateEnterStub::PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df(com.oracle.svm.core.thread.JavaThreads$ThreadStartData *) () at com/oracle/svm/core/posix/thread/PosixJavaThreads.java:182 -#16 0x00007ffff7be0b1a in start_thread (arg=) at pthread_create.c:443 -#17 0x00007ffff7c65650 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81 - -Thread 21 (Thread 0x7fffc97fa640 (LWP 1028871) "tloop-thread-14"): -#0 0x00007ffff7c64c2e in epoll_wait (epfd=53, events=0x2cd0970, maxevents=1024, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 -#1 0x000000000166e01c in Java_sun_nio_ch_EPoll_wait () -#2 0x00000000011bfece in sun.nio.ch.EPoll::wait(int, long, int, int) () 
at com/oracle/svm/core/stack/JavaFrameAnchors.java:42 -#3 0x00000000011c08d2 in sun.nio.ch.EPollSelectorImpl::doSelect(java.util.function.Consumer *, long) () at sun/nio/ch/EPollSelectorImpl.java:120 -#4 0x00000000011d8977 in sun.nio.ch.SelectorImpl::lockAndDoSelect(java.util.function.Consumer *, long) () at sun/nio/ch/SelectorImpl.java:124 -#5 0x0000000000705720 in sun.nio.ch.SelectorImpl::select () at sun/nio/ch/SelectorImpl.java:141 -#6 io.netty.channel.nio.SelectedSelectionKeySetSelector::select(void) () at io/netty/channel/nio/SelectedSelectionKeySetSelector.java:68 -#7 0x0000000000703c2e in io.netty.channel.nio.NioEventLoop::select(long) () at io/netty/channel/nio/NioEventLoop.java:813 -#8 0x0000000000701a5f in io.netty.channel.nio.NioEventLoop::run(void) () at io/netty/channel/nio/NioEventLoop.java:460 -#9 0x00000000008496df in io.netty.util.concurrent.SingleThreadEventExecutor$4::run(void) () at io/netty/util/concurrent/SingleThreadEventExecutor.java:986 -#10 0x0000000000860762 in io.netty.util.internal.ThreadExecutorMap$2::run(void) () at io/netty/util/internal/ThreadExecutorMap.java:74 -#11 0x0000000000840da4 in io.netty.util.concurrent.FastThreadLocalRunnable::run(void) () at io/netty/util/concurrent/FastThreadLocalRunnable.java:30 -#12 0x0000000000b7dd04 in java.lang.Thread::run(void) () at java/lang/Thread.java:829 -#13 0x0000000000547dcc in com.oracle.svm.core.thread.JavaThreads::threadStartRoutine(org.graalvm.nativeimage.ObjectHandle *) () at com/oracle/svm/core/thread/JavaThreads.java:597 -#14 0x00000000004e15b1 in com.oracle.svm.core.posix.thread.PosixJavaThreads::pthreadStartRoutine(com.oracle.svm.core.thread.JavaThreads$ThreadStartData *) () at com/oracle/svm/core/posix/thread/PosixJavaThreads.java:194 -#15 0x0000000000480984 in com.oracle.svm.core.code.IsolateEnterStub::PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df(com.oracle.svm.core.thread.JavaThreads$ThreadStartData *) () at 
com/oracle/svm/core/posix/thread/PosixJavaThreads.java:182
-#16 0x00007ffff7be0b1a in start_thread (arg=) at pthread_create.c:443
-#17 0x00007ffff7c65650 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
-
-Thread 20 (Thread 0x7fffc9ffb640 (LWP 1028870) "tloop-thread-13"):
-...
-----
-
-Note, however, that despite being able to get a backtrace, we still cannot list the source code at that point with the `list` command.
-
-[source,bash]
-----
-(gdb) list
-164     in com/oracle/svm/core/UnmanagedMemoryUtil.java
-----
-
-This is because `gdb` is not aware of the location of the source files, as we are running the executable outside of the target directory.
-To fix this, we can either rerun `gdb` from the target directory or run
-`directory target/debugging-native-1.0.0-SNAPSHOT-native-image-source-jar/sources`, e.g.:
-
-[source,bash]
-----
-(gdb) directory target/debugging-native-1.0.0-SNAPSHOT-native-image-source-jar/sources
-Source directories searched: /data/target/debugging-native-1.0.0-SNAPSHOT-native-image-source-jar/sources:$cdir:$cwd
-(gdb) list
-164             UnsignedWord offset = size;
-165             while (offset.aboveOrEqual(32)) {
-166                 offset = offset.subtract(32);
-167                 Pointer src = from.add(offset);
-168                 Pointer dst = to.add(offset);
-169                 long l24 = src.readLong(24);
-170                 long l16 = src.readLong(16);
-171                 long l8 = src.readLong(8);
-172                 long l0 = src.readLong(0);
-173                 dst.writeLong(24, l24);
-----
-
-We can now examine line `169` and get a first hint of what might be wrong
-(in this case we see that it fails at the first read from `src`, which contains the address `0x0000`),
-or walk up the stack using `gdb`’s `up` command to see what part of our code led to this situation.
-To learn more about using `gdb` to debug native executables, see
-https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/DebugInfo.md[here].
-
-== Frequently Asked Questions
-
-=== Why is the process of generating a native executable slow?
-
-Native executable generation is a multi-step process.
-The analysis and compile steps are the most expensive of all and hence the ones that dominate the time spent generating the native executable.
-
-In the analysis phase, a static points-to analysis starts from the main method of the program to find out what is reachable.
-As new classes are discovered, some of them will be initialized during this process depending on the configuration.
-In the next step, the heap is snapshotted and checks are made to see which types need to be available at runtime.
-The initialization and heap snapshotting can cause new types to be discovered, in which case the process is repeated.
-The process stops when a fixed point is reached, that is, when the reachable program grows no more.
-
-The compilation step is pretty straightforward: it simply compiles all the reachable code.
-
-The time spent in the analysis and compilation phases depends on how big the application is.
-The bigger the application, the longer it takes to compile it.
-However, there are certain features that can have an exponential effect.
-For example, when registering types and methods for reflection access,
-the analysis can’t easily see what’s behind those types or methods,
-so it has to do more work to complete the analysis step.
-
-=== Why is runtime performance of a native executable inferior compared to JVM mode?
-
-As with most things in life, there are trade-offs involved when choosing native compilation over JVM mode.
-So, depending on the application, the runtime performance of a native application might be slower compared to JVM mode,
-though that’s not always the case.
-
-JVM execution of an application includes runtime optimization of the code that profits from profile information built up during execution.
-That includes the opportunities to inline a lot more of the code,
-locate hot code on direct paths (i.e.
ensure better instruction cache locality)
-and cut out a lot of the code on cold paths (on the JVM a lot of code does not get compiled until something tries to execute it -- it is replaced with a trap that causes deoptimization and recompilation).
-Removal of cold paths provides many more optimization opportunities than are available for ahead-of-time compilation, because it significantly reduces the branch complexity and combinatorial logic of the smaller amount of hot code that is compiled.
-
-By contrast, native executable compilation has to cater for all possible execution paths when it compiles code offline, since it does not know which are the hot or cold paths and cannot use the trick of planting a trap and recompiling if it is hit. For the same reason, it cannot load the dice to ensure that code cache conflicts are minimized by co-locating hot paths adjacent to each other.
-Native executable generation is able to remove some code because of the closed world hypothesis, but that is often not enough to make up for all the benefits that profiling and runtime deopt & recompile provide to the JVM JIT compiler.
-
-Note, however, that there is a price you pay for that potentially higher JVM speed, and that price is increased resource usage (both CPU and memory) and startup time, because:
-
-1. it takes some time before the JIT kicks in and fully optimizes the code.
-2. the JIT compiler consumes resources that could be utilized by the application.
-3. the JVM has to retain a lot more metadata and compiler/profiler data to support the better optimizations that it can offer.
-
-The reason for 1) is that code needs to be run interpreted for some time and, possibly, to be compiled several times before all potential optimizations are realized to ensure that:
-
-a. it’s worth compiling that code path, i.e. it’s being executed enough times, and that
-b. we have enough profiling data to perform meaningful optimizations.
-
-An implication of 1) is that for small, short-lived applications a native executable may well be a better bet.
-Although the compiled code is not as well optimized, it is available straight away.
-
-The reason for 2) is that the JVM is essentially running the compiler at runtime, in parallel with the application itself.
-In the case of native executables, the compiler is run ahead of time, removing the need to run the compiler in parallel with the application.
-
-There are several reasons for 3). The JVM does not have a closed world assumption.
-So, it has to be able to recompile code if the loading of new classes implies that it needs to revise optimistic assumptions made at compile time.
-For example, if an interface has only one implementation, it can make a call jump directly to that code.
-However, in the case where a second implementation class is loaded, the call site needs to be patched to test the type of the receiver instance and jump to the code that belongs to its class.
-Supporting optimizations like this one requires keeping track of a lot more details of the class base than a native executable,
-including recording the full class and interface hierarchy,
-details of which methods override other methods, all method bytecode, etc.
-In a native executable, most of the details of class structure and bytecode can be ignored at run time.
-
-The JVM also has to cope with changes to the class base or execution profiles that result in a thread going down a previously cold path.
-At that point, the JVM has to jump out of the compiled code into the interpreter and recompile the code to cater for a new execution profile that includes the previously cold path.
-That requires keeping runtime info that allows a compiled stack frame to be replaced with one or more interpreter frames.
-It also requires runtime-extensible profile counters to be allocated and updated to track what has or has not been executed.
-
-=== Why are native executables “big”?
-
-This can be attributed to a number of different reasons:
-
-1. Native executables include not only the application code but also library code and JDK code.
-As a result, a fairer comparison would be to compare the native executable’s size with the size of the application,
-plus the size of the libraries it uses, plus the size of the JDK.
-The JDK part in particular is not negligible, even in simple applications like HelloWorld.
-To get a glimpse of what is being pulled into the image, you can use `-H:+PrintUniverse` when building the native executable.
-2. Some features are always included in a native executable even though they might never actually be used at run time.
-An example of such a feature is garbage collection.
-At compile time we can’t be sure whether an application will need to run garbage collection at run time,
-so garbage collection is always included in native executables, increasing their size even when it is not necessary.
-Native executable generation relies on static code analysis to identify which code paths are reachable,
-and static code analysis can be imprecise, leading to more code getting into the image than what’s actually needed.
-
-There is a https://github.com/oracle/graal/issues/287[GraalVM upstream issue]
-with some interesting discussions about that topic.
-
-=== What version of Mandrel was used to generate a binary?
-
-One can see which Mandrel version was used to generate a binary by inspecting the binary as follows:
-
-[source,bash]
-----
-$ strings target/debugging-native-1.0.0-SNAPSHOT-runner | grep GraalVM
-com.oracle.svm.core.VM=GraalVM 22.0.0.2-Final Java 11 Mandrel Distribution
-----
-
-=== How do I enable GC logging in native executables?
-
-Executing the native executable with `-XX:PrintFlags=` prints a list of flags that can be passed to native executables.
-For various levels of GC logging one may use:
-
-[source,bash]
-----
-$ ./target/debugging-native-1.0.0-SNAPSHOT-runner -XX:PrintFlags=
-...
-  -XX:±PrintGC  Print summary GC information after each collection. Default: - (disabled).
-  -XX:±PrintGCSummary  Print summary GC information after application main method returns. Default: - (disabled).
-  -XX:±PrintGCTimeStamps  Print a time stamp at each collection, if +PrintGC or +VerboseGC. Default: - (disabled).
-  -XX:±PrintGCTimes  Print the time for each of the phases of each collection, if +VerboseGC. Default: - (disabled).
-  -XX:±PrintHeapShape  Print the shape of the heap before and after each collection, if +VerboseGC. Default: - (disabled).
-...
-  -XX:±TraceHeapChunks  Trace heap chunks during collections, if +VerboseGC and +PrintHeapShape. Default: - (disabled).
-  -XX:±VerboseGC  Print more information about the heap before and after each collection. Default: - (disabled).
-----
-
-=== Can I get a heap dump of a native executable? e.g. if it runs out of memory
-
-Unfortunately, generating heap dumps in hprof format,
-which can be opened by tools such as VisualVM or Eclipse MAT,
-can only be achieved with
-https://www.graalvm.org/reference-manual/native-image/NativeImageHeapdump[GraalVM Enterprise Edition].
-Mandrel, which is based on the GraalVM Community Edition, does not have this capability.
-
-Although Mandrel can generate debug symbols, and these contain a fair amount of information about object layouts,
-including which fields are pointers and which are primitives, this information cannot be used as is to detect memory leaks or find dominator objects.
-This is because it has no notion of what constitutes a root pointer, nor of how to recursively trace pointers from those roots.
-
-=== Can I build and run these examples outside of a container in Linux?
-
-Yes, you can.
-In fact, debugging native executables on a Linux bare metal box offers the best possible experience.
-In this kind of environment, root access is not needed, except to install the packages required to run some debug steps,
-or to enable `perf` to gather events at the kernel level.
-
-These are the packages you'll need on your Linux environment to run through the different debugging sections:
-
-[source,bash]
-----
-# dnf (rpm-based)
-sudo dnf install binutils gdb perf perl-open
-# Debian-based distributions:
-sudo apt install binutils gdb perf
-----
-
-=== Generating flame graphs is slow, or produces errors, what can I do?
-
-There are multiple ways in which a native executable produced by Mandrel can be profiled.
-All the methods require you to pass in the `-H:-DeleteLocalSymbols` option.
-
-The method shown in this reference guide generates a binary with DWARF debug information,
-runs it via `perf record`, and then uses `perf script` and flame graph tooling to generate the flame graphs.
-However, the `perf script` post-processing step done on this binary can appear to be slow or can show some DWARF errors.
-
-An alternative method to generate flame graphs is to pass in `-H:+PreserveFramePointer` when generating the native executable, instead of generating the DWARF debug information.
-It instructs the binary to use an extra register for the frame pointer.
-This enables `perf` to do stack walking to profile the runtime behaviour.
-To generate the native executable using these flags, do the following:
-
-[source,bash,subs=attributes+]
-----
-./mvnw package -DskipTests -Pnative \
-    -Dquarkus.native.additional-build-args=-H:+PreserveFramePointer,-H:-DeleteLocalSymbols
-----
-
-To get runtime profiling information out of the native executable, simply do:
-
-[source,bash]
-----
-perf record -F 1009 -g -a ./target/debugging-native-1.0.0-SNAPSHOT-runner
-----
-
-The recommended method for generating runtime profiling information is using the debug information, rather than generating a binary that preserves the frame pointer.
-This is because adding debug information to the native executable build process has no negative runtime performance impact, whereas preserving the frame pointer does.
-
-DWARF debug info is generated in a separate file and can even be omitted in the default deployment, to be transferred and used on demand
-for profiling or debugging purposes.
-Furthermore, because the debug info lives in a separate file, it does not bloat the native executable itself,
-and its presence enables `perf` to show us the relevant source code lines as well.
-To do that, simply call `perf report` with an extra parameter to show source code lines:
-
-[source,bash]
-----
-perf report --stdio -F+srcline
-...
-83.69%  0.00%  GreetingResource.java:20 ...
-...
-83.69%  0.00%  AbstractStringBuilder.java:1025 ...
-...
-83.69%  0.00%  ArraycopySnippets.java:95 ...
-----
-
-The performance penalty of preserving the frame pointer is due to using the extra register for stack walking,
-particularly on `x86_64`, which has fewer registers available than `aarch64`.
-Using this extra register reduces the number of registers that are available for other work,
-which can lead to performance penalties.
-
-=== I think I’ve found a bug in native-image, how can I debug it with the IDE?
-
-Although it is possible to remote debug processes within containers,
-it might be easier to step through native-image by installing Mandrel locally and adding it to the path of the shell process.
-
-Native executable generation is the result of two Java processes that are executed sequentially.
-The first process is very short and its main job is to set things up for the second process.
-The second process is the one that takes care of most of the work.
-The steps to debug one process or the other vary slightly.
-
-Let’s discuss first how to debug the second process,
-which is the one you are most likely to want to debug.
-The starting point for the second process is the `com.oracle.svm.hosted.NativeImageGeneratorRunner` class.
-To debug this process, simply add `--debug-attach=*:8000` as an additional build time argument: - -[source,bash,subs=attributes+] ----- -./mvnw package -DskipTests -Pnative \ - -Dquarkus.native.additional-build-args=--debug-attach=*:8000 ----- - -The starting point for the first process is the `com.oracle.svm.driver.NativeImages` class. -In GraalVM CE distributions, this first process is a binary, so debugging it in the traditional way with a Java IDE is not possible. -However, Mandrel distributions (or locally built GraalVM CE instances) keep this as a normal Java process, -so you can remote debug this process by adding the `--vm.agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:8000` as an additional build argument, e.g. - -[source,bash,subs=attributes+] ----- -$ ./mvnw package -DskipTests -Pnative \ - -Dquarkus.native.additional-build-args=--vm.agentlib:jdwp=transport=dt_socket\\,server=y\\,suspend=y\\,address=*:8000 ----- - -=== Can I use JFR/JMC to debug or profile native binaries? - -https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-guide/about.htm#JFRUH170[Java Flight Recorder (JFR)] and -https://www.oracle.com/java/technologies/jdk-mission-control.html[JDK Mission Control (JMC)] -can be used to profile native binaries since GraalVM CE 21.2.0. -However, JFR in GraalVM is currently significantly limited in capabilities compared to HotSpot. -The custom event API is fully supported, but many VM level features are unavailable. -They will be added in future releases. Current limitations are: - -* Minimal VM level events -* No old object sampling -* No stacktrace tracing -* No Streaming API for JDK 17 - -To use JFR add the application property: `-Dquarkus.native.enable-vm-inspection=true`. -E.g. 
- -[source,bash,subs=attributes+] ----- -./mvnw package -DskipTests -Pnative -Dquarkus.native.container-build=true \ - -Dquarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-mandrel:{mandrel-flavor} \ - -Dquarkus.native.enable-vm-inspection=true ----- - -Once the image is compiled, enable and start JFR via runtime flags: `-XX:+FlightRecorder` and `-XX:StartFlightRecording`. For example: - -[source,bash] ----- -./target/debugging-native-1.0.0-SNAPSHOT-runner \ - -XX:+FlightRecorder \ - -XX:StartFlightRecording="filename=recording.jfr" ----- - -For more details on using JFR, see https://www.graalvm.org/reference-manual/native-image/JFR[here]. diff --git a/_versions/2.7/guides/openapi-swaggerui.adoc b/_versions/2.7/guides/openapi-swaggerui.adoc deleted file mode 100644 index 380c66f01cb..00000000000 --- a/_versions/2.7/guides/openapi-swaggerui.adoc +++ /dev/null @@ -1,539 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using OpenAPI and Swagger UI - -include::./attributes.adoc[] - -This guide explains how your Quarkus application can expose its API description through an OpenAPI specification and -how you can test it via a user-friendly UI named Swagger UI. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this guide, we create a straightforward REST application to demonstrate how fast you can expose your API -specification and benefit from a user interface to test it. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can skip right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. 
-
-The solution is located in the `openapi-swaggerui-quickstart` {quickstarts-tree-url}/openapi-swaggerui-quickstart[directory].
-
-== Creating the Maven project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: openapi-swaggerui-quickstart
-:create-app-extensions: resteasy,resteasy-jackson
-include::includes/devtools/create-app.adoc[]
-
-== Expose a REST Resource
-
-We will create a `Fruit` bean and a `FruitResource` REST resource
-(feel free to take a look at the xref:rest-json.adoc[Writing JSON REST services guide] if you want more details on how to build a REST API with Quarkus).
-
-[source,java]
-----
-package org.acme.openapi.swaggerui;
-
-public class Fruit {
-
-    public String name;
-    public String description;
-
-    public Fruit() {
-    }
-
-    public Fruit(String name, String description) {
-        this.name = name;
-        this.description = description;
-    }
-}
-----
-
-[source,java]
-----
-package org.acme.openapi.swaggerui;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.DELETE;
-import javax.ws.rs.Path;
-import java.util.Collections;
-import java.util.LinkedHashMap;
-import java.util.Set;
-
-@Path("/fruits")
-public class FruitResource {
-
-    private Set<Fruit> fruits = Collections.newSetFromMap(Collections.synchronizedMap(new LinkedHashMap<>()));
-
-    public FruitResource() {
-        fruits.add(new Fruit("Apple", "Winter fruit"));
-        fruits.add(new Fruit("Pineapple", "Tropical fruit"));
-    }
-
-    @GET
-    public Set<Fruit> list() {
-        return fruits;
-    }
-
-    @POST
-    public Set<Fruit> add(Fruit fruit) {
-        fruits.add(fruit);
-        return fruits;
-    }
-
-    @DELETE
-    public Set<Fruit> delete(Fruit fruit) {
-        fruits.removeIf(existingFruit -> existingFruit.name.contentEquals(fruit.name));
-        return fruits;
-    }
-}
-----
-
-You can also create a test:
-
-[source,java]
-----
-package org.acme.openapi.swaggerui;
-
-import io.quarkus.test.junit.QuarkusTest;
-import org.junit.jupiter.api.Test;
-
-import javax.ws.rs.core.MediaType;
-
-import
static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; -import static org.hamcrest.Matchers.containsInAnyOrder; - -@QuarkusTest -public class FruitResourceTest { - - @Test - public void testList() { - given() - .when().get("/fruits") - .then() - .statusCode(200) - .body("$.size()", is(2), - "name", containsInAnyOrder("Apple", "Pineapple"), - "description", containsInAnyOrder("Winter fruit", "Tropical fruit")); - } - - @Test - public void testAdd() { - given() - .body("{\"name\": \"Pear\", \"description\": \"Winter fruit\"}") - .header("Content-Type", MediaType.APPLICATION_JSON) - .when() - .post("/fruits") - .then() - .statusCode(200) - .body("$.size()", is(3), - "name", containsInAnyOrder("Apple", "Pineapple", "Pear"), - "description", containsInAnyOrder("Winter fruit", "Tropical fruit", "Winter fruit")); - - given() - .body("{\"name\": \"Pear\", \"description\": \"Winter fruit\"}") - .header("Content-Type", MediaType.APPLICATION_JSON) - .when() - .delete("/fruits") - .then() - .statusCode(200) - .body("$.size()", is(2), - "name", containsInAnyOrder("Apple", "Pineapple"), - "description", containsInAnyOrder("Winter fruit", "Tropical fruit")); - } -} ----- - -== Expose OpenAPI Specifications - -Quarkus provides the https://github.com/smallrye/smallrye-open-api/[Smallrye OpenAPI] extension compliant with the -https://github.com/eclipse/microprofile-open-api/[MicroProfile OpenAPI] -specification in order to generate your API -https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.0.md[OpenAPI v3 specification]. 
-
-You just need to add the `openapi` extension to your Quarkus application:
-
-:add-extension-extensions: quarkus-smallrye-openapi
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your `pom.xml`:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-smallrye-openapi</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-smallrye-openapi")
-----
-
-Now, we are ready to run our application:
-
-include::includes/devtools/dev.adoc[]
-
-Once your application is started, you can make a request to the default `/q/openapi` endpoint:
-
-[source,shell]
-----
-$ curl http://localhost:8080/q/openapi
-openapi: 3.0.3
-info:
-  title: Generated API
-  version: "1.0"
-paths:
-  /fruits:
-    get:
-      responses:
-        200:
-          description: OK
-          content:
-            application/json: {}
-    post:
-      requestBody:
-        content:
-          application/json:
-            schema:
-              $ref: '#/components/schemas/Fruit'
-      responses:
-        200:
-          description: OK
-          content:
-            application/json: {}
-    delete:
-      requestBody:
-        content:
-          application/json:
-            schema:
-              $ref: '#/components/schemas/Fruit'
-      responses:
-        200:
-          description: OK
-          content:
-            application/json: {}
-components:
-  schemas:
-    Fruit:
-      properties:
-        description:
-          type: string
-        name:
-          type: string
-----
-
-[NOTE]
-====
-If you do not like the default endpoint location `/q/openapi`, you can change it by adding the following configuration to your `application.properties`:
-[source, properties]
-----
-quarkus.smallrye-openapi.path=/swagger
-----
-====
-
-[NOTE]
-====
-You can request the OpenAPI document in JSON format using the `format` query parameter. For example:
-[source, properties]
-----
-/q/openapi?format=json
-----
-====
-
-Hit `CTRL+C` to stop the application.
-
-== Providing Application Level OpenAPI Annotations
-
-There are some MicroProfile OpenAPI annotations which describe global API information, such as the following:
-
-* API Title
-* API Description
-* Version
-* Contact Information
-* License
-
-All of this information (and more) can be included in your Java code by using appropriate OpenAPI annotations
-on a JAX-RS `Application` class. Because a JAX-RS `Application` class is not required in Quarkus, you will
-likely have to create one. It can simply be an empty class that extends `javax.ws.rs.core.Application`. This
-empty class can then be annotated with various OpenAPI annotations, such as `@OpenAPIDefinition`. For example:
-
-[source, java]
-----
-@OpenAPIDefinition(
-    tags = {
-            @Tag(name="widget", description="Widget operations."),
-            @Tag(name="gasket", description="Operations related to gaskets")
-    },
-    info = @Info(
-        title="Example API",
-        version = "1.0.1",
-        contact = @Contact(
-            name = "Example API Support",
-            url = "http://exampleurl.com/contact",
-            email = "techsupport@example.com"),
-        license = @License(
-            name = "Apache 2.0",
-            url = "https://www.apache.org/licenses/LICENSE-2.0.html"))
-)
-public class ExampleApiApplication extends Application {
-}
-----
-
-Another option, a feature provided by SmallRye that is not part of the specification, is to use configuration to add this global API information.
-This way, you do not need to have a JAX-RS `Application` class, and you can name the API differently per environment.
-
-For example, add the following to your `application.properties`:
-
-[source, properties]
-----
-quarkus.smallrye-openapi.info-title=Example API
-%dev.quarkus.smallrye-openapi.info-title=Example API (development)
-%test.quarkus.smallrye-openapi.info-title=Example API (test)
-quarkus.smallrye-openapi.info-version=1.0.1
-quarkus.smallrye-openapi.info-description=Just an example service
-quarkus.smallrye-openapi.info-terms-of-service=Your terms here
-quarkus.smallrye-openapi.info-contact-email=techsupport@example.com
-quarkus.smallrye-openapi.info-contact-name=Example API Support
-quarkus.smallrye-openapi.info-contact-url=http://exampleurl.com/contact
-quarkus.smallrye-openapi.info-license-name=Apache 2.0
-quarkus.smallrye-openapi.info-license-url=https://www.apache.org/licenses/LICENSE-2.0.html
-----
-
-This will give you similar information to the `@OpenAPIDefinition` example above.
-
-== Loading OpenAPI Schema From Static Files
-
-Instead of dynamically creating OpenAPI schemas from annotation scanning, Quarkus also supports serving static OpenAPI documents.
-The static file to serve must be a valid document conforming to the https://swagger.io/docs/specification[OpenAPI specification].
-An OpenAPI document that conforms to the OpenAPI Specification is itself a valid JSON object that can be represented in `yaml` or `json` formats.
-
-To see this in action, we'll put OpenAPI documentation under `META-INF/openapi.yaml` for our `/fruits` endpoints.
-Quarkus also supports alternative <<open-document-paths,paths>> if you prefer.
-
-[source,yaml]
-----
-openapi: 3.0.1
-info:
-  title: Static OpenAPI document of fruits resource
-  description: Fruit resources Open API documentation
-  version: "1.0"
-
-servers:
-  - url: http://localhost:8080/q/openapi
-    description: Optional dev mode server description
-
-paths:
-  /fruits:
-    get:
-      responses:
-        200:
-          description: OK - fruits list
-          content:
-            application/json: {}
-    post:
-      requestBody:
-        content:
-          application/json:
-            schema:
-              $ref: '#/components/schemas/Fruit'
-      responses:
-        200:
-          description: new fruit resource created
-          content:
-            application/json: {}
-    delete:
-      requestBody:
-        content:
-          application/json:
-            schema:
-              $ref: '#/components/schemas/Fruit'
-      responses:
-        200:
-          description: OK - fruit resource deleted
-          content:
-            application/json: {}
-components:
-  schemas:
-    Fruit:
-      properties:
-        description:
-          type: string
-        name:
-          type: string
-----
-By default, a request to `/q/openapi` will serve the combined OpenAPI document from the static file and the model generated from the application endpoint code.
-We can, however, change this to serve only the static OpenAPI document by adding the `mp.openapi.scan.disable=true` configuration to `application.properties`.
-
-Now, a request to the `/q/openapi` endpoint will serve the static OpenAPI document instead of the generated one.
-
-[[open-document-paths]]
-[TIP]
-.About OpenAPI document paths
-====
-Quarkus supports various paths to store your OpenAPI document under. We recommend you place it under `META-INF/openapi.yml`.
-Alternative paths are:
-
-* `META-INF/openapi.yaml`
-* `META-INF/openapi.yml`
-* `META-INF/openapi.json`
-* `WEB-INF/classes/META-INF/openapi.yml`
-* `WEB-INF/classes/META-INF/openapi.yaml`
-* `WEB-INF/classes/META-INF/openapi.json`
-
-Live reload of the static OpenAPI document is supported during development. A modification to your OpenAPI document will be picked up on the fly by Quarkus.
-====
-
-== Changing the OpenAPI version
-
-By default, when the document is generated, the OpenAPI version used will be `3.0.3`. If you use a static file as mentioned above, the version in the file
-will be used. You can also define the version in SmallRye using the following configuration:
-
-[source, properties]
-----
-mp.openapi.extensions.smallrye.openapi=3.0.2
-----
-
-This might be useful if your API goes through a gateway that requires a certain version.
-
-== Auto-generation of Operation Id
-
-The https://swagger.io/docs/specification/paths-and-operations/[Operation Id] can be set using the `@Operation` annotation, and is in many cases useful when using a tool to generate a client stub from the schema.
-The Operation Id is typically used for the method name in the client stub. In SmallRye, you can auto-generate this Operation Id by using the following configuration:
-
-[source, properties]
-----
-mp.openapi.extensions.smallrye.operationIdStrategy=METHOD
-----
-
-Now you do not need to use the `@Operation` annotation. While generating the document, the method name will be used for the Operation Id.
-
-.The strategies available for generating the Operation Id
-|===
-|Property value |Description
-
-|`METHOD`
-|Use the method name.
-
-|`CLASS_METHOD`
-|Use the class name (without the package) plus the method name.
-
-|`PACKAGE_CLASS_METHOD`
-|Use the fully qualified class name (including the package) plus the method name.
-|===
-
-[[dev-mode]]
-== Use Swagger UI for development
-
-When building APIs, developers want to test them quickly. https://swagger.io/tools/swagger-ui/[Swagger UI] is a great tool
-that lets you visualize and interact with your APIs.
-The UI is automatically generated from your OpenAPI specification.
-
-The Quarkus `smallrye-openapi` extension comes with a `swagger-ui` extension embedding a properly configured Swagger UI page.
-
-[NOTE]
-====
-By default, Swagger UI is only available when Quarkus is started in dev or test mode.
-
-If you want to make it available in production too, you can include the following configuration in your `application.properties`:
-[source, properties]
-----
-quarkus.swagger-ui.always-include=true
-----
-
-This is a build-time property; it cannot be changed at runtime after your application is built.
-
-====
-
-By default, Swagger UI is accessible at `/q/swagger-ui`.
-
-You can update the `/swagger-ui` sub path by setting the `quarkus.swagger-ui.path` property in your `application.properties`:
-
-[source, properties]
-----
-quarkus.swagger-ui.path=my-custom-path
-----
-
-[WARNING]
-====
-The value `/` is not allowed as it blocks the application from serving anything else.
-A value prefixed with '/' makes it absolute and not relative.
-====
-
-Now, we are ready to run our application:
-
-[source,bash]
-----
-./mvnw compile quarkus:dev
-----
-
-You can check the Swagger UI path in your application's log:
-
-[source]
-----
-00:00:00,000 INFO  [io.qua.swa.run.SwaggerUiServletExtension] Swagger UI available at /q/swagger-ui
-----
-
-Once your application is started, you can go to http://localhost:8080/q/swagger-ui and play with your API.
-
-You can visualize your API's operations and schemas.
-image:openapi-swaggerui-guide-screenshot01.png[alt=Visualize your API]
-
-You can also interact with your API in order to quickly test it.
-image:openapi-swaggerui-guide-screenshot02.png[alt=Interact with your API]
-
-Hit `CTRL+C` to stop the application.
-
-=== Styling
-You can style Swagger UI by supplying your own logo and CSS.
-
-==== Logo
-
-To supply your own logo, you need to place a file called `logo.png` in `src/main/resources/META-INF/branding`.
-
-This will set the logo for all UIs (not just Swagger UI), so in this case also GraphQL-UI and Health-UI (if included).
-
-If you only want to apply this logo to Swagger UI (and not globally to all UIs), call the file `smallrye-open-api-ui.png`
-rather than `logo.png`.
-
-==== CSS
-
-To supply your own CSS that overrides or enhances styles in the HTML, you need to place a file called `style.css` in `src/main/resources/META-INF/branding`.
-
-This will add that CSS to all UIs (not just Swagger UI), so in this case also GraphQL-UI and Health-UI (if included).
-
-If you only want to apply this style to Swagger UI (and not globally to all UIs), call the file `smallrye-open-api-ui.css`
-rather than `style.css`.
-
-For more information on styling, read this blog entry: https://quarkus.io/blog/stylish-api/
-
-=== Cross Origin Resource Sharing
-
-If you plan to consume this application from a Single Page Application running on a different domain, you will need to configure CORS (Cross-Origin Resource Sharing). Please read the xref:http-reference.adoc#cors-filter[HTTP CORS documentation] for more details.
-
-== Configuration Reference
-
-=== OpenAPI
-
-include::{generated-dir}/config/quarkus-smallrye-openapi.adoc[opts=optional, leveloffset=+1]
-
-=== Swagger UI
-
-include::{generated-dir}/config/quarkus-swaggerui.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/opentelemetry.adoc b/_versions/2.7/guides/opentelemetry.adoc
deleted file mode 100644
index 45e53640f78..00000000000
--- a/_versions/2.7/guides/opentelemetry.adoc
+++ /dev/null
@@ -1,358 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using OpenTelemetry
-
-include::./attributes.adoc[]
-
-This guide explains how your Quarkus application can utilize https://opentelemetry.io/[OpenTelemetry] to provide
-distributed tracing for interactive web applications.
-
-== Prerequisites
-
-:prerequisites-docker-compose:
-include::includes/devtools/prerequisites.adoc[]
-
-== Architecture
-
-In this guide, we create a straightforward REST application to demonstrate distributed tracing.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can skip right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `opentelemetry-quickstart` {quickstarts-tree-url}/opentelemetry-quickstart[directory].
-
-== Creating the Maven project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: opentelemetry-quickstart
-:create-app-extensions: resteasy,quarkus-opentelemetry-exporter-otlp
-include::includes/devtools/create-app.adoc[]
-
-This command generates the Maven project and imports the `quarkus-opentelemetry-exporter-otlp` extension,
-which includes the OpenTelemetry support,
-and a gRPC span exporter for https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md[OTLP].
-
-If you already have your Quarkus project configured, you can add the `quarkus-opentelemetry-exporter-otlp` extension
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: opentelemetry-otlp-exporter
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-opentelemetry-exporter-otlp</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-opentelemetry-exporter-otlp")
-----
-
-=== Examine the JAX-RS resource
-
-Create a `src/main/java/org/acme/opentelemetry/TracedResource.java` file with the following content:
-
-[source,java]
-----
-package org.acme.opentelemetry;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import
javax.ws.rs.core.MediaType; -import org.jboss.logging.Logger; - -@Path("/hello") -public class TracedResource { - - private static final Logger LOG = Logger.getLogger(TracedResource.class); - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - LOG.info("hello"); - return "hello"; - } -} ----- - -Notice that there is no tracing specific code included in the application. By default, requests sent to this -endpoint will be traced without any required code changes. - -=== Create the configuration - -There are two ways to configure the OTLP gRPC Exporter within the application. - -The first approach is by providing the properties within the `src/main/resources/application.properties` file: - -[source,properties] ----- -quarkus.application.name=myservice // <1> -quarkus.opentelemetry.enabled=true // <2> -quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://localhost:4317 // <3> - -quarkus.opentelemetry.tracer.exporter.otlp.headers=Authorization=Bearer my_secret // <4> ----- - -<1> All spans created from the application will include an OpenTelemetry `Resource` indicating the span was created by the `myservice` application. If not set, it will default to the artifact id. -<2> Whether OpenTelemetry is enabled or not. The default is `true`, but shown here to indicate how it can be disabled -<3> gRPC endpoint for sending spans -<4> Optional gRPC headers commonly used for authentication - -== Run the application - -The first step is to configure and start the https://opentelemetry.io/docs/collector/[OpenTelemetry Collector] to receive, process and export telemetry data to https://www.jaegertracing.io/[Jaeger] that will display the captured traces. 
- -Configure the OpenTelemetry Collector by creating an `otel-collector-config.yaml` file: - -[source,yaml,subs="attributes"] ----- -receivers: - otlp: - protocols: - grpc: - endpoint: otel-collector:4317 - otlp/2: - protocols: - grpc: - endpoint: otel-collector:55680 - -exporters: - jaeger: - endpoint: jaeger-all-in-one:14250 - tls: - insecure: true - -processors: - batch: - -extensions: - health_check: - -service: - extensions: [health_check] - pipelines: - traces: - receivers: [otlp,otlp/2] - processors: [batch] - exporters: [jaeger] - ----- - -Start the OpenTelemetry Collector and Jaeger system via the following `docker-compose.yml` file that you can launch via `docker-compose up -d`: - -[source,yaml,subs="attributes"] ----- -version: "2" -services: - - # Jaeger - jaeger-all-in-one: - image: jaegertracing/all-in-one:latest - ports: - - "16686:16686" - - "14268" - - "14250" - # Collector - otel-collector: - image: otel/opentelemetry-collector:latest - command: ["--config=/etc/otel-collector-config.yaml"] - volumes: - - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml - ports: - - "13133:13133" # Health_check extension - - "4317:4317" # OTLP gRPC receiver - - "55680:55680" # OTLP gRPC receiver alternative port - depends_on: - - jaeger-all-in-one ----- - -Now we are ready to run our application. If using `application.properties` to configure the tracer: - -include::includes/devtools/dev.adoc[] - -or if configuring the OTLP gRPC endpoint via JVM arguments: - -:dev-additional-parameters: -Djvm.args="-Dquarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://localhost:55680" -include::includes/devtools/dev.adoc[] -:!dev-additional-parameters: - -With the OpenTelemetry Collector, Jaeger system and application running, you can make a request to the provided endpoint: - -[source,shell] ----- -$ curl http://localhost:8080/hello -hello ----- - -Then visit the http://localhost:16686[Jaeger UI] to see the tracing information. 
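Behind the scenes, the trace context that links the spans you see in Jaeger is carried between services in the W3C `traceparent` HTTP header. A plain-Java sketch of its layout (the class is illustrative; the header value is the example given in the W3C Trace Context specification):

```java
// Dissects a W3C "traceparent" header: version-traceid-spanid-traceflags.
// Illustrative only; the OpenTelemetry propagator does this for you.
public class TraceParentDemo {

    static String[] parse(String traceparent) {
        return traceparent.split("-");
    }

    public static void main(String[] args) {
        // Example value taken from the W3C Trace Context specification.
        String header = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01";
        String[] parts = parse(header);
        System.out.println("version:   " + parts[0]);              // spec version "00"
        System.out.println("trace-id:  " + parts[1]);              // 16 bytes, hex: identifies the whole trace
        System.out.println("parent-id: " + parts[2]);              // 8 bytes, hex: the parent span
        System.out.println("sampled:   " + "01".equals(parts[3])); // trace-flags
    }
}
```

Every service that receives this header continues the same trace, which is why Jaeger can stitch spans from multiple services into one view.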
- -Hit `CTRL+C` to stop the application. - -== Additional configuration -Some use cases will require custom configuration of OpenTelemetry. -These sections outline what is necessary to properly configure it. - -=== ID Generator -The OpenTelemetry extension will, by default, use a random https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#id-generators[ID Generator] -when creating the trace and span identifiers. - -Some vendor-specific protocols need a custom ID Generator. You can override the default one by creating a producer. -The OpenTelemetry extension will detect the `IdGenerator` CDI bean and will use it when configuring the tracer producer. - -[source,java] ----- -@Singleton -public class CustomConfiguration { - - /** Creates a custom IdGenerator for OpenTelemetry */ - @Produces - @Singleton - public IdGenerator idGenerator() { - return AwsXrayIdGenerator.getInstance(); - } -} ----- - -=== Propagators -OpenTelemetry propagates cross-cutting concerns through https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/context/api-propagators.md[propagators] that share an underlying `Context` for storing state and accessing -data across the lifespan of a distributed transaction. - -By default, the OpenTelemetry extension enables the https://www.w3.org/TR/trace-context/[W3C Trace Context] and the https://www.w3.org/TR/baggage/[W3C Baggage] -propagators. You can, however, choose any of the supported OpenTelemetry propagators by setting the `propagators` config that is described in the <<configuration-reference>>. - -[NOTE] -==== -The `b3`, `b3multi`, `jaeger` and `ottrace` propagators will need the https://github.com/open-telemetry/opentelemetry-java/tree/main/extensions/trace-propagators[trace-propagators] -extension to be added as a dependency to your project.
- -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.opentelemetry</groupId> -    <artifactId>opentelemetry-extension-trace-propagators</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.opentelemetry:opentelemetry-extension-trace-propagators") ----- - -The `xray` propagator will need the https://github.com/open-telemetry/opentelemetry-java/tree/main/extensions/aws[aws] extension to be added as a dependency to your project. - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.opentelemetry</groupId> -    <artifactId>opentelemetry-extension-aws</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.opentelemetry:opentelemetry-extension-aws") ----- -==== - -=== Resource -A https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md#resources[resource] is a representation -of the entity that is producing telemetry. It adds attributes to the exported trace to characterize who is producing the trace. - -You can add attributes by setting the `resource-attributes` tracer config that is described in the <<configuration-reference>>. -Since this property can be overridden at runtime, the OpenTelemetry extension will pick up its value following the order of precedence that -is described in the xref:config-reference.adoc#configuration_sources[Quarkus Configuration Reference]. - -If you need to use a custom resource, or one that is provided by one of the https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions[OpenTelemetry SDK Extensions], -you can create multiple resource producers. The OpenTelemetry extension will detect the `Resource` CDI beans and will merge them when configuring the tracer producer.
- -[source,java] ----- -@ApplicationScoped -public class CustomConfiguration { - - @Produces - @ApplicationScoped - public Resource osResource() { - return OsResource.get(); - } - - @Produces - @ApplicationScoped - public Resource ecsResource() { - return EcsResource.get(); - } -} ----- - -=== Sampler -A https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#sampling[sampler] decides whether -a trace should be sampled and exported, controlling noise and overhead by reducing the number of traces collected and sent -to the collector. - -You can set a https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#built-in-samplers[built-in sampler] -simply by setting the desired sampler config described in the <<configuration-reference>>. - -If you need to use a custom sampler, or one that is provided by one of the https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions[OpenTelemetry SDK Extensions], -you can create a sampler producer. The OpenTelemetry extension will detect the `Sampler` CDI bean and will use it when configuring the tracer producer. - -[source,java] ----- -@Singleton -public class CustomConfiguration { - - /** Creates a custom sampler for OpenTelemetry */ - @Produces - @Singleton - public Sampler sampler() { - return JaegerRemoteSampler.builder() - .setServiceName("my-service") - .build(); - } -} ----- - -== Additional instrumentation - -Some Quarkus extensions require additional code to ensure traces are propagated to subsequent execution. -These sections outline what is necessary to propagate traces across process boundaries. - -The instrumentation documented in this section has been tested with Quarkus and works in both standard and native mode.
- -=== SmallRye Reactive Messaging - Kafka - -When using the SmallRye Reactive Messaging extension for Kafka, -we are able to propagate the span into the Kafka Record with: - -[source,java] ----- -Metadata.of(TracingMetadata.withPrevious(Context.current())); ----- - -The above creates a `Metadata` object we can add to the `Message` being produced, -which retrieves the OpenTelemetry `Context` to extract the current span for propagation. - -[[configuration-reference]] -== OpenTelemetry Configuration Reference - -include::{generated-dir}/config/quarkus-opentelemetry.adoc[leveloffset=+1, opts=optional] -include::{generated-dir}/config/quarkus-opentelemetry-exporter-otlp.adoc[leveloffset=+1, opts=optional] diff --git a/_versions/2.7/guides/opentracing.adoc b/_versions/2.7/guides/opentracing.adoc deleted file mode 100644 index 96c05d906f4..00000000000 --- a/_versions/2.7/guides/opentracing.adoc +++ /dev/null @@ -1,324 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using OpenTracing - -include::./attributes.adoc[] - -This guide explains how your Quarkus application can utilize OpenTracing to provide distributed tracing for -interactive web applications. - -== Prerequisites - -:prerequisites-docker: -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this guide, we create a straightforward REST application to demonstrate distributed tracing. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can skip right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `opentracing-quickstart` {quickstarts-tree-url}/opentracing-quickstart[directory]. 
- -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: opentracing-quickstart -:create-app-extensions: resteasy,quarkus-smallrye-opentracing -include::includes/devtools/create-app.adoc[] - -This command generates the Maven project and imports the `smallrye-opentracing` extension, which -includes the OpenTracing support and the default https://www.jaegertracing.io/[Jaeger] tracer. - -If you already have your Quarkus project configured, you can add the `smallrye-opentracing` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: smallrye-opentracing -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-smallrye-opentracing</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-smallrye-opentracing") ----- - -=== Examine the JAX-RS resource - -Create the `src/main/java/org/acme/opentracing/TracedResource.java` file with the following content: - -[source,java] ----- -package org.acme.opentracing; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; -import org.jboss.logging.Logger; - -@Path("/hello") -public class TracedResource { - - private static final Logger LOG = Logger.getLogger(TracedResource.class); - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - LOG.info("hello"); // <1> - return "hello"; - } -} ----- - -<1> The log event carries OpenTracing information as well. In order to print OpenTracing information to the console output, the console log handler with the required OpenTracing event's keys needs to be defined in the `application.properties` file.
- -Notice that there is no tracing specific code included in the application. By default, requests sent to this -endpoint will be traced without any code changes being required. It is also possible to enhance the tracing information. -This can be achieved by https://github.com/smallrye/smallrye-opentracing/[SmallRye OpenTracing], an implementation of -https://github.com/eclipse/microprofile-opentracing/[MicroProfile OpenTracing]. - -=== Create the configuration - -There are two ways to configure the Jaeger tracer within the application. - -The first approach is by providing the properties within the `src/main/resources/application.properties` file: - -[source,properties] ----- -quarkus.jaeger.service-name=myservice // <1> -quarkus.jaeger.sampler-type=const // <2> -quarkus.jaeger.sampler-param=1 // <3> -quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, parentId=%X{parentId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n // <4> ----- - -<1> If the `quarkus.jaeger.service-name` property (or `JAEGER_SERVICE_NAME` environment variable) is not provided, then a "no-op" tracer will be configured, resulting in no tracing data being reported to the backend. -<2> Set up a sampler that uses a constant sampling strategy. -<3> Sample all requests. Set `sampler-param` to a value between 0 and 1, e.g. 0.50, if you do not wish to sample all requests. -<4> Add trace IDs into the log messages. - -The second approach is to supply the properties as https://www.jaegertracing.io/docs/latest/client-features/[environment variables]. These can be specified as `jvm.args` as shown in the following section. - -== Run the application - -The first step is to start the tracing system to collect and display the captured traces: - -[source,bash] ----- -docker run -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 jaegertracing/all-in-one:latest ----- - -Now we are ready to run our application.
If using `application.properties` to configure the tracer: - -include::includes/devtools/dev.adoc[] - -or if configuring the tracer via environment variables: - -:dev-additional-parameters: -Djvm.args="-DJAEGER_SERVICE_NAME=myservice -DJAEGER_SAMPLER_TYPE=const -DJAEGER_SAMPLER_PARAM=1" -include::includes/devtools/dev.adoc[] -:!dev-additional-parameters: - -Once both the application and tracing system are started, you can make a request to the provided endpoint: - -[source,shell] ----- -$ curl http://localhost:8080/hello -hello ----- -When the first request has been submitted, the Jaeger tracer within the app will be initialized: - -[source] ----- -2019-10-16 09:35:23,464 INFO [io.jae.Configuration] (executor-thread-1) Initialized tracer=JaegerTracer(version=Java-0.34.0, serviceName=myservice, reporter=RemoteReporter(sender=UdpSender(), closeEnqueueTimeout=1000), sampler=ConstSampler(decision=true, tags={sampler.type=const, sampler.param=true}), tags={hostname=localhost.localdomain, jaeger.version=Java-0.34.0, ip=127.0.0.1}, zipkinSharedRpcSpan=false, expandExceptionLogs=false, useTraceId128Bit=false) -13:20:11 INFO traceId=1336b2b0a76a96a3, parentId=0, spanId=1336b2b0a76a96a3, sampled=true [or.ac.qu.TracedResource] (executor-thread-63) hello ----- - -Then visit the http://localhost:16686[Jaeger UI] to see the tracing information. - -Hit `CTRL+C` to stop the application. - -== Tracing additional methods - -REST endpoints are automatically traced. -If you need to trace additional methods, you can add the `org.eclipse.microprofile.opentracing.Traced` annotation to CDI bean classes or their non-private methods. - -This can be useful to trace incoming requests from non-REST calls (like a request coming from a message) or to create spans inside a trace. - -Here is an example of a `FrancophoneService` whose methods are traced.
- -[source, java] ----- -import javax.enterprise.context.ApplicationScoped; - -import org.eclipse.microprofile.opentracing.Traced; - -@Traced -@ApplicationScoped -public class FrancophoneService { - - public String bonjour() { - return "bonjour"; - } -} ----- - -NOTE: The best way to add OpenTracing capability to reactive messaging based applications is by adding the `Traced` annotation to all incoming methods. - -== Additional instrumentation - -The https://github.com/opentracing-contrib[OpenTracing API Contributions project] offers additional instrumentation that can be used to add tracing to a large variety of technologies and components. - -The instrumentation documented in this section has been tested with Quarkus and works in both standard and native mode. - -=== JDBC - -The https://github.com/opentracing-contrib/java-jdbc[JDBC instrumentation] will add a span for each JDBC query executed by your application. To enable it, add the following dependency to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.opentracing.contrib</groupId> -    <artifactId>opentracing-jdbc</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.opentracing.contrib:opentracing-jdbc") ----- - -As it uses a dedicated JDBC driver, you must configure your datasource and Hibernate to use it.
- -[source, properties] ----- -quarkus.datasource.db-kind=postgresql -# add ':tracing' to your database URL -quarkus.datasource.jdbc.url=jdbc:tracing:postgresql://localhost:5432/mydatabase -# use the 'TracingDriver' instead of the one for your database -quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver -# configure Hibernate dialect -quarkus.hibernate-orm.dialect=org.hibernate.dialect.PostgreSQLDialect ----- - - -=== Kafka - -The https://github.com/opentracing-contrib/java-kafka-client[Kafka instrumentation] will add a span for each message sent to or received from a Kafka topic. To enable it, add the following dependency to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.opentracing.contrib</groupId> -    <artifactId>opentracing-kafka-client</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.opentracing.contrib:opentracing-kafka-client") ----- - -It contains OpenTracing interceptors that must be registered on Kafka producers and consumers.
- -If you followed the xref:kafka.adoc[Kafka guide], the interceptors can be added on the `generated-price` and the `prices` channels as follows: - -[source, properties] ----- -# Configure the Kafka sink (we write to it) -mp.messaging.outgoing.generated-price.connector=smallrye-kafka -mp.messaging.outgoing.generated-price.topic=prices -mp.messaging.outgoing.generated-price.value.serializer=org.apache.kafka.common.serialization.IntegerSerializer -mp.messaging.outgoing.generated-price.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor - -# Configure the Kafka source (we read from it) -mp.messaging.incoming.prices.connector=smallrye-kafka -mp.messaging.incoming.prices.value.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer -mp.messaging.incoming.prices.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor ----- - -NOTE: `interceptor.classes` accepts a comma-separated list of classes. - - -=== MongoDB client - -The https://github.com/opentracing-contrib/java-mongo-driver[Mongo Driver instrumentation] will add a span for each command executed by your application. To enable it, add the following dependency to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.opentracing.contrib</groupId> -    <artifactId>opentracing-mongo-common</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.opentracing.contrib:opentracing-mongo-common") ----- - -It contains the OpenTracing CommandListener that will be registered on the configuration of the mongo client.
- -Following the xref:mongodb.adoc[MongoDB guide], the command listener will be registered by defining the config property as follows: - -[source, properties] ----- -# Enable tracing commands in mongodb client -quarkus.mongodb.tracing.enabled=true ----- - -=== Zipkin compatibility mode - -To enable the Zipkin compatibility mode, add the following dependency to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.jaegertracing</groupId> -    <artifactId>jaeger-zipkin</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.jaegertracing:jaeger-zipkin") ----- - -It contains the dependencies to convert the request to Zipkin format. -The Zipkin compatibility mode will be activated after defining the config property as follows: - -[source, properties] ----- -# Enable zipkin compatibility mode -quarkus.jaeger.zipkin.compatibility-mode=true ----- - -[[configuration-reference]] -== Jaeger Configuration Reference - -include::{generated-dir}/config/quarkus-jaeger.adoc[leveloffset=+1, opts=optional] diff --git a/_versions/2.7/guides/optaplanner.adoc b/_versions/2.7/guides/optaplanner.adoc deleted file mode 100644 index 5fb26b1228c..00000000000 --- a/_versions/2.7/guides/optaplanner.adoc +++ /dev/null @@ -1,1099 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= OptaPlanner - Using AI to optimize a schedule with OptaPlanner - -include::./attributes.adoc[] -:config-file: application.properties - -This guide walks you through the process of creating a Quarkus application -with https://www.optaplanner.org/[OptaPlanner]'s constraint solving Artificial Intelligence (AI).
- -== What you will build - -You will build a REST application that optimizes a school timetable for students and teachers: - -image::optaplanner-time-table-app-screenshot.png[] - -Your service will assign `Lesson` instances to `Timeslot` and `Room` instances automatically -by using AI to adhere to hard and soft scheduling _constraints_, such as the following examples: - -* A room can have at most one lesson at the same time. -* A teacher can teach at most one lesson at the same time. -* A student can attend at most one lesson at the same time. -* A teacher prefers to teach all lessons in the same room. -* A teacher prefers to teach sequential lessons and dislikes gaps between lessons. -* A student dislikes sequential lessons on the same subject. - -Mathematically speaking, school timetabling is an _NP-hard_ problem. -This means it is difficult to scale. -Simply brute force iterating through all possible combinations takes millions of years -for a non-trivial dataset, even on a supercomputer. -Luckily, AI constraint solvers such as OptaPlanner have advanced algorithms -that deliver a near-optimal solution in a reasonable amount of time. - -[[solution]] -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in {quickstarts-tree-url}/optaplanner-quickstart[the `optaplanner-quickstart` directory]. 
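To make the brute-force claim above concrete: if each lesson can independently take any (timeslot, room) pair, the raw search space is (timeslots × rooms)^lessons. A quick back-of-the-envelope count, with hypothetical numbers that are not from the quickstart:

```java
import java.math.BigInteger;

public class SearchSpaceDemo {

    // Raw search space: every lesson can take any of (timeslots * rooms) values
    public static BigInteger searchSpace(int timeslots, int rooms, int lessons) {
        return BigInteger.valueOf((long) timeslots * rooms).pow(lessons);
    }

    public static void main(String[] args) {
        // Even a tiny problem (10 timeslots, 3 rooms, 30 lessons) yields 30^30
        // combinations -- a 45-digit number, far beyond brute force.
        System.out.println(searchSpace(10, 3, 30));
    }
}
```

This is why the guide reaches for heuristic search rather than enumeration.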
- -== Prerequisites - -:prerequisites-time: 30 minutes -:prerequisites-no-graalvm: -include::includes/devtools/prerequisites.adoc[] - -== The build file and the dependencies - -Use https://code.quarkus.io/[code.quarkus.io] to generate an application -with the following extensions, for Maven or Gradle: - -* RESTEasy JAX-RS (`quarkus-resteasy`) -* RESTEasy Jackson (`quarkus-resteasy-jackson`) -* OptaPlanner (`optaplanner-quarkus`) -* OptaPlanner Jackson (`optaplanner-quarkus-jackson`) - -Alternatively, generate it from the command line: - -:create-app-artifact-id: optaplanner-quickstart -:create-app-extensions: resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson -include::includes/devtools/create-app.adoc[] - -This will include the following dependencies in your build file: - -[source,xml,subs=attributes+,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependencyManagement> -    <dependencies> -        <dependency> -            <groupId>io.quarkus.platform</groupId> -            <artifactId>quarkus-bom</artifactId> -            <version>{quarkus-version}</version> -            <type>pom</type> -            <scope>import</scope> -        </dependency> -        <dependency> -            <groupId>io.quarkus.platform</groupId> -            <artifactId>quarkus-optaplanner-bom</artifactId> -            <version>{quarkus-version}</version> -            <type>pom</type> -            <scope>import</scope> -        </dependency> -    </dependencies> -</dependencyManagement> -<dependencies> -    <dependency> -        <groupId>io.quarkus</groupId> -        <artifactId>quarkus-resteasy</artifactId> -    </dependency> -    <dependency> -        <groupId>io.quarkus</groupId> -        <artifactId>quarkus-resteasy-jackson</artifactId> -    </dependency> -    <dependency> -        <groupId>org.optaplanner</groupId> -        <artifactId>optaplanner-quarkus</artifactId> -    </dependency> -    <dependency> -        <groupId>org.optaplanner</groupId> -        <artifactId>optaplanner-quarkus-jackson</artifactId> -    </dependency> -    <dependency> -        <groupId>io.quarkus</groupId> -        <artifactId>quarkus-junit5</artifactId> -        <scope>test</scope> -    </dependency> -</dependencies> ----- - -[source,gradle,subs=attributes+,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -dependencies { - implementation enforcedPlatform("io.quarkus.platform:quarkus-bom:{quarkus-version}") - implementation enforcedPlatform("io.quarkus.platform:quarkus-optaplanner-bom:{quarkus-version}") - implementation 'io.quarkus:quarkus-resteasy' - implementation 'io.quarkus:quarkus-resteasy-jackson' - implementation 'org.optaplanner:optaplanner-quarkus' - implementation 'org.optaplanner:optaplanner-quarkus-jackson' - - testImplementation 'io.quarkus:quarkus-junit5' -} ----- - -== Model the domain objects - -Your goal is to assign each lesson to a time
slot and a room. -You will create these classes: - -image::optaplanner-time-table-class-diagram-pure.png[] - -=== Timeslot - -The `Timeslot` class represents a time interval when lessons are taught, -for example, `Monday 10:30 - 11:30` or `Tuesday 13:30 - 14:30`. -For simplicity's sake, all time slots have the same duration -and there are no time slots during lunch or other breaks. - -A time slot has no date, because a high school schedule just repeats every week. -So there is no need for https://docs.optaplanner.org/latestFinal/optaplanner-docs/html_single/index.html#continuousPlanning[continuous planning]. - -Create the `src/main/java/org/acme/optaplanner/domain/Timeslot.java` class: - -[source,java] ----- -package org.acme.optaplanner.domain; - -import java.time.DayOfWeek; -import java.time.LocalTime; - -public class Timeslot { - - private DayOfWeek dayOfWeek; - private LocalTime startTime; - private LocalTime endTime; - - public Timeslot() { - } - - public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { - this.dayOfWeek = dayOfWeek; - this.startTime = startTime; - this.endTime = endTime; - } - - public DayOfWeek getDayOfWeek() { - return dayOfWeek; - } - - public LocalTime getStartTime() { - return startTime; - } - - public LocalTime getEndTime() { - return endTime; - } - - @Override - public String toString() { - return dayOfWeek + " " + startTime; - } - -} ----- - -Because no `Timeslot` instances change during solving, a `Timeslot` is called a _problem fact_. -Such classes do not require any OptaPlanner specific annotations. - -Notice the `toString()` method keeps the output short, -so it is easier to read OptaPlanner's `DEBUG` or `TRACE` log, as shown later. - -=== Room - -The `Room` class represents a location where lessons are taught, -for example, `Room A` or `Room B`. -For simplicity's sake, all rooms are without capacity limits -and they can accommodate all lessons. 
- -Create the `src/main/java/org/acme/optaplanner/domain/Room.java` class: - -[source,java] ----- -package org.acme.optaplanner.domain; - -public class Room { - - private String name; - - public Room() { - } - - public Room(String name) { - this.name = name; - } - - public String getName() { - return name; - } - - @Override - public String toString() { - return name; - } - -} ----- - -`Room` instances do not change during solving, so `Room` is also a _problem fact_. - -=== Lesson - -During a lesson, represented by the `Lesson` class, -a teacher teaches a subject to a group of students, -for example, `Math by A.Turing for 9th grade` or `Chemistry by M.Curie for 10th grade`. -If a subject is taught multiple times per week by the same teacher to the same student group, -there are multiple `Lesson` instances that are only distinguishable by `id`. -For example, the 9th grade has six math lessons a week. - -During solving, OptaPlanner changes the `timeslot` and `room` fields of the `Lesson` class, -to assign each lesson to a time slot and a room. -Because OptaPlanner changes these fields, `Lesson` is a _planning entity_: - -image::optaplanner-time-table-class-diagram-annotated.png[] - -Most of the fields in the previous diagram contain input data, except for the orange fields: -A lesson's `timeslot` and `room` fields are unassigned (`null`) in the input data -and assigned (not `null`) in the output data. -OptaPlanner changes these fields during solving. -Such fields are called planning variables. -In order for OptaPlanner to recognize them, -both the `timeslot` and `room` fields require an `@PlanningVariable` annotation. -Their containing class, `Lesson`, requires an `@PlanningEntity` annotation. 
- -Create the `src/main/java/org/acme/optaplanner/domain/Lesson.java` class: - -[source,java] ----- -package org.acme.optaplanner.domain; - -import org.optaplanner.core.api.domain.entity.PlanningEntity; -import org.optaplanner.core.api.domain.lookup.PlanningId; -import org.optaplanner.core.api.domain.variable.PlanningVariable; - -@PlanningEntity -public class Lesson { - - @PlanningId - private Long id; - - private String subject; - private String teacher; - private String studentGroup; - - @PlanningVariable(valueRangeProviderRefs = "timeslotRange") - private Timeslot timeslot; - @PlanningVariable(valueRangeProviderRefs = "roomRange") - private Room room; - - public Lesson() { - } - - public Lesson(Long id, String subject, String teacher, String studentGroup) { - this.id = id; - this.subject = subject; - this.teacher = teacher; - this.studentGroup = studentGroup; - } - - public Long getId() { - return id; - } - - public String getSubject() { - return subject; - } - - public String getTeacher() { - return teacher; - } - - public String getStudentGroup() { - return studentGroup; - } - - public Timeslot getTimeslot() { - return timeslot; - } - - public void setTimeslot(Timeslot timeslot) { - this.timeslot = timeslot; - } - - public Room getRoom() { - return room; - } - - public void setRoom(Room room) { - this.room = room; - } - - @Override - public String toString() { - return subject + "(" + id + ")"; - } - -} ----- - -The `Lesson` class has an `@PlanningEntity` annotation, -so OptaPlanner knows that this class changes during solving -because it contains one or more planning variables. - -The `timeslot` field has an `@PlanningVariable` annotation, -so OptaPlanner knows that it can change its value. -In order to find potential `Timeslot` instances to assign to this field, -OptaPlanner uses the `valueRangeProviderRefs` property to connect to a value range provider -(explained later) that provides a `List` to pick from. 
- -The `room` field also has an `@PlanningVariable` annotation, for the same reasons. - -[NOTE] -==== -Determining the `@PlanningVariable` fields for an arbitrary constraint solving use case -is often challenging the first time. -Read https://docs.optaplanner.org/latestFinal/optaplanner-docs/html_single/index.html#domainModelingGuide[the domain modeling guidelines] -to avoid common pitfalls. -==== - -== Define the constraints and calculate the score - -A _score_ represents the quality of a specific solution. -The higher, the better. -OptaPlanner looks for the best solution, which is the solution with the highest score found in the available time. -It might be the _optimal_ solution. - -Because this use case has hard and soft constraints, -use the `HardSoftScore` class to represent the score: - -* Hard constraints must not be broken. For example: _A room can have at most one lesson at the same time._ -* Soft constraints should not be broken. For example: _A teacher prefers to teach in a single room._ - -Hard constraints are weighted against other hard constraints. -Soft constraints are weighted too, against other soft constraints. -*Hard constraints always outweigh soft constraints*, regardless of their respective weights. - -To calculate the score, you could implement an `EasyScoreCalculator` class: - -[source,java] ----- -public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable, HardSoftScore> { - - @Override - public HardSoftScore calculateScore(TimeTable timeTable) { - List<Lesson> lessonList = timeTable.getLessonList(); - int hardScore = 0; - for (Lesson a : lessonList) { - for (Lesson b : lessonList) { - if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) - && a.getId() < b.getId()) { - // A room can accommodate at most one lesson at the same time. - if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { - hardScore--; - } - // A teacher can teach at most one lesson at the same time.
- if (a.getTeacher().equals(b.getTeacher())) { - hardScore--; - } - // A student can attend at most one lesson at the same time. - if (a.getStudentGroup().equals(b.getStudentGroup())) { - hardScore--; - } - } - } - } - int softScore = 0; - // Soft constraints are only implemented in the optaplanner-quickstarts code - return HardSoftScore.of(hardScore, softScore); - } - -} ----- - -Unfortunately **that does not scale well**, because it is non-incremental: -every time a lesson is assigned to a different time slot or room, -all lessons are re-evaluated to calculate the new score. - -Instead, create a `src/main/java/org/acme/optaplanner/solver/TimeTableConstraintProvider.java` class -to perform incremental score calculation. -It uses OptaPlanner's ConstraintStream API which is inspired by Java Streams and SQL: - -[source,java] ----- -package org.acme.optaplanner.solver; - -import org.acme.optaplanner.domain.Lesson; -import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; -import org.optaplanner.core.api.score.stream.Constraint; -import org.optaplanner.core.api.score.stream.ConstraintFactory; -import org.optaplanner.core.api.score.stream.ConstraintProvider; -import org.optaplanner.core.api.score.stream.Joiners; - -public class TimeTableConstraintProvider implements ConstraintProvider { - - @Override - public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { - return new Constraint[] { - // Hard constraints - roomConflict(constraintFactory), - teacherConflict(constraintFactory), - studentGroupConflict(constraintFactory), - // Soft constraints are only implemented in the optaplanner-quickstarts code - }; - } - - private Constraint roomConflict(ConstraintFactory constraintFactory) { - // A room can accommodate at most one lesson at the same time. - - // Select a lesson ... - return constraintFactory - .from(Lesson.class) - // ... and pair it with another lesson ... - .join(Lesson.class, - // ... in the same timeslot ... 
- Joiners.equal(Lesson::getTimeslot), - // ... in the same room ... - Joiners.equal(Lesson::getRoom), - // ... and the pair is unique (different id, no reverse pairs) ... - Joiners.lessThan(Lesson::getId)) - // ... then penalize each pair with a hard weight. - .penalize("Room conflict", HardSoftScore.ONE_HARD); - } - - private Constraint teacherConflict(ConstraintFactory constraintFactory) { - // A teacher can teach at most one lesson at the same time. - return constraintFactory.from(Lesson.class) - .join(Lesson.class, - Joiners.equal(Lesson::getTimeslot), - Joiners.equal(Lesson::getTeacher), - Joiners.lessThan(Lesson::getId)) - .penalize("Teacher conflict", HardSoftScore.ONE_HARD); - } - - private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { - // A student can attend at most one lesson at the same time. - return constraintFactory.from(Lesson.class) - .join(Lesson.class, - Joiners.equal(Lesson::getTimeslot), - Joiners.equal(Lesson::getStudentGroup), - Joiners.lessThan(Lesson::getId)) - .penalize("Student group conflict", HardSoftScore.ONE_HARD); - } - -} ----- - -The `ConstraintProvider` scales an order of magnitude better than the `EasyScoreCalculator`: __O__(n) instead of __O__(n²). - -== Gather the domain objects in a planning solution - -A `TimeTable` wraps all `Timeslot`, `Room`, and `Lesson` instances of a single dataset. -Furthermore, because it contains all lessons, each with a specific planning variable state, -it is a _planning solution_ and it has a score: - -* If lessons are still unassigned, then it is an _uninitialized_ solution, -for example, a solution with the score `-4init/0hard/0soft`. -* If it breaks hard constraints, then it is an _infeasible_ solution, -for example, a solution with the score `-2hard/-3soft`. -* If it adheres to all hard constraints, then it is a _feasible_ solution, -for example, a solution with the score `0hard/-7soft`. 
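The feasibility rule above ("hard constraints always outweigh soft constraints") boils down to a lexicographic comparison of the two score levels. A minimal, self-contained sketch of that ordering, for illustration only — `HardSoftSketch` is a hypothetical class, not OptaPlanner's real `HardSoftScore`, which additionally supports parsing, arithmetic, and an `init` part:

```java
// Hypothetical, self-contained illustration of hard/soft score ordering.
// OptaPlanner's real HardSoftScore compares the same way conceptually.
public class HardSoftSketch implements Comparable<HardSoftSketch> {

    final int hard;
    final int soft;

    HardSoftSketch(int hard, int soft) {
        this.hard = hard;
        this.soft = soft;
    }

    // A solution is feasible when no hard constraint is broken.
    boolean isFeasible() {
        return hard >= 0;
    }

    // The hard score dominates; the soft score only breaks hard-score ties.
    @Override
    public int compareTo(HardSoftSketch other) {
        if (hard != other.hard) {
            return Integer.compare(hard, other.hard);
        }
        return Integer.compare(soft, other.soft);
    }

    public static void main(String[] args) {
        HardSoftSketch infeasible = new HardSoftSketch(-2, -3); // "-2hard/-3soft"
        HardSoftSketch feasible = new HardSoftSketch(0, -7);    // "0hard/-7soft"
        // "0hard/-7soft" beats "-2hard/-3soft" despite its worse soft score.
        System.out.println(feasible.compareTo(infeasible) > 0); // prints true
        System.out.println(feasible.isFeasible());              // prints true
    }
}
```

This is why the solver never trades a broken hard constraint for any amount of soft-score improvement.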
-
-Create the `src/main/java/org/acme/optaplanner/domain/TimeTable.java` class:
-
-[source,java]
-----
-package org.acme.optaplanner.domain;
-
-import java.util.List;
-
-import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty;
-import org.optaplanner.core.api.domain.solution.PlanningScore;
-import org.optaplanner.core.api.domain.solution.PlanningSolution;
-import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty;
-import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;
-import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
-
-@PlanningSolution
-public class TimeTable {
-
-    @ValueRangeProvider(id = "timeslotRange")
-    @ProblemFactCollectionProperty
-    private List<Timeslot> timeslotList;
-    @ValueRangeProvider(id = "roomRange")
-    @ProblemFactCollectionProperty
-    private List<Room> roomList;
-    @PlanningEntityCollectionProperty
-    private List<Lesson> lessonList;
-
-    @PlanningScore
-    private HardSoftScore score;
-
-    public TimeTable() {
-    }
-
-    public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) {
-        this.timeslotList = timeslotList;
-        this.roomList = roomList;
-        this.lessonList = lessonList;
-    }
-
-    public List<Timeslot> getTimeslotList() {
-        return timeslotList;
-    }
-
-    public List<Room> getRoomList() {
-        return roomList;
-    }
-
-    public List<Lesson> getLessonList() {
-        return lessonList;
-    }
-
-    public HardSoftScore getScore() {
-        return score;
-    }
-
-}
-----
-
-The `TimeTable` class has an `@PlanningSolution` annotation,
-so OptaPlanner knows that this class contains all of the input and output data.
-
-Specifically, this class is the input of the problem:
-
-* A `timeslotList` field with all time slots
-** This is a list of problem facts, because they do not change during solving.
-* A `roomList` field with all rooms
-** This is a list of problem facts, because they do not change during solving.
-* A `lessonList` field with all lessons
-** This is a list of planning entities, because they change during solving.
-** Of each `Lesson`: -*** The values of the `timeslot` and `room` fields are typically still `null`, so unassigned. -They are planning variables. -*** The other fields, such as `subject`, `teacher` and `studentGroup`, are filled in. -These fields are problem properties. - -However, this class is also the output of the solution: - -* A `lessonList` field for which each `Lesson` instance has non-null `timeslot` and `room` fields after solving -* A `score` field that represents the quality of the output solution, for example, `0hard/-5soft` - -=== The value range providers - -The `timeslotList` field is a value range provider. -It holds the `Timeslot` instances which OptaPlanner can pick from to assign to the `timeslot` field of `Lesson` instances. -The `timeslotList` field has an `@ValueRangeProvider` annotation to connect the `@PlanningVariable` with the `@ValueRangeProvider`, -by matching the value of the `id` property with the value of the `valueRangeProviderRefs` property of the `@PlanningVariable` annotation in the `Lesson` class. - -Following the same logic, the `roomList` field also has an `@ValueRangeProvider` annotation. - -=== The problem fact and planning entity properties - -Furthermore, OptaPlanner needs to know which `Lesson` instances it can change -as well as how to retrieve the `Timeslot` and `Room` instances used for score calculation -by your `TimeTableConstraintProvider`. - -The `timeslotList` and `roomList` fields have an `@ProblemFactCollectionProperty` annotation, -so your `TimeTableConstraintProvider` can select _from_ those instances. - -The `lessonList` has an `@PlanningEntityCollectionProperty` annotation, -so OptaPlanner can change them during solving -and your `TimeTableConstraintProvider` can select _from_ those too. - -== Create the solver service - -Now you are ready to put everything together and create a REST service. -But solving planning problems on REST threads causes HTTP timeout issues. 
-Therefore, the Quarkus extension injects a `SolverManager` instance,
-which runs solvers in a separate thread pool
-and can solve multiple datasets in parallel.
-
-Create the `src/main/java/org/acme/optaplanner/rest/TimeTableResource.java` class:
-
-[source,java]
-----
-package org.acme.optaplanner.rest;
-
-import java.util.UUID;
-import java.util.concurrent.ExecutionException;
-import javax.inject.Inject;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-
-import org.acme.optaplanner.domain.TimeTable;
-import org.optaplanner.core.api.solver.SolverJob;
-import org.optaplanner.core.api.solver.SolverManager;
-
-@Path("/timeTable")
-public class TimeTableResource {
-
-    @Inject
-    SolverManager<TimeTable, UUID> solverManager;
-
-    @POST
-    @Path("/solve")
-    public TimeTable solve(TimeTable problem) {
-        UUID problemId = UUID.randomUUID();
-        // Submit the problem to start solving
-        SolverJob<TimeTable, UUID> solverJob = solverManager.solve(problemId, problem);
-        TimeTable solution;
-        try {
-            // Wait until the solving ends
-            solution = solverJob.getFinalBestSolution();
-        } catch (InterruptedException | ExecutionException e) {
-            throw new IllegalStateException("Solving failed.", e);
-        }
-        return solution;
-    }
-
-}
-----
-
-For simplicity's sake, this initial implementation waits for the solver to finish,
-which can still cause an HTTP timeout.
-The _complete_ implementation avoids HTTP timeouts much more elegantly.
-
-== Set the termination time
-
-Without a termination setting or a termination event, the solver runs forever.
-To avoid that, limit the solving time to five seconds.
-That is short enough to avoid the HTTP timeout.
-
-Create the `src/main/resources/application.properties` file:
-
-[source,properties]
-----
-# The solver runs only for 5 seconds to avoid an HTTP timeout in this simple implementation.
-# It's recommended to run for at least 5 minutes ("5m") otherwise.
-quarkus.optaplanner.solver.termination.spent-limit=5s ----- - - -== Run the application - -First start the application: - -include::includes/devtools/dev.adoc[] - -=== Try the application - -Now that the application is running, you can test the REST service. -You can use any REST client you wish. -The following example uses the Linux command `curl` to send a POST request: - -[source,shell] ----- -$ curl -i -X POST http://localhost:8080/timeTable/solve -H "Content-Type:application/json" -d '{"timeslotList":[{"dayOfWeek":"MONDAY","startTime":"08:30:00","endTime":"09:30:00"},{"dayOfWeek":"MONDAY","startTime":"09:30:00","endTime":"10:30:00"}],"roomList":[{"name":"Room A"},{"name":"Room B"}],"lessonList":[{"id":1,"subject":"Math","teacher":"A. Turing","studentGroup":"9th grade"},{"id":2,"subject":"Chemistry","teacher":"M. Curie","studentGroup":"9th grade"},{"id":3,"subject":"French","teacher":"M. Curie","studentGroup":"10th grade"},{"id":4,"subject":"History","teacher":"I. Jones","studentGroup":"10th grade"}]}' ----- - -After about five seconds, according to the termination spent time defined in your `application.properties`, -the service returns an output similar to the following example: - -[source] ----- -HTTP/1.1 200 -Content-Type: application/json -... - -{"timeslotList":...,"roomList":...,"lessonList":[{"id":1,"subject":"Math","teacher":"A. Turing","studentGroup":"9th grade","timeslot":{"dayOfWeek":"MONDAY","startTime":"08:30:00","endTime":"09:30:00"},"room":{"name":"Room A"}},{"id":2,"subject":"Chemistry","teacher":"M. Curie","studentGroup":"9th grade","timeslot":{"dayOfWeek":"MONDAY","startTime":"09:30:00","endTime":"10:30:00"},"room":{"name":"Room A"}},{"id":3,"subject":"French","teacher":"M. Curie","studentGroup":"10th grade","timeslot":{"dayOfWeek":"MONDAY","startTime":"08:30:00","endTime":"09:30:00"},"room":{"name":"Room B"}},{"id":4,"subject":"History","teacher":"I. 
Jones","studentGroup":"10th grade","timeslot":{"dayOfWeek":"MONDAY","startTime":"09:30:00","endTime":"10:30:00"},"room":{"name":"Room B"}}],"score":"0hard/0soft"} ----- - -Notice that your application assigned all four lessons to one of the two time slots and one of the two rooms. -Also notice that it conforms to all hard constraints. -For example, M. Curie's two lessons are in different time slots. - -On the server side, the `info` log show what OptaPlanner did in those five seconds: - -[source,options="nowrap"] ----- -... Solving started: time spent (33), best score (-8init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). -... Construction Heuristic phase (0) ended: time spent (73), best score (0hard/0soft), score calculation speed (459/sec), step total (4). -... Local Search phase (1) ended: time spent (5000), best score (0hard/0soft), score calculation speed (28949/sec), step total (28398). -... Solving ended: time spent (5000), best score (0hard/0soft), score calculation speed (28524/sec), phase total (2), environment mode (REPRODUCIBLE). ----- - -=== Test the application - -A good application includes test coverage. - -==== Test the constraints - -To test each constraint in isolation, use a `ConstraintVerifier` in unit tests. -It tests each constraint's corner cases in isolation from the other tests, -which lowers maintenance when adding a new constraint with proper test coverage. 
-
-Add an `optaplanner-test` dependency in your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-  <groupId>org.optaplanner</groupId>
-  <artifactId>optaplanner-test</artifactId>
-  <scope>test</scope>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-testImplementation("org.optaplanner:optaplanner-test")
-----
-
-Create the `src/test/java/org/acme/optaplanner/solver/TimeTableConstraintProviderTest.java` class:
-
-[source,java]
-----
-package org.acme.optaplanner.solver;
-
-import java.time.DayOfWeek;
-import java.time.LocalTime;
-
-import javax.inject.Inject;
-
-import io.quarkus.test.junit.QuarkusTest;
-import org.acme.optaplanner.domain.Lesson;
-import org.acme.optaplanner.domain.Room;
-import org.acme.optaplanner.domain.TimeTable;
-import org.acme.optaplanner.domain.Timeslot;
-import org.junit.jupiter.api.Test;
-import org.optaplanner.test.api.score.stream.ConstraintVerifier;
-
-@QuarkusTest
-class TimeTableConstraintProviderTest {
-
-    private static final Room ROOM = new Room("Room1");
-    private static final Timeslot TIMESLOT1 = new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 0), LocalTime.NOON);
-    private static final Timeslot TIMESLOT2 = new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9, 0), LocalTime.NOON);
-
-    @Inject
-    ConstraintVerifier<TimeTableConstraintProvider, TimeTable> constraintVerifier;
-
-    @Test
-    void roomConflict() {
-        Lesson firstLesson = new Lesson(1, "Subject1", "Teacher1", "Group1");
-        Lesson conflictingLesson = new Lesson(2, "Subject2", "Teacher2", "Group2");
-        Lesson nonConflictingLesson = new Lesson(3, "Subject3", "Teacher3", "Group3");
-
-        firstLesson.setRoom(ROOM);
-        firstLesson.setTimeslot(TIMESLOT1);
-
-        conflictingLesson.setRoom(ROOM);
-        conflictingLesson.setTimeslot(TIMESLOT1);
-
-        nonConflictingLesson.setRoom(ROOM);
-        nonConflictingLesson.setTimeslot(TIMESLOT2);
-
-        constraintVerifier.verifyThat(TimeTableConstraintProvider::roomConflict)
-                .given(firstLesson, conflictingLesson, nonConflictingLesson)
.penalizesBy(1); - } - -} ----- - -This test verifies that the constraint `TimeTableConstraintProvider::roomConflict`, -when given three lessons in the same room, where two lessons have the same timeslot, -it penalizes with a match weight of `1`. -So with a constraint weight of `10hard` it would reduce the score by `-10hard`. - -Notice how `ConstraintVerifier` ignores the constraint weight during testing - even -if those constraint weights are hard coded in the `ConstraintProvider` - because -constraints weights change regularly before going into production. -This way, constraint weight tweaking does not break the unit tests. - -==== Test the solver - -In a JUnit test, generate a test dataset and send it to the `TimeTableResource` to solve. - -Create the `src/test/java/org/acme/optaplanner/rest/TimeTableResourceTest.java` class: - -[source,java] ----- -package org.acme.optaplanner.rest; - -import java.time.DayOfWeek; -import java.time.LocalTime; -import java.util.ArrayList; -import java.util.List; - -import javax.inject.Inject; - -import io.quarkus.test.junit.QuarkusTest; -import org.acme.optaplanner.domain.Room; -import org.acme.optaplanner.domain.Timeslot; -import org.acme.optaplanner.domain.Lesson; -import org.acme.optaplanner.domain.TimeTable; -import org.acme.optaplanner.rest.TimeTableResource; -import org.junit.jupiter.api.Test; -import org.junit.jupiter.api.Timeout; - -import static org.junit.jupiter.api.Assertions.assertFalse; -import static org.junit.jupiter.api.Assertions.assertNotNull; -import static org.junit.jupiter.api.Assertions.assertTrue; - -@QuarkusTest -public class TimeTableResourceTest { - - @Inject - TimeTableResource timeTableResource; - - @Test - @Timeout(600_000) - public void solve() { - TimeTable problem = generateProblem(); - TimeTable solution = timeTableResource.solve(problem); - assertFalse(solution.getLessonList().isEmpty()); - for (Lesson lesson : solution.getLessonList()) { - assertNotNull(lesson.getTimeslot()); - 
assertNotNull(lesson.getRoom());
-        }
-        assertTrue(solution.getScore().isFeasible());
-    }
-
-    private TimeTable generateProblem() {
-        List<Timeslot> timeslotList = new ArrayList<>();
-        timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30)));
-        timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30)));
-        timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30)));
-        timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30)));
-        timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30)));
-
-        List<Room> roomList = new ArrayList<>();
-        roomList.add(new Room("Room A"));
-        roomList.add(new Room("Room B"));
-        roomList.add(new Room("Room C"));
-
-        List<Lesson> lessonList = new ArrayList<>();
-        lessonList.add(new Lesson(101L, "Math", "B. May", "9th grade"));
-        lessonList.add(new Lesson(102L, "Physics", "M. Curie", "9th grade"));
-        lessonList.add(new Lesson(103L, "Geography", "M. Polo", "9th grade"));
-        lessonList.add(new Lesson(104L, "English", "I. Jones", "9th grade"));
-        lessonList.add(new Lesson(105L, "Spanish", "P. Cruz", "9th grade"));
-
-        lessonList.add(new Lesson(201L, "Math", "B. May", "10th grade"));
-        lessonList.add(new Lesson(202L, "Chemistry", "M. Curie", "10th grade"));
-        lessonList.add(new Lesson(203L, "History", "I. Jones", "10th grade"));
-        lessonList.add(new Lesson(204L, "English", "P. Cruz", "10th grade"));
-        lessonList.add(new Lesson(205L, "French", "M. Curie", "10th grade"));
-        return new TimeTable(timeslotList, roomList, lessonList);
-    }
-
-}
-----
-
-This test verifies that after solving, all lessons are assigned to a time slot and a room.
-It also verifies that it found a feasible solution (no hard constraints broken).
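Feasibility here means the pairwise conflict count is zero, using the same logic as the `EasyScoreCalculator` shown earlier. A self-contained sketch of the room-conflict count on plain string triples — the data layout and names are illustrative only, not the real domain classes:

```java
import java.util.Objects;

// Self-contained sketch of the pairwise room-conflict check behind the hard score.
// Each lesson is reduced to { id, timeslot, room }; values are illustrative only.
public class ConflictSketch {

    static int countRoomConflicts(String[][] lessons) {
        int conflicts = 0;
        for (String[] a : lessons) {
            for (String[] b : lessons) {
                // Count each unordered pair once via the id comparison,
                // mirroring Joiners.lessThan(Lesson::getId) above.
                if (a[0].compareTo(b[0]) < 0
                        && Objects.equals(a[1], b[1])    // same timeslot
                        && Objects.equals(a[2], b[2])) { // same room
                    conflicts++;
                }
            }
        }
        return conflicts;
    }

    public static void main(String[] args) {
        String[][] lessons = {
                { "1", "MON 08:30", "Room A" },
                { "2", "MON 08:30", "Room A" }, // conflicts with lesson 1
                { "3", "MON 09:30", "Room A" }, // different timeslot: no conflict
        };
        System.out.println(countRoomConflicts(lessons)); // prints 1
    }
}
```

A feasible timetable is one where this count (and the teacher and student-group equivalents) is zero.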
- -Add test properties to the `src/main/resources/application.properties` file: - -[source,properties] ----- -quarkus.optaplanner.solver.termination.spent-limit=5s - -# Effectively disable spent-time termination in favor of the best-score-limit -%test.quarkus.optaplanner.solver.termination.spent-limit=1h -%test.quarkus.optaplanner.solver.termination.best-score-limit=0hard/*soft ----- - -Normally, the solver finds a feasible solution in less than 200 milliseconds. -Notice how the `application.properties` overwrites the solver termination during tests -to terminate as soon as a feasible solution (`0hard/*soft`) is found. -This avoids hard coding a solver time, because the unit test might run on arbitrary hardware. -This approach ensures that the test runs long enough to find a feasible solution, even on slow machines. -But it does not run a millisecond longer than it strictly must, even on fast machines. - -=== Logging - -When adding constraints in your `ConstraintProvider`, -keep an eye on the _score calculation speed_ in the `info` log, -after solving for the same amount of time, to assess the performance impact: - -[source] ----- -... Solving ended: ..., score calculation speed (29455/sec), ... ----- - -To understand how OptaPlanner is solving your problem internally, -change the logging in the `application.properties` file or with a `-D` system property: - -[source,properties] ----- -quarkus.log.category."org.optaplanner".level=debug ----- - -Use `debug` logging to show every _step_: - -[source,options="nowrap"] ----- -... Solving started: time spent (67), best score (-20init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). -... CH step (0), time spent (128), score (-18init/0hard/0soft), selected move count (15), picked move ([Math(101) {null -> Room A}, Math(101) {null -> MONDAY 08:30}]). -... 
CH step (1), time spent (145), score (-16init/0hard/0soft), selected move count (15), picked move ([Physics(102) {null -> Room A}, Physics(102) {null -> MONDAY 09:30}]).
-...
-----
-
-Use `trace` logging to show every _step_ and every _move_ per step.
-
-== Summary
-
-Congratulations!
-You have just developed a Quarkus application with https://www.optaplanner.org/[OptaPlanner]!
-
-== Further improvements: Database and UI integration
-
-Now try adding database and UI integration:
-
-. Store `Timeslot`, `Room`, and `Lesson` in the database with xref:hibernate-orm-panache.adoc[Hibernate and Panache].
-
-. xref:rest-json.adoc[Expose them through REST].
-
-. Adjust the `TimeTableResource` to read and write a `TimeTable` instance in a single transaction
-and use those accordingly:
-+
-[source,java]
-----
-package org.acme.optaplanner.rest;
-
-import javax.inject.Inject;
-import javax.transaction.Transactional;
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-
-import io.quarkus.panache.common.Sort;
-import org.acme.optaplanner.domain.Lesson;
-import org.acme.optaplanner.domain.Room;
-import org.acme.optaplanner.domain.TimeTable;
-import org.acme.optaplanner.domain.Timeslot;
-import org.optaplanner.core.api.score.ScoreManager;
-import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
-import org.optaplanner.core.api.solver.SolverManager;
-import org.optaplanner.core.api.solver.SolverStatus;
-
-@Path("/timeTable")
-public class TimeTableResource {
-
-    public static final Long SINGLETON_TIME_TABLE_ID = 1L;
-
-    @Inject
-    SolverManager<TimeTable, Long> solverManager;
-    @Inject
-    ScoreManager<TimeTable, HardSoftScore> scoreManager;
-
-    // To try, open http://localhost:8080/timeTable
-    @GET
-    public TimeTable getTimeTable() {
-        // Get the solver status before loading the solution
-        // to avoid the race condition that the solver terminates between them
-        SolverStatus solverStatus = getSolverStatus();
-        TimeTable solution = findById(SINGLETON_TIME_TABLE_ID);
scoreManager.updateScore(solution); // Sets the score - solution.setSolverStatus(solverStatus); - return solution; - } - - @POST - @Path("/solve") - public void solve() { - solverManager.solveAndListen(SINGLETON_TIME_TABLE_ID, - this::findById, - this::save); - } - - public SolverStatus getSolverStatus() { - return solverManager.getSolverStatus(SINGLETON_TIME_TABLE_ID); - } - - @POST - @Path("/stopSolving") - public void stopSolving() { - solverManager.terminateEarly(SINGLETON_TIME_TABLE_ID); - } - - @Transactional - protected TimeTable findById(Long id) { - if (!SINGLETON_TIME_TABLE_ID.equals(id)) { - throw new IllegalStateException("There is no timeTable with id (" + id + ")."); - } - // Occurs in a single transaction, so each initialized lesson references the same timeslot/room instance - // that is contained by the timeTable's timeslotList/roomList. - return new TimeTable( - Timeslot.listAll(Sort.by("dayOfWeek").and("startTime").and("endTime").and("id")), - Room.listAll(Sort.by("name").and("id")), - Lesson.listAll(Sort.by("subject").and("teacher").and("studentGroup").and("id"))); - } - - @Transactional - protected void save(TimeTable timeTable) { - for (Lesson lesson : timeTable.getLessonList()) { - // TODO this is awfully naive: optimistic locking causes issues if called by the SolverManager - Lesson attachedLesson = Lesson.findById(lesson.getId()); - attachedLesson.setTimeslot(lesson.getTimeslot()); - attachedLesson.setRoom(lesson.getRoom()); - } - } - -} ----- -+ -For simplicity's sake, this code handles only one `TimeTable` instance, -but it is straightforward to enable multi-tenancy and handle multiple `TimeTable` instances of different high schools in parallel. -+ -The `getTimeTable()` method returns the latest timetable from the database. -It uses the `ScoreManager` (which is automatically injected) -to calculate the score of that timetable, so the UI can show the score. 
-+ -The `solve()` method starts a job to solve the current timetable and store the time slot and room assignments in the database. -It uses the `SolverManager.solveAndListen()` method to listen to intermediate best solutions -and update the database accordingly. -This enables the UI to show progress while the backend is still solving. - -. Adjust the `TimeTableResourceTest` instance accordingly, now that the `solve()` method returns immediately. -Poll for the latest solution until the solver finishes solving: -+ -[source,java] ----- -package org.acme.optaplanner.rest; - -import javax.inject.Inject; - -import io.quarkus.test.junit.QuarkusTest; -import org.acme.optaplanner.domain.Lesson; -import org.acme.optaplanner.domain.TimeTable; -import org.junit.jupiter.api.Test; -import org.junit.jupiter.api.Timeout; -import org.optaplanner.core.api.solver.SolverStatus; - -import static org.junit.jupiter.api.Assertions.assertFalse; -import static org.junit.jupiter.api.Assertions.assertNotNull; -import static org.junit.jupiter.api.Assertions.assertTrue; - -@QuarkusTest -public class TimeTableResourceTest { - - @Inject - TimeTableResource timeTableResource; - - @Test - @Timeout(600_000) - public void solveDemoDataUntilFeasible() throws InterruptedException { - timeTableResource.solve(); - TimeTable timeTable = timeTableResource.getTimeTable(); - while (timeTable.getSolverStatus() != SolverStatus.NOT_SOLVING) { - // Quick polling (not a Test Thread Sleep anti-pattern) - // Test is still fast on fast machines and doesn't randomly fail on slow machines. - Thread.sleep(20L); - timeTable = timeTableResource.getTimeTable(); - } - assertFalse(timeTable.getLessonList().isEmpty()); - for (Lesson lesson : timeTable.getLessonList()) { - assertNotNull(lesson.getTimeslot()); - assertNotNull(lesson.getRoom()); - } - assertTrue(timeTable.getScore().isFeasible()); - } - -} ----- - -. Build an attractive web UI on top of these REST methods to visualize the timetable. 
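The adjusted test relies on a generic poll-until-terminal pattern: re-read the status, sleep briefly, and stop once a terminal state is reached. A self-contained sketch of that pattern, for illustration only — the `done` supplier stands in for checking `getSolverStatus() != SolverStatus.NOT_SOLVING`, and the names are hypothetical:

```java
import java.util.function.BooleanSupplier;

// Self-contained sketch of the poll-until-terminal pattern used in the test above.
public class PollSketch {

    // Polls until done() returns true or the deadline passes; returns whether it finished.
    static boolean pollUntil(BooleanSupplier done, long timeoutMillis, long intervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!done.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                return false; // gave up: still "solving"
            }
            try {
                // A short sleep keeps the test fast on fast machines
                // without burning CPU on slow ones.
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Simulated solver that "finishes" after 100 ms.
        boolean finished = pollUntil(() -> System.currentTimeMillis() - start > 100, 5_000, 20);
        System.out.println(finished); // prints true
    }
}
```

Bounding the poll with a deadline (here via the timeout parameter, in the guide via JUnit's `@Timeout`) is what keeps a regression from hanging the build forever.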
-
-Take a look at {quickstarts-tree-url}/optaplanner-quickstart[the quickstart source code] to see how this all turns out.
diff --git a/_versions/2.7/guides/performance-measure.adoc b/_versions/2.7/guides/performance-measure.adoc
deleted file mode 100644
index 88058173fe2..00000000000
--- a/_versions/2.7/guides/performance-measure.adoc
+++ /dev/null
@@ -1,244 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Measuring Performance
-
-include::./attributes.adoc[]
-
-This guide covers:
-
-* how we measure memory usage
-* how we measure startup time
-* which additional flags Quarkus applies to native-image by default
-
-All of our tests are run on the same hardware for a given batch.
-It goes without saying but it's better when you say it.
-
-== How do we measure memory usage
-
-When measuring the footprint of a Quarkus application, we measure https://en.wikipedia.org/wiki/Resident_set_size[Resident Set Size (RSS)]
-and not the JVM heap size, which is only a small part of the overall problem.
-The JVM not only allocates native memory for heap (`-Xms`, `-Xmx`) but also structures required by the JVM to run your application.
Depending on the JVM implementation, the total memory allocated for an application will include, but is not limited to:
-
- * Heap space
- * Class metadata
- * Thread stacks
- * Compiled code
- * Garbage collection
-
-=== Native Memory Tracking
-
-In order to view the native memory used by the JVM, you can enable the https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html[Native Memory Tracking] (NMT) feature in HotSpot.
-
-Enable NMT on the command line:
-
-    -XX:NativeMemoryTracking=[off | summary | detail] <1>
-
-<1> NOTE: this feature will add a 5-10% performance overhead
-
-It is then possible to use `jcmd` to dump a report of the native memory usage of the HotSpot JVM running your application:
-
-    jcmd <pid> VM.native_memory [summary | detail | baseline | summary.diff | detail.diff | shutdown] [scale=KB | MB | GB]
-
-=== Cloud Native Memory Limits
-
-It is important to measure the whole memory to see the impact of a Cloud Native application.
-It is particularly true of container environments which will kill a process based on its full RSS memory usage.
-
-Likewise, don't fall into the trap of only measuring private memory, which is what the process uses that is not shareable with other processes.
-While private memory might be useful in an environment deploying many different applications (and thus sharing memory a lot),
-it is very misleading in environments like Kubernetes/OpenShift.
-
-=== Measuring Memory Correctly on Docker
-
-In order to measure memory correctly, **DO NOT use `docker stats` or anything derived from it** (e.g. ctop). This approach only measures a subset of the in-use resident pages, while the Linux kernel, cgroups and cloud orchestration providers will utilize the full resident set in their accounting (determining whether a process is over the limits and should be killed).
-
-To measure accurately, a similar set of steps for measuring RSS on Linux should be performed.
The docker `top` command allows running a `ps` command on the container host machine against the processes in the container instance. By utilizing this in combination with formatting output parameters, the RSS value can be returned:
-
-    docker top <container id> -o pid,rss,args
-
-For example:
-
-[source,shell]
-----
- $ docker top $(docker ps -q --filter ancestor=quarkus/myapp) -o pid,rss,args
-
-PID    RSS    COMMAND
-2531   27m    ./application -Dquarkus.http.host=0.0.0.0
-----
-
-Alternatively, one can jump directly into a privileged shell (root on the host), and execute a `ps` command directly:
-
-[source,shell]
-----
- $ docker run -it --rm --privileged --pid=host justincormack/nsenter1 /bin/ps -e -o pid,rss,args | grep application
- 2531 27m ./application -Dquarkus.http.host=0.0.0.0
-----
-
-If you happen to be running on Linux, you can execute the `ps` command directly, since your shell runs on the same host as the container:
-
-    ps -e -o pid,rss,args | grep application
-
-=== Platform Specific Memory Reporting
-
-In order to not incur the performance overhead of running with NMT enabled, we measure the total RSS of a JVM application using tools specific to each platform.
-
-Linux::
-
- The Linux https://linux.die.net/man/1/pmap[pmap] and https://linux.die.net/man/1/ps[ps] tools provide a report on the native memory map for a process.
-
-[source,shell]
-----
- $ ps -o pid,rss,command -p <pid>
-
-   PID   RSS COMMAND
- 11229 12628 ./target/getting-started-1.0.0-SNAPSHOT-runner
-----
-
-[source,shell]
-----
- $ pmap -x <pid>
-
- 13150:   /data/quarkus-application -Xmx100m -Xmn70m
- Address           Kbytes     RSS   Dirty Mode  Mapping
- 0000000000400000   55652   30592       0 r-x-- quarkus-application
- 0000000003c58000       4       4       4 r-x-- quarkus-application
- 0000000003c59000    5192    4628     748 rwx-- quarkus-application
- 00000000054c0000     912     156     156 rwx--   [ anon ]
- ...
- 00007fcd13400000    1024    1024    1024 rwx--   [ anon ]
- ...
- 00007fcd13952000       8       4       0 r-x-- libfreebl3.so
- ...
- ----------------  ------- ------- -------
- total kB          9726508  256092  220900
-----
-
-Each memory region that has been allocated for the process is listed:
-
-- Address: Start address of virtual address space
-- Kbytes: Size (kilobytes) of virtual address space reserved for region
-- RSS: Resident set size (kilobytes). This is the measure of how much memory space is actually being used
-- Dirty: dirty pages (both shared and private) in kilobytes
-- Mode: Access mode for memory region
-- Mapping: Includes application regions and Shared Object (.so) mappings for process
-
-The Total RSS (kB) line reports the total native memory the process is using.
-
-macOS::
-On macOS, you can use `ps x -o pid,rss,command -p <pid>`, which lists the RSS for a given process in KB (1024 bytes).
-
-[source,shell]
-----
-$ ps x -o pid,rss,command -p 57160
-
-  PID    RSS COMMAND
-57160 288548 /Applications/IntelliJ IDEA CE.app/Contents/jdk/Contents/Home/jre/bin/java
-----
-
-Which means IntelliJ IDEA consumes 281.8 MB of resident memory.
-
-== How do we measure startup time
-
-Some frameworks use aggressive lazy initialization techniques.
-It is important to measure the startup time to first request to most accurately reflect how long a framework needs to start.
-Otherwise, you will miss the time the framework _actually_ takes to initialize.
-
-Here is how we measure startup time in our tests.
-
-We create a sample application that logs timestamps for certain points in the application lifecycle.
-
-[source, java]
-----
-@Path("/")
-public class GreetingEndpoint {
-
-    private static final String template = "Hello, %s!";
-
-    @GET
-    @Path("/greeting")
-    @Produces(MediaType.APPLICATION_JSON)
-    public Greeting greeting(@QueryParam("name") String name) {
-        System.out.println(new SimpleDateFormat("HH:mm:ss.SSS").format(new java.util.Date(System.currentTimeMillis())));
-        String suffix = name != null ?
name : "World"; - return new Greeting(String.format(template, suffix)); - } - - void onStart(@Observes StartupEvent startup) { - System.out.println(new SimpleDateFormat("HH:mm:ss.SSS").format(new Date())); - } -} ----- - -We start looping in a shell, sending requests to the rest endpoint of the sample application we are testing. - -[source,shell] ----- -$ while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' localhost:8080/api/greeting)" != "200" ]]; do sleep .00001; done ----- - -In a separate terminal, we start the timing application that we are testing, printing the time the application starts - -[source,shell] ----- -$ date +"%T.%3N" && ./target/quarkus-timing-runner - -10:57:32.508 -10:57:32.512 -2019-04-05 10:57:32,512 INFO [io.quarkus] (main) Quarkus 0.11.0 started in 0.002s. Listening on: http://127.0.0.1:8080 -2019-04-05 10:57:32,512 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jackson] -10:57:32.537 ----- - -The difference between the final timestamp and the first timestamp is the total startup time for the application to serve the first request. - -== Additional flags applied by Quarkus - -When Quarkus invokes GraalVM `native-image` it will apply some additional flags by default. - -You might want to know about the following ones in case you're comparing performance properties with other builds. - -=== Disable fallback images - -Fallback native images are a feature of GraalVM to "fall back" to run your application in the normal JVM, should the compilation -to native code fail for some reason. - -Quarkus disables this feature by setting `-H:FallbackThreshold=0`: this will ensure you get a compilation failure rather -risking to not notice that the application is unable to really run in native mode. - -If you instead want to just run in Java mode, that's totally possible: just skip the native-image build and run it as a jar. - -=== Disable Isolates - -Isolates are a neat feature of GraalVM, but Quarkus isn't using them at this stage. 
Disable via `-H:-SpawnIsolates`.

=== Disable auto-registration of all Service Loader implementations

Quarkus extensions can automatically pick the right services they need, while GraalVM's native-image defaults to including all services it's able to find on the classpath.

We prefer listing services explicitly, as it produces better optimised binaries. Disable the default behaviour by setting `-H:-UseServiceLoaderFeature`.

=== Better default for Garbage Collection implementation

The default in GraalVM seems meant to optimise for short-lived processes.

Quarkus defaults to server applications, so we switch to a better default by setting `-H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime`.

=== Others ...

This section is provided as high-level guidance, but can't presume to be comprehensive, as some flags are controlled dynamically by the extensions, the platform you're building on, configuration details, your code, and possibly a combination of any of these.

Generally speaking, the ones listed here are those most likely to affect performance metrics, but in the right circumstances one could observe a non-negligible impact from the other flags too.

If you're going to investigate some differences in detail, make sure to check what Quarkus is invoking exactly: when the build plugin is producing a native image, the full command lines are logged.

diff --git a/_versions/2.7/guides/picocli.adoc b/_versions/2.7/guides/picocli.adoc deleted file mode 100644 index c42175ff281..00000000000 --- a/_versions/2.7/guides/picocli.adoc +++ /dev/null @@ -1,296 +0,0 @@

////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Command Mode with Picocli

include::./attributes.adoc[]

https://picocli.info/[Picocli] is an open source tool for creating rich command line applications.
Quarkus provides support for using Picocli. This guide contains examples of `picocli` extension usage.

IMPORTANT: If you are not familiar with the Quarkus Command Mode, consider reading the xref:command-mode-reference.adoc[Command Mode reference guide] first.

== Configuration

Once you have your Quarkus project configured, you can add the `picocli` extension to your project by running the following command in your project base directory.

[source,bash]
----
./mvnw quarkus:add-extension -Dextensions="picocli"
----

This will add the following to your `pom.xml`:

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-picocli</artifactId>
</dependency>
----

== Simple command line application

A simple Picocli application with only one `Command` can be created as follows:

[source,java]
----
package com.acme.picocli;

import picocli.CommandLine;

import javax.enterprise.context.Dependent;
import javax.inject.Inject;

@CommandLine.Command // <1>
public class HelloCommand implements Runnable {

    @CommandLine.Option(names = {"-n", "--name"}, description = "Who will we greet?", defaultValue = "World")
    String name;

    private final GreetingService greetingService;

    public HelloCommand(GreetingService greetingService) { // <2>
        this.greetingService = greetingService;
    }

    @Override
    public void run() {
        greetingService.sayHello(name);
    }
}

@Dependent
class GreetingService {
    void sayHello(String name) {
        System.out.println("Hello " + name + "!");
    }
}
----
<1> If there is only one class annotated with `picocli.CommandLine.Command`, it will be used as the entry point to the Picocli `CommandLine`.
<2> All classes annotated with `picocli.CommandLine.Command` are registered as CDI beans.

IMPORTANT: Beans with `@CommandLine.Command` should not use proxied scopes (e.g. do not use `@ApplicationScoped`) because Picocli will not be able to set field values in such beans.
This extension registers classes with the `@CommandLine.Command` annotation using the `@Dependent` scope. If you need to use a proxied scope, then annotate the setter and not the field, for example:
[source,java]
----
@CommandLine.Command
@ApplicationScoped
public class EntryCommand {
    private String name;

    @CommandLine.Option(names = "-n")
    public void setName(String name) {
        this.name = name;
    }

}
----

== Command line application with multiple Commands

When multiple classes have the `picocli.CommandLine.Command` annotation, then one of them also needs to be annotated with `io.quarkus.picocli.runtime.annotations.TopCommand`.
This can be overridden with the `quarkus.picocli.top-command` property.

[source,java]
----
package com.acme.picocli;

import io.quarkus.picocli.runtime.annotations.TopCommand;
import picocli.CommandLine;

@TopCommand
@CommandLine.Command(mixinStandardHelpOptions = true, subcommands = {HelloCommand.class, GoodByeCommand.class})
public class EntryCommand {
}

@CommandLine.Command(name = "hello", description = "Greet World!")
class HelloCommand implements Runnable {

    @Override
    public void run() {
        System.out.println("Hello World!");
    }
}

@CommandLine.Command(name = "goodbye", description = "Say goodbye to World!")
class GoodByeCommand implements Runnable {

    @Override
    public void run() {
        System.out.println("Goodbye World!");
    }
}
----

== Customizing Picocli CommandLine instance

You can customize the `CommandLine` instance used by the `picocli` extension by producing your own bean instance:

[source,java]
----
package com.acme.picocli;

import io.quarkus.picocli.runtime.PicocliCommandLineFactory;
import io.quarkus.picocli.runtime.annotations.TopCommand;
import picocli.CommandLine;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

@TopCommand
@CommandLine.Command
public class EntryCommand implements Runnable {
    @CommandLine.Spec
    CommandLine.Model.CommandSpec spec;

    @Override
    public void run() {
        System.out.println("My name is: " + spec.name());
    }
}

@ApplicationScoped
class CustomConfiguration {

    @Produces
    CommandLine customCommandLine(PicocliCommandLineFactory factory) { // <1>
        return factory.create().setCommandName("CustomizedName");
    }
}
----
<1> `PicocliCommandLineFactory` will create an instance of `CommandLine` with `TopCommand` and `CommandLine.IFactory` injected.

== Different entry command for each profile

It is possible to create a different entry command for each profile, using `@IfBuildProfile`:

[source,java]
----
@ApplicationScoped
public class Config {

    @Produces
    @TopCommand
    @IfBuildProfile("dev")
    public Object devCommand() {
        return DevCommand.class; // <1>
    }

    @Produces
    @TopCommand
    @IfBuildProfile("prod")
    public Object prodCommand() {
        return new ProdCommand("Configured by me!");
    }

}
----
<1> You can return an instance of `java.lang.Class` here. In that case, `CommandLine` will try to instantiate the class using `CommandLine.IFactory`.

== Configure CDI Beans with parsed arguments

You can use an `Event<CommandLine.ParseResult>` observer or just inject `CommandLine.ParseResult` to configure CDI beans based on the arguments parsed by Picocli.
This event will be generated in the `QuarkusApplication` class created by this extension. If you are providing your own `@QuarkusMain`, this event will not be raised.
The `CommandLine.ParseResult` is created from the default `CommandLine` bean.
[source,java]
----
@CommandLine.Command
public class EntryCommand implements Runnable {

    @CommandLine.Option(names = "-c", description = "JDBC connection string")
    String connectionString;

    @Inject
    DataSource dataSource;

    @Override
    public void run() {
        try (Connection c = dataSource.getConnection()) {
            // Do something
        } catch (SQLException throwables) {
            // Handle error
        }
    }
}

@ApplicationScoped
class DatasourceConfiguration {

    @Produces
    @ApplicationScoped // <1>
    DataSource dataSource(CommandLine.ParseResult parseResult) {
        PGSimpleDataSource ds = new PGSimpleDataSource();
        ds.setURL(parseResult.matchedOption("c").getValue().toString());
        return ds;
    }
}
----
<1> `@ApplicationScoped` is used for lazy initialization.

== Providing your own QuarkusMain

You can also provide your own application entry point annotated with `@QuarkusMain` (as described in the xref:command-mode-reference.adoc[Command Mode reference guide]).

[source,java]
----
package com.acme.picocli;

import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import picocli.CommandLine;

import javax.inject.Inject;

@QuarkusMain
@CommandLine.Command(name = "demo", mixinStandardHelpOptions = true)
public class ExampleApp implements Runnable, QuarkusApplication {
    @Inject
    CommandLine.IFactory factory; // <1>

    @Override
    public void run() {
        // business logic
    }

    @Override
    public int run(String... args) throws Exception {
        return new CommandLine(this, factory).execute(args);
    }
}
----
<1> A Quarkus-compatible `CommandLine.IFactory` bean created by the `picocli` extension.

== Native mode support

This extension uses the Quarkus standard build steps mechanism to support GraalVM Native images.
In the exceptional case that incompatible changes in a future Picocli release cause an issue, the following configuration can be used to fall back to the annotation processor from the Picocli project as a temporary workaround:

[source,xml]
----
<dependency>
    <groupId>info.picocli</groupId>
    <artifactId>picocli-codegen</artifactId>
</dependency>
----

For Gradle, you need to add the following in the `dependencies` section of the `build.gradle` file:

[source,groovy,subs=attributes+]
----
annotationProcessor enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}")
annotationProcessor 'info.picocli:picocli-codegen'
----

== Development Mode

In development mode, i.e. when running `mvn quarkus:dev`, the application is executed and restarted every time the `Space bar` key is pressed. You can also pass arguments to your command line app via the `quarkus.args` system property, e.g. `mvn quarkus:dev -Dquarkus.args='--help'` and `mvn quarkus:dev -Dquarkus.args='-c -w --val 1'`.

== Configuration Reference

include::{generated-dir}/config/quarkus-picocli.adoc[opts=optional, leveloffset=+1]

diff --git a/_versions/2.7/guides/platform-include.adoc b/_versions/2.7/guides/platform-include.adoc deleted file mode 100644 index 410c8451e34..00000000000 --- a/_versions/2.7/guides/platform-include.adoc +++ /dev/null @@ -1,4 +0,0 @@

[NOTE]
====
This extension is developed by a third party and is part of the Quarkus Platform.
====
\ No newline at end of file

diff --git a/_versions/2.7/guides/platform.adoc b/_versions/2.7/guides/platform.adoc deleted file mode 100644 index d7ce35d2a03..00000000000 --- a/_versions/2.7/guides/platform.adoc +++ /dev/null @@ -1,191 +0,0 @@

////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Platform

include::./attributes.adoc[]

The Quarkus extension ecosystem consists of the Quarkus extensions developed and maintained by the community, including the Quarkus core development team. While the Quarkus ecosystem (sometimes also referred to as the "Quarkus universe") includes all the Quarkus extensions ever developed, there is also a concept of a Quarkus platform.

== Quarkus Platform

The fundamental promise of a Quarkus platform is that any combination of the Quarkus extensions the platform consists of can be used in the same application without conflicting with each other.
Each organization creating their Quarkus platform may establish their own criteria for the extensions to be accepted into the platform and the means to guarantee the compatibility between the accepted extensions.

=== Quarkus Platform Artifacts

Each Quarkus platform is defined with a few artifacts.
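As a concrete illustration, a hypothetical platform built around a BOM `org.acme:acme-bom` (coordinates purely illustrative) would be defined with artifacts along these lines, following the naming conventions detailed in the sections below:

[source,text]
----
org.acme:acme-bom::pom:1.0                                    <- the platform BOM
org.acme:acme-bom-quarkus-platform-descriptor:1.0:json:1.0    <- the platform descriptor
org.acme:acme-bom-quarkus-platform-properties::properties:1.0 <- the platform properties
----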
- -=== Quarkus Platform BOM - -Each Quarkus Platform is expected to provide a Maven BOM artifact that - -* imports a chosen version of `io.quarkus:quarkus-bom` (the platform BOM may be flattened at the end but it has to be based on some version of `io.quarkus:quarkus-bom`) -* includes all the Quarkus extension artifacts (the runtime and the deployment ones) the platform consists of -* includes all the necessary third-party artifacts that align the transitive dependency versions of the platform extensions to guarantee compatibility between them -* includes the <> artifact -* possibly includes the <> artifacts - -Quarkus applications that want to include extensions from a Quarkus platform will be importing the Quarkus platform BOM. - -[[platform-descriptor]] -=== Quarkus Platform Descriptor - -Quarkus platform descriptor is a JSON artifact that provides information about the platform and its extensions to the Quarkus tools. E.g. http://code.quarkus.io and the Quarkus command line tools consult this descriptor to list, add and remove extensions to/from the project on user's request. -This artifact is also used as a Quarkus platform identifier. When Quarkus tools need to identify the Quarkus platform(s) used in the project, they will analyze the dependency version constraints of the project (the effective list of the managed dependencies from the `dependencyManagement` section in Maven terms) looking for the platform descriptor artifact(s) among them. Given that the platform descriptors are included into the Quarkus platform BOMs, every Quarkus application will inherit the platform descriptor artifact from the imported platform BOM(s) as a dependency version constraint (managed dependency in Maven terms). 
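The BOM import through which an application picks up these dependency version constraints is the standard Maven mechanism; a sketch (the platform version shown here is only an example):

[source,xml]
----
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.quarkus.platform</groupId>
            <artifactId>quarkus-bom</artifactId>
            <version>2.7.0.Final</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
----

With this import in place, the application inherits, among other managed dependencies, the version constraint for the platform descriptor artifact that the tools use to identify the platform.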
- -To be able to easily identify Quarkus platform descriptors among the project's dependency constraints, the platform descriptor Maven artifact coordinates should follow the following naming convention: - -* the `groupId` of the descriptor artifact should match the `groupId` of the corresponding Quarkus Platform BOM; -* the `artifactId` of the descriptor artifact should be the `artifactId` of the corresponding Quarkus Platform BOM with the `-quarkus-platform-descriptor` suffix; -* the `classifier` of the descriptor artifact should match the `version` of the corresponding Quarkus Platform BOM; -* the `type` of the descriptor artifact should be `json`; -* the `version` of the descriptor artifact should match the `version` of the corresponding Quarkus Platform BOM. - -As a string it will look like `:-quarkus-platform-descriptor::json:` - -E.g. the coordinates of the descriptor for the Quarkus BOM `io.quarkus.platform:quarkus-bom::pom:1.2.3` will be `io.quarkus.platform:quarkus-bom-quarkus-platform-descriptor:1.2.3:json:1.2.3`. -And for a custom Quarkus platform defined with BOM `org.acme:acme-bom::pom:555` it will be `org.acme:acme-bom-quarkus-platform-descriptor:555:json:555`. - -The classifier matching the version of the platform may look confusing at first. But this is what turns the descriptor into a true "fingerprint" of the platform. In both Maven and Gradle, the effective set of the dependency version constraints (or the managed dependencies) is obtained by merging all the imported BOMs and version constraints specified individually in the current project and also its parent(s). The artifact `classifier` is a part of the dependency ID, which could be expressed as `groupId:artifactId:classifier:type`. Which means that if a project imports a couple of BOMs, e.g. 
`org.apple:apple-bom::pom:1.0` and `org.orange:orange-bom::pom:1.0`, and each of these two BOMs imports a different version `io.quarkus.platform:quarkus-bom::pom`, the Quarkus tools will be able to detect this fact and make the user aware of it, since it *might* not be a safe combination. If the descriptor artifact didn't include the classifer containing the version of the platform then the tools wouldn't be able to detect a potentially incompatible mix of different versions of the same platform in the same project. - -The platform descriptor will normally be generated using a Maven plugin, e.g. - -[source,xml] ----- - - io.quarkus - quarkus-platform-descriptor-json-plugin - ${quarkus.version} <1> - - - process-resources - - generate-extensions-json <2> - - - - - ${quarkus.platform.group-id} <3> - ${quarkus.platform.artifact-id} <4> - ${quarkus.platform.version} <5> - ${overridesfile} <6> - true <7> - - ----- - -<1> the version of the `quarkus-platform-descriptor-json-plugin` -<2> `generate-extensions-json` is the goal generating the platform descriptor -<3> the `groupId` of the platform BOM -<4> the `artifactId` of the platform BOM -<5> the `version` of the platform BOM -<6> this parameter is optional, it allows to override some metadata from the Quarkus extension descriptors found in every runtime extension artifact from which the platform descriptor is generated -<7> this parameter is also optional and defaults to false. It has to be set to true in case the platform BOM *is not generated* and *is not flattened*. Which for example is the case for `io.quarkus:quarkus-bom`. - -[[platform-properties]] -=== Quarkus Platform Properties - -A Quarkus platform may provide its own default values for some of the configuration options. - -Quarkus is using https://github.com/smallrye/smallrye-config[SmallRye Config] for wiring application configuration. 
A Quarkus platform may be used as another source of configuration in the hierarchy of the configuration sources, dominated by the application's `application.properties`.

To provide platform-specific defaults, the platform needs to include a dependency version constraint in its BOM for a properties artifact whose coordinates follow this naming convention:

* the `groupId` of the properties artifact should match the `groupId` of the corresponding Quarkus Platform BOM;
* the `artifactId` of the properties artifact should be the `artifactId` of the corresponding Quarkus Platform BOM with the `-quarkus-platform-properties` suffix;
* the `classifier` of the properties artifact should be left empty/null;
* the `type` of the properties artifact should be `properties`;
* the `version` of the properties artifact should match the `version` of the corresponding Quarkus Platform BOM.

The properties artifact itself is expected to be a traditional `properties` file that will be loaded into an instance of the `java.util.Properties` class.

IMPORTANT: At this point, platform properties are only allowed to provide the default values for a restricted set of configuration options. The property names in the platform properties file must start with the `platform.` prefix.

Extension developers that want to make their configuration options platform-specific should set their default values to properties that start with the `platform.` prefix. Here is an example:

[source,java]
----
package io.quarkus.deployment.pkg;

@ConfigRoot(phase = ConfigPhase.BUILD_TIME)
public class NativeConfig {

    /**
     * The docker image to use to do the image build
     */
    @ConfigItem(defaultValue = "${platform.quarkus.native.builder-image}")
    public String builderImage;
}
----

In this case the default value for `quarkus.native.builder-image` will be provided by the platform.
The user will still be able to set the desired value for `quarkus.native.builder-image` in its `application.properties`, of course. But in case it's not customized by the user, the default value will be coming from the platform properties. -A platform properties file for the example above would contain (the actual value is provided as an example): - -[source,text,subs=attributes+] ----- -platform.quarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-native-image:{graalvm-flavor} ----- - -There is also a Maven plugin goal that validates the platform properties content and its artifact coordinates and also checks whether the platform properties artifact is present in the platform's BOM. Here is a sample plugin configuration: - -[source,xml] ----- - - io.quarkus - quarkus-platform-descriptor-json-plugin - ${quarkus.version} - - - process-resources - - platform-properties - - - - ----- - -==== Merging Quarkus Platform Properties - -In case an application is importing more than one Quarkus platform and those platforms include their own platform properties artifacts, the content of those platform properties artifacts will be merged to form a single set of properties that will be used for the application build. -The order in which the properties artifacts are merged will correspond to the order in which they appear in the list of dependency version constraints of the application (in the Maven terms that will correspond to the effective list of application's managed dependencies, i.e. the flattened `managedDependencies` POM section). - -IMPORTANT: The content of the properties artifacts found earlier will dominate over those found later among the application's dependency constraints! - -That means if a platform needs to override a certain property value defined in the platform it is based on, it will need to include its platform properties artifact into the `managedDependencies` section of its BOM before importing the base platform. 
- -For example, let's assume `org.acme:acme-quarkus-bom` platform extends the `io.quarkus:quarkus-bom` platform by importing its BOM. In case, the `org.acme:acme-quarkus-bom` platform were to override certain properties defined in the `io.quarkus:quarkus-bom-quarkus-platform-properties` included in the `io.quarkus:quarkus-bom`, the `org.acme:acme-quarkus-bom` would have to be composed as -[source,xml] ----- - - - acme-quarkus-bom - Acme - Quarkus - BOM - pom - - - - - - org.acme - acme-quarkus-bom-quarkus-platform-properties - properties - ${project.version} - - - - - io.quarkus - quarkus-bom - ${quarkus.version} - pom - import - - - ----- - -That way, the `org.acme:acme-quarkus-bom` platform properties will appear before those provided by the `io.quarkus:quarkus-bom` properties and so will be dominating at build time. diff --git a/_versions/2.7/guides/quarkus-blaze-persistence.adoc b/_versions/2.7/guides/quarkus-blaze-persistence.adoc deleted file mode 100644 index 0f3e18b2119..00000000000 --- a/_versions/2.7/guides/quarkus-blaze-persistence.adoc +++ /dev/null @@ -1,103 +0,0 @@ -[.configuration-legend] -icon:lock[title=Fixed at build time] Configuration property fixed at build time - All other configuration properties are overridable at runtime -[.configuration-reference.searchable, cols="80,.^10,.^10"] -|=== - -h|[[quarkus-blaze-persistence_configuration]]link:#quarkus-blaze-persistence_configuration[Configuration property] - -h|Type -h|Default - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.template-eager-loading]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.template-eager-loading[quarkus.blaze-persistence.template-eager-loading]` - -[.description] --- -A boolean flag to make it possible to prepare all view template caches on startup. By default the eager loading of the view templates is disabled to have a better startup performance. Valid values for this property are `true` or `false`. 
---|boolean -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.default-batch-size]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.default-batch-size[quarkus.blaze-persistence.default-batch-size]` - -[.description] --- -An integer value that defines the default batch size for entity view attributes. By default the value is 1 and can be overridden either via `com.blazebit.persistence.view.BatchFetch#size()` or by setting this property via `com.blazebit.persistence.view.EntityViewSetting#setProperty`. ---|int -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.expect-batch-mode]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.expect-batch-mode[quarkus.blaze-persistence.expect-batch-mode]` - -[.description] --- -A mode specifying if correlation value, view root or embedded view batching is expected. By default the value is `values` and can be overridden by setting this property via `com.blazebit.persistence.view.EntityViewSetting#setProperty`. Valid values are - - `values` - - `view_roots` - - `embedding_views` ---|string -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.updater.eager-loading]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.updater.eager-loading[quarkus.blaze-persistence.updater.eager-loading]` - -[.description] --- -A boolean flag to make it possible to prepare the entity view updater cache on startup. By default the eager loading of entity view updates is disabled to have a better startup performance. Valid values for this property are `true` or `false`. 
---|boolean -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.updater.disallow-owned-updatable-subview]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.updater.disallow-owned-updatable-subview[quarkus.blaze-persistence.updater.disallow-owned-updatable-subview]` - -[.description] --- -A boolean flag to make it possible to disable the strict validation that disallows the use of an updatable entity view type for owned relationships. By default the use is disallowed i.e. the default value is `true`, but since there might be strange models out there, it possible to allow this. Valid values for this property are `true` or `false`. ---|boolean -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.updater.strict-cascading-check]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.updater.strict-cascading-check[quarkus.blaze-persistence.updater.strict-cascading-check]` - -[.description] --- -A boolean flag to make it possible to disable the strict cascading check that disallows setting updatable or creatable entity views on non-cascading attributes before being associated with a cascading attribute. When disabled, it is possible, like in JPA, that the changes done to an updatable entity view are not flushed when it is not associated with an attribute that cascades updates. By default the use is enabled i.e. the default value is `true`. Valid values for this property are `true` or `false`. 
---|boolean -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.updater.error-on-invalid-plural-setter]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.updater.error-on-invalid-plural-setter[quarkus.blaze-persistence.updater.error-on-invalid-plural-setter]` - -[.description] --- -A boolean flag that allows to switch from warnings to boot time validation errors when invalid plural attribute setters are encountered while the strict cascading check is enabled. When `true`, a boot time validation error is thrown when encountering an invalid setter, otherwise just a warning. This configuration has no effect when the strict cascading check is disabled. By default the use is disabled i.e. the default value is `false`. Valid values for this property are `true` or `false`. ---|boolean -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.create-empty-flat-views]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.create-empty-flat-views[quarkus.blaze-persistence.create-empty-flat-views]` - -[.description] --- -A boolean flag that allows to specify if empty flat views should be created by default if not specified via `EmptyFlatViewCreation`. By default the creation of empty flat views is enabled i.e. the default value is `true`. Valid values for this property are `true` or `false`. ---|boolean -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.expression-cache-class]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.expression-cache-class[quarkus.blaze-persistence.expression-cache-class]` - -[.description] --- -The full qualified expression cache implementation class name. 
---|string -| - - -a|icon:lock[title=Fixed at build time] [[quarkus-blaze-persistence_quarkus.blaze-persistence.inline-ctes]]`link:#quarkus-blaze-persistence_quarkus.blaze-persistence.inline-ctes[quarkus.blaze-persistence.inline-ctes]` - -[.description] --- -If set to true, the CTE queries are inlined by default. Valid values for this property are `true`, `false` or `auto`. Default is `true` which will always inline non-recursive CTEs. The `auto` configuration will only make use of inlining if the JPA provider and DBMS dialect support/require it. The property can be changed for a criteria builder before constructing a query. ---|boolean -| - -|=== \ No newline at end of file diff --git a/_versions/2.7/guides/quarkus-intro.adoc b/_versions/2.7/guides/quarkus-intro.adoc deleted file mode 100644 index 6a11ae71889..00000000000 --- a/_versions/2.7/guides/quarkus-intro.adoc +++ /dev/null @@ -1,79 +0,0 @@ -= What is Quarkus - -include::./attributes.adoc[] -:toc: macro -:toclevels: 4 -:doctype: book -:icons: font -:docinfo1: - -:numbered: -:sectnums: -:sectnumlevels: 4 - -// tag::intro[] - -[quote] --- -Quarkus is a Cloud Native, Container First framework for writing Java applications. --- - -Container First:: -Minimal footprint Java applications optimized for running in containers -Cloud Native:: -Embraces 12 factor architecture in environments like Kubernetes -Unify imperative and reactive:: -Brings under one programming model non blocking and imperative styles of development -Standards-based:: -Based on the standards and the libraries you love and use (RESTEasy, Hibernate, Netty, Eclipse Vert.x, Apache Camel...) -Microservice First:: -Brings lightning fast startup time to Java applications -Extreme productivity:: -Instant hot code replacement: don't allow build, deploy, boot delays disrupt your flow -Developer Joy:: -Development-centric experience without compromises to bring your amazing applications to life in no time - -All under one framework. 
- -// end::intro[] - -== Scratch pad - - -Quarkus believes in developer Joy. - - -It unifies imperative and reactive. -It is a Microservice first toolkit. - -Standards based -Quarkus brings all the standards and frameworks you love and use: RESTEasy, Hibernate, Netty, vert.x, Camel...) - -Imperative and Reactive - -* ahead-of-time native binary (executable binary) -* Cloud Native -* Java -* modular -* Substrate VM native - -Seamlessly build container optimal - -Container affinity - -Container optimal - -* low memory -* low startup time -* ahead of time optimal - -Unifying Imperative and Reactive under one framework. - -Usability -* easy to use -* productive environment -* hot reload - -Standards based - - diff --git a/_versions/2.7/guides/quarkus-reactive-architecture.adoc b/_versions/2.7/guides/quarkus-reactive-architecture.adoc deleted file mode 100644 index b5e19ce25e2..00000000000 --- a/_versions/2.7/guides/quarkus-reactive-architecture.adoc +++ /dev/null @@ -1,199 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Quarkus Reactive Architecture - -include::./attributes.adoc[] - -Quarkus is reactive. -It's even more than this: Quarkus unifies reactive and imperative programming. -You don't even have to choose: you can implement reactive components and imperative components then combine them inside the very **same** application. -No need to use different stacks, tooling or APIs; Quarkus bridges both worlds. - -This page will explain what we mean by _Reactive_ and how Quarkus enables it. -We will also discuss execution and programming models. -Finally, we will list the Quarkus extensions offering reactive facets. - -== What is _Reactive_? - -The _Reactive_ word is overloaded and associated with many concepts such as back-pressure, monads, or event-driven architecture. -So, let's clarify what we mean by _Reactive_. 
- -_Reactive_ is a set of principles and guidelines to build responsive distributed systems and applications. -The https://www.reactivemanifesto.org/[Reactive Manifesto] characterizes _Reactive Systems_ as distributed systems having four characteristics: - -1. Responsive - they must respond in a timely fashion -2. Elastic - they adapt themselves to the fluctuating load -3. Resilient - they handle failures gracefully -4. Asynchronous message passing - the component of a reactive system interact using messages - -image::reactive-systems.png[alt=Reactive Systems Pillars, width=50%, align=center] - -In addition to this, the https://principles.reactive.foundation/[Reactive Principles white paper] lists a set of rules and patterns to help the construction of reactive systems. - -== Reactive Systems and Quarkus - -Reactive System is an architectural style that can be summarized by: distributed systems done right. -Relying on asynchronous message passing helps enforce the loose coupling (both in terms of space and time) between the different components. -You send messages to virtual destinations. The receiver can be located anywhere, or even not yet exist at the time of the message dispatch. -The elasticity pillar allows scaling up and down individual components according to the load. -Elasticity also provides redundancy, which helps with the resilience pillar. -Failures are inevitable. -Components forming a reactive system must handle them gracefully, avoid cascading failures, and self-adapt themselves. - -A responsive system can continue to handle the request while facing failures and under fluctuating load. -Quarkus has been tailored for that. -It provides features that will help you design, implement and operate reactive systems. - -== Reactive Applications - -Quarkus is not only going to help you build reactive systems. -It's also going to make sure that each constituent enforces the reactive principles and is highly efficient. 
- -Efficiency is essential, especially in the Cloud and in containerized environments. -Resources, such as CPU and memory, are shared among multiple applications. -Greedy applications that consume lots of memory are inefficient and put penalties on sibling applications. -You may need to request more memory, CPU, or bigger virtual machines. -It either increases your monthly Cloud bill or decreases your deployment density. - -I/O is an essential part of almost any modern system. -Whether it is to call a remote service, interact with a database, or send messages to a broker, there are all I/O-based operations. -Efficiently handling them is critical to avoid greedy applications. -For this reason, Quarkus uses non-blocking I/O, which allows a low number of OS threads to manage many concurrent I/Os. -As a result, Quarkus applications allow for higher concurrency, use less memory, and improve the deployment density. - -== How does Quarkus enable Reactive? - -Under the hood, Quarkus has a reactive engine. -This engine, powered by Eclipse Vert.x and Netty, handles the non-blocking I/O interactions. - -image::quarkus-reactive-core.png[Quarkus Reactive Core,width=50%, align=center] - -Quarkus extensions and the application code can use this engine to orchestrate I/O interactions, interact with databases, send and receive messages, and so on. - -== Reactive execution model - -While using non-blocking I/O has tremendous benefits, it does not come for free. -Indeed, it introduces a new execution model quite different from the one used by classical frameworks. - -Traditional applications use blocking I/O and an imperative (sequential) execution model. -So, in an application exposing an HTTP endpoint, each HTTP request is associated with a thread. -In general, that thread is going to process the whole request and the thread is tied up serving only that request for the duration of that request. -When the processing requires interacting with a remote service, it uses blocking I/O. 
-The thread is blocked, waiting for the result of the I/O. -While that model is simple to develop with (as everything is sequential), it has a few drawbacks. -To handle concurrent requests, you need multiple threads, so, you need to introduce a worker thread pool. -The size of this pool constrains the concurrency of the application. -In addition, each thread has a cost in terms of memory and CPU. -Large thread pools result in greedy applications. - -image::blocking-threads.png[alt=Imperative Execution Model and Worker Threads,width=50%, align=center] - -As we have seen above, non-blocking I/O avoids that problem. -A few threads can handle many concurrent I/O. -If we go back to the HTTP endpoint example, the request processing is executed on one of these I/O threads. -Because there are only a few of them, you need to use them wisely. -When the request processing needs to call a remote service, you can't block the thread anymore. -You schedule the I/O and pass a continuation, i.e., the code to execute once the I/O completes. - -image::reactive-thread.png[alt=Reactive Execution Model and I/O Threads,width=50%, align=center] - -This model is much more efficient, but we need a way to write code to express these continuations. - -== Reactive Programming Models - -The Quarkus architecture, based on non-blocking I/O and message passing, allows multiple supporting reactive development models that are all different in how they express continuations. -The two main ways to write reactive code with Quarkus are: - -* Reactive Programming with https://smallrye.io/smallrye-mutiny[Mutiny], and -* Coroutines with Kotlin - -First, https://smallrye.io/smallrye-mutiny[Mutiny] is an intuitive, event-driven reactive programming library. -With Mutiny, you write event-driven code. -Your code is a pipeline receiving events and processing them. -Each stage in your pipeline can be seen as a continuation, as Mutiny invokes them when the upstream part of the pipeline emits an event. 
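To make the continuation idea concrete, here is a minimal sketch using only the JDK's `CompletableFuture` (plain Java, not the Mutiny or Quarkus API; the `remoteCall` method is a made-up stand-in for a non-blocking I/O operation): each `thenApply` stage registers the code to execute once the previous asynchronous step completes, instead of blocking a thread.

```java
import java.util.concurrent.CompletableFuture;

public class ContinuationSketch {

    // Stand-in for a non-blocking I/O call: the result arrives later on
    // another thread, and the caller's thread is never blocked waiting.
    static CompletableFuture<String> remoteCall() {
        return CompletableFuture.supplyAsync(() -> "quarkus");
    }

    public static void main(String[] args) {
        CompletableFuture<String> greeting = remoteCall()
                .thenApply(String::toUpperCase)       // continuation 1: runs once the result arrives
                .thenApply(s -> "Hello " + s + "!");  // continuation 2: runs after continuation 1

        // join() blocks and is used here only to print the demo result;
        // reactive code would instead pass the value further down the pipeline.
        System.out.println(greeting.join()); // prints: Hello QUARKUS!
    }
}
```

Mutiny expresses the same idea with its `Uni` and `Multi` types and a richer, event-driven API.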
- -The Mutiny API has been tailored to improve the readability and maintenance of the codebase. -Mutiny provides everything you need to orchestrate asynchronous actions, including concurrent execution. -It also offers a large set of operators to manipulate individual events and streams of events. - -[TIP] -Find more info about Mutiny and its usage in Quarkus on xref:mutiny-primer.adoc[Mutiny support documentation]. - -Co-routines are a way to write asynchronous code sequentially. -It suspends the execution of the code during I/O and registers the rest of the code as the continuation. -Kotlin coroutines are great when developing in Kotlin and only need to express sequential compositions (chain of co-dependent asynchronous tasks). - -== Unification of Imperative and Reactive - -Changing your development model is not simple. -It requires relearning and restructuring code in a non-blocking fashion. -Fortunately, you don't have to do it! - -Quarkus is inherently reactive thanks to its reactive engine. -But, you, as an application developer, don't have to write reactive code. -Quarkus unifies reactive and imperative. -It means that you can write traditional blocking applications on Quarkus. -But how do you avoid blocking the I/O threads? -Quarkus implements a https://en.wikipedia.org/wiki/Proactor_pattern[proactor pattern] that switches to worker thread when needed. - -image::proactor-pattern.png[The proactor pattern in Quarkus,width=50%, align=center] - -Thanks to hints in your code (such as the `@Blocking` and `@NonBlocking` annotations), Quarkus extensions can decide when the application logic is blocking or non-blocking. -If we go back to the HTTP endpoint example from above, the HTTP request is always received on an I/O thread. -Then, the extension dispatching that request to your code decides whether to call it on the I/O thread, avoiding thread switches, or on a worker thread. -This decision depends on the extension. 
-For example, the RESTEasy Reactive extension uses the `@Blocking` annotation to determine if the method needs to be invoked using a worker thread, or if it can be invoked using the I/O thread. - -Quarkus is pragmatic and versatile. -You decide how to develop and execute your application. -You can use the imperative way, the reactive way, or mix them, using reactive on the parts of the application under high concurrency. - -[#quarkus-extensions-enabling-reactive] -== Quarkus Extensions enabling Reactive - -Quarkus offers a large set of reactive APIs and features. -This section lists the most important, but it's not an exhaustive list. -Quarkus adds new features in every release, and the https://github.com/quarkiverse[Quarkiverse] proposes many extensions enabling _Reactive_. - -=== HTTP - -* RESTEasy Reactive: an implementation of JAX-RS tailored for the Quarkus architecture. -It follows a reactive-first approach but allows imperative code using the `@Blocking` annotation. -* Reactive Routes: a declarative way to register HTTP routes directly on the Vert.x router used by Quarkus to route HTTP requests to methods. -* Reactive Rest Client: allows consuming HTTP endpoints. -Under the hood, it uses the non-blocking I/O features from Quarkus. -* Qute - the Qute template engine exposes a reactive API to render templates in a non-blocking manner. - -=== Data - -* Hibernate Reactive: a version of Hibernate ORM using asynchronous and non-blocking clients to interact with the database. -* Hibernate Reactive with Panache: provide active record and repository support on top of Hibernate Reactive. -* Reactive PostgreSQL client: an asynchronous and non-blocking client interacting with a PostgreSQL database, allowing high concurrency. -* Reactive MySQL client: an asynchronous and non-blocking client interacting with a MySQL database -* The MongoDB extension: exposes an imperative and reactive (Mutiny) APIs to interact with MongoDB. 
-* Mongo with Panache offers active record support for both the imperative and reactive APIs. -* The Cassandra extension: exposes an imperative and reactive (Mutiny) APIs to interact with Cassandra -* The Redis extension: exposes an imperative and reactive (Mutiny) APIs to store and retrieve data from a Redis key-value store. - -=== Event-Driven Architecture - -* Reactive Messaging: allows implementing event-driven applications using reactive and imperative code. -* Kafka Connector for Reactive Messaging: allows implementing applications consuming and writing Kafka records -* AMQP 1.0 Connector for Reactive Message: allows implementing applications sending and receiving AMQP messages. - -=== Network Protocols and Utilities - -* gRPC: implement and consume gRPC services. -Offer reactive and imperative programming interfaces. -* GraphQL: implement and query (client) data store using GraphQL. Offers Mutiny APIs and subscriptions as event streams. -* Fault Tolerance: provide retry, fallback, circuit breakers abilities to your application.It can be used with Mutiny types. - -[#engine] -=== Engine - -* Vert.x : the underlying reactive engine of Quarkus. 
-The extension allows accessing to the managed Vert.x instance, as well as its Mutiny variant (exposing the Vert.x API using Mutiny types) -* Context Propagation: capture and propagate contextual objects (transaction, principal…) in a reactive pipeline diff --git a/_versions/2.7/guides/quarkus-runtime-base-image.adoc b/_versions/2.7/guides/quarkus-runtime-base-image.adoc deleted file mode 100644 index ea888110d23..00000000000 --- a/_versions/2.7/guides/quarkus-runtime-base-image.adoc +++ /dev/null @@ -1,126 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Quarkus Base Runtime Image - -include::./attributes.adoc[] - -To ease the containerization of native executables, Quarkus provides a base image providing the requirements to run these executables. -The `quarkus-micro-image:1.0` image is: - -* small (based on `ubi8-micro`) -* designed for containers -* contains the right set of dependencies (glibc, libstdc++, zlib) -* support upx-compressed executables (more details on the xref:upx.adoc[enabling compression documentation]) - -== Using the base image - -In your `Dockerfile`, just use: - -[source, dockerfile] ----- -FROM quay.io/quarkus/quarkus-micro-image:1.0 -WORKDIR /work/ -COPY target/*-runner /work/application -RUN chmod 775 /work -EXPOSE 8080 -CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] ----- - -== Extending the image - -Your application may have additional requirements. -For example, if you have an application that requires `libfreetype.so`, you need to copy the native libraries to the container. 
-In this case, you need to use a multi-stage `dockerfile` to copy the required libraries: - -[source, dockerfile] ----- -# First stage - install the dependencies in an intermediate container -FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 as BUILD -RUN microdnf install freetype - -# Second stage - copy the dependencies -FROM quay.io/quarkus/quarkus-micro-image:1.0 -COPY --from=BUILD \ - /lib64/libfreetype.so.6 \ - /lib64/libbz2.so.1 \ - /lib64/libpng16.so.16 \ - /lib64/ - -WORKDIR /work/ -COPY target/*-runner /work/application -RUN chmod 775 /work -EXPOSE 8080 -CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] ----- - -If you need to have access to the full AWT support, you need more than just `libfreetype.so`, but also the font and font configurations: - -[source, dockerfile] ----- -# First stage - install the dependencies in an intermediate container -FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 as BUILD -RUN microdnf install freetype fontconfig - -# Second stage - copy the dependencies -FROM quay.io/quarkus/quarkus-micro-image:1.0 -COPY --from=BUILD \ - /lib64/libfreetype.so.6 \ - /lib64/libgcc_s.so.1 \ - /lib64/libbz2.so.1 \ - /lib64/libpng16.so.16 \ - /lib64/libm.so.6 \ - /lib64/libbz2.so.1 \ - /lib64/libexpat.so.1 \ - /lib64/libuuid.so.1 \ - /lib64/ - -COPY --from=BUILD \ - /usr/lib64/libfontconfig.so.1 \ - /usr/lib64/ - -COPY --from=BUILD \ - /usr/share/fonts /usr/share/fonts - -COPY --from=BUILD \ - /usr/share/fontconfig /usr/share/fontconfig - -COPY --from=BUILD \ - /usr/lib/fontconfig /usr/lib/fontconfig - -COPY --from=BUILD \ - /etc/fonts /etc/fonts - -WORKDIR /work/ -COPY target/*-runner /work/application -RUN chmod 775 /work -EXPOSE 8080 -CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] ----- - - -== Alternative - Using ubi-minimal - -If the micro image does not suit your requirements, you can use https://catalog.redhat.com/software/containers/ubi8/ubi-minimal/5c359a62bed8bd75a2c3fba8[UBI- Minimal]. 
-It's a bigger image, but contains more utilities and is closer to a full Linux distribution. -Typically, it contains a package manager (`microdnf`), so you can install packages more easily. - - -To use this base image, use the following `Dockerfile`: - -[source, dockerfile] ----- -FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 -WORKDIR /work/ -RUN chown 1001 /work \ - && chmod "g+rwX" /work \ - && chown 1001:root /work -COPY --chown=1001:root target/*-runner /work/application - -EXPOSE 8080 -USER 1001 - -CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] ----- \ No newline at end of file diff --git a/_versions/2.7/guides/quartz.adoc b/_versions/2.7/guides/quartz.adoc deleted file mode 100644 index 1296d6328dd..00000000000 --- a/_versions/2.7/guides/quartz.adoc +++ /dev/null @@ -1,442 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Scheduling Periodic Tasks with Quartz - -include::./attributes.adoc[] -:extension-status: preview - -Modern applications often need to run specific tasks periodically. -In this guide, you learn how to schedule periodic clustered tasks using the http://www.quartz-scheduler.org/[Quartz] extension. - -include::./status-include.adoc[] - -TIP: If you only need to run in-memory scheduler use the xref:scheduler.adoc[Scheduler] extension. - -== Prerequisites - -:prerequisites-docker-compose: -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this guide, we are going to expose one Rest API `tasks` to visualise the list of tasks created by a Quartz job running every 10 seconds. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. 
- -The solution is located in the `quartz-quickstart` {quickstarts-tree-url}/quartz-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: quartz-quickstart -:create-app-extensions: resteasy,quartz,hibernate-orm-panache,flyway,resteasy-jackson,jdbc-postgresql -include::includes/devtools/create-app.adoc[] - -It generates: - -* the Maven structure -* a landing page accessible on `http://localhost:8080` -* example `Dockerfile` files for both `native` and `jvm` modes -* the application configuration file - -The Maven project also imports the Quarkus Quartz extension. - -If you already have your Quarkus project configured, you can add the `quartz` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: quartz -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - io.quarkus - quarkus-quartz - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-quartz") ----- - -[TIP] -==== -To use a JDBC store, the `quarkus-agroal` extension, which provides the datasource support, is also required. 
-==== - -== Creating the Task Entity - -In the `org.acme.quartz` package, create the `Task` class, with the following content: - -[source,java] ----- -package org.acme.quartz; - -import javax.persistence.Entity; -import java.time.Instant; -import javax.persistence.Table; - -import io.quarkus.hibernate.orm.panache.PanacheEntity; - -@Entity -@Table(name="TASKS") -public class Task extends PanacheEntity { <1> - public Instant createdAt; - - public Task() { - createdAt = Instant.now(); - } - - public Task(Instant time) { - this.createdAt = time; - } -} ----- -<1> Declare the entity using xref:hibernate-orm-panache.adoc[Panache] - -== Creating a scheduled job - -In the `org.acme.quartz` package, create the `TaskBean` class, with the following content: - -[source,java] ----- -package org.acme.quartz; - -import javax.enterprise.context.ApplicationScoped; - -import javax.transaction.Transactional; - -import io.quarkus.scheduler.Scheduled; - -@ApplicationScoped <1> -public class TaskBean { - - @Transactional - @Scheduled(every = "10s", identity = "task-job") <2> - void schedule() { - Task task = new Task(); <3> - task.persist(); <4> - } -} ----- -<1> Declare the bean in the _application_ scope -<2> Use the `@Scheduled` annotation to instruct Quarkus to run this method every 10 seconds and set the unique identifier for this job. -<3> Create a new `Task` with the current start time. -<4> Persist the task in database using xref:hibernate-orm-panache.adoc[Panache]. - -=== Scheduling Jobs Programmatically - -It is also possible to leverage the Quartz API directly. 
-You can inject the underlying `org.quartz.Scheduler` in any bean: - -[source,java] ----- -package org.acme.quartz; - -@ApplicationScoped -public class TaskBean { - - @Inject - org.quartz.Scheduler quartz; <1> - - void onStart(@Observes StartupEvent event) throws SchedulerException { - JobDetail job = JobBuilder.newJob(MyJob.class) - .withIdentity("myJob", "myGroup") - .build(); - Trigger trigger = TriggerBuilder.newTrigger() - .withIdentity("myTrigger", "myGroup") - .startNow() - .withSchedule( - SimpleScheduleBuilder.simpleSchedule() - .withIntervalInSeconds(10) - .repeatForever()) - .build(); - quartz.scheduleJob(job, trigger); <2> - } - - @Transactional - void performTask() { - Task task = new Task(); - task.persist(); - } - - // A new instance of MyJob is created by Quartz for every job execution - public static class MyJob implements Job { - - @Inject - TaskBean taskBean; - - public void execute(JobExecutionContext context) throws JobExecutionException { - taskBean.performTask(); <3> - } - - } -} ----- -<1> Inject the underlying `org.quartz.Scheduler` instance. -<2> Schedule a new job using the Quartz API. -<3> Invoke the `TaskBean#performTask()` method from the job. Jobs are also xref:cdi.adoc[container-managed] beans if they belong to a link:cdi-reference[bean archive]. - -NOTE: By default, the scheduler is not started unless a `@Scheduled` business method is found. You may need to force the start of the scheduler for "pure" programmatic scheduling. See also <>. - -== Updating the application configuration file - -Edit the `application.properties` file and add the below configuration: -[source,properties] ----- -# Quartz configuration -quarkus.quartz.clustered=true <1> -quarkus.quartz.store-type=jdbc-cmt <2> - -# Datasource configuration. 
-quarkus.datasource.db-kind=postgresql -quarkus.datasource.username=quarkus_test -quarkus.datasource.password=quarkus_test -quarkus.datasource.jdbc.url=jdbc:postgresql://localhost/quarkus_test - -# Hibernate configuration -quarkus.hibernate-orm.database.generation=none -quarkus.hibernate-orm.log.sql=true -quarkus.hibernate-orm.sql-load-script=no-file - -# flyway configuration -quarkus.flyway.connect-retries=10 -quarkus.flyway.table=flyway_quarkus_history -quarkus.flyway.migrate-at-start=true -quarkus.flyway.baseline-on-migrate=true -quarkus.flyway.baseline-version=1.0 -quarkus.flyway.baseline-description=Quartz ----- -<1> Indicate that the scheduler will be run in clustered mode -<2> Use the database store to persist job related information so that they can be shared between nodes - -== Creating a REST resource and a test - -Create the `org.acme.quartz.TaskResource` class with the following content: - -[source,java] ----- -package org.acme.quartz; - -import java.util.List; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/tasks") -public class TaskResource { - - @GET - public List listAll() { - return Task.listAll(); <1> - } -} ----- -<1> Retrieve the list of created tasks from the database - -You also have the option to create a `org.acme.quartz.TaskResourceTest` test with the following content: - -[source,java] ----- -package org.acme.quartz; - -import io.quarkus.test.junit.QuarkusTest; - -import static org.hamcrest.Matchers.greaterThanOrEqualTo; - -import org.junit.jupiter.api.Test; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; - -@QuarkusTest -public class TaskResourceTest { - - @Test - public void tasks() throws InterruptedException { - Thread.sleep(1000); // wait at least a second to have the first task created - given() - .when().get("/tasks") - .then() - .statusCode(200) - .body("size()", is(greaterThanOrEqualTo(1))); <1> - } -} 
----- -<1> Ensure that we have a `200` response and at least one task created - -== Creating Quartz Tables - -Add a SQL migration file named `src/main/resources/db/migration/V2.0.0\__QuarkusQuartzTasks.sql` with the content copied from -file with the content from link:{quickstarts-blob-url}/quartz-quickstart/src/main/resources/db/migration/V2.0.0__QuarkusQuartzTasks.sql[V2.0.0__QuarkusQuartzTasks.sql]. - -== Configuring the load balancer - -In the root directory, create a `nginx.conf` file with the following content: - -[source,conf] ----- -user nginx; - -events { - worker_connections 1000; -} - -http { - server { - listen 8080; - location / { - proxy_pass http://tasks:8080; <1> - } - } -} ----- -<1> Route all traffic to our tasks application - -== Setting Application Deployment - -In the root directory, create a `docker-compose.yml` file with the following content: - -[source,yaml] ----- -version: '3' - -services: - tasks: <1> - image: quarkus-quickstarts/quartz:1.0 - build: - context: ./ - dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm} - environment: - QUARKUS_DATASOURCE_URL: jdbc:postgresql://postgres/quarkus_test - networks: - - tasks-network - depends_on: - - postgres - - nginx: <2> - image: nginx:1.17.6 - volumes: - - ./nginx.conf:/etc/nginx/nginx.conf:ro - depends_on: - - tasks - ports: - - 8080:8080 - networks: - - tasks-network - - postgres: <3> - image: postgres:14.1 - container_name: quarkus_test - environment: - - POSTGRES_USER=quarkus_test - - POSTGRES_PASSWORD=quarkus_test - - POSTGRES_DB=quarkus_test - ports: - - 5432:5432 - networks: - - tasks-network - -networks: - tasks-network: - driver: bridge ----- -<1> Define the tasks service -<2> Define the nginx load balancer to route incoming traffic to an appropriate node -<3> Define the configuration to run the database - -== Running the database - -In a separate terminal, run the below command: - -[source,bash] ----- -docker-compose up postgres <1> ----- -<1> Start the database instance 
using the configuration options supplied in the `docker-compose.yml` file - -== Run the application in Dev Mode - -Run the application with: - -include::includes/devtools/dev.adoc[] - -After a few seconds, open another terminal and run `curl localhost:8080/tasks` to verify that we have at least one task created. - -As usual, the application can be packaged using: - -include::includes/devtools/build.adoc[] - -and executed with `java -jar target/quarkus-app/quarkus-run.jar`. - -You can also generate the native executable with: - -include::includes/devtools/build-native.adoc[] - -== Packaging the application and run several instances - -The application can be packaged using: - -include::includes/devtools/build.adoc[] - -Once the build is successful, run the below command: - -[source,bash] ----- -docker-compose up --scale tasks=2 --scale nginx=1 <1> ----- -<1> Start two instances of the application and a load balancer - -After a few seconds, in another terminal, run `curl localhost:8080/tasks` to verify that tasks were only created at different instants and in an interval of 10 seconds. - -You can also generate the native executable with: - -include::includes/devtools/build-native.adoc[] - -WARNING: It's the reponsibility of the deployer to clear/remove the previous state, i.e. stale jobs and triggers. Moreover, the applications that form the "Quartz cluster" should be identical, otherwise an unpredictable result may occur. - -[[quartz-register-plugin-listeners]] -== Registering Plugin and Listeners - -You can register `plugins`, `job-listeners` and `trigger-listeners` through Quarkus configuration. 
- -The example below registers the plugin `org.quartz.plugins.history.LoggingJobHistoryPlugin` named as `jobHistory` with the property `jobSuccessMessage` defined as `Job [{1}.{0}] execution complete and reports: {8}` - -[source,conf] ----- -quarkus.quartz.plugins.jobHistory.class=org.quartz.plugins.history.LoggingJobHistoryPlugin -quarkus.quartz.plugins.jobHistory.properties.jobSuccessMessage=Job [{1}.{0}] execution complete and reports: {8} ----- - -You can also register a listener programmatically with an injected `org.quartz.Scheduler`: - -[source,java] ----- -public class MyListenerManager { - void onStart(@Observes StartupEvent event, org.quartz.Scheduler scheduler) throws SchedulerException { - scheduler.getListenerManager().addJobListener(new MyJogListener()); - scheduler.getListenerManager().addTriggerListener(new MyTriggerListener()); - } -} ----- - -[[quartz-configuration-reference]] -== Quartz Configuration Reference - -include::{generated-dir}/config/quarkus-quartz.adoc[leveloffset=+1, opts=optional] diff --git a/_versions/2.7/guides/qute-reference.adoc b/_versions/2.7/guides/qute-reference.adoc deleted file mode 100644 index e7612b3dcfc..00000000000 --- a/_versions/2.7/guides/qute-reference.adoc +++ /dev/null @@ -1,2148 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Qute Reference Guide - -include::./attributes.adoc[] - -:numbered: -:sectnums: -:sectnumlevels: 4 -:toc: - -Qute is a templating engine designed specifically to meet the Quarkus needs. -The usage of reflection is minimized to reduce the size of native images. -The API combines both the imperative and the non-blocking reactive style of coding. -In the development mode, all files located in the `src/main/resources/templates` folder are watched for changes and modifications are immediately visible in your application. 
-Furthermore, Qute attempts to detect most of the template problems at build time and fail fast. - -In this guide, you will find an <>, the description of the <> and <> details. - -NOTE: Qute is primarily designed as a Quarkus extension. It is possible to use it as a "standalone" library too. However, in such case some of the features are not available. In general, any feature mentioned under the <> section is missing. You can find more information about the limitations and possibilities in the <> section. - -[[the_simplest_example]] -== The Simplest Example - -The easiest way to try Qute is to use the convenient `io.quarkus.qute.Qute` class and call one of its `fmt()` static methods that can be used to format simple messages: - -[source,java] ----- -import io.quarkus.qute.Qute; - -Qute.fmt("Hello {}!", "Lucy"); <1> -// => Hello Lucy! - -Qute.fmt("Hello {name} {surname ?: 'Default'}!", Map.of("name", "Andy")); <2> -// => Hello Andy Default! - -Qute.fmt("{header}").contentType("text/html").data("header", "

<h1>My header</h1>
").render(); <3> -// <h1>Header</h1> <4> - -Qute.fmt("I am {#if ok}happy{#else}sad{/if}!", Map.of("ok", true)); <5> -// => I am happy! ----- -<1> The empty expression `{}` is a placeholder that is replaced with an index-based array accessor, i.e. `{data[0]}`. -<2> You can provide a data map instead. -<3> A builder-like API is available for more complex formatting requirements. -<4> Note that for a "text/html" template the special chars are replaced with html entities by default. -<5> You can use any <> in the template. In this case, the <> is used to render the appropriate part of the message based on the input data. - -TIP: In <>, the engine used to format the messages is the same as the one injected by `@Inject Engine`. Therefore, you can make use of any Quarkus-specific integration feature such as <>, <> or even <>. - - -The format object returned by the `Qute.fmt(String)` method can be evaluated lazily and used e.g. as a log message: - -[source,java] ----- -LOG.info(Qute.fmt("Hello {name}!").data("name", "Foo")); -// => Hello Foo! and the message template is only evaluated if the log level INFO is used for the specific logger ----- - -NOTE: Please read the javadoc of the `io.quarkus.qute.Qute` class for more details. - -[[hello_world_example]] -== Hello World Example - -In this example, we would like to demonstrate the _basic workflow_ when working with Qute templates. -Let's start with a simple "hello world" example. -We will always need some *template contents*: - -.hello.html -[source,html] ----- - -

Hello {name}! <1> - ----- -<1> `{name}` is a value expression that is evaluated when the template is rendered. - -Then, we will need to parse the contents into a *template definition* Java object. -A template definition is an instance of `io.quarkus.qute.Template`. - -If using Qute "standalone" you'll need to create an instance of `io.quarkus.qute.Engine` first. -The `Engine` represents a central point for template management with dedicated configuration. -Let's use the convenient builder: - -[source,java] ----- -Engine engine = Engine.builder().addDefaults().build(); ----- - -TIP: In Quarkus, there is a preconfigured `Engine` available for injection - see <>. - -Once we have an `Engine` instance we could parse the template contents: - -[source,java] ----- -Template hello = engine.parse(helloHtmlContent); ----- - -TIP: In Quarkus, you can simply inject the template definition. The template is automatically parsed and cached - see <>. - -Finally, create a *template instance*, set the data and render the output: - -[source,java] ----- -// Renders

Hello Jim!

-hello.data("name", "Jim").render(); <1> <2> ----- -<1> `Template.data(String, Object)` is a convenient method that creates a template instance and sets the data in one step. -<2> `TemplateInstance.render()` triggers a synchronous rendering, i.e. the current thread is blocked until the rendering is finished. However, there are also asynchronous ways to trigger the rendering and consume the results. For example there is the `TemplateInstance.renderAsync()` method that returns `CompletionStage` or `TemplateInstance.createMulti()` that returns Mutiny's `Multi`. - -So the workflow is simple: - -1. Create the template contents (`hello.html`), -2. Parse the template definition (`io.quarkus.qute.Template`), -3. Create a template instance (`io.quarkus.qute.TemplateInstance`), -4. Render the output. - -TIP: The `Engine` is able to cache the template definitions so that it's not necessary to parse the contents again and again. In Quarkus, the caching is done automatically. - -[[core_features]] -== Core Features - -[[basic-building-blocks]] -=== Basic Building Blocks - -The dynamic parts of a template include comments, expressions, sections and unparsed character data. - -Comments:: -A comment starts with the sequence `{!` and ends with the sequence `!}`, e.g. `{! This is a comment !}`. -Can be multiline and may contain expressions and sections: `{! {#if true} !}`. -The content of a comment is completely ignored when rendering the output. - -Expressions:: -An <> outputs an evaluated value. -It consists of one or more parts. -A part may represent simple properties: `{foo}`, `{item.name}`, and virtual methods: `{item.get(name)}`, `{name ?: 'John'}`. -An expression may also start with a namespace: `{inject:colors}`. - -Sections:: -A <> may contain static text, expressions and nested sections: `{#if foo.active}{foo.name}{/if}`. -The name in the closing tag is optional: `{#if active}ACTIVE!{/}`. -A section can be empty: `{#myTag image=true /}`. 
-A section may also declare nested section blocks: `{#if item.valid} Valid. {#else} Invalid. {/if}` and decide which block to render. - -Unparsed Character Data:: -It is used to mark the content that should be rendered but _not parsed_. -It starts with the sequence `{|` and ends with the sequence `|}`: `{| |}`, and could be multi-line. -+ -NOTE: Previously, unparsed character data had to start with `{[` and end with `]}`. This syntax is still supported but we encourage users to switch to the new syntax to avoid some common collisions with constructs from other languages. - -[[identifiers]] -=== Identifiers and Tags - -Identifiers are used in expressions and section tags. -A valid identifier is a sequence of non-whitespace characters. -However, users are encouraged to only use valid Java identifiers in expressions. - -TIP: You can use bracket notation if you need to specify an identifier that contains a dot, e.g. `{map['my.key']}`. - -When parsing a template document, the parser identifies all _tags_. -A tag starts and ends with a curly bracket, e.g. `{foo}`. -The content of a tag must start with: - -* a digit, or -* an alphabetic character, or -* underscore, or -* a built-in command: `#`, `!`, `@`, `/`. - -If it does not start with any of the above it is ignored by the parser. - -.Tag Examples -[source,html] ---- - - - {_foo.bar} <1> - {! comment !} <2> - { foo} <3> - {{foo}} <4> - {"foo":true} <5> - - ---- -<1> Parsed: an expression that starts with underscore. -<2> Parsed: a comment. -<3> Ignored: starts with whitespace. -<4> Ignored: starts with `{`. -<5> Ignored: starts with `"`. - -TIP: It is also possible to use escape sequences `\{` and `\}` to insert delimiters in the text. In fact, an escape sequence is usually only needed for the start delimiter, i.e. `\\{foo}` will be rendered as `{foo}` (no parsing/evaluation will happen). - -=== Removing Standalone Lines From the Template - -By default, the parser removes standalone lines from the template output.
-A *standalone line* is a line that contains at least one section tag (e.g. `{#each}` and `{/each}`), parameter declaration (e.g. `{@org.acme.Foo foo}`) or comment but no expression and no non-whitespace character. -In other words, a line that contains no section tag or a parameter declaration is *not* a standalone line. -Likewise, a line that contains an _expression_ or a _non-whitespace character_ is *not* a standalone line. - -.Template Example -[source,html] ----- - - -
    - {#for item in items} <1> -
  • {item.name} {#if item.active}{item.price}{/if}
  • <2> - <3> - {/for} <4> -
- - ----- -<1> This is a standalone line and will be removed. -<2> Not a standalone line - contains an expression and non-whitespace characters -<3> Not a standalone line - contains no section tag/parameter declaration -<4> This is a standalone line. - -.Default Output -[source,html] ----- - - -
    -
  • Foo 100
  • - -
- - ----- - -TIP: In Quarkus, the default behavior can be disabled by setting the property `quarkus.qute.remove-standalone-lines` to `false`. -In this case, all whitespace characters from a standalone line will be printed to the output. - -.Output with `quarkus.qute.remove-standalone-lines=false` -[source,html] ----- - - -
    - -
  • Foo 100
  • - - -
- - ---- - -[[expressions]] -=== Expressions - -An expression is evaluated and outputs the value. -It has one or more parts, where each part represents either a property accessor (aka Field Access Expression) or a virtual method invocation (aka Method Invocation Expression). - -When accessing the properties you can either use the dot notation or the bracket notation. -In the `object.property` (dot notation) syntax, the `property` must be a <>. -In the `object[property_name]` (bracket notation) syntax, the `property_name` has to be a non-null <> value. - -An expression can start with an optional namespace followed by a colon (`:`). -A valid namespace consists of alphanumeric characters and underscores. -Namespace expressions are resolved differently - see also <>. - -.Property Accessor Examples -[source] ---- -{name} <1> -{item.name} <2> -{item['name']} <3> -{global:colors} <4> ---- -<1> no namespace, one part: `name` -<2> no namespace, two parts: `item`, `name` -<3> equivalent to `{item.name}` but using the bracket notation -<4> namespace `global`, one part: `colors` - -A part of an expression can be a _virtual method_, in which case the name can be followed by a list of comma-separated parameters in parentheses. -A parameter of a virtual method can be either a nested expression or a <> value. -We call these methods _"virtual"_ because they do not have to be backed by a real Java method. -You can learn more about virtual methods in the <>.
- -.Virtual Method Example -[source] ----- -{item.getLabels(1)} <1> -{name or 'John'} <2> ----- -<1> no namespace, two parts - `item`, `getLabels(1)`, the second part is a virtual method with name `getLabels` and params `1` -<2> infix notation that can be used for virtual methods with single parameter, translated to `name.or('John')`; no namespace, two parts - `name`, `or('John')` - -[[literals]] -==== Supported Literals - -|=== -|Literal |Examples - -|boolean -|`true`, `false` - -|null -|`null` - -|string -|`'value'`, `"string"` - -|integer -|`1`, `-5` - -|long -|`1l`, `-5L` - -|double -|`1D`, `-5d` - -|float -|`1f`, `-5F` - -|=== - -[[expression_resolution]] -==== Resolution - -The first part of the expression is always resolved against the <>. -If no result is found for the first part it's resolved against the parent context object (if available). -For an expression that starts with a namespace the current context object is found using all the available ``NamespaceResolver``s. -For an expression that does not start with a namespace the current context object is *derived from the position* of the tag. -All other parts of an expression are resolved using all ``ValueResolver``s against the result of the previous resolution. - -For example, expression `{name}` has no namespace and single part - `name`. -The "name" will be resolved using all available value resolvers against the current context object. -However, the expression `{global:colors}` has the namespace `global` and single part - `colors`. -First, all available ``NamespaceResolver``s will be used to find the current context object. -And afterwards value resolvers will be used to resolve "colors" against the context object found. - -[TIP] -==== -Data passed to the template instance are always accessible using the `data` namespace. -This could be useful to access data for which the key is overridden: - -[source,html] ----- - -{item.name} <1> -
    -{#for item in item.derivedItems} <2> -
  • - {item.name} <3> - is derived from - {data:item.name} <4> -
  • -{/for} -
- ----- -<1> `item` is passed to the template instance as a data object. -<2> Iterate over the list of derived items. -<3> `item` is an alias for the iterated element. -<4> Use the `data` namespace to access the `item` data object. - -==== - -[[current_context_object]] -==== Current Context - -If an expression does not specify a namespace the _current context object_ is derived from the position of the tag. -By default, the current context object represents the data passed to the template instance. -However, sections may change the current context object. -A typical example is the <> section that can be used to define named local variables: - -[source,html] ----- -{#let myParent=order.item.parent myPrice=order.price} <1> -

{myParent.name}

-

Price: {myPrice}

-{/let} ---- -<1> The current context object inside the section is the map of resolved parameters. - -NOTE: The current context can be accessed via the implicit binding `this`. - -==== Built-in Resolvers - -|=== -|Name |Description |Examples - -|Elvis Operator -|Outputs the default value if the previous part cannot be resolved or resolves to `null`. -|`{person.name ?: 'John'}`, `{person.name or 'John'}`, `{person.name.or('John')}` - -|orEmpty -|Outputs an empty list if the previous part cannot be resolved or resolves to `null`. -|`{#for pet in pets.orEmpty}{pet.name}{/for}` - -|Ternary Operator -|Shorthand for if-then-else statement. Unlike in <>, nested operators are not supported. -|`{item.isActive ? item.name : 'Inactive item'}` outputs the value of `item.name` if `item.isActive` resolves to `true`. - -|Logical AND Operator -|Outputs `true` if both parts are not `falsy` as described in the <>. The parameter is only evaluated if needed. -|`{person.isActive && person.hasStyle}` - -|Logical OR Operator -|Outputs `true` if any of the parts is not `falsy` as described in the <>. The parameter is only evaluated if needed. -|`{person.isActive \|\| person.hasStyle}` - -|=== - -TIP: The condition in a ternary operator evaluates to `true` if the value is not considered `falsy` as described in the <>. - -NOTE: In fact, the operators are implemented as "virtual methods" that consume one parameter and can be used with infix notation. For example `{person.name or 'John'}` is translated to `{person.name.or('John')}` and `{item.isActive ? item.name : 'Inactive item'}` is translated to `{item.isActive.ifTruthy(item.name).or('Inactive item')}`. - -==== Arrays - -You can iterate over elements of an array with the <>. -Moreover, it's possible to get the length of the specified array and access the elements directly via an index value. -Additionally, you can access the first/last `n` elements via the `take(n)/takeLast(n)` methods. - -.Array Examples -[source,html] ----

Array of length: {myArray.length}

<1> -
    -
  • First: {myArray.0}
  • <2> -
  • Second: {myArray[1]}
  • <3> -
  • Third: {myArray.get(2)}
  • <4> -
-
    - {#for element in myArray} -
  1. {element}
  2. - {/for} -
-First two elements: {#each myArray.take(2)}{it}{/each} <5> ----- -<1> Outputs the length of the array. -<2> Outputs the first element of the array. -<3> Outputs the second element of the array using the bracket notation. -<4> Outputs the third element of the array via the virtual method `get()`. -<5> Outputs the first two elements of the array. - -==== Character Escapes - -For HTML and XML templates the `'`, `"`, `<`, `>`, `&` characters are escaped by default if a template variant is set. - -NOTE: In Quarkus, a variant is set automatically for templates located in the `src/main/resources/templates`. By default, the `java.net.URLConnection#getFileNameMap()` is used to determine the content type of a template file. The additional map of suffixes to content types can be set via `quarkus.qute.content-types`. - -If you need to render the unescaped value: - -1. Either use the `raw` or `safe` properties implemented as extension methods of the `java.lang.Object`, -2. Or wrap the `String` value in a `io.quarkus.qute.RawString`. - -[source,html] ----- - -

{title}

<1> -{paragraph.raw} <2> - ----- -<1> `title` that resolves to `Expressions & Escapes` will be rendered as `Expressions &amp; Escapes` -<2> `paragraph` that resolves to `

My text!

` will be rendered as `

My text!

` - -TIP: By default, a template with one of the following content types is escaped: `text/html`, `text/xml`, `application/xml` and `application/xhtml+xml`. However, it's possible to extend this list via the `quarkus.qute.escape-content-types` configuration property. - -[[virtual_methods]] -==== Virtual Methods - -A virtual method is a *part of an expression* that looks like a regular Java method invocation. -It's called "virtual" because it does not have to match the actual method of a Java class. -In fact, like normal properties a virtual method is also handled by a value resolver. -The only difference is that for virtual methods a value resolver consumes parameters that are also expressions. - -.Virtual Method Example -[source,html] ----- - -

{item.buildName(item.name,5)}

<1> - ----- -<1> `buildName(item.name,5)` represents a virtual method with name `buildName` and two parameters: `item.name` and `5` . The virtual method could be evaluated by a value resolver generated for the following Java class: -+ -[source,java] ----- -class Item { - String buildName(String name, int age) { - return name + ":" + age; - } -} ----- - -NOTE: Virtual methods are usually evaluated by value resolvers generated for <>, <> or classes used in <>. However, a custom value resolver that is not backed by any Java class/method can be registered as well. - -A virtual method with single parameter can be called using the infix notation: - -.Infix Notation Example -[source,html] ----- - -

{item.price or 5}

<1> - ----- -<1> `item.price or 5` is translated to `item.price.or(5)`. - -Virtual method parameters can be "nested" virtual method invocations. - -.Nested Virtual Method Example -[source,html] ----- - -

{item.subtractPrice(item.calculateDiscount(10))}

<1> - ---- -<1> `item.calculateDiscount(10)` is evaluated first and then passed as an argument to `item.subtractPrice()`. - -==== Evaluation of `CompletionStage` and `Uni` Objects - -Objects that implement `java.util.concurrent.CompletionStage` and `io.smallrye.mutiny.Uni` are evaluated in a special way. -If a part of an expression resolves to a `CompletionStage`, the resolution continues once this stage is completed and the next part of the expression (if any) is evaluated against the result of the completed stage. -For example, if there is an expression `{foo.size}` and `foo` resolves to `CompletionStage<List<String>>`, then `size` is resolved against the completed result, i.e. `List<String>`. -If a part of an expression resolves to a `Uni`, a `CompletionStage` is first created from the `Uni` using `Uni#subscribeAsCompletionStage()` and then evaluated as described above. - -==== Missing Properties - -It can happen that an expression cannot be evaluated at runtime; for example, if there is an expression `{person.age}` and there is no property `age` declared on the `Person` class. -The behavior differs based on whether the <> is enabled or not. - -If enabled, a missing property will always result in a `TemplateException` and the rendering is aborted. -You can use _default values_ and _safe expressions_ in order to suppress the error. - -If disabled, the special constant `NOT_FOUND` is written to the output by default. - -TIP: In Quarkus, it's possible to change the default strategy via the `quarkus.qute.property-not-found-strategy` as described in the <>. - -NOTE: Similar errors are detected at build time if <> and <> are used. - -[[sections]] -=== Sections - -A section: - -* has a start tag -** starts with `#`, followed by the name of the section such as `{#if}` and `{#each}`, -* may be empty -** tag ends with `/`, i.e. `{#emptySection /}` -* may contain other expressions, sections, etc.
-** the end tag starts with `/` and contains the name of the section (optional): `{#if foo}Foo!{/if}` or `{#if foo}Foo!{/}`, - -The start tag can also define parameters. -The parameters have optional names. -A section may contain several content *blocks*. -The "main" block is always present. -Additional/nested blocks also start with `#` and can have parameters too - `{#else if item.isActive}`. -A section helper that defines the logic of a section can "execute" any of the blocks and evaluate the parameters. - -[source] ----- -{#if item.name is 'sword'} - It's a sword! -{#else if item.name is 'shield'} - It's a shield! -{#else} - Item is neither a sword nor a shield. -{/if} ----- - -[[loop_section]] -==== Loop Section - -The loop section makes it possible to iterate over an instance of `Iterable`, `Iterator`, array, `Map` (element is a `Map.Entry`), `Stream`, `Integer` and `int` (primitive value). -It has two flavors. -The first one is using the `each` name and `it` is an implicit alias for the iteration element. - -[source] ----- -{#each items} - {it.name} <1> -{/each} ----- -<1> `name` is resolved against the current iteration element. - -The other form is using the `for` name and can specify the alias used to reference the iteration element: - -[source] ----- -{#for item in items} <1> - {item.name} -{/for} ----- -<1> `item` is the alias used for the iteration element. - -It's also possible to access the iteration metadata inside the loop via the following keys: - -* `count` - 1-based index -* `index` - zero-based index -* `hasNext` - `true` if the iteration has more elements -* `isLast` - `true` if `hasNext == false` -* `isFirst` - `true` if `count == 1` -* `odd` - `true` if the zero-based index is odd -* `even` - `true` if the zero-based index is even -* `indexParity` - outputs `odd` or `even` based on the zero-based index value - -However, the keys cannot be used directly. 
-Instead, a prefix is used to avoid possible collisions with variables from the outer scope. -By default, the alias of an iterated element suffixed with an underscore is used as a prefix. -For example, the `hasNext` key must be prefixed with `it_` inside an `{#each}` section: `{it_hasNext}`. - -.`each` Iteration Metadata Example -[source] ----- -{#each items} - {it_count}. {it.name} <1> - {#if it_hasNext}
{/if} <2> -{/each} ----- -<1> `it_count` represents one-based index. -<2> `
` is only rendered if the iteration has more elements. - -The keys must be used in the form `{item_hasNext}` inside a `{#for}` section with the `item` element alias. - -.`for` Iteration Metadata Example -[source] ---- -{#for item in items} - {item_count}. {item.name} <1> - {#if item_hasNext}
{/if} <2> -{/for} ---- -<1> `item_count` represents one-based index. -<2> `
` is only rendered if the iteration has more elements. - -[TIP] -==== -The iteration metadata prefix is configurable either via `EngineBuilder.iterationMetadataPrefix()` for standalone Qute or via the `quarkus.qute.iteration-metadata-prefix` configuration property in a Quarkus application. Three special constants can be used: - -1. `` - the alias of an iterated element suffixed with an underscore is used (default) -2. `` - the alias of an iterated element suffixed with a question mark is used -3. `` - no prefix is used -==== - -The `for` statement also works with integers, starting from 1. In the example below, considering that `total = 3`: - -[source] ----- -{#for i in total} - {i}: -{/for} ----- - -And the output will be: - -[source] ----- -1:2:3: ----- - -A loop section may also define the `{#else}` block that is executed when there are no items to iterate: - -[source] ----- -{#for item in items} - {item.name} -{#else} - No items. -{/for} ----- - -[[if_section]] -==== If Section - -The `if` section represents a basic control flow section. -The simplest possible version accepts a single parameter and renders the content if the condition is evaluated to `true`. -A condition without an operator evaluates to `true` if the value is not considered `falsy`, i.e. if the value is not `null`, `false`, an empty collection, an empty map, an empty array, an empty string/char sequence or a number equal to zero. - -[source] ----- -{#if item.active} - This item is active. 
-{/if} ---- - -You can also use the following operators in a condition: - -|=== -|Operator |Aliases |Precedence (higher wins) - -|logical complement -|`!` -| 4 - -|greater than -|`gt`, `>` -| 3 - -|greater than or equal to -|`ge`, `>=` -| 3 - -|less than -|`lt`, `<` -| 3 - -|less than or equal to -|`le`, `\<=` -| 3 - -|equals -|`eq`, `==`, `is` -| 2 - -|not equals -|`ne`, `!=` -| 2 - -|logical AND (short-circuiting) -|`&&`, `and` -| 1 - -|logical OR (short-circuiting) -|`\|\|`, `or` -| 1 - -|=== - -.A simple operator example -[source] ---- -{#if item.age > 10} - This item is very old. -{/if} ---- - -Multiple conditions are also supported. - -.Multiple conditions example -[source] ---- -{#if item.age > 10 && item.price > 500} - This item is very old and expensive. -{/if} ---- - -Precedence rules can be overridden by parentheses. - -.Parentheses example -[source] ---- -{#if (item.age > 10 || item.price > 500) && user.loggedIn} - User must be logged in and item age must be > 10 or price must be > 500. -{/if} ---- - -You can also add any number of `else` blocks: - -[source] ---- -{#if item.age > 10} - This item is very old. -{#else if item.age > 5} - This item is quite old. -{#else if item.age > 2} - This item is old. -{#else} - This item is not old at all! -{/if} ---- - -[[when_section]] -==== When Section - -This section is similar to Java's `switch` or Kotlin's `when` constructs. -It matches a _tested value_ against all blocks sequentially until a condition is satisfied. -The first matching block is executed. -All other blocks are ignored (this behavior differs from the Java `switch`, where a `break` statement is necessary). - -.Example using the `when`/`is` name aliases -[source] ---- -{#when items.size} - {#is 1} <1> - There is exactly one item! - {#is > 10} <2> - There are more than 10 items! - {#else} <3> - There are 2-10 items! -{/when} ---- -<1> If there is exactly one parameter it's tested for equality.
-<2> It's possible to use <> to specify the matching logic. Unlike in the <>, nested operators are not supported. -<3> The `else` block is executed if no other block matches the value. - -.Example using the `switch`/`case` name aliases -[source] ---- -{#switch person.name} - {#case 'John'} <1> - Hey John! - {#case 'Mary'} - Hey Mary! -{/switch} ---- -<1> `case` is an alias for `is`. - -A tested value that resolves to an enum is handled specifically. -The parameters of an `is`/`case` block are not evaluated as expressions but compared with the result of the `toString()` invocation upon the tested value. - -[source] ---- -{#when machine.status} - {#is ON} - It's running. <1> - {#is in OFF BROKEN} - It's broken or OFF. <2> -{/when} ---- -<1> This block is executed if `machine.status.toString().equals("ON")`. -<2> This block is executed if `machine.status.toString().equals("OFF")` or `machine.status.toString().equals("BROKEN")`. - -NOTE: An enum constant is validated if the tested value has type information available and resolves to an enum type. - -The following operators are supported in `is`/`case` block conditions: - -[[when_operators]] - -|=== -|Operator |Aliases |Example - -|not equal -|`!=`, `not`, `ne` -|`{#is not 10}`,`{#case != 10}` - -|greater than -|`gt`, `>` -|`{#case gt 10}` - -|greater than or equal to -|`ge`, `>=` -|`{#is >= 10}` - -|less than -|`lt`, `<` -|`{#is < 10}` - -|less than or equal to -|`le`, `\<=` -|`{#case le 10}` - -|in -|`in` -|`{#is in 'foo' 'bar' 'baz'}` - -|not in -|`ni`, `!in` -|`{#is !in 1 2 3}` - -|=== - -[[let_section]] -==== Let Section - -This section allows you to define named local variables: -[source,html] ---- -{#let myParent=order.item.parent isActive=false age=10} <1> -

{myParent.name}

- Is active: {isActive} - Age: {age} -{/let} <2> ----- -<1> The local variable is initialized with an expression that can also represent a <>. -<2> Keep in mind that the variable is not available outside the `let` section that defines it. - -The section tag is also registered under the `set` alias: - -[source,html] ----- -{#set myParent=item.parent price=item.price} -

{myParent.name}

-

Price: {price} -{/set} ----- - -[[with_section]] -==== With Section - -This section can be used to set the current context object. -This could be useful to simplify the template structure: - -[source,html] ----- -{#with item.parent} -

{name}

<1> -

{description}

<2> -{/with} ----- -<1> The `name` will be resolved against the `item.parent`. -<2> The `description` will be also resolved against the `item.parent`. - -[IMPORTANT] -==== -Note that the `with` section should not be used in <> or templates that define <>. -The reason is that it prevents Qute from validating the nested expressions. -If possible it should be replaced with the `{#let}` section which declares an explicit binding: - -[source,html] ----- -{#let it=item.parent} -

{it.name}

-

{it.description}

-{/let} ----- -==== - -This section might also come in handy when we'd like to avoid multiple expensive invocations: - -[source,html] ----- -{#with item.callExpensiveLogicToGetTheValue(1,'foo',bazinga)} - {#if this is "fun"} <1> -

Yay!

- {#else} -

{this} is not fun at all!

- {/if} -{/with} ----- -<1> `this` is the result of `item.callExpensiveLogicToGetTheValue(1,'foo',bazinga)`. The method is only invoked once even though the result may be used in multiple expressions. - -[[include_helper]] -==== Include Section - -This section can be used to include another template and possibly override some parts of the template (template inheritance). - -.Simple Example -[source,html] ----- - - - -Simple Include - - - {#include foo limit=10 /} <1><2> - - ----- -<1> Include a template with id `foo`. The included template can reference data from the current context. -<2> It's also possible to define optional parameters that can be used in the included template. - -Template inheritance makes it possible to reuse template layouts. - -.Template "base" -[source,html] ----- - - - -{#insert title}Default Title{/} <1> - - - {#insert}No body!{/} <2> - - ----- -<1> `insert` sections are used to specify parts that could be overridden by a template that includes the given template. -<2> An `insert` section may define the default content that is rendered if not overridden. If no name parameter is supplied then the main block of the relevant `{#include}` section is used. - -.Template "detail" -[source,html] ----- -{#include base} <1> - {#title}My Title{/title} <2> -
<3> - My body. -
-{/include} ----- -<1> `include` section is used to specify the extended template. -<2> Nested blocks are used to specify the parts that should be overridden. -<3> The content of the main block is used for an `{#insert}` section with no name parameter specified. - -NOTE: Section blocks can also define an optional end tag - `{/title}`. - -==== Eval Section - -This section can be used to parse and evaluate a template dynamically. -The behavior is very similar to the <> but: - -1. The template content is passed directly, i.e. not obtained via an `io.quarkus.qute.TemplateLocator`, -2. It's not possible to override parts of the evaluated template. - -[source,html] ----- -{#eval myData.template name='Mia' /} <1><2><3> ----- -<1> The result of `myData.template` will be used as the template. The template is executed with the <>, i.e. can reference data from the template it's included into. -<2> It's also possible to define optional parameters that can be used in the evaluated template. -<3> The content of the section is always ignored. - -NOTE: The evaluated template is parsed and evaluated every time the section is executed. In other words, it's not possible to cache the parsed value to conserve resources and optimize the performance. - -[[user_tags]] -==== User-defined Tags - -User-defined tags can be used to include a tag template and optionally pass some parameters. -Let's suppose we have a tag template called `itemDetail.html`: - -[source] ----- -{#if showImage} <1> - {it.image} <2> - {nested-content} <3> -{/if} ----- -<1> `showImage` is a named parameter. -<2> `it` is a special key that is replaced with the first unnamed parameter of the tag. -<3> (optional) `nested-content` is a special key that will be replaced by the content of the tag. - -In Quarkus, all files from the `src/main/resources/templates/tags` are registered and monitored automatically. 
-For Qute standalone, you need to put the parsed template under the name `itemDetail.html` and register a relevant `UserTagSectionHelper` with the engine: - -[source,java] ---- -Engine engine = Engine.builder() - .addSectionHelper(new UserTagSectionHelper.Factory("itemDetail","itemDetail.html")) - .build(); -engine.putTemplate("itemDetail.html", engine.parse("...")); ---- - -Then, we can call the tag like this: - -[source,html] ----
    -{#for item in items} -
  • - {#itemDetail item showImage=true} <1> - = {item.name} <2> - {/itemDetail} -
  • -{/for} -
---- -<1> `item` is resolved to an iteration element and can be referenced using the `it` key in the tag template. -<2> Tag content injected using the `nested-content` key in the tag template. - -By default, the tag template can reference data from the parent context. -For example, the tag above could use the following expression: `{items.size}`. -However, sometimes it might be useful to disable this behavior and execute the tag as an _isolated_ template, i.e. without access to the context of the template that calls the tag. -In this case, just add the `_isolated` or `_isolated=true` argument to the call site, e.g. `{#itemDetail item showImage=true _isolated /}`. - -=== Rendering Output - -`TemplateInstance` provides several ways to trigger the rendering and consume the result. -The most straightforward approach is represented by `TemplateInstance.render()`. -This method triggers a synchronous rendering, i.e. the current thread is blocked until the rendering is finished, and returns the output. -By contrast, `TemplateInstance.renderAsync()` returns a `CompletionStage<String>` which is completed when the rendering is finished. - -.`TemplateInstance.renderAsync()` Example -[source,java] ---- -template.data(foo).renderAsync().whenComplete((result, failure) -> { <1> - if (failure == null) { - // consume the output... - } else { - // process failure... - } -}); ---- -<1> Register a callback that is executed once the rendering is finished. - -There are also two methods that return https://smallrye.io/smallrye-mutiny/[Mutiny] types. -`TemplateInstance.createUni()` returns a new `Uni` object. -If you call `createUni()` the template is not rendered right away. -Instead, every time `Uni.subscribe()` is called, a new rendering of the template is triggered. - -.`TemplateInstance.createUni()` Example -[source,java] ---- -template.data(foo).createUni().subscribe().with(System.out::println); ---- - -`TemplateInstance.createMulti()` returns a new `Multi` object.
-Each item represents a part/chunk of the rendered template. -Again, `createMulti()` does not trigger rendering. -Instead, every time a computation is triggered by a subscriber, the template is rendered again. - -.`TemplateInstance.createMulti()` Example -[source,java] ---- -template.data(foo).createMulti().subscribe().with(buffer::append, buffer::flush); ---- - -NOTE: The template rendering is divided into two phases. During the first phase, which is asynchronous, all expressions in the template are resolved and a _result tree_ is built. In the second phase, which is synchronous, the result tree is _materialized_, i.e. one by one the result nodes emit chunks that are consumed/buffered by the specific consumer. - -=== Engine Configuration - -[[value-resolvers]] -==== Value Resolvers - -Value resolvers are used when evaluating expressions. -A custom `io.quarkus.qute.ValueResolver` can be registered programmatically via `EngineBuilder.addValueResolver()`. - -.`ValueResolver` Builder Example -[source,java] ---- -engineBuilder.addValueResolver(ValueResolver.builder() - .appliesTo(ctx -> ctx.getBase() instanceof Long && ctx.getName().equals("tenTimes")) - .resolveSync(ctx -> (Long) ctx.getBase() * 10) - .build()); ---- - -[[template-locator]] -==== Template Locator - -Manual registration is sometimes handy but it's also possible to register a template locator using `EngineBuilder.addLocator()`. -This locator is used whenever the `Engine.getTemplate()` method is called and the engine has no template for a given id stored in the cache. -The locator is responsible for using the correct character encoding when reading the contents of a template. - -NOTE: In Quarkus, all templates from the `src/main/resources/templates` directory are located automatically and the encoding set via `quarkus.qute.default-charset` (UTF-8 by default) is used. - -==== Content Filters - -Content filters can be used to modify the template contents before parsing.
-
-.Content Filter Example
-[source,java]
-----
-engineBuilder.addParserHook(new ParserHook() {
-    @Override
-    public void beforeParsing(ParserHelper parserHelper) {
-        parserHelper.addContentFilter(contents -> contents.replace("${", "$\\{")); <1>
-    }
-});
-----
-<1> Escape all occurrences of `${`.
-
-[[strict_rendering]]
-==== Strict Rendering
-
-Strict rendering enables developers to catch insidious errors caused by typos and invalid expressions.
-If enabled, any expression that cannot be resolved, i.e. evaluates to an instance of `io.quarkus.qute.Results.NotFound`, will always result in a `TemplateException` and the rendering is aborted.
-A `NotFound` value is considered an error because it basically means that no value resolver was able to resolve the expression correctly.
-
-NOTE: `null` is a valid value though. It is considered `falsy` as described in the <> and does not produce any output.
-
-Strict rendering is enabled by default.
-However, you can disable this functionality via `io.quarkus.qute.EngineBuilder.strictRendering(boolean)`.
-
-TIP: In Quarkus, a dedicated config property can be used instead: `quarkus.qute.strict-rendering`.
-
-If you really need to use an expression which can potentially lead to a "not found" error, you can use _default values_ and _safe expressions_ in order to suppress the error.
-A default value is used if the previous part of an expression cannot be resolved or resolves to `null`.
-You can use the elvis operator to output the default value: `{foo.bar ?: 'baz'}`, which is effectively the same as the following virtual method: `{foo.bar.or('baz')}`.
-A safe expression ends with the `??` suffix and results in `null` if the expression cannot be resolved.
-It can be very useful e.g. in `{#if}` sections: `{#if valueNotFound??}Only rendered if valueNotFound is truthy!{/if}`.
-In fact, `??` is just a shorthand notation for `.or(null)`, i.e. `{#if valueNotFound??}` becomes `{#if valueNotFound.or(null)}`.
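Conceptually, the elvis operator and `.or()` are null-coalescing defaults. The following plain-Java sketch (a stdlib analogy, not Qute API; the helper name is invented for illustration) shows the equivalent behavior for the `null` case:

```java
import java.util.Objects;

class ElvisAnalogy {

    // Rough Java equivalent of {foo.bar ?: 'baz'} / {foo.bar.or('baz')}:
    // fall back to the default when the resolved value is null
    static String orDefault(String resolved, String fallback) {
        return Objects.requireNonNullElse(resolved, fallback);
    }

    public static void main(String[] args) {
        System.out.println(orDefault(null, "baz"));  // baz
        System.out.println(orDefault("qux", "baz")); // qux
    }
}
```

Note that in Qute the default additionally applies when the expression cannot be resolved at all (a `NotFound` result), which a plain `null` check cannot express.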
-
-[[quarkus_integration]]
-== Quarkus Integration
-
-If you want to use Qute in your Quarkus application, add the following dependency to your project:
-
-[source,xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-qute</artifactId>
-</dependency>
-----
-
-In Quarkus, a preconfigured engine instance is provided and available for injection - a bean with scope `@ApplicationScoped`, bean type `io.quarkus.qute.Engine` and qualifier `@Default` is registered automatically.
-Moreover, all templates located in the `src/main/resources/templates` directory are validated and can be easily injected.
-
-[source,java]
-----
-import io.quarkus.qute.Engine;
-import io.quarkus.qute.Template;
-import io.quarkus.qute.Location;
-
-class MyBean {
-
-    @Inject
-    Template items; <1>
-
-    @Location("detail/items2_v1.html") <2>
-    Template items2;
-
-    @Inject
-    Engine engine; <3>
-}
-----
-<1> If there is no `Location` qualifier provided, the field name is used to locate the template. In this particular case, the container will attempt to locate a template with path `src/main/resources/templates/items.html`.
-<2> The `Location` qualifier instructs the container to inject a template from a path relative to `src/main/resources/templates`. In this case, the full path is `src/main/resources/templates/detail/items2_v1.html`.
-<3> Inject the configured `Engine` instance.
-
-It's also possible to contribute to the engine configuration via a CDI observer method.
-
-.`EngineBuilder` Observer Example
-[source,java]
-----
-import io.quarkus.qute.EngineBuilder;
-
-class MyBean {
-
-    void configureEngine(@Observes EngineBuilder builder) {
-        builder.addValueResolver(ValueResolver.builder()
-            .appliesTo(ctx -> ctx.getBase() instanceof Long && ctx.getName().equals("tenTimes"))
-            .resolveSync(ctx -> (Long) ctx.getBase() * 10)
-            .build());
-    }
-}
-----
-
-=== Template Variants
-
-Sometimes it's useful to render a specific variant of the template based on content negotiation.
-This can be done by setting a special attribute via `TemplateInstance.setAttribute()`: - -[source,java] ----- -class MyService { - - @Inject - Template items; <1> - - @Inject - ItemManager manager; - - String renderItems() { - return items.data("items",manager.findItems()).setAttribute(TemplateInstance.SELECTED_VARIANT, new Variant(Locale.getDefault(),"text/html","UTF-8")).render(); - } -} ----- - -NOTE: When using `quarkus-resteasy-qute` the content negotiation is performed automatically. See <>. - -[[injecting-beans-directly-in-templates]] -=== Injecting Beans Directly In Templates - -A CDI bean annotated with `@Named` can be referenced in any template through `cdi` and/or `inject` namespaces: - -[source,html] ----- -{cdi:personService.findPerson(10).name} <1> -{inject:foo.price} <2> ----- -<1> First, a bean with name `personService` is found and then used as the base object. -<2> First, a bean with name `foo` is found and then used as the base object. - -All expressions with `cdi` and `inject` namespaces are validated during build. -For the expression `cdi:personService.findPerson(10).name` the implementation class of the injected bean must either declare the `findPerson` method or a matching <> must exist. -For the expression `inject:foo.price` the implementation class of the injected bean must either have the `price` property (e.g. a `getPrice()` method) or a matching <> must exist. - -NOTE: A `ValueResolver` is also generated for all beans annotated with `@Named` so that it's possible to access its properties without reflection. - -TIP: If your application serves xref:http-reference.adoc[HTTP requests] you can also inject the current `io.vertx.core.http.HttpServerRequest` via the `inject` namespace, e.g. `{inject:vertxRequest.getParam('foo')}`. - -[[typesafe_expressions]] -=== Type-safe Expressions - -Template expressions can be optionally type-safe. -Which means that an expression is validated against the existing Java types and template extension methods. 
-If an invalid/incorrect expression is found then the build fails.
-
-For example, if there is an expression `item.name` where `item` maps to `org.acme.Item` then `Item` must have a property `name` or a matching template extension method must exist.
-
-An optional _parameter declaration_ is used to bind a Java type to expressions whose first part matches the parameter name.
-Parameter declarations are specified directly in a template.
-
-.Parameter Declaration Example
-[source,html]
-----
-{@org.acme.Foo foo} <1>
-<html>
-<head>
-<title>Qute Hello</title>
-</head>
-<body>
-  <h1>{title}</h1> <2>
-  Hello {foo.message.toLowerCase}! <3> <4>
-</body>
-</html>
-----
-<1> Parameter declaration - maps `foo` to `org.acme.Foo`.
-<2> Not validated - not matching a param declaration.
-<3> This expression is validated. `org.acme.Foo` must have a property `message` or a matching template extension method must exist.
-<4> Likewise, the Java type of the object resolved from `foo.message` must have a property `toLowerCase` or a matching template extension method must exist.
-
-IMPORTANT: A value resolver is automatically generated for all types used in parameter declarations so that it's possible to access its properties without reflection.
-
-TIP: Method parameters of <> are automatically turned into parameter declarations.
-
-Note that sections can override names that would otherwise match a parameter declaration:
-
-[source,html]
-----
-{@org.acme.Foo foo}
-<html>
-<head>
-<title>Qute Hello</title>
-</head>
-<body>
-  <h1>{foo.message}</h1> <1>
-  {#for foo in baz.foos}
-    <p>Hello {foo.message}!</p> <2>
-  {/for}
-</body>
-</html>
-----
-<1> Validated against `org.acme.Foo`.
-<2> Not validated - `foo` is overridden in the loop section.
-
-[[typesafe_templates]]
-=== Type-safe Templates
-You can also define type-safe templates in your Java code.
-If using <>, you can rely on the following convention:
-
-- Organise your template files in the `/src/main/resources/templates` directory, by grouping them into one directory per resource class. So, if your `ItemResource` class references two templates `hello` and `goodbye`, place them at `/src/main/resources/templates/ItemResource/hello.txt` and `/src/main/resources/templates/ItemResource/goodbye.txt`. Grouping templates per resource class makes it easier to navigate to them.
-- In each of your resource classes, declare a `@CheckedTemplate static class Templates {}` class within your resource class.
-- Declare one `public static native TemplateInstance method();` per template file for your resource.
-- Use those static methods to build your template instances.
-
-.ItemResource.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-import io.quarkus.qute.TemplateInstance;
-import io.quarkus.qute.CheckedTemplate;
-
-@Path("item")
-public class ItemResource {
-
-    @Inject
-    ItemService service; // hypothetical service used to look up items
-
-    @CheckedTemplate
-    public static class Templates {
-        public static native TemplateInstance item(Item item); <1> <2>
-    }
-
-    @GET
-    @Path("{id}")
-    @Produces(MediaType.TEXT_HTML)
-    public TemplateInstance get(@PathParam("id") Integer id) {
-        return Templates.item(service.findItem(id)); <3>
-    }
-}
-----
-<1> Declare a method that gives us a `TemplateInstance` for `templates/ItemResource/item.html` and declare its `Item item` parameter so we can validate the template.
-<2> The `item` parameter is automatically turned into a <> and so all expressions that reference this name will be validated.
-<3> Make the `Item` object accessible in the template.
-
-TIP: By default, the templates defined in a class annotated with `@CheckedTemplate` can only contain type-safe expressions, i.e. expressions that can be validated at build time. You can use `@CheckedTemplate(requireTypeSafeExpressions = false)` to relax this requirement.
-
-
-You can also declare a top-level Java class annotated with `@CheckedTemplate`:
-
-.Top-level checked templates
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import io.quarkus.qute.TemplateInstance;
-import io.quarkus.qute.CheckedTemplate;
-
-@CheckedTemplate
-public class Templates {
-    public static native TemplateInstance hello(String name); <1>
-}
-----
-<1> This declares a template with path `templates/hello.txt`. The `name` parameter is automatically turned into a <> and so all expressions that reference this name will be validated.
-
-Then declare one `public static native TemplateInstance method();` per template file.
-Use those static methods to build your template instances:
-
-.HelloResource.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.QueryParam;
-import javax.ws.rs.core.MediaType;
-
-import io.quarkus.qute.TemplateInstance;
-
-@Path("hello")
-public class HelloResource {
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public TemplateInstance get(@QueryParam("name") String name) {
-        return Templates.hello(name);
-    }
-}
-----
-
-[[template_extension_methods]]
-=== Template Extension Methods
-
-Extension methods can be used to extend the data classes with new functionality (to extend the set of accessible properties and methods) or to resolve expressions for a specific <>.
-For example, it is possible to add _computed properties_ and _virtual methods_.
-
-A value resolver is automatically generated for a method annotated with `@TemplateExtension`.
-If a class is annotated with `@TemplateExtension` then a value resolver is generated for every _non-private static method_ declared on the class.
-Method-level annotations override the behavior defined on the class.
-Methods that do not meet the following requirements are ignored.
-
-A template extension method:
-
-* must not be `private`,
-* must be `static`,
-* must not return `void`.
-
-If no namespace is defined, the class of the first parameter that is not annotated with `@TemplateAttribute` is used to match the base object. Otherwise, the namespace is used to match an expression.
-
-The method name is used to match the property name by default.
-However, it is possible to specify the matching name with `TemplateExtension#matchName()`.
-A special constant - `TemplateExtension#ANY` - may be used to specify that the extension method matches any name.
-It is also possible to match the name against a regular expression specified in `TemplateExtension#matchRegex()`.
-In both cases, an additional string method parameter must be used to pass the property name.
-If both `matchName()` and `matchRegex()` are set, the regular expression is used for matching.
-
-.Extension Method Example
-[source,java]
-----
-package org.acme;
-
-class Item {
-
-    public final BigDecimal price;
-
-    public Item(BigDecimal price) {
-        this.price = price;
-    }
-}
-
-@TemplateExtension
-class MyExtensions {
-
-    static BigDecimal discountedPrice(Item item) { <1>
-        return item.price.multiply(new BigDecimal("0.9"));
-    }
-}
-----
-<1> This method matches an expression with base object of the type `Item.class` and the `discountedPrice` property name.
-
-This template extension method makes it possible to render the following template:
-
-[source,html]
-----
-{item.discountedPrice} <1>
-----
-<1> `item` is resolved to an instance of `org.acme.Item`.
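The extension method above delegates to ordinary `BigDecimal` arithmetic, so the value rendered by `{item.discountedPrice}` can be checked in isolation (plain JDK code; the class name here is made up for the example):

```java
import java.math.BigDecimal;

class DiscountCheck {

    // The same computation the discountedPrice extension method performs
    static BigDecimal discountedPrice(BigDecimal price) {
        return price.multiply(new BigDecimal("0.9"));
    }

    public static void main(String[] args) {
        // The scale of a BigDecimal product is the sum of the operand scales: 2 + 1 = 3
        System.out.println(discountedPrice(new BigDecimal("100.00"))); // 90.000
    }
}
```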
- -==== Method Parameters - -An extension method may declare parameters. -If no namespace is specified then the first parameter that is not annotated with `@TemplateAttribute` is used to pass the base object, i.e. `org.acme.Item` in the first example. -If matching any name or using a regular expression then a string method parameter needs to be used to pass the property name. -Parameters annotated with `@TemplateAttribute` are obtained via `TemplateInstance#getAttribute()`. -All other parameters are resolved when rendering the template and passed to the extension method. - -.Multiple Parameters Example -[source,java] ----- -@TemplateExtension -class BigDecimalExtensions { - - static BigDecimal scale(BigDecimal val, int scale, RoundingMode mode) { <1> - return val.setScale(scale, mode); - } -} ----- -<1> This method matches an expression with base object of the type `BigDecimal.class`, with the `scale` virtual method name and two virtual method parameters. - -[source,html] ----- -{item.discountedPrice.scale(2,mode)} <1> ----- -<1> `item.discountedPrice` is resolved to an instance of `BigDecimal`. - -[[namespace_extension_methods]] -==== Namespace Extension Methods - -If `TemplateExtension#namespace()` is specified then the extension method is used to resolve expressions with the given <>. -Template extension methods that share the same namespace are grouped in one resolver ordered by `TemplateExtension#priority()`. -The first matching extension method is used to resolve an expression. - -.Namespace Extension Method Example -[source,java] ----- -@TemplateExtension(namespace = "str") -public class StringExtensions { - - static String format(String fmt, Object... args) { - return String.format(fmt, args); - } - - static String reverse(String val) { - return new StringBuilder(val).reverse().toString(); - } -} ----- - -These extension methods can be used as follows. 
-
-[source,html]
-----
-{str:format('%s %s!','Hello', 'world')} <1>
-{str:reverse('hello')} <2>
-----
-<1> The output is `Hello world!`
-<2> The output is `olleh`
-
-[[built-in-template-extension]]
-==== Built-in Template Extensions
-
-Quarkus provides a set of built-in extension methods.
-
-===== Maps
-
-* `keys` or `keySet`: Returns a Set view of the keys contained in a map
-** `{#for key in map.keySet}`
-
-* `values`: Returns a Collection view of the values contained in a map
-** `{#for value in map.values}`
-
-* `size`: Returns the number of key-value mappings in a map
-** `{map.size}`
-
-* `isEmpty`: Returns true if a map contains no key-value mappings
-** `{#if map.isEmpty}`
-
-* `get(key)`: Returns the value to which the specified key is mapped
-** `{map.get('foo')}`
-
-TIP: A map value can also be accessed directly: `{map.myKey}`. Use the bracket notation for keys that are not legal identifiers: `{map['my key']}`.
-
-===== Collections
-
-* `get(index)`: Returns the element at the specified position in a list
-** `{list.get(0)}`
-
-* `reversed`: Returns a reversed iterator over a list
-** `{#for r in recordsList.reversed}`
-
-* `take`: Returns the first `n` elements from the given list; throws an `IndexOutOfBoundsException` if `n` is out of range
-** `{#for r in recordsList.take(3)}`
-
-* `takeLast`: Returns the last `n` elements from the given list; throws an `IndexOutOfBoundsException` if `n` is out of range
-** `{#for r in recordsList.takeLast(3)}`
-
-TIP: A list element can be accessed directly: `{list.10}` or `{list[10]}`.
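The `take`/`takeLast` semantics described above correspond to simple `List.subList` views. A plain-Java sketch of the assumed behavior (the helper names are hypothetical, not Qute code):

```java
import java.util.List;

class TakeSketch {

    // take(n): the first n elements; subList throws IndexOutOfBoundsException
    // when n is out of range, matching the documented behavior
    static <T> List<T> take(List<T> list, int n) {
        return list.subList(0, n);
    }

    // takeLast(n): the last n elements
    static <T> List<T> takeLast(List<T> list, int n) {
        return list.subList(list.size() - n, list.size());
    }

    public static void main(String[] args) {
        List<Integer> records = List.of(1, 2, 3, 4, 5);
        System.out.println(take(records, 3));     // [1, 2, 3]
        System.out.println(takeLast(records, 3)); // [3, 4, 5]
    }
}
```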
-
-===== Numbers
-
-* `mod`: Modulo operation
-** `{#if counter.mod(5) == 0}`
-
-===== Strings
-
-* `fmt` or `format`: Format the string instance via `java.lang.String.format()`
-** `{myStr.fmt("arg1","arg2")}`
-** `{myStr.format(locale,arg1)}`
-* `str:fmt` or `str:format`: Format the supplied string value via `java.lang.String.format()`
-** `{str:format("Hello %s!",name)}`
-** `{str:fmt(locale,'%tA',now)}`
-
-===== Config
-
-* `config:<name>` or `config:['<name>']`: Returns the config value for the given property name
-** `{config:foo}` or `{config:['property.with.dot.in.name']}`
-
-* `config:property(name)`: Returns the config value for the given property name; the name can be obtained dynamically by an expression
-** `{config:property('quarkus.foo')}`
-** `{config:property(foo.getPropertyName())}`
-
-* `config:boolean(name)`: Returns the config value for the given property name as a boolean; the name can be obtained dynamically by an expression
-** `{config:boolean('quarkus.foo.boolean') ?: 'Not Found'}`
-** `{config:boolean(foo.getPropertyName()) ?: 'property is false'}`
-
-* `config:integer(name)`: Returns the config value for the given property name as an integer; the name can be obtained dynamically by an expression
-** `{config:integer('quarkus.foo')}`
-** `{config:integer(foo.getPropertyName())}`
-
-===== Time
-
-* `format(pattern)`: Formats temporal objects from the `java.time` package
-** `{dateTime.format('d MMM uuuu')}`
-
-* `format(pattern,locale)`: Formats temporal objects from the `java.time` package
-** `{dateTime.format('d MMM uuuu',myLocale)}`
-
-* `format(pattern,locale,timeZone)`: Formats temporal objects from the `java.time` package
-** `{dateTime.format('d MMM uuuu',myLocale,myTimeZoneId)}`
-
-* `time:format(dateTime,pattern)`: Formats temporal objects from the `java.time` package, `java.util.Date`, `java.util.Calendar` and `java.lang.Number`
-** `{time:format(myDate,'d MMM uuuu')}`
-
-* `time:format(dateTime,pattern,locale)`: Formats temporal objects from
the `java.time` package, `java.util.Date`, `java.util.Calendar` and `java.lang.Number` -** `{time:format(myDate,'d MMM uuuu', myLocale)}` - -* `time:format(dateTime,pattern,locale,timeZone)`: Formats temporal objects from the `java.time` package, `java.util.Date`, `java.util.Calendar` and `java.lang.Number` -** `{time:format(myDate,'d MMM uuuu',myLocale,myTimeZoneId)}` - -[[template_data]] -=== @TemplateData - -A value resolver is automatically generated for a type annotated with `@TemplateData`. -This allows Quarkus to avoid using reflection to access the data at runtime. - -NOTE: Non-public members, constructors, static initializers, static, synthetic and void methods are always ignored. - -[source,java] ----- -package org.acme; - -@TemplateData -class Item { - - public final BigDecimal price; - - public Item(BigDecimal price) { - this.price = price; - } - - public BigDecimal getDiscountedPrice() { - return price.multiply(new BigDecimal("0.9")); - } -} ----- - -Any instance of `Item` can be used directly in the template: - -[source,html] ----- -{#each items} <1> - {it.price} / {it.discountedPrice} -{/each} ----- -<1> `items` is resolved to a list of `org.acme.Item` instances. - -Furthermore, `@TemplateData.properties()` and `@TemplateData.ignore()` can be used to fine-tune the generated resolver. -Finally, it is also possible to specify the "target" of the annotation - this could be useful for third-party classes not controlled by the application: - -[source,java] ----- -@TemplateData(target = BigDecimal.class) -@TemplateData -class Item { - - public final BigDecimal price; - - public Item(BigDecimal price) { - this.price = price; - } -} ----- - -[source,html] ----- -{#each items} - {it.price.setScale(2, rounding)} <1> -{/each} ----- -<1> The generated value resolver knows how to invoke the `BigDecimal.setScale()` method. 
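Since the generated resolver simply invokes `BigDecimal.setScale()`, the value produced by `{it.price.setScale(2, rounding)}` can be reasoned about with plain JDK calls; here `rounding` is assumed to be a `RoundingMode` passed in as template data:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

class ScaleCheck {

    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("123.456");

        // What the expression evaluates to for two different rounding modes
        System.out.println(price.setScale(2, RoundingMode.HALF_UP)); // 123.46
        System.out.println(price.setScale(2, RoundingMode.DOWN));    // 123.45
    }
}
```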
-
-==== Accessing Static Fields and Methods
-
-If `@TemplateData#namespace()` is set to a non-empty value then a namespace resolver is automatically generated to access the public static fields and methods of the target class.
-By default, the namespace is the FQCN of the target class where dots and dollar signs are replaced by underscores.
-For example, the namespace for a class with name `org.acme.Foo` is `org_acme_Foo`.
-The static field `Foo.AGE` can be accessed via `{org_acme_Foo:AGE}`.
-The static method `Foo.computeValue(int number)` can be accessed via `{org_acme_Foo:computeValue(10)}`.
-
-NOTE: A namespace can only consist of alphanumeric characters and underscores.
-
-.Class Annotated With `@TemplateData`
-[source,java]
-----
-package model;
-
-@TemplateData <1>
-public class Statuses {
-    public static final String ON = "on";
-    public static final String OFF = "off";
-}
-----
-<1> A namespace resolver with the namespace `model_Statuses` is generated automatically.
-
-.Template Accessing Class Constants
-[source,html]
-----
-{#if machine.status == model_Statuses:ON}
-    The machine is ON!
-{/if}
-----
-
-==== Convenient Annotation For Enums
-
-There's also a convenient annotation to access enum constants: `@io.quarkus.qute.TemplateEnum`.
-This annotation is functionally equivalent to `@TemplateData(namespace = TemplateData.SIMPLENAME)`, i.e. a namespace resolver is automatically generated for the target enum and the simple name of the target enum is used as the namespace.
-
-.Enum Annotated With `@TemplateEnum`
-[source,java]
-----
-package model;
-
-@TemplateEnum <1>
-public enum Status {
-    ON,
-    OFF
-}
-----
-<1> A namespace resolver with the namespace `Status` is generated automatically.
-
-NOTE: `@TemplateEnum` declared on a non-enum class is ignored. If an enum also declares the `@TemplateData` annotation, the `@TemplateEnum` annotation is ignored.
-
-.Template Accessing Enum Constants
-[source,html]
-----
-{#if machine.status == Status:ON}
-    The machine is ON!
-{/if}
-----
-
-TIP: Quarkus detects possible namespace collisions and fails the build if a specific namespace is defined by multiple `@TemplateData` and/or `@TemplateEnum` annotations.
-
-[[global_variables]]
-=== Global Variables
-
-The `io.quarkus.qute.TemplateGlobal` annotation can be used to denote static fields and methods that supply _global variables_ which are accessible in any template.
-Internally, each global variable is added to the data map of any `TemplateInstance` via the `TemplateInstance#data(String, Object)` method.
-
-.Global Variables Definition
-[source,java]
-----
-enum Color { RED, GREEN, BLUE }
-
-@TemplateGlobal <1>
-public class Globals {
-
-    static int age = 40;
-
-    static Color[] myColors() {
-        return new Color[] { Color.RED, Color.BLUE };
-    }
-
-    @TemplateGlobal(name = "currentUser") <2>
-    static String user() {
-        return "Mia";
-    }
-}
-----
-<1> If a class is annotated with `@TemplateGlobal` then every non-void non-private static method that declares no parameters and every non-private static field is considered a global variable. The name is defaulted, i.e. the name of the field/method is used.
-<2> Method-level annotations override the class-level annotation. In this particular case, the name is not defaulted but selected explicitly.
-
-.A Template Accessing Global Variables
-[source,html]
-----
-User: {currentUser} <1>
-Age: {age} <2>
-Colors: {#each myColors}{it}{#if it_hasNext}, {/if}{/each} <3>
-----
-<1> `currentUser` resolves to `Globals#user()`.
-<2> `age` resolves to `Globals#age`.
-<3> `myColors` resolves to `Globals#myColors()`.
-
-NOTE: Global variables implicitly add <> to all templates, so any expression that references a global variable is validated during build.
- -.The Output -[source,html] ----- -User: Mia -Age: 40 -Colors: RED, BLUE ----- - -==== Resolving Conflicts - -Global variables may conflict with regular data objects. -<> override the global variables automatically. -For example, the following definition overrides the global variable supplied by the `Globals#user()` method: - -.Type-safe Template Definition -[source,java] ----- -import org.acme.User; - -@CheckedTemplate -public class Templates { - static native TemplateInstance hello(User currentUser); <1> -} ----- -<1> `currentUser` conflicts with the global variable supplied by `Globals#user()`. - -So the corresponding template does not result in a validation error even though the `Globals#user()` method returns `java.lang.String` which does not have the `name` property: - -.`templates/hello.txt` -[source,html] ----- -User name: {currentUser.name} <1> ----- -<1> `org.acme.User` has the `name` property. - -For other templates an explicit parameter declaration is needed: - -[source,html] ----- -{@org.acme.User currentUser} <1> - -User name: {currentUser.name} ----- -<1> This parameter declaration overrides the declaration added by the global variable supplied by the `Globals#user()` method. - - -[[native_executables]] -=== Native Executables - -In the JVM mode a reflection-based value resolver may be used to access properties and call methods of the model classes. -But this does not work for xref:building-native-image.adoc[a native executable] out of the box. -As a result, you may encounter template exceptions like `Property "name" not found on the base object "org.acme.Foo" in expression {foo.name} in template hello.html` even if the `Foo` class declares a relevant getter method. 
-
-There are several ways to solve this problem:
-
-* Make use of <> or <>
-** In this case, an optimized value resolver is generated automatically and used at runtime
-** This is the preferred solution
-* Annotate the model class with <> - a specialized value resolver is generated and used at runtime
-* Annotate the model class with `@io.quarkus.runtime.annotations.RegisterForReflection` to make the reflection-based value resolver work
-
-
-[[resteasy_integration]]
-=== RESTEasy Integration
-
-If you want to use Qute in your JAX-RS application, then depending on which JAX-RS stack you are using, you'll need to register the proper extension first.
-If you are using the traditional `quarkus-resteasy` extension, then in your `pom.xml` file, add:
-
-[source,xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-qute</artifactId>
-</dependency>
-----
-
-If instead you are using RESTEasy Reactive via the `quarkus-resteasy-reactive` extension, then in your `pom.xml` file, add:
-
-[source,xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-reactive-qute</artifactId>
-</dependency>
-----
-
-Both of these extensions register a special `ContainerResponseFilter` implementation which enables resource methods to return a `TemplateInstance`, freeing users from having to take care of all the necessary internal steps.
-
-The end result is that using Qute within a JAX-RS resource may look as simple as:
-
-.HelloResource.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.QueryParam;
-import javax.ws.rs.core.MediaType;
-
-import io.quarkus.qute.TemplateInstance;
-import io.quarkus.qute.Template;
-
-@Path("hello")
-public class HelloResource {
-
-    @Inject
-    Template hello; <1>
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public TemplateInstance get(@QueryParam("name") String name) {
-        return hello.data("name", name); <2> <3>
-    }
-}
-----
-<1> If there is no `@Location` qualifier provided, the field name is used to locate the template.
In this particular case, we're injecting a template with path `templates/hello.txt`.
-<2> `Template.data()` returns a new template instance that can be customized before the actual rendering is triggered. In this case, we put the name value under the key `name`. The data map is accessible during rendering.
-<3> Note that we don't trigger the rendering - this is done automatically by a special `ContainerResponseFilter` implementation.
-
-TIP: Users are encouraged to use <> that help to organize the templates for a specific JAX-RS resource and enable <> automatically.
-
-The content negotiation is performed automatically.
-The resulting output depends on the `Accept` header received from the client.
-
-[source,java]
-----
-@Path("/detail")
-class DetailResource {
-
-    @Inject
-    Template item; <1>
-
-    @GET
-    @Produces({ MediaType.TEXT_HTML, MediaType.TEXT_PLAIN })
-    public TemplateInstance item() {
-        return item.data("myItem", new Item("Alpha", 1000)); <2>
-    }
-}
-----
-<1> Inject a variant template with base path derived from the injected field - `src/main/resources/templates/item`.
-<2> For `text/plain` the `src/main/resources/templates/item.txt` template is used. For `text/html` the `src/main/resources/templates/item.html` template is used.
-
-The `RestTemplate` util class can be used to obtain a template instance in the body of a JAX-RS resource method:
-
-.RestTemplate Example
-[source,java]
-----
-@Path("/detail")
-class DetailResource {
-
-    @GET
-    @Produces({ MediaType.TEXT_HTML, MediaType.TEXT_PLAIN })
-    public TemplateInstance item() {
-        return RestTemplate.data("myItem", new Item("Alpha", 1000)); <1>
-    }
-}
-----
-<1> The name of the template is derived from the resource class and method name; `DetailResource/item` in this particular case.
-
-WARNING: Unlike with `@Inject`, the templates obtained via `RestTemplate` are not validated, i.e. the build does not fail if a template does not exist.
-
-=== Development Mode
-
-In development mode, all files located in `src/main/resources/templates` are watched for changes and modifications are immediately visible.
-
-[[type-safe-message-bundles]]
-=== Type-safe Message Bundles
-
-==== Basic Concepts
-
-The basic idea is that every message is potentially a very simple template.
-In order to prevent type errors, a message is defined as an annotated method of a *message bundle interface*.
-Quarkus generates the *message bundle implementation* at build time.
-Subsequently, the bundles can be used at runtime:
-
-1. Directly in your code via `io.quarkus.qute.i18n.MessageBundles#get()`; e.g. `MessageBundles.get(AppMessages.class).hello_name("Lucie")`
-2. Injected in your beans via `@Inject`; e.g. `@Inject AppMessages`
-3. Referenced in the templates via the message bundle namespace:
-+
-[source,html]
-----
- {msg:hello_name('Lucie')} <1> <2> <3>
- {msg:message(myKey,'Lu')} <4>
-----
-<1> `msg` is the default namespace.
-<2> `hello_name` is the message key.
-<3> `Lucie` is the parameter of the message bundle interface method.
-<4> It is also possible to obtain a localized message for a key resolved at runtime using the reserved key `message`. The validation is skipped in this case though.
-
-.Message Bundle Interface Example
-[source,java]
-----
-import io.quarkus.qute.i18n.Message;
-import io.quarkus.qute.i18n.MessageBundle;
-
-@MessageBundle <1>
-public interface AppMessages {
-
-    @Message("Hello {name}!") <2>
-    String hello_name(String name); <3>
-}
-----
-<1> Denotes a message bundle interface. The bundle name defaults to `msg` and is used as a namespace in template expressions, e.g. `{msg:hello_name}`.
-<2> Each method must be annotated with `@Message`. The value is a Qute template.
-<3> The method parameters can be used in the template.
-
-==== Bundle Name and Message Keys
-
-Keys are used directly in templates.
-The bundle name is used as a namespace in template expressions.
-The `@MessageBundle` annotation can be used to define the default strategy used to generate message keys from method names.
-However, `@Message` can override this strategy and even define a custom key.
-By default, the annotated element's name is used as-is.
-Other possibilities are:
-
-1. De-camel-cased and hyphenated; e.g. `helloName()` -> `hello-name`
-2. De-camel-cased with parts separated by underscores; e.g. `helloName()` -> `hello_name`.
-
-==== Validation
-
-* All message bundle templates are validated:
-** All expressions without a namespace must map to a parameter; e.g. `Hello {foo}` -> the method must have a param of name `foo`
-** All expressions are validated against the types of the parameters; e.g. `Hello {foo.bar}` where the parameter `foo` is of type `org.acme.Foo` -> `org.acme.Foo` must have a property of name `bar`
-+
-NOTE: A warning message is logged for each _unused_ parameter.
-* Expressions that reference a message bundle method, such as `{msg:hello(item.name)}`, are validated too.
-
-==== Localization
-
-The default locale specified via the `quarkus.default-locale` config property is used for the `@MessageBundle` interface by default.
-However, `io.quarkus.qute.i18n.MessageBundle#locale()` can be used to specify a custom locale.
-Additionally, there are two ways to define a localized bundle:
-
-1. Create an interface annotated with `@Localized` that extends the default interface
-2. Create a UTF-8 encoded file located in `src/main/resources/messages`; e.g. `msg_de.properties`.
-
-TIP: A localized interface is the preferred solution, mainly due to the possibility of easy refactoring.
-
-.Localized Interface Example
-[source,java]
-----
-import io.quarkus.qute.i18n.Localized;
-import io.quarkus.qute.i18n.Message;
-
-@Localized("de") <1>
-public interface GermanAppMessages extends AppMessages {
-
-    @Override
-    @Message("Hallo {name}!") <2>
-    String hello_name(String name);
-}
-----
-<1> The value is the locale tag string (IETF).
-<2> The value is the localized template.
-
-Message bundle files must be encoded in UTF-8.
-The file name consists of the relevant bundle name (e.g. `msg`) and an underscore followed by the locale tag (IETF).
-The file format is very simple: each line represents either a key/value pair with the equals sign used as a separator, or a comment (a line starting with `#`).
-Blank lines are ignored.
-Keys are _mapped to method names_ from the corresponding message bundle interface.
-Values represent the templates normally defined by `io.quarkus.qute.i18n.Message#value()`.
-A value may be spread out across several adjacent normal lines.
-In such a case, the line terminator must be escaped with a backslash character `\`.
-The behavior is very similar to the behavior of the `java.util.Properties.load(Reader)` method.
-
-.Localized File Example - `msg_de.properties`
-[source,properties]
-----
-# This comment is ignored
-hello_name=Hallo {name}! <1> <2>
-----
-<1> Each line in a localized file represents a key/value pair. The key must correspond to a method declared on the message bundle interface. The value is the message template.
-<2> Keys and values are separated by the equals sign.
-
-NOTE: We use the `.properties` suffix in our example because most IDEs and text editors support syntax highlighting of `.properties` files. But in fact, the suffix could be anything - it is just ignored.
-
-TIP: An example properties file is automatically generated into the target directory for each message bundle interface. For example, by default, if no name is specified for `@MessageBundle`, the file `target/qute-i18n-examples/msg.properties` is generated when the application is built via `mvn clean package`. You can use this file as a base for a specific locale. Just rename the file - e.g. `msg_fr.properties` - change the message templates, and move it to the `src/main/resources/messages` directory.
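Since the parsing rules mirror `java.util.Properties`, the line-continuation behavior described above can be checked with plain JDK code. This is a standalone illustration, not part of the Qute API:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropertiesContinuationDemo {

    public static void main(String[] args) throws IOException {
        // A value spread across several lines; each line terminator is escaped
        // with '\' and leading whitespace on continuation lines is ignored.
        String file = "# This comment is ignored\n"
                + "hello=Hello \\\n"
                + "    {name} and \\\n"
                + "    good morning!\n";

        Properties props = new Properties();
        props.load(new StringReader(file));

        // The three physical lines collapse into a single message template.
        System.out.println(props.getProperty("hello"));
    }
}
```

Running the class prints `Hello {name} and good morning!` - the escaped line terminators and the leading whitespace of the continuation lines do not end up in the value.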
-
-.Value Spread Out Across Several Adjacent Lines
-[source,properties]
-----
-hello=Hello \
-    {name} and \
-    good morning!
-----
-Note that the line terminator is escaped with a backslash character `\` and white space at the start of the following line is ignored. I.e. `{msg:hello('Edgar')}` would be rendered as `Hello Edgar and good morning!`.
-
-Once we have the localized bundles defined, we need a way to _select_ the correct bundle for a specific template instance, i.e. to specify the locale for all message bundle expressions in the template.
-By default, the locale specified via the `quarkus.default-locale` configuration property is used to select the bundle.
-Alternatively, you can specify the `locale` attribute of a template instance.
-
-.`locale` Attribute Example
-[source,java]
-----
-@Singleton
-public class MyBean {
-
-    @Inject
-    Template hello;
-
-    String render() {
-        return hello.instance().setAttribute("locale", Locale.forLanguageTag("cs")).render(); <1>
-    }
-}
-----
-<1> You can set a `Locale` instance or a locale tag string (IETF).
-
-
-NOTE: When using <> the `locale` attribute is derived from the `Accept-Language` header if not set by a user.
-
-The `@Localized` qualifier can be used to inject a localized message bundle interface.
-
-.Injected Localized Message Bundle Example
-[source,java]
-----
-@Singleton
-public class MyBean {
-
-    @Localized("cs") <1>
-    AppMessages msg;
-
-    String render() {
-        return msg.hello_name("Jachym");
-    }
-}
-----
-<1> The annotation value is a locale tag string (IETF).
-
-
-=== Configuration Reference
-
-include::{generated-dir}/config/quarkus-qute.adoc[leveloffset=+1, opts=optional]
-
-
-[[standalone]]
-== Qute Used as a Standalone Library
-
-Qute is primarily designed as a Quarkus extension.
-However, it is possible to use it as a "standalone" library.
-In this case, some features are not available and some additional configuration is needed.
-
-Engine:: First of all, no managed `Engine` instance is available out of the box.
-You'll need to configure a new instance via `Engine.builder()`.
-
-Templates::
-* By default, no <> are registered, i.e. `Engine.getTemplate(String)` will not work.
-* You can register a custom template locator, or parse a template manually and put the result in the cache via `Engine.putTemplate(String, Template)`.
-
-Value resolvers::
-* No <> are generated automatically.
-** <> will not work.
-** <> annotations are ignored.
-* It's recommended to register a `ReflectionValueResolver` instance via `Engine.addValueResolver(new ReflectionValueResolver())` so that Qute can access object properties and call public methods.
-+
-NOTE: Keep in mind that reflection may not work correctly in some restricted environments or may require additional configuration, e.g. registration in case of a GraalVM native image.
-* A custom value resolver can be easily built via `ValueResolver.builder()`.
-
-Type-safety::
-* <> are not validated.
-* <> are not supported.
-
-Injection:: It is not possible to inject a `Template` instance and vice versa - a template cannot inject a `@Named` CDI bean via the `inject:` namespace.
diff --git a/_versions/2.7/guides/qute.adoc b/_versions/2.7/guides/qute.adoc
deleted file mode 100644
index 6d9a8e2fa83..00000000000
--- a/_versions/2.7/guides/qute.adoc
+++ /dev/null
@@ -1,533 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Qute Templating Engine
-
-include::./attributes.adoc[]
-
-Qute is a templating engine designed specifically to meet the needs of Quarkus.
-The usage of reflection is minimized to reduce the size of native images.
-The API combines both the imperative and the non-blocking reactive style of coding.
-In development mode, all files located in `src/main/resources/templates` are watched for changes and modifications are immediately visible.
-Furthermore, we try to detect most of the template problems at build time.
-In this guide, you will learn how to easily render templates in your application.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `qute-quickstart` {quickstarts-tree-url}/qute-quickstart[directory].
-
-== Hello World with JAX-RS
-
-If you want to use Qute in your JAX-RS application, you need to add an extension first:
-
-* either `quarkus-resteasy-qute` if you are using RESTEasy Classic:
-+
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-qute</artifactId>
-</dependency>
-----
-+
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-resteasy-qute")
-----
-
-* or `quarkus-resteasy-reactive-qute` if you are using RESTEasy Reactive:
-+
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-reactive-qute</artifactId>
</dependency>
-----
-+
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-resteasy-reactive-qute")
-----
-
-We'll start with a very simple template:
-
-.hello.txt
-[source]
-----
-Hello {name}! <1>
-----
-<1> `{name}` is a value expression that is evaluated when the template is rendered.
-
-NOTE: By default, all files located in the `src/main/resources/templates` directory and its subdirectories are registered as templates.
Templates are validated during startup and watched for changes in development mode.
-
-Now let's inject the "compiled" template in the resource class.
-
-.HelloResource.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.QueryParam;
-import javax.ws.rs.core.MediaType;
-
-import io.quarkus.qute.TemplateInstance;
-import io.quarkus.qute.Template;
-
-@Path("hello")
-public class HelloResource {
-
-    @Inject
-    Template hello; <1>
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public TemplateInstance get(@QueryParam("name") String name) {
-        return hello.data("name", name); <2> <3>
-    }
-}
-----
-<1> If there is no `@Location` qualifier provided, the field name is used to locate the template. In this particular case, we're injecting a template with path `templates/hello.txt`.
-<2> `Template.data()` returns a new template instance that can be customized before the actual rendering is triggered. In this case, we put the name value under the key `name`. The data map is accessible during rendering.
-<3> Note that we don't trigger the rendering - this is done automatically by a special `ContainerResponseFilter` implementation.
-
-If your application is running, you can request the endpoint:
-
-[source,shell]
-----
-$ curl -w "\n" http://localhost:8080/hello?name=Martin
-Hello Martin!
-----
-
-== Type-safe templates
-
-There's an alternate way to declare your templates in your Java code, which relies on the following convention:
-
-- Organise your template files in the `/src/main/resources/templates` directory, by grouping them into one directory per resource class. So, if
-  your `ItemResource` class references two templates `hello` and `goodbye`, place them at `/src/main/resources/templates/ItemResource/hello.txt`
-  and `/src/main/resources/templates/ItemResource/goodbye.txt`. Grouping templates per resource class makes it easier to navigate to them.
-- In each of your resource classes, declare a `@CheckedTemplate static class Templates {}`.
-- Declare one `public static native TemplateInstance method();` per template file for your resource.
-- Use those static methods to build your template instances.
-
-Here's the previous example, rewritten using this style:
-
-We'll start with a very simple template:
-
-.HelloResource/hello.txt
-[source]
-----
-Hello {name}! <1>
-----
-<1> `{name}` is a value expression that is evaluated when the template is rendered.
-
-Now let's declare and use those templates in the resource class.
-
-.HelloResource.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.QueryParam;
-import javax.ws.rs.core.MediaType;
-
-import io.quarkus.qute.TemplateInstance;
-import io.quarkus.qute.CheckedTemplate;
-
-@Path("hello")
-public class HelloResource {
-
-    @CheckedTemplate
-    public static class Templates {
-        public static native TemplateInstance hello(String name); <1>
-    }
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public TemplateInstance get(@QueryParam("name") String name) {
-        return Templates.hello(name); <2>
-    }
-}
-----
-<1> This declares a template with path `templates/HelloResource/hello`.
-<2> `Templates.hello()` returns a new template instance that is returned from the resource method. Note that we don't trigger the rendering - this is done automatically by a special `ContainerResponseFilter` implementation.
-
-NOTE: Once you have declared a `@CheckedTemplate` class, we will check that all its methods point to existing templates, so if you try to use a template from your Java code and you forgot to add it, we will let you know at build time :)
-
-Keep in mind this style of declaration allows you to reference templates declared in other resources too:
-
-.GoodbyeResource.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.QueryParam;
-import javax.ws.rs.core.MediaType;
-
-import io.quarkus.qute.TemplateInstance;
-
-@Path("goodbye")
-public class GoodbyeResource {
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public TemplateInstance get(@QueryParam("name") String name) {
-        return HelloResource.Templates.hello(name);
-    }
-}
-----
-
-=== Top-level type-safe templates
-
-Naturally, if you want to declare templates at the top level, directly in `/src/main/resources/templates/hello.txt`, for example,
-you can declare them in a top-level (non-nested) `Templates` class:
-
-.Templates.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import io.quarkus.qute.TemplateInstance;
-import io.quarkus.qute.CheckedTemplate;
-
-@CheckedTemplate
-public class Templates {
-    public static native TemplateInstance hello(String name); <1>
-}
-----
-<1> This declares a template with path `templates/hello`.
-
-
-== Template Parameter Declarations
-
-If you declare a *parameter declaration* in a template, then Qute attempts to validate all expressions that reference this parameter, and if an incorrect expression is found, the build fails.
-
-Let's suppose we have a simple class like this:
-
-.Item.java
-[source,java]
-----
-import java.math.BigDecimal;
-
-public class Item {
-    public String name;
-    public BigDecimal price;
-}
-----
-
-And we'd like to render a simple HTML page that contains the item name and price.
-
-Let's start again with the template:
-
-.ItemResource/item.html
-[source,html]
-----
-<html>
-<head>
-<meta charset="UTF-8">
-<title>{item.name}</title> <1>
-</head>
-<body>
-    <h1>{item.name}</h1>
-    <div>Price: {item.price}</div> <2>
-</body>
-</html>
-----
-<1> This expression is validated. Try to change the expression to `{item.nonSense}` and the build should fail.
-<2> This is also validated.
-
-Finally, let's create a resource class with type-safe templates:
-
-.ItemResource.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-import io.quarkus.qute.TemplateInstance;
-import io.quarkus.qute.Template;
-import io.quarkus.qute.CheckedTemplate;
-
-@Path("item")
-public class ItemResource {
-
-    @Inject
-    ItemService service;
-
-    @CheckedTemplate
-    public static class Templates {
-        public static native TemplateInstance item(Item item); <1>
-    }
-
-    @GET
-    @Path("{id}")
-    @Produces(MediaType.TEXT_HTML)
-    public TemplateInstance get(@PathParam("id") Integer id) {
-        return Templates.item(service.findItem(id)); <2>
-    }
-}
-----
-<1> Declare a method that gives us a `TemplateInstance` for `templates/ItemResource/item.html` and declare its `Item item` parameter so we can validate the template.
-<2> Make the `Item` object accessible in the template.
-
-=== Template parameter declaration inside the template itself
-
-Alternatively, you can declare your template parameters in the template file itself.
-
-Let's start again with the template:
-
-.item.html
-[source,html]
-----
-{@org.acme.Item item} <1>
-
-<html>
-<head>
-<meta charset="UTF-8">
-<title>{item.name}</title> <2>
-</head>
-<body>
-    <h1>{item.name}</h1>
-    <div>Price: {item.price}</div>
-</body>
-</html>
-----
-<1> Optional parameter declaration. Qute attempts to validate all expressions that reference the parameter `item`.
-<2> This expression is validated. Try to change the expression to `{item.nonSense}` and the build should fail.
-
-Finally, let's create a resource class.
-
-.ItemResource.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-import io.quarkus.qute.TemplateInstance;
-import io.quarkus.qute.Template;
-
-@Path("item")
-public class ItemResource {
-
-    @Inject
-    ItemService service;
-
-    @Inject
-    Template item; <1>
-
-    @GET
-    @Path("{id}")
-    @Produces(MediaType.TEXT_HTML)
-    public TemplateInstance get(@PathParam("id") Integer id) {
-        return item.data("item", service.findItem(id)); <2>
-    }
-}
-----
-<1> Inject the template with path `templates/item.html`.
-<2> Make the `Item` object accessible in the template.
-
-== Template Extension Methods
-
-*Template extension methods* are used to extend the set of accessible properties of data objects.
-
-Sometimes, you're not in control of the classes that you want to use in your template, and you cannot add methods
-to them. Template extension methods allow you to declare new methods for those classes that will be available
-from your templates just as if they belonged to the target class.
-
-Let's keep extending our simple HTML page that contains the item name and price, and add a discounted price.
-The discounted price is sometimes called a "computed property".
-We will implement a template extension method to render this property easily.
-Let's update our template:
-
-.HelloResource/item.html
-[source,html]
-----
-<html>
-<head>
-<meta charset="UTF-8">
-<title>{item.name}</title>
-</head>
-<body>
-    <h1>{item.name}</h1>
-    <div>Price: {item.price}</div>
-    {#if item.price > 100} <1>
-    <div>Discounted Price: {item.discountedPrice}</div> <2>
-    {/if}
-</body>
-</html>
-----
-<1> `if` is a basic control flow section.
-<2> This expression is also validated against the `Item` class and obviously there is no such property declared. However, there is a template extension method declared on the `TemplateExtensions` class - see below.
-
-Finally, let's create a class where we put all our extension methods:
-
-.TemplateExtensions.java
-[source,java]
-----
-package org.acme.quarkus.sample;
-
-import java.math.BigDecimal;
-
-import io.quarkus.qute.TemplateExtension;
-
-@TemplateExtension
-public class TemplateExtensions {
-
-    public static BigDecimal discountedPrice(Item item) { <1>
-        return item.price.multiply(new BigDecimal("0.9"));
-    }
-}
-----
-<1> A static template extension method can be used to add "computed properties" to a data class. The class of the first parameter is used to match the base object and the method name is used to match the property name.
-
-NOTE: You can place template extension methods in any class if you annotate them with `@TemplateExtension`, but we advise keeping them either
-grouped by target type, or in a single `TemplateExtensions` class by convention.
-
-== Rendering Periodic Reports
-
-The templating engine can also be very useful when rendering periodic reports.
-You'll need to add the `quarkus-scheduler` and `quarkus-qute` extensions first.
-In your `pom.xml` file, add:
-
-[source,xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-qute</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-scheduler</artifactId>
-</dependency>
-----
-
-Let's suppose we have a `SampleService` bean whose `get()` method returns a list of samples.
-
-.Sample.java
-[source,java]
-----
-public class Sample {
-    public boolean valid;
-    public String name;
-    public String data;
-}
-----
-
-The template is simple:
-
-.report.html
-[source,html]
-----
-<html>
-<head>
-<title>Report {now}</title>
-</head>
-<body>
-    <h1>Report {now}</h1>
-    {#for sample in samples} <1>
-      <h2>{sample.name ?: 'Unknown'}</h2> <2>
-      <p>
-      {#if sample.valid}
-        {sample.data}
-      {#else}
-        Invalid sample found.
-      {/if}
-      </p>
-    {/for}
-</body>
-</html>
-----
-<1> The loop section makes it possible to iterate over iterables, maps and streams.
-<2> This value expression is using the https://en.wikipedia.org/wiki/Elvis_operator[elvis operator] - if the name is null the default value is used.
-
-[source,java]
-.ReportGenerator.java
-----
-package org.acme.quarkus.sample;
-
-import javax.inject.Inject;
-
-import io.quarkus.qute.Template;
-import io.quarkus.qute.Location;
-import io.quarkus.scheduler.Scheduled;
-
-public class ReportGenerator {
-
-    @Inject
-    SampleService service;
-
-    @Location("reports/v1/report_01") <1>
-    Template report;
-
-    @Scheduled(cron="0 30 * * * ?") <2>
-    void generate() {
-        String result = report
-                .data("samples", service.get())
-                .data("now", java.time.LocalDateTime.now())
-                .render(); <3>
-        // Write the result somewhere...
-    }
-}
-----
-<1> In this case, we use the `@Location` qualifier to specify the template path: `templates/reports/v1/report_01.html`.
-<2> Use the `@Scheduled` annotation to instruct Quarkus to execute this method on the half hour. For more information see the xref:scheduler.adoc[Scheduler] guide.
-<3> The `TemplateInstance.render()` method triggers rendering. Note that this method blocks the current thread.
-
-== Reactive and Asynchronous APIs
-
-Templates can be rendered as a `CompletionStage<String>` (completed with the rendered output asynchronously) or as a `Publisher<String>` containing the rendered chunks:
-
-[source, java]
-----
-CompletionStage<String> async = template.data("name", "neo").renderAsync();
-Publisher<String> publisher = template.data("name", "neo").publisher();
-----
-
-In the case of a `Publisher`, the template is rendered chunk by chunk following the requests from the subscriber.
-The rendering is not started until a subscriber requests it.
-The returned `Publisher` is an instance of `io.smallrye.mutiny.Multi`.
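The `CompletionStage`-based variant composes like any other `CompletableFuture` pipeline. Here is a plain-JDK sketch of that pattern - the `renderAsync` stand-in below is hypothetical and only simulates an asynchronous rendering result, it is not Qute's implementation:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class AsyncRenderSketch {

    // Stand-in for template.data("name", "neo").renderAsync() - illustrative only.
    static CompletionStage<String> renderAsync(String name) {
        return CompletableFuture.supplyAsync(() -> "Hello " + name + "!");
    }

    public static void main(String[] args) {
        // The rendered output becomes available asynchronously and can be
        // composed with further stages without blocking the caller.
        String result = renderAsync("neo")
                .thenApply(String::toUpperCase)
                .toCompletableFuture()
                .join();
        System.out.println(result); // HELLO NEO!
    }
}
```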
-
-It is possible to create an instance of `io.smallrye.mutiny.Uni` as follows:
-
-[source, java]
-----
-Uni<String> uni = Uni.createFrom().completionStage(() -> template.data("name", "neo").renderAsync());
-----
-
-In this case, the rendering only starts once the subscriber requests it.
-
-== Qute Reference Guide
-
-To learn more about Qute, please refer to the xref:qute-reference.adoc[Qute reference guide].
-
-[[qute-configuration-reference]]
-== Qute Configuration Reference
-
-include::{generated-dir}/config/quarkus-qute.adoc[leveloffset=+1, opts=optional]
diff --git a/_versions/2.7/guides/rabbitmq-dev-services.adoc b/_versions/2.7/guides/rabbitmq-dev-services.adoc
deleted file mode 100644
index 12060f43a7a..00000000000
--- a/_versions/2.7/guides/rabbitmq-dev-services.adoc
+++ /dev/null
@@ -1,118 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Dev Services for RabbitMQ
-
-include::./attributes.adoc[]
-
-Dev Services for RabbitMQ automatically starts a RabbitMQ broker in dev mode and when running tests.
-So, you don't have to start a broker manually.
-The application is configured automatically.
-
-== Enabling / Disabling Dev Services for RabbitMQ
-
-Dev Services for RabbitMQ is automatically enabled unless:
-
-- `quarkus.rabbitmq.devservices.enabled` is set to `false`
-- the `rabbitmq-host` or `rabbitmq-port` is configured
-- all the Reactive Messaging RabbitMQ channels have the `host` or `port` attributes set
-
-Dev Services for RabbitMQ relies on Docker to start the broker.
-If your environment does not support Docker, you will need to start the broker manually, or connect to an already running broker.
-You can configure the broker access using the `rabbitmq-host`, `rabbitmq-port`, `rabbitmq-username` and `rabbitmq-password` properties.
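For example, pointing the application at a manually started broker might look like this in `application.properties` (host and credentials are placeholders - substitute your own):

```properties
rabbitmq-host=localhost
rabbitmq-port=5672
rabbitmq-username=guest
rabbitmq-password=guest
```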
- -== Shared broker - -Most of the time you need to share the broker between applications. -Dev Services for RabbitMQ implements a _service discovery_ mechanism for your multiple Quarkus applications running in _dev_ mode to share a single broker. - -NOTE: Dev Services for RabbitMQ starts the container with the `quarkus-dev-service-rabbitmq` label which is used to identify the container. - -If you need multiple (shared) brokers, you can configure the `quarkus.rabbitmq.devservices.service-name` attribute and indicate the broker name. -It looks for a container with the same value, or starts a new one if none can be found. -The default service name is `rabbitmq`. - -Sharing is enabled by default in dev mode, but disabled in test mode. -You can disable the sharing with `quarkus.rabbitmq.devservices.shared=false`. - -== Setting the port - -By default, Dev Services for RabbitMQ picks a random port and configures the application. -You can set the port by configuring the `quarkus.rabbitmq.devservices.port` property. - -== Configuring the image - -Dev Services for RabbitMQ uses official images available at https://hub.docker.com/_/rabbitmq. -You can configure the image and version using the `quarkus.rabbitmq.devservices.image-name` property: - -[source, properties] ----- -quarkus.rabbitmq.devservices.image-name=rabbitmq:latest ----- - -== Predefined Topology - -Dev Services for RabbitMQ supports defining topology upon broker start. You can define Exchanges, Queues, and -Bindings using standard Quarkus configuration. 
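For instance, a minimal topology declaring one exchange, one queue, and a binding between them could look like this (all names and the routing key are illustrative):

```properties
quarkus.rabbitmq.devservices.exchanges.my-exchange.type=topic
quarkus.rabbitmq.devservices.queues.my-queue.durable=true
quarkus.rabbitmq.devservices.bindings.a-binding.source=my-exchange
quarkus.rabbitmq.devservices.bindings.a-binding.destination=my-queue
quarkus.rabbitmq.devservices.bindings.a-binding.routing-key=prices.#
```

The individual options are described in the following sections.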
-
-=== Defining Exchanges
-
-To define a RabbitMQ exchange you provide the exchange's name after the `quarkus.rabbitmq.devservices.exchanges` key,
-followed by one (or more) of the exchange's properties:
-
-[source, properties]
-----
-quarkus.rabbitmq.devservices.exchanges.my-exchange.type=topic # defaults to 'direct'
-quarkus.rabbitmq.devservices.exchanges.my-exchange.auto-delete=false # defaults to 'false'
-quarkus.rabbitmq.devservices.exchanges.my-exchange.durable=true # defaults to 'false'
-----
-
-Any additional arguments may be provided to the exchange's definition using the `arguments` key:
-
-[source, properties]
-----
-quarkus.rabbitmq.devservices.exchanges.my-exchange.arguments.alternate-exchange=another-exchange
-----
-
-=== Defining Queues
-
-To define a RabbitMQ queue you provide the queue's name after the `quarkus.rabbitmq.devservices.queues` key,
-followed by one (or more) of the queue's properties:
-
-[source, properties]
-----
-quarkus.rabbitmq.devservices.queues.my-queue.auto-delete=false # defaults to 'false'
-quarkus.rabbitmq.devservices.queues.my-queue.durable=true # defaults to 'false'
-----
-
-Any additional arguments may be provided to the queue's definition using the `arguments` key:
-
-[source, properties]
-----
-quarkus.rabbitmq.devservices.queues.my-queue.arguments.x-dead-letter-exchange=another-exchange
-----
-
-=== Defining Bindings
-
-To define a RabbitMQ binding you provide the binding's name after the `quarkus.rabbitmq.devservices.bindings` key,
-followed by one (or more) of the binding's properties:
-
-[source, properties]
-----
-quarkus.rabbitmq.devservices.bindings.a-binding.source=my-exchange # defaults to the name of the binding
-quarkus.rabbitmq.devservices.bindings.a-binding.routing-key=some-key # defaults to '#'
-quarkus.rabbitmq.devservices.bindings.a-binding.destination=my-queue # defaults to the name of the binding
-quarkus.rabbitmq.devservices.bindings.a-binding.destination-type=queue # defaults to 'queue'
-----
-
-NOTE: The name of the binding is only used for the purposes of the Dev Services configuration and is not part of the
-binding defined in RabbitMQ.
-
-Any additional arguments may be provided to the binding's definition using the `arguments` key:
-
-[source, properties]
-----
-quarkus.rabbitmq.devservices.bindings.a-binding.arguments.non-std-option=value
-----
diff --git a/_versions/2.7/guides/rabbitmq-reference.adoc b/_versions/2.7/guides/rabbitmq-reference.adoc
deleted file mode 100644
index d23e6c5353f..00000000000
--- a/_versions/2.7/guides/rabbitmq-reference.adoc
+++ /dev/null
@@ -1,847 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Reactive Messaging RabbitMQ Connector Reference Documentation
-
-include::./attributes.adoc[]
-
-This guide is the companion to the xref:rabbitmq.adoc[Getting Started with RabbitMQ] guide.
-It explains in more detail the configuration and usage of the RabbitMQ connector for reactive messaging.
-
-TIP: This documentation does not cover all the details of the connector.
-Refer to the https://smallrye.io/smallrye-reactive-messaging[SmallRye Reactive Messaging website] for further details.
-
-The RabbitMQ connector allows Quarkus applications to send and receive messages using the AMQP 0.9.1 protocol.
-More details about the protocol can be found in https://www.rabbitmq.com/amqp-0-9-1-reference.html#queue.bind.routing-key[the AMQP 0.9.1 specification].
-
-IMPORTANT: The RabbitMQ connector supports AMQP 0-9-1, which is very different from the AMQP 1.0 protocol used by the
-AMQP 1.0 connector. You can use the AMQP 1.0 connector with RabbitMQ as described in the
-xref:amqp-reference.adoc[AMQP 1.0 connector reference], albeit with *reduced functionality*.
-
-== RabbitMQ connector extension
-
-To use the connector, you need to add the `quarkus-smallrye-reactive-messaging-rabbitmq` extension.
-
-You can add the extension to your project using:
-
-[source, bash]
-----
-> ./mvnw quarkus:add-extensions -Dextensions="quarkus-smallrye-reactive-messaging-rabbitmq"
-----
-
-Or just add the following dependency to your project:
-
-[source, xml]
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-smallrye-reactive-messaging-rabbitmq</artifactId>
-</dependency>
-----
-
-Once added to your project, you can map _channels_ to RabbitMQ exchanges or queues by configuring the `connector` attribute:
-
-[source, properties]
-----
-# Inbound
-mp.messaging.incoming.[channel-name].connector=smallrye-rabbitmq
-
-# Outbound
-mp.messaging.outgoing.[channel-name].connector=smallrye-rabbitmq
-----
-
-`outgoing` channels are mapped to RabbitMQ exchanges and `incoming` channels are mapped to RabbitMQ queues as required
-by the broker.
-
-== Configuring the RabbitMQ Broker access
-
-The RabbitMQ connector connects to RabbitMQ brokers.
-To configure the location and credentials of the broker, add the following properties in the `application.properties`:
-
-[source, properties]
-----
-rabbitmq-host=amqp # <1>
-rabbitmq-port=5672 # <2>
-rabbitmq-username=my-username # <3>
-rabbitmq-password=my-password # <4>
-
-mp.messaging.incoming.prices.connector=smallrye-rabbitmq # <5>
-----
-<1> Configures the broker host name. You can do it per channel (using the `host` attribute) or globally using `rabbitmq-host`.
-<2> Configures the broker port. You can do it per channel (using the `port` attribute) or globally using `rabbitmq-port`. The default is `5672`.
-<3> Configures the broker username if required. You can do it per channel (using the `username` attribute) or globally using `rabbitmq-username`.
-<4> Configures the broker password if required. You can do it per channel (using the `password` attribute) or globally using `rabbitmq-password`.
-<5> Instructs the `prices` channel to be managed by the RabbitMQ connector.
-
-In dev mode and when running tests, xref:rabbitmq-dev-services.adoc[Dev Services for RabbitMQ] automatically starts a RabbitMQ broker.
-
-== Receiving RabbitMQ messages
-
-Let's imagine your application receives `Message<Double>`.
-You can consume the payload directly:
-
-[source, java]
-----
-package inbound;
-
-import org.eclipse.microprofile.reactive.messaging.Incoming;
-
-import javax.enterprise.context.ApplicationScoped;
-
-@ApplicationScoped
-public class RabbitMQPriceConsumer {
-
-    @Incoming("prices")
-    public void consume(double price) {
-        // process your price.
-    }
-
-}
-----
-
-Or, you can retrieve the `Message`:
-
-[source, java]
-----
-package inbound;
-
-import org.eclipse.microprofile.reactive.messaging.Incoming;
-import org.eclipse.microprofile.reactive.messaging.Message;
-
-import javax.enterprise.context.ApplicationScoped;
-import java.util.concurrent.CompletionStage;
-
-@ApplicationScoped
-public class RabbitMQPriceMessageConsumer {
-
-    @Incoming("prices")
-    public CompletionStage<Void> consume(Message<Double> price) {
-        // process your price.
-
-        // Acknowledge the incoming message, marking the RabbitMQ message as `accepted`.
-        return price.ack();
-    }
-
-}
-----
-
-=== Inbound Metadata
-
-Messages coming from RabbitMQ contain an instance of `IncomingRabbitMQMetadata` in the metadata.
-
-[source, java]
-----
-Optional<IncomingRabbitMQMetadata> metadata = incoming.getMetadata(IncomingRabbitMQMetadata.class);
-metadata.ifPresent(meta -> {
-    final Optional<String> contentEncoding = meta.getContentEncoding();
-    final Optional<String> contentType = meta.getContentType();
-    final Optional<String> correlationId = meta.getCorrelationId();
-    final Optional<ZonedDateTime> creationTime = meta.getCreationTime(ZoneId.systemDefault());
-    final Optional<Integer> priority = meta.getPriority();
-    final Optional<String> replyTo = meta.getReplyTo();
-    final Optional<String> userId = meta.getUserId();
-
-    // Access a single String-valued header
-    final Optional<String> stringHeader = meta.getHeader("my-header", String.class);
-
-    // Access all headers
-    final Map<String, Object> headers = meta.getHeaders();
-    // ...
-});
-----
-
-=== Deserialization
-
-The connector converts incoming RabbitMQ Messages into Reactive Messaging `Message<T>` instances. The payload type `T` depends on the value of the RabbitMQ received message Envelope `content_type` and `content_encoding` properties.
-
-[options="header"]
-|===
-| content_encoding | content_type | T
-| _Value present_ | _n/a_ | `byte[]`
-| _No value_ | `text/plain` | `String`
-| _No value_ | `application/json` | a JSON element which can be a https://vertx.io/docs/apidocs/io/vertx/core/json/JsonArray.html[`JsonArray`], https://vertx.io/docs/apidocs/io/vertx/core/json/JsonObject.html[`JsonObject`], `String`, etc. if the buffer contains an array, object, string, etc.
-| _No value_ | _Anything else_ | `byte[]`
-|===
-
-If you send objects with this RabbitMQ connector (outbound connector), they are encoded as JSON and sent with `content_type` set to `application/json`.
You can receive this payload using (Vert.x) JSON Objects, and then map it to the object class you want:

[source, java]
----
@ApplicationScoped
public static class Generator {

    @Outgoing("to-rabbitmq")
    public Multi<Price> prices() { // <1>
        AtomicInteger count = new AtomicInteger();
        return Multi.createFrom().ticks().every(Duration.ofMillis(1000))
                .map(l -> new Price().setPrice(count.incrementAndGet()))
                .onOverflow().drop();
    }

}

@ApplicationScoped
public static class Consumer {

    List<Price> prices = new CopyOnWriteArrayList<>();

    @Incoming("from-rabbitmq")
    public void consume(JsonObject p) { // <2>
        Price price = p.mapTo(Price.class); // <3>
        prices.add(price);
    }

    public List<Price> list() {
        return prices;
    }
}
----
<1> The `Price` instances are automatically encoded to JSON by the connector
<2> You can receive it using a `JsonObject`
<3> Then, you can reconstruct the instance using the `mapTo` method

NOTE: The `mapTo` method uses the Quarkus Jackson mapper. Check xref:rest-json.adoc#json[this guide] to learn more about the mapper configuration.

=== Acknowledgement

When a Reactive Messaging `Message` associated with a RabbitMQ message is acknowledged, it informs the broker that the message has been _accepted_.

Whether you need to explicitly acknowledge the message depends on the `auto-acknowledgement` setting for the channel; if that is set to `true`, your message is automatically acknowledged on receipt.

=== Failure Management

If a message produced from a RabbitMQ message is nacked, a failure strategy is applied. The RabbitMQ connector supports
three strategies, controlled by the `failure-strategy` channel setting:

* `fail` - fail the application; no more RabbitMQ messages will be processed. The RabbitMQ message is marked as rejected.
* `accept` - this strategy marks the RabbitMQ message as _accepted_. The processing continues, ignoring the failure.
* `reject` - this strategy marks the RabbitMQ message as rejected (default). The processing continues with the next message.

== Sending RabbitMQ messages

=== Serialization

When sending a `Message<T>`, the connector converts the message into a RabbitMQ message. The payload is converted to the RabbitMQ message body.

[options=header]
|===
| T | RabbitMQ Message Body
| primitive types or `UUID`/`String` | String value with `content_type` set to `text/plain`
| https://vertx.io/docs/apidocs/io/vertx/core/json/JsonObject.html[`JsonObject`] or https://vertx.io/docs/apidocs/io/vertx/core/json/JsonArray.html[`JsonArray`] | Serialized String payload with `content_type` set to `application/json`
| `io.vertx.mutiny.core.buffer.Buffer` | Binary content, with `content_type` set to `application/octet-stream`
| `byte[]` | Binary content, with `content_type` set to `application/octet-stream`
| Any other class | The payload is converted to JSON (using a JSON mapper) then serialized with `content_type` set to `application/json`
|===

If the message payload cannot be serialized to JSON, the message is _nacked_.

=== Outbound Metadata

When sending `Messages`, you can add an instance of `OutgoingRabbitMQMetadata`
to influence how the message is handled by RabbitMQ. For example, you can configure the routing key, timestamp and
headers:

[source, java]
----
final OutgoingRabbitMQMetadata metadata = new OutgoingRabbitMQMetadata.Builder()
        .withHeader("my-header", "xyzzy")
        .withRoutingKey("urgent")
        .withTimestamp(ZonedDateTime.now())
        .build();

// Add `metadata` to the metadata of the outgoing message.
return Message.of("Hello", Metadata.of(metadata));
----

=== Acknowledgement

By default, the Reactive Messaging `Message` is acknowledged when the broker acknowledges the message.
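The serialization rules above can be sketched in plain Java. This is a simplified, illustrative simulation only, not the connector's actual code; `ContentTypeSketch` and `contentTypeFor` are hypothetical names introduced here:

```java
import java.util.UUID;

// Simplified sketch of the outbound payload -> content_type mapping described
// in the table above. Illustrative only; not the connector's implementation.
public class ContentTypeSketch {

    static String contentTypeFor(Object payload) {
        // Primitive wrappers, UUID and String are sent as text
        if (payload instanceof String || payload instanceof UUID
                || payload instanceof Number || payload instanceof Boolean) {
            return "text/plain";
        }
        // Raw bytes (and Vert.x Buffer in the real connector) are sent as binary
        if (payload instanceof byte[]) {
            return "application/octet-stream";
        }
        // JsonObject/JsonArray and any other class end up as JSON
        return "application/json";
    }

    public static void main(String[] args) {
        System.out.println(contentTypeFor("hello"));                 // text/plain
        System.out.println(contentTypeFor(new byte[0]));             // application/octet-stream
        System.out.println(contentTypeFor(new ContentTypeSketch())); // application/json
    }
}
```

The "any other class" fallback explains why a failing JSON serialization nacks the message: every payload that is not text or binary ends up going through the JSON mapper.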
== Configuring the RabbitMQ Exchange/Queue

You can configure the RabbitMQ exchange or queue associated with a channel using properties on the channel configuration.
`incoming` channels are mapped to RabbitMQ queues and `outgoing` channels are mapped to RabbitMQ exchanges.
For example:

[source, properties]
----
mp.messaging.incoming.prices.connector=smallrye-rabbitmq
mp.messaging.incoming.prices.queue.name=my-queue

mp.messaging.outgoing.orders.connector=smallrye-rabbitmq
mp.messaging.outgoing.orders.exchange.name=my-order-queue
----

If the `exchange.name` or `queue.name` attribute is not set, the connector uses the channel name.

To use an existing queue, you need to configure the `name` and set the exchange's or queue's `declare` property to `false`.
For example, if you have a RabbitMQ broker configured with a `people` exchange and queue, you need the following configuration:

[source, properties]
----
mp.messaging.incoming.people.connector=smallrye-rabbitmq
mp.messaging.incoming.people.queue.name=people
mp.messaging.incoming.people.queue.declare=false

mp.messaging.outgoing.people.connector=smallrye-rabbitmq
mp.messaging.outgoing.people.exchange.name=people
mp.messaging.outgoing.people.exchange.declare=false
----

[#blocking-processing]
=== Execution model and Blocking processing

Reactive Messaging invokes your method on an I/O thread.
See the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture documentation] for further details on this topic.
However, you often need to combine Reactive Messaging with blocking processing, such as database interactions.
For this, you need to use the `@Blocking` annotation, indicating that the processing is _blocking_ and should not be run on the caller thread.
For example, the following code illustrates how you can store incoming payloads in a database using Hibernate with Panache:

[source,java]
----
import io.smallrye.reactive.messaging.annotations.Blocking;
import org.eclipse.microprofile.reactive.messaging.Incoming;

import javax.enterprise.context.ApplicationScoped;
import javax.transaction.Transactional;

@ApplicationScoped
public class PriceStorage {

    @Incoming("prices")
    @Blocking
    @Transactional
    public void store(int priceInUsd) {
        Price price = new Price();
        price.value = priceInUsd;
        price.persist();
    }

}
----

[NOTE]
====
There are two `@Blocking` annotations:

1. `io.smallrye.reactive.messaging.annotations.Blocking`
2. `io.smallrye.common.annotation.Blocking`

They have the same effect, so you can use either.
The first one provides more fine-grained tuning, such as the worker pool to use and whether it preserves the order.
The second one, also used with other reactive features of Quarkus, uses the default worker pool and preserves the order.
====

== Customizing the underlying RabbitMQ client

The connector uses the Vert.x RabbitMQ client underneath.
More details about this client can be found on the https://vertx.io/docs/vertx-rabbitmq-client/java/[Vert.x website].
You can customize the underlying client configuration by producing an instance of `RabbitMQOptions` as follows:

[source, java]
----
@Produces
@Identifier("my-named-options")
public RabbitMQOptions getNamedOptions() {
    PemKeyCertOptions keycert = new PemKeyCertOptions()
            .addCertPath("./tls/tls.crt")
            .addKeyPath("./tls/tls.key");
    PemTrustOptions trust = new PemTrustOptions().addCertPath("./tls/ca.crt");
    // You can use the produced options to configure the TLS connection
    return new RabbitMQOptions()
            .setSsl(true)
            .setPemKeyCertOptions(keycert)
            .setPemTrustOptions(trust)
            .setUser("user1")
            .setPassword("password1")
            .setHost("localhost")
            .setPort(5672)
            .setVirtualHost("vhost1")
            .setConnectionTimeout(6000) // in milliseconds
            .setRequestedHeartbeat(60) // in seconds
            .setHandshakeTimeout(6000) // in milliseconds
            .setRequestedChannelMax(5)
            .setNetworkRecoveryInterval(500) // in milliseconds
            .setAutomaticRecoveryEnabled(true);
}
----

This instance is retrieved and used to configure the client used by the connector.
You need to indicate the name of the client using the `client-options-name` attribute:

[source, properties]
----
mp.messaging.incoming.prices.client-options-name=my-named-options
----

== Health reporting

If you use the RabbitMQ connector with the `quarkus-smallrye-health` extension, it contributes to the readiness and liveness probes.
The RabbitMQ connector reports the readiness and liveness of each channel managed by the connector.

To disable health reporting, set the `health-enabled` attribute for the channel to `false`.

On the inbound side (receiving messages from RabbitMQ), the check verifies that the receiver is connected to the broker.

On the outbound side (sending messages to RabbitMQ), the check verifies that the sender is not disconnected from the broker; the sender _may_ still be in an initialised state (connection not yet attempted), but this is regarded as live/ready.
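For example, to turn off health reporting for the `prices` channel used earlier (the channel name is illustrative):

[source, properties]
----
# Disable health reporting for the `prices` channel
mp.messaging.incoming.prices.health-enabled=false
----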
Note that a message processing failure nacks the message, which is then handled by the `failure-strategy`.
It's the responsibility of the `failure-strategy` to report the failure and influence the outcome of the checks.
The `fail` failure strategy reports the failure, and so the check will report the fault.

[[dynamic-credentials]]
== Dynamic Credentials

Quarkus and the RabbitMQ connector support https://www.vaultproject.io/docs/secrets/rabbitmq[Vault's RabbitMQ secrets engine]
for generating short-lived dynamic credentials. This allows Vault to create and retire RabbitMQ credentials on a regular basis.

First we need to enable Vault's `rabbitmq` secret engine, configure it with RabbitMQ's connection and authentication
information, and create a Vault role `my-role` (replace `10.0.0.3` by the actual host that is running the
RabbitMQ container):
[source,bash, subs=attributes+]
----
vault secrets enable rabbitmq

vault write rabbitmq/config/connection \
    connection_uri=http://10.0.0.3:15672 \
    username=guest \
    password=guest

vault write rabbitmq/roles/my-role \
    vhosts='{"/":{"write": ".*", "read": ".*"}}'
----

[NOTE]
====
For this use case, user `guest` configured above needs to be a RabbitMQ admin user with the capability to create
credentials.
====

Then we need to give a read capability to the Quarkus application on path `rabbitmq/creds/my-role` (the policy name `my-policy` is illustrative):
[source,bash]
----
cat <<EOF | vault policy write my-policy -
path "rabbitmq/creds/my-role" {
  capabilities = [ "read" ]
}
EOF
----

In the `producer` project, the `QuotesResource` uses an injected `Emitter` to send quote requests to the `quote-requests` channel:

[source,java]
----
@Channel("quote-requests") Emitter<String> quoteRequestEmitter; // <1>

/**
 * Endpoint to generate a new quote request id and send it to "quote-requests" channel (which
 * maps to the "quote-requests" RabbitMQ exchange) using the emitter.
 */
@POST
@Path("/request")
@Produces(MediaType.TEXT_PLAIN)
public String createRequest() {
    UUID uuid = UUID.randomUUID();
    quoteRequestEmitter.send(uuid.toString()); // <2>
    return uuid.toString();
}
----
<1> Inject a Reactive Messaging `Emitter` to send messages to the `quote-requests` channel.
<2> On a POST request, generate a random UUID and send it to the RabbitMQ queue using the emitter.

This channel is mapped to a RabbitMQ exchange using the configuration we will add to the `application.properties` file.
Open the `src/main/resources/application.properties` file and add:

[source, properties]
----
# Configure the outgoing RabbitMQ exchange `quote-requests`
mp.messaging.outgoing.quote-requests.connector=smallrye-rabbitmq
mp.messaging.outgoing.quote-requests.exchange.name=quote-requests
----

All we need to specify is the `smallrye-rabbitmq` connector.
By default, reactive messaging maps the channel name `quote-requests` to the same RabbitMQ exchange name.

== Processing quote requests

Now let's consume the quote request and give out a price.
Inside the `processor` project, locate the `src/main/java/org/acme/rabbitmq/processor/QuoteProcessor.java` file and add the following:

[source, java]
----
package org.acme.rabbitmq.processor;

import java.util.Random;

import javax.enterprise.context.ApplicationScoped;

import org.acme.rabbitmq.model.Quote;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

import io.smallrye.reactive.messaging.annotations.Blocking;

/**
 * A bean consuming data from the "quote-requests" RabbitMQ queue and giving out a random quote.
 * The result is pushed to the "quotes" RabbitMQ exchange.
 */
@ApplicationScoped
public class QuoteProcessor {

    private Random random = new Random();

    @Incoming("requests") // <1>
    @Outgoing("quotes") // <2>
    @Blocking // <3>
    public Quote process(String quoteRequest) throws InterruptedException {
        // simulate some hard-working task
        Thread.sleep(1000);
        return new Quote(quoteRequest, random.nextInt(100));
    }
}
----
<1> Indicates that the method consumes the items from the `requests` channel
<2> Indicates that the objects returned by the method are sent to the `quotes` channel
<3> Indicates that the processing is _blocking_ and cannot be run on the caller thread.

The `process` method is called for every RabbitMQ message from the `quote-requests` queue, and will send a `Quote` object to the `quotes` exchange.

As with the previous example, we need to configure the connectors in the `application.properties` file.
Open the `src/main/resources/application.properties` file and add:

[source, properties]
----
# Configure the incoming RabbitMQ queue `quote-requests`
mp.messaging.incoming.requests.connector=smallrye-rabbitmq
mp.messaging.incoming.requests.queue.name=quote-requests
mp.messaging.incoming.requests.exchange.name=quote-requests

# Configure the outgoing RabbitMQ exchange `quotes`
mp.messaging.outgoing.quotes.connector=smallrye-rabbitmq
mp.messaging.outgoing.quotes.exchange.name=quotes
----

Note that in this case we have one incoming and one outgoing connector configuration, each one distinctly named.
The configuration keys are structured as follows:

`mp.messaging.[outgoing|incoming].{channel-name}.property=value`

The `channel-name` segment must match the value set in the `@Incoming` and `@Outgoing` annotations:

* `quote-requests` -> RabbitMQ queue from which we read the quote requests
* `quotes` -> RabbitMQ exchange in which we write the quotes

== Receiving quotes

Back to our `producer` project.
Let's modify the `QuotesResource` to consume quotes and bind it to an HTTP endpoint sending the events to clients:

[source,java]
----
import io.smallrye.mutiny.Multi;
//...

@Channel("quotes") Multi<Quote> quotes; // <1>

/**
 * Endpoint retrieving the "quotes" queue and sending the items to a server sent event.
 */
@GET
@Produces(MediaType.SERVER_SENT_EVENTS) // <2>
public Multi<Quote> stream() {
    return quotes; // <3>
}
----
<1> Injects the `quotes` channel using the `@Channel` qualifier
<2> Indicates that the content is sent using `Server Sent Events`
<3> Returns the stream (_Reactive Stream_)

Again, we need to configure the incoming `quotes` channel inside the `producer` project.
Add the following inside the `application.properties` file:

[source, properties]
----
# Configure the outgoing `quote-requests` exchange
mp.messaging.outgoing.quote-requests.connector=smallrye-rabbitmq

# Configure the incoming `quotes` queue
mp.messaging.incoming.quotes.connector=smallrye-rabbitmq
----

== The HTML page

Final touch, the HTML page reading the converted prices using SSE.

Create, inside the `producer` project, the `src/main/resources/META-INF/resources/quotes.html` file, with the following content:

[source, html]
----
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Quotes</title>
</head>
<body>
<h2>Quotes</h2>
<button id="request-quote">Request Quote</button>
<div id="quotes"></div>
<script>
    // Note: adjust the endpoint paths and the Quote fields below to match
    // the paths exposed by your QuotesResource and your Quote model.
    var source = new EventSource("/quotes");
    source.onmessage = function (event) {
        var quote = JSON.parse(event.data);
        document.getElementById("quotes").innerHTML +=
                "<div>Quote: $" + quote.price + "</div>";
    };
    document.getElementById("request-quote").addEventListener("click", function () {
        fetch("/quotes/request", {method: "POST"});
    });
</script>
</body>
</html>
----

Nothing spectacular here.
On each received quote, it updates the page.

== Get it running

You just need to run both applications using:

[source,bash]
----
> mvn -f rabbitmq-quickstart-producer quarkus:dev
----

And, in a separate terminal:

[source, bash]
----
> mvn -f rabbitmq-quickstart-processor quarkus:dev
----

Quarkus starts a RabbitMQ broker automatically, configures the application and shares the broker instance between different applications.
See xref:rabbitmq-dev-services.adoc[Dev Services for RabbitMQ] for more details.


Open `http://localhost:8080/quotes.html` in your browser and request some quotes by clicking the button.

== Running in JVM or Native mode

When not running in dev or test mode, you will need to start your RabbitMQ broker.
You can follow the instructions from the https://hub.docker.com/_/rabbitmq[RabbitMQ Docker website] or create a `docker-compose.yaml` file with the following content:

[source, yaml]
----
version: '2'

services:

  rabbit:
    image: rabbitmq:3.9-management
    ports:
      - "5672:5672"
    networks:
      - rabbitmq-quickstart-network

  producer:
    image: quarkus-quickstarts/rabbitmq-quickstart-producer:1.0-${QUARKUS_MODE:-jvm}
    build:
      context: rabbitmq-quickstart-producer
      dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm}
    environment:
      RABBITMQ_HOST: rabbit
      RABBITMQ_PORT: 5672
    ports:
      - "8080:8080"
    networks:
      - rabbitmq-quickstart-network

  processor:
    image: quarkus-quickstarts/rabbitmq-quickstart-processor:1.0-${QUARKUS_MODE:-jvm}
    build:
      context: rabbitmq-quickstart-processor
      dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm}
    environment:
      RABBITMQ_HOST: rabbit
      RABBITMQ_PORT: 5672
    networks:
      - rabbitmq-quickstart-network

networks:
  rabbitmq-quickstart-network:
    name: rabbitmq-quickstart
----

Note how the RabbitMQ broker location is configured.
The `rabbitmq-host` and `rabbitmq-port` properties (set here through the `RABBITMQ_HOST` and `RABBITMQ_PORT` environment variables) configure the broker location.


First, make sure you stopped the applications, and build both applications in JVM mode with:

[source, bash]
----
> mvn -f rabbitmq-quickstart-producer clean package
> mvn -f rabbitmq-quickstart-processor clean package
----

Once packaged, run `docker compose up --build`.
The UI is exposed on http://localhost:8080/quotes.html

To run your applications as native executables, first we need to build them:

[source, bash]
----
> mvn -f rabbitmq-quickstart-producer package -Pnative -Dquarkus.native.container-build=true
> mvn -f rabbitmq-quickstart-processor package -Pnative -Dquarkus.native.container-build=true
----

The `-Dquarkus.native.container-build=true` option instructs Quarkus to build 64-bit Linux native executables, which can run inside containers.
Then, run the system using:

[source, bash]
----
> export QUARKUS_MODE=native
> docker compose up --build
----

As before, the UI is exposed on http://localhost:8080/quotes.html

== Going further

This guide has shown how you can interact with RabbitMQ using Quarkus.
It utilizes https://smallrye.io/smallrye-reactive-messaging[SmallRye Reactive Messaging] to build data streaming applications.

If you did the Kafka quickstart, you may have noticed that it's essentially the same code.
The only difference is the connector configuration and the JSON mapping.
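For instance, switching the processor's incoming channel from RabbitMQ to Kafka is mostly a matter of connector configuration. This is a sketch; the Kafka connector uses its own attribute names, such as `topic`:

[source, properties]
----
# RabbitMQ (this guide)
mp.messaging.incoming.requests.connector=smallrye-rabbitmq
mp.messaging.incoming.requests.queue.name=quote-requests

# Kafka equivalent (sketch)
mp.messaging.incoming.requests.connector=smallrye-kafka
mp.messaging.incoming.requests.topic=quote-requests
----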
diff --git a/_versions/2.7/guides/reactive-event-bus.adoc b/_versions/2.7/guides/reactive-event-bus.adoc
deleted file mode 100644
index 91c2c4747ba..00000000000
--- a/_versions/2.7/guides/reactive-event-bus.adoc
+++ /dev/null
@@ -1,426 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Using the event bus

include::./attributes.adoc[]

Quarkus allows different beans to interact using asynchronous events, thus promoting loose coupling.
The messages are sent to _virtual addresses_.
It offers three types of delivery mechanisms:

- point-to-point - send the message, one consumer receives it. If several consumers listen to the address, a round robin is applied;
- publish/subscribe - publish a message, all the consumers listening to the address receive the message;
- request/reply - send the message and expect a response. The receiver can respond to the message in an asynchronous fashion.

All these delivery mechanisms are non-blocking and provide one of the fundamental building blocks of reactive applications.

NOTE: The asynchronous message passing feature allows replying to messages, which is not supported by Reactive Messaging.
However, it is limited to single-event behavior (no stream) and to local messages.

== Installing

This mechanism uses the Vert.x EventBus, so you need to enable the `vertx` extension to use this feature.
If you are creating a new project, set the `extensions` parameter as follows:

:create-app-artifact-id: vertx-quickstart
:create-app-extensions: vertx,resteasy-mutiny
include::includes/devtools/create-app.adoc[]

If you have an already created project, the `vertx` extension can be added to an existing Quarkus project with
the `add-extension` command:

:add-extension-extensions: vertx
include::includes/devtools/extension-add.adoc[]

Otherwise, you can manually add this to the dependencies section of your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-vertx</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-vertx")
----

== Consuming events

To consume events, use the `io.quarkus.vertx.ConsumeEvent` annotation:

[source, java]
----
package org.acme.vertx;

import io.quarkus.vertx.ConsumeEvent;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class GreetingService {

    @ConsumeEvent // <1>
    public String consume(String name) { // <2>
        return name.toUpperCase();
    }
}
----
<1> If not set, the address is the fully qualified name of the bean; for instance, in this snippet it's `org.acme.vertx.GreetingService`.
<2> The method parameter is the message body. If the method returns _something_, it's the message response.

[IMPORTANT]
====
By default, the code consuming the event must be _non-blocking_, as it's called on the Vert.x event loop.
If your processing is blocking, use the `blocking` attribute:

[source, java]
----
@ConsumeEvent(value = "blocking-consumer", blocking = true)
void consumeBlocking(String message) {
    // Something blocking
}
----

Alternatively, you can annotate your method with `@io.smallrye.common.annotation.Blocking`:
[source, java]
----
@ConsumeEvent(value = "blocking-consumer")
@Blocking
void consumeBlocking(String message) {
    // Something blocking
}
----

When using `@Blocking`, the value of the `blocking` attribute of `@ConsumeEvent` is ignored.
See the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture documentation] for further details on this topic.
====

Asynchronous processing is also possible by returning either an `io.smallrye.mutiny.Uni` or a `java.util.concurrent.CompletionStage`:

[source,java]
----
package org.acme.vertx;

import io.quarkus.vertx.ConsumeEvent;

import javax.enterprise.context.ApplicationScoped;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import io.smallrye.mutiny.Uni;

@ApplicationScoped
public class GreetingService {

    @ConsumeEvent
    public CompletionStage<String> consume(String name) {
        // return a CompletionStage completed when the processing is finished.
        // You can also fail the CompletionStage explicitly
    }

    @ConsumeEvent
    public Uni<String> process(String name) {
        // return an Uni completed when the processing is finished.
        // You can also fail the Uni explicitly
    }
}
----

[TIP]
.Mutiny
====
The previous example uses Mutiny reactive types.
If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
====

=== Configuring the address

The `@ConsumeEvent` annotation can be configured to set the address:

[source, java]
----
@ConsumeEvent("greeting") // <1>
public String consume(String name) {
    return name.toUpperCase();
}
----
<1> Receive the messages sent to the `greeting` address

=== Replying

The _return_ value of a method annotated with `@ConsumeEvent` is used as response to the incoming message.
For instance, in the following snippet, the returned `String` is the response.

[source, java]
----
@ConsumeEvent("greeting")
public String consume(String name) {
    return name.toUpperCase();
}
----

You can also return a `Uni` or a `CompletionStage` to handle asynchronous replies:

[source, java]
----
@ConsumeEvent("greeting")
public Uni<String> consume2(String name) {
    return Uni.createFrom().item(() -> name.toUpperCase()).emitOn(executor);
}
----

[NOTE]
====
You can inject an `executor` if you use the Context Propagation extension:
[source, java]
----
@Inject ManagedExecutor executor;
----

Alternatively, you can use the default Quarkus worker pool using:

[source, java]
----
Executor executor = Infrastructure.getDefaultWorkerPool();
----
====

=== Implementing fire and forget interactions

You don't have to reply to received messages.
Typically, for a _fire and forget_ interaction, the messages are consumed and the sender does not need to know about it.
To implement this, your consumer method just returns `void`:

[source,java]
----
@ConsumeEvent("greeting")
public void consume(String event) {
    // Do something with the event
}
----

=== Dealing with messages

As said above, this mechanism is based on the Vert.x event bus.
So, you can also use `Message<T>` directly:

[source, java]
----
@ConsumeEvent("greeting")
public void consume(Message<String> msg) {
    System.out.println(msg.address());
    System.out.println(msg.body());
}
----

=== Handling Failures

If a method annotated with `@ConsumeEvent` throws an exception, then:

* if a reply handler is set, the failure is propagated back to the sender via an `io.vertx.core.eventbus.ReplyException` with code `ConsumeEvent#FAILURE_CODE` and the exception message,
* if no reply handler is set, the exception is rethrown (and wrapped in a `RuntimeException` if necessary) and can be handled by the default exception handler, i.e. `io.vertx.core.Vertx#exceptionHandler()`.

== Sending messages

Ok, we have seen how to receive messages; let's now switch to the _other side_: the sender.
Sending and publishing messages use the Vert.x event bus:

[source, java]
----
package org.acme.vertx;

import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.core.eventbus.EventBus;
import io.vertx.mutiny.core.eventbus.Message;
import org.jboss.resteasy.annotations.jaxrs.PathParam;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/async")
public class EventResource {

    @Inject
    EventBus bus; // <1>

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("{name}")
    public Uni<String> greeting(@PathParam String name) {
        return bus.<String>request("greeting", name) // <2>
                .onItem().transform(Message::body);
    }
}
----
<1> Inject the Event bus
<2> Send a message to the address `greeting`. Message payload is `name`

The `EventBus` object provides methods to:

1. `send` a message to a specific address - one single consumer receives the message.
2. `publish` a message to a specific address - all consumers receive the messages.
3.
`send` a message and expect a reply

[source, java]
----
// Case 1
bus.sendAndForget("greeting", name)
// Case 2
bus.publish("greeting", name)
// Case 3
Uni<String> response = bus.<String>request("address", "hello, how are you?")
        .onItem().transform(Message::body);
----

== Putting things together - bridging HTTP and messages

Let's revisit a greeting HTTP endpoint and use asynchronous message passing to delegate the call to a separate bean.
It uses the request/reply dispatching mechanism.
Instead of implementing the business logic inside the JAX-RS endpoint, we are sending a message.
This message is consumed by another bean and the response is sent using the _reply_ mechanism.

First create a new project using:

:create-app-artifact-id: vertx-http-quickstart
:create-app-extensions: vertx,resteasy-mutiny
include::includes/devtools/create-app.adoc[]

You can already start the application in _dev mode_ using:

include::includes/devtools/dev.adoc[]

Then, create a new JAX-RS resource with the following content:

[source,java]
.src/main/java/org/acme/vertx/EventResource.java
----
package org.acme.vertx;

import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.core.eventbus.EventBus;
import io.vertx.mutiny.core.eventbus.Message;
import org.jboss.resteasy.annotations.jaxrs.PathParam;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/async")
public class EventResource {

    @Inject
    EventBus bus;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("{name}")
    public Uni<String> greeting(@PathParam String name) {
        return bus.<String>request("greeting", name) // <1>
                .onItem().transform(Message::body); // <2>
    }
}
----
<1> send the `name` to the `greeting` address and request a response
<2> when we get the response, extract the body and send it to the user

If you call this endpoint, you will wait and get a timeout. Indeed, no one is listening.
So, we need a consumer listening on the `greeting` address. Create a `GreetingService` bean with the following content:

[source, java]
.src/main/java/org/acme/vertx/GreetingService.java
----
package org.acme.vertx;

import io.quarkus.vertx.ConsumeEvent;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class GreetingService {

    @ConsumeEvent("greeting")
    public String greeting(String name) {
        return "Hello " + name;
    }

}
----

This bean receives the name, and returns the greeting message.

Now, open your browser to http://localhost:8080/async/Quarkus, and you should see:

[source,text]
----
Hello Quarkus
----

To better understand, let's detail how the HTTP request/response has been handled:

1. The request is received by the `greeting` method
2. A message containing the _name_ is sent to the event bus
3. Another bean receives this message and computes the response
4. This response is sent back using the reply mechanism
5. Once the reply is received by the sender, the content is written to the HTTP response

This application can be packaged using:

include::includes/devtools/build.adoc[]

You can also compile it as a native executable with:

include::includes/devtools/build-native.adoc[]

== Using codecs

The https://vertx.io/docs/vertx-core/java/#event_bus[Vert.x Event Bus] uses codecs to _serialize_ and _deserialize_ objects.
Quarkus provides a default codec for local delivery.
So you can exchange objects as follows:

[source, java]
----
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("{name}")
public Uni<String> greeting(@PathParam String name) {
    return bus.<String>request("greeting", new MyName(name))
            .onItem().transform(Message::body);
}

@ConsumeEvent(value = "greeting")
Uni<String> greeting(MyName name) {
    return Uni.createFrom().item(() -> "Hello " + name.getName());
}
----

If you want to use a specific codec, you need to explicitly set it on both ends:

[source, java]
----
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("{name}")
public Uni<String> greeting(@PathParam String name) {
    return bus.<String>request("greeting", name,
            new DeliveryOptions().setCodecName(MyNameCodec.class.getName())) // <1>
            .onItem().transform(Message::body);
}

@ConsumeEvent(value = "greeting", codec = MyNameCodec.class) // <2>
Uni<String> greeting(MyName name) {
    return Uni.createFrom().item(() -> "Hello " + name.getName());
}
----
<1> Set the name of the codec to use to send the message
<2> Set the codec to use to receive the message
diff --git a/_versions/2.7/guides/reactive-routes.adoc b/_versions/2.7/guides/reactive-routes.adoc
deleted file mode 100644
index c75d8923f92..00000000000
--- a/_versions/2.7/guides/reactive-routes.adoc
+++ /dev/null
@@ -1,857 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Using Reactive Routes

include::./attributes.adoc[]

Reactive routes propose an alternative approach to implementing HTTP endpoints, where you declare and chain _routes_.
This approach became very popular in the JavaScript world, with frameworks like Express.js or Hapi.
Quarkus also offers the possibility to use reactive routes.
You can implement REST APIs with routes only, or combine them with JAX-RS resources and servlets.
-
-The code presented in this guide is available in this {quickstarts-base-url}[GitHub repository] under the {quickstarts-tree-url}/reactive-routes-quickstart[`reactive-routes-quickstart` directory].
-
-NOTE: Reactive Routes were initially introduced to provide a reactive execution model for HTTP APIs on top of the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture].
-With the introduction of xref:resteasy-reactive.adoc[RESTEasy Reactive], you can now implement reactive HTTP APIs and still use JAX-RS annotations.
-Reactive Routes are still supported, especially if you want a more _route-based_ approach, and something closer to the underlying reactive engine.
-
-== Quarkus HTTP
-
-Before going further, let's have a look at the HTTP layer of Quarkus.
-Quarkus HTTP support is based on a non-blocking and reactive engine (Eclipse Vert.x and Netty).
-All the HTTP requests your application receives are handled by _event loops_ (I/O Threads) and then routed towards the code that manages the request.
-Depending on the destination, the code managing the request can be invoked on a worker thread (Servlet, JAX-RS) or on the I/O Thread (reactive route).
-Note that because of this, a reactive route must be non-blocking or explicitly declare its blocking nature (in which case it is invoked on a worker thread).
-
-image:http-architecture.png[alt=Quarkus HTTP Architecture]
-
-See the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture documentation] for further details on this topic.
-
-
-== Declaring reactive routes
-
-The first way to use reactive routes is to use the `@Route` annotation.
-To have access to this annotation, you need to add the `quarkus-reactive-routes` extension.
-
-In your build file, add:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-reactive-routes</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-reactive-routes")
-----
-
-Then in a _bean_, you can use the `@Route` annotation as follows:
-
-[source,java]
-----
-package org.acme.reactive.routes;
-
-import io.quarkus.vertx.web.Route;
-import io.quarkus.vertx.web.Route.HttpMethod;
-import io.quarkus.vertx.web.RoutingExchange;
-import io.vertx.ext.web.RoutingContext;
-
-import javax.enterprise.context.ApplicationScoped;
-
-@ApplicationScoped // <1>
-public class MyDeclarativeRoutes {
-
-    // neither path nor regex is set - match a path derived from the method name
-    @Route(methods = Route.HttpMethod.GET) // <2>
-    void hello(RoutingContext rc) { // <3>
-        rc.response().end("hello");
-    }
-
-    @Route(path = "/world")
-    String helloWorld() { // <4>
-        return "Hello world!";
-    }
-
-    @Route(path = "/greetings", methods = Route.HttpMethod.GET)
-    void greetings(RoutingExchange ex) { // <5>
-        ex.ok("hello " + ex.getParam("name").orElse("world"));
-    }
-}
-----
-<1> If a reactive route is found on a class with no scope annotation, then `@javax.inject.Singleton` is added automatically.
-<2> The `@Route` annotation indicates that the method is a reactive route. Again, by default, the code contained in the method must not block.
-<3> The method gets a https://vertx.io/docs/apidocs/io/vertx/ext/web/RoutingContext.html[`RoutingContext`] as a parameter. From the `RoutingContext` you can retrieve the HTTP request (using `request()`) and write the response using `response().end(...)`.
-<4> If the annotated method does not return `void`, the arguments are optional.
-<5> `RoutingExchange` is a convenient wrapper of `RoutingContext` which provides some useful methods.
-
-More details about using the `RoutingContext` are available in the https://vertx.io/docs/vertx-web/java/[Vert.x Web documentation].
-
-The `@Route` annotation allows you to configure:
-
-* The `path` - for routing by path, using the https://vertx.io/docs/vertx-web/java/#_capturing_path_parameters[Vert.x Web format]
-* The `regex` - for routing with regular expressions, see https://vertx.io/docs/vertx-web/java/#_routing_with_regular_expressions[for more details]
-* The `methods` - the HTTP verbs triggering the route, such as `GET`, `POST`...
-* The `type` - it can be _normal_ (non-blocking), _blocking_ (method dispatched on a worker thread), or _failure_ to indicate that this route is called on failures
-* The `order` - the order of the route when several routes are involved in handling the incoming request.
-Must be positive for regular user routes.
-* The produced and consumed mime types, using `produces` and `consumes`
-
-For instance, you can declare a blocking route as follows:
-
-[source,java]
-----
-@Route(methods = HttpMethod.POST, path = "/post", type = Route.HandlerType.BLOCKING)
-public void blocking(RoutingContext rc) {
-    // ...
-}
-----
-
-[NOTE]
-====
-Alternatively, you can use `@io.smallrye.common.annotation.Blocking` and omit the `type = Route.HandlerType.BLOCKING` attribute:
-[source, java]
-----
-@Route(methods = HttpMethod.POST, path = "/post")
-@Blocking
-public void blocking(RoutingContext rc) {
-    // ...
-}
-----
-When `@Blocking` is used, the `type` attribute of `@Route` is ignored.
-====
-
-The `@Route` annotation is repeatable, so you can declare several routes for a single method:
-
-[source,java]
-----
-@Route(path = "/first") // <1>
-@Route(path = "/second")
-public void route(RoutingContext rc) {
-    // ...
-}
-----
-<1> Each route can use different paths, methods...
-
-If no content-type header is set, we will try to use the most acceptable content type, as defined by `io.vertx.ext.web.RoutingContext.getAcceptableContentType()`.
-
-[source,java]
-----
-@Route(path = "/person", produces = "text/html") // <1>
-String person() {
-    // ...
-}
-----
-<1> If the `accept` header matches `text/html`, we set the content type automatically.
-
-=== Handling conflicting routes
-
-You may end up with multiple routes matching a given path.
-In the following example, both routes match `/accounts/me`:
-
-[source, java]
-----
-@Route(path = "/accounts/:id", methods = HttpMethod.GET)
-void getAccount(RoutingContext ctx) {
-  ...
-}
-
-@Route(path = "/accounts/me", methods = HttpMethod.GET)
-void getCurrentUserAccount(RoutingContext ctx) {
-  ...
-}
-----
-
-As a consequence, the result is not the expected one, as the first route is called with the path parameter `id` set to `me`.
-To avoid the conflict, use the `order` attribute:
-
-[source, java]
-----
-@Route(path = "/accounts/:id", methods = HttpMethod.GET, order = 2)
-void getAccount(RoutingContext ctx) {
-  ...
-}
-
-@Route(path = "/accounts/me", methods = HttpMethod.GET, order = 1)
-void getCurrentUserAccount(RoutingContext ctx) {
-  ...
-}
-----
-
-By giving a lower order to the second route, it gets evaluated first.
-If the request path matches, it is invoked; otherwise the other routes are evaluated.
-
-=== `@RouteBase`
-
-This annotation can be used to configure some defaults for reactive routes declared on a class.
-
-[source,java]
-----
-@RouteBase(path = "simple", produces = "text/plain") // <1> <2>
-public class SimpleRoutes {
-
-    @Route(path = "ping") // the final path is /simple/ping
-    void ping(RoutingContext rc) {
-        rc.response().end("pong");
-    }
-}
-----
-<1> The `path` value is used as a prefix for any route method declared on the class where `Route#path()` is used.
-<2> The value of `produces()` is used for content-based routing for all routes where `Route#produces()` is empty.
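The order-based dispatch described above can be sketched outside Quarkus: routes are evaluated in ascending `order`, and the first matching pattern wins. The following is a hypothetical model for illustration only, not Vert.x Web's actual matcher (`:param` segments are treated as single-segment wildcards):

```java
import java.util.Comparator;
import java.util.List;
import java.util.regex.Pattern;

// Sketch of order-based route dispatch: routes are sorted by ascending
// order, and the first route whose pattern matches the path wins.
public class RouteOrderSketch {

    record Route(String pattern, int order, String handler) {}

    static final List<Route> ROUTES = List.of(
            new Route("/accounts/:id", 2, "getAccount"),
            new Route("/accounts/me", 1, "getCurrentUserAccount"));

    static String dispatch(String path) {
        return ROUTES.stream()
                .sorted(Comparator.comparingInt(Route::order)) // lower order first
                .filter(r -> matches(r.pattern(), path))
                .map(Route::handler)
                .findFirst()
                .orElse("404");
    }

    // ":param" segments match any single path segment
    static boolean matches(String pattern, String path) {
        String regex = pattern.replaceAll(":[^/]+", "[^/]+");
        return Pattern.matches(regex, path);
    }

    public static void main(String[] args) {
        System.out.println(dispatch("/accounts/me")); // getCurrentUserAccount
        System.out.println(dispatch("/accounts/42")); // getAccount
    }
}
```

With the orders as declared, `/accounts/me` reaches the dedicated route while any other id still falls through to `getAccount`.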
-
-
-== Reactive Route Methods
-
-A route method must be a non-private, non-static method of a CDI bean.
-If the annotated method returns `void`, then it has to accept at least one argument - see the supported types below.
-If the annotated method does not return `void`, then the arguments are optional.
-
-NOTE: Methods that return `void` must __end__ the response, or the HTTP request to this route will never end.
-Some methods of `RoutingExchange` do it for you, others do not, and you must call the `end()` method of the response yourself; please refer to its JavaDoc for more information.
-
-A route method can accept arguments of the following types:
-
-* `io.vertx.ext.web.RoutingContext`
-* `io.quarkus.vertx.web.RoutingExchange`
-* `io.vertx.core.http.HttpServerRequest`
-* `io.vertx.core.http.HttpServerResponse`
-* `io.vertx.mutiny.core.http.HttpServerRequest`
-* `io.vertx.mutiny.core.http.HttpServerResponse`
-
-Furthermore, it is possible to inject the `HttpServerRequest` parameters into a method parameter annotated with `@io.quarkus.vertx.web.Param`:
-
-[options="header",cols="1,1"]
-|===
-|Parameter Type | Obtained via
-//-------------
-|`java.lang.String` |`routingContext.request().getParam()`
-|`java.util.Optional<String>` |`routingContext.request().getParam()`
-|`java.util.List<String>` |`routingContext.request().params().getAll()`
-|===
-
-.Request Parameter Example
-[source,java]
-----
-@Route
-String hello(@Param Optional<String> name) {
-    return "Hello " + name.orElse("world");
-}
-----
-
-The `HttpServerRequest` headers can be injected into a method parameter annotated with `@io.quarkus.vertx.web.Header`:
-
-[options="header",cols="1,1"]
-|===
-|Parameter Type | Obtained via
-//-------------
-|`java.lang.String` |`routingContext.request().getHeader()`
-|`java.util.Optional<String>` |`routingContext.request().getHeader()`
-|`java.util.List<String>` |`routingContext.request().headers().getAll()`
-|===
-
-.Request Header Example
-[source,java]
-----
-@Route
-String helloFromHeader(@Header("My-Header") String header) {
-    return header;
-}
-----
-
-The request body can be injected into a method parameter annotated with `@io.quarkus.vertx.web.Body`.
-
-[options="header",cols="1,1"]
-|===
-|Parameter Type | Obtained via
-//-------------
-|`java.lang.String` |`routingContext.getBodyAsString()`
-|`io.vertx.core.buffer.Buffer` |`routingContext.getBody()`
-|`io.vertx.core.json.JsonObject` |`routingContext.getBodyAsJson()`
-|`io.vertx.core.json.JsonArray` |`routingContext.getBodyAsJsonArray()`
-|any other type |`routingContext.getBodyAsJson().mapTo(MyPojo.class)`
-|===
-
-.Request Body Example
-[source,java]
-----
-@Route(produces = "application/json")
-Person createPerson(@Body Person person, @Param("id") Optional<String> primaryKey) {
-    person.setId(primaryKey.map(Integer::valueOf).orElse(42));
-    return person;
-}
-----
-
-A failure handler can declare a single method parameter whose type extends `Throwable`.
-The type of the parameter is used to match the result of `RoutingContext#failure()`.
-
-.Failure Handler Example
-[source,java]
-----
-@Route(type = HandlerType.FAILURE)
-void unsupported(UnsupportedOperationException e, HttpServerResponse response) {
-    response.setStatusCode(501).end(e.getMessage());
-}
-----
-
-=== Returning Unis
-
-In a reactive route, you can return a `Uni` directly:
-
-[source,java]
-----
-@Route(path = "/hello")
-Uni<String> hello(RoutingContext context) {
-    return Uni.createFrom().item("Hello world!");
-}
-
-@Route(path = "/person")
-Uni<Person> getPerson(RoutingContext context) {
-    return Uni.createFrom().item(() -> new Person("neo", 12345));
-}
-----
-
-Returning `Unis` is convenient when using a reactive client:
-
-[source,java]
-----
-@Route(path = "/mail")
-Uni<Void> sendEmail(RoutingContext context) {
-    return mailer.send(...);
-}
-----
-
-The item produced by the returned `Uni` can be:
-
-* a string - written into the HTTP response directly
-* a buffer - written into the HTTP response directly
-* an object - written into the HTTP response after having been encoded into JSON.
-The `content-type` header is set to `application/json` if not already set.
-
-If the returned `Uni` produces a failure (or is `null`), an HTTP 500 response is written.
-
-Returning a `Uni<Void>` produces a 204 response (no content).
-
-=== Returning results
-
-You can also return a result directly:
-
-[source, java]
-----
-@Route(path = "/hello")
-String helloSync(RoutingContext context) {
-    return "Hello world";
-}
-----
-
-Be aware, the processing must be **non-blocking** as reactive routes are invoked on the I/O Thread.
-Otherwise, set the `type` attribute of the `@Route` annotation to `Route.HandlerType.BLOCKING`, or use the `@io.smallrye.common.annotation.Blocking` annotation.
-
-The method can return:
-
-* a string - written into the HTTP response directly
-* a buffer - written into the HTTP response directly
-* an object - written into the HTTP response after having been encoded into JSON.
-The `content-type` header is set to `application/json` if not already set.
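The bullet lists above (for both `Uni` results and direct results) boil down to a small dispatch on the returned item's type. A minimal sketch of that decision, under the assumption that an explicitly set content-type is never overridden; this is an illustration, not Quarkus' actual encoder:

```java
// Sketch of the return-value handling described above: strings and
// buffers are written to the response as-is, while any other object is
// JSON-encoded, defaulting the content-type to application/json when
// none was set. Illustration only.
public class ReturnValueSketch {

    static String contentTypeFor(Object item, String alreadySet) {
        if (alreadySet != null) {
            return alreadySet;         // an explicit content-type always wins
        }
        if (item instanceof String || item instanceof byte[]) {
            return null;               // written directly, no default applied here
        }
        return "application/json";     // JSON-encoded objects get a default
    }

    public static void main(String[] args) {
        System.out.println(contentTypeFor(new Object(), null));          // application/json
        System.out.println(contentTypeFor(new Object(), "text/custom")); // text/custom
    }
}
```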
-
-=== Returning Multis
-
-A reactive route can return a `Multi`.
-The items are written one by one, in the response.
-The response `Transfer-Encoding` header is set to `chunked`.
-
-[source, java]
-----
-@Route(path = "/hello")
-Multi<String> hellos(RoutingContext context) {
-    return Multi.createFrom().items("hello", "world", "!"); // <1>
-}
-----
-<1> Produces `helloworld!`
-
-The method can return:
-
-* a `Multi<String>` - the items are written one by one (one per _chunk_) in the response.
-* a `Multi<Buffer>` - the buffers are written one by one (one per _chunk_) without any processing.
-* a `Multi<Object>` - the items are encoded to JSON and written one by one in the response.
-
-
-[source, java]
-----
-@Route(path = "/people")
-Multi<Person> people(RoutingContext context) {
    return Multi.createFrom().items(
-            new Person("superman", 1),
-            new Person("batman", 2),
-            new Person("spiderman", 3));
-}
-----
-
-The previous snippet produces:
-
-[source, json]
-----
-{"name":"superman", "id": 1} // chunk 1
-{"name":"batman", "id": 2} // chunk 2
-{"name":"spiderman", "id": 3} // chunk 3
-----
-
-=== Streaming JSON Array items
-
-You can return a `Multi` to produce a JSON Array, where every item is an item from this array.
-The response is written item by item to the client.
-To do that, set the `produces` attribute to `"application/json"` (or `ReactiveRoutes.APPLICATION_JSON`).
-
-[source, java]
-----
-@Route(path = "/people", produces = ReactiveRoutes.APPLICATION_JSON)
-Multi<Person> people(RoutingContext context) {
-    return Multi.createFrom().items(
-            new Person("superman", 1),
-            new Person("batman", 2),
-            new Person("spiderman", 3));
-}
-----
-
-The previous snippet produces:
-
-[source, json]
-----
-[
-  {"name":"superman", "id": 1} // chunk 1
-  ,{"name":"batman", "id": 2} // chunk 2
-  ,{"name":"spiderman", "id": 3} // chunk 3
-]
-----
-
-TIP: The `produces` attribute is an array.
-When you pass a single value you can omit the "{" and "}".
-Note that `"application/json"` must be the first value in the array.
-
-Only `Multi<String>`, `Multi<Object>` and `Multi<Void>` can be written into the JSON Array.
-Using a `Multi<Void>` produces an empty array.
-You cannot use `Multi<Buffer>`.
-If you need to use `Buffer`, transform the content into a JSON or String representation first.
-
-[NOTE]
-.Deprecation of `asJsonArray`
-====
-The `ReactiveRoutes.asJsonArray` helper has been deprecated, as it is not compatible with the security layer of Quarkus.
-====
-
-=== Event Stream and Server-Sent Event support
-
-You can return a `Multi` to produce an event source (a stream of server-sent events).
-To enable this feature, set the `produces` attribute to `"text/event-stream"` (or `ReactiveRoutes.EVENT_STREAM`), such as in:
-
-[source, java]
-----
-@Route(path = "/people", produces = ReactiveRoutes.EVENT_STREAM)
-Multi<Person> people(RoutingContext context) {
-    return Multi.createFrom().items(
-            new Person("superman", 1),
-            new Person("batman", 2),
-            new Person("spiderman", 3));
-}
-----
-
-This method would produce:
-
-[source, text]
-----
-data: {"name":"superman", "id": 1}
-id: 0
-
-data: {"name":"batman", "id": 2}
-id: 1
-
-data: {"name":"spiderman", "id": 3}
-id: 2
-
-----
-
-TIP: The `produces` attribute is an array.
-When you pass a single value you can omit the "{" and "}".
-Note that `"text/event-stream"` must be the first value in the array.
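The event-stream output above follows the standard SSE wire format: a `data:` line carrying the payload, an `id:` line, then a blank line terminating each event. A minimal formatter sketch (hand-rolled JSON and a hypothetical `Person` record, for illustration only):

```java
import java.util.List;

// Sketch of SSE framing as shown above: each event is a "data:" line
// with the JSON payload, an "id:" line with the event index, and a
// blank-line terminator.
public class SseFramingSketch {

    record Person(String name, int id) {}

    static String frame(List<Person> people) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < people.size(); i++) {
            Person p = people.get(i);
            out.append("data: {\"name\":\"").append(p.name())
               .append("\", \"id\": ").append(p.id()).append("}\n");
            out.append("id: ").append(i).append("\n\n");
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(frame(List.of(new Person("superman", 1))));
    }
}
```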
-
-You can also implement the `io.quarkus.vertx.web.ReactiveRoutes.ServerSentEvent` interface to customize the `event` and `id` sections of the server-sent event:
-
-[source, java]
-----
-class PersonEvent implements ReactiveRoutes.ServerSentEvent<Person> {
-    public String name;
-    public int id;
-
-    public PersonEvent(String name, int id) {
-        this.name = name;
-        this.id = id;
-    }
-
-    @Override
-    public Person data() {
-        return new Person(name, id); // Will be JSON encoded
-    }
-
-    @Override
-    public long id() {
-        return id;
-    }
-
-    @Override
-    public String event() {
-        return "person";
-    }
-}
-----
-
-Using a `Multi<PersonEvent>` would produce:
-
-[source, text]
-----
-event: person
-data: {"name":"superman", "id": 1}
-id: 1
-
-event: person
-data: {"name":"batman", "id": 2}
-id: 2
-
-event: person
-data: {"name":"spiderman", "id": 3}
-id: 3
-
-----
-
-[NOTE]
-.Deprecation of `asEventStream`
-====
-The `ReactiveRoutes.asEventStream` helper has been deprecated, as it is not compatible with the security layer of Quarkus.
-====
-
-=== JSON Stream in NDJSON format
-
-You can return a `Multi` to produce a newline-delimited stream of JSON values.
-To enable this feature, set the `produces` attribute of the `@Route` annotation to `"application/x-ndjson"` (or `ReactiveRoutes.ND_JSON`):
-
-[source, java]
-----
-@Route(path = "/people", produces = ReactiveRoutes.ND_JSON)
-Multi<Person> people(RoutingContext context) {
-    return ReactiveRoutes.asJsonStream(Multi.createFrom().items(
-            new Person("superman", 1),
-            new Person("batman", 2),
-            new Person("spiderman", 3)
-    ));
-}
-----
-
-This method would produce:
-
-[source, text]
-----
-{"name":"superman", "id": 1}
-{"name":"batman", "id": 2}
-{"name":"spiderman", "id": 3}
-
-----
-
-TIP: The `produces` attribute is an array. When you pass a single value you can omit the "{" and "}".
-Note that `"application/x-ndjson"` must be the first value in the array.
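The NDJSON framing shown above is simple enough to sketch by hand: each element is serialized to a single-line JSON value followed by a newline. A minimal illustration with hand-rolled JSON and a hypothetical `Person` record (Quarkus uses Jackson for the real encoding):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of NDJSON framing: one JSON value per line, newline-delimited.
public class NdjsonSketch {

    record Person(String name, int id) {}

    static String toJson(Person p) {
        return "{\"name\":\"" + p.name() + "\", \"id\": " + p.id() + "}";
    }

    static String toNdjson(List<Person> people) {
        return people.stream()
                .map(NdjsonSketch::toJson)
                .collect(Collectors.joining("\n", "", "\n"));
    }

    public static void main(String[] args) {
        System.out.print(toNdjson(List.of(
                new Person("superman", 1),
                new Person("batman", 2))));
    }
}
```

Because every value sits on its own line, a client can parse each line independently as it arrives, which is exactly what makes the format convenient for streaming.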
-
-You can also provide strings instead of objects; in that case, the strings will be wrapped in quotes to become valid JSON values:
-
-[source, java]
-----
-@Route(path = "/people", produces = ReactiveRoutes.ND_JSON)
-Multi<String> people(RoutingContext context) {
-    return ReactiveRoutes.asJsonStream(Multi.createFrom().items(
-            "superman",
-            "batman",
-            "spiderman"
-    ));
-}
-----
-
-[source, text]
-----
-"superman"
-"batman"
-"spiderman"
-
-----
-
-[NOTE]
-.Deprecation of `asJsonStream`
-====
-The `ReactiveRoutes.asJsonStream` helper has been deprecated, as it is not compatible with the security layer of Quarkus.
-====
-
-=== Using Bean Validation
-
-You can combine reactive routes and Bean Validation.
-First, don't forget to add the `quarkus-hibernate-validator` extension to your project.
-Then, you can add constraints to your route parameters (annotated with `@Param` or `@Body`):
-
-[source,java]
-----
-@Route(produces = "application/json")
-Person createPerson(@Body @Valid Person person, @NonNull @Param("id") String primaryKey) {
-    // ...
-}
-----
-
-If the parameters do not pass the validation, an HTTP 400 response is returned.
-If the request accepts a JSON payload, the response follows the https://opensource.zalando.com/problem/constraint-violation/[Problem] format.
-
-When returning an object or a `Uni`, you can also use the `@Valid` annotation:
-
-[source,java]
-----
-@Route(...)
-@Valid Uni<Person> createPerson(@Body @Valid Person person, @NonNull @Param("id") String primaryKey) {
-    // ...
-}
-----
-
-If the item produced by the route does not pass the validation, an HTTP 500 response is returned.
-If the request accepts a JSON payload, the response follows the https://opensource.zalando.com/problem/constraint-violation/[Problem] format.
-
-Note that only `@Valid` is supported on the return type.
-The returned class can use any constraint.
-In the case of a `Uni`, the item produced asynchronously is checked.
-
-== Using the Vert.x Web Router
-
-You can also register your routes directly on the _HTTP routing layer_ by registering them on the `Router` object.
-To retrieve the `Router` instance at startup:
-
-[source,java]
-----
-public void init(@Observes Router router) {
-    router.get("/my-route").handler(rc -> rc.response().end("Hello from my route"));
-}
-----
-
-Check the https://vertx.io/docs/vertx-web/java/#_basic_vert_x_web_concepts[Vert.x Web documentation] to know more about route registration, options, and available handlers.
-
-
-[NOTE]
-====
-`Router` access is provided by the `quarkus-vertx-http` extension.
-If you use `quarkus-resteasy` or `quarkus-reactive-routes`, the extension will be added automatically.
-====
-
-You can also receive the Mutiny variant of the Router (`io.vertx.mutiny.ext.web.Router`):
-
-[source,java]
-----
-public void init(@Observes io.vertx.mutiny.ext.web.Router router) {
-    router.get("/my-route").handler(rc -> rc.response().endAndForget("Hello from my route"));
-}
-----
-
-== Intercepting HTTP requests
-
-You can also register filters that intercept incoming HTTP requests.
-Note that these filters are also applied to servlets, JAX-RS resources, and reactive routes.
-
-For example, the following code snippet registers a filter adding an HTTP header:
-
-[source,java]
-----
-package org.acme.reactive.routes;
-
-import io.quarkus.vertx.web.RouteFilter;
-import io.vertx.ext.web.RoutingContext;
-
-public class MyFilters {
-
-    @RouteFilter(100) // <1>
-    void myFilter(RoutingContext rc) {
-        rc.response().putHeader("X-Header", "intercepting the request");
-        rc.next(); // <2>
-    }
-}
-----
-
-<1> The `RouteFilter#value()` defines the priority used to sort the filters - filters with higher priority are called first.
-<2> The filter is likely required to call the `next()` method to continue the chain.
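The priority-ordered filter chain can be modelled as a sorted list of handlers where each one decides whether to call `next()`. A hypothetical sketch of that mechanism (not Quarkus' actual dispatcher):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of a priority-ordered filter chain: filters with a higher
// priority run first, and each filter must call next() for the chain
// to proceed.
public class FilterChainSketch {

    record Filter(int priority, BiConsumer<Map<String, String>, Runnable> handler) {}

    static void run(List<Filter> filters, Map<String, String> headers) {
        List<Filter> sorted = new ArrayList<>(filters);
        sorted.sort(Comparator.comparingInt(Filter::priority).reversed()); // higher first

        Runnable chain = () -> {}; // terminal handler: nothing left to do
        for (int i = sorted.size() - 1; i >= 0; i--) {
            Filter f = sorted.get(i);
            Runnable next = chain;
            chain = () -> f.handler().accept(headers, next);
        }
        chain.run();
    }

    public static void main(String[] args) {
        Map<String, String> headers = new LinkedHashMap<>();
        run(List.of(
                new Filter(10, (h, next) -> { h.put("X-Other", "second"); next.run(); }),
                new Filter(100, (h, next) -> { h.put("X-Header", "intercepted"); next.run(); })),
                headers);
        System.out.println(headers); // X-Header was added first
    }
}
```

A filter that never calls `next.run()` short-circuits the chain, which mirrors why forgetting `rc.next()` in a real `@RouteFilter` leaves the request hanging.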
-
-== Adding OpenAPI and Swagger UI
-
-You can add support for link:https://www.openapis.org/[OpenAPI] and link:https://swagger.io/tools/swagger-ui/[Swagger UI] by using the `quarkus-smallrye-openapi` extension.
-
-Add the extension by running this command:
-
-:add-extension-extensions: quarkus-smallrye-openapi
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-smallrye-openapi</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-smallrye-openapi")
-----
-
-This is enough to generate a basic OpenAPI schema document from your Vert.x Routes:
-
-[source,bash]
-----
-curl http://localhost:8080/q/openapi
-----
-
-You will see the generated OpenAPI schema document:
-
-[source, yaml]
-----
----
-openapi: 3.0.3
-info:
-  title: Generated API
-  version: "1.0"
-paths:
-  /greetings:
-    get:
-      responses:
-        "204":
-          description: No Content
-  /hello:
-    get:
-      responses:
-        "204":
-          description: No Content
-  /world:
-    get:
-      responses:
-        "200":
-          description: OK
-          content:
-            '*/*':
-              schema:
-                type: string
-----
-
-Also see xref:openapi-swaggerui.adoc[the OpenAPI Guide].
-
-=== Adding MicroProfile OpenAPI Annotations
-
-You can use link:https://github.com/eclipse/microprofile-open-api[MicroProfile OpenAPI] to better document your schema;
-for example, adding header info or specifying the return type on `void` methods can be useful:
-
-[source, java]
-----
-@OpenAPIDefinition( // <1>
-    info = @Info(
-        title="Greeting API",
-        version = "1.0.1",
-        contact = @Contact(
-            name = "Greeting API Support",
-            url = "http://exampleurl.com/contact",
-            email = "techsupport@example.com"),
-        license = @License(
-            name = "Apache 2.0",
-            url = "https://www.apache.org/licenses/LICENSE-2.0.html"))
-)
-@ApplicationScoped
-public class MyDeclarativeRoutes {
-
-    // neither path nor regex is set - match a path derived from the method name
-    @Route(methods = Route.HttpMethod.GET)
-    @APIResponse(responseCode="200",
-            description="Say hello",
-            content=@Content(mediaType="application/json", schema=@Schema(type=SchemaType.STRING))) // <2>
-    void hello(RoutingContext rc) {
-        rc.response().end("hello");
-    }
-
-    @Route(path = "/world")
-    String helloWorld() {
-        return "Hello world!";
-    }
-
-    @Route(path = "/greetings", methods = HttpMethod.GET)
-    @APIResponse(responseCode="200",
-            description="Greeting",
-            content=@Content(mediaType="application/json", schema=@Schema(type=SchemaType.STRING)))
-    void greetings(RoutingExchange ex) {
-        ex.ok("hello " + ex.getParam("name").orElse("world"));
-    }
-}
-----
-<1> Header information about your API.
-<2> Defining the response
-
-This will generate this OpenAPI schema:
-
-[source, yaml]
-----
----
-openapi: 3.0.3
-info:
-  title: Greeting API
-  contact:
-    name: Greeting API Support
-    url: http://exampleurl.com/contact
-    email: techsupport@example.com
-  license:
-    name: Apache 2.0
-    url: https://www.apache.org/licenses/LICENSE-2.0.html
-  version: 1.0.1
-paths:
-  /greetings:
-    get:
-      responses:
-        "200":
-          description: Greeting
-          content:
-            application/json:
-              schema:
-                type: string
-  /hello:
-    get:
-      responses:
-        "200":
-          description: Say hello
-          content:
-            application/json:
-              schema:
-                type: string
-  /world:
-    get:
-      responses:
-        "200":
-          description: OK
-          content:
-            '*/*':
-              schema:
-                type: string
-----
-
-=== Using Swagger UI
-
-Swagger UI is included by default when running in `dev` or `test` mode, and can optionally be added to `prod` mode.
-See the xref:openapi-swaggerui.adoc[OpenAPI Guide] for more details.
-
-Navigate to link:http://localhost:8080/q/swagger-ui/[localhost:8080/q/swagger-ui/] and you will see the Swagger UI screen:
-
-image:reactive-routes-guide-screenshot01.png[alt=Swagger UI]
-
-== Conclusion
-
-This guide has introduced how you can use reactive routes to define an HTTP endpoint.
-It also describes the structure of the Quarkus HTTP layer and how to write filters.
diff --git a/_versions/2.7/guides/reactive-sql-clients.adoc b/_versions/2.7/guides/reactive-sql-clients.adoc
deleted file mode 100644
index 227a95bfc75..00000000000
--- a/_versions/2.7/guides/reactive-sql-clients.adoc
+++ /dev/null
@@ -1,735 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Reactive SQL Clients
-
-include::./attributes.adoc[]
-:config-file: application.properties
-
-The Reactive SQL Clients have a straightforward API focusing on scalability and low overhead.
-Currently, the following database servers are supported: - -* IBM Db2 -* PostgreSQL -* MariaDB/MySQL -* Microsoft SQL Server -* Oracle - -[NOTE] -==== -The Reactive SQL Client for Oracle is considered _tech preview_. - -In _tech preview_ mode, early feedback is requested to mature the idea. -There is no guarantee of stability in the platform until the solution matures. -Feedback is welcome on our https://groups.google.com/d/forum/quarkus-dev[mailing list] or as issues in our https://github.com/quarkusio/quarkus/issues[GitHub issue tracker]. -==== - -In this guide, you will learn how to implement a simple CRUD application exposing data stored in *PostgreSQL* over a RESTful API. - -NOTE: Extension and connection pool class names for each client can be found at the bottom of this document. - -IMPORTANT: If you are not familiar with the Quarkus Vert.x extension, consider reading the xref:vertx.adoc[Using Eclipse Vert.x] guide first. - -The application shall manage fruit entities: - -[source,java] ----- -public class Fruit { - - public Long id; - - public String name; - - public Fruit() { - } - - public Fruit(String name) { - this.name = name; - } - - public Fruit(Long id, String name) { - this.id = id; - this.name = name; - } -} ----- - -[TIP] -==== -Do you need a ready-to-use PostgreSQL server to try out the examples? - -[source,bash] ----- -docker run -it --rm=true --name quarkus_test -e POSTGRES_USER=quarkus_test -e POSTGRES_PASSWORD=quarkus_test -e POSTGRES_DB=quarkus_test -p 5432:5432 postgres:14.1 ----- -==== - -== Installing - -=== Reactive PostgreSQL Client extension - -First, make sure your project has the `quarkus-reactive-pg-client` extension enabled. 
-If you are creating a new project, use the following command:
-
-:create-app-artifact-id: reactive-pg-client-quickstart
-:create-app-extensions: resteasy,reactive-pg-client,resteasy-mutiny
-include::includes/devtools/create-app.adoc[]
-
-If you have an already created project, the `reactive-pg-client` extension can be added to an existing Quarkus project with the `add-extension` command:
-
-:add-extension-extensions: reactive-pg-client
-include::includes/devtools/extension-add.adoc[]
-
-Otherwise, you can manually add the dependency to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-reactive-pg-client</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-reactive-pg-client")
-----
-
-=== Mutiny
-
-Reactive REST endpoints in your application that return `Uni` or `Multi` need the Mutiny support for RESTEasy extension (`io.quarkus:quarkus-resteasy-mutiny`) to work properly:
-
-:add-extension-extensions: resteasy-mutiny
-include::includes/devtools/extension-add.adoc[]
-
-[TIP]
-====
-In this guide, we will use the Mutiny API of the Reactive PostgreSQL Client.
-If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
-====
-
-=== JSON Binding
-
-We will expose `Fruit` instances over HTTP in the JSON format.
-Consequently, you also need to add the `quarkus-resteasy-jackson` extension:
-
-:add-extension-extensions: resteasy-jackson
-include::includes/devtools/extension-add.adoc[]
-
-If you prefer not to use the command line, manually add the dependency to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-jackson</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-resteasy-jackson")
-----
-
-Of course, this is only a requirement for this guide, not for any application using the Reactive PostgreSQL Client.
-
-== Configuring
-
-The Reactive PostgreSQL Client can be configured with standard Quarkus datasource properties and a reactive URL:
-
-[source,properties]
-.src/main/resources/application.properties
-----
-quarkus.datasource.db-kind=postgresql
-quarkus.datasource.username=quarkus_test
-quarkus.datasource.password=quarkus_test
-quarkus.datasource.reactive.url=postgresql://localhost:5432/quarkus_test
-----
-
-With that, you may create your `FruitResource` skeleton and `@Inject` a `io.vertx.mutiny.pgclient.PgPool` instance:
-
-[source,java]
-.src/main/java/org/acme/vertx/FruitResource.java
-----
-@Path("fruits")
-public class FruitResource {
-
-    @Inject
-    io.vertx.mutiny.pgclient.PgPool client;
-}
-----
-
-== Database schema and seed data
-
-Before we implement the REST endpoint and data management code, we need to set up the database schema.
-It would also be convenient to have some data inserted up-front.
-
-For production, we would recommend using something like the xref:flyway.adoc[Flyway database migration tool].
-But for development, we can simply drop and create the tables on startup, and then insert a few fruits.
-
-[source,java]
-.src/main/java/org/acme/vertx/FruitResource.java
-----
-@Inject
-@ConfigProperty(name = "myapp.schema.create", defaultValue = "true") // <1>
-boolean schemaCreate;
-
-void config(@Observes StartupEvent ev) {
-    if (schemaCreate) {
-        initdb();
-    }
-}
-
-private void initdb() {
-    // TODO
-}
-----
-
-TIP: You may override the default value of the `myapp.schema.create` property in the `application.properties` file.
-
-Almost ready!
-To initialize the DB in development mode, we will use the client's simple `query` method.
-It returns a `Uni` and thus can be composed to execute queries sequentially:
-
-[source,java]
-----
-client.query("DROP TABLE IF EXISTS fruits").execute()
-    .flatMap(r -> client.query("CREATE TABLE fruits (id SERIAL PRIMARY KEY, name TEXT NOT NULL)").execute())
-    .flatMap(r -> client.query("INSERT INTO fruits (name) VALUES ('Orange')").execute())
-    .flatMap(r -> client.query("INSERT INTO fruits (name) VALUES ('Pear')").execute())
-    .flatMap(r -> client.query("INSERT INTO fruits (name) VALUES ('Apple')").execute())
-    .await().indefinitely();
-----
-
-NOTE: Wondering why we need to block until the latest query is completed?
-This code is part of the startup method above, and Quarkus invokes it synchronously.
-As a consequence, returning prematurely could lead to serving requests while the database is not ready yet.
-
-That's it!
-So far we have seen how to configure a pooled client and execute simple queries.
-We are now ready to develop the data management code and implement our RESTful endpoint.
-
-== Using
-
-=== Query results traversal
-
-In development mode, the database is set up with a few rows in the `fruits` table.
-To retrieve all the data, we will use the `query` method again:
-
-[source,java]
-----
-Uni<RowSet<Row>> rowSet = client.query("SELECT id, name FROM fruits ORDER BY name ASC").execute();
-----
-
-When the operation completes, we will get a `RowSet<Row>` that has all the rows buffered in memory.
-A `RowSet<Row>` is a `java.lang.Iterable<Row>` and thus can be converted to a `Multi`:
-
-[source,java]
-----
-Multi<Fruit> fruits = rowSet
-    .onItem().transformToMulti(set -> Multi.createFrom().iterable(set))
-    .onItem().transform(Fruit::from);
-----
-
-The `Fruit#from` method converts a `Row` instance to a `Fruit` instance.
-It is extracted as a convenience for the implementation of the other data management methods:
-
-[source,java]
-.src/main/java/org/acme/vertx/Fruit.java
-----
-private static Fruit from(Row row) {
-    return new Fruit(row.getLong("id"), row.getString("name"));
-}
-----
-
-Putting it all together, the `Fruit.findAll` method looks like:
-
-[source,java]
-.src/main/java/org/acme/vertx/Fruit.java
-----
-public static Multi<Fruit> findAll(PgPool client) {
-    return client.query("SELECT id, name FROM fruits ORDER BY name ASC").execute()
-        .onItem().transformToMulti(set -> Multi.createFrom().iterable(set))
-        .onItem().transform(Fruit::from);
-}
-----
-
-And the endpoint to get all fruits from the backend:
-
-[source,java]
-.src/main/java/org/acme/vertx/FruitResource.java
-----
-@GET
-public Multi<Fruit> get() {
-    return Fruit.findAll(client);
-}
-----
-
-Now start Quarkus in dev mode with:
-
-include::includes/devtools/dev.adoc[]
-
-Lastly, open your browser and navigate to http://localhost:8080/fruits, and you should see:
-
-[source,json]
-----
-[{"id":3,"name":"Apple"},{"id":1,"name":"Orange"},{"id":2,"name":"Pear"}]
-----
-
-=== Prepared queries
-
-The Reactive PostgreSQL Client can also prepare queries and take parameters that are replaced in the SQL statement at execution time:
-
-[source,java]
-----
-client.preparedQuery("SELECT id, name FROM fruits WHERE id = $1").execute(Tuple.of(id))
-----
-
-TIP: For PostgreSQL, the SQL string can refer to parameters by position, using `$1`, `$2`, etc.
-Please refer to the <<reactive-sql-clients-details>> section for other databases.
-
-Similar to the simple `query` method, `preparedQuery` returns an instance of `PreparedQuery<RowSet<Row>>`.
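The `$1`-style placeholders are PostgreSQL-specific; other databases use `?` or `@p1` (see the Database Clients details table later in this guide). Purely as an illustration of the positional form — the following helper is hypothetical and not part of Quarkus or Vert.x, and it naively ignores `?` characters inside string literals:

```java
// Hypothetical helper: rewrites JDBC-style '?' placeholders into PostgreSQL's
// positional '$1', '$2', ... form, numbering them in order of appearance.
public class PlaceholderRewriter {

    public static String toPostgres(String sql) {
        StringBuilder out = new StringBuilder();
        int index = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?') {
                out.append('$').append(++index); // first '?' becomes $1, then $2, ...
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toPostgres("SELECT id, name FROM fruits WHERE id = ? AND name = ?"));
        // -> SELECT id, name FROM fruits WHERE id = $1 AND name = $2
    }
}
```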
-Equipped with this tooling, we are able to safely use an `id` provided by the user to get the details of a particular fruit:
-
-[source,java]
-.src/main/java/org/acme/vertx/Fruit.java
-----
-public static Uni<Fruit> findById(PgPool client, Long id) {
-    return client.preparedQuery("SELECT id, name FROM fruits WHERE id = $1").execute(Tuple.of(id)) // <1>
-        .onItem().transform(RowSet::iterator) // <2>
-        .onItem().transform(iterator -> iterator.hasNext() ? from(iterator.next()) : null); // <3>
-}
-----
-<1> Create a `Tuple` to hold the prepared query parameters.
-<2> Get an `Iterator` for the `RowSet` result.
-<3> Create a `Fruit` instance from the `Row` if an entity was found.
-
-And in the JAX-RS resource:
-
-[source,java]
-.src/main/java/org/acme/vertx/FruitResource.java
-----
-@GET
-@Path("{id}")
-public Uni<Response> getSingle(@PathParam Long id) {
-    return Fruit.findById(client, id)
-        .onItem().transform(fruit -> fruit != null ? Response.ok(fruit) : Response.status(Status.NOT_FOUND)) // <1>
-        .onItem().transform(ResponseBuilder::build); // <2>
-}
-----
-<1> Prepare a JAX-RS response with either the `Fruit` instance if found or the `404` status code.
-<2> Build and send the response.
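The `hasNext()`/`next()` idiom in `findById` generalizes to any single-result lookup: map the first element if one exists, otherwise produce `null`. A standalone sketch with plain collections standing in for the `RowSet` (the `firstOrNull` helper is illustrative, not a Quarkus API):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Illustrative only: returns the mapped first element of an iterable,
// or null when the iterable is empty -- the same shape as the RowSet
// iterator handling in findById above.
public class FirstOrNull {

    static <R, T> T firstOrNull(Iterable<R> rows, Function<R, T> mapper) {
        Iterator<R> it = rows.iterator();
        return it.hasNext() ? mapper.apply(it.next()) : null;
    }

    public static void main(String[] args) {
        System.out.println(firstOrNull(List.of("orange"), String::toUpperCase)); // ORANGE
        System.out.println(firstOrNull(List.<String>of(), String::toUpperCase)); // null
    }
}
```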
-
-The same logic applies when saving a `Fruit`:
-
-[source,java]
-.src/main/java/org/acme/vertx/Fruit.java
-----
-public Uni<Long> save(PgPool client) {
-    return client.preparedQuery("INSERT INTO fruits (name) VALUES ($1) RETURNING id").execute(Tuple.of(name))
-        .onItem().transform(pgRowSet -> pgRowSet.iterator().next().getLong("id"));
-}
-----
-
-And in the web resource we handle the `POST` request:
-
-[source,java]
-.src/main/java/org/acme/vertx/FruitResource.java
-----
-@POST
-public Uni<Response> create(Fruit fruit) {
-    return fruit.save(client)
-        .onItem().transform(id -> URI.create("/fruits/" + id))
-        .onItem().transform(uri -> Response.created(uri).build());
-}
-----
-
-=== Result metadata
-
-A `RowSet` not only holds your data in memory, it also gives you some information about the data itself, such as:
-
-* the number of rows affected by the query (inserted/deleted/updated/retrieved depending on the query type),
-* the column names.
-
-Let's use this to support removal of fruits in the database:
-
-[source,java]
-.src/main/java/org/acme/vertx/Fruit.java
-----
-public static Uni<Boolean> delete(PgPool client, Long id) {
-    return client.preparedQuery("DELETE FROM fruits WHERE id = $1").execute(Tuple.of(id))
-        .onItem().transform(pgRowSet -> pgRowSet.rowCount() == 1); // <1>
-}
-----
-<1> Inspect the metadata to determine whether a fruit has actually been deleted.
-
-And to handle the HTTP `DELETE` method in the web resource:
-
-[source,java]
-.src/main/java/org/acme/vertx/FruitResource.java
-----
-@DELETE
-@Path("{id}")
-public Uni<Response> delete(@PathParam Long id) {
-    return Fruit.delete(client, id)
-        .onItem().transform(deleted -> deleted ? Status.NO_CONTENT : Status.NOT_FOUND)
-        .onItem().transform(status -> Response.status(status).build());
-}
-----
-
-With the `GET`, `POST` and `DELETE` methods implemented, we may now create a minimal web page to try the RESTful application out.
-We will use https://jquery.com/[jQuery] to simplify interactions with the backend:
-
-[source,html]
-----
-<!doctype html>
-<html>
-<head>
-    <meta charset="utf-8"/>
-    <title>Reactive PostgreSQL Client - Quarkus</title>
-    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
-    <script type="application/javascript" src="fruits.js"></script>
-</head>
-<body>
-
-<h1>Fruits API Testing</h1>
-
-<h2>All fruits</h2>
-<div id="all-fruits"></div>
-
-<h2>Create Fruit</h2>
-<input id="fruit-name" type="text">
-<button id="create-fruit-button" type="button">Create</button>
-
-</body>
-</html>
-----
-
-In the JavaScript code, we need a function to refresh the list of fruits when:
-
-* the page is loaded, or
-* a fruit is added, or
-* a fruit is deleted.
-
-[source,javascript]
-----
-function refresh() {
-    $.get('/fruits', function (fruits) {
-        var list = '';
-        (fruits || []).forEach(function (fruit) { // <1>
-            list = list
-                + '<tr>'
-                + '<td>' + fruit.id + '</td>'
-                + '<td>' + fruit.name + '</td>'
-                + '<td><a href="#" onclick="deleteFruit(' + fruit.id + ')">Delete</a></td>'
-                + '</tr>'
-        });
-        if (list.length > 0) {
-            list = ''
-                + '<table><thead><th>Id</th><th>Name</th><th></th></thead>'
-                + list
-                + '</table>
';
-        } else {
-            list = "No fruits in database"
-        }
-        $('#all-fruits').html(list);
-    });
-}
-
-function deleteFruit(id) {
-    $.ajax('/fruits/' + id, {method: 'DELETE'}).then(refresh);
-}
-
-$(document).ready(function () {
-
-    $('#create-fruit-button').click(function () {
-        var fruitName = $('#fruit-name').val();
-        $.post({
-            url: '/fruits',
-            contentType: 'application/json',
-            data: JSON.stringify({name: fruitName})
-        }).then(refresh);
-    });
-
-    refresh();
-});
-----
-<1> The `fruits` parameter is not defined when the database is empty.
-
-All done!
-Navigate to http://localhost:8080/fruits.html and read/create/delete some fruits.
-
-[[reactive-sql-clients-details]]
-== Database Clients details
-
-[cols="10,40,40,10"]
-|===
-|Database |Extension name |Pool class name |Placeholders
-
-|IBM Db2
-|`quarkus-reactive-db2-client`
-|`io.vertx.mutiny.db2client.DB2Pool`
-|`?`
-
-|MariaDB/MySQL
-|`quarkus-reactive-mysql-client`
-|`io.vertx.mutiny.mysqlclient.MySQLPool`
-|`?`
-
-|Microsoft SQL Server
-|`quarkus-reactive-mssql-client`
-|`io.vertx.mutiny.mssqlclient.MSSQLPool`
-|`@p1`, `@p2`, etc.
-
-|Oracle
-|`quarkus-reactive-oracle-client`
-|`io.vertx.mutiny.oracleclient.OraclePool`
-|`?`
-
-|PostgreSQL
-|`quarkus-reactive-pg-client`
-|`io.vertx.mutiny.pgclient.PgPool`
-|`$1`, `$2`, etc.
-|===
-
-== Transactions
-
-The reactive SQL clients support transactions.
-A transaction is started with `io.vertx.mutiny.sqlclient.SqlConnection#begin` and terminated with either `io.vertx.mutiny.sqlclient.Transaction#commit` or `io.vertx.mutiny.sqlclient.Transaction#rollback`.
-All these operations are asynchronous:
-
-* `connection.begin()` returns a `Uni<Transaction>`
-* `transaction.commit()` and `transaction.rollback()` return `Uni<Void>`
-
-Managing transactions in the reactive programming world can be cumbersome.
-Instead of writing repetitive and complex (thus error-prone!) code, you can use the `io.vertx.mutiny.sqlclient.Pool#withTransaction` helper method.
-
-The following snippet shows how to run 2 insertions in the same transaction:
-
-[source,java]
-----
-public static Uni<Void> insertTwoFruits(PgPool client, Fruit fruit1, Fruit fruit2) {
-    return client.withTransaction(conn -> {
-        Uni<RowSet<Row>> insertOne = conn.preparedQuery("INSERT INTO fruits (name) VALUES ($1) RETURNING id")
-            .execute(Tuple.of(fruit1.name));
-        Uni<RowSet<Row>> insertTwo = conn.preparedQuery("INSERT INTO fruits (name) VALUES ($1) RETURNING id")
-            .execute(Tuple.of(fruit2.name));
-
-        return Uni.combine().all().unis(insertOne, insertTwo)
-            // Ignore the results (the two ids)
-            .discardItems();
-    });
-}
-----
-
-In this example, the transaction is automatically committed on success or rolled back on failure.
-
-You can also create dependent actions as follows:
-
-[source,java]
-----
-return client.withTransaction(conn -> conn
-
-    .preparedQuery("INSERT INTO person (firstname,lastname) VALUES ($1,$2) RETURNING id")
-    .execute(Tuple.of(person.getFirstName(), person.getLastName()))
-
-    .onItem().transformToUni(id -> conn.preparedQuery("INSERT INTO addr (person_id,addrline1) VALUES ($1,$2)")
-        .execute(Tuple.of(id.iterator().next().getLong("id"), person.getLastName())))
-
-    .onItem().ignore().andContinueWithNull());
-----
-
-== Working with batch query results
-
-When executing batch queries, reactive SQL clients return a `RowSet` that corresponds to the results of the first element in the batch.
-To get the results of the following batch elements, you must invoke the `RowSet#next` method until it returns `null`.
-
-Let's say you want to update some rows and compute the total number of affected rows.
-You must inspect each `RowSet`:
-
-[source,java]
-----
-PreparedQuery<RowSet<Row>> preparedQuery = client.preparedQuery("UPDATE fruits SET name = $1 WHERE id = $2");
-
-Uni<RowSet<Row>> rowSet = preparedQuery.executeBatch(Arrays.asList(
-    Tuple.of("Orange", 1),
-    Tuple.of("Pear", 2),
-    Tuple.of("Apple", 3)));
-
-Uni<Integer> totalAffected = rowSet.onItem().transform(res -> {
-    int total = 0;
-    do {
-        total += res.rowCount(); // <1>
-    } while ((res = res.next()) != null); // <2>
-    return total;
-});
-----
-<1> Compute the sum of `RowSet#rowCount`.
-<2> Invoke `RowSet#next` until it returns `null`.
-
-As another example, if you want to load all the rows you just inserted, you must concatenate the contents of each `RowSet`:
-
-[source,java]
-----
-PreparedQuery<RowSet<Row>> preparedQuery = client.preparedQuery("INSERT INTO fruits (name) VALUES ($1) RETURNING *");
-
-Uni<RowSet<Row>> rowSet = preparedQuery.executeBatch(Arrays.asList(
-    Tuple.of("Orange"),
-    Tuple.of("Pear"),
-    Tuple.of("Apple")));
-
-// Generate a Multi of RowSet items
-Multi<RowSet<Row>> rowSets = rowSet.onItem().transformToMulti(res -> {
-    return Multi.createFrom().generator(() -> res, (rs, emitter) -> {
-        RowSet<Row> next = null;
-        if (rs != null) {
-            emitter.emit(rs);
-            next = rs.next();
-        }
-        if (next == null) {
-            emitter.complete();
-        }
-        return next;
-    });
-});
-
-// Transform each RowSet into a Multi of Row items and concatenate
-Multi<Row> rows = rowSets.onItem().transformToMultiAndConcatenate(Multi.createFrom()::iterable);
-----
-
-== Multiple Datasources
-
-The reactive SQL clients support defining several datasources.
- -A typical configuration with several datasources would look like: - -[source,properties] ----- -quarkus.datasource.db-kind=postgresql <1> -quarkus.datasource.username=user-default -quarkus.datasource.password=password-default -quarkus.datasource.reactive.url=postgresql://localhost:5432/default - -quarkus.datasource."additional1".db-kind=postgresql <2> -quarkus.datasource."additional1".username=user-additional1 -quarkus.datasource."additional1".password=password-additional1 -quarkus.datasource."additional1".reactive.url=postgresql://localhost:5432/additional1 - -quarkus.datasource."additional2".db-kind=mysql <3> -quarkus.datasource."additional2".username=user-additional2 -quarkus.datasource."additional2".password=password-additional2 -quarkus.datasource."additional2".reactive.url=mysql://localhost:3306/additional2 ----- -<1> The default datasource - using PostgreSQL. -<2> A named datasource called `additional1` - using PostgreSQL. -<3> A named datasource called `additional2` - using MySQL. - -You can then inject the clients as follows: - -[source,java] ----- -@Inject <1> -PgPool defaultClient; - -@Inject -@ReactiveDataSource("additional1") <2> -PgPool additional1Client; - -@Inject -@ReactiveDataSource("additional2") -MySQLPool additional2Client; ----- -<1> Injecting the client for the default datasource does not require anything special. -<2> For a named datasource, you use the `@ReactiveDataSource` CDI qualifier with the datasource name as its value. - -== UNIX Domain Socket connections - -The PostgreSQL and MariaDB/MySQL clients can be configured to connect to the server through a UNIX domain socket. - -First make sure that xref:vertx-reference.adoc#native-transport[native transport support] is enabled. - -Then configure the database connection url. -This step depends on the database type. 
-
-=== PostgreSQL
-
-PostgreSQL domain socket paths have the following form: `<directory>/.s.PGSQL.<port>`
-
-The database connection url must be configured so that:
-
-* the `host` is the `directory` in the socket path
-* the `port` is the `port` in the socket path
-
-Consider the following socket path: `/var/run/postgresql/.s.PGSQL.5432`.
-
-In `application.properties` add:
-
-[source,properties]
-----
-quarkus.datasource.reactive.url=postgresql://:5432/quarkus_test?host=/var/run/postgresql
-----
-
-=== MariaDB/MySQL
-
-The database connection url must be configured so that the `host` is the socket path.
-
-Consider the following socket path: `/var/run/mysqld/mysqld.sock`.
-
-In `application.properties` add:
-
-[source,properties]
-----
-quarkus.datasource.reactive.url=mysql:///quarkus_test?host=/var/run/mysqld/mysqld.sock
-----
-
-== Pooled Connection `idle-timeout`
-
-Reactive datasources can be configured with an `idle-timeout` (in milliseconds).
-It is the maximum time a connection remains unused in the pool before it is closed.
-
-NOTE: The `idle-timeout` is disabled by default.
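Duration-typed properties such as `idle-timeout` accept the ISO-8601 notation parsed by `java.time.Duration`; as a quick sanity check of what a value like `PT60M` (used below) denotes:

```java
import java.time.Duration;

// ISO-8601 duration notation, as parsed by java.time.Duration:
// "PT60M" denotes sixty minutes, i.e. 3,600,000 milliseconds.
public class DurationNotation {

    static Duration parse(String value) {
        return Duration.parse(value);
    }

    public static void main(String[] args) {
        System.out.println(parse("PT60M").toMinutes()); // 60
        System.out.println(parse("PT60M").toMillis());  // 3600000
    }
}
```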
- -For example, you could expire idle connections after 60 minutes: - -[source,properties] ----- -quarkus.datasource.reactive.idle-timeout=PT60M ----- - -== Configuration Reference - -=== Common Datasource - -include::{generated-dir}/config/quarkus-datasource.adoc[opts=optional, leveloffset=+1] - -=== Reactive Datasource - -include::{generated-dir}/config/quarkus-reactive-datasource.adoc[opts=optional, leveloffset=+1] - -=== IBM Db2 - -include::{generated-dir}/config/quarkus-reactive-db2-client.adoc[opts=optional, leveloffset=+1] - -=== MariaDB/MySQL - -include::{generated-dir}/config/quarkus-reactive-mysql-client.adoc[opts=optional, leveloffset=+1] - -=== Microsoft SQL Server - -include::{generated-dir}/config/quarkus-reactive-mssql-client.adoc[opts=optional, leveloffset=+1] - -=== Oracle - -include::{generated-dir}/config/quarkus-reactive-oracle-client.adoc[opts=optional, leveloffset=+1] - -=== PostgreSQL - -include::{generated-dir}/config/quarkus-reactive-pg-client.adoc[opts=optional, leveloffset=+1] diff --git a/_versions/2.7/guides/reaugmentation.adoc b/_versions/2.7/guides/reaugmentation.adoc deleted file mode 100644 index 001f11bac68..00000000000 --- a/_versions/2.7/guides/reaugmentation.adoc +++ /dev/null @@ -1,71 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Re-augment a Quarkus Application - -include::./attributes.adoc[] - -== What is augmentation? - -Quarkus application configuration may include two types of configuration options: - -- Build time options, handled during the application build time; -- Runtime options, that may be adjusted after the application has been built but before it has been launched. - -The augmentation is a phase of an application build process during which the byte code of the application is optimized according to the application build time configuration. 
-Initialization steps that used to happen when an EAR file was deployed on a Jakarta EE server, such as parsing static configuration, creating proxy instances, etc., now happen at augmentation time.
-CDI beans added after augmentation won't work (because of the missing proxy classes), and build time properties (e.g. `quarkus.datasource.db-kind`) changed after augmentation will be ignored.
-Build time properties are marked with a lock icon (icon:lock[]) in the xref:all-config.adoc[list of all configuration options].
-It doesn't matter if you use profiles or any other way to override the properties.
-The build time properties that were active during augmentation are baked into the build.
-
-> Re-augmentation is the process of recreating the augmentation output for a different build time configuration
-
-== When is re-augmentation useful?
-
-Re-augmentation is useful in case the users of your application want to be able to change some of its build time properties.
-For instance changing the database driver or switching features on or off (e.g. xref:opentracing.adoc[OpenTracing] or link:{config-consul-guide}[Config Consul]).
-If there are only two or three build time properties that depend on the user environment, you may consider providing alternative builds of the application.
-However, if there are more such properties, you may prefer shipping a mutable jar instead and letting your users re-augment the application for their environment.
-Please note that you won't be able to use native images with the package type `mutable-jar`.
-Think of the consequences and what other options you have!
-
-It is not a good idea to do re-augmentation at runtime unless you miss the good old times when starting up a server took several minutes and you could enjoy a cup of coffee until it was ready.
-
-== How to re-augment a Quarkus application
-
-In order to run the augmentation steps you need the deployment JARs of the used Quarkus extensions.
-These JARs are only present in the `mutable-jar` distribution. This means that you need to build your application with `quarkus.package.type=mutable-jar`.
-The `mutable-jar` distribution is the same as the `fast-jar` distribution, except for the additional folder `quarkus-app/lib/deployment`
-which contains the deployment JARs and their dependencies (and some class-loader configuration).
-
-TIP: By default, you'll get a warning if a build time property has been changed at runtime.
-You may set the `quarkus.configuration.build-time-mismatch-at-runtime=fail` property to make sure your application does not start up if there is a mismatch.
-However, as of this writing, changing `quarkus.datasource.db-kind` at runtime neither failed nor produced a warning but was silently ignored.
-
-=== 1. Build your application as `mutable-jar`
-
-[source,bash]
-----
-mvn clean install -Dquarkus.package.type=mutable-jar
-----
-
-=== 2. Re-augment your application with a different build time configuration
-
-In order to re-augment your Quarkus application with different build time properties, start the application with the desired configuration plus the `quarkus.launch.rebuild` system property set to `true`.
-
-The following example changes the `quarkus.datasource.db-kind` to `mysql`. For this to work, the MySQL extension must have been included in the build. Augmentation can only use extensions that were present at compile time.
-
-[source,bash]
-----
-java -jar -Dquarkus.launch.rebuild=true -Dquarkus.datasource.db-kind=mysql target/quarkus-app/quarkus-run.jar
-----
-
-NOTE: It does not matter if you use system properties, environment variables, profiles, or an external config file. The current
-configuration will be used for augmentation (the content of `quarkus-app/quarkus` will be replaced with the new augmentation output).
-The command line above will not launch the application. Quarkus will exit immediately after the application has been re-augmented.
-
-=== 3.
Optional: Delete the deployments folder
-
-You may delete the directory `quarkus-app/lib/deployment` to save some space in your ZIP distribution or Docker image (remember to use a multistage Docker build to avoid unnecessary layers). After deleting the `deployment` directory, it is no longer possible to re-augment the application.
diff --git a/_versions/2.7/guides/redis-dev-services.adoc b/_versions/2.7/guides/redis-dev-services.adoc
deleted file mode 100644
index ff0bd7ae37c..00000000000
--- a/_versions/2.7/guides/redis-dev-services.adoc
+++ /dev/null
@@ -1,38 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Dev Services for Redis
-
-:extension-status: preview
-include::./attributes.adoc[]
-
-Quarkus supports a feature called Dev Services that allows you to create various datasources without any config.
-Practically, this means that if you have Docker running and have not configured `quarkus.redis.hosts`,
-Quarkus will automatically start a Redis container when running tests or dev mode, and automatically configure the connection.
-
-Available properties to customize the Redis Dev Service:
-
-include::{generated-dir}/config/quarkus-redis-client-config-group-dev-services-config.adoc[opts=optional, leveloffset=+1]
-
-When running the production version of the application, the Redis connection needs to be configured as normal,
-so if you want to include a production Redis configuration in your `application.properties` and continue to use Dev Services,
-we recommend that you use the `%prod.` profile to define your Redis settings.
-
-Dev Services for Redis relies on Docker to start the server.
-If your environment does not support Docker, you will need to start the server manually, or connect to an already running server.
-
-== Shared server
-
-Most of the time you need to share the server between applications.
-Dev Services for Redis implements a _service discovery_ mechanism for your multiple Quarkus applications running in _dev_ mode to share a single server.
-
-NOTE: Dev Services for Redis starts the container with the `quarkus-dev-service-redis` label which is used to identify the container.
-
-If you need multiple (shared) servers, you can configure the `quarkus.redis.devservices.service-name` attribute and indicate the server name.
-It looks for a container with the same value, or starts a new one if none can be found.
-The default service name is `redis`.
-
-Sharing is enabled by default in dev mode, but disabled in test mode.
-You can disable the sharing with `quarkus.redis.devservices.shared=false`.
diff --git a/_versions/2.7/guides/redis-reference.adoc b/_versions/2.7/guides/redis-reference.adoc
deleted file mode 100644
index 19415fd9753..00000000000
--- a/_versions/2.7/guides/redis-reference.adoc
+++ /dev/null
@@ -1,101 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Redis Reference Guide
-
-:extension-status: preview
-include::./attributes.adoc[]
-:numbered:
-:sectnums:
-
-[[custom_redis_commands]]
-== How to use custom Redis Commands
-
-The list of commands supported out of the box by both https://github.com/quarkusio/quarkus/blob/main/extensions/redis-client/runtime/src/main/java/io/quarkus/redis/client/RedisClient.java[`RedisClient`] and https://github.com/quarkusio/quarkus/blob/main/extensions/redis-client/runtime/src/main/java/io/quarkus/redis/client/reactive/ReactiveRedisClient.java[`ReactiveRedisClient`] depends on what is available in https://github.com/vert-x3/vertx-redis-client[`vertx-redis-client`], so there might be a case where you need a command which is not (yet) available via https://github.com/vert-x3/vertx-redis-client[`vertx-redis-client`].
-
-In such a case (if you don't want to wait for the new command to be supported in https://github.com/vert-x3/vertx-redis-client[`vertx-redis-client`]), you can implement it in either https://github.com/quarkusio/quarkus/blob/main/extensions/redis-client/runtime/src/main/java/io/quarkus/redis/client/RedisClient.java[`RedisClient`] or https://github.com/quarkusio/quarkus/blob/main/extensions/redis-client/runtime/src/main/java/io/quarkus/redis/client/reactive/ReactiveRedisClient.java[`ReactiveRedisClient`].
-In order to do so, you will need to:
-
-- Generate a new `Command` based on the Node.js code available in the https://github.com/vert-x3/vertx-redis-client[`vertx-redis-client`] repository:
-
-If you don't have a Redis service running locally, you can run Redis in a Docker container:
-[source,shell script]
-----
-docker run --name redis -p 7006:6379 -d redis
-----
-
-Next, from the https://github.com/vert-x3/vertx-redis-client[`vertx-redis-client`] root folder, execute:
-
-[source,shell script]
-----
-cd tools
-npm i
-npm start
-----
-
-The above sequence of commands should update the https://github.com/vert-x3/vertx-redis-client/blob/master/src/main/java/io/vertx/redis/client/Command.java[`Command.java`] file, so it includes all the possible commands supported by a particular Redis version.
-
-For example, the generated definition for `ZUNION` looks like:
-
-[source,java]
-----
-Command ZUNION = Command.create("zunion", -3, 0, 0, 0, false, true, true, false);
-----
-
-This definition is very important as we will have to use it in the service.
-Once we have this `Command` we can start to update the redis-client extension by:
-
-- Updating the https://github.com/quarkusio/quarkus/blob/main/extensions/redis-client/runtime/src/main/java/io/quarkus/redis/client/RedisClient.java[`RedisClient`] interface, i.e.:
-
-[source,java]
-----
-Response zunion(List<String> args);
-----
-
-- Updating the https://github.com/quarkusio/quarkus/blob/main/extensions/redis-client/runtime/src/main/java/io/quarkus/redis/client/runtime/RedisClientImpl.java[`RedisClientImpl`], i.e.:
-
-[source,java]
-----
-@Override
-public Response zunion(List<String> args) {
-    final io.vertx.mutiny.redis.client.Command ZUNION = Command.create("zunion", -3, 0, 0, 0, false, true, true, false);
-    final io.vertx.mutiny.redis.client.Request requestWithArgs = args.stream().reduce(
-        io.vertx.mutiny.redis.client.Request.cmd(ZUNION),
-        (request, s) -> request.arg(s),
-        (request, request2) -> request);
-
-    return await(mutinyRedis.send(requestWithArgs));
-}
-----
-
-- Updating the https://github.com/quarkusio/quarkus/blob/main/extensions/redis-client/runtime/src/main/java/io/quarkus/redis/client/reactive/ReactiveRedisClient.java[`ReactiveRedisClient`] interface, i.e.:
-
-[source,java]
-----
-Uni<Response> zunion(List<String> args);
-
-Response zunionAndAwait(List<String> args);
-----
-
-- Updating the https://github.com/quarkusio/quarkus/blob/main/extensions/redis-client/runtime/src/main/java/io/quarkus/redis/client/runtime/ReactiveRedisClientImpl.java[`ReactiveRedisClientImpl`], i.e.:
-
-[source,java]
-----
-@Override
-public Uni<Response> zunion(List<String> args) {
-    final Command ZUNION = Command.create("zunion", -3, 0, 0, 0, false, true, true, false);
-    final io.vertx.mutiny.redis.client.Request requestWithArgs = args.stream().reduce(
-        io.vertx.mutiny.redis.client.Request.cmd(ZUNION),
-        (request, s) -> request.arg(s),
-        (request, request2) -> request);
-
-    return mutinyRedis.send(requestWithArgs);
-}
-
-@Override
-public Response zunionAndAwait(List<String> args) {
-    return zunion(args).await().indefinitely();
-}
----
-
-- Please note that it's using the `MutinyRedis` class which does asynchronous calls to Redis.
diff --git a/_versions/2.7/guides/redis.adoc b/_versions/2.7/guides/redis.adoc
deleted file mode 100644
index 119ecb8deb8..00000000000
--- a/_versions/2.7/guides/redis.adoc
+++ /dev/null
@@ -1,629 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using the Redis Client
-:extension-status: preview
-include::./attributes.adoc[]
-
-This guide demonstrates how your Quarkus application can connect to a Redis server using the Redis Client extension.
-
-include::./status-include.adoc[]
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-* A running Redis server, or Docker Compose to start one
-
-== Architecture
-
-In this guide, we are going to expose a simple REST API to increment numbers by using the https://redis.io/commands/incrby[`INCRBY`] command.
-Along the way, we'll see how to use other Redis commands like `GET`, `SET`, `DEL` and `KEYS`.
-
-We'll be using the Quarkus Redis Client extension to connect to our Redis server. The extension is implemented on top of the https://vertx.io/docs/vertx-redis-client/java/[Vert.x Redis Client],
-providing an asynchronous and non-blocking way to connect to Redis.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `redis-quickstart` {quickstarts-tree-url}/redis-quickstart[directory].
-
-== Creating the Maven Project
-
-First, we need a new project.
Create a new project with the following command:
-
-:create-app-artifact-id: redis-quickstart
-:create-app-extensions: redis-client,resteasy-jackson,resteasy-mutiny
-include::includes/devtools/create-app.adoc[]
-
-This command generates a new project, importing the Redis extension.
-
-
-If you already have your Quarkus project configured, you can add the `redis-client` extension
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: redis-client
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-redis-client</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-redis-client")
-----
-
-== Starting the Redis server
-
-Then, we need to start a Redis instance (if you do not have one already) using the following command:
-
-[source, bash]
-----
-docker run --ulimit memlock=-1:-1 -it --rm=true --memory-swappiness=0 --name redis_quarkus_test -p 6379:6379 redis:5.0.6
-----
-
-[NOTE]
-====
-If you use xref:redis-dev-services.adoc[Dev Services for Redis], launching the container manually is not necessary!
-====
-
-== Configuring Redis properties
-
-Once we have the Redis server running, we need to configure the Redis connection properties.
-This is done in the `application.properties` configuration file. Edit it to the following content:
-
-[source,properties]
-----
-quarkus.redis.hosts=redis://localhost:6379 <1>
-----
-
-<1> Configure Redis hosts to connect to. Here we connect to the Redis server we started in the previous section.
-
-[NOTE]
-====
-This is needed if you are not using xref:redis-dev-services.adoc[Dev Services for Redis]
-====
-
-
-== Creating the Increment POJO
-
-Create the `src/main/java/org/acme/redis/Increment.java` file, with the following content: - -[source, java] ----- -package org.acme.redis; - -public class Increment { - public String key; // <1> - public int value; // <2> - - public Increment(String key, int value) { - this.key = key; - this.value = value; - } - - public Increment() { - } -} ----- -<1> The key that will be used as the Redis key -<2> The value held by the Redis key - - -== Creating the Increment Service - -We are going to create an `IncrementService` class which will play the role of a Redis client. -With this class, we'll be able to perform the `SET`, `GET` , `DELET`, `KEYS` and `INCRBY` Redis commands. - -Create the `src/main/java/org/acme/redis/IncrementService.java` file, with the following content: - -[source, java] ----- -package org.acme.redis; - -import io.quarkus.redis.client.RedisClient; -import io.quarkus.redis.client.reactive.ReactiveRedisClient; -import io.smallrye.mutiny.Uni; - -import io.vertx.mutiny.redis.client.Response; - -import java.util.ArrayList; -import java.util.Arrays; -import java.util.List; - -import javax.inject.Inject; -import javax.inject.Singleton; - -@Singleton -class IncrementService { - - @Inject - RedisClient redisClient; // <1> - - @Inject - ReactiveRedisClient reactiveRedisClient; // <2> - - Uni del(String key) { - return reactiveRedisClient.del(Arrays.asList(key)) - .map(response -> null); - } - - String get(String key) { - return redisClient.get(key).toString(); - } - - void set(String key, Integer value) { - redisClient.set(Arrays.asList(key, value.toString())); - } - - void increment(String key, Integer incrementBy) { - redisClient.incrby(key, incrementBy.toString()); - } - - Uni> keys() { - return reactiveRedisClient - .keys("*") - .map(response -> { - List result = new ArrayList<>(); - for (Response r : response) { - result.add(r.toString()); - } - return result; - }); - } -} ----- -<1> Inject the Redis synchronous client -<2> Inject the Reactive Redis 
client - -== Creating the Increment Resource - -Create the `src/main/java/org/acme/redis/IncrementResource.java` file, with the following content: - -[source, java] ----- -package org.acme.redis; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.PathParam; -import javax.ws.rs.PUT; -import javax.ws.rs.Path; -import javax.ws.rs.POST; -import javax.ws.rs.DELETE; -import java.util.List; - -import io.smallrye.mutiny.Uni; - -@Path("/increments") -public class IncrementResource { - - @Inject - IncrementService service; - - @GET - public Uni<List<String>> keys() { - return service.keys(); - } - - @POST - public Increment create(Increment increment) { - service.set(increment.key, increment.value); - return increment; - } - - @GET - @Path("/{key}") - public Increment get(@PathParam("key") String key) { - return new Increment(key, Integer.valueOf(service.get(key))); - } - - @PUT - @Path("/{key}") - public void increment(@PathParam("key") String key, Integer value) { - service.increment(key, value); - } - - @DELETE - @Path("/{key}") - public Uni<Void> delete(@PathParam("key") String key) { - return service.del(key); - } -} ----- - -== Creating the test class - -Create the `src/test/java/org/acme/redis/IncrementResourceTest.java` file with the following content: - -[source, java] ----- -package org.acme.redis; - -import static org.hamcrest.Matchers.is; - -import org.junit.jupiter.api.Test; - -import io.quarkus.test.junit.QuarkusTest; - -import static io.restassured.RestAssured.given; - -import io.restassured.http.ContentType; - -@QuarkusTest -public class IncrementResourceTest { - - @Test - public void testRedisOperations() { - // verify that we have nothing - given() - .accept(ContentType.JSON) - .when() - .get("/increments") - .then() - .statusCode(200) - .body("size()", is(0)); - - // create a first increment key with an initial value of 0 - given() - .contentType(ContentType.JSON) - .accept(ContentType.JSON) - .body("{\"key\":\"first-key\",\"value\":0}") - .when() - 
.post("/increments") - .then() - .statusCode(200) - .body("key", is("first-key")) - .body("value", is(0)); - - // create a second increment key with an initial value of 10 - given() - .contentType(ContentType.JSON) - .accept(ContentType.JSON) - .body("{\"key\":\"second-key\",\"value\":10}") - .when() - .post("/increments") - .then() - .statusCode(200) - .body("key", is("second-key")) - .body("value", is(10)); - - // increment first key by 1 - given() - .contentType(ContentType.JSON) - .body("1") - .when() - .put("/increments/first-key") - .then() - .statusCode(204); - - // verify that key has been incremented - given() - .accept(ContentType.JSON) - .when() - .get("/increments/first-key") - .then() - .statusCode(200) - .body("key", is("first-key")) - .body("value", is(1)); - - // increment second key by 1000 - given() - .contentType(ContentType.JSON) - .body("1000") - .when() - .put("/increments/second-key") - .then() - .statusCode(204); - - // verify that key has been incremented - given() - .accept(ContentType.JSON) - .when() - .get("/increments/second-key") - .then() - .statusCode(200) - .body("key", is("second-key")) - .body("value", is(1010)); - - // verify that we have two keys in registered - given() - .accept(ContentType.JSON) - .when() - .get("/increments") - .then() - .statusCode(200) - .body("size()", is(2)); - - // delete first key - given() - .accept(ContentType.JSON) - .when() - .delete("/increments/first-key") - .then() - .statusCode(204); - - // verify that we have one key left after deletion - given() - .accept(ContentType.JSON) - .when() - .get("/increments") - .then() - .statusCode(200) - .body("size()", is(1)); - - // delete second key - given() - .accept(ContentType.JSON) - .when() - .delete("/increments/second-key") - .then() - .statusCode(204); - - // verify that there is no key left - given() - .accept(ContentType.JSON) - .when() - .get("/increments") - .then() - .statusCode(200) - .body("size()", is(0)); - } -} ----- - -== Get it running - 
-If you followed the instructions, you should have the Redis server running. -Then, you just need to run the application using: - -include::includes/devtools/dev.adoc[] - -Open another terminal and run the `curl http://localhost:8080/increments` command. - -== Interacting with the application -As we have seen above, the API exposes five REST endpoints. -In this section we are going to see how to initialise an increment, list the current increments, -increment a value given its key, retrieve the current value of an increment, and finally delete -a key. - -=== Creating a new increment - -[source,bash] ----- -curl -X POST -H "Content-Type: application/json" -d '{"key":"first","value":10}' http://localhost:8080/increments <1> ----- -<1> We create the first increment, with the key `first` and an initial value of `10`. - -Running the above command should return the result below: - -[source, json] ------ -{ - "key": "first", - "value": 10 -} ------ - -=== Listing the current increment keys - -To see the list of current increment keys, run the following command: - -[source,bash] ----- -curl http://localhost:8080/increments ----- - -The above command should return `["first"]` indicating that we have only one increment thus far. - -=== Retrieving an increment - -To retrieve an increment using its key, run the following command: - -[source,bash] ----- -curl http://localhost:8080/increments/first <1> ----- -<1> Running this command should return the following result: - -[source, json] ----- -{ - "key": "first", - "value": 10 -} ----- - -=== Incrementing a value given its key - -To increment a value, run the following command: - -[source,bash] ----- -curl -X PUT -H "Content-Type: application/json" -d '27' http://localhost:8080/increments/first <1> ----- -<1> Increment the `first` value by 27. 
- -Now, running the command `curl http://localhost:8080/increments/first` should return the following result: - -[source, json] ----- -{ - "key": "first", - "value": 37 <1> -} ----- -<1> We see that the value of the `first` key is now `37` which is exactly the result of `10 + 27`, quick maths. - -=== Deleting a key - -Use the command below, to delete an increment given its key. - -[source,bash] ----- -curl -X DELETE http://localhost:8080/increments/first <1> ----- -<1> Delete the `first` increment. - -Now, running the command `curl http://localhost:8080/increments` should return an empty list `[]` - -== Packaging and running in JVM mode - -You can run the application as a conventional jar file. - -First, we will need to package it: - -include::includes/devtools/build.adoc[] - -NOTE: This command will start a Redis instance to execute the tests. Thus your Redis containers need to be stopped. - -Then run it: - -[source,bash] ----- -java -jar target/quarkus-app/quarkus-run.jar ----- - -== Running Native - -You can also create a native executable from this application without making any -source code changes. A native executable removes the dependency on the JVM: -everything needed to run the application on the target platform is included in -the executable, allowing the application to run with minimal resource overhead. - -Compiling a native executable takes a bit longer, as GraalVM performs additional -steps to remove unnecessary codepaths. Use the `native` profile to compile a -native executable: - -include::includes/devtools/build-native.adoc[] - -Once the build is finished, you can run the executable with: - -[source,bash] ----- -./target/redis-quickstart-1.0.0-SNAPSHOT-runner ----- - -== Connection Health Check - -If you are using the `quarkus-smallrye-health` extension, `quarkus-vertx-redis` will automatically add a readiness health check -to validate the connection to the Redis server. 
- -So when you access the `/q/health/ready` endpoint of your application, you will have information about the connection validation status. - -This behavior can be disabled by setting the `quarkus.redis.health.enabled` property to `false` in your `application.properties`. - -[[multiple-clients-configuration]] -== Multiple Redis Clients - -The Redis extension allows you to configure multiple clients. -Using several clients works the same way as having a single client. - -[source,properties] ----- -quarkus.redis.hosts=redis://localhost:6379 -quarkus.redis.second.hosts=redis://localhost:6379 ----- - -Notice there's an extra bit in the key (the `second` segment). -The syntax is as follows: `quarkus.redis.[optional name.][redis configuration property]`. -If the name is omitted, it configures the default client. - -== Named Redis Client Injection - -When using multiple clients, you can select the client to inject using the `io.quarkus.redis.client.RedisClientName` qualifier. -Using the above properties to configure two different clients, you can inject each of them as follows: - -[source,java,indent=0] ----- -@Inject -RedisClient defaultRedisClient; - -@Inject -@RedisClientName("second") -RedisClient redisClient2; - -@Inject -@RedisClientName("second") -ReactiveRedisClient reactiveClient2; ----- - -== Providing Redis Hosts Programmatically - -A `RedisHostsProvider` supplies Redis hosts programmatically. This allows configuration, such as the Redis connection password, to come from other sources. - -[NOTE] -==== -This is useful as it removes the need to store sensitive data in `application.properties`. 
-==== - -[source,java,indent=0] ----- -import java.net.URI; -import java.util.Collections; -import java.util.Set; - -import javax.enterprise.context.ApplicationScoped; -import javax.inject.Named; - -import io.quarkus.redis.client.RedisHostsProvider; - -@ApplicationScoped -@Named("hosts-provider") // the name of the host provider -public class ExampleRedisHostProvider implements RedisHostsProvider { - @Override - public Set<URI> getHosts() { - // do stuff to get the host, e.g. fetch the password from a secure store - String host = "redis://localhost:6379/3"; - return Collections.singleton(URI.create(host)); - } -} ----- - -The host provider can be used to configure the Redis client as shown below: -[source,properties,indent=0] ----- -quarkus.redis.hosts-provider-name=hosts-provider ----- - -== Creating Clients Programmatically - -The `RedisClient` and `ReactiveRedisClient` provide factory methods to create clients programmatically. -The clients to be created are configured using the usual <<config-reference,Redis configuration>>. - -[NOTE] -==== -This is useful to create a client dynamically in a non-CDI bean, e.g. a xref:hibernate-orm-panache.adoc[Panache entity], -or to create a separate client when running in pub/sub mode. This mode requires two distinct connections, -because once a connection enters subscriber mode it can no longer run any commands -other than those leaving that mode. -==== - -The code snippet below shows how we can create dynamic clients using the configurations in <<multiple-clients-configuration>>. -[source,java,indent=0] ----- -// creating default redis client -RedisClient defaultRedisClient = RedisClient.createClient(); - -// creating named redis client whose configuration name is "second" -RedisClient namedRedisClient = RedisClient.createClient("second"); - -// creating a default reactive redis client -ReactiveRedisClient defaultReactiveRedisClient = ReactiveRedisClient.createClient(); - -// creating a named reactive redis client whose configuration name is "second" -ReactiveRedisClient namedReactiveRedisClient = ReactiveRedisClient.createClient("second"); ----- - -Please see also <<config-reference>>. 
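To make the non-CDI use case from the note above concrete, here is a minimal sketch of a helper class built on the programmatically created client. The class and method names are illustrative; only `RedisClient.createClient()` and the `set`/`get` command signatures shown earlier in this guide are assumed.

```java
package org.acme.redis;

import java.util.Arrays;

import io.quarkus.redis.client.RedisClient;

// Illustrative helper that works outside CDI, e.g. from a Panache entity.
public class CounterStore {

    // Client built from the default configuration (quarkus.redis.hosts).
    private final RedisClient client = RedisClient.createClient();

    public void save(String key, int value) {
        // SET key value
        client.set(Arrays.asList(key, Integer.toString(value)));
    }

    public int load(String key) {
        // GET key
        return Integer.parseInt(client.get(key).toString());
    }
}
```

Because the client is created lazily from the same configuration as the injected one, such a helper behaves identically to a CDI-injected `RedisClient`.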
- -[[config-reference]] -== Configuration Reference - -include::{generated-dir}/config/quarkus-redis-client.adoc[opts=optional, leveloffset=+1] diff --git a/_versions/2.7/guides/rest-client-multipart.adoc b/_versions/2.7/guides/rest-client-multipart.adoc deleted file mode 100644 index a6319e784d9..00000000000 --- a/_versions/2.7/guides/rest-client-multipart.adoc +++ /dev/null @@ -1,302 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using the REST Client with Multipart - -include::./attributes.adoc[] - -RESTEasy has rich support for the `multipart/*` and `multipart/form-data` mime types. The multipart mime format is used to pass lists of content bodies. Multiple content bodies are embedded in one message. `multipart/form-data` is often found in web application HTML Form documents and is generally used to upload files. The form-data format is the same as other multipart formats, except that each inlined piece of content has a name associated with it. - - -This guide explains how to use the RESTEasy REST Client with Multipart in order to interact with REST APIs -requiring `multipart/form-data` content-type with very little effort. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `rest-client-multipart-quickstart` {quickstarts-tree-url}/rest-client-multipart-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. 
Create a new project with the following command: - - - -:create-app-artifact-id: rest-client-multipart-quickstart -:create-app-extensions: rest-client,resteasy,resteasy-multipart -include::includes/devtools/create-app.adoc[] - -This command generates the Maven project with a REST endpoint and imports the `rest-client` and `resteasy` extensions. -It also adds the `resteasy-multipart` extension to support `multipart/form-data` requests. - -== Setting up the model - -In this guide we will be demonstrating how to invoke a REST service accepting `multipart/form-data` input. -We are assuming the payload is well-known before the request is sent, so we can model it as a POJO. - -[NOTE] -==== -If the payload is unknown, you can also use the RESTEasy custom API instead. If that's the case, see the RESTEasy Multipart Providers link at the end of the guide. -==== - -Our first order of business is to set up the model we will be using to define the `multipart/form-data` payload, in the form of a `MultipartBody` POJO. - -Create a `src/main/java/org/acme/rest/client/multipart/MultipartBody.java` file and set the following content: - -[source,java] ----- -package org.acme.rest.client.multipart; - -import java.io.InputStream; - -import javax.ws.rs.FormParam; -import javax.ws.rs.core.MediaType; - -import org.jboss.resteasy.annotations.providers.multipart.PartType; - -public class MultipartBody { - - @FormParam("file") - @PartType(MediaType.APPLICATION_OCTET_STREAM) - public InputStream file; - - @FormParam("fileName") - @PartType(MediaType.TEXT_PLAIN) - public String fileName; -} ----- - -The purpose of the annotations in the code above is the following: - -* `@FormParam` is a standard JAX-RS annotation used to define a form parameter contained within a request entity body -* `@PartType` is a RESTEasy-specific annotation required when a client performs a multipart request and defines the content type for the part. 
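To make the `multipart/form-data` wire format concrete, the following plain-Java sketch hand-assembles the payload that the client will produce from a `MultipartBody`. The boundary string and class name are illustrative; in practice the REST Client generates the boundary itself.

```java
// Illustrative only: renders the multipart/form-data body for a file part
// ("file", application/octet-stream) and a text part ("fileName", text/plain).
public class MultipartSketch {

    static String render(String boundary, String fileName, String fileContent) {
        return "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n"
                + fileContent + "\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"fileName\"\r\n"
                + "Content-Type: text/plain\r\n\r\n"
                + fileName + "\r\n"
                // the closing boundary carries a trailing "--"
                + "--" + boundary + "--\r\n";
    }

    public static void main(String[] args) {
        System.out.println(render("boundary123", "greeting.txt", "HELLO WORLD"));
    }
}
```

Each `@FormParam` field becomes one part, delimited by the boundary, with its `@PartType` emitted as the part's `Content-Type` header.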
- -== Create the interface - -Using the RESTEasy REST Client is as simple as creating an interface using the proper JAX-RS and MicroProfile annotations. In our case the interface should be created at `src/main/java/org/acme/rest/client/multipart/MultipartService.java` and have the following content: - -[source, java] ----- -package org.acme.rest.client.multipart; - -import javax.ws.rs.Consumes; -import javax.ws.rs.POST; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; -import org.jboss.resteasy.annotations.providers.multipart.MultipartForm; - -@Path("/echo") -@RegisterRestClient -public interface MultipartService { - - @POST - @Consumes(MediaType.MULTIPART_FORM_DATA) - @Produces(MediaType.TEXT_PLAIN) - String sendMultipartData(@MultipartForm MultipartBody data); - -} ----- - -The `sendMultipartData` method gives our code the ability to POST a `multipart/form-data` request to our Echo service (running in the same server for demo purposes). -Because in this demo we know the exact structure of the `multipart/form-data` payload, we can map it to the model class created in the previous section using the `@org.jboss.resteasy.annotations.providers.multipart.MultipartForm` annotation. - -The client will handle all the networking and marshalling, leaving our code clean of such technical details. - -The purpose of the annotations in the code above is the following: - -* `@RegisterRestClient` allows Quarkus to know that this interface is meant to be available for -CDI injection as a REST Client -* `@Path` and `@POST` are the standard JAX-RS annotations used to define how to access the service -* `@MultipartForm` defines the parameter as a value object for incoming/outgoing request/responses of the multipart/form-data mime type. 
-* `@Consumes` defines the expected content-type consumed by this request (parameters) -* `@Produces` defines the expected content-type produced by this request (return type) - -[NOTE] -==== -While `@Consumes` and `@Produces` are optional as auto-negotiation is supported, -it is strongly recommended to annotate your endpoints with them to define the expected content-types precisely. - -This allows narrowing down the number of JAX-RS providers (which can be seen as converters) included in the native executable. -==== - -== Create the configuration - -In order to determine the base URL to which REST calls will be made, the REST Client uses configuration from `application.properties`. -The name of the property needs to follow a certain convention which is best displayed in the following code: - -[source,properties] ----- -# Your configuration properties -quarkus.rest-client."org.acme.rest.client.multipart.MultipartService".url=http://localhost:8080/ ----- - -Having this configuration means that all requests performed using `org.acme.rest.client.multipart.MultipartService` will use `http://localhost:8080/` as the base URL. - -Note that `org.acme.rest.client.multipart.MultipartService` _must_ match the fully qualified name of the `MultipartService` interface we created in the previous section. 
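The same configuration root accepts further per-client settings besides `url`. For instance, timeouts can be tuned there too (the property names follow the Quarkus REST Client configuration; the millisecond values below are illustrative):

```properties
# Connection establishment timeout, in milliseconds
quarkus.rest-client."org.acme.rest.client.multipart.MultipartService".connect-timeout=5000
# Response read timeout, in milliseconds
quarkus.rest-client."org.acme.rest.client.multipart.MultipartService".read-timeout=30000
```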
- -== Create the JAX-RS resource - -Create the `src/main/java/org/acme/rest/client/multipart/MultipartClientResource.java` file with the following content: - -[source,java] ----- -package org.acme.rest.client.multipart; - -import java.io.ByteArrayInputStream; -import java.nio.charset.StandardCharsets; - -import javax.inject.Inject; -import javax.ws.rs.POST; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import org.eclipse.microprofile.rest.client.inject.RestClient; - -@Path("/client") -public class MultipartClientResource { - - @Inject - @RestClient - MultipartService service; - - @POST - @Path("/multipart") - @Produces(MediaType.TEXT_PLAIN) - public String sendFile() throws Exception { - MultipartBody body = new MultipartBody(); - body.fileName = "greeting.txt"; - body.file = new ByteArrayInputStream("HELLO WORLD".getBytes(StandardCharsets.UTF_8)); - return service.sendMultipartData(body); - } -} ----- - -Note that in addition to the standard CDI `@Inject` annotation, we also need to use the MicroProfile `@RestClient` annotation to inject `MultipartService`. - - -== Creating the server - -For demo purposes, let's create a simple Echo endpoint that will act as the server part. - -Create the directory `src/main/java/org/acme/rest/client/multipart/server` and include an `EchoService.java` file with the following content: - -[source,java] ----- -package org.acme.rest.client.multipart.server; - -import javax.ws.rs.Consumes; -import javax.ws.rs.POST; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/echo") -public class EchoService { - - @POST - @Consumes(MediaType.MULTIPART_FORM_DATA) - @Produces(MediaType.TEXT_PLAIN) - public String echo(String requestBody) throws Exception { - return requestBody; - } -} ----- - -This simply returns the request body, which is useful for testing purposes. 
- -== Update the test - -We also need to update the functional test to reflect the changes made to the endpoint. -Edit the `src/test/java/org/acme/rest/client/multipart/MultipartClientResourceTest.java` file to: - -[source, java] ----- -package org.acme.rest.client.multipart; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.containsString; - -@QuarkusTest -public class MultipartClientResourceTest { - - @Test - public void testMultipartDataIsSent() { - given() - .when().post("/client/multipart") - .then() - .statusCode(200) - .body( containsString("Content-Disposition: form-data; name=\"file\""), - containsString("HELLO WORLD"), - containsString("Content-Disposition: form-data; name=\"fileName\""), - containsString("greeting.txt")); - } - -} ----- - -The code above uses link:http://rest-assured.io/[REST Assured] to assert that the returned content from the echo service contains multipart elements - -Because the test runs in a different port, we also need to include an `application.properties` in our `src/test/resources` with the following content: - -[source,properties] ----- -# Your configuration properties -quarkus.rest-client."org.acme.rest.client.multipart.MultipartService".url=http://localhost:8081/ ----- - -== Package and run the application - -Run the application with: - -include::includes/devtools/dev.adoc[] - -In a terminal, run `curl -X POST http://localhost:8080/client/multipart` - -You should see an output similar to: - -[source,text] ----- ---89d288bd-960f-460c-b266-64c5b4d170fa -Content-Disposition: form-data; name="fileName" -Content-Type: text/plain - -greeting.txt ---89d288bd-960f-460c-b266-64c5b4d170fa -Content-Disposition: form-data; name="file" -Content-Type: application/octet-stream - -HELLO WORLD ---89d288bd-960f-460c-b266-64c5b4d170fa-- ----- - -As usual, the application can be packaged using: - 
-include::includes/devtools/build.adoc[] - -And executed with `java -jar target/quarkus-app/quarkus-run.jar`. - -You can also generate the native executable with: - -include::includes/devtools/build-native.adoc[] - -== Further reading - -* link:https://docs.jboss.org/resteasy/docs/4.5.6.Final/userguide/html/Multipart.html[RESTEasy Multipart Provider] -* link:https://download.eclipse.org/microprofile/microprofile-rest-client-1.4.1/microprofile-rest-client-1.4.1.html[MicroProfile Rest Client specification] diff --git a/_versions/2.7/guides/rest-client-reactive.adoc b/_versions/2.7/guides/rest-client-reactive.adoc deleted file mode 100644 index a5e25d8a45d..00000000000 --- a/_versions/2.7/guides/rest-client-reactive.adoc +++ /dev/null @@ -1,1004 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using the REST Client Reactive - -include::./attributes.adoc[] - -This guide explains how to use the REST Client Reactive in order to interact with REST APIs. -REST Client Reactive is a non-blocking counterpart of the RESTEasy REST Client. - -If your application uses a client and exposes REST endpoints, please use xref:resteasy-reactive.adoc[RESTEasy Reactive] -for the server part. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `rest-client-reactive-quickstart` {quickstarts-tree-url}/rest-client-reactive-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. 
Create a new project with the following command: - -:create-app-artifact-id: rest-client-reactive-quickstart -:create-app-extensions: resteasy-reactive-jackson,rest-client-reactive-jackson -include::includes/devtools/create-app.adoc[] - -This command generates the Maven project with a REST endpoint and imports: - -* the `resteasy-reactive-jackson` extension for the REST server support. Use `resteasy-reactive` instead if you do not wish to use Jackson; -* the `rest-client-reactive-jackson` extension for the REST client support. Use `rest-client-reactive` instead if you do not wish to use Jackson. - -If you already have your Quarkus project configured, you can add the `rest-client-reactive-jackson` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: rest-client-reactive-jackson -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-rest-client-reactive-jackson</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-rest-client-reactive-jackson") ----- - -== Setting up the model - -In this guide we will be demonstrating how to consume part of the REST API supplied by the link:https://stage.code.quarkus.io[stage.code.quarkus.io] service. -Our first order of business is to set up the model we will be using, in the form of an `Extension` POJO. 
- -Create a `src/main/java/org/acme/rest/client/Extension.java` file and set the following content: - -[source,java] ----- -package org.acme.rest.client; - -import java.util.List; - -public class Extension { - - public String id; - public String name; - public String shortName; - public List<String> keywords; - -} ----- - -The model above is only a subset of the fields provided by the service, but it suffices for the purposes of this guide. - -== Create the interface - -Using the REST Client Reactive is as simple as creating an interface using the proper JAX-RS and MicroProfile annotations. In our case the interface should be created at `src/main/java/org/acme/rest/client/ExtensionsService.java` and have the following content: - -[source, java] ----- -package org.acme.rest.client; - -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.QueryParam; -import java.util.Set; - -@Path("/extensions") -@RegisterRestClient -public interface ExtensionsService { - - @GET - Set<Extension> getById(@QueryParam("id") String id); -} ----- - -The `getById` method gives our code the ability to get an extension by id from the Code Quarkus API. The client will handle all the networking and marshalling, leaving our code clean of such technical details. - -The purpose of the annotations in the code above is the following: - -* `@RegisterRestClient` allows Quarkus to know that this interface is meant to be available for -CDI injection as a REST Client -* `@Path`, `@GET` and `@QueryParam` are the standard JAX-RS annotations used to define how to access the service - -[NOTE] -==== -When the `quarkus-rest-client-reactive-jackson` extension is installed, Quarkus will use the `application/json` media type -by default for most return values, unless the media type is explicitly set via `@Produces` or `@Consumes` annotations. 
- -If you don't rely on the JSON default, it is strongly recommended to annotate your endpoints with the `@Produces` and `@Consumes` annotations to define precisely the expected content-types. -This allows narrowing down the number of JAX-RS providers (which can be seen as converters) included in the native executable. -==== - -[WARNING] -==== -The `getById` method above is a blocking call. It should not be invoked on the event loop. -The <<async-support>> section describes how to make non-blocking calls. -==== - -=== Path Parameters - -If the GET request requires path parameters you can leverage the `@PathParam("parameter-name")` annotation instead of -(or in addition to) the `@QueryParam`. Path and query parameters can be combined, as required, as illustrated in the example below. - -[source, java] ----- -package org.acme.rest.client; - -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.PathParam; -import javax.ws.rs.QueryParam; -import java.util.Set; - -@Path("/extensions") -@RegisterRestClient -public interface ExtensionsService { - - @GET - @Path("/stream/{stream}") - Set<Extension> getByStream(@PathParam("stream") String stream, @QueryParam("id") String id); -} ----- - - -== Create the configuration - -In order to determine the base URL to which REST calls will be made, the REST Client uses configuration from `application.properties`. -The name of the property needs to follow a certain convention which is best displayed in the following code: - -[source,properties] ----- -# Your configuration properties -quarkus.rest-client."org.acme.rest.client.ExtensionsService".url=https://stage.code.quarkus.io/api # // <1> ----- - -<1> Having this configuration means that all requests performed using `org.acme.rest.client.ExtensionsService` will use `https://stage.code.quarkus.io/api` as the base URL. 
-Using the configuration above, calling the `getById` method of `ExtensionsService` with a value of `io.quarkus:quarkus-rest-client-reactive` would result in an HTTP GET request being made to `https://stage.code.quarkus.io/api/extensions?id=io.quarkus:quarkus-rest-client-reactive`. - -Note that `org.acme.rest.client.ExtensionsService` _must_ match the fully qualified name of the `ExtensionsService` interface we created in the previous section. - -To facilitate the configuration, you can use the `@RegisterRestClient` `configKey` property, which allows using a configuration root other than the fully qualified name of your interface. - -[source, java] ----- - -@RegisterRestClient(configKey="extensions-api") -public interface ExtensionsService { - [...] -} ----- - -[source,properties] ----- -# Your configuration properties -quarkus.rest-client.extensions-api.url=https://stage.code.quarkus.io/api -quarkus.rest-client.extensions-api.scope=javax.inject.Singleton ----- - -== Create the JAX-RS resource - -Create the `src/main/java/org/acme/rest/client/ExtensionsResource.java` file with the following content: - - -[source,java] ----- -package org.acme.rest.client; - -import io.smallrye.common.annotation.Blocking; -import org.eclipse.microprofile.rest.client.inject.RestClient; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import java.util.Set; - -@Path("/extension") -public class ExtensionsResource { - - @RestClient // <1> - ExtensionsService extensionsService; - - - @GET - @Path("/id/{id}") - @Blocking // <2> - public Set<Extension> id(String id) { - return extensionsService.getById(id); - } -} ----- - -There are two interesting parts in this listing: - -<1> the client stub is injected with the `@RestClient` annotation instead of the usual CDI `@Inject` -<2> the call we are making with the client is blocking, hence we need the `@Blocking` annotation on the REST endpoint - -== Programmatic client creation with RestClientBuilder - -Instead of annotating the client with 
`@RegisterRestClient`, and injecting -a client with `@RestClient`, you can also create a REST Client programmatically. -You do that with `RestClientBuilder`. - -With this approach the client interface could look as follows: - -[source,java] ----- -package org.acme.rest.client; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.QueryParam; -import java.util.Set; - -@Path("/extensions") -public interface ExtensionsService { - - @GET - Set<Extension> getById(@QueryParam("id") String id); -} ----- - -And the service as follows: -[source,java] ----- -package org.acme.rest.client; - -import org.eclipse.microprofile.rest.client.RestClientBuilder; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import java.net.URI; -import java.util.Set; - -@Path("/extension") -public class ExtensionsResource { - - private final ExtensionsService extensionsService; - - public ExtensionsResource() { - extensionsService = RestClientBuilder.newBuilder() - .baseUri(URI.create("https://stage.code.quarkus.io/api")) - .build(ExtensionsService.class); - } - - @GET - @Path("/id/{id}") - public Set<Extension> id(String id) { - return extensionsService.getById(id); - } -} ----- - -== Update the test - -Next, we need to update the functional test to reflect the changes made to the endpoint. 
-Edit the `src/test/java/org/acme/rest/client/ExtensionsResourceTest.java` file and change the content of the test to:
-
-
-[source, java]
----
-package org.acme.rest.client;
-
-import io.quarkus.test.junit.QuarkusTest;
-
-import org.junit.jupiter.api.Test;
-
-import static io.restassured.RestAssured.given;
-import static org.hamcrest.CoreMatchers.hasItem;
-import static org.hamcrest.CoreMatchers.is;
-import static org.hamcrest.Matchers.greaterThan;
-
-@QuarkusTest
-public class ExtensionsResourceTest {
-
-    @Test
-    public void testExtensionsIdEndpoint() {
-        given()
-            .when().get("/extension/id/io.quarkus:quarkus-rest-client-reactive")
-            .then()
-            .statusCode(200)
-            .body("$.size()", is(1),
-                "[0].id", is("io.quarkus:quarkus-rest-client-reactive"),
-                "[0].name", is("REST Client Reactive"),
-                "[0].keywords.size()", greaterThan(1),
-                "[0].keywords", hasItem("rest-client"));
-    }
-}
----
-
-The code above uses link:http://rest-assured.io/[REST Assured]'s link:https://github.com/rest-assured/rest-assured/wiki/GettingStarted#jsonpath[json-path] capabilities.
-
-
-[#async-support]
-== Async Support
-
-To get the full power of the reactive nature of the client, you can use the non-blocking flavor of the REST Client Reactive extension,
-which comes with support for `CompletionStage` and `Uni`.
-Let's see it in action by adding a `getByIdAsync` method in our `ExtensionsService` REST interface.
The code should look like:
-
-[source,java]
----
-package org.acme.rest.client;
-
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.QueryParam;
-import java.util.Set;
-import java.util.concurrent.CompletionStage;
-
-@Path("/extensions")
-@RegisterRestClient(configKey = "extensions-api")
-public interface ExtensionsService {
-
-    @GET
-    Set<Extension> getById(@QueryParam("id") String id);
-
-    @GET
-    CompletionStage<Set<Extension>> getByIdAsync(@QueryParam("id") String id);
-}
----
-
-Open the `src/main/java/org/acme/rest/client/ExtensionsResource.java` file and update it with the following content:
-
-[source,java]
----
-package org.acme.rest.client;
-
-import io.smallrye.common.annotation.Blocking;
-import org.eclipse.microprofile.rest.client.inject.RestClient;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import java.util.Set;
-import java.util.concurrent.CompletionStage;
-
-@Path("/extension")
-public class ExtensionsResource {
-
-    @RestClient
-    ExtensionsService extensionsService;
-
-
-    @GET
-    @Path("/id/{id}")
-    @Blocking
-    public Set<Extension> id(String id) {
-        return extensionsService.getById(id);
-    }
-
-    @GET
-    @Path("/id-async/{id}")
-    public CompletionStage<Set<Extension>> idAsync(String id) {
-        return extensionsService.getByIdAsync(id);
-    }
-}
----
-
-Please note that since the invocation is now non-blocking, we no longer need the `@Blocking` annotation on the endpoint.
-This means that the `idAsync` method will be invoked on the event loop, i.e. it will not be offloaded to a worker pool thread,
-which reduces hardware resource utilization.
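As a general, framework-free illustration of this pattern (plain JDK types only; `fetchExtension` is a hypothetical stand-in for a remote call, not a Quarkus API), returning a `CompletionStage` lets the caller attach a continuation instead of blocking a thread while waiting for the result:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class AsyncSketch {

    // Hypothetical non-blocking remote call: the stage completes later,
    // on some other thread, without blocking the caller.
    static CompletionStage<String> fetchExtension(String id) {
        return CompletableFuture.supplyAsync(() -> "extension:" + id);
    }

    // Endpoint-style method: composes a continuation instead of waiting,
    // so the calling thread (the event loop) is free immediately.
    static CompletionStage<String> idAsync(String id) {
        return fetchExtension(id).thenApply(String::toUpperCase);
    }

    public static void main(String[] args) {
        // join() is used here only to print the demo result;
        // a real event loop would never block like this.
        System.out.println(idAsync("quarkus").toCompletableFuture().join());
    }
}
```

The same composition style is what the reactive REST client does internally when the endpoint returns a `CompletionStage`.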
-
-
-To test asynchronous methods, add the test method below in `ExtensionsResourceTest`:
-[source,java]
----
-@Test
-public void testExtensionIdAsyncEndpoint() {
-    given()
-        .when().get("/extension/id-async/io.quarkus:quarkus-rest-client-reactive")
-        .then()
-        .statusCode(200)
-        .body("$.size()", is(1),
-            "[0].id", is("io.quarkus:quarkus-rest-client-reactive"),
-            "[0].name", is("REST Client Reactive"),
-            "[0].keywords.size()", greaterThan(1),
-            "[0].keywords", hasItem("rest-client"));
-}
----
-
-The `Uni` version is very similar:
-
-[source, java]
----
-package org.acme.rest.client;
-
-import io.smallrye.mutiny.Uni;
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.QueryParam;
-import java.util.Set;
-import java.util.concurrent.CompletionStage;
-
-@Path("/extensions")
-@RegisterRestClient(configKey = "extensions-api")
-public interface ExtensionsService {
-
-    // ...
-
-    @GET
-    Uni<Set<Extension>> getByIdAsUni(@QueryParam("id") String id);
-}
----
-
-The `ExtensionsResource` becomes:
-
-[source,java]
----
-package org.acme.rest.client;
-
-import io.smallrye.common.annotation.Blocking;
-import io.smallrye.mutiny.Uni;
-import org.eclipse.microprofile.rest.client.inject.RestClient;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import java.util.Set;
-import java.util.concurrent.CompletionStage;
-
-@Path("/extension")
-public class ExtensionsResource {
-
-    @RestClient
-    ExtensionsService extensionsService;
-
-
-    // ...
-
-    @GET
-    @Path("/id-uni/{id}")
-    public Uni<Set<Extension>> idUni(String id) {
-        return extensionsService.getByIdAsUni(id);
-    }
-}
----
-
-[TIP]
-.Mutiny
-====
-The previous snippet uses Mutiny reactive types.
-If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
-====
-
-When returning a `Uni`, every _subscription_ invokes the remote service.
-It means you can re-send the request by re-subscribing to the `Uni`, or use a `retry` as follows:
-
-[source, java]
----
-
-@RestClient ExtensionsService extensionsService;
-
-// ...
-
-extensionsService.getByIdAsUni(id)
-    .onFailure().retry().atMost(10);
----
-
-If you use a `CompletionStage`, you would need to call the service's method again to retry.
-This difference comes from the lazy nature of Mutiny and its subscription protocol.
-More details about this can be found in https://smallrye.io/smallrye-mutiny/#_uni_and_multi[the Mutiny documentation].
-
-== Custom headers support
-
-There are a few ways in which you can specify custom headers for your REST calls:
-
-- by registering a `ClientHeadersFactory` or a `ReactiveClientHeadersFactory` with the `@RegisterClientHeaders` annotation
-- by specifying the value of the header with `@ClientHeaderParam`
-- by specifying the value of the header with `@HeaderParam`
-
-The code below demonstrates how to use each of these techniques:
-
-[source, java]
----
-package org.acme.rest.client;
-
-import org.eclipse.microprofile.rest.client.annotation.ClientHeaderParam;
-import org.eclipse.microprofile.rest.client.annotation.RegisterClientHeaders;
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.HeaderParam;
-import javax.ws.rs.Path;
-import javax.ws.rs.QueryParam;
-import java.util.Set;
-
-@Path("/extensions")
-@RegisterRestClient
-@RegisterClientHeaders(RequestUUIDHeaderFactory.class) // <1>
-@ClientHeaderParam(name = "my-header", value = "constant-header-value") // <2>
-@ClientHeaderParam(name = "computed-header", value = "{org.acme.rest.client.Util.computeHeader}") // <3>
-public interface ExtensionsService {
-
-    @GET
-    @ClientHeaderParam(name = "header-from-properties", value = "${header.value}") // <4>
-    Set<Extension> getById(@QueryParam("id") String id,
@HeaderParam("jaxrs-style-header") String headerValue); // <5>
-}
----
-
-<1> There can be only one `ClientHeadersFactory` per class. With it, you can not only add custom headers, but you can also transform existing ones. See the `RequestUUIDHeaderFactory` class below for an example of the factory.
-<2> `@ClientHeaderParam` can be used on the client interface and on methods. It can specify a constant header value...
-<3> ... or the name of a method that should compute the value of the header. It can either be a static method or a default method in this interface
-<4> ... as well as a value from your application's configuration
-<5> ... or be supplied as a normal JAX-RS `@HeaderParam` annotated argument
-
-[NOTE]
-====
-When using Kotlin, if default methods are going to be leveraged, then the Kotlin compiler needs to be configured to use Java's default interface capabilities.
-See link:https://kotlinlang.org/docs/java-to-kotlin-interop.html#default-methods-in-interfaces[this page] for more details.
-====
-
-A `ClientHeadersFactory` can look as follows:
-
-[source, java]
----
-package org.acme.rest.client;
-
-import org.eclipse.microprofile.rest.client.ext.ClientHeadersFactory;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.ws.rs.core.MultivaluedHashMap;
-import javax.ws.rs.core.MultivaluedMap;
-import java.util.UUID;
-
-@ApplicationScoped
-public class RequestUUIDHeaderFactory implements ClientHeadersFactory {
-
-    @Override
-    public MultivaluedMap<String, String> update(MultivaluedMap<String, String> incomingHeaders, MultivaluedMap<String, String> clientOutgoingHeaders) {
-        MultivaluedMap<String, String> result = new MultivaluedHashMap<>();
-        result.add("X-request-uuid", UUID.randomUUID().toString());
-        return result;
-    }
-}
----
-
-As you can see in the example above, you can make your `ClientHeadersFactory` implementation a CDI bean by
-annotating it with a scope-defining annotation, such as `@Singleton` or `@ApplicationScoped`.
-
-To specify a value for `${header.value}`, simply put the following in your `application.properties`:
-
-[source,properties]
----
-header.value=value of the header
----
-
-There is also a reactive flavor of `ClientHeadersFactory` that allows performing blocking operations when computing header values. For example:
-
-[source, java]
----
-package org.acme.rest.client;
-
-import io.quarkus.rest.client.reactive.ReactiveClientHeadersFactory;
-import io.smallrye.mutiny.Uni;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.inject.Inject;
-import javax.ws.rs.core.MultivaluedHashMap;
-import javax.ws.rs.core.MultivaluedMap;
-
-@ApplicationScoped
-public class GetTokenReactiveClientHeadersFactory extends ReactiveClientHeadersFactory {
-
-    @Inject
-    Service service;
-
-    @Override
-    public Uni<MultivaluedMap<String, String>> getHeaders(
-            MultivaluedMap<String, String> incomingHeaders,
-            MultivaluedMap<String, String> clientOutgoingHeaders) {
-        return Uni.createFrom().item(() -> {
-            MultivaluedMap<String, String> newHeaders = new MultivaluedHashMap<>();
-            // perform a blocking call to obtain the header value
-            newHeaders.add(HEADER_NAME, service.getToken());
-            return newHeaders;
-        });
-    }
-}
----
-
-=== Default header factory
-
-The `@RegisterClientHeaders` annotation can also be used without any custom factory specified. In that case the `DefaultClientHeadersFactoryImpl` factory will be used.
-If you make a REST client call from a REST resource, this factory will propagate all the headers listed in the `org.eclipse.microprofile.rest.client.propagateHeaders` configuration property from the resource request to the client request. Individual header names are comma-separated.
-[source, java]
----
-@Path("/extensions")
-@RegisterRestClient
-@RegisterClientHeaders
-public interface ExtensionsService {
-
-    @GET
-    Set<Extension> getById(@QueryParam("id") String id);
-
-    @GET
-    CompletionStage<Set<Extension>> getByIdAsync(@QueryParam("id") String id);
-}
----
-
-[source,properties]
----
-org.eclipse.microprofile.rest.client.propagateHeaders=Authorization,Proxy-Authorization
----
-
-== Exception handling
-
-The MicroProfile REST Client specification introduces the `org.eclipse.microprofile.rest.client.ext.ResponseExceptionMapper` whose purpose is to convert an HTTP response to an exception.
-
-A simple example of implementing such a `ResponseExceptionMapper` for the `ExtensionsService` discussed above could be:
-
-[source, java]
----
-public class MyResponseExceptionMapper implements ResponseExceptionMapper<RuntimeException> {
-
-    @Override
-    public RuntimeException toThrowable(Response response) {
-        if (response.getStatus() == 500) {
-            throw new RuntimeException("The remote service responded with HTTP 500");
-        }
-        return null;
-    }
-}
----
-
-`ResponseExceptionMapper` also defines the `getPriority` method, which is used to determine the order in which `ResponseExceptionMapper` implementations are called (implementations with a lower value of `getPriority` are invoked first).
-If `toThrowable` returns an exception, then that exception will be thrown. If `null` is returned, the next implementation of `ResponseExceptionMapper` in the chain will be called (if there is any).
-
-The class as written above would not automatically be used by any REST Client. To make it available to every REST Client of the application, the class needs to be annotated with `@Provider` (as long as `quarkus.rest-client-reactive.provider-autodiscovery` is not set to `false`).
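The priority and chaining semantics described above can be sketched with plain JDK types. The `SimpleMapper` interface below is purely illustrative and not part of MicroProfile; it only mirrors the documented behavior (sort by ascending priority, first non-null result wins):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Objects;

public class MapperChain {

    // Illustrative stand-in for ResponseExceptionMapper: maps a status code
    // to an exception, or returns null to let the next mapper in line try.
    interface SimpleMapper {
        RuntimeException toThrowable(int status);
        default int getPriority() { return 5000; }
    }

    // Mappers are consulted in ascending priority order; the first one
    // returning a non-null exception short-circuits the chain.
    static RuntimeException map(List<SimpleMapper> mappers, int status) {
        return mappers.stream()
                .sorted(Comparator.comparingInt(SimpleMapper::getPriority))
                .map(m -> m.toThrowable(status))
                .filter(Objects::nonNull)
                .findFirst()
                .orElse(null);
    }
}
```

A mapper with priority `1` would therefore be consulted before one left at the default priority, even if the latter was registered first.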
-Alternatively, if the exception handling class should only apply to specific REST Client interfaces, you can either annotate the interfaces with `@RegisterProvider(MyResponseExceptionMapper.class)`, or register it via configuration with the `providers` property of the corresponding `quarkus.rest-client` configuration group.
-
-=== Using @ClientExceptionMapper
-
-A simpler way to convert HTTP responses with a status code of 400 or above into exceptions is to use the `@ClientExceptionMapper` annotation.
-
-For the `ExtensionsService` REST Client interface defined above, an example use of `@ClientExceptionMapper` would be:
-
-[source, java]
----
-@Path("/extensions")
-@RegisterRestClient
-public interface ExtensionsService {
-
-    @GET
-    Set<Extension> getById(@QueryParam("id") String id);
-
-    @GET
-    CompletionStage<Set<Extension>> getByIdAsync(@QueryParam("id") String id);
-
-    @ClientExceptionMapper
-    static RuntimeException toException(Response response) {
-        if (response.getStatus() == 500) {
-            return new RuntimeException("The remote service responded with HTTP 500");
-        }
-        return null;
-    }
-}
----
-
-Naturally this handling is per REST Client. `@ClientExceptionMapper` uses the default priority if the `priority` attribute is not set, and the normal rules of invoking all handlers in turn apply.
-
-== Multipart Form support
-
-REST Client Reactive supports multipart messages.
-
-=== Sending Multipart messages
-
-REST Client Reactive allows sending data as multipart forms. This way you can, for example,
-send files efficiently.
-
-To send data as a multipart form, you need to create a class that encapsulates all the fields
-to be sent, e.g.
-
-[source, java]
----
-public class FormDto {
-    @FormParam("file")
-    @PartType(MediaType.APPLICATION_OCTET_STREAM)
-    public File file;
-
-    @FormParam("otherField")
-    @PartType(MediaType.TEXT_PLAIN)
-    public String textProperty;
-}
----
-
-The method that sends a form needs to specify multipart form data as the consumed media type, e.g.
-[source, java]
----
-    @POST
-    @Consumes(MediaType.MULTIPART_FORM_DATA)
-    @Produces(MediaType.TEXT_PLAIN)
-    @Path("/binary")
-    String sendMultipart(@MultipartForm FormDto data);
----
-
-Fields specified as `File`, `Path`, `byte[]` or `Buffer` are sent as files: as binary files for
-`@PartType(MediaType.APPLICATION_OCTET_STREAM)`, and as text files for other content types.
-Other fields are sent as form attributes.
-
-There are a few modes in which the form data can be encoded. By default,
-REST Client Reactive uses RFC 1738.
-You can override this by specifying the mode either on the client level,
-by setting the `io.quarkus.rest.client.multipart-post-encoder-mode` `RestClientBuilder` property
-to the selected value of `HttpPostRequestEncoder.EncoderMode`, or
-by specifying `quarkus.rest-client.multipart-post-encoder-mode` in your
-`application.properties`. Please note that the latter works only for
-clients created with the `@RegisterRestClient` annotation.
-All the available modes are described in the link:https://netty.io/4.1/api/io/netty/handler/codec/http/multipart/HttpPostRequestEncoder.EncoderMode.html[Netty documentation].
-
-=== Receiving Multipart Messages
-REST Client Reactive also supports receiving multipart messages.
-As with sending, to parse a multipart response, you need to create a class that describes the response data, e.g.
-
-[source,java]
----
-public class FormDto {
-    @RestForm // <1>
-    @PartType(MediaType.APPLICATION_OCTET_STREAM)
-    public File file;
-
-    @FormParam("otherField") // <2>
-    @PartType(MediaType.TEXT_PLAIN)
-    public String textProperty;
-}
----
-<1> uses the shorthand `@RestForm` annotation to mark a field as a part of a multipart form
-<2> the standard `@FormParam` can also be used. It allows overriding the name of the multipart part.
-
-Then, create an interface method that corresponds to the call and make it return the `FormDto`:
-[source,java]
----
-    @GET
-    @Produces(MediaType.MULTIPART_FORM_DATA)
-    @Path("/get-file")
-    FormDto getFile();
----
-
-At the moment, multipart response support is subject to the following limitations:
-
-- files sent in multipart responses can only be parsed to `File`, `Path` and `FileDownload`
-- each field of the response type has to be annotated with `@PartType` - fields without this annotation are ignored
-
-REST Client Reactive needs to know the classes used as multipart return types upfront. If you have an interface method that produces `multipart/form-data`, the return type will be discovered automatically. However, if you intend to use the `ClientBuilder` API to parse a response as multipart, you need to annotate your DTO class with `@MultipartForm`.
-
-WARNING: The files you download are not automatically removed and can take up a lot of disk space. Consider removing the files when you are done working with them.
-
-== Proxy support
-REST Client Reactive supports sending requests through a proxy.
-It honors the JVM proxy settings but also allows specifying both:
-
-* global client proxy settings, with `quarkus.rest-client.proxy-address`, `quarkus.rest-client.proxy-user`, `quarkus.rest-client.proxy-password`, `quarkus.rest-client.non-proxy-hosts`
-
-* per-client proxy settings, with `quarkus.rest-client.<client-prefix>.proxy-address`, etc. These are applied only to clients injected with CDI, that is the ones created with `@RegisterRestClient`
-
-If `proxy-address` is set at the client level, the client uses its specific proxy settings. No proxy settings are propagated from the global configuration or JVM properties.
-
-If `proxy-address` is not set for the client but is set at the global level, the client uses the global settings.
-Otherwise, the client uses the JVM settings.
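These precedence rules amount to a first-non-null lookup across the three configuration levels. A framework-free sketch (plain JDK; the method and parameter names are illustrative, not Quarkus API):

```java
import java.util.Optional;

public class ProxyResolution {

    // Returns the first configured proxy address, mirroring the documented
    // precedence: per-client config, then global config, then JVM settings.
    static Optional<String> resolveProxy(String clientLevel, String globalLevel, String jvmLevel) {
        return Optional.ofNullable(clientLevel)
                .or(() -> Optional.ofNullable(globalLevel))
                .or(() -> Optional.ofNullable(jvmLevel));
    }
}
```

So a client with its own `proxy-address` ignores both the global configuration and the JVM properties entirely, rather than merging with them.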
-
-
-An example configuration for setting a proxy:
-
-[source,properties]
----
-# global proxy configuration is used for all clients
-quarkus.rest-client.proxy-address=localhost:8182
-quarkus.rest-client.proxy-user=
-quarkus.rest-client.proxy-password=
-quarkus.rest-client.non-proxy-hosts=example.com
-
-# per-client configuration overrides the global settings for a specific client
-quarkus.rest-client.my-client.proxy-address=localhost:8183
-quarkus.rest-client.my-client.proxy-user=
-quarkus.rest-client.my-client.proxy-password=
-quarkus.rest-client.my-client.url=...
----
-
-NOTE: The MicroProfile REST Client specification does not allow setting proxy credentials. In order to specify the proxy user and proxy password programmatically, you need to cast your `RestClientBuilder` to `RestClientBuilderImpl`.
-
-== Package and run the application
-
-Run the application with:
-
-include::includes/devtools/dev.adoc[]
-
-Open your browser to http://localhost:8080/extension/id/io.quarkus:quarkus-rest-client-reactive.
-
-You should see a JSON object containing some basic information about this extension.
-
-As usual, the application can be packaged using:
-
-include::includes/devtools/build.adoc[]
-
-And executed with `java -jar target/quarkus-app/quarkus-run.jar`.
-
-You can also generate the native executable with:
-
-include::includes/devtools/build-native.adoc[]
-
-== Logging traffic
-REST Client Reactive can log the requests it sends and the responses it receives.
-To enable logging, add the `quarkus.rest-client.logging.scope` property to your `application.properties` and set it to:
-
-* `request-response` to log the request and response contents, or
-* `all` to also enable low-level logging of the underlying libraries.
-
-As HTTP messages can have large bodies, the number of body characters logged is limited. The default limit is `100`; you can change it by setting `quarkus.rest-client.logging.body-limit`.
-
-NOTE: REST Client Reactive logs the traffic at the `DEBUG` level and does not alter logger properties. You may need to adjust your logger configuration to use this feature.
-
-An example logging configuration:
-
-[source,properties]
----
-quarkus.rest-client.logging.scope=request-response
-quarkus.rest-client.logging.body-limit=50
-
-quarkus.log.category."org.jboss.resteasy.reactive.client.logging".level=DEBUG
----
-
-== Mocking the client for tests
-If you use a client injected with the `@RestClient` annotation, you can easily mock it for tests.
-You can do that with Mockito's `@InjectMock` or with `QuarkusMock`.
-
-This section shows how to replace your client with a mock. If you would like to get a more in-depth understanding of how mocking works in Quarkus, see the blog post on https://quarkus.io/blog/mocking/[Mocking CDI beans].
-
-NOTE: Mocking does not work when using `@NativeImageTest` or `@QuarkusIntegrationTest`.
-
-Let's assume you have the following client:
-[source,java]
----
-package io.quarkus.it.rest.client.main;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-
-
-@Path("/")
-@RegisterRestClient
-public interface Client {
-    @GET
-    String get();
-}
----
-
-
-=== Mocking with InjectMock
-The simplest approach to mocking a client for tests is to use Mockito and `@InjectMock`.
-
-First, add the following dependency to your application:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-junit5-mockito</artifactId>
-    <scope>test</scope>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-testImplementation("io.quarkus:quarkus-junit5-mockito")
----
-
-Then, in your test you can simply use `@InjectMock` to create and inject a mock:
-
-[source,java]
----
-import static org.assertj.core.api.Assertions.assertThat;
-import static org.mockito.Mockito.when;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-import io.quarkus.test.junit.QuarkusTest;
-import io.quarkus.test.junit.mockito.InjectMock;
-
-@QuarkusTest
-public class InjectMockTest {
-
-    @InjectMock
-    Client mock;
-
-    @BeforeEach
-    public void setUp() {
-        when(mock.get()).thenReturn("MockAnswer");
-    }
-
-    @Test
-    void doTest() {
-        // ...
-    }
-}
----
-
-=== Mocking with QuarkusMock
-If Mockito doesn't meet your needs, you can create a mock programmatically using `QuarkusMock`, e.g.:
-
-[source,java]
----
-import org.eclipse.microprofile.rest.client.inject.RestClient;
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-import io.quarkus.test.junit.QuarkusMock;
-import io.quarkus.test.junit.QuarkusTest;
-
-@QuarkusTest
-public class QuarkusMockTest {
-
-    @BeforeEach
-    public void setUp() {
-        Client customMock = new Client() { //<1>
-            @Override
-            public String get() {
-                return "MockAnswer";
-            }
-        };
-        QuarkusMock.installMockForType(customMock, Client.class, RestClient.LITERAL); // <2>
-    }
-    @Test
-    void doTest() {
-        // ...
-    }
-}
----
-
-<1> here we use a manually created implementation of the client interface to replace the actual `Client`
-<2> note that `RestClient.LITERAL` has to be passed as the last argument of the `installMockForType` method
-
-
-== Using a Mock HTTP Server for tests
-In some cases you may want to mock the remote endpoint - the HTTP server - instead of mocking the client itself.
-This may be especially useful for native tests, or for programmatically created clients.
-
-You can easily mock an HTTP server with Wiremock.
-The xref:rest-client.adoc#using-a-mock-http-server-for-tests[Wiremock section of the Quarkus - Using the REST Client guide]
-describes how to set it up in detail.
-
-== Known limitations
-While the REST Client Reactive extension aims to be a drop-in replacement for the REST Client extension, there are some differences
-and limitations:
-
-- the default scope of the client for the new extension is `@ApplicationScoped`, while `quarkus-rest-client` defaults to `@Dependent`.
-To change this behavior, set the `quarkus.rest-client-reactive.scope` property to the fully qualified scope name.
-- it is not possible to set `HostnameVerifier` or `SSLContext`
-- a few things that don't make sense for a non-blocking implementation, such as setting the `ExecutorService`, don't work
-
-
-
-== Further reading
-
- * link:https://download.eclipse.org/microprofile/microprofile-rest-client-2.0/microprofile-rest-client-spec-2.0.html[MicroProfile Rest Client specification]
diff --git a/_versions/2.7/guides/rest-client.adoc b/_versions/2.7/guides/rest-client.adoc
deleted file mode 100644
index 2c28658e7d5..00000000000
--- a/_versions/2.7/guides/rest-client.adoc
+++ /dev/null
@@ -1,731 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using the REST Client
-
-include::./attributes.adoc[]
-
-This guide explains how to use the RESTEasy REST Client to interact with REST APIs
-with very little effort.
-
-TIP: there is another guide if you need to implement server-side xref:rest-json.adoc[JSON REST APIs].
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `rest-client-quickstart` {quickstarts-tree-url}/rest-client-quickstart[directory].
-
-== Creating the Maven project
-
-First, we need a new project.
Create a new project with the following command:
-
-:create-app-artifact-id: rest-client-quickstart
-:create-app-extensions: resteasy,resteasy-jackson,rest-client,rest-client-jackson
-include::includes/devtools/create-app.adoc[]
-
-This command generates the Maven project with a REST endpoint and imports:
-
-* the `resteasy` and `resteasy-jackson` extensions for the REST server support;
-* the `rest-client` and `rest-client-jackson` extensions for the REST client support.
-
-If you already have your Quarkus project configured, you can add the `rest-client` and the `rest-client-jackson` extensions
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: rest-client,rest-client-jackson
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your `pom.xml`:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-rest-client</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-rest-client-jackson</artifactId>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-implementation("io.quarkus:quarkus-rest-client")
-implementation("io.quarkus:quarkus-rest-client-jackson")
----
-
-== Setting up the model
-
-In this guide we will be demonstrating how to consume part of the REST API supplied by the link:https://stage.code.quarkus.io[stage.code.quarkus.io] service.
-Our first order of business is to set up the model we will be using, in the form of an `Extension` POJO.
-
-Create a `src/main/java/org/acme/rest/client/Extension.java` file and set the following content:
-
-[source,java]
----
-package org.acme.rest.client;
-
-import java.util.List;
-
-public class Extension {
-
-    public String id;
-    public String name;
-    public String shortName;
-    public List<String> keywords;
-
-}
----
-
-The model above is only a subset of the fields provided by the service, but it suffices for the purposes of this guide.
-
-== Create the interface
-
-Using the RESTEasy REST Client is as simple as creating an interface using the proper JAX-RS and MicroProfile annotations. In our case the interface should be created at `src/main/java/org/acme/rest/client/ExtensionsService.java` and have the following content:
-
-[source, java]
----
-package org.acme.rest.client;
-
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-import org.jboss.resteasy.annotations.jaxrs.QueryParam;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import java.util.Set;
-
-@Path("/extensions")
-@RegisterRestClient
-public interface ExtensionsService {
-
-    @GET
-    Set<Extension> getById(@QueryParam String id);
-}
----
-
-The `getById` method gives our code the ability to get an extension by id from the Code Quarkus API. The client handles all the networking and marshalling, leaving our code clean of such technical details.
-
-The purpose of the annotations in the code above is the following:
-
-* `@RegisterRestClient` allows Quarkus to know that this interface is meant to be available for
-CDI injection as a REST Client
-* `@Path`, `@GET` and `@QueryParam` are the standard JAX-RS annotations used to define how to access the service
-
-[NOTE]
-====
-When a JSON extension is installed, such as `quarkus-rest-client-jackson` or `quarkus-rest-client-jsonb`, Quarkus will use the `application/json` media type
-by default for most return values, unless the media type is explicitly set via
-`@Produces` or `@Consumes` annotations (there are some exceptions for well-known types, such as `String` and `File`, which default to `text/plain` and `application/octet-stream`
-respectively).
-
-If you don't want JSON by default you can set `quarkus.resteasy-json.default-json=false` and the default will change back to being auto-negotiated. If you set this
-you will need to add `@Produces(MediaType.APPLICATION_JSON)` and `@Consumes(MediaType.APPLICATION_JSON)` to your endpoints in order to use JSON.
-If you don't rely on the JSON default, it is strongly recommended to annotate your endpoints with the `@Produces` and `@Consumes` annotations to define precisely the expected content types.
-This allows narrowing down the number of JAX-RS providers (which can be seen as converters) included in the native executable.
-====
-
-=== Path Parameters
-
-If the GET request requires path parameters, you can leverage the `@PathParam("parameter-name")` annotation instead of (or in addition to) `@QueryParam`. Path and query parameters can be combined, as required, as illustrated in the mock example below.
-
-[source, java]
----
-package org.acme.rest.client;
-
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-import org.jboss.resteasy.annotations.jaxrs.PathParam;
-import org.jboss.resteasy.annotations.jaxrs.QueryParam;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import java.util.Set;
-
-@Path("/extensions")
-@RegisterRestClient
-public interface ExtensionsService {
-
-    @GET
-    @Path("/stream/{stream}")
-    Set<Extension> getByStream(@PathParam String stream, @QueryParam("id") String id);
-}
----
-
-
-== Create the configuration
-
-In order to determine the base URL to which REST calls will be made, the REST Client uses configuration from `application.properties`.
-The name of the property needs to follow a certain convention which is best displayed in the following code:
-
-[source,properties]
----
-# Your configuration properties
-quarkus.rest-client."org.acme.rest.client.ExtensionsService".url=https://stage.code.quarkus.io/api # <1>
-quarkus.rest-client."org.acme.rest.client.ExtensionsService".scope=javax.inject.Singleton # <2>
----
-
-<1> Having this configuration means that all requests performed using `ExtensionsService` will use `https://stage.code.quarkus.io` as the base URL.
-Using the configuration above, calling the `getById` method of `ExtensionsService` with a value of `io.quarkus:quarkus-rest-client` would result in an HTTP GET request being made to `https://stage.code.quarkus.io/api/extensions?id=io.quarkus:quarkus-rest-client`.
-<2> Having this configuration means that the default scope of `ExtensionsService` will be `@Singleton`. Supported scope values are `@Singleton`, `@Dependent`, `@ApplicationScoped` and `@RequestScoped`. The default scope is `@Dependent`.
-The default scope can also be defined on the interface.
-
-Note that `org.acme.rest.client.ExtensionsService` _must_ match the fully qualified name of the `ExtensionsService` interface we created in the previous section.
-
-[NOTE]
-====
-The standard MicroProfile Rest Client properties notation can also be used to configure the client:
-
-[source,properties]
----
-org.acme.rest.client.ExtensionsService/mp-rest/url=https://stage.code.quarkus.io/api
-org.acme.rest.client.ExtensionsService/mp-rest/scope=javax.inject.Singleton
----
-
-If a property is specified via both the Quarkus notation and the MicroProfile notation, the Quarkus notation takes precedence.
-====
-
-
-To facilitate the configuration, you can use the `@RegisterRestClient` `configKey` property, which allows you to use a configuration root other than the fully qualified name of your interface.
-
-[source, java]
----
-
-@RegisterRestClient(configKey="extensions-api")
-public interface ExtensionsService {
-    [...]
-}
-----
-
-[source,properties]
-----
-# Your configuration properties
-quarkus.rest-client.extensions-api.url=https://stage.code.quarkus.io/api
-quarkus.rest-client.extensions-api.scope=javax.inject.Singleton
-----
-
-=== Disabling Hostname Verification
-
-To disable the SSL hostname verification for a specific REST client, add the following property to your configuration:
-
-[source,properties]
-----
-quarkus.rest-client.extensions-api.hostname-verifier=io.quarkus.restclient.NoopHostnameVerifier
-----
-
-=== Disabling SSL verifications
-
-To disable all SSL verifications, add the following property to your configuration:
-
-[source,properties]
-----
-quarkus.tls.trust-all=true
-----
-[WARNING]
-====
-This setting should not be used in production as it will disable any kind of SSL verification.
-====
-
-== Create the JAX-RS resource
-
-Create the `src/main/java/org/acme/rest/client/ExtensionsResource.java` file with the following content:
-
-[source,java]
-----
-package org.acme.rest.client;
-
-import org.eclipse.microprofile.rest.client.inject.RestClient;
-import org.jboss.resteasy.annotations.jaxrs.PathParam;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import java.util.Set;
-
-@Path("/extension")
-public class ExtensionsResource {
-
-    @Inject
-    @RestClient
-    ExtensionsService extensionsService;
-
-    @GET
-    @Path("/id/{id}")
-    public Set<Extension> id(@PathParam String id) {
-        return extensionsService.getById(id);
-    }
-}
-----
-
-Note that in addition to the standard CDI `@Inject` annotation, we also need to use the MicroProfile `@RestClient` annotation to inject `ExtensionsService`.
-
-== Update the test
-
-We also need to update the functional test to reflect the changes made to the endpoint.
-Edit the `src/test/java/org/acme/rest/client/ExtensionsResourceTest.java` file and change the content of the `testExtensionIdEndpoint` method to: - - -[source, java] ----- -package org.acme.rest.client; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.hasItem; -import static org.hamcrest.CoreMatchers.is; -import static org.hamcrest.Matchers.greaterThan; - -import org.acme.rest.client.resources.WireMockExtensionsResource; -import org.junit.jupiter.api.Test; - -import io.quarkus.test.common.QuarkusTestResource; -import io.quarkus.test.junit.QuarkusTest; - -@QuarkusTest -@QuarkusTestResource(WireMockExtensionsResource.class) -public class ExtensionsResourceTest { - - @Test - public void testExtensionsIdEndpoint() { - given() - .when().get("/extension/id/io.quarkus:quarkus-rest-client") - .then() - .statusCode(200) - .body("$.size()", is(1), - "[0].id", is("io.quarkus:quarkus-rest-client"), - "[0].name", is("REST Client"), - "[0].keywords.size()", greaterThan(1), - "[0].keywords", hasItem("rest-client")); - } -} ----- - -The code above uses link:http://rest-assured.io/[REST Assured]'s link:https://github.com/rest-assured/rest-assured/wiki/GettingStarted#jsonpath[json-path] capabilities. - - -== Async Support - -The rest client supports asynchronous rest calls. -Async support comes in 2 flavors: you can return a `CompletionStage` or a `Uni` (requires the `quarkus-rest-client-mutiny` extension). -Let's see it in action by adding a `getByIdAsync` method in our `ExtensionsService` REST interface. 
The code should look like:
-
-[source, java]
-----
-package org.acme.rest.client;
-
-import java.util.Set;
-import java.util.concurrent.CompletionStage;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-import org.jboss.resteasy.annotations.jaxrs.QueryParam;
-
-@Path("/extensions")
-@RegisterRestClient
-public interface ExtensionsService {
-
-    @GET
-    Set<Extension> getById(@QueryParam String id);
-
-    @GET
-    CompletionStage<Set<Extension>> getByIdAsync(@QueryParam String id);
-
-}
-----
-
-Open the `src/main/java/org/acme/rest/client/ExtensionsResource.java` file and update it with the following content:
-
-[source,java]
-----
-package org.acme.rest.client;
-
-import java.util.Set;
-import java.util.concurrent.CompletionStage;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.rest.client.inject.RestClient;
-import org.jboss.resteasy.annotations.jaxrs.PathParam;
-
-@Path("/extension")
-public class ExtensionsResource {
-
-    @Inject
-    @RestClient
-    ExtensionsService extensionsService;
-
-    @GET
-    @Path("/id/{id}")
-    public Set<Extension> id(@PathParam String id) {
-        return extensionsService.getById(id);
-    }
-
-    @GET
-    @Path("/id-async/{id}")
-    public CompletionStage<Set<Extension>> idAsync(@PathParam String id) {
-        return extensionsService.getByIdAsync(id);
-    }
-
-}
-----
-
-To test asynchronous methods, add the test method below in `ExtensionsResourceTest`:
-[source,java]
-----
-@Test
-public void testExtensionIdAsyncEndpoint() {
-    given()
-        .when().get("/extension/id-async/io.quarkus:quarkus-rest-client")
-        .then()
-        .statusCode(200)
-        .body("$.size()", is(1),
-            "[0].id", is("io.quarkus:quarkus-rest-client"),
-            "[0].name", is("REST Client"),
-            "[0].keywords.size()", greaterThan(1),
-            "[0].keywords", hasItem("rest-client"));
-}
-----
-
-The `Uni` version is very similar:
-
-[source, java]
-----
-package org.acme.rest.client;
-
-import java.util.Set;
-import
java.util.concurrent.CompletionStage;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-import org.jboss.resteasy.annotations.jaxrs.QueryParam;
-
-import io.smallrye.mutiny.Uni;
-
-@Path("/extensions")
-@RegisterRestClient
-public interface ExtensionsService {
-
-    // ...
-
-    @GET
-    Uni<Set<Extension>> getByIdAsUni(@QueryParam String id);
-}
-----
-
-The `ExtensionsResource` becomes:
-
-[source,java]
-----
-package org.acme.rest.client;
-
-import java.util.Set;
-import java.util.concurrent.CompletionStage;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.rest.client.inject.RestClient;
-import org.jboss.resteasy.annotations.jaxrs.PathParam;
-
-import io.smallrye.mutiny.Uni;
-
-@Path("/extension")
-public class ExtensionsResource {
-
-    @Inject
-    @RestClient
-    ExtensionsService extensionsService;
-
-
-    // ...
-
-    @GET
-    @Path("/id-uni/{id}")
-    public Uni<Set<Extension>> idMutiny(@PathParam String id) {
-        return extensionsService.getByIdAsUni(id);
-    }
-}
-----
-
-[TIP]
-.Mutiny
-====
-The previous snippet uses Mutiny reactive types.
-If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
-====
-
-When returning a `Uni`, every _subscription_ invokes the remote service.
-It means you can re-send the request by re-subscribing on the `Uni`, or use a `retry` as follows:
-
-[source, java]
-----
-
-@Inject @RestClient ExtensionsService extensionsService;
-
-// ...
-
-extensionsService.getByIdAsUni(id)
-    .onFailure().retry().atMost(10);
-----
-
-If you use a `CompletionStage`, you would need to call the service's method again to retry.
-This difference comes from the laziness aspect of Mutiny and its subscription protocol.
-More details about this can be found in https://smallrye.io/smallrye-mutiny/#_uni_and_multi[the Mutiny documentation].
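The eager-versus-lazy distinction can be sketched in plain Java, outside Quarkus: a `CompletableFuture` represents work that has already started, while a lazy `Supplier`-based pipeline, like a `Uni`, re-runs the work on each subscription. A minimal illustration under that assumption, with a hypothetical `fetch` standing in for the remote call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class LazyVsEager {
    static final AtomicInteger calls = new AtomicInteger();

    // Hypothetical stand-in for a remote call that fails twice, then succeeds.
    static String fetch() {
        if (calls.incrementAndGet() < 3) {
            throw new IllegalStateException("transient failure");
        }
        return "ok";
    }

    // Eager: the work starts as soon as the future is created; retrying
    // means invoking fetch() again yourself.
    static CompletableFuture<String> eager() {
        return CompletableFuture.supplyAsync(LazyVsEager::fetch);
    }

    // Lazy: each "subscription" invokes the supplier anew, so a retry loop
    // can simply re-subscribe -- conceptually what onFailure().retry() does.
    static String retry(Supplier<String> lazy, int atMost) {
        RuntimeException last = null;
        for (int i = 0; i < atMost; i++) {
            try {
                return lazy.get();
            } catch (RuntimeException e) {
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        System.out.println(retry(LazyVsEager::fetch, 10)); // prints "ok" after two failures
    }
}
```

This is only a conceptual sketch of the subscription semantics; the real `Uni` pipeline is asynchronous and non-blocking.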
-
-== Custom headers support
-
-The MicroProfile REST client allows amending request headers by registering a `ClientHeadersFactory` with the `@RegisterClientHeaders` annotation.
-
-Let's see it in action by adding a `@RegisterClientHeaders` annotation pointing to a `RequestUUIDHeaderFactory` class in our `ExtensionsService` REST interface:
-
-[source, java]
-----
-package org.acme.rest.client;
-
-import java.util.Set;
-import java.util.concurrent.CompletionStage;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.rest.client.annotation.RegisterClientHeaders;
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-import org.jboss.resteasy.annotations.jaxrs.QueryParam;
-
-import io.smallrye.mutiny.Uni;
-
-@Path("/extensions")
-@RegisterRestClient
-@RegisterClientHeaders(RequestUUIDHeaderFactory.class)
-public interface ExtensionsService {
-
-    @GET
-    Set<Extension> getById(@QueryParam String id);
-
-    @GET
-    CompletionStage<Set<Extension>> getByIdAsync(@QueryParam String id);
-
-    @GET
-    Uni<Set<Extension>> getByIdAsUni(@QueryParam String id);
-}
-----
-
-And the `RequestUUIDHeaderFactory` would look like:
-
-[source, java]
-----
-package org.acme.rest.client;
-
-import org.eclipse.microprofile.rest.client.ext.ClientHeadersFactory;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.ws.rs.core.MultivaluedHashMap;
-import javax.ws.rs.core.MultivaluedMap;
-import java.util.UUID;
-
-@ApplicationScoped
-public class RequestUUIDHeaderFactory implements ClientHeadersFactory {
-
-    @Override
-    public MultivaluedMap<String, String> update(MultivaluedMap<String, String> incomingHeaders, MultivaluedMap<String, String> clientOutgoingHeaders) {
-        MultivaluedMap<String, String> result = new MultivaluedHashMap<>();
-        result.add("X-request-uuid", UUID.randomUUID().toString());
-        return result;
-    }
-}
-----
-
-As you see in the example above, you can make your `ClientHeadersFactory` implementation a CDI bean by
-annotating it with a scope-defining annotation, such as `@Singleton`, `@ApplicationScoped`, etc.
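Since the factory receives both the incoming request headers and the headers the client is about to send, it can propagate values as well as add new ones. The merging logic can be sketched with plain `Map`s (the real contract uses `MultivaluedMap`; the header names here are only examples):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class HeaderMerge {
    // Sketch of the update() contract: start from the client's outgoing
    // headers, propagate a selected incoming header, and add a generated
    // correlation id. Not the MicroProfile API itself.
    static Map<String, String> update(Map<String, String> incoming,
                                      Map<String, String> outgoing) {
        Map<String, String> result = new HashMap<>(outgoing);
        if (incoming.containsKey("Authorization")) {
            result.put("Authorization", incoming.get("Authorization"));
        }
        result.put("X-request-uuid", UUID.randomUUID().toString());
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> in = Map.of("Authorization", "Bearer abc");
        System.out.println(update(in, new HashMap<>()).get("Authorization")); // Bearer abc
    }
}
```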
-
-
-=== Default header factory
-
-You can also use the `@RegisterClientHeaders` annotation without any custom factory specified. In that case the `DefaultClientHeadersFactoryImpl` factory will be used and all headers listed in the `org.eclipse.microprofile.rest.client.propagateHeaders` configuration property will be amended. Individual header names are comma-separated.
-[source, java]
-----
-@Path("/extensions")
-@RegisterRestClient
-@RegisterClientHeaders
-public interface ExtensionsService {
-
-    @GET
-    Set<Extension> getById(@QueryParam String id);
-
-    @GET
-    CompletionStage<Set<Extension>> getByIdAsync(@QueryParam String id);
-
-    @GET
-    Uni<Set<Extension>> getByIdAsUni(@QueryParam String id);
-}
-
-----
-
-[source,properties]
-----
-org.eclipse.microprofile.rest.client.propagateHeaders=Authorization,Proxy-Authorization
-----
-
-== Package and run the application
-
-Run the application with:
-
-include::includes/devtools/dev.adoc[]
-
-Open your browser to http://localhost:8080/extension/id/io.quarkus:quarkus-rest-client.
-
-You should see a JSON object containing some basic information about the REST Client extension.
-
-As usual, the application can be packaged using:
-
-include::includes/devtools/build.adoc[]
-
-And executed with `java -jar target/quarkus-app/quarkus-run.jar`.
-
-You can also generate the native executable with:
-
-include::includes/devtools/build-native.adoc[]
-
-== REST Client and RESTEasy interactions
-
-In Quarkus, the REST Client extension and xref:rest-json.adoc[the RESTEasy extension] share the same infrastructure.
-One important consequence of this consideration is that they share the same list of providers (in the JAX-RS meaning of the word).
-
-For instance, if you declare a `WriterInterceptor`, it will by default intercept both the server calls and the client calls,
-which might not be the desired behavior.
-
-However, you can change this default behavior and constrain a provider to:
-
-* only consider *client* calls by adding the `@ConstrainedTo(RuntimeType.CLIENT)` annotation to your provider;
-* only consider *server* calls by adding the `@ConstrainedTo(RuntimeType.SERVER)` annotation to your provider.
-
-[#using-a-mock-http-server-for-tests]
-== Using a Mock HTTP Server for tests
-
-Setting up a mock HTTP server, against which tests are run, is a common testing pattern.
-Examples of such servers are link:http://wiremock.org/[Wiremock] and link:https://docs.hoverfly.io/projects/hoverfly-java/en/latest/index.html[Hoverfly].
-In this section we'll demonstrate how Wiremock can be leveraged for testing the `ExtensionsService` which was developed above.
-
-First of all, Wiremock needs to be added as a test dependency. For a Maven project that would happen like so:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>com.github.tomakehurst</groupId>
-    <artifactId>wiremock-jre8</artifactId>
-    <scope>test</scope>
-    <version>${wiremock.version}</version> <1>
-</dependency>
-----
-<1> Use a proper Wiremock version. All available versions can be found link:https://search.maven.org/artifact/com.github.tomakehurst/wiremock-jre8[here].
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-testImplementation("com.github.tomakehurst:wiremock-jre8:$wiremockVersion") <1>
-----
-<1> Use a proper Wiremock version. All available versions can be found link:https://search.maven.org/artifact/com.github.tomakehurst/wiremock-jre8[here].
-
-In Quarkus tests, when some service needs to be started before the Quarkus tests are run, we utilize the `@io.quarkus.test.common.QuarkusTestResource`
-annotation to specify a `io.quarkus.test.common.QuarkusTestResourceLifecycleManager` which can start the service and supply configuration
-values that Quarkus will use.
-
-[NOTE]
-====
-For more details about `@QuarkusTestResource` refer to xref:getting-started-testing.adoc#quarkus-test-resource[this part of the documentation].
-====
-
-Let's create an implementation of `QuarkusTestResourceLifecycleManager` called `WiremockExtensions` like so:
-
-[source,java]
-----
-package org.acme.rest.client;
-
-import java.util.Collections;
-import java.util.Map;
-
-import com.github.tomakehurst.wiremock.WireMockServer;
-import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
-
-import static com.github.tomakehurst.wiremock.client.WireMock.*; // <1>
-
-public class WireMockExtensions implements QuarkusTestResourceLifecycleManager {
-
-    private WireMockServer wireMockServer;
-
-    @Override
-    public Map<String, String> start() { // <2>
-        wireMockServer = new WireMockServer();
-        wireMockServer.start(); // <3>
-
-        stubFor(get(urlEqualTo("/extensions?id=io.quarkus:quarkus-rest-client")) // <4>
-            .willReturn(aResponse()
-                .withHeader("Content-Type", "application/json")
-                .withBody(
-                    "[{" +
-                    "\"id\": \"io.quarkus:quarkus-rest-client\"," +
-                    "\"name\": \"REST Client\"" +
-                    "}]"
-                )));
-
-        stubFor(get(urlMatching(".*")).atPriority(10).willReturn(aResponse().proxiedFrom("https://stage.code.quarkus.io/api"))); // <5>
-
-        return Collections.singletonMap("quarkus.rest-client.\"org.acme.rest.client.ExtensionsService\".url", wireMockServer.baseUrl()); // <6>
-    }
-
-    @Override
-    public void stop() {
-        if (null != wireMockServer) {
-            wireMockServer.stop(); // <7>
-        }
-    }
-}
-----
-
-<1> Statically importing the methods in the Wiremock package makes it easier to read the test.
-<2> The `start` method is invoked by Quarkus before any test is run and returns a `Map` of configuration properties that apply during the test execution.
-<3> Launch Wiremock.
-<4> Configure Wiremock to stub the calls to `/extensions?id=io.quarkus:quarkus-rest-client` by returning a specific canned response.
-<5> All HTTP calls that have not been stubbed are handled by calling the real service. This is done for demonstration purposes, as it is not something that would usually happen in a real test.
-<6> As the `start` method returns configuration that applies for tests, we set the rest-client property that controls the base URL which is used by the implementation
-of `ExtensionsService` to the base URL where Wiremock is listening for incoming requests.
-<7> When all tests have finished, shut down Wiremock.
-
-
-The `ExtensionsResourceTest` test class needs to be annotated like so:
-
-[source,java]
-----
-@QuarkusTest
-@QuarkusTestResource(WireMockExtensions.class)
-public class ExtensionsResourceTest {
-
-}
-----
-
-[WARNING]
-====
-`@QuarkusTestResource` applies to all tests, not just `ExtensionsResourceTest`.
-====
-
-== Further reading
-
- * link:https://download.eclipse.org/microprofile/microprofile-rest-client-2.0/microprofile-rest-client-spec-2.0.html[MicroProfile Rest Client specification]
diff --git a/_versions/2.7/guides/rest-data-panache.adoc b/_versions/2.7/guides/rest-data-panache.adoc
deleted file mode 100644
index ab3fb3216d6..00000000000
--- a/_versions/2.7/guides/rest-data-panache.adoc
+++ /dev/null
@@ -1,433 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Generating JAX-RS resources with Panache
-
-include::./attributes.adoc[]
-:extension-status: experimental
-
-A lot of web applications are monotonous CRUD applications with REST APIs that are tedious to write.
-To streamline this task, the REST Data with Panache extension can generate the basic CRUD endpoints for your entities and repositories.
-
-While this extension is still experimental and provides a limited feature set, we hope to get early feedback on it.
-Currently, this extension supports Hibernate ORM and MongoDB with Panache and can generate CRUD resources that work with `application/json` and `application/hal+json` content.
-
-include::./status-include.adoc[]
-
-== Setting up REST Data with Panache
-
-=== Hibernate ORM
-
-* Add the required dependencies to your build file
-** Hibernate ORM REST Data with Panache extension (`quarkus-hibernate-orm-rest-data-panache`)
-** A JDBC driver extension (`quarkus-jdbc-postgresql`, `quarkus-jdbc-h2`, `quarkus-jdbc-mariadb`, ...)
-** One of the RESTEasy JSON serialization extensions (the extension supports both RESTEasy Classic and RESTEasy Reactive)
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-hibernate-orm-rest-data-panache</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-jdbc-postgresql</artifactId>
-</dependency>
-
-<!-- Use this if you are using RESTEasy Reactive -->
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-reactive-jackson</artifactId>
-</dependency>
-
-<!-- Use this if you are going to use RESTEasy Classic -->
-<!--
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-jackson</artifactId>
-</dependency>
--->
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-hibernate-orm-rest-data-panache")
-implementation("io.quarkus:quarkus-jdbc-postgresql")
-
-// Use this if you are using RESTEasy Reactive
-implementation("io.quarkus:quarkus-resteasy-reactive-jackson")
-
-// Use this if you are going to use RESTEasy Classic
-// implementation("io.quarkus:quarkus-resteasy-jackson")
-----
-
-* Implement the Panache entities and/or repositories as explained in the xref:hibernate-orm-panache.adoc[Hibernate ORM with Panache guide].
-* Define the interfaces for generation as explained in the resource generation section.
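The generated resources still need a configured datasource. A minimal `application.properties` for the PostgreSQL driver above might look like this (the connection values are illustrative, not part of this guide):

```properties
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=quarkus
quarkus.datasource.password=quarkus
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/quarkus
quarkus.hibernate-orm.database.generation=drop-and-create
```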
-
-=== MongoDB
-
-* Add the required dependencies to your build file
-** MongoDB REST Data with Panache extension (`quarkus-mongodb-rest-data-panache`)
-** One of the RESTEasy JSON serialization extensions (`quarkus-resteasy-reactive-jackson` or `quarkus-resteasy-reactive-jsonb`)
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-mongodb-rest-data-panache</artifactId>
-</dependency>
-
-<!-- Use these if you are using RESTEasy Reactive -->
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-reactive-jackson</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-reactive-links</artifactId>
-</dependency>
-
-<!-- Use these if you are going to use RESTEasy Classic -->
-<!--
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-jackson</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-links</artifactId>
-</dependency>
--->
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-mongodb-rest-data-panache")
-
-// Use these if you are using RESTEasy Reactive
-implementation("io.quarkus:quarkus-resteasy-reactive-jackson")
-implementation("io.quarkus:quarkus-resteasy-reactive-links")
-
-// Use these if you are going to use RESTEasy Classic
-// implementation("io.quarkus:quarkus-resteasy-jackson")
-// implementation("io.quarkus:quarkus-resteasy-links")
-----
-
-* Implement the Panache entities and/or repositories as explained in the xref:mongodb-panache.adoc[MongoDB with Panache guide].
-* Define the interfaces for generation as explained in the resource generation section.
-
-== Generating resources
-
-REST Data with Panache generates JAX-RS resources based on the interfaces available in your application.
-For each entity and repository that you want to generate, provide a resource interface.
-_Do not implement these interfaces and don't provide custom methods because they will be ignored._ You can, however, override the methods from the extended interface in order to customize them (see the section at the end).
-
-=== PanacheEntityResource
-
-If your application has an entity (e.g.
`Person`) that extends either the `PanacheEntity` or `PanacheEntityBase` class, you could instruct REST Data with Panache to generate its JAX-RS resource with the following interface:
-
-[source,java]
-----
-public interface PeopleResource extends PanacheEntityResource<Person, Long> {
-}
-----
-
-=== PanacheRepositoryResource
-
-If your application has a simple entity (e.g. `Person`) and a repository (e.g. `PersonRepository`) that implements either the `PanacheRepository` or `PanacheRepositoryBase` interface, you could instruct REST Data with Panache to generate its JAX-RS resource with the following interface:
-
-[source,java]
-----
-public interface PeopleResource extends PanacheRepositoryResource<PersonRepository, Person, Long> {
-}
-----
-
-=== PanacheMongoEntityResource
-
-If your application has an entity (e.g. `Person`) that extends either the `PanacheMongoEntity` or `PanacheMongoEntityBase` class, you could instruct REST Data with Panache to generate its JAX-RS resource with the following interface:
-
-[source,java]
-----
-public interface PeopleResource extends PanacheMongoEntityResource<Person, ObjectId> {
-}
-----
-
-=== PanacheMongoRepositoryResource
-
-If your application has a simple entity (e.g. `Person`) and a repository (e.g. `PersonRepository`) that implements either the `PanacheMongoRepository` or `PanacheMongoRepositoryBase` interface, you could instruct REST Data with Panache to generate its JAX-RS resource with the following interface:
-
-[source,java]
-----
-public interface PeopleResource extends PanacheMongoRepositoryResource<PersonRepository, Person, ObjectId> {
-}
-----
-
-=== The generated resource
-
-The generated resources will be functionally equivalent for both entities and repositories.
-The only difference is the particular data access pattern and data storage in use.
-
-If you have defined one of the `PeopleResource` interfaces mentioned above, this extension will generate its implementation using a particular data access strategy.
-The implemented class then will be used by a generated JAX-RS resource, which will look like this:
-
-[source,java]
-----
-public class PeopleResourceJaxRs { // The actual class name is going to be unique
-    @Inject
-    PeopleResource resource;
-
-    @GET
-    @Path("{id}")
-    @Produces("application/json")
-    public Person get(@PathParam("id") Long id) {
-        Person person = resource.get(id);
-        if (person == null) {
-            throw new WebApplicationException(404);
-        }
-        return person;
-    }
-
-    @GET
-    @Produces("application/json")
-    public Response list(@QueryParam("sort") List<String> sortQuery,
-            @QueryParam("page") @DefaultValue("0") int pageIndex,
-            @QueryParam("size") @DefaultValue("20") int pageSize) {
-        Page page = Page.of(pageIndex, pageSize);
-        Sort sort = getSortFromQuery(sortQuery);
-        List<Person> people = resource.list(page, sort);
-        // ... build a response with page links and return a 200 response with a list
-    }
-
-    @Transactional
-    @POST
-    @Consumes("application/json")
-    @Produces("application/json")
-    public Response add(Person personToSave) {
-        Person person = resource.add(personToSave);
-        // ... build a new location URL and return 201 response with an entity
-    }
-
-    @Transactional
-    @PUT
-    @Path("{id}")
-    @Consumes("application/json")
-    @Produces("application/json")
-    public Response update(@PathParam("id") Long id, Person personToSave) {
-        if (resource.get(id) == null) {
-            Person person = resource.update(id, personToSave);
-            // ... build a new location URL and return 201 response with an entity
-        }
-        resource.update(id, personToSave);
-        return Response.status(204).build();
-    }
-
-    @Transactional
-    @DELETE
-    @Path("{id}")
-    public void delete(@PathParam("id") Long id) {
-        if (!resource.delete(id)) {
-            throw new WebApplicationException(404);
-        }
-    }
-}
-----
-
-== Resource customisation
-
-REST Data with Panache provides the `@ResourceProperties` and `@MethodProperties` annotations that can be used to customize certain features of the resource.
-They can be used in your resource interface:
-
-[source,java]
-----
-@ResourceProperties(hal = true, path = "my-people")
-public interface PeopleResource extends PanacheEntityResource<Person, Long> {
-    @MethodProperties(path = "all")
-    List<Person> list(Page page, Sort sort);
-
-    @MethodProperties(exposed = false)
-    boolean delete(Long id);
-}
-----
-
-=== Available options
-
-`@ResourceProperties`
-
-* `exposed` - whether the resource should be exposed. A global resource property that can be overridden for each method. Default is `true`.
-* `path` - resource base path. Default path is a hyphenated lowercase resource name without a suffix of `resource` or `controller`.
-* `paged` - whether collection responses should be paged or not.
-First, last, previous and next page URIs are included in the response headers if they exist.
-Request page index and size are taken from the `page` and `size` query parameters that default to `0` and `20` respectively.
-Default is `true`.
-* `hal` - in addition to the standard `application/json` responses, generates additional methods that can return `application/hal+json` responses if requested via an `Accept` header.
-Default is `false`.
-* `halCollectionName` - name that should be used when generating a hal collection response. Default name is a hyphenated lowercase resource name without a suffix of `resource` or `controller`.
-
-`@MethodProperties`
-
-* `exposed` - does not expose a particular HTTP verb when set to `false`. Default is `true`.
-* `path` - operation path (this is appended to the resource base path). Default is an empty string.
-
-== Query parameters
-
-REST Data with Panache supports the following query parameters with the generated resources.
-
-* `page` - a page number which should be returned by a list operation.
-It applies to the paged resources only and is a number starting with 0. Default is 0.
-* `size` - a page size which should be returned by a list operation.
-It applies to the paged resources only and is a number starting with 1.
Default is 20. -* `sort` - a comma separated list of fields which should be used for sorting a result of a list operation. -Fields are sorted in the ascending order unless they're prefixed with a `-`. -E.g. `?sort=name,-age` will sort the result by the name ascending by the age descending. - -== Response body examples - -As mentioned above REST Data with Panache supports the `application/json` and `application/hal+json` response content types. -Here are a couple of examples of how a response body would look like for the `get` and `list` operations assuming there are five `Person` records in a database. - -=== GET /people/1 - -`Accept: application/json` - -[source,json] ----- -{ - "id": 1, - "name": "John Johnson", - "birth": "1988-01-10" -} ----- - -`Accept: application/hal+json` - -[source,json] ----- -{ - "id": 1, - "name": "John Johnson", - "birth": "1988-01-10", - "_links": { - "self": { - "href": "http://example.com/people/1" - }, - "remove": { - "href": "http://example.com/people/1" - }, - "update": { - "href": "http://example.com/people/1" - }, - "add": { - "href": "http://example.com/people" - }, - "list": { - "href": "http://example.com/people" - } - } -} ----- - -=== GET /people?page=0&size=2 - -`Accept: application/json` - -[source,json] ----- -[ - { - "id": 1, - "name": "John Johnson", - "birth": "1988-01-10" - }, - { - "id": 2, - "name": "Peter Peterson", - "birth": "1986-11-20" - } -] - ----- - -`Accept: application/hal+json` - -[source,json] ----- -{ - "_embedded": [ - { - "id": 1, - "name": "John Johnson", - "birth": "1988-01-10", - "_links": { - "self": { - "href": "http://example.com/people/1" - }, - "remove": { - "href": "http://example.com/people/1" - }, - "update": { - "href": "http://example.com/people/1" - }, - "add": { - "href": "http://example.com/people" - }, - "list": { - "href": "http://example.com/people" - } - } - }, - { - "id": 2, - "name": "Peter Peterson", - "birth": "1986-11-20", - "_links": { - "self": { - "href": 
"http://example.com/people/2" - }, - "remove": { - "href": "http://example.com/people/2" - }, - "update": { - "href": "http://example.com/people/2" - }, - "add": { - "href": "http://example.com/people" - }, - "list": { - "href": "http://example.com/people" - } - } - } - ], - "_links": { - "add": { - "href": "http://example.com/people" - }, - "list": { - "href": "http://example.com/people" - }, - "first": { - "href": "http://example.com/people?page=0&size=2" - }, - "last": { - "href": "http://example.com/people?page=2&size=2" - }, - "next": { - "href": "http://example.com/people?page=1&size=2" - } - } -} ----- - -Both responses would also contain these headers: - -* Link: < http://example.com/people?page=0&size=2 >; rel="first" -* Link: < http://example.com/people?page=2&size=2 >; rel="last" -* Link: < http://example.com/people?page=1&size=2 >; rel="next" - -A `previous` link header (and hal link) would not be included, because the previous page does not exist. diff --git a/_versions/2.7/guides/rest-json.adoc b/_versions/2.7/guides/rest-json.adoc deleted file mode 100644 index c78d04665b5..00000000000 --- a/_versions/2.7/guides/rest-json.adoc +++ /dev/null @@ -1,718 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Writing JSON REST Services - -include::./attributes.adoc[] - -JSON is now the _lingua franca_ between microservices. - -In this guide, we see how you can get your REST services to consume and produce JSON payloads. - -TIP: there is another guide if you need a xref:rest-client.adoc[REST client] (including support for JSON). - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -The application built in this guide is quite simple: the user can add elements in a list using a form and the list is updated. 
-
-All the information between the browser and the server is formatted as JSON.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `rest-json-quickstart` {quickstarts-tree-url}/rest-json-quickstart[directory].
-
-== Creating the Maven project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: rest-json-quickstart
-:create-app-extensions: resteasy-jackson
-include::includes/devtools/create-app.adoc[]
-
-This command generates a new project importing the RESTEasy/JAX-RS and https://github.com/FasterXML/jackson[Jackson] extensions,
-and in particular adds the following dependency:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-jackson</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-resteasy-jackson")
-----
-
-[NOTE]
-====
-To improve user experience, Quarkus registers the three Jackson https://github.com/FasterXML/jackson-modules-java8[Java 8 modules] so you don't need to do it manually.
-====
-
-Quarkus also supports https://eclipse-ee4j.github.io/jsonb-api/[JSON-B] so, if you prefer JSON-B over Jackson, you can create a project relying on the RESTEasy JSON-B extension instead:
-
-:create-app-artifact-id: rest-json-quickstart
-:create-app-extensions: resteasy-jsonb
-include::includes/devtools/create-app.adoc[]
-
-This command generates a new project importing the RESTEasy/JAX-RS and https://eclipse-ee4j.github.io/jsonb-api/[JSON-B] extensions,
-and in particular adds the following dependency:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-jsonb</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-resteasy-jsonb")
-----
-
-== Creating your first JSON REST service
-
-In this example, we will create an application to manage a list of fruits.
-
-First, let's create the `Fruit` bean as follows:
-
-[source,java]
-----
-package org.acme.rest.json;
-
-public class Fruit {
-
-    public String name;
-    public String description;
-
-    public Fruit() {
-    }
-
-    public Fruit(String name, String description) {
-        this.name = name;
-        this.description = description;
-    }
-}
-----
-
-Nothing fancy. One important thing to note is that having a default constructor is required by the JSON serialization layer.
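The reason the default constructor matters can be sketched in plain Java: a databinder in the Jackson style typically instantiates the target class reflectively via its no-arg constructor and then populates the fields. A simplified illustration of that mechanism (not Jackson's actual implementation):

```java
import java.lang.reflect.Field;
import java.util.Map;

public class TinyBinder {
    public static class Fruit {
        public String name;
        public String description;

        public Fruit() { } // without this, newInstance() below would fail

        public Fruit(String name, String description) {
            this.name = name;
            this.description = description;
        }
    }

    // Instantiate via the no-arg constructor, then set public fields by
    // name -- the essence of what a JSON databinder does on deserialization.
    static <T> T bind(Class<T> type, Map<String, String> values) throws ReflectiveOperationException {
        T instance = type.getDeclaredConstructor().newInstance();
        for (Map.Entry<String, String> e : values.entrySet()) {
            Field f = type.getField(e.getKey());
            f.set(instance, e.getValue());
        }
        return instance;
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        Fruit f = bind(Fruit.class, Map.of("name", "Apple", "description", "Winter fruit"));
        System.out.println(f.name); // Apple
    }
}
```

Note that Jackson can also work without a default constructor when `@JsonCreator`-annotated constructors are used; the no-arg constructor is simply the zero-configuration path.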
-
-Now, create the `org.acme.rest.json.FruitResource` class as follows:
-
-[source,java]
----
-package org.acme.rest.json;
-
-import java.util.Collections;
-import java.util.LinkedHashMap;
-import java.util.Set;
-
-import javax.ws.rs.DELETE;
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-
-@Path("/fruits")
-public class FruitResource {
-
-    private Set<Fruit> fruits = Collections.newSetFromMap(Collections.synchronizedMap(new LinkedHashMap<>()));
-
-    public FruitResource() {
-        fruits.add(new Fruit("Apple", "Winter fruit"));
-        fruits.add(new Fruit("Pineapple", "Tropical fruit"));
-    }
-
-    @GET
-    public Set<Fruit> list() {
-        return fruits;
-    }
-
-    @POST
-    public Set<Fruit> add(Fruit fruit) {
-        fruits.add(fruit);
-        return fruits;
-    }
-
-    @DELETE
-    public Set<Fruit> delete(Fruit fruit) {
-        fruits.removeIf(existingFruit -> existingFruit.name.contentEquals(fruit.name));
-        return fruits;
-    }
-}
----
-
-The implementation is pretty straightforward and you just need to define your endpoints using the JAX-RS annotations.
-
-The `Fruit` objects will be automatically serialized/deserialized by https://eclipse-ee4j.github.io/jsonb-api/[JSON-B] or https://github.com/FasterXML/jackson[Jackson],
-depending on the extension you chose when initializing the project.
-
-[NOTE]
-====
-When a JSON extension is installed such as `quarkus-resteasy-jackson` or `quarkus-resteasy-jsonb`, Quarkus will use the `application/json` media type
-by default for most return values, unless the media type is explicitly set via
-`@Produces` or `@Consumes` annotations (there are some exceptions for well known types, such as `String` and `File`, which default to `text/plain` and `application/octet-stream`
-respectively).
-
-If you don't want JSON by default you can set `quarkus.resteasy-json.default-json=false` and the default will change back to being auto-negotiated.
If you set this
-you will need to add `@Produces(MediaType.APPLICATION_JSON)` and `@Consumes(MediaType.APPLICATION_JSON)` to your endpoints in order to use JSON.
-
-If you don't rely on the JSON default, it is strongly recommended to annotate your endpoints with the `@Produces` and `@Consumes` annotations to define precisely the expected content-types.
-This allows you to narrow down the number of JAX-RS providers (which can be seen as converters) included in the native executable.
-====
-
-[[json]]
-=== Configuring JSON support
-
-==== Jackson
-
-In Quarkus, the default Jackson `ObjectMapper` obtained via CDI (and consumed by the Quarkus extensions) is configured to ignore unknown properties
-(by disabling the `DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES` feature).
-
-You can restore the default behavior of Jackson by setting `quarkus.jackson.fail-on-unknown-properties=true` in your `application.properties`
-or on a per-class basis via `@JsonIgnoreProperties(ignoreUnknown = false)`.
-
-Furthermore, the `ObjectMapper` is configured to format dates and time in ISO-8601
-(by disabling the `SerializationFeature.WRITE_DATES_AS_TIMESTAMPS` feature).
-
-The default behavior of Jackson can be restored by setting `quarkus.jackson.write-dates-as-timestamps=true`
-in your `application.properties`. If you want to change the format for a single field, you can use the
-`@JsonFormat` annotation.
-
-Also, Quarkus makes it very easy to configure various Jackson settings via CDI beans.
-The simplest (and suggested) approach is to define a CDI bean of type `io.quarkus.jackson.ObjectMapperCustomizer`
-inside of which any Jackson configuration can be applied.
-
-An example where a custom module needs to be registered would look like so:
-
-[source,java]
----
-import com.fasterxml.jackson.databind.ObjectMapper;
-import io.quarkus.jackson.ObjectMapperCustomizer;
-import javax.inject.Singleton;
-
-@Singleton
-public class RegisterCustomModuleCustomizer implements ObjectMapperCustomizer {
-
-    public void customize(ObjectMapper mapper) {
-        mapper.registerModule(new CustomModule());
-    }
-}
----
-
-Users can even provide their own `ObjectMapper` bean if they so choose.
-If this is done, it is very important to manually inject and apply all `io.quarkus.jackson.ObjectMapperCustomizer` beans in the CDI producer that produces `ObjectMapper`.
-Failure to do so will prevent Jackson specific customizations provided by various extensions from being applied.
-
-[source,java]
----
-import com.fasterxml.jackson.databind.ObjectMapper;
-import io.quarkus.jackson.ObjectMapperCustomizer;
-
-import javax.enterprise.inject.Instance;
-import javax.inject.Singleton;
-
-public class CustomObjectMapper {
-
-    // Replaces the CDI producer for ObjectMapper built into Quarkus
-    @Singleton
-    ObjectMapper objectMapper(Instance<ObjectMapperCustomizer> customizers) {
-        ObjectMapper mapper = myObjectMapper(); // Custom `ObjectMapper`
-
-        // Apply all ObjectMapperCustomizer beans (incl. Quarkus)
-        for (ObjectMapperCustomizer customizer : customizers) {
-            customizer.customize(mapper);
-        }
-
-        return mapper;
-    }
-}
----
-
-==== JSON-B
-
-As stated above, Quarkus provides the option of using JSON-B instead of Jackson via the use of the `quarkus-resteasy-jsonb` extension.
-
-Following the same approach as described in the previous section, JSON-B can be configured using a `io.quarkus.jsonb.JsonbConfigCustomizer` bean.
-
-If for example a custom serializer named `FooSerializer` for type `com.example.Foo` needs to be registered with JSON-B, the addition of a bean like the following would suffice:
-
-[source,java]
----
-import io.quarkus.jsonb.JsonbConfigCustomizer;
-import javax.inject.Singleton;
-import javax.json.bind.JsonbConfig;
-import javax.json.bind.serializer.JsonbSerializer;
-
-@Singleton
-public class FooSerializerRegistrationCustomizer implements JsonbConfigCustomizer {
-
-    public void customize(JsonbConfig config) {
-        config.withSerializers(new FooSerializer());
-    }
-}
----
-
-A more advanced option would be to directly provide a bean of `javax.json.bind.JsonbConfig` (with a `Dependent` scope) or in the extreme case to provide a bean of type `javax.json.bind.Jsonb` (with a `Singleton` scope).
-If the latter approach is leveraged it is very important to manually inject and apply all `io.quarkus.jsonb.JsonbConfigCustomizer` beans in the CDI producer that produces `javax.json.bind.Jsonb`.
-Failure to do so will prevent JSON-B specific customizations provided by various extensions from being applied.
-
-[source,java]
----
-import io.quarkus.jsonb.JsonbConfigCustomizer;
-
-import javax.enterprise.context.Dependent;
-import javax.enterprise.inject.Instance;
-import javax.json.bind.JsonbConfig;
-
-public class CustomJsonbConfig {
-
-    // Replaces the CDI producer for JsonbConfig built into Quarkus
-    @Dependent
-    JsonbConfig jsonConfig(Instance<JsonbConfigCustomizer> customizers) {
-        JsonbConfig config = myJsonbConfig(); // Custom `JsonbConfig`
-
-        // Apply all JsonbConfigCustomizer beans (incl. Quarkus)
-        for (JsonbConfigCustomizer customizer : customizers) {
-            customizer.customize(config);
-        }
-
-        return config;
-    }
-}
----
-
-
-== Creating a frontend
-
-Now let's add a simple web page to interact with our `FruitResource`.
-Quarkus automatically serves static resources located under the `META-INF/resources` directory.
-In the `src/main/resources/META-INF/resources` directory, add a `fruits.html` file with the content from this {quickstarts-blob-url}/rest-json-quickstart/src/main/resources/META-INF/resources/fruits.html[fruits.html] file in it.
-
-You can now interact with your REST service:
-
-:devtools-wrapped:
-
- * start Quarkus with:
-+
-include::includes/devtools/dev.adoc[]
- * open a browser to `http://localhost:8080/fruits.html`
- * add new fruits to the list via the form
-
-:!devtools-wrapped:
-
-== Building a native executable
-
-You can build a native executable with the usual command:
-
-include::includes/devtools/build-native.adoc[]
-
-Running it is as simple as executing `./target/rest-json-quickstart-1.0.0-SNAPSHOT-runner`.
-
-You can then point your browser to `http://localhost:8080/fruits.html` and use your application.
-
-== About serialization
-
-JSON serialization libraries use Java reflection to get the properties of an object and serialize them.
-
-When using native executables with GraalVM, all classes that will be used with reflection need to be registered.
-The good news is that Quarkus does that work for you most of the time.
-So far, we haven't registered any class, not even `Fruit`, for reflection usage and everything is working fine.
-
-Quarkus performs some magic when it is capable of inferring the serialized types from the REST methods.
-When you have the following REST method, Quarkus determines that `Fruit` will be serialized:
-
-[source,JAVA]
----
-@GET
-public List<Fruit> list() {
-    // ...
-}
----
-
-Quarkus does that for you automatically by analyzing the REST methods at build time
-and that's why we didn't need any reflection registration in the first part of this guide.
-
-Another common pattern in the JAX-RS world is to use the `Response` object.
-`Response` comes with some nice perks: - - * you can return different entity types depending on what happens in your method (a `Legume` or an `Error` for instance); - * you can set the attributes of the `Response` (the status comes to mind in the case of an error). - -Your REST method then looks like this: - -[source,JAVA] ----- -@GET -public Response list() { - // ... -} ----- - -It is not possible for Quarkus to determine at build time the type included in the `Response` as the information is not available. -In this case, Quarkus won't be able to automatically register for reflection the required classes. - -This leads us to our next section. - -== Using Response - -Let's create the `Legume` class which will be serialized as JSON, following the same model as for our `Fruit` class: - -[source,JAVA] ----- -package org.acme.rest.json; - -public class Legume { - - public String name; - public String description; - - public Legume() { - } - - public Legume(String name, String description) { - this.name = name; - this.description = description; - } -} ----- - -Now let's create a `LegumeResource` REST service with only one method which returns the list of legumes. - -This method returns a `Response` and not a list of `Legume`. 
-
-[source,JAVA]
----
-package org.acme.rest.json;
-
-import java.util.Collections;
-import java.util.LinkedHashSet;
-import java.util.Set;
-
-import javax.ws.rs.Consumes;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-import javax.ws.rs.core.Response;
-
-@Path("/legumes")
-@Produces(MediaType.APPLICATION_JSON)
-@Consumes(MediaType.APPLICATION_JSON)
-public class LegumeResource {
-
-    private Set<Legume> legumes = Collections.synchronizedSet(new LinkedHashSet<>());
-
-    public LegumeResource() {
-        legumes.add(new Legume("Carrot", "Root vegetable, usually orange"));
-        legumes.add(new Legume("Zucchini", "Summer squash"));
-    }
-
-    @GET
-    public Response list() {
-        return Response.ok(legumes).build();
-    }
-}
----
-
-Now let's add a simple web page to display our list of legumes.
-In the `src/main/resources/META-INF/resources` directory, add a `legumes.html` file with the content from this
-{quickstarts-blob-url}/rest-json-quickstart/src/main/resources/META-INF/resources/legumes.html[legumes.html] file in it.
-
-Open a browser to http://localhost:8080/legumes.html and you will see our list of legumes.
-
-The interesting part starts when running the application as a native executable:
-
-:devtools-wrapped:
-
- * create the native executable with:
-+
-include::includes/devtools/build-native.adoc[]
- * execute it with `./target/rest-json-quickstart-1.0.0-SNAPSHOT-runner`
- * open a browser and go to http://localhost:8080/legumes.html
-
-:!devtools-wrapped:
-
-No legumes there.
-
-As mentioned above, the issue is that Quarkus was not able to determine that the `Legume` class will require some reflection by analyzing the REST endpoints.
-The JSON serialization library tries to get the list of fields of `Legume` and gets an empty list, so it does not serialize the fields' data.
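The field discovery step can be sketched with plain JDK reflection (an illustration only; the actual binders use more sophisticated introspection):

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class FieldDiscoverySketch {

    // Same shape as the Legume bean from the guide
    public static class Legume {
        public String name;
        public String description;
    }

    // Returns the public field names a reflection-based serializer would see.
    // On the JVM this finds both fields; in a GraalVM native image, a class
    // that is not registered for reflection yields an empty array here,
    // which is why the serialized JSON ends up empty.
    static List<String> visibleFields(Class<?> type) {
        List<String> names = new ArrayList<>();
        for (Field field : type.getFields()) {
            names.add(field.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(visibleFields(Legume.class));
    }
}
```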
- -[NOTE] -==== -At the moment, when JSON-B or Jackson tries to get the list of fields of a class, if the class is not registered for reflection, no exception will be thrown. -GraalVM will simply return an empty list of fields. - -Hopefully, this will change in the future and make the error more obvious. -==== - -We can register `Legume` for reflection manually by adding the `@RegisterForReflection` annotation on our `Legume` class: -[source,JAVA] ----- -import io.quarkus.runtime.annotations.RegisterForReflection; - -@RegisterForReflection -public class Legume { - // ... -} ----- - -TIP: The `@RegisterForReflection` annotation instructs Quarkus to keep the class and its members during the native compilation. More details about the `@RegisterForReflection` annotation can be found on the xref:writing-native-applications-tips.adoc#registerForReflection[native application tips] page. - -Let's do that and follow the same steps as before: - -:devtools-wrapped: - - * hit `Ctrl+C` to stop the application - * create the native executable with: -+ -include::includes/devtools/build-native.adoc[] - * execute it with `./target/rest-json-quickstart-1.0.0-SNAPSHOT-runner` - * open a browser and go to http://localhost:8080/legumes.html - -:!devtools-wrapped: - -This time, you can see our list of legumes. - -[[reactive]] -== Being reactive - -You can return _reactive types_ to handle asynchronous processing. -Quarkus recommends the usage of https://smallrye.io/smallrye-mutiny[Mutiny] to write reactive and asynchronous code. 
-
-To integrate Mutiny and RESTEasy, you need to add the `quarkus-resteasy-mutiny` dependency to your project:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-mutiny</artifactId>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-implementation("io.quarkus:quarkus-resteasy-mutiny")
----
-
-Then, your endpoint can return `Uni` or `Multi` instances:
-
-[source,java]
----
-
-@GET
-@Path("/{name}")
-public Uni<Fruit> getOne(@PathParam String name) {
-    return findByName(name);
-}
-
-@GET
-public Multi<Fruit> getAll() {
-    return findAll();
-}
----
-
-Use `Uni` when you have a single result.
-Use `Multi` when you have multiple items that may be emitted asynchronously.
-
-You can use `Uni` and `Response` to return asynchronous HTTP responses: `Uni<Response>`.
-
-More details about Mutiny can be found in xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library].
-
-== HTTP filters and interceptors
-
-Both HTTP request and response can be intercepted by providing `ContainerRequestFilter` or `ContainerResponseFilter`
-implementations respectively. These filters are suitable for processing the metadata associated with a message: HTTP
-headers, query parameters, media type, and other metadata. They also have the capability to abort the request
-processing, for instance when the user does not have the permissions to access the endpoint.
-
-Let's use `ContainerRequestFilter` to add logging capability to our service.
We can do that by implementing -`ContainerRequestFilter` and annotating it with the `@Provider` annotation: - -[source,java] ----- -package org.acme.rest.json; - -import io.vertx.core.http.HttpServerRequest; -import org.jboss.logging.Logger; - -import javax.ws.rs.container.ContainerRequestContext; -import javax.ws.rs.container.ContainerRequestFilter; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.UriInfo; -import javax.ws.rs.ext.Provider; - -@Provider -public class LoggingFilter implements ContainerRequestFilter { - - private static final Logger LOG = Logger.getLogger(LoggingFilter.class); - - @Context - UriInfo info; - - @Context - HttpServerRequest request; - - @Override - public void filter(ContainerRequestContext context) { - - final String method = context.getMethod(); - final String path = info.getPath(); - final String address = request.remoteAddress().toString(); - - LOG.infof("Request %s %s from IP %s", method, path, address); - } -} ----- - -Now, whenever a REST method is invoked, the request will be logged into the console: - -[source,text] ----- -2019-06-05 12:44:26,526 INFO [org.acm.res.jso.LoggingFilter] (executor-thread-1) Request GET /legumes from IP 127.0.0.1 -2019-06-05 12:49:19,623 INFO [org.acm.res.jso.LoggingFilter] (executor-thread-1) Request GET /fruits from IP 0:0:0:0:0:0:0:1 -2019-06-05 12:50:44,019 INFO [org.acm.res.jso.LoggingFilter] (executor-thread-1) Request POST /fruits from IP 0:0:0:0:0:0:0:1 -2019-06-05 12:51:04,485 INFO [org.acm.res.jso.LoggingFilter] (executor-thread-1) Request GET /fruits from IP 127.0.0.1 ----- - -== CORS filter - -link:https://en.wikipedia.org/wiki/Cross-origin_resource_sharing[Cross-origin resource sharing] (CORS) is a mechanism that -allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource -was served. - -Quarkus comes with a CORS filter. 
Read the xref:http-reference.adoc#cors-filter[HTTP Reference Documentation] to learn
-how to use it.
-
-== GZip Support
-
-Quarkus comes with GZip support (even though it is not enabled by default). The following configuration knobs allow you to configure GZip support.
-
-[source, properties]
----
-quarkus.resteasy.gzip.enabled=true // <1>
-quarkus.resteasy.gzip.max-input=10M // <2>
----
-
-<1> Enable Gzip support.
-<2> Configure the upper limit on the deflated request body. This is useful to mitigate potential attacks by limiting their reach. The default value is `10M`.
-This configuration option recognizes strings in this format (shown as a regular expression): `[0-9]+[KkMmGgTtPpEeZzYy]?`. If no suffix is given, bytes are assumed.
-
-Once GZip support has been enabled you can use it on an endpoint by adding the `@org.jboss.resteasy.annotations.GZIP` annotation to your endpoint method.
-
-If you want to compress everything then we recommend that you use the `quarkus.http.enable-compression=true` setting instead to globally enable
-compression support.
-
-== Multipart Support
-
-RESTEasy supports multipart via the https://docs.jboss.org/resteasy/docs/4.5.6.Final/userguide/html/Multipart.html[RESTEasy Multipart Provider].
-
-Quarkus provides an extension called `quarkus-resteasy-multipart` to make things easier for you.
-
-This extension slightly differs from the RESTEasy default behavior as the default charset (if none is specified in your request) is UTF-8 rather than US-ASCII.
-
-You can configure this behavior with the following configuration properties:
-
-include::{generated-dir}/config/quarkus-resteasy-multipart.adoc[leveloffset=+1, opts=optional]
-
-== Servlet compatibility
-
-In Quarkus, RESTEasy can either run directly on top of the Vert.x HTTP server, or on top of Undertow if you have any servlet dependency.
-
-As a result, certain classes, such as `HttpServletRequest`, are not always available for injection.
Most use-cases for this particular
-class are covered by JAX-RS equivalents, except for getting the remote client's IP. RESTEasy comes with a replacement API which you can inject:
-https://docs.jboss.org/resteasy/docs/4.5.6.Final/javadocs/org/jboss/resteasy/spi/HttpRequest.html[`HttpRequest`], which has the methods
-https://docs.jboss.org/resteasy/docs/4.5.6.Final/javadocs/org/jboss/resteasy/spi/HttpRequest.html#getRemoteAddress--[`getRemoteAddress()`]
-and https://docs.jboss.org/resteasy/docs/4.5.6.Final/javadocs/org/jboss/resteasy/spi/HttpRequest.html#getRemoteHost--[`getRemoteHost()`]
-to solve this problem.
-
-== RESTEasy and REST Client interactions
-
-In Quarkus, the RESTEasy extension and xref:rest-client.adoc[the REST Client extension] share the same infrastructure.
-One important consequence of this is that they share the same list of providers (in the JAX-RS meaning of the word).
-
-For instance, if you declare a `WriterInterceptor`, it will by default intercept both the server calls and the client calls,
-which might not be the desired behavior.
-
-However, you can change this default behavior and constrain a provider to:
-
-* only consider *server* calls by adding the `@ConstrainedTo(RuntimeType.SERVER)` annotation to your provider;
-* only consider *client* calls by adding the `@ConstrainedTo(RuntimeType.CLIENT)` annotation to your provider.
-
-== What's Different from Jakarta EE Development
-
-=== No Need for `Application` Class
-
-Configuration via an application-supplied subclass of `Application` is supported, but not required.
-
-=== Only a single JAX-RS application
-
-In contrast to JAX-RS (and RESTEasy) running in a standard servlet container, Quarkus only supports the deployment of a single JAX-RS application.
-If multiple JAX-RS `Application` classes are defined, the build will fail with the message `Multiple classes have been annotated with @ApplicationPath which is currently not supported`.
-
-If multiple JAX-RS applications are defined, the property `quarkus.resteasy.ignore-application-classes=true` can be used to ignore all explicit `Application` classes. This makes all resource classes available via the application path as defined by `quarkus.resteasy.path` (default: `/`).
-
-=== Support limitations of JAX-RS application
-
-The RESTEasy extension doesn't support the method `getProperties()` of the class `javax.ws.rs.core.Application`. Moreover, it only relies on the methods `getClasses()` and `getSingletons()` to filter out the annotated resource, provider and feature classes.
-It doesn't filter out the built-in resource, provider and feature classes, nor the resource, provider and feature classes registered by other extensions.
-Finally, the objects returned by the method `getSingletons()` are ignored; only the classes are taken into account to filter out the resource, provider and feature classes. In other words, the method `getSingletons()` is actually managed the same way as `getClasses()`.
-
-=== Lifecycle of Resources
-
-In Quarkus all JAX-RS resources are treated as CDI beans.
-It's possible to inject other beans via `@Inject`, bind interceptors using bindings such as `@Transactional`, define `@PostConstruct` callbacks, etc.
-
-If there is no scope annotation declared on the resource class then the scope is defaulted.
-The default scope can be controlled through the `quarkus.resteasy.singleton-resources` property.
-If set to `true` (default) then a *single instance* of a resource class is created to service all requests (as defined by `@javax.inject.Singleton`).
-If set to `false` then a *new instance* of the resource class is created for each request.
-An explicit CDI scope annotation (`@RequestScoped`, `@ApplicationScoped`, etc.) always overrides the default behavior and specifies the lifecycle of resource instances.
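As a concrete configuration sketch, switching every resource class without an explicit scope annotation to per-request instantiation is a single property in `application.properties` (adjust to your needs; per-request instances cost more than a shared singleton):

```properties
# Create a new resource instance for every request instead of a shared singleton
quarkus.resteasy.singleton-resources=false
```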
-
-== Include/Exclude JAX-RS classes with build time conditions
-
-Quarkus enables the inclusion or exclusion of JAX-RS Resources, Providers and Features directly thanks to build time conditions, in the same way that it does for CDI beans.
-Thus, the various JAX-RS classes can be annotated with profile conditions (`@io.quarkus.arc.profile.IfBuildProfile` or `@io.quarkus.arc.profile.UnlessBuildProfile`) and/or with property conditions (`@io.quarkus.arc.properties.IfBuildProperty` or `@io.quarkus.arc.properties.UnlessBuildProperty`) to indicate to Quarkus at build time under which conditions these JAX-RS classes should be included.
-
-In the following example, Quarkus includes the endpoint `sayHello` if and only if the build profile `app1` has been enabled.
-
-[source,java]
----
-@IfBuildProfile("app1")
-public class ResourceForApp1Only {
-
-    @GET
-    @Path("sayHello")
-    public String sayHello() {
-        return "hello";
-    }
-}
----
-
-Please note that if a JAX-RS Application has been detected and the method `getClasses()` and/or `getSingletons()` has/have been overridden, Quarkus will ignore the build time conditions and consider only what has been defined in the JAX-RS Application.
-
-== Conclusion
-
-Creating JSON REST services with Quarkus is easy as it relies on proven and well-known technologies.
-
-As usual, Quarkus further simplifies things under the hood when running your application as a native executable.
-
-There is only one thing to remember: if you use `Response` and Quarkus can't determine the beans that are serialized, you need to annotate them with `@RegisterForReflection`.
diff --git a/_versions/2.7/guides/resteasy-reactive.adoc b/_versions/2.7/guides/resteasy-reactive.adoc deleted file mode 100644 index 6288698e08b..00000000000 --- a/_versions/2.7/guides/resteasy-reactive.adoc +++ /dev/null @@ -1,2120 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Writing REST Services with RESTEasy Reactive - -include::./attributes.adoc[] -:jaxrsapi: https://javadoc.io/doc/javax.ws.rs/javax.ws.rs-api/2.1.1 -:jaxrsspec: /specs/jaxrs/2.1/index.html -:jdkapi: https://docs.oracle.com/en/java/javase/11/docs/api/java.base -:mutinyapi: https://smallrye.io/smallrye-mutiny/apidocs -:httpspec: https://tools.ietf.org/html/rfc7231 -:jsonpapi: https://javadoc.io/doc/javax.json/javax.json-api/1.1.4 -:vertxapi: https://javadoc.io/static/io.vertx/vertx-core/4.1.0 -:resteasy-reactive-api: https://javadoc.io/doc/io.quarkus.resteasy.reactive/resteasy-reactive/2.0.0.Final -:resteasy-reactive-common-api: https://javadoc.io/doc/io.quarkus.resteasy.reactive/resteasy-reactive-common/2.0.0.Final - -This guide explains how to write REST Services with RESTEasy Reactive in Quarkus. - -== What is RESTEasy Reactive? - -RESTEasy Reactive is a new link:{jaxrsspec}[JAX-RS] -implementation written from the ground up to work on our -common https://vertx.io/[Vert.x] layer and is thus fully reactive, while also being very tightly integrated with -Quarkus and consequently moving a lot of work to build time. - -You should be able to use it in place of any JAX-RS implementation, but on top of that it has -great performance for both blocking and non-blocking endpoints, and a lot of new features on top -of what JAX-RS provides. 
-
-== Writing endpoints
-
-=== Getting started
-
-Add the following dependency to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-reactive</artifactId>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-implementation("io.quarkus:quarkus-resteasy-reactive")
----
-
-You can now write your first endpoint in the `org.acme.rest.Endpoint` class:
-
-[source,java]
----
-package org.acme.rest;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-@Path("")
-public class Endpoint {
-
-    @GET
-    public String hello() {
-        return "Hello, World!";
-    }
-}
----
-
-=== Terminology
-
-REST:: https://en.wikipedia.org/wiki/Representational_state_transfer[REpresentational State Transfer]
-Endpoint:: Java method which is called to serve a REST call
-URL / URI (Uniform Resource Locator / Identifier):: Used to identify the location of REST resources (https://tools.ietf.org/html/rfc7230#section-2.7[specification])
-Resource:: Represents your domain object. This is what your API serves and modifies. Also called an `entity` in JAX-RS.
-Representation:: How your resource is represented on the wire, can vary depending on content types -Content type:: Designates a particular representation (also called a media type), for example `text/plain` or `application/json` -HTTP:: Underlying wire protocol for routing REST calls (see https://tools.ietf.org/html/rfc7230[HTTP specifications]) -HTTP request:: the request part of the HTTP call, consisting of an HTTP method, a target URI, headers and an optional message body -HTTP response:: the response part of the HTTP call, consisting of an HTTP response status, headers and an optional message body - -=== Declaring endpoints: URI mapping - -Any class annotated with a link:{jaxrsapi}/javax/ws/rs/Path.html[`@Path`] annotation can have its methods exposed as REST endpoints, -provided they have an HTTP method annotation (see below). - -That link:{jaxrsapi}/javax/ws/rs/Path.html[`@Path`] annotation defines the URI prefix under which those methods will be exposed. It can -be empty, or contain a prefix such as `rest` or `rest/V1`. - -Each exposed endpoint method can in turn have another link:{jaxrsapi}/javax/ws/rs/Path.html[`@Path`] annotation which adds to its containing -class annotation. For example, this defines a `rest/hello` endpoint: - -[source,java] ----- -package org.acme.rest; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -@Path("rest") -public class Endpoint { - - @Path("hello") - @GET - public String hello() { - return "Hello, World!"; - } -} ----- - -See <> for more information about URI mapping. - -You can set the root path for all rest endpoints using the `@ApplicationPath` annotation, as shown below. 
-
-[source,java]
----
-package org.acme.rest;
-
-import javax.ws.rs.ApplicationPath;
-import javax.ws.rs.core.Application;
-
-@ApplicationPath("/api")
-public class MyApplication extends Application {
-
-}
----
-
-This will cause all rest endpoints to be resolved relative to `/api`, so the endpoint above with `@Path("rest")` would
-be accessible at `/api/rest/`. You can also set the `quarkus.rest.path` build time property to set the root path if you
-don't want to use an annotation.
-
-=== Declaring endpoints: HTTP methods
-
-Each endpoint method must be annotated with one of the following annotations, which defines which HTTP
-method will be mapped to the method:
-
-.Table HTTP method annotations
-|===
-|Annotation|Usage
-
-|link:{jaxrsapi}/javax/ws/rs/GET.html[`@GET`]
-|Obtain a resource representation, should not modify state, link:{httpspec}#section-4.2.2[idempotent] (link:{httpspec}#section-4.3.1[HTTP docs])
-
-|link:{jaxrsapi}/javax/ws/rs/HEAD.html[`@HEAD`]
-|Obtain metadata about a resource, similar to `GET` with no body (link:{httpspec}#section-4.3.2[HTTP docs])
-
-|link:{jaxrsapi}/javax/ws/rs/POST.html[`@POST`]
-|Create a resource and obtain a link to it (link:{httpspec}#section-4.3.3[HTTP docs])
-
-|link:{jaxrsapi}/javax/ws/rs/PUT.html[`@PUT`]
-|Replace a resource or create one, should be link:{httpspec}#section-4.2.2[idempotent] (link:{httpspec}#section-4.3.4[HTTP docs])
-
-|link:{jaxrsapi}/javax/ws/rs/DELETE.html[`@DELETE`]
-|Delete an existing resource, link:{httpspec}#section-4.2.2[idempotent] (link:{httpspec}#section-4.3.5[HTTP docs])
-
-|link:{jaxrsapi}/javax/ws/rs/OPTIONS.html[`@OPTIONS`]
-|Obtain information about a resource, link:{httpspec}#section-4.2.2[idempotent] (link:{httpspec}#section-4.3.7[HTTP docs])
-
-|link:{jaxrsapi}/javax/ws/rs/PATCH.html[`@PATCH`]
-|Update a resource, or create one, not link:{httpspec}#section-4.2.2[idempotent] (https://tools.ietf.org/html/rfc5789#section-2[HTTP docs])
-
-|===
-
-You can also declare other
HTTP methods by declaring them as an annotation with the -link:{jaxrsapi}/javax/ws/rs/HttpMethod.html[`@HttpMethod`] annotation: - -[source,java] ----- -package org.acme.rest; - -import java.lang.annotation.Retention; -import java.lang.annotation.RetentionPolicy; - -import javax.ws.rs.HttpMethod; -import javax.ws.rs.Path; - -@Retention(RetentionPolicy.RUNTIME) -@HttpMethod("FROMAGE") -@interface FROMAGE { -} - -@Path("") -public class Endpoint { - - @FROMAGE - public String hello() { - return "Hello, Cheese World!"; - } -} ----- - -=== Declaring endpoints: representation / content types - -Each endpoint method may consume or produce specific resource representations, which are indicated by -the HTTP link:{httpspec}#section-3.1.1.5[`Content-Type`] header, which in turn contains -link:{httpspec}#section-3.1.1.1[MIME (Media Type)] values, such as the following: - -- `text/plain` which is the default for any endpoint returning a `String`. -- `text/html` for HTML (such as with xref:qute.adoc[Qute templating]) -- `application/json` for a <> -- `text/*` which is a sub-type wildcard for any text media type -- `\*/*` which is a wildcard for any media type - -You may annotate your endpoint class with the link:{jaxrsapi}/javax/ws/rs/Produces.html[`@Produces`] -or link:{jaxrsapi}/javax/ws/rs/Consumes.html[`@Consumes`] annotations, which -allow you to specify one or more media types that your endpoint may accept as HTTP request body -or produce as HTTP response body. Those class annotations apply to each method. - -Any method may also be annotated with the link:{jaxrsapi}/javax/ws/rs/Produces.html[`@Produces`] -or link:{jaxrsapi}/javax/ws/rs/Consumes.html[`@Consumes`] annotations, in which -case they override any eventual class annotation. - -The link:{jaxrsapi}/javax/ws/rs/core/MediaType.html[`MediaType`] class has many constants you -can use to point to specific pre-defined media types. - -See <> for more information. 
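To make the relationship between wildcard and concrete media types concrete, here is a small, self-contained sketch of the matching rule (an illustration only, not the actual JAX-RS content negotiation algorithm, which also handles quality factors and specificity ordering):

```java
public class MediaTypeMatchSketch {

    // Returns true if a concrete media type such as "text/plain" is
    // covered by a declared type that may use wildcards ("text/*", "*/*").
    static boolean matches(String declared, String concrete) {
        if (declared.equals("*/*")) {
            return true;
        }
        String[] declaredParts = declared.split("/");
        String[] concreteParts = concrete.split("/");
        boolean typeOk = declaredParts[0].equals(concreteParts[0]);
        boolean subtypeOk = declaredParts[1].equals("*") || declaredParts[1].equals(concreteParts[1]);
        return typeOk && subtypeOk;
    }

    public static void main(String[] args) {
        System.out.println(matches("text/*", "text/plain"));       // true
        System.out.println(matches("text/*", "application/json")); // false
        System.out.println(matches("*/*", "application/json"));    // true
    }
}
```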

[[request-parameters]]
=== Accessing request parameters

NOTE: don't forget to configure your compiler to generate parameter name information with `-parameters` (javac)
or the `maven.compiler.parameters` property or the compiler plugin's `parameters` configuration option (https://maven.apache.org/plugins/maven-compiler-plugin/compile-mojo.html#parameters[Maven]).

The following HTTP request elements may be obtained by your endpoint method:

.Table HTTP request parameter annotations
|===
|HTTP element|Annotation|Usage

|[[path-parameter]]Path parameter
|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestPath.html[`@RestPath`] (or nothing)
|URI template parameter (simplified version of the https://tools.ietf.org/html/rfc6570[URI Template specification]),
see <> for more information.

|Query parameter
|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestQuery.html[`@RestQuery`]
|The value of a https://tools.ietf.org/html/rfc3986#section-3.4[URI query parameter]

|Header
|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestHeader.html[`@RestHeader`]
|The value of an https://tools.ietf.org/html/rfc7230#section-3.2[HTTP header]

|Cookie
|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestCookie.html[`@RestCookie`]
|The value of an https://tools.ietf.org/html/rfc6265#section-4.2[HTTP cookie]

|Form parameter
|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestForm.html[`@RestForm`]
|The value of an https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/POST[HTTP URL-encoded form]

|Matrix parameter
|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestMatrix.html[`@RestMatrix`]
|The value of a https://tools.ietf.org/html/rfc3986#section-3.3[URI path segment parameter]

|===

For each of those annotations, you may specify the name of the element they refer to, otherwise
they will use the name of the annotated method parameter.
- -If a client made the following HTTP call: - -[source,http] ----- -POST /cheeses;variant=goat/tomme?age=matured HTTP/1.1 -Content-Type: application/x-www-form-urlencoded -Cookie: level=hardcore -X-Cheese-Secret-Handshake: fist-bump - -smell=strong ----- - -Then you could obtain all the various parameters with this endpoint method: - -[source,java] ----- -package org.acme.rest; - -import javax.ws.rs.POST; -import javax.ws.rs.Path; - -import org.jboss.resteasy.reactive.RestCookie; -import org.jboss.resteasy.reactive.RestForm; -import org.jboss.resteasy.reactive.RestHeader; -import org.jboss.resteasy.reactive.RestMatrix; -import org.jboss.resteasy.reactive.RestPath; -import org.jboss.resteasy.reactive.RestQuery; - -@Path("/cheeses/{type}") -public class Endpoint { - - @POST - public String allParams(@RestPath String type, - @RestMatrix String variant, - @RestQuery String age, - @RestCookie String level, - @RestHeader("X-Cheese-Secret-Handshake") - String secretHandshake, - @RestForm String smell) { - return type + "/" + variant + "/" + age + "/" + level + "/" + secretHandshake + "/" + smell; - } -} ----- - -NOTE: the link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestPath.html[`@RestPath`] -annotation is optional: any parameter whose name matches an existing URI -template variable will be automatically assumed to have link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestPath.html[`@RestPath`]. - -You can also use any of the JAX-RS annotations link:{jaxrsapi}/javax/ws/rs/PathParam.html[`@PathParam`], -link:{jaxrsapi}/javax/ws/rs/QueryParam.html[`@QueryParam`], -link:{jaxrsapi}/javax/ws/rs/HeaderParam.html[`@HeaderParam`], -link:{jaxrsapi}/javax/ws/rs/CookieParam.html[`@CookieParam`], -link:{jaxrsapi}/javax/ws/rs/FormParam.html[`@FormParam`] or -link:{jaxrsapi}/javax/ws/rs/MatrixParam.html[`@MatrixParam`] for this, -but they require you to specify the parameter name. - -See <> for more advanced use-cases. 
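As an aside, the matrix parameter in the request above travels inside a path segment rather than in the query string. The plain-JDK sketch below shows how such a segment decomposes; it is only an illustration, the framework performs this parsing before injecting `@RestMatrix` values.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: split a raw path segment like "cheeses;variant=goat" into the
// segment name and its matrix parameters. Illustration only.
class MatrixParamSketch {

    static Map<String, String> matrixParams(String rawSegment) {
        Map<String, String> params = new LinkedHashMap<>();
        String[] parts = rawSegment.split(";");
        for (int i = 1; i < parts.length; i++) { // parts[0] is the segment name
            String[] kv = parts[i].split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }
}
```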

[[uri-parameters]]
=== Declaring URI parameters

You can declare URI parameters and use regular expressions in your path, so for instance
the following endpoint will serve requests for `/hello/stef/23` and `/hello` but not
`/hello/stef/0x23`:

[source,java]
----
package org.acme.rest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("hello")
public class Endpoint {

    @Path("{name}/{age:\\d+}")
    @GET
    public String personalisedHello(String name, int age) {
        return "Hello " + name + " is your age really " + age + "?";
    }

    @GET
    public String genericHello() {
        return "Hello stranger";
    }
}
----


=== Accessing the request body

Any method parameter with no annotation will receive the request body,footnote:[Unless it is a
<> or a <>.] after it has been mapped from
its HTTP representation to the Java type of the parameter.

The following parameter types will be supported out of the box:

[[resource-types]]
.Table Request body parameter type
|===
|Type|Usage

|link:{jdkapi}/java/io/File.html[`File`]
|The entire request body in a temporary file

|`byte[]`
|The entire request body, not decoded

|`char[]`
|The entire request body, decoded

|link:{jdkapi}/java/lang/String.html[`String`]
|The entire request body, decoded

|link:{jdkapi}/java/io/InputStream.html[`InputStream`]
|The request body in a blocking stream

|link:{jdkapi}/java/io/Reader.html[`Reader`]
|The request body in a blocking stream

|All Java primitives and their wrapper classes
|Java primitive types

|link:{jdkapi}/java/math/BigDecimal.html[`BigDecimal`], link:{jdkapi}/java/math/BigInteger.html[`BigInteger`]
|Large integers and decimals

|link:{jsonpapi}/javax/json/JsonArray.html[`JsonArray`], link:{jsonpapi}/javax/json/JsonObject.html[`JsonObject`],
link:{jsonpapi}/javax/json/JsonStructure.html[`JsonStructure`], link:{jsonpapi}/javax/json/JsonValue.html[`JsonValue`]
|JSON value types

|link:{vertxapi}/io/vertx/core/buffer/Buffer.html[`Buffer`]
|Vert.x Buffer

|any other type
|Will be <>

|===

NOTE: You can add support for more <>.

=== Handling Multipart Form data

To handle HTTP requests that have `multipart/form-data` as their content type, RESTEasy Reactive introduces the
link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/MultipartForm.html[`@MultipartForm`] annotation.
Let us look at an example of its use.

Assuming an HTTP request containing a file upload and a form value containing a string description need to be handled, we could write a POJO
that will hold this information like so:

[source,java]
----
import javax.ws.rs.core.MediaType;

import org.jboss.resteasy.reactive.PartType;
import org.jboss.resteasy.reactive.RestForm;
import org.jboss.resteasy.reactive.multipart.FileUpload;

public class FormData {

    @RestForm
    @PartType(MediaType.TEXT_PLAIN)
    public String description;

    @RestForm("image")
    public FileUpload file;
}
----

The `description` field will contain the data contained in the part of the HTTP request called `description` (because
link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestForm.html[`@RestForm`] does not define a value, the field name is used),
while the `file` field will contain data about the uploaded file in the `image` part of the HTTP request.

NOTE: link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/multipart/FileUpload.html[`FileUpload`]
provides access to various metadata of the uploaded file. If, however, all you need is a handle to the uploaded file, `java.nio.file.Path` or `java.io.File` could be used.

NOTE: When access to all uploaded files without specifying the form names is needed, RESTEasy Reactive allows the use of `@RestForm List<FileUpload>`, where it is important to **not** set a name for the link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestForm.html[`@RestForm`] annotation.

NOTE: link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/PartType.html[`@PartType`] is used to aid
in deserialization of the corresponding part of the request into the desired Java type. It is very useful when,
for example, the corresponding body part is JSON and needs to be converted to a POJO.

This POJO could be used in a Resource method like so:

[source,java]
----
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.jboss.resteasy.reactive.MultipartForm;

@Path("multipart")
public class Endpoint {

    @POST
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    @Path("form")
    public String form(@MultipartForm FormData formData) {
        // return something
    }
}
----

The use of link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/MultipartForm.html[`@MultipartForm`] as
a method parameter makes RESTEasy Reactive handle the request as a multipart form request.

TIP: The use of `@MultipartForm` is actually unnecessary as RESTEasy Reactive can infer this information from the use of `@Consumes(MediaType.MULTIPART_FORM_DATA)`.

WARNING: When handling file uploads, it is very important to move the file to permanent storage (like a database, a dedicated file system or a cloud storage) in your code that handles the POJO.
Otherwise, the file will no longer be accessible when the request terminates.
Moreover, if `quarkus.http.body.delete-uploaded-files-on-end` is set to `true`, Quarkus will delete the uploaded file when the HTTP response is sent.
If the setting is disabled,
the file will reside on the file system of the server (in the directory defined by the `quarkus.http.body.uploads-directory` configuration option), but as the uploaded files are saved
with a UUID file name and no additional metadata is saved, these files are essentially a random dump of files.

Similarly, RESTEasy Reactive can produce Multipart Form data to allow users to download files from the server. For example, we could write a POJO
that will hold the information we want to expose as:

[source,java]
----
import java.io.File;

import javax.ws.rs.core.MediaType;

import org.jboss.resteasy.reactive.PartType;
import org.jboss.resteasy.reactive.RestForm;

public class DownloadFormData {

    @RestForm
    String name;

    @RestForm
    @PartType(MediaType.APPLICATION_OCTET_STREAM)
    File file;
}
----

And then expose this POJO via a Resource like so:

[source,java]
----
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("multipart")
public class Endpoint {

    @GET
    @Produces(MediaType.MULTIPART_FORM_DATA)
    @Path("file")
    public DownloadFormData getFile() {
        // return something
    }
}
----

WARNING: For the time being, returning Multipart data is limited to blocking endpoints.

=== Returning a response body

In order to return an HTTP response, simply return the resource you want from your method. The method
return type and its optional content type will be used to decide how to serialise it to the HTTP
response (see <> for more advanced information).

You can return any of the pre-defined types that you can read from the <>,
and any other type will be mapped <>.

In addition, the following return types are also supported:

.Table Additional response body parameter type
|===
|Type|Usage

|link:{jdkapi}/java/nio/file/Path.html[`Path`]
|The contents of the file specified by the given path

|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/PathPart.html[`PathPart`]
|The partial contents of the file specified by the given path

|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/FilePart.html[`FilePart`]
|The partial contents of a file

|link:{vertxapi}/io/vertx/core/file/AsyncFile.html[`AsyncFile`]
|Vert.x AsyncFile, which can be in full, or partial

|===

Alternatively, you can also return a <> such as link:{mutinyapi}/io/smallrye/mutiny/Uni.html[`Uni`],
link:{mutinyapi}/io/smallrye/mutiny/Multi.html[`Multi`] or
link:{jdkapi}/java/util/concurrent/CompletionStage.html[`CompletionStage`]
that resolve to one of the mentioned return types.

=== Setting other response properties

==== Manually setting the response

If you need to set more properties on the HTTP response than just the body, such as the status code
or headers, you can make your resource method return `org.jboss.resteasy.reactive.RestResponse`.

An example of this could look like:

[source,java]
----
package org.acme.rest;

import java.time.Duration;
import java.time.Instant;
import java.util.Date;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.NewCookie;

import org.jboss.resteasy.reactive.RestResponse;
import org.jboss.resteasy.reactive.RestResponse.ResponseBuilder;

@Path("")
public class Endpoint {

    @GET
    public RestResponse<String> hello() {
        // HTTP OK status with text/plain content type
        return ResponseBuilder.ok("Hello, World!", MediaType.TEXT_PLAIN_TYPE)
            // set a response header
            .header("X-FroMage", "Camembert")
            // set the Expires response header to two days from now
            .expires(Date.from(Instant.now().plus(Duration.ofDays(2))))
            // send a new cookie
            .cookie(new NewCookie("Flavour", "praliné"))
            // end of builder API
            .build();
    }
}
----

NOTE: You can also use the JAX-RS type link:{jaxrsapi}/javax/ws/rs/core/Response.html[`Response`] but it is
not strongly typed to your entity.

==== Using annotations

Alternatively, if you only need to set the status code and/or HTTP headers with static values, you can use `@org.jboss.resteasy.reactive.ResponseStatus` and/or `@org.jboss.resteasy.reactive.ResponseHeader` respectively.

An example of this could look like:

[source,java]
----
package org.acme.rest;

import org.jboss.resteasy.reactive.ResponseHeader;
import org.jboss.resteasy.reactive.ResponseStatus;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("")
public class Endpoint {

    @ResponseStatus(201)
    @ResponseHeader(name = "X-FroMage", value = "Camembert")
    @GET
    public String hello() {
        return "Hello, World!";
    }
}
----

[[reactive]]
=== Async/reactive support

If your endpoint method needs to accomplish an asynchronous or reactive task before
being able to answer, you can declare your method to return the
link:{mutinyapi}/io/smallrye/mutiny/Uni.html[`Uni`] type (from https://smallrye.io/smallrye-mutiny/[Mutiny]), in which
case the current HTTP request will be automatically suspended after your method, until
the returned link:{mutinyapi}/io/smallrye/mutiny/Uni.html[`Uni`] instance resolves to a value,
which will be mapped to a response exactly according to the previously described rules:

[source,java]
----
package org.acme.rest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import io.smallrye.mutiny.Uni;

@Path("escoffier")
public class Endpoint {

    @GET
    public Uni<Book> culinaryGuide() {
        return Book.findByIsbn("978-2081229297");
    }
}
----

This allows you to not block the event-loop thread while the book is being fetched from the
database, and allows Quarkus to serve more requests until your book is ready to
be sent to the client and terminate this request. Check out our
<> for more information.

The link:{jdkapi}/java/util/concurrent/CompletionStage.html[`CompletionStage`] return
type is also supported.

=== Streaming support

If you want to stream your response element by element, you can make your endpoint method return a
link:{mutinyapi}/io/smallrye/mutiny/Multi.html[`Multi`] type (from https://smallrye.io/smallrye-mutiny/[Mutiny]).

This is especially useful for streaming text or binary data.

This example, using https://github.com/quarkiverse/quarkus-reactive-messaging-http[Reactive Messaging HTTP], shows how to stream
text data:

[source,java]
----
package org.acme.rest;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.reactive.messaging.Channel;

import io.smallrye.mutiny.Multi;

@Path("logs")
public class Endpoint {

    @Inject
    @Channel("log-out")
    Multi<String> logs;

    @GET
    public Multi<String> streamLogs() {
        return logs;
    }
}
----

NOTE: Response filters are not invoked on streamed responses, because they would give a false
impression that you can set headers or HTTP status codes, which is not true after the initial
response.

=== Server-Sent Event (SSE) support

If you want to stream JSON objects in your response, you can use
https://html.spec.whatwg.org/multipage/server-sent-events.html[Server-Sent Events]
by just annotating your endpoint method with
link:{jaxrsapi}/javax/ws/rs/Produces.html[`@Produces(MediaType.SERVER_SENT_EVENTS)`]
and specifying that each element should be <> with
`@RestSseElementType(MediaType.APPLICATION_JSON)`.

[source,java]
----
package org.acme.rest;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.jboss.resteasy.reactive.RestSseElementType;

import io.smallrye.mutiny.Multi;

import io.smallrye.reactive.messaging.annotations.Channel;

@Path("escoffier")
public class Endpoint {

    // Inject our Book channel
    @Inject
    @Channel("book-out")
    Multi<Book> books;

    @GET
    // Send the stream over SSE
    @Produces(MediaType.SERVER_SENT_EVENTS)
    // Each element will be sent as JSON
    @RestSseElementType(MediaType.APPLICATION_JSON)
    public Multi<Book> stream() {
        return books;
    }
}
----

=== Controlling HTTP Caching features

RESTEasy Reactive provides the link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/Cache.html[`@Cache`]
and link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/NoCache.html[`@NoCache`] annotations to facilitate
handling HTTP caching semantics, i.e. setting the `Cache-Control` HTTP header.

These annotations can be placed either on a Resource Method or a Resource Class (in which case they apply to all Resource Methods of the class that do *not* contain the same annotation) and allow users
to return domain objects and not have to deal with building up the `Cache-Control` HTTP header explicitly.

While link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/Cache.html[`@Cache`]
builds a complex `Cache-Control` header, link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/NoCache.html[`@NoCache`]
is a simplified notation to say that you don't want anything cached; i.e. `Cache-Control: no-cache`.
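To visualise what such a header looks like, here is a hedged plain-Java sketch that assembles a `Cache-Control` value from `@Cache`-style attributes (`maxAge`, `mustRevalidate`, `noStore`). It is an illustration of the header's shape, not the RESTEasy Reactive implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: assemble a Cache-Control header value from @Cache-style attributes.
// Illustration only; the annotation supports more directives than shown here.
class CacheControlSketch {

    static String cacheControl(int maxAge, boolean mustRevalidate, boolean noStore) {
        List<String> directives = new ArrayList<>();
        if (noStore) directives.add("no-store");
        if (maxAge >= 0) directives.add("max-age=" + maxAge);
        if (mustRevalidate) directives.add("must-revalidate");
        return String.join(", ", directives);
    }
}
```

For instance, `@Cache(maxAge = 60, mustRevalidate = true)` would correspond to a header value of `max-age=60, must-revalidate`.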

NOTE: More information on the `Cache-Control` header can be found in link:https://datatracker.ietf.org/doc/html/rfc7234[RFC 7234].

[[context-objects]]
=== Accessing context objects

There are a number of contextual objects that the framework will give you, if your endpoint
method takes parameters of the following type:

.Table Context object
|===
|Type|Usage

|link:{jaxrsapi}/javax/ws/rs/core/HttpHeaders.html[`HttpHeaders`]
|All the request headers

|link:{jaxrsapi}/javax/ws/rs/container/ResourceInfo.html[`ResourceInfo`]
|Information about the current endpoint method and class (requires reflection)

|link:{jaxrsapi}/javax/ws/rs/core/SecurityContext.html[`SecurityContext`]
|Access to the current user and roles

|link:{resteasy-reactive-api}/org/jboss/resteasy/reactive/server/SimpleResourceInfo.html[`SimpleResourceInfo`]
|Information about the current endpoint method and class (no reflection required)

|link:{jaxrsapi}/javax/ws/rs/core/UriInfo.html[`UriInfo`]
|Provides information about the current endpoint and application URI

|link:{jaxrsapi}/javax/ws/rs/core/Application.html[`Application`]
|Advanced: Current JAX-RS application class

|link:{jaxrsapi}/javax/ws/rs/core/Configuration.html[`Configuration`]
|Advanced: Configuration about the deployed JAX-RS application

|link:{jaxrsapi}/javax/ws/rs/ext/Providers.html[`Providers`]
|Advanced: Runtime access to JAX-RS providers

|link:{jaxrsapi}/javax/ws/rs/core/Request.html[`Request`]
|Advanced: Access to the current HTTP method and <>

|link:{jaxrsapi}/javax/ws/rs/container/ResourceContext.html[`ResourceContext`]
|Advanced: access to instances of endpoints

|link:{resteasy-reactive-api}/org/jboss/resteasy/reactive/server/spi/ServerRequestContext.html[`ServerRequestContext`]
|Advanced: RESTEasy Reactive access to the current request/response

|link:{jaxrsapi}/javax/ws/rs/sse/Sse.html[`Sse`]
|Advanced: Complex SSE use-cases

-|link:{vertxapi}/io/vertx/core/http/HttpServerRequest.html[`HttpServerRequest`] -|Advanced: Vert.x HTTP Request - -|link:{vertxapi}/io/vertx/core/http/HttpServerResponse.html[`HttpServerResponse`] -|Advanced: Vert.x HTTP Response - -|=== - -For example, here is how you can return the name of the currently logged-in user: - -[source,java] ----- -package org.acme.rest; - -import java.security.Principal; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.SecurityContext; - -@Path("user") -public class Endpoint { - - @GET - public String userName(SecurityContext security) { - Principal user = security.getUserPrincipal(); - return user != null ? user.getName() : ""; - } -} ----- - -You can also inject those context objects using -https://javadoc.io/static/javax.inject/javax.inject/1/javax/inject/Inject.html[`@Inject`] on fields of the same -type: - -[source,java] ----- -package org.acme.rest; - -import java.security.Principal; - -import javax.inject.Inject; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.SecurityContext; - -@Path("user") -public class Endpoint { - - @Inject - SecurityContext security; - - @GET - public String userName() { - Principal user = security.getUserPrincipal(); - return user != null ? user.getName() : ""; - } -} ----- - -Or even on your endpoint constructor: - -[source,java] ----- -package org.acme.rest; - -import java.security.Principal; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.SecurityContext; - -@Path("user") -public class Endpoint { - - SecurityContext security; - - Endpoint(SecurityContext security) { - this.security = security; - } - - @GET - public String userName() { - Principal user = security.getUserPrincipal(); - return user != null ? 
user.getName() : "";
    }
}
----


[[json]]
=== JSON serialisation

Instead of importing `io.quarkus:quarkus-resteasy-reactive`, you can import either of the following modules to get support for JSON:

.Table JSON serialisation extensions
|===
|GAV|Usage

|`io.quarkus:quarkus-resteasy-reactive-jackson`
|https://github.com/FasterXML/jackson[Jackson support]

|`io.quarkus:quarkus-resteasy-reactive-jsonb`
|https://eclipse-ee4j.github.io/jsonb-api/[JSON-B support]

|===

In both cases, importing those modules will allow HTTP message bodies to be read from JSON
and serialised to JSON, for <>.

==== Advanced Jackson-specific features

When using the `quarkus-resteasy-reactive-jackson` extension there are some advanced features that RESTEasy Reactive supports.

[[secure-serialization]]
===== Secure serialization

When used with Jackson to perform JSON serialization, RESTEasy Reactive provides the ability to limit the set of fields that are serialized based on the roles of the current user.
This is achieved by simply annotating the fields (or getters) of the POJO being returned with `@io.quarkus.resteasy.reactive.jackson.SecureField`.

A simple example could be the following:

Assume we have a POJO named `Person` which looks like so:

[source,java]
----
package org.acme.rest;

import io.quarkus.resteasy.reactive.jackson.SecureField;

public class Person {

    @SecureField(rolesAllowed = "admin")
    private final Long id;
    private final String first;
    private final String last;

    public Person(Long id, String first, String last) {
        this.id = id;
        this.first = first;
        this.last = last;
    }

    public Long getId() {
        return id;
    }

    public String getFirst() {
        return first;
    }

    public String getLast() {
        return last;
    }
}
----

A very simple JAX-RS Resource that uses `Person` could be:

[source,java]
----
package org.acme.rest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("person")
public class PersonResource {

    @Path("{id}")
    @GET
    public Person getPerson(Long id) {
        return new Person(id, "foo", "bar");
    }
}
----

Assuming security has been set up for the application (see our xref:security.adoc[guide] for more details), when a user with the `admin` role
performs an HTTP GET on `/person/1` they will receive:

[source,json]
----
{
    "id": 1,
    "first": "foo",
    "last": "bar"
}
----

as the response.

Any user however that does not have the `admin` role will receive:

[source,json]
----
{
    "first": "foo",
    "last": "bar"
}
----

NOTE: No additional configuration needs to be applied for this secure serialization to take place. However, users can use the `@io.quarkus.resteasy.reactive.jackson.EnableSecureSerialization` and `@io.quarkus.resteasy.reactive.jackson.DisableSecureSerialization`
annotations to opt in or out for specific JAX-RS Resource classes or methods.
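The effect of `@SecureField` can be sketched in plain Java as a role check performed during serialization. The sketch below only illustrates the idea; it is not the `quarkus-resteasy-reactive-jackson` code.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Plain-Java sketch of role-based field filtering, illustrating the effect of
// @SecureField(rolesAllowed = "admin") on serialization of the Person POJO above.
class SecureSerializationSketch {

    // Serialize a person to a map, omitting the "id" field unless the
    // caller has the "admin" role.
    static Map<String, Object> serializePerson(long id, String first, String last, Set<String> roles) {
        Map<String, Object> out = new LinkedHashMap<>();
        if (roles.contains("admin")) {
            out.put("id", id);
        }
        out.put("first", first);
        out.put("last", last);
        return out;
    }
}
```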

===== @JsonView support

JAX-RS methods can be annotated with https://fasterxml.github.io/jackson-annotations/javadoc/2.10/com/fasterxml/jackson/annotation/JsonView.html[@JsonView]
in order to customize the serialization of the returned POJO, on a per-method basis. This is best explained with an example.

A typical use of `@JsonView` is to hide certain fields on certain methods. In that vein, let's define two views:

[source,java]
----
public class Views {

    public static class Public {
    }

    public static class Private extends Public {
    }
}
----

Let's assume we have the `User` POJO on which we want to hide some field during serialization. A simple example of this is:

[source,java]
----
import com.fasterxml.jackson.annotation.JsonView;

public class User {

    @JsonView(Views.Private.class)
    public int id;

    @JsonView(Views.Public.class)
    public String name;
}
----

Depending on the JAX-RS method that returns this user, we might want to exclude the `id` field from serialization - for example you might want an insecure method
to not expose this field. The way we can achieve that in RESTEasy Reactive is shown in the following example:

[source,java]
----
@JsonView(Views.Public.class)
@GET
@Path("/public")
public User userPublic() {
    return testUser();
}

@JsonView(Views.Private.class)
@GET
@Path("/private")
public User userPrivate() {
    return testUser();
}
----

When the result of the `userPublic` method is serialized, the `id` field will not be contained in the response as the `Public` view does not include it.
The result of `userPrivate` however will include the `id` as expected when serialized.

===== Completely customized per method serialization

There are times when you need to completely customize the serialization of a POJO on a per JAX-RS method basis.
For such use cases, the `@io.quarkus.resteasy.reactive.jackson.CustomSerialization` annotation
is a great tool, as it allows you to configure a per-method `com.fasterxml.jackson.databind.ObjectWriter` which can be configured at will.

Here is an example use case:

[source,java]
----
@CustomSerialization(UnquotedFields.class)
@GET
@Path("/invalid-use-of-custom-serializer")
public User invalidUseOfCustomSerializer() {
    return testUser();
}
----

where `UnquotedFields` is a `BiFunction<ObjectMapper, Type, ObjectWriter>` defined as so:

[source,java]
----
public static class UnquotedFields implements BiFunction<ObjectMapper, Type, ObjectWriter> {

    @Override
    public ObjectWriter apply(ObjectMapper objectMapper, Type type) {
        return objectMapper.writer().without(JsonWriteFeature.QUOTE_FIELD_NAMES);
    }
}
----

Essentially what this class does is force Jackson to not include quotes in the field names.

It is important to note that this customization is only performed for the serialization of the JAX-RS methods that use `@CustomSerialization(UnquotedFields.class)`.

[[xml]]
=== XML serialisation

To enable XML support, add the `quarkus-resteasy-reactive-jaxb` extension to your project.

.Table XML serialisation extension
|===
|GAV|Usage

|`io.quarkus:quarkus-resteasy-reactive-jaxb`
|https://javaee.github.io/jaxb-v2/[XML support]

|===

Importing this module will allow HTTP message bodies to be read from XML
and serialised to XML, for <>.

== More advanced usage

Here are some more advanced topics that you may not need to know about initially, but
could prove useful for more complex use cases.

[[execution-model]]
=== Execution model, blocking, non-blocking

RESTEasy Reactive is implemented using two main thread types:

- Event-loop threads: which are responsible, among other things, for reading bytes from the HTTP request and
  writing bytes back to the HTTP response
- Worker threads: they are pooled and can be used to offload long-running operations

The event-loop threads (also called IO threads) are responsible for actually performing all the IO
operations in an asynchronous way, and for triggering any listener interested in the completion of those
IO operations.

By default, the thread RESTEasy Reactive will run endpoint methods on depends on the signature of the method.
If a method returns one of the following types then it is considered non-blocking, and will be run on the IO thread
by default:

- `io.smallrye.mutiny.Uni`
- `io.smallrye.mutiny.Multi`
- `java.util.concurrent.CompletionStage`
- `org.reactivestreams.Publisher`
- Kotlin `suspend` functions

This 'best guess' approach means that the majority of operations will run on the correct thread by default. If you are
writing reactive code then your method will generally return one of these types, and will be executed on the IO thread.
If you are writing blocking code your methods will generally return the result directly, and these will be run on a worker
thread.

You can override this behaviour using the
https://javadoc.io/doc/io.smallrye.common/smallrye-common-annotation/1.5.0/io/smallrye/common/annotation/Blocking.html[`@Blocking`]
and
https://javadoc.io/doc/io.smallrye.common/smallrye-common-annotation/1.5.0/io/smallrye/common/annotation/NonBlocking.html[`@NonBlocking`]
annotations. This can be applied at the method, class or `javax.ws.rs.core.Application` level.

The example below will override the default behaviour and always run on a worker thread, even though it returns a `Uni`.

[source,java]
----
package org.acme.rest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import io.smallrye.common.annotation.Blocking;
import io.smallrye.mutiny.Uni;

@Path("yawn")
public class Endpoint {

    @Blocking
    @GET
    public Uni<String> blockingHello() throws InterruptedException {
        // do a blocking operation
        Thread.sleep(1000);
        return Uni.createFrom().item("Yaaaawwwwnnnnnn…");
    }
}
----

Most of the time, there are ways to achieve the same blocking operations in an asynchronous/reactive
way, using https://smallrye.io/smallrye-mutiny/[Mutiny], http://hibernate.org/reactive/[Hibernate Reactive]
or any of the xref:quarkus-reactive-architecture.adoc#quarkus-extensions-enabling-reactive[Quarkus Reactive extensions] for example:

[source,java]
----
package org.acme.rest;

import java.time.Duration;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import io.smallrye.mutiny.Uni;

@Path("yawn")
public class Endpoint {

    @GET
    public Uni<String> blockingHello() {
        return Uni.createFrom().item("Yaaaawwwwnnnnnn…")
            // do a non-blocking sleep
            .onItem().delayIt().by(Duration.ofSeconds(2));
    }
}
----

If a method or class is annotated with `javax.transaction.Transactional` then it will also be treated as a blocking
method. This is because JTA is a blocking technology, and is generally used with other blocking technology such as
Hibernate and JDBC. An explicit `@Blocking` or `@NonBlocking` on the class will override this behaviour.

==== Overriding the default behaviour

If you want to override the default behaviour you can annotate a `javax.ws.rs.core.Application` subclass in your application
with `@Blocking` or `@NonBlocking`, and this will set the default for every method that does not have an explicit annotation.

Behaviour can still be overridden on a class or method level by annotating them directly, however all endpoints without
an annotation will now follow the default, no matter their method signature.
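The default dispatch rule described above, non-blocking when the return type is reactive unless an annotation overrides it, can be sketched with plain JDK reflection. This is a simplified illustration: only `CompletionStage` is listed here, while the real check also covers `Uni`, `Multi`, `Publisher`, Kotlin suspend functions and the `@Blocking`/`@NonBlocking` annotations.

```java
import java.lang.reflect.Method;
import java.util.Set;
import java.util.concurrent.CompletionStage;

// Sketch of the default dispatch decision: a method is treated as non-blocking
// when its return type is one of the known reactive types. Simplified on purpose.
class DispatchSketch {

    static final Set<Class<?>> NON_BLOCKING_RETURN_TYPES = Set.of(CompletionStage.class);

    // Classify one of the hypothetical endpoint methods below by name.
    static boolean runsOnIoThread(String methodName) {
        try {
            Method m = DispatchSketch.class.getDeclaredMethod(methodName);
            return NON_BLOCKING_RETURN_TYPES.stream()
                    .anyMatch(t -> t.isAssignableFrom(m.getReturnType()));
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }

    // Two hypothetical endpoint signatures to classify:
    static CompletionStage<String> reactiveHello() { return null; }
    static String blockingHello() { return null; }
}
```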
=== Exception mapping

If your application needs to return non-nominal HTTP codes in error cases, the best approach is
to throw exceptions that result in the proper HTTP response being sent by the
framework, using link:{jaxrsapi}/javax/ws/rs/WebApplicationException.html[`WebApplicationException`] or any of its subtypes:

[source,java]
----
package org.acme.rest;

import javax.ws.rs.BadRequestException;
import javax.ws.rs.GET;
import javax.ws.rs.NotFoundException;
import javax.ws.rs.Path;

@Path("fromages/{fromage}")
public class Endpoint {

    @GET
    public String findFromage(String fromage) {
        if(fromage == null)
            // send a 400
            throw new BadRequestException();
        if(!fromage.equals("camembert"))
            // send a 404
            throw new NotFoundException("Unknown cheese: " + fromage);
        return "Camembert is a very nice cheese";
    }
}
----

If your endpoint method delegates calls to another service layer which
does not know about JAX-RS, you need a way to turn service exceptions into an
HTTP response. You can do that using the
link:{resteasy-reactive-api}/org/jboss/resteasy/reactive/server/ServerExceptionMapper.html[`@ServerExceptionMapper`]
annotation on a method with one parameter of the exception type you want to handle, turning
that exception into a link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestResponse.html[`RestResponse`] (or a
link:{mutinyapi}/io/smallrye/mutiny/Uni.html[`Uni<RestResponse<?>>`]):

[source,java]
----
package org.acme.rest;

import java.util.Map;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.BadRequestException;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

import org.jboss.resteasy.reactive.server.ServerExceptionMapper;
import org.jboss.resteasy.reactive.RestResponse;

class UnknownCheeseException extends RuntimeException {
    public final String name;

    public UnknownCheeseException(String name) {
        this.name = name;
    }
}

@ApplicationScoped
class CheeseService {
    private static final Map<String, String> cheeses =
            Map.of("camembert", "Camembert is a very nice cheese",
                    "gouda", "Gouda is acceptable too, especially with cumin");

    public String findCheese(String name) {
        String ret = cheeses.get(name);
        if(ret != null)
            return ret;
        throw new UnknownCheeseException(name);
    }
}

@Path("fromages/{fromage}")
public class Endpoint {

    @Inject
    CheeseService cheeses;

    @ServerExceptionMapper
    public RestResponse<String> mapException(UnknownCheeseException x) {
        return RestResponse.status(Response.Status.NOT_FOUND, "Unknown cheese: " + x.name);
    }

    @GET
    public String findFromage(String fromage) {
        if(fromage == null)
            // send a 400
            throw new BadRequestException();
        return cheeses.findCheese(fromage);
    }
}
----

NOTE: Exception mappers defined in REST endpoint classes will only be called if the
exception is thrown in the same class. If you want to define global exception mappers,
simply define them outside a REST endpoint class:

[source,java]
----
package org.acme.rest;

import javax.ws.rs.core.Response;

import org.jboss.resteasy.reactive.server.ServerExceptionMapper;
import org.jboss.resteasy.reactive.RestResponse;

class ExceptionMappers {
    @ServerExceptionMapper
    public RestResponse<String> mapException(UnknownCheeseException x) {
        return RestResponse.status(Response.Status.NOT_FOUND, "Unknown cheese: " + x.name);
    }
}
----

You can also declare link:{jaxrsspec}#exceptionmapper[exception mappers in the JAX-RS way].
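For completeness, here is a sketch of what the JAX-RS-style equivalent of the `UnknownCheeseException` mapper could look like, using the standard `javax.ws.rs.ext.ExceptionMapper` contract (the class name is illustrative):

```java
package org.acme.rest;

import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// standard JAX-RS exception mapper: discovered via @Provider,
// applies globally to any thrown UnknownCheeseException
@Provider
public class UnknownCheeseExceptionMapper implements ExceptionMapper<UnknownCheeseException> {

    @Override
    public Response toResponse(UnknownCheeseException x) {
        return Response.status(Response.Status.NOT_FOUND)
                .entity("Unknown cheese: " + x.name)
                .build();
    }
}
```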
Your exception mapper may declare any of the following parameter types:

.Table Exception mapper parameters
|===
|Type|Usage

|An exception type
|Defines the exception type you want to handle

|Any of the supported endpoint method parameter types
|

|link:{jaxrsapi}/javax/ws/rs/container/ContainerRequestContext.html[`ContainerRequestContext`]
|A context object to access the current request

|===

It may declare any of the following return types:

.Table Exception mapper return types
|===
|Type|Usage

|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestResponse.html[`RestResponse`] or link:{jaxrsapi}/javax/ws/rs/core/Response.html[`Response`]
|The response to send to the client when the exception occurs

|link:{mutinyapi}/io/smallrye/mutiny/Uni.html[`Uni<RestResponse<?>>`] or link:{mutinyapi}/io/smallrye/mutiny/Uni.html[`Uni<Response>`]
|An asynchronous response to send to the client when the exception occurs

|===

=== Request or response filters

You can declare functions which are invoked in the following phases of the request processing:

- Before the endpoint method is identified: pre-routing request filter
- After routing, but before the endpoint method is called: normal request filter
- After the endpoint method is called: response filter

These filters allow you to do various things such as examine the request URI and
HTTP method, influence routing, look at or change request headers, abort the request,
or modify the response.
Request filters can be declared with the
link:{resteasy-reactive-api}/org/jboss/resteasy/reactive/server/ServerRequestFilter.html[`@ServerRequestFilter`]
annotation:

[source,java]
----
import java.net.URI;
import java.util.Optional;

import javax.ws.rs.HttpMethod;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.core.Response;

import org.jboss.resteasy.reactive.RestResponse;
import org.jboss.resteasy.reactive.server.ServerRequestFilter;

class Filters {

    @ServerRequestFilter(preMatching = true)
    public void preMatchingFilter(ContainerRequestContext requestContext) {
        // make sure we don't lose cheese lovers
        if("yes".equals(requestContext.getHeaderString("Cheese"))) {
            requestContext.setRequestUri(URI.create("/cheese"));
        }
    }

    @ServerRequestFilter
    public Optional<RestResponse<Void>> getFilter(ContainerRequestContext ctx) {
        // only allow GET methods for now
        if(!ctx.getMethod().equals(HttpMethod.GET)) {
            return Optional.of(RestResponse.status(Response.Status.METHOD_NOT_ALLOWED));
        }
        return Optional.empty();
    }
}
----

[IMPORTANT]
====
Request filters are usually executed on the same thread as the method that handles the request.
That means that if the method servicing the request is annotated with `@Blocking`, then the filters will also be run
on the worker thread.
If the method is annotated with `@NonBlocking` (or is not annotated at all), then the filters will also be run
on the same event-loop thread.

If however a filter needs to be run on the event-loop despite the fact that the method servicing the request will be
run on a worker thread, then `@ServerRequestFilter(nonBlocking=true)` can be used.
Note, however, that these filters need to run before **any** filter that does not use that setting and would run on a worker thread.
====

Similarly, response filters can be declared with the
link:{resteasy-reactive-api}/org/jboss/resteasy/reactive/server/ServerResponseFilter.html[`@ServerResponseFilter`]
annotation:

[source,java]
----
import javax.ws.rs.container.ContainerResponseContext;

import org.jboss.resteasy.reactive.server.ServerResponseFilter;

class Filters {
    @ServerResponseFilter
    public void getFilter(ContainerResponseContext responseContext) {
        Object entity = responseContext.getEntity();
        if(entity instanceof String) {
            // make it shout
            responseContext.setEntity(((String)entity).toUpperCase());
        }
    }
}
----

You can also link:{jaxrsspec}#filters[declare request and response filters in the JAX-RS way].

Your filters may declare any of the following parameter types:

.Table Filter parameters
|===
|Type|Usage

|Any of the supported endpoint method parameter types
|

|link:{jaxrsapi}/javax/ws/rs/container/ContainerRequestContext.html[`ContainerRequestContext`]
|A context object to access the current request

|link:{jaxrsapi}/javax/ws/rs/container/ContainerResponseContext.html[`ContainerResponseContext`]
|A context object to access the current response

|link:{jdkapi}/java/lang/Throwable.html[`Throwable`]
|Any thrown exception, or `null` (only for response filters)

|===

It may declare any of the following return types:

.Table Filter return types
|===
|Type|Usage

|link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestResponse.html[`RestResponse`] or link:{jaxrsapi}/javax/ws/rs/core/Response.html[`Response`]
|The response to send to the client instead of continuing the filter chain, or `null` if the filter chain should proceed

|link:{jdkapi}/java/util/Optional.html[`Optional<RestResponse<?>>`] or link:{jdkapi}/java/util/Optional.html[`Optional<Response>`]
|An optional response to send to the client instead of continuing the filter chain, or an empty value if the filter chain should proceed

|link:{mutinyapi}/io/smallrye/mutiny/Uni.html[`Uni<RestResponse<?>>`] or link:{mutinyapi}/io/smallrye/mutiny/Uni.html[`Uni<Response>`]
|An asynchronous response to send to the client instead of continuing the filter chain, or `null`
if the filter chain should proceed - -|=== - -NOTE: You can restrict the Resource methods for which a filter runs, by using link:{jaxrsapi}/javax/ws/rs/NameBinding.html[`@NameBinding`] meta-annotations. - -=== Readers and Writers: mapping entities and HTTP bodies - -[[readers-writers]] - -Whenever your endpoint methods return a object (of when they return a -link:{resteasy-reactive-common-api}/org/jboss/resteasy/reactive/RestResponse.html[`RestResponse`] -or link:{jaxrsapi}/javax/ws/rs/core/Response.html[`Response`] with -an entity), RESTEasy Reactive will look for a way to map that into an HTTP response body. - -Similarly, whenever your endpoint method takes an object as parameter, we will look for -a way to map the HTTP request body into that object. - -This is done via a pluggable system of link:{jaxrsapi}/javax/ws/rs/ext/MessageBodyReader.html[`MessageBodyReader`] -and link:{jaxrsapi}/javax/ws/rs/ext/MessageBodyWriter.html[`MessageBodyWriter`] interfaces, -which are responsible for defining which Java type they map from/to, for which media types, -and how they turn HTTP bodies to/from Java instances of that type. 
For example, if we have our own `FroMage` type on our endpoint:

[source,java]
----
package org.acme.rest;

import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;

class FroMage {
    public String name;

    public FroMage(String name) {
        this.name = name;
    }
}

@Path("cheese")
public class Endpoint {

    @GET
    public FroMage sayCheese() {
        return new FroMage("Cheeeeeese");
    }

    @PUT
    public void addCheese(FroMage fromage) {
        System.err.println("Received a new cheese: " + fromage.name);
    }
}
----

Then we can define how to read and write it with our body reader/writer, annotated
with link:{jaxrsapi}/javax/ws/rs/ext/Provider.html[`@Provider`]:

[source,java]
----
package org.acme.rest;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.nio.charset.StandardCharsets;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

@Provider
public class FroMageBodyHandler implements MessageBodyReader<FroMage>,
        MessageBodyWriter<FroMage> {

    @Override
    public boolean isWriteable(Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType) {
        return type == FroMage.class;
    }

    @Override
    public void writeTo(FroMage t, Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType,
            MultivaluedMap<String, Object> httpHeaders,
            OutputStream entityStream)
            throws IOException, WebApplicationException {
        entityStream.write(("[FroMageV1]" + t.name)
                .getBytes(StandardCharsets.UTF_8));
    }

    @Override
    public boolean isReadable(Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType) {
        return type == FroMage.class;
    }

    @Override
    public FroMage readFrom(Class<FroMage> type, Type
genericType,
            Annotation[] annotations, MediaType mediaType,
            MultivaluedMap<String, String> httpHeaders,
            InputStream entityStream)
            throws IOException, WebApplicationException {
        String body = new String(entityStream.readAllBytes(), StandardCharsets.UTF_8);
        if(body.startsWith("[FroMageV1]"))
            return new FroMage(body.substring(11));
        throw new IOException("Invalid fromage: " + body);
    }

}
----

If you want to get the most performance out of your writer, you can extend
link:{resteasy-reactive-api}/org/jboss/resteasy/reactive/server/spi/ServerMessageBodyWriter.html[`ServerMessageBodyWriter`]
instead of link:{jaxrsapi}/javax/ws/rs/ext/MessageBodyWriter.html[`MessageBodyWriter`],
which allows you to use less reflection and bypass the blocking IO layer:

[source,java]
----
package org.acme.rest;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.nio.charset.StandardCharsets;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;

import org.jboss.resteasy.reactive.server.spi.ResteasyReactiveResourceInfo;
import org.jboss.resteasy.reactive.server.spi.ServerMessageBodyWriter;
import org.jboss.resteasy.reactive.server.spi.ServerRequestContext;

@Provider
public class FroMageBodyHandler implements MessageBodyReader<FroMage>,
        ServerMessageBodyWriter<FroMage> {

    // …

    @Override
    public boolean isWriteable(Class<?> type, ResteasyReactiveResourceInfo target,
            MediaType mediaType) {
        return type == FroMage.class;
    }

    @Override
    public void writeResponse(FroMage t, ServerRequestContext context)
            throws WebApplicationException, IOException {
        context.serverResponse().end("[FroMageV1]" + t.name);
    }
}
----

NOTE: You can restrict which content types your reader/writer applies to by adding
link:{jaxrsapi}/javax/ws/rs/Consumes.html[`@Consumes`]/link:{jaxrsapi}/javax/ws/rs/Produces.html[`@Produces`] annotations
on your provider class.

=== Reader and Writer interceptors

Just as you can intercept requests and responses, you can also intercept readers and writers, by
implementing the link:{jaxrsapi}/javax/ws/rs/ext/ReaderInterceptor.html[`ReaderInterceptor`] or
link:{jaxrsapi}/javax/ws/rs/ext/WriterInterceptor.html[`WriterInterceptor`] interface on a class annotated with
link:{jaxrsapi}/javax/ws/rs/ext/Provider.html[`@Provider`].

If we look at this endpoint:

[source,java]
----
package org.acme.rest;

import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;

@Path("cheese")
public class Endpoint {

    @GET
    public String sayCheese() {
        return "Cheeeeeese";
    }

    @PUT
    public void addCheese(String fromage) {
        System.err.println("Received a new cheese: " + fromage);
    }
}
----

We can add reader and writer interceptors like this:

[source,java]
----
package org.acme.rest;

import java.io.IOException;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.ReaderInterceptor;
import javax.ws.rs.ext.ReaderInterceptorContext;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

@Provider
public class FroMageIOInterceptor implements ReaderInterceptor, WriterInterceptor {

    @Override
    public void aroundWriteTo(WriterInterceptorContext context)
            throws IOException, WebApplicationException {
        System.err.println("Before writing " + context.getEntity());
        context.proceed();
        System.err.println("After writing " + context.getEntity());
    }

    @Override
    public Object aroundReadFrom(ReaderInterceptorContext context)
            throws IOException, WebApplicationException {
        System.err.println("Before reading " + context.getGenericType());
        Object entity = context.proceed();
        System.err.println("After reading " + entity);
        return
entity;
    }
}
----

=== Parameter mapping

All endpoint method parameters can be declared as link:{jdkapi}/java/lang/String.html[`String`], but also as
any of the following types:

- Types for which a link:{jaxrsapi}/javax/ws/rs/ext/ParamConverter.html[`ParamConverter`] is available via a registered
link:{jaxrsapi}/javax/ws/rs/ext/ParamConverterProvider.html[`ParamConverterProvider`].
- Primitive types.
- Types that have a constructor that accepts a single link:{jdkapi}/java/lang/String.html[`String`] argument.
- Types that have a static method named `valueOf` or `fromString` with a single link:{jdkapi}/java/lang/String.html[`String`] argument
that returns an instance of the type. If both methods are present then `valueOf` will be used, unless
the type is an `enum`, in which case `fromString` will be used.
- link:{jdkapi}/java/util/List.html[`List<T>`], link:{jdkapi}/java/util/Set.html[`Set<T>`], or
link:{jdkapi}/java/util/SortedSet.html[`SortedSet<T>`], where `T` satisfies any of the above criteria.

The following example illustrates all those possibilities:

[source,java]
----
package org.acme.rest;

import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.ext.ParamConverter;
import javax.ws.rs.ext.ParamConverterProvider;
import javax.ws.rs.ext.Provider;

import org.jboss.resteasy.reactive.RestQuery;

@Provider
class MyConverterProvider implements ParamConverterProvider {

    @Override
    public <T> ParamConverter<T> getConverter(Class<T> rawType, Type genericType,
            Annotation[] annotations) {
        // declare a converter for this type
        if(rawType == Converter.class) {
            return (ParamConverter<T>) new MyConverter();
        }
        return null;
    }

}

// this is my custom converter
class MyConverter implements ParamConverter<Converter> {

    @Override
    public Converter fromString(String value) {
        return new Converter(value);
    }

    @Override
    public String toString(Converter value) {
        return
value.value;
    }

}

// this uses a converter
class Converter {
    String value;
    Converter(String value) {
        this.value = value;
    }
}

class Constructor {
    String value;
    // this will use the constructor
    public Constructor(String value) {
        this.value = value;
    }
}

class ValueOf {
    String value;
    private ValueOf(String value) {
        this.value = value;
    }
    // this will use the valueOf method
    public static ValueOf valueOf(String value) {
        return new ValueOf(value);
    }
}

@Path("hello")
public class Endpoint {

    @Path("{converter}/{constructor}/{primitive}/{valueOf}")
    @GET
    public String conversions(Converter converter, Constructor constructor,
            int primitive, ValueOf valueOf,
            @RestQuery List<String> list) {
        return converter + "/" + constructor + "/" + primitive
                + "/" + valueOf + "/" + list;
    }
}
----

==== Handling dates

RESTEasy Reactive supports implementations of `java.time.Temporal` (such as `java.time.LocalDateTime`) as query, path or form parameters. Furthermore, it provides the `@org.jboss.resteasy.reactive.DateFormat` annotation, which can be used to
set a custom expected pattern (otherwise the JDK's default format for each type is used implicitly).
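As a sketch of how this might look (the resource path, parameter name and pattern are illustrative):

```java
package org.acme.rest;

import java.time.LocalDate;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.jboss.resteasy.reactive.DateFormat;
import org.jboss.resteasy.reactive.RestQuery;

@Path("events")
public class EventResource {

    // expects e.g. ?day=09-12-2020 instead of the default ISO format
    @GET
    public String byDay(@RestQuery @DateFormat(pattern = "dd-MM-yyyy") LocalDate day) {
        return "Events on " + day;
    }
}
```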
- -=== Preconditions - -https://tools.ietf.org/html/rfc7232[HTTP allows requests to be conditional], based on a number of -conditions, such as: - -- Date of last resource modification -- A resource tag, similar to a hash code of the resource to designate its state or version - -Let's see how you can do conditional request validation using the -link:{jaxrsapi}/javax/ws/rs/core/Request.html[`Request`] context object: - -[source,java] ----- -package org.acme.rest; - -import java.time.Instant; -import java.time.temporal.ChronoUnit; -import java.time.temporal.TemporalUnit; -import java.util.Date; - -import javax.ws.rs.GET; -import javax.ws.rs.PUT; -import javax.ws.rs.Path; -import javax.ws.rs.core.EntityTag; -import javax.ws.rs.core.Request; -import javax.ws.rs.core.Response; -import javax.ws.rs.core.Response.ResponseBuilder; - -@Path("conditional") -public class Endpoint { - - // It's important to keep our date on seconds because that's how it's sent to the - // user in the Last-Modified header - private Date date = Date.from(Instant.now().truncatedTo(ChronoUnit.SECONDS)); - private int version = 1; - private EntityTag tag = new EntityTag("v1"); - private String resource = "Some resource"; - - @GET - public Response get(Request request) { - // first evaluate preconditions - ResponseBuilder conditionalResponse = request.evaluatePreconditions(date, tag); - if(conditionalResponse != null) - return conditionalResponse.build(); - // preconditions are OK - return Response.ok(resource) - .lastModified(date) - .tag(tag) - .build(); - } - - @PUT - public Response put(Request request, String body) { - // first evaluate preconditions - ResponseBuilder conditionalResponse = request.evaluatePreconditions(date, tag); - if(conditionalResponse != null) - return conditionalResponse.build(); - // preconditions are OK, we can update our resource - resource = body; - date = Date.from(Instant.now().truncatedTo(ChronoUnit.SECONDS)); - version++; - tag = new EntityTag("v" + version); - 
return Response.ok(resource) - .lastModified(date) - .tag(tag) - .build(); - } -} ----- - -When we call `GET /conditional` the first time, we will get this response: - -[source] ----- -HTTP/1.1 200 OK -Content-Type: text/plain;charset=UTF-8 -ETag: "v1" -Last-Modified: Wed, 09 Dec 2020 16:10:19 GMT -Content-Length: 13 - -Some resource ----- - -So now if we want to check if we need to fetch a new version, we can make the following request: - -[source] ----- -GET /conditional HTTP/1.1 -Host: localhost:8080 -If-Modified-Since: Wed, 09 Dec 2020 16:10:19 GMT ----- - -And we would get the following response: - -[source] ----- -HTTP/1.1 304 Not Modified ----- - -Because the resource has not been modified since that date. This saves on sending the resource, -but can also help your users detect concurrent modification, for example, let's suppose that one -client wants to update the resource, but another user has modified it since. You can follow the -previous `GET` request with this update: - -[source] ----- -PUT /conditional HTTP/1.1 -Host: localhost:8080 -If-Unmodified-Since: Wed, 09 Dec 2020 16:25:43 GMT -If-Match: v1 -Content-Length: 8 -Content-Type: text/plain - -newstuff ----- - -And if some other user has modified the resource between your `GET` and your `PUT` you would -get this answer back: - -[source] ----- -HTTP/1.1 412 Precondition Failed -ETag: "v2" -Content-Length: 0 ----- - -=== Negotiation - -One of the main ideas of REST (https://tools.ietf.org/html/rfc7231#section-3.4[and HTTP]) is that -your resource is independent from its representation, and -that both the client and server are free to represent their resources in as many media types as -they want. This allows the server to declare support for multiple representations and let the -client declare which ones it supports and get served something appropriate. 
- -The following endpoint supports serving cheese in plain text or JSON: - -[source,java] ----- -package org.acme.rest; - -import javax.ws.rs.Consumes; -import javax.ws.rs.GET; -import javax.ws.rs.PUT; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import com.fasterxml.jackson.annotation.JsonCreator; - -class FroMage { - public String name; - @JsonCreator - public FroMage(String name) { - this.name = name; - } - @Override - public String toString() { - return "Cheese: " + name; - } -} - -@Path("negotiated") -public class Endpoint { - - @Produces({MediaType.APPLICATION_JSON, MediaType.TEXT_PLAIN}) - @GET - public FroMage get() { - return new FroMage("Morbier"); - } - - @Consumes(MediaType.TEXT_PLAIN) - @PUT - public FroMage putString(String cheese) { - return new FroMage(cheese); - } - - @Consumes(MediaType.APPLICATION_JSON) - @PUT - public FroMage putJson(FroMage fromage) { - return fromage; - } -} ----- - -The user will be able to select which representation it gets with the -link:{httpspec}#section-5.3.2[`Accept`] header, in the case of JSON: - -[source,sh] ----- -> GET /negotiated HTTP/1.1 -> Host: localhost:8080 -> Accept: application/json - -< HTTP/1.1 200 OK -< Content-Type: application/json -< Content-Length: 18 -< -< {"name":"Morbier"} ----- - -And for text: - -[source,sh] ----- -> GET /negotiated HTTP/1.1 -> Host: localhost:8080 -> Accept: text/plain -> -< HTTP/1.1 200 OK -< Content-Type: text/plain -< Content-Length: 15 -< -< Cheese: Morbier ----- - -Similarly, you can `PUT` two different representations. 
JSON:

[source,sh]
----
> PUT /negotiated HTTP/1.1
> Host: localhost:8080
> Content-Type: application/json
> Content-Length: 16
>
> {"name": "brie"}

< HTTP/1.1 200 OK
< Content-Type: application/json;charset=UTF-8
< Content-Length: 15
<
< {"name":"brie"}
----

Or plain text:

[source,sh]
----
> PUT /negotiated HTTP/1.1
> Host: localhost:8080
> Content-Type: text/plain
> Content-Length: 9
>
> roquefort

< HTTP/1.1 200 OK
< Content-Type: application/json;charset=UTF-8
< Content-Length: 20
<
< {"name":"roquefort"}
----

== Include/Exclude JAX-RS classes with build time conditions

Quarkus enables the inclusion or exclusion of JAX-RS Resources, Providers and Features directly, thanks to build time conditions, in the same way that it does for CDI beans.
The various JAX-RS classes can be annotated with profile conditions (`@io.quarkus.arc.profile.IfBuildProfile` or `@io.quarkus.arc.profile.UnlessBuildProfile`) and/or with property conditions (`@io.quarkus.arc.properties.IfBuildProperty` or `@io.quarkus.arc.properties.UnlessBuildProperty`) to indicate to Quarkus at build time under which conditions these JAX-RS classes should be included.

In the following example, Quarkus includes the endpoint `sayHello` if and only if the build profile `app1` has been enabled.

[source,java]
----
@IfBuildProfile("app1")
public class ResourceForApp1Only {

    @GET
    @Path("sayHello")
    public String sayHello() {
        return "hello";
    }
}
----

Please note that if a JAX-RS Application has been detected and the method `getClasses()` and/or `getSingletons()` has been overridden, Quarkus will ignore the build time conditions and consider only what has been defined in the JAX-RS Application.


== RESTEasy Reactive client

In addition to the server side, RESTEasy Reactive comes with a new MicroProfile REST Client implementation that is non-blocking at its core.
- -Please note that the `quarkus-rest-client` extension may not be used with RESTEasy Reactive, use `quarkus-rest-client-reactive` instead. - -See the xref:rest-client-reactive.adoc[REST Client Reactive Guide] for more information about the reactive REST client. diff --git a/_versions/2.7/guides/scheduler-reference.adoc b/_versions/2.7/guides/scheduler-reference.adoc deleted file mode 100644 index 626ae5b30bd..00000000000 --- a/_versions/2.7/guides/scheduler-reference.adoc +++ /dev/null @@ -1,335 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Scheduler Reference Guide - -include::./attributes.adoc[] - -:numbered: -:sectnums: -:sectnumlevels: 4 -:toc: - -Modern applications often need to run specific tasks periodically. -There are two scheduler extensions in Quarkus. -The `quarkus-scheduler` extension brings the API and a lightweight in-memory scheduler implementation. -The `quarkus-quartz` extension implements the API from the `quarkus-scheduler` extension and contains a scheduler implementation based on the Quartz library. -You will only need `quarkus-quartz` for more advanced scheduling use cases, such as persistent tasks, clustering and programmatic scheduling of jobs. - -NOTE: If you add the `quarkus-quartz` dependency to your project the lightweight scheduler implementation from the `quarkus-scheduler` extension is automatically disabled. - -== Scheduled Methods - -If you annotate a method with `@io.quarkus.scheduler.Scheduled` it is automatically scheduled for invocation. -In fact, such a method must be a non-private non-static method of a CDI bean. -As a consequence of being a method of a CDI bean a scheduled method can be annotated with interceptor bindings, such as `@javax.transaction.Transactional` and `@org.eclipse.microprofile.metrics.annotation.Counted`. 
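As a minimal sketch of the rules above (the bean and method names are illustrative), a scheduled bean might look like this:

```java
package org.acme.scheduler;

import javax.enterprise.context.ApplicationScoped;

import io.quarkus.scheduler.Scheduled;
import io.quarkus.scheduler.ScheduledExecution;

@ApplicationScoped
public class Jobs {

    // a non-private, non-static method of a CDI bean, returning void
    @Scheduled(every = "10s")
    void ping() {
        // periodic work goes here
    }

    // the optional single parameter gives access to execution metadata
    @Scheduled(cron = "0 0 * * * ?")
    void hourly(ScheduledExecution execution) {
        System.out.println("Fired at " + execution.getFireTime());
    }
}
```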
NOTE: If there is no CDI scope defined on the declaring class then `@Singleton` is used.

Furthermore, the annotated method must return `void` and either declare no parameters or one parameter of type `io.quarkus.scheduler.ScheduledExecution`.

TIP: The annotation is repeatable, so a single method can be scheduled multiple times.

=== Triggers

A trigger is defined either by the `@Scheduled#cron()` or by the `@Scheduled#every()` attribute.
If both are specified, the cron expression takes precedence.
If neither is specified, the build fails with an `IllegalStateException`.

==== CRON

A CRON trigger is defined by a cron-like expression.
For example, `"0 15 10 * * ?"` fires at 10:15am every day.

.CRON Trigger Example
[source,java]
----
@Scheduled(cron = "0 15 10 * * ?")
void fireAt10AmEveryDay() { }
----

The syntax used in CRON expressions is controlled by the `quarkus.scheduler.cron-type` property.
The values can be `cron4j`, `quartz`, `unix` and `spring`.
`quartz` is used by default.

The `cron` attribute supports Config Properties, including default values and nested
Property Expressions. (Note that `"{property.path}"` style expressions are still supported but don't offer the full functionality of Property Expressions.)


.CRON Config Property Example
[source,java]
----
@Scheduled(cron = "${myMethod.cron.expr}")
void myMethod() { }
----

If you wish to disable a specific scheduled method, you can set its cron expression to `"off"` or `"disabled"`.

.application.properties
[source,properties]
----
myMethod.cron.expr=disabled
----

Property Expressions allow you to define a default value that is used if the property is not configured.

.CRON Config Property Example with default `0 0 15 ? * MON *`
[source,java]
----
@Scheduled(cron = "${myMethod.cron.expr:0 0 15 ? * MON *}")
void myMethod() { }
----

If the property `myMethod.cron.expr` is undefined or `null`, the default value (`0 0 15 ? * MON *`) will be used.
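Tying these together, the cron syntax flavour itself is also configurable in `application.properties`; as an illustrative sketch (the property value below is a hypothetical example expression, not from this guide):

[source,properties]
----
# switch to 5-field Unix cron syntax instead of the default Quartz syntax
quarkus.scheduler.cron-type=unix
# 10:30 every Monday, in Unix syntax, consumed via @Scheduled(cron = "${myMethod.cron.expr}")
myMethod.cron.expr=30 10 * * 1
----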
==== Intervals

An interval trigger defines a period between invocations.
The period expression is based on the ISO-8601 duration format `PnDTnHnMn.nS` and the value of `@Scheduled#every()` is parsed with `java.time.Duration#parse(CharSequence)`.
However, if an expression starts with a digit then the `PT` prefix is added automatically.
So for example, `15m` can be used instead of `PT15M` and is parsed as "15 minutes".

.Interval Trigger Example
[source,java]
----
@Scheduled(every = "15m")
void every15Mins() { }
----

The `every` attribute supports Config Properties, including default values and nested
Property Expressions. (Note that `"{property.path}"` style expressions are still supported but don't offer the full functionality of Property Expressions.)

.Interval Config Property Example
[source,java]
----
@Scheduled(every = "${myMethod.every.expr}")
void myMethod() { }
----

Intervals can be disabled by setting their value to `"off"` or `"disabled"`.
So for example a Property Expression with the default value `"off"` can be used to disable the trigger if its Config Property has not been set.

.Interval Config Property Example with a Default Value
[source,java]
----
@Scheduled(every = "${myMethod.every.expr:off}")
void myMethod() { }
----


=== Identity

By default, a unique id is generated for each scheduled method.
This id is used in log messages and during debugging.
Sometimes the possibility to specify an explicit id may come in handy.

.Identity Example
[source,java]
----
@Scheduled(identity = "myScheduledMethod")
void myMethod() { }
----

The `identity` attribute supports Config Properties, including default values and nested
Property Expressions. (Note that `"{property.path}"` style expressions are still supported but don't offer the full functionality of Property Expressions.)
-
-.Identity Config Property Example
-[source,java]
-----
-@Scheduled(identity = "${myMethod.identity.expr}")
-void myMethod() { }
-----
-
-=== Delayed Execution
-
-`@Scheduled` provides two ways to delay the time a trigger should start firing at.
-
-`@Scheduled#delay()` and `@Scheduled#delayUnit()` together form the initial delay.
-
-[source,java]
-----
-@Scheduled(every = "2s", delay = 2, delayUnit = TimeUnit.HOURS) <1>
-void everyTwoSeconds() { }
-----
-<1> The trigger fires for the first time two hours after the application start.
-
-NOTE: The final value is always rounded to a full second.
-
-`@Scheduled#delayed()` is a text alternative to the properties above.
-The period expression is based on the ISO-8601 duration format `PnDTnHnMn.nS` and the value is parsed with `java.time.Duration#parse(CharSequence)`.
-However, if an expression starts with a digit, the `PT` prefix is added automatically.
-So for example, `15s` can be used instead of `PT15S` and is parsed as "15 seconds".
-
-[source,java]
-----
-@Scheduled(every = "2s", delayed = "2h")
-void everyTwoSeconds() { }
-----
-
-NOTE: If `@Scheduled#delay()` is set to a value greater than zero, the value of `@Scheduled#delayed()` is ignored.
-
-The main advantage over `@Scheduled#delay()` is that the value is configurable.
-The `delayed` attribute supports <> including default values and nested
-Property Expressions. (Note that `"{property.path}"` style expressions are still supported but don't offer the full functionality of Property Expressions.)
-
-[source,java]
-----
-@Scheduled(every = "2s", delayed = "${myMethod.delay.expr}") <1>
-void everyTwoSeconds() { }
-----
-<1> The config property `myMethod.delay.expr` is used to set the delay.
-
-[[concurrent_execution]]
-=== Concurrent Execution
-
-By default, a scheduled method can be executed concurrently.
-Nevertheless, it is possible to specify the strategy to handle concurrent executions via `@Scheduled#concurrentExecution()`.
-
-[source,java]
-----
-import static io.quarkus.scheduler.Scheduled.ConcurrentExecution.SKIP;
-
-@Scheduled(every = "1s", concurrentExecution = SKIP) <1>
-void nonConcurrent() {
-    // we can be sure that this method is never executed concurrently
-}
-----
-<1> Concurrent executions are skipped.
-
-TIP: A CDI event of type `io.quarkus.scheduler.SkippedExecution` is fired when an execution of a scheduled method is skipped.
-
-NOTE: Only executions within the same application instance are considered. This feature is not intended to work across a cluster.
-
-[[conditional_execution]]
-=== Conditional Execution
-
-You can define the logic to skip any execution of a scheduled method via `@Scheduled#skipExecutionIf()`.
-The specified bean class must implement `io.quarkus.scheduler.Scheduled.SkipPredicate` and the execution is skipped if the result of the `test()` method is `true`.
-
-[source,java]
-----
-class Jobs {
-
-    @Scheduled(every = "1s", skipExecutionIf = MyPredicate.class) <1>
-    void everySecond() {
-        // do something every second...
-    }
-}
-
-@Singleton <2>
-class MyPredicate implements SkipPredicate {
-
-    @Inject
-    MyService service;
-
-    @Override
-    public boolean test(ScheduledExecution execution) {
-        return !service.isStarted(); <3>
-    }
-}
-----
-<1> A bean instance of `MyPredicate.class` is used to evaluate whether an execution should be skipped. There must be exactly one bean that has the specified class in its set of bean types, otherwise the build fails.
-<2> The scope of the bean must be active during execution.
-<3> `Jobs.everySecond()` is skipped until `MyService.isStarted()` returns `true`.
-
-Note that this is equivalent to the following code:
-
-[source,java]
-----
-class Jobs {
-
-    @Inject
-    MyService service;
-
-    @Scheduled(every = "1s")
-    void everySecond() {
-        if (service.isStarted()) {
-            // do something every second...
-        }
-    }
-}
-----
-
-The main idea is to keep the logic to skip the execution outside the scheduled business methods so that it can be reused and refactored easily.
-
-TIP: A CDI event of type `io.quarkus.scheduler.SkippedExecution` is fired when an execution of a scheduled method is skipped.
-
-== Scheduler
-
-Quarkus provides a built-in bean of type `io.quarkus.scheduler.Scheduler` that can be injected and used to pause/resume the scheduler and individual scheduled methods identified by a specific `Scheduled#identity()`.
-
-.Scheduler Injection Example
-[source,java]
-----
-import io.quarkus.scheduler.Scheduler;
-
-class MyService {
-
-    @Inject
-    Scheduler scheduler;
-
-    void ping() {
-        scheduler.pause(); <1>
-        scheduler.pause("myIdentity"); <2>
-        if (scheduler.isRunning()) {
-            throw new IllegalStateException("This should never happen!");
-        }
-        scheduler.resume("myIdentity"); <3>
-        scheduler.resume(); <4>
-    }
-}
-----
-<1> Pause all triggers.
-<2> Pause a specific scheduled method by its identity.
-<3> Resume a specific scheduled method by its identity.
-<4> Resume the scheduler.
-
-== Programmatic Scheduling
-
-If you need to schedule a job programmatically, you'll need to add the xref:quartz.adoc[Quartz extension] and use the Quartz API directly.
-
-.Programmatic Scheduling with Quartz API
-[source,java]
-----
-import org.quartz.Scheduler;
-
-class MyJobs {
-
-    void onStart(@Observes StartupEvent event, Scheduler quartz) throws SchedulerException {
-        JobDetail job = JobBuilder.newJob(SomeJob.class)
-                .withIdentity("myJob", "myGroup")
-                .build();
-        Trigger trigger = TriggerBuilder.newTrigger()
-                .withIdentity("myTrigger", "myGroup")
-                .startNow()
-                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
-                        .withIntervalInSeconds(1)
-                        .repeatForever())
-                .build();
-        quartz.scheduleJob(job, trigger);
-    }
-}
-----
-
-NOTE: By default, the scheduler is not started unless a `@Scheduled` business method is found.
You may need to force the start of the scheduler for "pure" programmatic scheduling. See also <>.
-
-== Scheduled Methods and Testing
-
-It is often desirable to disable the scheduler when running tests.
-The scheduler can be disabled through the runtime config property `quarkus.scheduler.enabled`.
-If set to `false`, the scheduler is not started even though the application contains scheduled methods.
-You can even disable the scheduler for particular <>.
-
-== Metrics
-
-Some basic metrics are published out of the box if `quarkus.scheduler.metrics.enabled` is set to `true` and a metrics extension is present.
-
-If the xref:micrometer.adoc[Micrometer extension] is present, then a `@io.micrometer.core.annotation.Timed` interceptor binding is added to all `@Scheduled` methods automatically (unless it is already present), and a `io.micrometer.core.instrument.Timer` with name `scheduled.methods` and a `io.micrometer.core.instrument.LongTaskTimer` with name `scheduled.methods.running` are registered. The fully qualified name of the declaring class and the name of a `@Scheduled` method are used as tags.
-
-If the xref:smallrye-metrics.adoc[SmallRye Metrics extension] is present, then a `@org.eclipse.microprofile.metrics.annotation.Timed` interceptor binding is added to all `@Scheduled` methods automatically (unless it is already present) and a `org.eclipse.microprofile.metrics.Timer` is created for each `@Scheduled` method. The name consists of the fully qualified name of the declaring class and the name of a `@Scheduled` method. The timer has a tag `scheduled=true`.
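Both switches above are ordinary config properties, so they can be set together in `application.properties`. A sketch using the standard `%test` profile prefix to keep the scheduler off while tests run but publish metrics in normal runs:

```properties
# do not start the scheduler during tests
%test.quarkus.scheduler.enabled=false
# publish scheduler metrics (requires a metrics extension, e.g. Micrometer)
quarkus.scheduler.metrics.enabled=true
```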
- -== Configuration Reference - -include::{generated-dir}/config/quarkus-scheduler.adoc[leveloffset=+1, opts=optional] diff --git a/_versions/2.7/guides/scheduler.adoc b/_versions/2.7/guides/scheduler.adoc deleted file mode 100644 index be844d9da5d..00000000000 --- a/_versions/2.7/guides/scheduler.adoc +++ /dev/null @@ -1,191 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Scheduling Periodic Tasks - -include::./attributes.adoc[] - -Modern applications often need to run specific tasks periodically. -In this guide, you learn how to schedule periodic tasks. - -TIP: If you need a clustered scheduler use the xref:quartz.adoc[Quartz extension]. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this guide, we create a straightforward application accessible using HTTP to get the current value of a counter. -This counter is periodically (every 10 seconds) incremented. - -image::scheduling-task-architecture.png[alt=Architecture] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `scheduler-quickstart` {quickstarts-tree-url}/scheduler-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. 
Create a new project with the following command:
-
-:create-app-artifact-id: scheduler-quickstart
-:create-app-extensions: resteasy,scheduler
-include::includes/devtools/create-app.adoc[]
-
-It generates a new project including:
-
-* a landing page accessible on `http://localhost:8080`
-* example `Dockerfile` files for both `native` and `jvm` modes
-* the application configuration file
-
-The project also imports the RESTEasy and scheduler extensions.
-
-If you already have your Quarkus project configured, you can add the `scheduler` extension
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: scheduler
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-scheduler</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-scheduler")
-----
-
-[#standard-scheduling]
-== Creating a scheduled job
-
-In the `org.acme.scheduler` package, create the `CounterBean` class with the following content:
-
-[source,java]
-----
-package org.acme.scheduler;
-
-import java.util.concurrent.atomic.AtomicInteger;
-import javax.enterprise.context.ApplicationScoped;
-import io.quarkus.scheduler.Scheduled;
-import io.quarkus.scheduler.ScheduledExecution;
-
-@ApplicationScoped              // <1>
-public class CounterBean {
-
-    private AtomicInteger counter = new AtomicInteger();
-
-    public int get() {          // <2>
-        return counter.get();
-    }
-
-    @Scheduled(every="10s")     // <3>
-    void increment() {
-        counter.incrementAndGet();  // <4>
-    }
-
-    @Scheduled(cron="0 15 10 * * ?")    // <5>
-    void cronJob(ScheduledExecution execution) {
-        counter.incrementAndGet();
-        System.out.println(execution.getScheduledFireTime());
-    }
-
-    @Scheduled(cron = "{cron.expr}")    // <6>
-    void cronJobWithExpressionInConfig() {
-        counter.incrementAndGet();
-        System.out.println("Cron expression configured in application.properties");
-    }
-}
-----
-<1> Declare the bean in the _application_ scope.
-<2> The `get()` method allows retrieving the current value.
-<3> Use the `@Scheduled` annotation to instruct Quarkus to run this method every 10 seconds, provided a worker thread is available
-(Quarkus uses 10 worker threads for the scheduler). If no worker thread is available, the method invocation is re-scheduled by default, i.e.
-it is invoked as soon as possible. The invocation of a scheduled method does not depend on the status or result of the previous invocation.
-<4> The code is pretty straightforward. Every 10 seconds, the counter is incremented.
-<5> Define a job with a cron-like expression. The annotated method is executed at 10:15am every day.
-<6> Define a job with a cron-like expression `cron.expr`, which is configurable in `application.properties`.
-
-== Updating the application configuration file
-
-Edit the `application.properties` file and add the `cron.expr` configuration:
-
-[source,properties]
-----
-# By default, the syntax used for cron expressions is based on Quartz - http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html
-# You can change the syntax using the following property:
-# quarkus.scheduler.cron-type=unix
-cron.expr=*/5 * * * * ?
----- - -== Creating the REST resource - -Create the `CountResource` class as follows: - -[source,java] ----- -package org.acme.scheduler; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/count") -public class CountResource { - - @Inject - CounterBean counter; // <1> - - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return "count: " + counter.get(); // <2> - } -} ----- -<1> Inject the `CounterBean` -<2> Send back the current counter value - -== Package and run the application - -Run the application with: - -include::includes/devtools/dev.adoc[] - -In another terminal, run `curl localhost:8080/count` to check the counter value. -After a few seconds, re-run `curl localhost:8080/count` to verify the counter has been incremented. - -Observe the console to verify that the message `Cron expression configured in application.properties` has been displayed indicating -that the cron job using an expression configured in `application.properties` has been triggered. - -As usual, the application can be packaged using: - -include::includes/devtools/build.adoc[] - -And executed with `java -jar target/quarkus-app/quarkus-run.jar`. 
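The `AtomicInteger` pattern used by `CounterBean` can also be exercised outside Quarkus entirely. The following sketch (a hypothetical standalone class, not part of the quickstart) imitates `@Scheduled(every = "10s")` with a plain JDK `ScheduledExecutorService`, shortened to a 10 ms period so it finishes quickly:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CounterSketch {

    // same thread-safe counter as the quickstart's CounterBean
    static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        // plain-JDK stand-in for @Scheduled(every = "10s"), shortened to 10 ms
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(counter::incrementAndGet, 0, 10, TimeUnit.MILLISECONDS);
        Thread.sleep(100);
        ses.shutdown();
        ses.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("count: " + counter.get()); // roughly ten increments
    }
}
```

Because `incrementAndGet()` is atomic, the counter stays consistent even though increments happen on a scheduler thread while other threads read it; this is why the quickstart can serve `counter.get()` from a REST thread without extra locking.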
-
-You can also generate the native executable with:
-
-include::includes/devtools/build-native.adoc[]
-
-[[scheduler-configuration-reference]]
-== Scheduler Configuration Reference
-
-include::{generated-dir}/config/quarkus-scheduler.adoc[leveloffset=+1, opts=optional]
diff --git a/_versions/2.7/guides/scripting.adoc b/_versions/2.7/guides/scripting.adoc
deleted file mode 100644
index a85430d21fc..00000000000
--- a/_versions/2.7/guides/scripting.adoc
+++ /dev/null
@@ -1,440 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Scripting with Quarkus
-include::./attributes.adoc[]
-:extension-status: preview
-
-Quarkus provides integration with https://jbang.dev[jbang], which allows you to write Java scripts/applications requiring neither Maven nor Gradle to get running.
-
-In this guide, we will see how you can write a REST application using just a single Java file.
-
-include::./status-include.adoc[]
-
-== Prerequisites
-
-:prerequisites-time: 5 minutes
-:prerequisites-no-maven:
-:prerequisites-no-cli:
-include::includes/devtools/prerequisites.adoc[]
-* https://jbang.dev/download[JBang]
-
-== Solution
-
-Normally we would link to a Git repository to clone, but in this case there are no additional files beyond the following:
-
-[source,java,subs=attributes+]
-----
-//usr/bin/env jbang "$0" "$@" ; exit $?
-//DEPS io.quarkus:quarkus-resteasy:{quarkus-version} -//JAVAC_OPTIONS -parameters -//JAVA_OPTIONS -Djava.util.logging.manager=org.jboss.logmanager.LogManager - -import io.quarkus.runtime.Quarkus; -import javax.enterprise.context.ApplicationScoped; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; -import org.jboss.resteasy.annotations.jaxrs.PathParam; -import org.jboss.logging.Logger; - -@Path("/hello") -@ApplicationScoped -public class quarkusapp { - - @GET - public String sayHello() { - return "hello"; - } - - public static void main(String[] args) { - Quarkus.run(args); - } - - @Inject - GreetingService service; - - @GET - @Produces(MediaType.TEXT_PLAIN) - @Path("/greeting/{name}") - public String greeting(@PathParam String name) { - return service.greeting(name); - } - - @ApplicationScoped - static public class GreetingService { - - public String greeting(String name) { - return "hello " + name; - } - } -} ----- - -== Architecture - -In this guide, we create a straightforward application serving a `hello` endpoint with a single source file, no additional build files like `pom.xml` or `build.gradle` needed. To demonstrate dependency injection, this endpoint uses a `greeting` bean. - -image::getting-started-architecture.png[alt=Architecture, align=center] - -== Creating the initial file - -First, we need a Java file. JBang lets you create an initial version using: - -[source,bash,subs=attributes+] ----- -jbang init scripting/quarkusapp.java -cd scripting ----- - -This command generates a .java file that you can directly run on Linux and macOS, i.e. `./quarkusapp.java` - on Windows you need to use `jbang quarkusapp.java`. - -This initial version will print `Hello World` when run. - -Once generated, look at the `quarkusapp.java` file. - -You will find at the top a line looking like this: - -[source,java] ----- -//usr/bin/env jbang "$0" "$@" ; exit $? 
----
-
-This line is what allows you to run it as a script on Linux and macOS. On Windows this line is ignored.
-
-The next line
-
-[source,java]
-----
-// //DEPS
-----
-
-illustrates how you add dependencies to this script. This is a feature of JBang.
-
-Go ahead and update this line to include the `quarkus-resteasy` dependency like so:
-
-[source,java,subs=attributes+]
-----
-//DEPS io.quarkus:quarkus-resteasy:{quarkus-version}
-----
-
-Now, run `jbang quarkusapp.java` and you will see JBang resolving this dependency and building the jar with help from Quarkus' JBang integration.
-
-[source,shell,subs=attributes+]
-----
-$ jbang quarkusapp.java
-
-[jbang] Resolving dependencies...
-[jbang] Resolving io.quarkus:quarkus-resteasy:{quarkus-version}...Done
-[jbang] Dependencies resolved
-[jbang] Building jar...
-[jbang] Post build with io.quarkus.launcher.JBangIntegration
-Aug 30, 2020 5:40:55 AM org.jboss.threads.Version
-INFO: JBoss Threads version 3.1.1.Final
-Aug 30, 2020 5:40:56 AM io.quarkus.deployment.QuarkusAugmentor run
-INFO: Quarkus augmentation completed in 722ms
-Hello World
-----
-
-For now the application does nothing new.
-
-[TIP]
-.How do I edit this file and get content assist?
-==== 
-As there is nothing but a `.java` file, most IDEs don't handle content assist well.
-To work around that you can run `jbang edit quarkusapp.java`; this prints out a directory containing a temporary project setup that you can open in your IDE.
-
-On Linux/macOS you can run your editor with the output of `jbang edit quarkusapp.java` as its argument (e.g. using shell command substitution).
-
-If you add dependencies while editing, you can get JBang to automatically refresh
-the IDE project using `jbang edit --live= quarkusapp.java`.
-==== - - -=== The JAX-RS resources - -Now let us replace the class with one that uses Quarkus features: - -[source,java] ----- -import io.quarkus.runtime.Quarkus; -import javax.enterprise.context.ApplicationScoped; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -@Path("/hello") -@ApplicationScoped -public class quarkusapp { - - @GET - public String sayHello() { - return "hello"; - } - - public static void main(String[] args) { - Quarkus.run(args); - } -} ----- - -It's a very simple class with a main method that starts Quarkus with a REST endpoint, returning "hello" to requests on "/hello". - -[TIP] -.Why is the `main` method there? -==== -A `main` method is currently needed for the JBang integration to work - we might remove this requirement in the future. -==== - -== Running the application - -Now when you run the application you will see Quarkus start up. - -Use: `jbang quarkusapp.java`: - -[source,shell,subs=attributes+] ----- -$ jbang quarkusapp.java - -[jbang] Building jar... -[jbang] Post build with io.quarkus.launcher.JBangIntegration -Aug 30, 2020 5:49:01 AM org.jboss.threads.Version -INFO: JBoss Threads version 3.1.1.Final -Aug 30, 2020 5:49:02 AM io.quarkus.deployment.QuarkusAugmentor run -INFO: Quarkus augmentation completed in 681ms -__ ____ __ _____ ___ __ ____ ______ - --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ - -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ ---\___\_\____/_/ |_/_/|_/_/|_|\____/___/ -2020-08-30 05:49:03,255 INFO [io.quarkus] (main) Quarkus {quarkus-version} on JVM started in 0.638s. Listening on: http://0.0.0.0:8080 -2020-08-30 05:49:03,272 INFO [io.quarkus] (main) Profile prod activated. -2020-08-30 05:49:03,272 INFO [io.quarkus] (main) Installed features: [cdi, resteasy] ----- - -Once started, you can request the provided endpoint: - -[source,shell] ----- -$ curl -w "\n" http://localhost:8080/hello -hello ----- - -After that, hit `CTRL+C` to stop the application. 
-
-[TIP]
-.Automatically add newline with `curl -w "\n"`
-====
-We are using `curl -w "\n"` in this example to avoid your terminal printing a '%' or putting both the result and the next command prompt on the same line.
-====
-
-[TIP]
-.Why is `quarkus-resteasy` not resolved?
-====
-In this second run you should not see a line saying it is resolving `quarkus-resteasy`, as JBang caches the dependency resolution between runs.
-If you want to clear the caches to force resolution, use `jbang cache clear`.
-====
-
-== Using injection
-
-Dependency injection in Quarkus is based on ArC, which is a CDI-based dependency injection solution tailored for Quarkus' architecture.
-You can learn more about it in the xref:cdi-reference.adoc[Contexts and Dependency Injection guide].
-
-ArC comes as a dependency of `quarkus-resteasy` so you already have it handy.
-
-Let's modify the application and add a companion bean.
-
-Normally you would add a separate class, but as we are aiming to have it all in one file you will add a
-nested class.
-
-Add the following *inside* the `quarkusapp` class body.
-
-[source, java]
-----
-@ApplicationScoped
-static public class GreetingService {
-
-    public String greeting(String name) {
-        return "hello " + name;
-    }
-
-}
-----
-
-[TIP]
-.Use of nested static public classes
-====
-We are using a nested static public class instead of a top level class for two reasons:
-
-. JBang currently does not support multiple source files.
-. All Java frameworks relying on introspection have challenges using top level classes as they are not as visible as public classes; and in Java there can only be one top level public class in a file.
-
-====
-
-Edit the `quarkusapp` class to inject the `GreetingService` and create a new endpoint using it; you should end up with something like:
-
-[source,java,subs=attributes+]
-----
-//usr/bin/env jbang "$0" "$@" ; exit $?
-
-//DEPS io.quarkus:quarkus-resteasy:{quarkus-version}
-
-import io.quarkus.runtime.Quarkus;
-import javax.enterprise.context.ApplicationScoped;
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-import org.jboss.resteasy.annotations.jaxrs.PathParam;
-
-@Path("/hello")
-@ApplicationScoped
-public class quarkusapp {
-
-    @GET
-    public String sayHello() {
-        return "hello from Quarkus with jbang.dev";
-    }
-
-    public static void main(String[] args) {
-        Quarkus.run(args);
-    }
-
-    @Inject
-    GreetingService service;
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    @Path("/greeting/{name}")
-    public String greeting(@PathParam String name) {
-        return service.greeting(name);
-    }
-
-    @ApplicationScoped
-    static public class GreetingService {
-
-        public String greeting(String name) {
-            return "hello " + name;
-        }
-    }
-}
-----
-
-Now when you run `jbang quarkusapp.java` you can check what the new endpoint returns:
-
-[source,shell,subs=attributes+]
-----
-$ curl -w "\n" http://localhost:8080/hello/greeting/quarkus
-hello null
-----
-
-Now that is unexpected: why is it returning `hello null` and not `hello quarkus`?
-
-The reason is that JAX-RS `@PathParam` relies on the `-parameters` compiler flag to be set to be able to map `{name}` to the `name` parameter.
-
-We fix that by adding the following comment instruction to the file:
-
-[source,java,subs=attributes+]
-----
-//JAVAC_OPTIONS -parameters
-----
-
-Now when you run `jbang quarkusapp.java`, the endpoint should return what you expect:
-
-[source,shell,subs=attributes+]
-----
-$ curl -w "\n" http://localhost:8080/hello/greeting/quarkus
-hello quarkus
-----
-
-== Debugging
-
-To debug the application, use `jbang --debug quarkusapp.java` and connect your IDE's debugger on port 4004; if you want to use the
-more traditional Quarkus debug port you can use `jbang --debug=5005 quarkusapp.java`.
-
-NOTE: JBang debugging always suspends, so you need to connect the debugger before the application will run.
-
-== Logging
-
-To use logging in Quarkus scripting with JBang, you configure a logger as usual, i.e.
-
-[source,java]
-----
-public static final Logger LOG = Logger.getLogger(quarkusapp.class);
-----
-
-To get it to work you need to add a Java option to ensure the logging is initialized properly, i.e.
-
-[source,java]
-----
-//JAVA_OPTIONS -Djava.util.logging.manager=org.jboss.logmanager.LogManager
-----
-
-With that in place, running `jbang quarkusapp.java` will log and render as expected.
-
-== Configuring Application
-
-You can use `//Q:CONFIG <property>=<value>` to set up static configuration for your application.
-
-For example, if you want to add the `smallrye-openapi` and `swagger-ui` extensions and have the Swagger UI always show up, you add the following:
-
-[source,java,subs=attributes+]
-----
-//DEPS io.quarkus:quarkus-smallrye-openapi:{quarkus-version}
-//DEPS io.quarkus:quarkus-swagger-ui:{quarkus-version}
-//Q:CONFIG quarkus.swagger-ui.always-include=true
-----
-
-During the build, `quarkus.swagger-ui.always-include` will be baked into the resulting jar and `http://0.0.0.0:8080/q/swagger-ui` will be available when run.
-
-== Running as a native application
-
-If you have the `native-image` binary installed and `GRAALVM_HOME` set, you can get the native executable built and run using `jbang --native quarkusapp.java`:
-
-[source,shell,subs=attributes+]
-----
-$ jbang --native quarkusapp.java
-
-[jbang] Building jar...
-[jbang] Post build with io.quarkus.launcher.JBangIntegration -Aug 30, 2020 6:21:15 AM org.jboss.threads.Version -INFO: JBoss Threads version 3.1.1.Final -Aug 30, 2020 6:21:16 AM io.quarkus.deployment.pkg.steps.JarResultBuildStep buildNativeImageThinJar -INFO: Building native image source jar: /var/folders/yb/sytszfld4sg8vwr1h0w20jlw0000gn/T/quarkus-jbang3291688251685023074/quarkus-application-native-image-source-jar/quarkus-application-runner.jar -Aug 30, 2020 6:21:16 AM io.quarkus.deployment.pkg.steps.NativeImageBuildStep build -INFO: Building native image from /var/folders/yb/sytszfld4sg8vwr1h0w20jlw0000gn/T/quarkus-jbang3291688251685023074/quarkus-application-native-image-source-jar/quarkus-application-runner.jar -Aug 30, 2020 6:21:16 AM io.quarkus.deployment.pkg.steps.NativeImageBuildStep checkGraalVMVersion -INFO: Running Quarkus native-image plugin on GraalVM Version 20.1.0 (Java Version 11.0.7) -Aug 30, 2020 6:21:16 AM io.quarkus.deployment.pkg.steps.NativeImageBuildStep build -INFO: /Users/max/.sdkman/candidates/java/20.1.0.r11-grl/bin/native-image -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=1 -J-Duser.language=en -J-Dfile.encoding=UTF-8 --initialize-at-build-time= -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy\$BySpaceAndTime -H:+JNI -jar quarkus-application-runner.jar -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:-AddAllCharsets -H:EnableURLProtocols=http --no-server -H:-UseServiceLoaderFeature -H:+StackTrace quarkus-application-runner - -Aug 30, 2020 6:22:31 AM io.quarkus.deployment.QuarkusAugmentor run -INFO: Quarkus augmentation completed in 76010ms -__ ____ __ _____ ___ __ ____ ______ - --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ - -/ /_/ / /_/ / __ |/ , 
_/ ,< / /_/ /\ \ ---\___\_\____/_/ |_/_/|_/_/|_|\____/___/ -2020-08-30 06:22:32,012 INFO [io.quarkus] (main) Quarkus {quarkus-version} native started in 0.017s. Listening on: http://0.0.0.0:8080 -2020-08-30 06:22:32,013 INFO [io.quarkus] (main) Profile prod activated. -2020-08-30 06:22:32,013 INFO [io.quarkus] (main) Installed features: [cdi, resteasy] ----- - -This native build will take some time on first run but any subsequent runs (without changing `quarkusapp.java`) will be close to instant thanks to JBang cache: - -[source,shell,subs=attributes+] ----- -$ jbang --native quarkusapp.java -__ ____ __ _____ ___ __ ____ ______ - --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ - -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ ---\___\_\____/_/ |_/_/|_/_/|_|\____/___/ -2020-08-30 06:23:36,846 INFO [io.quarkus] (main) Quarkus {quarkus-version} native started in 0.015s. Listening on: http://0.0.0.0:8080 -2020-08-30 06:23:36,846 INFO [io.quarkus] (main) Profile prod activated. -2020-08-30 06:23:36,846 INFO [io.quarkus] (main) Installed features: [cdi, resteasy] ----- - -=== Conclusion - -If you want to get started with Quarkus or write something quickly, Quarkus Scripting with jbang lets you do that. No Maven, no Gradle - just a Java file. In this guide we outlined the very basics on using Quarkus with JBang; if you want to learn more about what JBang can do, go see https://jbang.dev. diff --git a/_versions/2.7/guides/security-authorization.adoc b/_versions/2.7/guides/security-authorization.adoc deleted file mode 100644 index 5f52e6f411e..00000000000 --- a/_versions/2.7/guides/security-authorization.adoc +++ /dev/null @@ -1,242 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Authorization of Web Endpoints - -include::./attributes.adoc[] - -Quarkus has an integrated pluggable web security layer. 
If security is enabled all HTTP requests will have a permission -check performed to make sure they are allowed to continue. - -NOTE: Configuration authorization checks are executed before any annotation-based authorization check is done, so both -checks have to pass for a request to be allowed. This means you cannot use `@PermitAll` to open up a path if the path has -been blocked using `quarkus.http.auth.` configuration. If you are using JAX-RS you may want to consider using the -`quarkus.security.jaxrs.deny-unannotated-endpoints` or `quarkus.security.jaxrs.default-roles-allowed` to set default security -requirements instead of HTTP path level matching, as these properties can be overridden by annotations on an individual -endpoint. - -== Authorization using Configuration - -The default implementation allows you to define permissions using config in `application.properties`. An example -config is shown below: - -[source,properties] ----- -quarkus.http.auth.policy.role-policy1.roles-allowed=user,admin <1> - -quarkus.http.auth.permission.roles1.paths=/roles-secured/*,/other/*,/api/* <2> -quarkus.http.auth.permission.roles1.policy=role-policy1 - -quarkus.http.auth.permission.permit1.paths=/public/* <3> -quarkus.http.auth.permission.permit1.policy=permit -quarkus.http.auth.permission.permit1.methods=GET - -quarkus.http.auth.permission.deny1.paths=/forbidden <4> -quarkus.http.auth.permission.deny1.policy=deny ----- -<1> This defines a role based policy that allows users with the `user` and `admin` roles. This is referenced by later rules. -<2> This is a permission set that references the previously defined policy. `roles1` is an arbitrary name, you can call the permission sets whatever you want. -<3> This permission references the default `permit` built-in policy to allow `GET` methods to `/public`. This is actually a no-op in this example, as this request would have been allowed anyway. -<4> This permission references the built-in `deny` policy for `/forbidden`. 
This is an exact path match as it does not end with `*`. - -Permissions are defined in config using permission sets. These are arbitrarily named permission groupings. Each permission -set must specify a policy that is used to control access. There are three built-in policies: `deny`, `permit` and `authenticated`, -which deny all requests, permit all requests, and allow only authenticated users, respectively. - -It is also possible to define role-based policies, as shown in the example. These policies will only allow users with the -specified roles to access the resources. - -=== Matching on paths, methods - -Permission sets can also specify paths and methods as a comma-separated list. If a path ends with `*` then it is considered -to be a wildcard match and will match all subpaths; otherwise it is an exact match and will only match that specific path: - -[source,properties] ----- -quarkus.http.auth.permission.permit1.paths=/public/*,/css/*,/js/*,/robots.txt -quarkus.http.auth.permission.permit1.policy=permit -quarkus.http.auth.permission.permit1.methods=GET,HEAD ----- - -=== Matching path but not method - -If a request would match one or more permission sets based on the path, but does not match any due to method requirements, -then the request is rejected. - -TIP: Given the above permission set, `GET /public/foo` would match both the path and method and thus be allowed, -whereas `POST /public/foo` would match the path but not the method and would thus be rejected.
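The wildcard, exact-path, and method matching rules described above can be sketched with a small, self-contained model. This is purely illustrative pseudologic under the stated rules, not the actual Quarkus implementation; the class and method names are invented:

```java
// Illustrative sketch of the matching rules described above; NOT the
// actual Quarkus implementation. A permission set matches a request when
// the path matches (wildcard or exact) AND, if methods are declared,
// the request method is among them.
import java.util.List;
import java.util.Set;

public class PermissionMatchSketch {

    record PermissionSet(List<String> paths, Set<String> methods, String policy) {

        boolean pathMatches(String path) {
            for (String p : paths) {
                if (p.endsWith("*")) {
                    // wildcard: match the prefix and all subpaths
                    if (path.startsWith(p.substring(0, p.length() - 1))) {
                        return true;
                    }
                } else if (p.equals(path)) { // exact match only
                    return true;
                }
            }
            return false;
        }

        boolean methodMatches(String method) {
            // no declared methods means any method is acceptable
            return methods.isEmpty() || methods.contains(method);
        }
    }

    // A request is rejected when some set matches the path but no set
    // matches both path and method (the "matching path but not method" rule).
    static boolean allowed(List<PermissionSet> sets, String method, String path) {
        boolean anyPathMatch = false;
        for (PermissionSet s : sets) {
            if (s.pathMatches(path)) {
                anyPathMatch = true;
                if (s.methodMatches(method) && s.policy.equals("permit")) {
                    return true;
                }
            }
        }
        return !anyPathMatch; // an uncovered path is out of scope here
    }

    public static void main(String[] args) {
        PermissionSet permit1 = new PermissionSet(
                List.of("/public/*", "/css/*", "/js/*", "/robots.txt"),
                Set.of("GET", "HEAD"), "permit");
        System.out.println(allowed(List.of(permit1), "GET", "/public/foo"));  // true
        System.out.println(allowed(List.of(permit1), "POST", "/public/foo")); // false
    }
}
```

Running the sketch reproduces the TIP above: `GET /public/foo` matches path and method and is allowed, while `POST /public/foo` matches the path only and is rejected.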
- -=== Matching multiple paths: longest path wins - -Matching is always done on a longest-path-wins basis; less specific permission sets are not considered if a more specific one -has been matched: - -[source,properties] ----- -quarkus.http.auth.permission.permit1.paths=/public/* -quarkus.http.auth.permission.permit1.policy=permit -quarkus.http.auth.permission.permit1.methods=GET,HEAD - -quarkus.http.auth.permission.deny1.paths=/public/forbidden-folder/* -quarkus.http.auth.permission.deny1.policy=deny ----- - -TIP: Given the above permission set, `GET /public/forbidden-folder/foo` would match both permission sets' paths, -but because it matches the `deny1` permission set's path on a longer match, `deny1` will be chosen and the request will -be rejected. - -[NOTE] -==== -Subpath permissions always win against the root path permissions as explained above in the `deny1` versus `permit1` permission example. -Here is another example showing a subpath permission that allows public access to a resource while the root path permission requires authorization: - -[source,properties] ----- -quarkus.http.auth.policy.user-policy.roles-allowed=user -quarkus.http.auth.permission.roles.paths=/api/* -quarkus.http.auth.permission.roles.policy=user-policy - -quarkus.http.auth.permission.public.paths=/api/noauth/* -quarkus.http.auth.permission.public.policy=permit ----- -==== - -=== Matching multiple paths: most specific method wins - -If a path is registered with multiple permission sets then any permission sets that specify an HTTP method will take -precedence, and permission sets without a method will not be considered (assuming of course the method matches). In this -instance, the permission sets without methods will only come into effect if the request method does not match any of the -sets with method permissions.
- -[source,properties] ----- -quarkus.http.auth.permission.permit1.paths=/public/* -quarkus.http.auth.permission.permit1.policy=permit -quarkus.http.auth.permission.permit1.methods=GET,HEAD - -quarkus.http.auth.permission.deny1.paths=/public/* -quarkus.http.auth.permission.deny1.policy=deny ----- - -TIP: Given the above permission set, `GET /public/foo` would match both permission sets' paths, -but because it matches the `permit1` permission set's explicit method, `permit1` will be chosen and the request will -be accepted. `PUT /public/foo`, on the other hand, will not match the method permissions of `permit1` and so -`deny1` will be activated and reject the request. - -=== Matching multiple paths and methods: both win - -If multiple permission sets specify the same path and method (or multiple have no method) then both permissions have to -allow access for the request to proceed. Note that for this to happen, both have to either specify the method or -have no method; method-specific matches take precedence as stated above: - -[source,properties] ----- -quarkus.http.auth.policy.user-policy1.roles-allowed=user -quarkus.http.auth.policy.admin-policy1.roles-allowed=admin - -quarkus.http.auth.permission.roles1.paths=/api/*,/restricted/* -quarkus.http.auth.permission.roles1.policy=user-policy1 - -quarkus.http.auth.permission.roles2.paths=/api/*,/admin/* -quarkus.http.auth.permission.roles2.policy=admin-policy1 ----- - -TIP: Given the above permission set, `GET /api/foo` would match both permission sets' paths, -so would require both the `user` and `admin` roles. - -=== Configuration Properties to Deny access - -There are three configuration settings that alter the RBAC Deny behavior: - -`quarkus.security.jaxrs.deny-unannotated-endpoints=true|false`:: -If set to true, access will be denied to all JAX-RS endpoints by default, so if a JAX-RS endpoint does not have any security annotations -then it will default to `@DenyAll` behaviour.
This is useful to ensure you cannot accidentally expose an endpoint that is supposed to be secured. Defaults to `false`. - -`quarkus.security.jaxrs.default-roles-allowed=role1,role2`:: -Defines the default role requirements for unannotated endpoints. The role '**' is a special role that means any authenticated user. This cannot be combined with -`deny-unannotated-endpoints`, as the deny will take effect instead. - -`quarkus.security.deny-unannotated-members=true|false`:: -If set to true, access will be denied to all CDI methods -and JAX-RS endpoints that do not have security annotations but are defined in classes that contain methods with -security annotations. Defaults to `false`. - -=== Disabling permissions - -Permissions can be disabled at build time with an `enabled` property for each declared permission, for example: - -[source,properties] ----- -quarkus.http.auth.permission.permit1.enabled=false -quarkus.http.auth.permission.permit1.paths=/public/*,/css/*,/js/*,/robots.txt -quarkus.http.auth.permission.permit1.policy=permit -quarkus.http.auth.permission.permit1.methods=GET,HEAD ----- - -and enabled at runtime with a system property or environment variable, for example: `-Dquarkus.http.auth.permission.permit1.enabled=true`. - -[#standard-security-annotations] -== Authorization using Annotations - -Quarkus comes with built-in security to allow for Role-Based Access Control (link:https://en.wikipedia.org/wiki/Role-based_access_control[RBAC]) -based on the common security annotations `@RolesAllowed`, `@DenyAll`, `@PermitAll` on REST endpoints and CDI beans. -An example of an endpoint that makes use of both JAX-RS and Common Security annotations to describe and secure its endpoints is given in <<subject-example>>. Quarkus also provides -the `io.quarkus.security.Authenticated` annotation that will permit any authenticated user to access the resource -(equivalent to `@RolesAllowed("**")`).
- -[#subject-example] -.SubjectExposingResource Example -[source,java] ----- -import java.security.Principal; - -import javax.annotation.security.DenyAll; -import javax.annotation.security.PermitAll; -import javax.annotation.security.RolesAllowed; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.SecurityContext; - -@Path("subject") -public class SubjectExposingResource { - - @GET - @Path("secured") - @RolesAllowed("Tester") <1> - public String getSubjectSecured(@Context SecurityContext sec) { - Principal user = sec.getUserPrincipal(); <2> - String name = user != null ? user.getName() : "anonymous"; - return name; - } - - @GET - @Path("unsecured") - @PermitAll <3> - public String getSubjectUnsecured(@Context SecurityContext sec) { - Principal user = sec.getUserPrincipal(); <4> - String name = user != null ? user.getName() : "anonymous"; - return name; - } - - @GET - @Path("denied") - @DenyAll <5> - public String getSubjectDenied(@Context SecurityContext sec) { - Principal user = sec.getUserPrincipal(); - String name = user != null ? user.getName() : "anonymous"; - return name; - } -} ----- -<1> This `/subject/secured` endpoint requires an authenticated user that has been granted the role "Tester" through the use of the `@RolesAllowed("Tester")` annotation. -<2> The endpoint obtains the user principal from the JAX-RS `SecurityContext`. This will be non-null for a secured endpoint. -<3> The `/subject/unsecured` endpoint allows for unauthenticated access by specifying the `@PermitAll` annotation. -<4> This call to obtain the user principal will return null if the caller is unauthenticated, non-null if the caller is authenticated. -<5> The `/subject/denied` endpoint disallows any access regardless of whether the call is authenticated by specifying the `@DenyAll` annotation. 
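The annotation semantics illustrated above (`@RolesAllowed`, `@PermitAll`, `@DenyAll`, and the special `"**"` role that `@Authenticated` is equivalent to) can be modeled with a tiny, self-contained sketch. This is illustrative only and not how Quarkus implements the checks; the class, enum, and method names are invented:

```java
// Minimal model of the RBAC annotation semantics described above.
// Illustrative only -- this is not the Quarkus implementation.
import java.util.Set;

public class RbacSketch {

    enum Rule { PERMIT_ALL, DENY_ALL, ROLES_ALLOWED }

    // userRoles == null models an unauthenticated (anonymous) caller.
    static boolean allowed(Rule rule, Set<String> allowedRoles, Set<String> userRoles) {
        switch (rule) {
            case PERMIT_ALL:
                return true;            // anyone, authenticated or not
            case DENY_ALL:
                return false;           // nobody, ever
            case ROLES_ALLOWED:
                if (userRoles == null) {
                    return false;       // caller must be authenticated
                }
                // "**" means "any authenticated user", which is what
                // @Authenticated is equivalent to
                if (allowedRoles.contains("**")) {
                    return true;
                }
                return allowedRoles.stream().anyMatch(userRoles::contains);
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(allowed(Rule.ROLES_ALLOWED, Set.of("Tester"), Set.of("Tester"))); // true
        System.out.println(allowed(Rule.ROLES_ALLOWED, Set.of("Tester"), null));             // false
        System.out.println(allowed(Rule.ROLES_ALLOWED, Set.of("**"), Set.of("anything")));   // true
        System.out.println(allowed(Rule.PERMIT_ALL, Set.of(), null));                        // true
        System.out.println(allowed(Rule.DENY_ALL, Set.of(), Set.of("admin")));               // false
    }
}
```

The last three cases mirror the example resource: `@PermitAll` admits anonymous callers, `@DenyAll` rejects even authenticated ones, and `@RolesAllowed("**")` admits any authenticated user regardless of role.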
- -== References - -* xref:security.adoc[Quarkus Security] diff --git a/_versions/2.7/guides/security-built-in-authentication.adoc b/_versions/2.7/guides/security-built-in-authentication.adoc deleted file mode 100644 index 00e38e0d4ee..00000000000 --- a/_versions/2.7/guides/security-built-in-authentication.adoc +++ /dev/null @@ -1,158 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Built-In Authentication Support - -include::./attributes.adoc[] - -This document describes the Quarkus built-in authentication mechanisms for HTTP-based FORM, BASIC and Mutual TLS authentication, as well as proactive authentication. - -[[basic-auth]] -== Basic Authentication - -To enable basic authentication, set `quarkus.http.auth.basic=true`. You must also have at least one extension installed -that provides a username/password based `IdentityProvider`, such as xref:security-jdbc.adoc[Elytron JDBC]. - -Please see xref:security.adoc#identity-providers[Security Identity Providers] for more information. - -Please also see the xref:security-testing.adoc#configuring-user-information[Configuring User Information in application.properties] section. - -[[form-auth]] -== Form Based Authentication - -Quarkus provides form-based authentication that works in a similar manner to traditional Servlet form-based auth. Unlike -traditional form authentication, the authenticated user is not stored in an HTTP session, as Quarkus does not provide -clustered HTTP session support. Instead, the authentication information is stored in an encrypted cookie, which can -be read by all members of the cluster (provided they all share the same encryption key). - -The encryption key can be set using the `quarkus.http.auth.session.encryption-key` property, and it must be at least 16 characters -long.
This key is hashed using SHA-256 and the resulting digest is used as a key for AES-256 encryption of the cookie -value. This cookie contains an expiry time as part of the encrypted value, so all nodes in the cluster must have their -clocks synchronized. At one-minute intervals, a new cookie will be generated with an updated expiry time if the session -is in use. - -The following properties can be used to configure form-based auth: - -include::{generated-dir}/config/quarkus-vertx-http-config-group-form-auth-config.adoc[opts=optional, leveloffset=+1] - -[[mutual-tls]] -== Mutual TLS Authentication - -Quarkus provides mTLS authentication so that you can authenticate users based on their X.509 certificates. - -To use this authentication method, you should first enable SSL for your application. For more details, check the xref:http-reference.adoc#ssl[Supporting secure connections with SSL] guide. - -Once your application is accepting secure connections, the next step is to configure a `quarkus.http.ssl.certificate.trust-store-file` -holding all the certificates that your application should trust, as well as how your application should ask for certificates when -a client (e.g. a browser or another service) tries to access one of its protected resources. - -[source,properties] ----- -quarkus.http.ssl.certificate.key-store-file=server-keystore.jks <1> -quarkus.http.ssl.certificate.key-store-password=the_key_store_secret -quarkus.http.ssl.certificate.trust-store-file=server-truststore.jks <2> -quarkus.http.ssl.certificate.trust-store-password=the_trust_store_secret -quarkus.http.ssl.client-auth=required <3> - -quarkus.http.auth.permission.default.paths=/* <4> -quarkus.http.auth.permission.default.policy=authenticated ----- -<1> Configures a key store where the server's private key is located. -<2> Configures a trust store from which the trusted certificates will be loaded. -<3> Defines that the server should *always* ask for certificates from clients.
You can relax this behavior by using `REQUEST` so -that the server should still accept requests without a certificate. Useful when you are also supporting authentication methods other than -mTLS. -<4> Defines a policy where only authenticated users should have access to resources from your application. - -Once the incoming request matches a valid certificate in the truststore, your application should be able to obtain the subject by -just injecting a `SecurityIdentity` as follows: - -[#x509-subject-example] -.Obtaining the subject -[source,java] ----- -@Inject -SecurityIdentity identity; - -@GET -@Produces(MediaType.TEXT_PLAIN) -public String hello() { - return String.format("Hello, %s", identity.getPrincipal().getName()); -} ----- - -You should also be able to get the certificate as follows: - -[#x509-credential-example] -.Obtaining the certificate -[source,java] ----- -import java.security.cert.X509Certificate; -import io.quarkus.security.credential.CertificateCredential; - -CertificateCredential credential = identity.getCredential(CertificateCredential.class); -X509Certificate certificate = credential.getCertificate(); ----- - -=== Authorization - -The information from the client certificate can be used to enhance Quarkus `SecurityIdentity`. For example, one can add new roles after checking a client certificate subject name, etc. -Please see the xref:security-customization.adoc#security-identity-customization[SecurityIdentity Customization] section for more information about customizing Quarkus `SecurityIdentity`. - -[[proactive-authentication]] -== Proactive Authentication - -By default Quarkus does what we call proactive authentication. This means that if an incoming request has a -credential then that request will always be authenticated (even if the target page does not require authentication). - -This means that requests with an invalid credential will always be rejected, even for public pages. 
You can change -this behavior and only authenticate when required by setting `quarkus.http.auth.proactive=false`. - -If you disable proactive authentication then the authentication process will only be run when an identity is requested, -either because there are security rules that require the user to be authenticated, or due to programmatic access to the -current identity. - -Note that if proactive authentication is in use, accessing the `SecurityIdentity` is a blocking operation. This is because -authentication may not have happened yet, and accessing it may require calls to external systems such as databases that -may block. For blocking applications this is not a problem; however, if you have disabled authentication in a reactive -application this will fail (as you cannot do blocking operations on the IO thread). To work around this you need to -`@Inject` an instance of `io.quarkus.security.identity.CurrentIdentityAssociation` and call the -`Uni<SecurityIdentity> getDeferredIdentity()` method. You can then subscribe to the resulting `Uni` and will be notified -when authentication is complete and the identity is available. - -=== How to customize authentication exception responses - -By default, the authentication security constraints are enforced before the JAX-RS chain starts.
- -Disabling the proactive authentication effectively shifts this process to the moment when the JAX-RS chain starts running, thus making it possible to use a JAX-RS `ExceptionMapper` to capture Quarkus Security authentication exceptions such as `io.quarkus.security.AuthenticationFailedException`, for example: - -[source,java] ----- -package io.quarkus.it.keycloak; - -import javax.annotation.Priority; -import javax.ws.rs.Priorities; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.Response; -import javax.ws.rs.core.UriInfo; -import javax.ws.rs.ext.ExceptionMapper; -import javax.ws.rs.ext.Provider; - -import io.quarkus.security.AuthenticationFailedException; - -@Provider -@Priority(Priorities.AUTHENTICATION) -public class AuthenticationFailedExceptionMapper implements ExceptionMapper<AuthenticationFailedException> { - - @Context - UriInfo uriInfo; - - @Override - public Response toResponse(AuthenticationFailedException exception) { - return Response.status(401).header("WWW-Authenticate", "Basic realm=\"Quarkus\"").build(); - } -} ----- - -== References - -* xref:security.adoc[Quarkus Security] diff --git a/_versions/2.7/guides/security-customization.adoc b/_versions/2.7/guides/security-customization.adoc deleted file mode 100644 index 2bfc46bdea5..00000000000 --- a/_versions/2.7/guides/security-customization.adoc +++ /dev/null @@ -1,450 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Security Tips and Tricks - -include::./attributes.adoc[] - -== Quarkus Security Dependency - -The `io.quarkus:quarkus-security` module contains the core Quarkus security classes. - -In most cases, it does not have to be added directly to your project's build file as it is already provided by all of the security extensions.
- -However, if you need to write your own custom security code (for example, register a <<security-identity-customization,custom `SecurityIdentityAugmentor`>>) or use <<bouncy-castle,BouncyCastle>> libraries, then please make sure it is included: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-security</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-security") ----- - -== HttpAuthenticationMechanism Customization - -One can customize `HttpAuthenticationMechanism` by registering a CDI implementation bean. -In the example below the custom authenticator delegates to the `JWTAuthMechanism` provided by `quarkus-smallrye-jwt`: - -[source,java] ----- -@Alternative -@Priority(1) -@ApplicationScoped -public class CustomAwareJWTAuthMechanism implements HttpAuthenticationMechanism { - - private static final Logger LOG = LoggerFactory.getLogger(CustomAwareJWTAuthMechanism.class); - - @Inject - JWTAuthMechanism delegate; - - @Override - public Uni<SecurityIdentity> authenticate(RoutingContext context, IdentityProviderManager identityProviderManager) { - // do some custom action and delegate - return delegate.authenticate(context, identityProviderManager); - } - - @Override - public Uni<ChallengeData> getChallenge(RoutingContext context) { - return delegate.getChallenge(context); - } - - @Override - public Set<Class<? extends AuthenticationRequest>> getCredentialTypes() { - return delegate.getCredentialTypes(); - } - - @Override - public HttpCredentialTransport getCredentialTransport() { - return delegate.getCredentialTransport(); - } - -} ----- - -[[security-identity-customization]] -== Security Identity Customization - -Internally, the identity providers create and update an instance of the `io.quarkus.security.identity.SecurityIdentity` class, which holds the principal, the roles, the credentials that were used to authenticate the client (user), and other security attributes. An easy option to customize `SecurityIdentity` is to register a custom `SecurityIdentityAugmentor`.
For example, the augmentor below adds an additional role: - -[source,java] ----- -import io.quarkus.security.identity.AuthenticationRequestContext; -import io.quarkus.security.identity.SecurityIdentity; -import io.quarkus.security.identity.SecurityIdentityAugmentor; -import io.quarkus.security.runtime.QuarkusSecurityIdentity; -import io.smallrye.mutiny.Uni; - -import javax.enterprise.context.ApplicationScoped; -import java.util.function.Supplier; - -@ApplicationScoped -public class RolesAugmentor implements SecurityIdentityAugmentor { - - @Override - public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { - return Uni.createFrom().item(build(identity)); - - // Do 'return context.runBlocking(build(identity));' - // if a blocking call is required to customize the identity - } - - private Supplier<SecurityIdentity> build(SecurityIdentity identity) { - if(identity.isAnonymous()) { - return () -> identity; - } else { - // create a new builder and copy principal, attributes, credentials and roles from the original identity - QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity); - - // add custom role source here - builder.addRole("dummy"); - return builder::build; - } - } -} ----- - -Here is another example showing how to use the client certificate available in the current xref:security-built-in-authentication.adoc#mutual-tls[Mutual TLS] request to add more roles: - -[source,java] ----- -import java.security.cert.X509Certificate; -import io.quarkus.security.credential.CertificateCredential; -import io.quarkus.security.identity.AuthenticationRequestContext; -import io.quarkus.security.identity.SecurityIdentity; -import io.quarkus.security.identity.SecurityIdentityAugmentor; -import io.quarkus.security.runtime.QuarkusSecurityIdentity; -import io.smallrye.mutiny.Uni; - -import javax.enterprise.context.ApplicationScoped; -import java.util.function.Supplier; -import java.util.Collections; -import java.util.Set; - -@ApplicationScoped -public class RolesAugmentor implements SecurityIdentityAugmentor { - - @Override - public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { - return Uni.createFrom().item(build(identity)); - } - - private Supplier<SecurityIdentity> build(SecurityIdentity identity) { - // create a new builder and copy principal, attributes, credentials and roles from the original identity - QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity); - - CertificateCredential certificate = identity.getCredential(CertificateCredential.class); - if (certificate != null) { - builder.addRoles(extractRoles(certificate.getCertificate())); - } - return builder::build; - } - - private Set<String> extractRoles(X509Certificate certificate) { - String name = certificate.getSubjectX500Principal().getName(); - - switch (name) { - case "CN=client": - return Collections.singleton("user"); - case "CN=guest-client": - return Collections.singleton("guest"); - default: - return Collections.emptySet(); - } - } -} ----- - -[NOTE] -==== -If more than one custom `SecurityIdentityAugmentor` is registered, then they will be considered equal candidates and invoked in random order. -You can enforce the order by implementing a default `SecurityIdentityAugmentor#priority` method. Augmentors with higher priorities will be invoked first.
- -==== - -[[jaxrs-security-context]] -== Custom JAX-RS SecurityContext - -If you use a JAX-RS `ContainerRequestFilter` to set a custom JAX-RS `SecurityContext`, make sure the `ContainerRequestFilter` runs in the JAX-RS pre-match phase by adding the `@PreMatching` annotation to it, so that this custom security context is linked with the Quarkus `SecurityIdentity`; for example: - -[source,java] ----- -import java.io.IOException; -import java.security.Principal; - -import javax.ws.rs.container.ContainerRequestContext; -import javax.ws.rs.container.ContainerRequestFilter; -import javax.ws.rs.container.PreMatching; -import javax.ws.rs.core.SecurityContext; -import javax.ws.rs.ext.Provider; - -@Provider -@PreMatching -public class SecurityOverrideFilter implements ContainerRequestFilter { - @Override - public void filter(ContainerRequestContext requestContext) throws IOException { - String user = requestContext.getHeaders().getFirst("User"); - String role = requestContext.getHeaders().getFirst("Role"); - if (user != null && role != null) { - requestContext.setSecurityContext(new SecurityContext() { - @Override - public Principal getUserPrincipal() { - return new Principal() { - @Override - public String getName() { - return user; - } - }; - } - - @Override - public boolean isUserInRole(String r) { - return role.equals(r); - } - - @Override - public boolean isSecure() { - return false; - } - - @Override - public String getAuthenticationScheme() { - return "basic"; - } - }); - } - - } -} ----- - -== Disabling Authorization - -If you have a good reason to disable authorization (for example, when testing) then you can register a custom `AuthorizationController`: - -[source,java] ----- -@Alternative -@Priority(Interceptor.Priority.LIBRARY_AFTER) -@ApplicationScoped -public class DisabledAuthController extends AuthorizationController { - @ConfigProperty(name = "disable.authorization", defaultValue = "false") - boolean disableAuthorization; - - @Override - public boolean isAuthorizationEnabled() { - return
!disableAuthorization; - } -} ----- - -Please also see the xref:security-testing.adoc#testing-security[TestingSecurity Annotation] section on how to disable the security checks using the `TestSecurity` annotation. - -== Registering Security Providers - -=== Default providers - -When running in native mode, the default behavior for GraalVM native executable generation is to only include the main "SUN" provider -unless you have enabled SSL, in which case all security providers are registered. If you are not using SSL, then you can selectively -register security providers by name using the `quarkus.security.security-providers` property. The following example illustrates -configuration to register the "SunRsaSign" and "SunJCE" security providers: - -.Example Security Providers Configuration -[source,properties] ----- -quarkus.security.security-providers=SunRsaSign,SunJCE ----- - -[[bouncy-castle]] -=== BouncyCastle - -If you need to register an `org.bouncycastle.jce.provider.BouncyCastleProvider` JCE provider then please set a `BC` provider name: - -.Example Security Providers BouncyCastle Configuration -[source,properties] ----- -quarkus.security.security-providers=BC ----- - -and add the BouncyCastle provider dependency: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>org.bouncycastle</groupId> -    <artifactId>bcprov-jdk15on</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.bouncycastle:bcprov-jdk15on") ----- - -[[bouncy-castle-jsse]] -=== BouncyCastle JSSE - -If you need to register an `org.bouncycastle.jsse.provider.BouncyCastleJsseProvider` JSSE provider and use it instead of the default SunJSSE provider then please set a `BCJSSE` provider name: - -.Example Security Providers BouncyCastle JSSE Configuration -[source,properties] ----- -quarkus.security.security-providers=BCJSSE - -quarkus.http.ssl.client-auth=REQUIRED -
- -quarkus.http.ssl.certificate.key-store-file=server-keystore.jks -quarkus.http.ssl.certificate.key-store-password=password -quarkus.http.ssl.certificate.trust-store-file=server-truststore.jks -quarkus.http.ssl.certificate.trust-store-password=password ----- - -and add the BouncyCastle TLS dependency: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>org.bouncycastle</groupId> -    <artifactId>bctls-jdk15on</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.bouncycastle:bctls-jdk15on") ----- - -[[bouncy-castle-fips]] -=== BouncyCastle FIPS - -If you need to register an `org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider` JCE provider then please set a `BCFIPS` provider name: - -.Example Security Providers BouncyCastle FIPS Configuration -[source,properties] ----- -quarkus.security.security-providers=BCFIPS ----- - -and add the BouncyCastle FIPS provider dependency: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>org.bouncycastle</groupId> -    <artifactId>bc-fips</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.bouncycastle:bc-fips") ----- - -[NOTE] -==== -The `BCFIPS` provider option is supported in native images, but the algorithm self-tests that rely on `java.security.SecureRandom` to verify the generated keys have been removed for these tests to pass.
The following classes have been affected: -- `org.bouncycastle.crypto.general.DSA` -- `org.bouncycastle.crypto.general.DSTU4145` -- `org.bouncycastle.crypto.general.ECGOST3410` -- `org.bouncycastle.crypto.general.GOST3410` -- `org.bouncycastle.crypto.fips.FipsDSA` -- `org.bouncycastle.crypto.fips.FipsEC` -- `org.bouncycastle.crypto.fips.FipsRSA` -==== - -[[bouncy-castle-jsse-fips]] -=== BouncyCastle JSSE FIPS - -If you need to register an `org.bouncycastle.jsse.provider.BouncyCastleJsseProvider` JSSE provider and use it in combination with `org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider` instead of the default SunJSSE provider then please set a `BCFIPSJSSE` provider name: - -.Example Security Providers BouncyCastle FIPS JSSE Configuration -[source,properties] ----- -quarkus.security.security-providers=BCFIPSJSSE - -quarkus.http.ssl.client-auth=REQUIRED - -quarkus.http.ssl.certificate.key-store-file=server-keystore.jks -quarkus.http.ssl.certificate.key-store-password=password -quarkus.http.ssl.certificate.key-store-file-type=BCFKS -quarkus.http.ssl.certificate.key-store-provider=BCFIPS -quarkus.http.ssl.certificate.trust-store-file=server-truststore.jks -quarkus.http.ssl.certificate.trust-store-password=password -quarkus.http.ssl.certificate.trust-store-file-type=BCFKS -quarkus.http.ssl.certificate.trust-store-provider=BCFIPS ----- - -and the BouncyCastle TLS dependency optimized for using the BouncyCastle FIPS provider: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>org.bouncycastle</groupId> -    <artifactId>bctls-fips</artifactId> -</dependency> -<dependency> -    <groupId>org.bouncycastle</groupId> -    <artifactId>bc-fips</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("org.bouncycastle:bctls-fips") -implementation("org.bouncycastle:bc-fips") ----- - -Note that the keystore and truststore type and provider are set to `BCFKS` and `BCFIPS`.
- -One can generate a keystore with this type and provider like this: - -[source,shell] ----- -keytool -genkey -alias server -keyalg RSA -keystore server-keystore.jks -keysize 2048 -keypass password -provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -providerpath $PATH_TO_BC_FIPS_JAR -storetype BCFKS ----- - -[NOTE] -==== -The `BCFIPSJSSE` provider option is currently not supported in native images. -==== - -== Reactive Security - -If you are going to use security in a reactive environment, you will likely need SmallRye Context Propagation: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-smallrye-context-propagation</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-smallrye-context-propagation") ----- - -This will allow you to propagate the identity throughout the reactive callbacks. You also need to make sure you -are using an executor that is capable of propagating the identity (e.g. not `CompletableFuture.supplyAsync`) -so that Quarkus can propagate it. For more information, see the -xref:context-propagation.adoc[Context Propagation Guide]. - -== References - -* xref:security.adoc[Quarkus Security] diff --git a/_versions/2.7/guides/security-jdbc.adoc b/_versions/2.7/guides/security-jdbc.adoc deleted file mode 100644 index a8b7056d7ad..00000000000 --- a/_versions/2.7/guides/security-jdbc.adoc +++ /dev/null @@ -1,309 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Security with JDBC - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use a database to store your user identities.
- - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this example, we build a very simple microservice which offers three endpoints: - -* `/api/public` -* `/api/users/me` -* `/api/admin` - -The `/api/public` endpoint can be accessed anonymously. -The `/api/admin` endpoint is protected with RBAC (Role-Based Access Control), where only users granted the `admin` role can access it. At this endpoint, we use the `@RolesAllowed` annotation to declaratively enforce the access constraint. -The `/api/users/me` endpoint is also protected with RBAC (Role-Based Access Control), where only users granted the `user` role can access it. As a response, it returns a JSON document with details about the user. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `security-jdbc-quickstart` {quickstarts-tree-url}/security-jdbc-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: security-jdbc-quickstart -:create-app-extensions: elytron-security-jdbc,jdbc-postgresql,resteasy -include::includes/devtools/create-app.adoc[] - -[NOTE] -==== -Don't forget to add the database connector library of choice. Here we are using PostgreSQL as the identity store. -==== - -This command generates a new project, importing the `elytron-security-jdbc` extension -which is a https://docs.wildfly.org/17/WildFly_Elytron_Security.html#jdbc-security-realm[`wildfly-elytron-realm-jdbc`] adapter for Quarkus applications.
- -If you already have your Quarkus project configured, you can add the `elytron-security-jdbc` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: elytron-security-jdbc -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-elytron-security-jdbc</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-elytron-security-jdbc") ----- - -== Writing the application - -Let's start by implementing the `/api/public` endpoint. As you can see from the source code below, it is just a regular JAX-RS resource: - -[source,java] ----- -package org.acme.security.jdbc; - -import javax.annotation.security.PermitAll; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/api/public") -public class PublicResource { - - @GET - @PermitAll - @Produces(MediaType.TEXT_PLAIN) - public String publicResource() { - return "public"; - } -} ----- - -The source code for the `/api/admin` endpoint is also very simple. The main difference here is that we are using a `@RolesAllowed` annotation to make sure that only users granted the `admin` role can access the endpoint: - - -[source,java] ----- -package org.acme.security.jdbc; - -import javax.annotation.security.RolesAllowed; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/api/admin") -public class AdminResource { - - @GET - @RolesAllowed("admin") - @Produces(MediaType.TEXT_PLAIN) - public String adminResource() { - return "admin"; - } -} ----- - -Finally, let's consider the `/api/users/me` endpoint.
As you can see from the source code below, only users with the `user` role are trusted. -We are using `SecurityContext` to get access to the current authenticated Principal and we return the user's name. This information is loaded from the database. - -[source,java] ----- -package org.acme.security.jdbc; - -import javax.annotation.security.RolesAllowed; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.SecurityContext; - -@Path("/api/users") -public class UserResource { - - @GET - @RolesAllowed("user") - @Path("/me") - public String me(@Context SecurityContext securityContext) { - return securityContext.getUserPrincipal().getName(); - } -} ----- - -=== Configuring the Application - -The `elytron-security-jdbc` extension requires at least one datasource to access your database. - -[source,properties] ----- -quarkus.datasource.db-kind=postgresql -quarkus.datasource.username=quarkus -quarkus.datasource.password=quarkus -quarkus.datasource.jdbc.url=jdbc:postgresql:elytron-security-jdbc ----- - -In our context, we are using PostgreSQL as the identity store and we initialize the database with users and roles. - -[source,sql] ----- -CREATE TABLE test_user ( - id INT, - username VARCHAR(255), - password VARCHAR(255), - role VARCHAR(255) -); - -INSERT INTO test_user (id, username, password, role) VALUES (1, 'admin', 'admin', 'admin'); -INSERT INTO test_user (id, username, password, role) VALUES (2, 'user','user', 'user'); ----- - -[NOTE] -==== -It is probably needless to say, but you must not store clear-text passwords in production environments ;-). -The `elytron-security-jdbc` extension offers a built-in bcrypt password mapper. -==== - -We can now configure the Elytron JDBC Realm. - -[source,properties] ----- -quarkus.security.jdbc.enabled=true -quarkus.security.jdbc.principal-query.sql=SELECT u.password, u.role FROM test_user u WHERE u.username=?
<1> -quarkus.security.jdbc.principal-query.clear-password-mapper.enabled=true <2> -quarkus.security.jdbc.principal-query.clear-password-mapper.password-index=1 -quarkus.security.jdbc.principal-query.attribute-mappings.0.index=2 <3> -quarkus.security.jdbc.principal-query.attribute-mappings.0.to=groups ----- - -The `elytron-security-jdbc` extension requires at least one principal query to authenticate the user and load their identity. - -<1> We define a parameterized SQL statement (with exactly 1 parameter) which should return the user's password plus any additional information you want to load. -<2> We configure the password mapper with the position of the password field in the `SELECT` fields and other information like salt, hash encoding, etc. -<3> We use `attribute-mappings` to bind the `SELECT` projection fields (i.e. `u.role` here) to the target Principal representation attributes. - -[NOTE] -==== -In the `principal-query` configuration all the `index` properties start at 1 (rather than 0). -==== - -== Testing the Application - -The application is now protected and the identities are provided by our database. -The very first thing to check is to ensure that anonymous access works. - -[source,shell] ----- -$ curl -i -X GET http://localhost:8080/api/public -HTTP/1.1 200 OK -Content-Length: 6 -Content-Type: text/plain;charset=UTF-8 - -public% ----- - -Now, let's try to hit a protected resource anonymously. - -[source,shell] ----- -$ curl -i -X GET http://localhost:8080/api/admin -HTTP/1.1 401 Unauthorized -Content-Length: 14 -Content-Type: text/html;charset=UTF-8 - -Not authorized% ----- - -So far so good, now let's try with an allowed user. - -[source,shell] ----- -$ curl -i -X GET -u admin:admin http://localhost:8080/api/admin -HTTP/1.1 200 OK -Content-Length: 5 -Content-Type: text/plain;charset=UTF-8 - -admin% ----- -By providing the `admin:admin` credentials, the extension authenticated the user and loaded their roles.
-The `admin` user is authorized to access the protected resources. - -The user `admin` should be forbidden to access a resource protected with `@RolesAllowed("user")` because it doesn't have this role. - -[source,shell] ----- -$ curl -i -X GET -u admin:admin http://localhost:8080/api/users/me -HTTP/1.1 403 Forbidden -Content-Length: 34 -Content-Type: text/html;charset=UTF-8 - -Forbidden% ----- - -Finally, using the user `user` works and the security context contains the principal details (the username, for instance). - -[source,shell] ----- -$ curl -i -X GET -u user:user http://localhost:8080/api/users/me -HTTP/1.1 200 OK -Content-Length: 4 -Content-Type: text/plain;charset=UTF-8 - -user% ----- - -== Advanced Configuration - -This guide only covered a simple use case; the extension also supports multiple datasources, multiple principal queries, and a bcrypt password mapper. - -[source,properties] --- -quarkus.datasource.db-kind=postgresql -quarkus.datasource.username=quarkus -quarkus.datasource.password=quarkus -quarkus.datasource.jdbc.url=jdbc:postgresql:multiple-data-sources-users - -quarkus.datasource.permissions.db-kind=postgresql -quarkus.datasource.permissions.username=quarkus -quarkus.datasource.permissions.password=quarkus -quarkus.datasource.permissions.jdbc.url=jdbc:postgresql:multiple-data-sources-permissions - -quarkus.security.jdbc.enabled=true -quarkus.security.jdbc.principal-query.sql=SELECT u.password FROM test_user u WHERE u.username=? -quarkus.security.jdbc.principal-query.clear-password-mapper.enabled=true -quarkus.security.jdbc.principal-query.clear-password-mapper.password-index=1 - -quarkus.security.jdbc.principal-query.roles.sql=SELECT r.role_name FROM test_role r, test_user_role ur WHERE ur.username=?
AND ur.role_id = r.id -quarkus.security.jdbc.principal-query.roles.datasource=permissions -quarkus.security.jdbc.principal-query.roles.attribute-mappings.0.index=1 -quarkus.security.jdbc.principal-query.roles.attribute-mappings.0.to=groups --- - -[[configuration-reference]] -== Configuration Reference - -include::{generated-dir}/config/quarkus-elytron-security-jdbc.adoc[opts=optional, leveloffset=+1] - -== References - -* xref:security.adoc[Quarkus Security] diff --git a/_versions/2.7/guides/security-jpa.adoc b/_versions/2.7/guides/security-jpa.adoc deleted file mode 100644 index a3145bb24c6..00000000000 --- a/_versions/2.7/guides/security-jpa.adoc +++ /dev/null @@ -1,418 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Security with JPA - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use a database to store your user identities with -xref:hibernate-orm.adoc[Hibernate ORM] or xref:hibernate-orm-panache.adoc[Hibernate ORM with Panache]. - - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this example, we build a very simple microservice which offers three endpoints: - -* `/api/public` -* `/api/users/me` -* `/api/admin` - -The `/api/public` endpoint can be accessed anonymously. -The `/api/admin` endpoint is protected with RBAC (Role-Based Access Control), where only users granted the `admin` role can access it. At this endpoint, we use the `@RolesAllowed` annotation to declaratively enforce the access constraint. -The `/api/users/me` endpoint is also protected with RBAC (Role-Based Access Control), where only users granted the `user` role can access it. As a response, it returns a JSON document with details about the user.
- -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `security-jpa-quickstart` {quickstarts-tree-url}/security-jpa-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: security-jpa-quickstart -:create-app-extensions: security-jpa,jdbc-postgresql,resteasy,hibernate-orm-panache -include::includes/devtools/create-app.adoc[] - -[NOTE] -==== -Don't forget to add the database connector library of choice. Here we are using PostgreSQL as the identity store. -==== - -This command generates a Maven project, importing the `security-jpa` extension -which allows you to map your security source to JPA entities. - -If you already have your Quarkus project configured, you can add the `security-jpa` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: security-jpa -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-security-jpa</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-security-jpa") ----- - -== Writing the application - -Let's start by implementing the `/api/public` endpoint.
As you can see from the source code below, it is just a regular JAX-RS resource: - -[source,java] ----- -package org.acme.security.jpa; - -import javax.annotation.security.PermitAll; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/api/public") -public class PublicResource { - - @GET - @PermitAll - @Produces(MediaType.TEXT_PLAIN) - public String publicResource() { - return "public"; - } -} ----- - -The source code for the `/api/admin` endpoint is also very simple. The main difference here is that we are using a `@RolesAllowed` annotation to make sure that only users granted the `admin` role can access the endpoint: - - -[source,java] ----- -package org.acme.security.jpa; - -import javax.annotation.security.RolesAllowed; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/api/admin") -public class AdminResource { - - @GET - @RolesAllowed("admin") - @Produces(MediaType.TEXT_PLAIN) - public String adminResource() { - return "admin"; - } -} ----- - -Finally, let's consider the `/api/users/me` endpoint. As you can see from the source code below, only users with the `user` role are trusted. -We are using `SecurityContext` to get access to the current authenticated Principal and we return the user's name. This information is loaded from the database.
- -[source,java] ----- -package org.acme.security.jpa; - -import javax.annotation.security.RolesAllowed; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.SecurityContext; - -@Path("/api/users") -public class UserResource { - - @GET - @RolesAllowed("user") - @Path("/me") - public String me(@Context SecurityContext securityContext) { - return securityContext.getUserPrincipal().getName(); - } -} ----- - -=== Defining our user entity - -We can now describe how our security information is stored in our model by adding a few annotations to our `User` entity: - -[source,java] ----- -package org.acme.security.jpa; - -import javax.persistence.Entity; -import javax.persistence.Table; - -import io.quarkus.hibernate.orm.panache.PanacheEntity; -import io.quarkus.elytron.security.common.BcryptUtil; -import io.quarkus.security.jpa.Password; -import io.quarkus.security.jpa.Roles; -import io.quarkus.security.jpa.UserDefinition; -import io.quarkus.security.jpa.Username; - -@Entity -@Table(name = "test_user") -@UserDefinition <1> -public class User extends PanacheEntity { - @Username <2> - public String username; - @Password <3> - public String password; - @Roles <4> - public String role; - - /** - * Adds a new user in the database - * @param username the user name - * @param password the unencrypted password (it will be encrypted with bcrypt) - * @param role the comma-separated roles - */ - public static void add(String username, String password, String role) { <5> - User user = new User(); - user.username = username; - user.password = BcryptUtil.bcryptHash(password); - user.role = role; - user.persist(); - } -} - ----- - -The `security-jpa` extension is only initialized if there is a single entity annotated with `@UserDefinition`. - -<1> This annotation must be present on a single entity. It can be a regular Hibernate ORM entity or a Hibernate ORM with Panache entity as in this example. 
-<2> This indicates the field used for the user name. -<3> This indicates the field used for the password. This defaults to using bcrypt-hashed passwords, but you can also configure it for clear-text passwords or custom passwords. -<4> This indicates the comma-separated list of roles added to the target Principal representation attributes. -<5> This method allows us to add users while hashing the password with the proper bcrypt hash. - -=== Configuring the Application - -The `security-jpa` extension requires at least one datasource to access your database. - -[source,properties] ----- -quarkus.datasource.db-kind=postgresql -quarkus.datasource.username=quarkus -quarkus.datasource.password=quarkus -quarkus.datasource.jdbc.url=jdbc:postgresql:security_jpa - -quarkus.hibernate-orm.database.generation=drop-and-create ----- - -In our context, we are using PostgreSQL as the identity store. The database schema is created by Hibernate ORM automatically -on startup (change this in production) and we initialize the database with users and roles in the `Startup` class: - -[source,java] ----- -package org.acme.security.jpa; - -import javax.enterprise.event.Observes; -import javax.inject.Singleton; -import javax.transaction.Transactional; - -import io.quarkus.runtime.StartupEvent; - - -@Singleton -public class Startup { - @Transactional - public void loadUsers(@Observes StartupEvent evt) { - // reset and load all test users - User.deleteAll(); - User.add("admin", "admin", "admin"); - User.add("user", "user", "user"); - } -} ----- - -[NOTE] -==== -It is probably needless to say, but you must not store clear-text passwords in production environments ;-). -As a result, the `security-jpa` extension defaults to using bcrypt-hashed passwords.
-==== - -== Testing the Application - -You can start the application in dev mode as follows: - -include::includes/devtools/dev.adoc[] - -[NOTE] -==== -In the following tests we use the basic authentication mechanism; you can enable it by setting `quarkus.http.auth.basic=true` in the `application.properties` file. -==== - -The application is now protected and the identities are provided by our database. -The very first thing to check is to ensure that anonymous access works. - -[source,shell] ----- -$ curl -i -X GET http://localhost:8080/api/public -HTTP/1.1 200 OK -Content-Length: 6 -Content-Type: text/plain;charset=UTF-8 - -public% ----- - -Now, let's try to hit a protected resource anonymously. - -[source,shell] ----- -$ curl -i -X GET http://localhost:8080/api/admin -HTTP/1.1 401 Unauthorized -Content-Length: 14 -Content-Type: text/html;charset=UTF-8 - -Not authorized% ----- - -So far so good, now let's try with an allowed user. - -[source,shell] ----- -$ curl -i -X GET -u admin:admin http://localhost:8080/api/admin -HTTP/1.1 200 OK -Content-Length: 5 -Content-Type: text/plain;charset=UTF-8 - -admin% ----- -By providing the `admin:admin` credentials, the extension authenticated the user and loaded their roles. -The `admin` user is authorized to access the protected resources. - -The user `admin` should be forbidden to access a resource protected with `@RolesAllowed("user")` because it doesn't have this role. - -[source,shell] ----- -$ curl -i -X GET -u admin:admin http://localhost:8080/api/users/me -HTTP/1.1 403 Forbidden -Content-Length: 34 -Content-Type: text/html;charset=UTF-8 - -Forbidden% ----- - -Finally, using the user `user` works and the security context contains the principal details (the username, for instance).
- -[source,shell] ----- -$ curl -i -X GET -u user:user http://localhost:8080/api/users/me -HTTP/1.1 200 OK -Content-Length: 4 -Content-Type: text/plain;charset=UTF-8 - -user% ----- - -== Supported model types - -- The `@UserDefinition` class must be a JPA entity (with Panache or not). -- The `@Username` and `@Password` field types must be of type `String`. -- The `@Roles` field must either be of type `String` or `Collection<String>`, or alternately a `Collection<X>` where `X` is an entity class with one `String` field annotated with the `@RolesValue` annotation. -- Each `String` role element type will be parsed as a comma-separated list of roles. - -== Storing roles in another entity - -You can also store roles in another entity: - -[source,java] ----- -@UserDefinition -@Table(name = "test_user") -@Entity -public class User extends PanacheEntity { - @Username - public String name; - - @Password - public String pass; - - @ManyToMany - @Roles - public List<Role> roles = new ArrayList<>(); -} - -@Entity -public class Role extends PanacheEntity { - - @ManyToMany(mappedBy = "roles") - public List<User> users; - - @RolesValue - public String role; -} ----- - -== Password storage and hashing - -By default, we consider passwords to be stored hashed with https://en.wikipedia.org/wiki/Bcrypt[bcrypt] under the -https://en.wikipedia.org/wiki/Crypt_(C)[Modular Crypt Format] (MCF). - -When you need to create such a hashed password we provide the convenient `String BcryptUtil.bcryptHash(String password)` -function, which defaults to creating a random salt and hashing in 10 iterations (though you can specify the iterations and salt -too). - -NOTE: with MCF you don't need dedicated columns to store the hashing algorithm, the iterations count or the salt because -they're all stored in the hashed value.
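The note above says an MCF value is self-describing. A quick JDK-only sketch of why that works: the `$`-separated fields of a bcrypt MCF string already carry the variant, the cost factor and the salt. The string below is shaped like a bcrypt hash for illustration only; it is not a real digest of any password.

```java
// Sketch: pulling the metadata out of a bcrypt Modular Crypt Format string.
// The value below is bcrypt-shaped but made up; it is NOT a real hash.
public class McfFields {
    public static void main(String[] args) {
        String stored = "$2a$10$abcdefghijklmnopqrstuvABCDEFGHIJKLMNOPQRSTUVWXYZ01234";
        String[] parts = stored.split("\\$");        // ["", "2a", "10", salt+hash]
        String version = parts[1];                   // bcrypt variant, e.g. "2a"
        int costFactor = Integer.parseInt(parts[2]); // iterations = 2^cost
        String salt = parts[3].substring(0, 22);     // first 22 chars encode the salt
        System.out.println(version + " / cost " + costFactor + " / salt " + salt);
    }
}
```

This is why a single `VARCHAR` column is enough: a verifier re-reads the algorithm, cost and salt from the stored value itself.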
- -You can also store passwords using a different hashing algorithm by specifying `@Password(value = PasswordType.CUSTOM, provider = CustomPasswordProvider.class)`: - -[source,java] ----- -@UserDefinition -@Table(name = "test_user") -@Entity -public class CustomPasswordUserEntity { - @Id - @GeneratedValue - public Long id; - - @Column(name = "username") - @Username - public String name; - - @Column(name = "password") - @Password(value = PasswordType.CUSTOM, provider = CustomPasswordProvider.class) - public String pass; - - @Roles - public String role; -} - -public class CustomPasswordProvider implements PasswordProvider { - @Override - public Password getPassword(String pass) { - byte[] digest = DatatypeConverter.parseHexBinary(pass); - return SimpleDigestPassword.createRaw(SimpleDigestPassword.ALGORITHM_SIMPLE_DIGEST_SHA_256, digest); - } -} ----- - -WARNING: you can also store passwords in clear text with `@Password(PasswordType.CLEAR)`, but we strongly recommend against -it in production.
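For a custom provider like the SHA-256 one shown above, the stored column value would be a hex-encoded digest. A JDK-only sketch (the class and method names here are illustrative, not part of the extension) of producing such a value:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch: computing the hex-encoded SHA-256 value a custom digest-based
// provider could read back from the password column.
public class Sha256Hex {
    static String sha256Hex(String password) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b)); // two lowercase hex chars per byte
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha256Hex("secret")); // 64 hex characters
    }
}
```

Note that an unsalted digest like this is weaker than bcrypt; it is shown only to match the custom provider example, not as a recommendation.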
- -== References - -* xref:security.adoc[Quarkus Security] diff --git a/_versions/2.7/guides/security-jwt-build.adoc b/_versions/2.7/guides/security-jwt-build.adoc deleted file mode 100644 index 8a3fe00faef..00000000000 --- a/_versions/2.7/guides/security-jwt-build.adoc +++ /dev/null @@ -1,344 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Build, Sign and Encrypt JSON Web Tokens - -include::./attributes.adoc[] -:toc: - -According to link:https://datatracker.ietf.org/doc/html/rfc7519[RFC7519], a JSON Web Token (JWT) is a compact, URL-safe means of representing claims which are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and/or encrypted. - -Signing the claims is used most often to secure the claims. What is known today as a JWT token is typically produced by signing the claims in a JSON format using the steps described in the link:https://tools.ietf.org/html/rfc7515[JSON Web Signature] specification. - -However, when the claims are sensitive, their confidentiality can be guaranteed by following the steps described in the link:https://tools.ietf.org/html/rfc7516[JSON Web Encryption] specification to produce a JWT token with the encrypted claims. - -Finally, both the confidentiality and integrity of the claims can be further enforced by signing them first and then encrypting the nested JWT token. - -SmallRye JWT Build provides an API for securing JWT claims using all of these options. link:https://bitbucket.org/b_c/jose4j/wiki/Home[Jose4J] is used internally to support this API.
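The compact JWS serialization that RFC 7515 describes is just three base64url segments joined by dots. A JDK-only sketch (no SmallRye involved; the secret and claims are made up for illustration) of an HS256-signed token:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: base64url(header) + "." + base64url(claims) + "." + base64url(HMAC).
public class CompactJwsSketch {
    public static void main(String[] args) throws Exception {
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        String header = b64.encodeToString(
                "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String claims = b64.encodeToString(
                "{\"upn\":\"alice\",\"iss\":\"https://issuer.org\"}".getBytes(StandardCharsets.UTF_8));

        // HS256 = HMAC-SHA256 over "header.claims" with a shared secret
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(
                "a-demo-secret-at-least-256-bits!".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = b64.encodeToString(
                mac.doFinal((header + "." + claims).getBytes(StandardCharsets.UTF_8)));

        System.out.println(header + "." + claims + "." + signature);
    }
}
```

The SmallRye API below produces the same shape of output while taking care of the headers, key loading and default claims for you.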
- -== Dependency - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-smallrye-jwt-build</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-smallrye-jwt-build") ----- - -Note that you can use the SmallRye JWT Build API without having to create MicroProfile JWT endpoints supported by `quarkus-smallrye-jwt`. -It can also be excluded from `quarkus-smallrye-jwt` if MP JWT endpoints do not need to generate JWT tokens. - -== Create JwtClaimsBuilder and set the claims - -The first step is to initialize a `JwtClaimsBuilder` using one of the options below and add some claims to it: - -[source, java] ----- -import java.util.Collections; -import javax.json.Json; -import javax.json.JsonObject; -import io.smallrye.jwt.build.Jwt; -import io.smallrye.jwt.build.JwtClaimsBuilder; -import org.eclipse.microprofile.jwt.JsonWebToken; -... -// Create an empty builder and add some claims -JwtClaimsBuilder builder1 = Jwt.claims(); -builder1.claim("customClaim", "custom-value").issuer("https://issuer.org"); -// Or start typing the claims immediately: -// JwtClaimsBuilder builder1 = Jwt.upn("Alice"); - -// Builder created from the existing claims -JwtClaimsBuilder builder2 = Jwt.claims("/tokenClaims.json"); - -// Builder created from a map of claims -JwtClaimsBuilder builder3 = Jwt.claims(Collections.singletonMap("customClaim", "custom-value")); - -// Builder created from JsonObject -JsonObject userName = Json.createObjectBuilder().add("username", "Alice").build(); -JsonObject userAddress = Json.createObjectBuilder().add("city", "someCity").add("street", "someStreet").build(); -JsonObject json = Json.createObjectBuilder(userName).add("address", userAddress).build(); -JwtClaimsBuilder builder4 = Jwt.claims(json); - -// Builder created from JsonWebToken -@Inject JsonWebToken token; -JwtClaimsBuilder builder5 = Jwt.claims(token); ----- - -The API is fluent so the
builder initialization can be done as part of the fluent API sequence. - -The builder will also set `iat` (issued at) to the current time, `exp` (expires at) to 5 minutes away from the current time (it can be customized with the `smallrye.jwt.new-token.lifespan` property) and `jti` (unique token identifier) claims if they have not already been set. - -One can also configure `smallrye.jwt.new-token.issuer` and `smallrye.jwt.new-token.audience` properties and skip setting the issuer and audience directly with the builder API. - -The next step is to decide how to secure the claims. - -[[sign-claims]] -== Sign the claims - -The claims can be signed immediately or after the `JSON Web Signature` headers have been set: - -[source, java] ----- -import io.smallrye.jwt.build.Jwt; -... - -// Sign the claims using an RSA private key loaded from the location set with a 'smallrye.jwt.sign.key.location' property. -// No 'jws()' transition is necessary. Default algorithm is RS256. -String jwt1 = Jwt.claims("/tokenClaims.json").sign(); - -// Set the headers and sign the claims with an RSA private key loaded in the code (the implementation of this method is omitted). -// Note a 'jws()' transition to a 'JwtSignatureBuilder'. Default algorithm is RS256. -String jwt2 = Jwt.claims("/tokenClaims.json").jws().keyId("kid1").header("custom-header", "custom-value").sign(getPrivateKey()); ----- - -Note the `alg` (algorithm) header is set to `RS256` by default. Signing key identifier (`kid` header) does not have to be set if a single JSON Web Key (JWK) containing a `kid` property is used. - -RSA and Elliptic Curve (EC) private keys as well as symmetric secret keys can be used to sign the claims. -`ES256` and `HS256` are the default algorithms for EC private keys and symmetric keys respectively.
- -You can customize the signature algorithm, for example: - -[source, java] ----- -import io.smallrye.jwt.SignatureAlgorithm; -import io.smallrye.jwt.build.Jwt; - -// Sign the claims using an RSA private key loaded from the location set with a 'smallrye.jwt.sign.key.location' property. Algorithm is PS256. -String jwt = Jwt.upn("Alice").jws().algorithm(SignatureAlgorithm.PS256).sign(); ----- - -Alternatively you can use a `smallrye.jwt.new-token.signature-algorithm` property: - -```properties -smallrye.jwt.new-token.signature-algorithm=PS256 -``` - -and write a simpler API sequence: - -[source, java] ----- -import io.smallrye.jwt.build.Jwt; - -// Sign the claims using an RSA private key loaded from the location set with a 'smallrye.jwt.sign.key.location' property. Algorithm is PS256. -String jwt = Jwt.upn("Alice").sign(); ----- - -Note the `sign` step can be combined with the <<encrypt-claims>> step to produce `inner-signed and encrypted` tokens, see the <<innersign-encrypt-claims>> section. - -[[encrypt-claims]] -== Encrypt the claims - -The claims can be encrypted immediately or after the `JSON Web Encryption` headers have been set the same way as they can be signed. -The only minor difference is that encrypting the claims always requires a `jwe()` `JwtEncryptionBuilder` transition given that the API has been optimized to support signing and inner-signing of the claims. - -[source, java] ----- -import io.smallrye.jwt.build.Jwt; -... - -// Encrypt the claims using an RSA public key loaded from the location set with a 'smallrye.jwt.encrypt.key.location' property. Default key encryption algorithm is RSA-OAEP. -String jwt1 = Jwt.claims("/tokenClaims.json").jwe().encrypt(); - -// Set the headers and encrypt the claims with a secret key loaded in the code (the implementation of this method is omitted). Default key encryption algorithm is A256KW.
-String jwt2 = Jwt.claims("/tokenClaims.json").jwe().header("custom-header", "custom-value").encrypt(getSecretKey()); ----- - -Note the `alg` (key management algorithm) header is set to `RSA-OAEP` and the `enc` (content encryption) header is set to `A256GCM` by default. - -RSA and Elliptic Curve (EC) public keys as well as symmetric secret keys can be used to encrypt the claims. -`ECDH-ES` and `A256KW` are the default algorithms for EC public key and symmetric key encryption respectively. - -Note two encryption operations are done when creating an encrypted token: - -1) the generated content encryption key is encrypted by the key supplied with the API using the key encryption algorithm such as `RSA-OAEP` -2) the claims are encrypted by the generated content encryption key using the content encryption algorithm such as `A256GCM`. - -You can customize the key and content encryption algorithms, for example: - -[source, java] ----- -import io.smallrye.jwt.KeyEncryptionAlgorithm; -import io.smallrye.jwt.ContentEncryptionAlgorithm; -import io.smallrye.jwt.build.Jwt; - -// Encrypt the claims using an RSA public key loaded from the location set with a 'smallrye.jwt.encrypt.key.location' property. -// Key encryption algorithm is RSA-OAEP-256, content encryption algorithm is A256CBC-HS512.
-String jwt = Jwt.subject("Bob").jwe() - .keyAlgorithm(KeyEncryptionAlgorithm.RSA_OAEP_256) - .contentAlgorithm(ContentEncryptionAlgorithm.A256CBC_HS512) - .encrypt(); ----- - -Alternatively you can use `smallrye.jwt.new-token.key-encryption-algorithm` and `smallrye.jwt.new-token.content-encryption-algorithm` properties to customize the key and content encryption algorithms: - -```properties -smallrye.jwt.new-token.key-encryption-algorithm=RSA-OAEP-256 -smallrye.jwt.new-token.content-encryption-algorithm=A256CBC-HS512 -``` - -and write a simpler API sequence: - -[source, java] ----- -import io.smallrye.jwt.build.Jwt; - -// Encrypt the claims using an RSA public key loaded from the location set with a 'smallrye.jwt.encrypt.key.location' property. -// Key encryption algorithm is RSA-OAEP-256, content encryption algorithm is A256CBC-HS512. -String jwt = Jwt.subject("Bob").encrypt(); ----- - -Note that when the token is directly encrypted by the public RSA or EC key it is not possible to verify which party sent the token. -Therefore secret keys should be preferred for directly encrypting the tokens, for example, when using JWT as cookies where a secret key is managed by the Quarkus endpoint with only this endpoint being both a producer and a consumer of the encrypted token. - -If you would like to use RSA or EC public keys to encrypt the token then it is recommended to sign the token first if the signing key is available, see the next <<innersign-encrypt-claims>> section. - -[[innersign-encrypt-claims]] -== Sign the claims and encrypt the nested JWT token - -The claims can be signed and then the nested JWT token encrypted by combining the sign and encrypt steps. -[source, java] ----- -import io.smallrye.jwt.build.Jwt; -... - -// Sign the claims and encrypt the nested token using the private and public keys loaded from the locations set with the 'smallrye.jwt.sign.key.location' and 'smallrye.jwt.encrypt.key.location' properties respectively.
Signature algorithm is RS256, key encryption algorithm is RSA-OAEP-256.
-String jwt = Jwt.claims("/tokenClaims.json").innerSign().encrypt();
-----
-
-== Fast JWT Generation
-
-If the `smallrye.jwt.sign.key.location` and/or `smallrye.jwt.encrypt.key.location` properties are set then one can secure the existing claims (resources, maps, JsonObjects) with a single call:
-
-[source,java]
-----
-// More compact than Jwt.claims("/claims.json").sign();
-Jwt.sign("/claims.json");
-
-// More compact than Jwt.claims("/claims.json").jwe().encrypt();
-Jwt.encrypt("/claims.json");
-
-// More compact than Jwt.claims("/claims.json").innerSign().encrypt();
-Jwt.signAndEncrypt("/claims.json");
-----
-As mentioned above, `iat` (issued at), `exp` (expires at), `jti` (token identifier), `iss` (issuer) and `aud` (audience) claims will be added if needed.
-
-== Dealing with the keys
-
-The `smallrye.jwt.sign.key.location` and `smallrye.jwt.encrypt.key.location` properties can be used to point to signing and encryption key locations. The keys can be located on the local file system, on the classpath, or fetched from remote endpoints, and can be in `PEM` or `JSON Web Key` (`JWK`) format. For example:
-
-[source,properties]
-----
-smallrye.jwt.sign.key.location=privateKey.pem
-smallrye.jwt.encrypt.key.location=publicKey.pem
-----
-
-You can also use a MicroProfile `ConfigSource` to fetch the keys from external services such as link:{vault-guide}[HashiCorp Vault] or other secret managers and use the `smallrye.jwt.sign.key` and `smallrye.jwt.encrypt.key` properties instead:
-
-[source,properties]
-----
-smallrye.jwt.sign.key=${private.key.from.vault}
-smallrye.jwt.encrypt.key=${public.key.from.vault}
-----
-
-where both `private.key.from.vault` and `public.key.from.vault` are the `PEM` or `JWK` formatted key values provided by the custom `ConfigSource`.
-`smallrye.jwt.sign.key` and `smallrye.jwt.encrypt.key` can also contain only the Base64-encoded private or public key values.
-
-However, please note that directly inlining private keys in the configuration is not recommended. Use the `smallrye.jwt.sign.key` property only if you need to fetch a signing key value from a remote secret manager.
-
-The keys can also be loaded by the code which builds the token and supplied to the JWT Build API.
-
-If you need to sign and/or encrypt the token using a symmetric secret key then consider using `io.smallrye.jwt.util.KeyUtils` to generate a `SecretKey` of the required length.
-
-For example, one needs a 64-byte key to sign using the `HS512` algorithm (`512/8`) and a 32-byte key to encrypt the content encryption key with the `A256KW` algorithm (`256/8`):
-
-[source,java]
-----
-import javax.crypto.SecretKey;
-import io.smallrye.jwt.KeyEncryptionAlgorithm;
-import io.smallrye.jwt.SignatureAlgorithm;
-import io.smallrye.jwt.build.Jwt;
-import io.smallrye.jwt.util.KeyUtils;
-
-SecretKey signingKey = KeyUtils.generateSecretKey(SignatureAlgorithm.HS512);
-SecretKey encryptionKey = KeyUtils.generateSecretKey(KeyEncryptionAlgorithm.A256KW);
-String jwt = Jwt.claim("sensitiveClaim", getSensitiveClaim()).innerSign(signingKey).encrypt(encryptionKey);
-----
-
-You can also consider using a `JSON Web Key` (JWK) or `JSON Web Key Set` (JWK Set) format to store a secret key on a secure file system and refer to it using either the `smallrye.jwt.sign.key.location` or `smallrye.jwt.encrypt.key.location` property, for example:
-
-[source,json]
-----
-{
-  "kty":"oct",
-  "kid":"secretKey",
-  "k":"Fdh9u8rINxfivbrianbbVT1u232VQBZYKx1HGAGPt2I"
-}
-----
-
-or
-
-[source,json]
-----
-{
-  "keys": [
-    {
-      "kty":"oct",
-      "kid":"secretKey1",
-      "k":"Fdh9u8rINxfivbrianbbVT1u232VQBZYKx1HGAGPt2I"
-    },
-    {
-      "kty":"oct",
-      "kid":"secretKey2",
-      "k":"AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow"
-    }
-  ]
-}
-----
-
-`io.smallrye.jwt.util.KeyUtils` can also be used to generate a pair of asymmetric RSA or EC keys.
These keys can be stored using a `JWK`, `JWK Set` or `PEM` format. - -== SmallRye JWT Builder configuration - -SmallRye JWT supports the following properties which can be used to customize the way claims are signed and/or encrypted: - -[cols=" - io.quarkus - quarkus-smallrye-jwt - - - io.quarkus - quarkus-smallrye-jwt-build - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-smallrye-jwt") -implementation("io.quarkus:quarkus-smallrye-jwt-build") ----- - -=== Examine the JAX-RS resource - -Create a REST endpoint in `src/main/java/org/acme/security/jwt/TokenSecuredResource.java` with the following content: - -.REST Endpoint V1 -[source,java] ----- -package org.acme.security.jwt; - -import java.security.Principal; - -import javax.annotation.security.PermitAll; -import javax.enterprise.context.RequestScoped; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.InternalServerErrorException; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.MediaType; -import javax.ws.rs.core.SecurityContext; - -import org.eclipse.microprofile.jwt.JsonWebToken; - -@Path("/secured") -public class TokenSecuredResource { - - @Inject - JsonWebToken jwt; // <1> - - @GET() - @Path("permit-all") - @PermitAll // <2> - @Produces(MediaType.TEXT_PLAIN) - public String hello(@Context SecurityContext ctx) { - return getResponseString(ctx); // <3> - } - - private String getResponseString(SecurityContext ctx) { - String name; - if (ctx.getUserPrincipal() == null) { // <4> - name = "anonymous"; - } else if (!ctx.getUserPrincipal().getName().equals(jwt.getName())) { // <5> - throw new InternalServerErrorException("Principal and JsonWebToken names do not match"); - } else { - name = ctx.getUserPrincipal().getName(); // <6> - } - return String.format("hello + %s," - + " isHttps: %s," - + " authScheme: %s," - + " hasJWT: %s", - name, 
ctx.isSecure(), ctx.getAuthenticationScheme(), hasJwt()); // <7> - } - - private boolean hasJwt() { - return jwt.getClaimNames() != null; - } -} ----- -<1> Here we inject the JsonWebToken interface, an extension of the java.security.Principal interface that provides access to the claims associated with the current authenticated token. -<2> @PermitAll is a JSR 250 common security annotation that indicates that the given endpoint is accessible by any caller, authenticated or not. -<3> Here we inject the JAX-RS SecurityContext to inspect the security state of the call and use a `getResponseString()` function to populate a response string. -<4> Here we check if the call is insecure by checking the request user/caller `Principal` against null. -<5> Here we check that the Principal and JsonWebToken have the same name since JsonWebToken does represent the current Principal. -<6> Here we get the Principal name. -<7> The reply we build up makes use of the caller name, the `isSecure()` and `getAuthenticationScheme()` states of the request `SecurityContext`, and whether a non-null `JsonWebToken` was injected. - -=== Run the application - -Now we are ready to run our application. Use: - -include::includes/devtools/dev.adoc[] - -and you should see output similar to: - -.quarkus:dev Output -[source,shell] ----- -[INFO] Scanning for projects... -[INFO] -[INFO] ----------------------< org.acme:security-jwt-quickstart >----------------------- -[INFO] Building security-jwt-quickstart 1.0.0-SNAPSHOT -[INFO] --------------------------------[ jar ]--------------------------------- -... -Listening for transport dt_socket at address: 5005 -2020-07-15 16:09:50,883 INFO [io.quarkus] (Quarkus Main Thread) security-jwt-quickstart 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.073s. Listening on: http://0.0.0.0:8080 -2020-07-15 16:09:50,885 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. 
-2020-07-15 16:09:50,885 INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, mutiny, resteasy, resteasy-jackson, security, smallrye-context-propagation, smallrye-jwt, vertx, vertx-web] ----- - -Now that the REST endpoint is running, we can access it using a command line tool like curl: - -.curl command for /secured/permit-all -[source,shell] ----- -$ curl http://127.0.0.1:8080/secured/permit-all; echo -hello + anonymous, isHttps: false, authScheme: null, hasJWT: false ----- - -We have not provided any JWT in our request, so we would not expect that there is any security state seen by the endpoint, and -the response is consistent with that: - -* user name is anonymous -* isHttps is false as https is not used -* authScheme is null -* hasJWT is false - -Use Ctrl-C to stop the Quarkus server. - -So now let's actually secure something. Take a look at the new endpoint method `helloRolesAllowed` in the following: - -.REST Endpoint V2 -[source,java] ----- -package org.acme.security.jwt; - -import javax.annotation.security.PermitAll; -import javax.annotation.security.RolesAllowed; -import javax.enterprise.context.RequestScoped; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.InternalServerErrorException; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.MediaType; -import javax.ws.rs.core.SecurityContext; - -import org.eclipse.microprofile.jwt.JsonWebToken; - -@Path("/secured") -@RequestScoped -public class TokenSecuredResource { - - @Inject - JsonWebToken jwt; // <1> - - @GET - @Path("permit-all") - @PermitAll - @Produces(MediaType.TEXT_PLAIN) - public String hello(@Context SecurityContext ctx) { - return getResponseString(ctx); - } - - @GET - @Path("roles-allowed") // <2> - @RolesAllowed({ "User", "Admin" }) // <3> - @Produces(MediaType.TEXT_PLAIN) - public String helloRolesAllowed(@Context SecurityContext ctx) { - return getResponseString(ctx) + ", birthdate: " + 
jwt.getClaim("birthdate").toString(); // <4> - } - - private String getResponseString(SecurityContext ctx) { - String name; - if (ctx.getUserPrincipal() == null) { - name = "anonymous"; - } else if (!ctx.getUserPrincipal().getName().equals(jwt.getName())) { - throw new InternalServerErrorException("Principal and JsonWebToken names do not match"); - } else { - name = ctx.getUserPrincipal().getName(); - } - return String.format("hello + %s," - + " isHttps: %s," - + " authScheme: %s," - + " hasJWT: %s", - name, ctx.isSecure(), ctx.getAuthenticationScheme(), hasJwt()); - } - - private boolean hasJwt() { - return jwt.getClaimNames() != null; - } -} ----- -<1> Here we inject `JsonWebToken` -<2> This new endpoint will be located at /secured/roles-allowed -<3> @RolesAllowed is a JSR 250 common security annotation that indicates that the given endpoint is accessible by a caller if -they have either a "User" or "Admin" role assigned. -<4> Here we build the reply the same way as in the `hello` method but also add a value of the JWT `birthdate` claim by directly calling the injected `JsonWebToken`. - -After you make this addition to your `TokenSecuredResource`, rerun the `./mvnw compile quarkus:dev` command, and then try `curl -v http://127.0.0.1:8080/secured/roles-allowed; echo` to attempt to access the new endpoint. Your output should be: - -.curl command for /secured/roles-allowed -[source,shell] ----- -$ curl -v http://127.0.0.1:8080/secured/roles-allowed; echo -* Trying 127.0.0.1... 
-* TCP_NODELAY set
-* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
-> GET /secured/roles-allowed HTTP/1.1
-> Host: 127.0.0.1:8080
-> User-Agent: curl/7.54.0
-> Accept: */*
->
-< HTTP/1.1 401 Unauthorized
-< Connection: keep-alive
-< Content-Type: text/html;charset=UTF-8
-< Content-Length: 14
-< Date: Sun, 03 Mar 2019 16:32:34 GMT
-<
-* Connection #0 to host 127.0.0.1 left intact
-Not authorized
-----
-
-As expected, since we did not provide a JWT in the request, access was denied with an HTTP 401 Unauthorized error. We need to obtain and pass in a valid JWT to access that endpoint. There are two steps to this: 1) configuring our {extension-name} extension with information on how to validate a JWT, and 2) generating a matching JWT with the appropriate claims.
-
-=== Configuring the {extension-name} Extension Security Information
-
-Create a `security-jwt-quickstart/src/main/resources/application.properties` with the following content:
-
-.application.properties for TokenSecuredResource
-[source, properties]
-----
-mp.jwt.verify.publickey.location=publicKey.pem #<1>
-mp.jwt.verify.issuer=https://example.com/issuer #<2>
-
-quarkus.native.resources.includes=publicKey.pem #<3>
-----
-<1> We are setting the public key location to point to a classpath `publicKey.pem` resource. We will add this key in the Adding a Public Key step below.
-<2> We are setting the issuer to the URL string `https://example.com/issuer`.
-<3> We are including the public key as a resource in the native executable.
-
-=== Adding a Public Key
-
-The https://tools.ietf.org/html/rfc7519[JWT specification] defines various levels of security of JWTs that one can use.
-The {mp-jwt} specification requires that JWTs be signed with the RS256 signature algorithm. This in
-turn requires an RSA key pair. On the REST endpoint server side, you need to configure the location of the RSA public
-key to use to verify the JWT sent along with requests.
The `mp.jwt.verify.publickey.location=publicKey.pem` setting configured -previously expects that the public key is available on the classpath as `publicKey.pem`. To accomplish this, copy the following -content to a `security-jwt-quickstart/src/main/resources/publicKey.pem` file. - -.RSA Public Key PEM Content -[source, text] ----- ------BEGIN PUBLIC KEY----- -MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAlivFI8qB4D0y2jy0CfEq -Fyy46R0o7S8TKpsx5xbHKoU1VWg6QkQm+ntyIv1p4kE1sPEQO73+HY8+Bzs75XwR -TYL1BmR1w8J5hmjVWjc6R2BTBGAYRPFRhor3kpM6ni2SPmNNhurEAHw7TaqszP5e -UF/F9+KEBWkwVta+PZ37bwqSE4sCb1soZFrVz/UT/LF4tYpuVYt3YbqToZ3pZOZ9 -AX2o1GCG3xwOjkc4x0W7ezbQZdC9iftPxVHR8irOijJRRjcPDtA6vPKpzLl6CyYn -sIYPd99ltwxTHjr3npfv/3Lw50bAkbT4HeLFxTx4flEoZLKO/g0bAoV2uqBhkA9x -nQIDAQAB ------END PUBLIC KEY----- ----- - -=== Generating a JWT - -Often one obtains a JWT from an identity manager like https://www.keycloak.org/[Keycloak], but for this quickstart we will generate our own using the JWT generation API provided by `smallrye-jwt` (see xref:smallrye-jwt-build.adoc[Generate JWT tokens with SmallRye JWT] for more information). - -Take the code from the following listing and place into `security-jwt-quickstart/src/main/java/org/acme/security/jwt/GenerateToken.java`: - -.GenerateToken main Driver Class -[source, java] ----- -package org.acme.security.jwt; - -import java.util.Arrays; -import java.util.HashSet; - -import org.eclipse.microprofile.jwt.Claims; - -import io.smallrye.jwt.build.Jwt; - -public class GenerateToken { - /** - * Generate JWT token - */ - public static void main(String[] args) { - String token = - Jwt.issuer("https://example.com/issuer") // <1> - .upn("jdoe@quarkus.io") // <2> - .groups(new HashSet<>(Arrays.asList("User", "Admin"))) // <3> - .claim(Claims.birthdate.name(), "2001-07-13") // <4> - .sign(); - System.out.println(token); - } -} ----- - -<1> The `iss` claim is the issuer of the JWT. This needs to match the server side `mp.jwt.verify.issuer`. 
-in order for the token to be accepted as valid.
-<2> The `upn` claim is defined by the {mp-jwt} spec as the preferred claim to use for the
-`Principal` seen via the container security APIs.
-<3> The `groups` claim provides the groups and top-level roles associated with the JWT bearer.
-<4> The `birthdate` claim. It can be considered a sensitive claim, so you may want to consider encrypting the claims, see xref:smallrye-jwt-build.adoc[Generate JWT tokens with SmallRye JWT].
-
-Note that for this code to work, we need the content of the RSA private key that corresponds to the public key we have in the TokenSecuredResource application. Take the following PEM content and place it into `security-jwt-quickstart/src/test/resources/privateKey.pem`:
-
-.RSA Private Key PEM Content
-[source, text]
-----
------BEGIN PRIVATE KEY-----
-MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCWK8UjyoHgPTLa
-PLQJ8SoXLLjpHSjtLxMqmzHnFscqhTVVaDpCRCb6e3Ii/WniQTWw8RA7vf4djz4H
-OzvlfBFNgvUGZHXDwnmGaNVaNzpHYFMEYBhE8VGGiveSkzqeLZI+Y02G6sQAfDtN
-qqzM/l5QX8X34oQFaTBW1r49nftvCpITiwJvWyhkWtXP9RP8sXi1im5Vi3dhupOh
-nelk5n0BfajUYIbfHA6ORzjHRbt7NtBl0L2J+0/FUdHyKs6KMlFGNw8O0Dq88qnM
-uXoLJiewhg9332W3DFMeOveel+//cvDnRsCRtPgd4sXFPHh+UShkso7+DRsChXa6
-oGGQD3GdAgMBAAECggEAAjfTSZwMHwvIXIDZB+yP+pemg4ryt84iMlbofclQV8hv
-6TsI4UGwcbKxFOM5VSYxbNOisb80qasb929gixsyBjsQ8284bhPJR7r0q8h1C+jY
-URA6S4pk8d/LmFakXwG9Tz6YPo3pJziuh48lzkFTk0xW2Dp4SLwtAptZY/+ZXyJ6
-96QXDrZKSSM99Jh9s7a0ST66WoxSS0UC51ak+Keb0KJ1jz4bIJ2C3r4rYlSu4hHB
-Y73GfkWORtQuyUDa9yDOem0/z0nr6pp+pBSXPLHADsqvZiIhxD/O0Xk5I6/zVHB3
-zuoQqLERk0WvA8FXz2o8AYwcQRY2g30eX9kU4uDQAQKBgQDmf7KGImUGitsEPepF
-KH5yLWYWqghHx6wfV+fdbBxoqn9WlwcQ7JbynIiVx8MX8/1lLCCe8v41ypu/eLtP
-iY1ev2IKdrUStvYRSsFigRkuPHUo1ajsGHQd+ucTDf58mn7kRLW1JGMeGxo/t32B
-m96Af6AiPWPEJuVfgGV0iwg+HQKBgQCmyPzL9M2rhYZn1AozRUguvlpmJHU2DpqS
-34Q+7x2Ghf7MgBUhqE0t3FAOxEC7IYBwHmeYOvFR8ZkVRKNF4gbnF9RtLdz0DMEG
-5qsMnvJUSQbNB1yVjUCnDAtElqiFRlQ/k0LgYkjKDY7LfciZl9uJRl0OSYeX/qG2
-tRW09tOpgQKBgBSGkpM3RN/MRayfBtmZvYjVWh3yjkI2GbHA1jj1g6IebLB9SnfL -WbXJErCj1U+wvoPf5hfBc7m+jRgD3Eo86YXibQyZfY5pFIh9q7Ll5CQl5hj4zc4Y -b16sFR+xQ1Q9Pcd+BuBWmSz5JOE/qcF869dthgkGhnfVLt/OQzqZluZRAoGAXQ09 -nT0TkmKIvlza5Af/YbTqEpq8mlBDhTYXPlWCD4+qvMWpBII1rSSBtftgcgca9XLB -MXmRMbqtQeRtg4u7dishZVh1MeP7vbHsNLppUQT9Ol6lFPsd2xUpJDc6BkFat62d -Xjr3iWNPC9E9nhPPdCNBv7reX7q81obpeXFMXgECgYEAmk2Qlus3OV0tfoNRqNpe -Mb0teduf2+h3xaI1XDIzPVtZF35ELY/RkAHlmWRT4PCdR0zXDidE67L6XdJyecSt -FdOUH8z5qUraVVebRFvJqf/oGsXc4+ex1ZKUTbY0wqY1y9E39yvB3MaTmZFuuqk8 -f3cg+fr8aou7pr9SHhJlZCU= ------END PRIVATE KEY----- ----- - -We will use a `smallrye.jwt.sign.key.location` property to point to this private signing key. - -[NOTE] -.Generating Keys with OpenSSL -==== -It is also possible to generate a public and private key pair using the OpenSSL command line tool. - -.openssl commands for generating keys -[source, text] ----- -openssl genrsa -out rsaPrivateKey.pem 2048 -openssl rsa -pubout -in rsaPrivateKey.pem -out publicKey.pem ----- - -An additional step is needed for generating the private key for converting it into the PKCS#8 format. - -.openssl command for converting private key -[source, text] ----- -openssl pkcs8 -topk8 -nocrypt -inform pem -in rsaPrivateKey.pem -outform pem -out privateKey.pem ----- - -You can use the generated pair of keys instead of the keys used in this quickstart. -==== - -Now we can generate a JWT to use with `TokenSecuredResource` endpoint. 
To do this, run the following command:
-
-.Sample JWT Generation Output
-[source,shell]
-----
-$ mvn exec:java -Dexec.mainClass=org.acme.security.jwt.GenerateToken -Dexec.classpathScope=test -Dsmallrye.jwt.sign.key.location=privateKey.pem
-
-eyJraWQiOiJcL3ByaXZhdGVLZXkucGVtIiwidHlwIjoiSldUIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJqZG9lLXVzaW5nLWp3dC1yYmFjIiwiYXVkIjoidXNpbmctand0LXJiYWMiLCJ1cG4iOiJqZG9lQHF1YXJrdXMuaW8iLCJiaXJ0aGRhdGUiOiIyMDAxLTA3LTEzIiwiYXV0aF90aW1lIjoxNTUxNjU5Njc2LCJpc3MiOiJodHRwczpcL1wvcXVhcmt1cy5pb1wvdXNpbmctand0LXJiYWMiLCJyb2xlTWFwcGluZ3MiOnsiZ3JvdXAyIjoiR3JvdXAyTWFwcGVkUm9sZSIsImdyb3VwMSI6Ikdyb3VwMU1hcHBlZFJvbGUifSwiZ3JvdXBzIjpbIkVjaG9lciIsIlRlc3RlciIsIlN1YnNjcmliZXIiLCJncm91cDIiXSwicHJlZmVycmVkX3VzZXJuYW1lIjoiamRvZSIsImV4cCI6MTU1MTY1OTk3NiwiaWF0IjoxNTUxNjU5Njc2LCJqdGkiOiJhLTEyMyJ9.O9tx_wNNS4qdpFhxeD1e7v4aBNWz1FCq0UV8qmXd7dW9xM4hA5TO-ZREk3ApMrL7_rnX8z81qGPIo_R8IfHDyNaI1SLD56gVX-NaOLS2OjfcbO3zOWJPKR_BoZkYACtMoqlWgIwIRC-wJKUJU025dHZiNL0FWO4PjwuCz8hpZYXIuRscfFhXKrDX1fh3jDhTsOEFfu67ACd85f3BdX9pe-ayKSVLh_RSbTbBPeyoYPE59FW7H5-i8IE-Gqu838Hz0i38ksEJFI25eR-AJ6_PSUD0_-TV3NjXhF3bFIeT4VSaIZcpibekoJg0cQm-4ApPEcPLdgTejYHA-mupb8hSwg
-----
-
-The JWT string is a Base64 URL encoded string with three parts separated by '.' characters:
-the first part is the JWT headers, the second part is the JWT claims, and the third part is the JWT signature.
-
-=== Finally, Secured Access to /secured/roles-allowed
-
-Now let's use this to make a secured request to the /secured/roles-allowed endpoint.
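The three-part structure described above can be inspected with the JDK alone. The following standalone sketch (the `JwtInspector` class name and the sample header/claims are hypothetical, not part of the quickstart) Base64 URL decodes the header and claims parts of a token:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtInspector {

    // Decode one of the first two '.'-separated parts of a JWT.
    // Part 0 is the headers JSON, part 1 is the claims JSON;
    // part 2 is the raw signature and is not JSON.
    public static String decodePart(String jwt, int index) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[index]), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Build a toy token just to demonstrate the decoding; a real token
        // would come from the GenerateToken command above.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString("{\"alg\":\"RS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String claims = enc.encodeToString("{\"upn\":\"jdoe@quarkus.io\"}".getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + claims + ".signature";

        System.out.println(decodePart(jwt, 0)); // the headers JSON
        System.out.println(decodePart(jwt, 1)); // the claims JSON
    }
}
```

Pasting the claims part of the token generated above into such a decoder is a quick way to confirm the `iss`, `upn` and `groups` claims before sending the token to the endpoint.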
Make sure you have the Quarkus server still running in dev mode, and then run the following command, making sure to use your version of the generated JWT from the previous step: - -[source,bash] ----- -curl -H "Authorization: Bearer eyJraWQiOiJcL3ByaXZhdGVLZXkucGVtIiwidHlwIjoiSldUIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJqZG9lLXVzaW5nLWp3dC1yYmFjIiwiYXVkIjoidXNpbmctand0LXJiYWMiLCJ1cG4iOiJqZG9lQHF1YXJrdXMuaW8iLCJiaXJ0aGRhdGUiOiIyMDAxLTA3LTEzIiwiYXV0aF90aW1lIjoxNTUxNjUyMDkxLCJpc3MiOiJodHRwczpcL1wvcXVhcmt1cy5pb1wvdXNpbmctand0LXJiYWMiLCJyb2xlTWFwcGluZ3MiOnsiZ3JvdXAyIjoiR3JvdXAyTWFwcGVkUm9sZSIsImdyb3VwMSI6Ikdyb3VwMU1hcHBlZFJvbGUifSwiZ3JvdXBzIjpbIkVjaG9lciIsIlRlc3RlciIsIlN1YnNjcmliZXIiLCJncm91cDIiXSwicHJlZmVycmVkX3VzZXJuYW1lIjoiamRvZSIsImV4cCI6MTU1MTY1MjM5MSwiaWF0IjoxNTUxNjUyMDkxLCJqdGkiOiJhLTEyMyJ9.aPA4Rlc4kw7n_OZZRRk25xZydJy_J_3BRR8ryYLyHTO1o68_aNWWQCgpnAuOW64svPhPnLYYnQzK-l2vHX34B64JySyBD4y_vRObGmdwH_SEufBAWZV7mkG3Y4mTKT3_4EWNu4VH92IhdnkGI4GJB6yHAEzlQI6EdSOa4Nq8Gp4uPGqHsUZTJrA3uIW0TbNshFBm47-oVM3ZUrBz57JKtr0e9jv0HjPQWyvbzx1HuxZd6eA8ow8xzvooKXFxoSFCMnxotd3wagvYQ9ysBa89bgzL-lhjWtusuMFDUVYwFqADE7oOSOD4Vtclgq8svznBQ-YpfTHfb9QEcofMlpyjNA" http://127.0.0.1:8080/secured/roles-allowed; echo ----- - -.curl Command for /secured/roles-allowed With JWT -[source,shell] ----- -$ curl -H "Authorization: Bearer eyJraWQ..." http://127.0.0.1:8080/secured/roles-allowed; echo -hello + jdoe@quarkus.io, isHttps: false, authScheme: Bearer, hasJWT: true, birthdate: 2001-07-13 ----- - -Success! We now have: - -* a non-anonymous caller name of jdoe@quarkus.io -* an authentication scheme of Bearer -* a non-null JsonWebToken -* birthdate claim value - -=== Using the JsonWebToken and Claim Injection - -Now that we can generate a JWT to access our secured REST endpoints, let's see what more we can do with the `JsonWebToken` -interface and the JWT claims. 
The `org.eclipse.microprofile.jwt.JsonWebToken` interface extends the `java.security.Principal` -interface, and is in fact the type of the object that is returned by the `javax.ws.rs.core.SecurityContext#getUserPrincipal()` call we -used previously. This means that code that does not use CDI but does have access to the REST container `SecurityContext` can get -hold of the caller `JsonWebToken` interface by casting the `SecurityContext#getUserPrincipal()`. - -The `JsonWebToken` interface defines methods for accessing claims in the underlying JWT. It provides accessors for common -claims that are required by the {mp-jwt} specification as well as arbitrary claims that may exist in the JWT. - -All the JWT claims can also be injected. Let's expand our `TokenSecuredResource` with another endpoint /secured/roles-allowed-admin which uses the injected `birthdate` claim -(as opposed to getting it from `JsonWebToken`): - -[source, java] ----- -package org.acme.security.jwt; - -import javax.annotation.security.PermitAll; -import javax.annotation.security.RolesAllowed; -import javax.enterprise.context.RequestScoped; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.InternalServerErrorException; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.MediaType; -import javax.ws.rs.core.SecurityContext; - -import org.eclipse.microprofile.jwt.Claim; -import org.eclipse.microprofile.jwt.Claims; -import org.eclipse.microprofile.jwt.JsonWebToken; - -@Path("/secured") -@RequestScoped -public class TokenSecuredResource { - - @Inject - JsonWebToken jwt; // <1> - @Inject - @Claim(standard = Claims.birthdate) - String birthdate; // <2> - - @GET - @Path("permit-all") - @PermitAll - @Produces(MediaType.TEXT_PLAIN) - public String hello(@Context SecurityContext ctx) { - return getResponseString(ctx); - } - - @GET - @Path("roles-allowed") - @RolesAllowed({ "User", "Admin" }) - @Produces(MediaType.TEXT_PLAIN) - 
public String helloRolesAllowed(@Context SecurityContext ctx) {
-        return getResponseString(ctx) + ", birthdate: " + jwt.getClaim("birthdate").toString();
-    }
-
-    @GET
-    @Path("roles-allowed-admin")
-    @RolesAllowed("Admin")
-    @Produces(MediaType.TEXT_PLAIN)
-    public String helloRolesAllowedAdmin(@Context SecurityContext ctx) {
-        return getResponseString(ctx) + ", birthdate: " + birthdate; // <3>
-    }
-
-    private String getResponseString(SecurityContext ctx) {
-        String name;
-        if (ctx.getUserPrincipal() == null) {
-            name = "anonymous";
-        } else if (!ctx.getUserPrincipal().getName().equals(jwt.getName())) {
-            throw new InternalServerErrorException("Principal and JsonWebToken names do not match");
-        } else {
-            name = ctx.getUserPrincipal().getName();
-        }
-        return String.format("hello + %s,"
-            + " isHttps: %s,"
-            + " authScheme: %s,"
-            + " hasJWT: %s",
-            name, ctx.isSecure(), ctx.getAuthenticationScheme(), hasJwt());
-    }
-
-    private boolean hasJwt() {
-        return jwt.getClaimNames() != null;
-    }
-}
-----
-<1> Here we inject the `JsonWebToken`.
-<2> Here we inject the `birthdate` claim as a `String`, which is why the `@RequestScoped` scope is now required.
-<3> Here we use the injected `birthdate` claim to build the final reply.
- -Now generate the token again and run: - -[source,bash] ----- -curl -H "Authorization: Bearer eyJraWQiOiJcL3ByaXZhdGVLZXkucGVtIiwidHlwIjoiSldUIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJqZG9lLXVzaW5nLWp3dC1yYmFjIiwiYXVkIjoidXNpbmctand0LXJiYWMiLCJ1cG4iOiJqZG9lQHF1YXJrdXMuaW8iLCJiaXJ0aGRhdGUiOiIyMDAxLTA3LTEzIiwiYXV0aF90aW1lIjoxNTUxNjUyMDkxLCJpc3MiOiJodHRwczpcL1wvcXVhcmt1cy5pb1wvdXNpbmctand0LXJiYWMiLCJyb2xlTWFwcGluZ3MiOnsiZ3JvdXAyIjoiR3JvdXAyTWFwcGVkUm9sZSIsImdyb3VwMSI6Ikdyb3VwMU1hcHBlZFJvbGUifSwiZ3JvdXBzIjpbIkVjaG9lciIsIlRlc3RlciIsIlN1YnNjcmliZXIiLCJncm91cDIiXSwicHJlZmVycmVkX3VzZXJuYW1lIjoiamRvZSIsImV4cCI6MTU1MTY1MjM5MSwiaWF0IjoxNTUxNjUyMDkxLCJqdGkiOiJhLTEyMyJ9.aPA4Rlc4kw7n_OZZRRk25xZydJy_J_3BRR8ryYLyHTO1o68_aNWWQCgpnAuOW64svPhPnLYYnQzK-l2vHX34B64JySyBD4y_vRObGmdwH_SEufBAWZV7mkG3Y4mTKT3_4EWNu4VH92IhdnkGI4GJB6yHAEzlQI6EdSOa4Nq8Gp4uPGqHsUZTJrA3uIW0TbNshFBm47-oVM3ZUrBz57JKtr0e9jv0HjPQWyvbzx1HuxZd6eA8ow8xzvooKXFxoSFCMnxotd3wagvYQ9ysBa89bgzL-lhjWtusuMFDUVYwFqADE7oOSOD4Vtclgq8svznBQ-YpfTHfb9QEcofMlpyjNA" http://127.0.0.1:8080/secured/roles-allowed-admin; echo ----- - -[source,shell] ----- -$ curl -H "Authorization: Bearer eyJraWQ..." http://127.0.0.1:8080/secured/roles-allowed-admin; echo -hello + jdoe@quarkus.io, isHttps: false, authScheme: Bearer, hasJWT: true, birthdate: 2001-07-13 ----- - -=== Package and run the application - -As usual, the application can be packaged using: - -include::includes/devtools/build.adoc[] - -And executed using `java -jar target/quarkus-app/quarkus-run.jar`: - -.Runner jar Example -[source,shell,subs=attributes+] ----- -$ java -jar target/quarkus-app/quarkus-run.jar -2019-03-28 14:27:48,839 INFO [io.quarkus] (main) Quarkus {quarkus-version} started in 0.796s. 
Listening on: http://[::]:8080 -2019-03-28 14:27:48,841 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jackson, security, smallrye-jwt] ----- - -You can also generate the native executable with: - -include::includes/devtools/build-native.adoc[] - -.Native Executable Example -[source,shell] ----- -[INFO] Scanning for projects... -... -[security-jwt-quickstart-runner:25602] universe: 493.17 ms -[security-jwt-quickstart-runner:25602] (parse): 660.41 ms -[security-jwt-quickstart-runner:25602] (inline): 1,431.10 ms -[security-jwt-quickstart-runner:25602] (compile): 7,301.78 ms -[security-jwt-quickstart-runner:25602] compile: 10,542.16 ms -[security-jwt-quickstart-runner:25602] image: 2,797.62 ms -[security-jwt-quickstart-runner:25602] write: 988.24 ms -[security-jwt-quickstart-runner:25602] [total]: 43,778.16 ms -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ -[INFO] Total time: 51.500 s -[INFO] Finished at: 2019-03-28T14:30:56-07:00 -[INFO] ------------------------------------------------------------------------ - -$ ./target/security-jwt-quickstart-runner -2019-03-28 14:31:37,315 INFO [io.quarkus] (main) Quarkus 0.12.0 started in 0.006s. Listening on: http://[::]:8080 -2019-03-28 14:31:37,316 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jackson, security, smallrye-jwt] ----- - -=== Explore the Solution - -The solution repository located in the `security-jwt-quickstart` {quickstarts-tree-url}/security-jwt-quickstart[directory] contains all of the versions we have -worked through in this quickstart guide as well as some additional endpoints that illustrate subresources with injection -of ``JsonWebToken``s and their claims into those using the CDI APIs. 
We suggest that you check out the quickstart solutions and
-explore the `security-jwt-quickstart` directory to learn more about the {extension-name} extension features.
-
-== Reference Guide
-
-[[supported-injection-scopes]]
-=== Supported Injection Scopes
-
-`@ApplicationScoped`, `@Singleton` and `@RequestScoped` outer bean injection scopes are all supported when an `org.eclipse.microprofile.jwt.JsonWebToken` is injected, with `@RequestScoped` scoping enforced for the `JsonWebToken` itself to ensure the current token is represented.
-
-However, `@RequestScoped` must be used when the individual token claims are injected as simple types such as `String`, for example:
-
-[source, java]
-----
-package org.acme.security.jwt;
-
-import javax.enterprise.context.RequestScoped;
-import javax.inject.Inject;
-import javax.ws.rs.Path;
-
-import org.eclipse.microprofile.jwt.Claim;
-import org.eclipse.microprofile.jwt.Claims;
-
-@Path("/secured")
-@RequestScoped
-public class TokenSecuredResource {
-
-    @Inject
-    @Claim(standard = Claims.birthdate)
-    String birthdate;
-}
-----
-
-Note that you can also use the injected `JsonWebToken` to access individual claims, in which case setting `@RequestScoped` is not necessary.
-
-Please see link:https://download.eclipse.org/microprofile/microprofile-jwt-auth-1.2/microprofile-jwt-auth-spec-1.2.html#_cdi_injection_requirements[MP JWT CDI Injection Requirements] for more details.
-
-=== Supported Public Key Formats
-
-Public keys may be formatted in any of the following formats, specified in order of
-precedence:
-
- - Public Key Cryptography Standards #8 (PKCS#8) PEM
- - JSON Web Key (JWK)
- - JSON Web Key Set (JWKS)
- - JSON Web Key (JWK) Base64 URL encoded
- - JSON Web Key Set (JWKS) Base64 URL encoded
-
-=== Dealing with the verification keys
-
-If you need to verify the token signature using an asymmetric RSA or Elliptic Curve (EC) key then use the `mp.jwt.verify.publickey.location` property to refer to the local or remote verification key.
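For reference, the PEM text used for such verification keys is a Base64 encoding of the key's X.509 `SubjectPublicKeyInfo` bytes between `BEGIN`/`END` markers. A JDK-only sketch (the `PemKeys` class name is hypothetical) of writing a public key out as PEM and reading it back, roughly as a verifier would:

```java
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.spec.X509EncodedKeySpec;
import java.util.Arrays;
import java.util.Base64;

public class PemKeys {

    // Encode a public key as PEM: Base64 of the X.509 SubjectPublicKeyInfo bytes,
    // wrapped at 64 characters between BEGIN/END markers.
    public static String toPem(PublicKey key) {
        String b64 = Base64.getMimeEncoder(64, "\n".getBytes()).encodeToString(key.getEncoded());
        return "-----BEGIN PUBLIC KEY-----\n" + b64 + "\n-----END PUBLIC KEY-----\n";
    }

    // Parse PEM text back into a PublicKey.
    public static PublicKey fromPem(String pem) throws Exception {
        String b64 = pem.replace("-----BEGIN PUBLIC KEY-----", "")
                        .replace("-----END PUBLIC KEY-----", "")
                        .replaceAll("\\s", "");
        byte[] der = Base64.getDecoder().decode(b64);
        return KeyFactory.getInstance("RSA").generatePublic(new X509EncodedKeySpec(der));
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();
        PublicKey parsed = fromPem(toPem(pair.getPublic()));
        // The round-tripped key has the same encoded bytes as the original
        System.out.println(Arrays.equals(parsed.getEncoded(), pair.getPublic().getEncoded()));
    }
}
```

This is only a sketch for understanding the format; in an application you would let the extension load the key via `mp.jwt.verify.publickey.location` rather than parse PEM by hand.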
- -Use `mp.jwt.verify.publickey.algorithm` to customize the verification algorithm (default is `RS256`), for example, set it to `ES256` when working with the EC keys. - -If you need to verify the token signature using the symmetric secret key then either a `JSON Web Key` (JWK) or `JSON Web Key Set` (JWK Set) format must be used to represent this secret key, for example: - -```json -{ - "keys": [ - { - "kty":"oct", - "kid":"secretKey", - "k":"AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow" - } - ] -} -``` - -This secret key JWK will also need to be referred to with `smallrye.jwt.verify.key.location`. -`smallrye.jwt.verify.algorithm` should be set to `HS256`/`HS384`/`HS512`. - -[[jwt-parser]] -=== Parse and Verify JsonWebToken with JWTParser - -If the JWT token can not be injected, for example, if it is embedded in the service request payload or the service endpoint acquires it out of band, then one can use `JWTParser`: - -[source,java] ----- -import org.eclipse.microprofile.jwt.JsonWebToken; -import io.smallrye.jwt.auth.principal.JWTParser; -... -@Inject JWTParser parser; - -String token = getTokenFromOidcServer(); - -// Parse and verify the token -JsonWebToken jwt = parser.parse(token); ----- - -You can also use it to customize the way the token is verified or decrypted. 
For example, one can supply a local `SecretKey`: - -[source,java] ---- -import javax.crypto.SecretKey; -import javax.inject.Inject; -import javax.ws.rs.CookieParam; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.NewCookie; -import javax.ws.rs.core.Response; -import org.eclipse.microprofile.jwt.JsonWebToken; -import io.smallrye.jwt.auth.principal.JWTParser; -import io.smallrye.jwt.build.Jwt; - -@Path("/secured") -public class SecuredResource { - @Inject JWTParser parser; - private String secret = "AyM1SysPpbyDfgZld3umj1qzKObwVMko"; - - @GET - @Produces("text/plain") - public Response getUserName(@CookieParam("jwt") String jwtCookie) { - if (jwtCookie == null) { - // Create a JWT token signed using the 'HS256' algorithm - String newJwtCookie = Jwt.upn("Alice").signWithSecret(secret); - // or create a JWT token encrypted using the 'A256KW' algorithm - // Jwt.upn("alice").encryptWithSecret(secret); - return Response.ok("Alice").cookie(new NewCookie("jwt", newJwtCookie)).build(); - } else { - // All mp.jwt and smallrye.jwt properties are still effective, only the verification key is customized. - JsonWebToken jwt = parser.verify(jwtCookie, secret); - // or jwt = parser.decrypt(jwtCookie, secret); - return Response.ok(jwt.getName()).build(); - } - } -} ---- - -Please also see the <<add-smallrye-jwt>> section about using `JWTParser` without the `HTTP` support provided by `quarkus-smallrye-jwt`. - -=== Token Decryption - -If your application needs to accept tokens with encrypted claims or with encrypted inner-signed claims, then all you need to do is set -`smallrye.jwt.decrypt.key.location` pointing to the decryption key. - -If this is the only key property set, then the incoming token is expected to contain the encrypted claims only. -If either the `mp.jwt.verify.publickey` or `mp.jwt.verify.publickey.location` verification property is also set, then the incoming token is expected to contain -the encrypted inner-signed token.
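As a sketch, a configuration accepting encrypted inner-signed tokens could combine both properties (the key file names below are only illustrative):

[source,properties]
----
# decrypt the incoming token with this private key
smallrye.jwt.decrypt.key.location=privateDecryptionKey.pem
# then verify the decrypted inner-signed token with this public key
mp.jwt.verify.publickey.location=publicVerificationKey.pem
----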
 - -See xref:smallrye-jwt-build.adoc[Generate JWT tokens with SmallRye JWT] to learn how to quickly generate encrypted, or inner-signed and then encrypted, tokens. - -=== Custom Factories - -`io.smallrye.jwt.auth.principal.DefaultJWTCallerPrincipalFactory` is used by default to parse and verify JWT tokens and convert them to `JsonWebToken` principals. -It uses the `MP JWT` and `smallrye-jwt` properties listed in the `Configuration` section to verify and customize JWT tokens. - -If you need to provide your own factory, for example, to avoid re-verifying tokens that have already been verified by a firewall, then you can either use a `ServiceLoader` mechanism by providing a `META-INF/services/io.smallrye.jwt.auth.principal.JWTCallerPrincipalFactory` resource or simply have an `Alternative` CDI bean implementation like this one: - -[source,java] ---- -import java.nio.charset.StandardCharsets; -import java.util.Base64; -import javax.annotation.Priority; -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.inject.Alternative; -import org.jose4j.jwt.JwtClaims; -import org.jose4j.jwt.consumer.InvalidJwtException; -import io.smallrye.jwt.auth.principal.DefaultJWTCallerPrincipal; -import io.smallrye.jwt.auth.principal.JWTAuthContextInfo; -import io.smallrye.jwt.auth.principal.JWTCallerPrincipal; -import io.smallrye.jwt.auth.principal.JWTCallerPrincipalFactory; -import io.smallrye.jwt.auth.principal.ParseException; - -@ApplicationScoped -@Alternative -@Priority(1) -public class TestJWTCallerPrincipalFactory extends JWTCallerPrincipalFactory { - - @Override - public JWTCallerPrincipal parse(String token, JWTAuthContextInfo authContextInfo) throws ParseException { - try { - // Token has already been verified, parse the token claims only - String json = new String(Base64.getUrlDecoder().decode(token.split("\\.")[1]), StandardCharsets.UTF_8); - return new DefaultJWTCallerPrincipal(JwtClaims.parse(json)); - } catch (InvalidJwtException ex) { - throw 
new ParseException(ex.getMessage()); - } - } -} ---- - -=== Token Propagation - -Please see the xref:security-openid-connect-client.adoc#token-propagation[Token Propagation] section about propagating the Bearer access token to the downstream services. - -[[integration-testing]] -=== Testing - -[[integration-testing-wiremock]] -==== Wiremock - -If you configure `mp.jwt.verify.publickey.location` to point to an HTTPS or HTTP based JSON Web Key (JWK) set, then you can use the same approach as described in the xref:security-openid-connect.adoc#integration-testing[OpenID Connect Bearer Token Integration testing] `Wiremock` section but only change the `application.properties` to use MP JWT configuration properties instead: - -[source, properties] ---- -# keycloak.url is set by OidcWiremockTestResource -mp.jwt.verify.publickey.location=${keycloak.url}/realms/quarkus/protocol/openid-connect/certs -mp.jwt.verify.issuer=${keycloak.url}/realms/quarkus ---- - -[[integration-testing-keycloak]] -==== Keycloak - -If you work with Keycloak and configure `mp.jwt.verify.publickey.location` to point to an HTTPS or HTTP based JSON Web Key (JWK) set, then you can use the same approach as described in the xref:security-openid-connect.adoc#integration-testing-keycloak[OpenID Connect Bearer Token Integration testing] `Keycloak` section but only change the `application.properties` to use MP JWT configuration properties instead: - -[source, properties] ---- -# keycloak.url is set by OidcWiremockTestResource -mp.jwt.verify.publickey.location=${keycloak.url}/realms/quarkus/protocol/openid-connect/certs -mp.jwt.verify.issuer=${keycloak.url}/realms/quarkus ---- - -[[integration-testing-public-key]] -==== Local Public Key - -You can use the same approach as described in the xref:security-openid-connect.adoc#integration-testing[OpenID Connect Bearer Token Integration testing] `Local Public Key` section but only change the `application.properties` to use MP JWT configuration properties instead: - -[source, 
properties] ---- -mp.jwt.verify.publickey=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAlivFI8qB4D0y2jy0CfEqFyy46R0o7S8TKpsx5xbHKoU1VWg6QkQm+ntyIv1p4kE1sPEQO73+HY8+Bzs75XwRTYL1BmR1w8J5hmjVWjc6R2BTBGAYRPFRhor3kpM6ni2SPmNNhurEAHw7TaqszP5eUF/F9+KEBWkwVta+PZ37bwqSE4sCb1soZFrVz/UT/LF4tYpuVYt3YbqToZ3pZOZ9AX2o1GCG3xwOjkc4x0W7ezbQZdC9iftPxVHR8irOijJRRjcPDtA6vPKpzLl6CyYnsIYPd99ltwxTHjr3npfv/3Lw50bAkbT4HeLFxTx4flEoZLKO/g0bAoV2uqBhkA9xnQIDAQAB -# set it to the issuer value which is used to generate the tokens -mp.jwt.verify.issuer=${keycloak.url}/realms/quarkus - -# required to sign the tokens -smallrye.jwt.sign.key.location=privateKey.pem ---- - -[[integration-testing-security-annotation]] -==== TestSecurity annotation - -Add the following dependency: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-test-security-jwt</artifactId> -    <scope>test</scope> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -testImplementation("io.quarkus:quarkus-test-security-jwt") ---- - -and write test code like this: - -[source, java] ---- -import static org.hamcrest.Matchers.is; -import org.junit.jupiter.api.Test; -import io.quarkus.test.common.http.TestHTTPEndpoint; -import io.quarkus.test.junit.QuarkusTest; -import io.quarkus.test.security.TestSecurity; -import io.quarkus.test.security.jwt.Claim; -import io.quarkus.test.security.jwt.JwtSecurity; -import io.restassured.RestAssured; - -@QuarkusTest -@TestHTTPEndpoint(ProtectedResource.class) -public class TestSecurityAuthTest { - - @Test - @TestSecurity(user = "userJwt", roles = "viewer") - public void testJwt() { - RestAssured.when().get("test-security-jwt").then() - .body(is("userJwt:viewer")); - } - - @Test - @TestSecurity(user = "userJwt", roles = "viewer") - @JwtSecurity(claims = { - @Claim(key = "email", value = "user@gmail.com") - }) - public void testJwtWithClaims() { - 
RestAssured.when().get("test-security-jwt-claims").then() - .body(is("userJwt:viewer:user@gmail.com")); - } - -} ---- - -where the `ProtectedResource` class may look like this: - -[source, java] ---- -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import org.eclipse.microprofile.jwt.JsonWebToken; -import io.quarkus.security.Authenticated; - -@Path("/web-app") -@Authenticated -public class ProtectedResource { - - @Inject - JsonWebToken accessToken; - - @GET - @Path("test-security-jwt") - public String testSecurityOidc() { - return accessToken.getName() + ":" + accessToken.getGroups().iterator().next(); - } - - @GET - @Path("test-security-jwt-claims") - public String testSecurityOidcUserInfoMetadata() { - return accessToken.getName() + ":" + accessToken.getGroups().iterator().next() - + ":" + accessToken.getClaim("email"); - } -} ---- - -Note that the `@TestSecurity` annotation must always be used; its `user` property is returned as `JsonWebToken.getName()` and its `roles` property as `JsonWebToken.getGroups()`. -The `@JwtSecurity` annotation is optional and can be used to set additional token claims. - -=== How to check the errors in the logs - -Please enable `io.quarkus.smallrye.jwt.runtime.auth.MpJwtValidator` `TRACE` level logging to see more details about the token verification or decryption errors: - -[source, properties] ---- -quarkus.log.category."io.quarkus.smallrye.jwt.runtime.auth.MpJwtValidator".level=TRACE -quarkus.log.category."io.quarkus.smallrye.jwt.runtime.auth.MpJwtValidator".min-level=TRACE ---- - -=== Proactive Authentication - -If you'd like to skip the token verification when the public endpoint methods are invoked, then please disable the xref:security-built-in-authentication.adoc#proactive-authentication[proactive authentication]. - -Note that you can't access the injected `JsonWebToken` in the public methods if the token verification has not been done.
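Assuming the standard Quarkus HTTP security property, proactive authentication can be switched off as follows:

[source,properties]
----
quarkus.http.auth.proactive=false
----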
 - -[[add-smallrye-jwt]] -=== How to Add SmallRye JWT directly - -If you work with Quarkus extensions which do not support `HTTP` (for example, `Quarkus GRPC`) or provide their own extension-specific `HTTP` support conflicting with the one offered by `quarkus-smallrye-jwt` and `Vert.x HTTP` (for example, `Quarkus Amazon Lambda`), and you would like to <<jwt-parser,parse and verify the tokens with `JWTParser`>>, then please use `smallrye-jwt` directly instead of `quarkus-smallrye-jwt`. - -Add this dependency: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>io.smallrye</groupId> -    <artifactId>smallrye-jwt</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.smallrye:smallrye-jwt") ---- - -and update `application.properties` to get all the CDI producers provided by `smallrye-jwt` included as follows: - -[source, properties] ---- -quarkus.index-dependency.smallrye-jwt.group-id=io.smallrye -quarkus.index-dependency.smallrye-jwt.artifact-id=smallrye-jwt ---- - -[[configuration-reference]] -== Configuration Reference - -=== Quarkus configuration - -include::{generated-dir}/config/quarkus-smallrye-jwt.adoc[opts=optional, leveloffset=+1] - -=== MicroProfile JWT configuration - -[cols="1,1,3",options="header"] -|=== -|Property Name|Default|Description -|mp.jwt.verify.publickey.location|none|Config property allows for an external or internal location of Public Key to be specified. The value may be a relative path or a URL. If the value points to an HTTPS based JWK set then, for it to work in native mode, the `quarkus.ssl.native` property must also be set to `true`, see xref:native-and-ssl.adoc[Using SSL With Native Executables] for more details. -|mp.jwt.verify.publickey.algorithm|`RS256`|Signature algorithm. Set it to `ES256` to support the Elliptic Curve signature algorithm. -|mp.jwt.decrypt.key.location|none|Config property allows for an external or internal location of Private Decryption Key to be specified.
 -|mp.jwt.verify.issuer|none|Config property specifies the value of the `iss` (issuer) claim of the JWT that the server will accept as valid. -|mp.jwt.verify.audiences|none|Comma-separated list of the audiences that a token `aud` claim may contain. -|mp.jwt.token.header|`Authorization`|Set this property if another header such as `Cookie` is used to pass the token. -|mp.jwt.token.cookie|none|Name of the cookie containing a token. This property will be effective only if `mp.jwt.token.header` is set to `Cookie`. -|=== - -=== Additional SmallRye JWT configuration - -SmallRye JWT provides more properties which can be used to customize the token processing: - -[cols=" -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-oidc</artifactId> -</dependency> -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-keycloak-authorization</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.quarkus:quarkus-oidc") -implementation("io.quarkus:quarkus-keycloak-authorization") ---- - -Let's start by implementing the `/api/users/me` endpoint. 
 -As you can see from the source code below, it is just a regular JAX-RS resource: - -[source,java] ---- -package org.acme.security.keycloak.authorization; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -import org.jboss.resteasy.annotations.cache.NoCache; - -import io.quarkus.security.identity.SecurityIdentity; - -@Path("/api/users") -public class UsersResource { - - @Inject - SecurityIdentity identity; - - @GET - @Path("/me") - @NoCache - public User me() { - return new User(identity); - } - - public static class User { - - private final String userName; - - User(SecurityIdentity identity) { - this.userName = identity.getPrincipal().getName(); - } - - public String getUserName() { - return userName; - } - } -} ---- - -The source code for the `/api/admin` endpoint is also very simple: - -[source,java] ---- -package org.acme.security.keycloak.authorization; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import io.quarkus.security.Authenticated; - -@Path("/api/admin") -@Authenticated -public class AdminResource { - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String admin() { - return "granted"; - } -} ---- - -Note that we did not define any annotation such as `@RolesAllowed` to explicitly enforce access to a resource. -The extension maps the URIs of the protected resources you have defined in Keycloak and evaluates the permissions accordingly, granting or denying access. - -=== Configuring the application - -The OpenID Connect extension allows you to define the adapter configuration using the `application.properties` file, which should be located in the `src/main/resources` directory. 
 - -[source,properties] ---- -# OIDC Configuration -%prod.quarkus.oidc.auth-server-url=https://localhost:8543/auth/realms/quarkus -quarkus.oidc.client-id=backend-service -quarkus.oidc.credentials.secret=secret -quarkus.oidc.tls.verification=none - -# Enable Policy Enforcement -quarkus.keycloak.policy-enforcer.enable=true - -# Tell Dev Services for Keycloak to import the realm file -# This property is not effective when running the application in JVM or Native modes -quarkus.keycloak.devservices.realm-path=quarkus-realm.json ---- - -NOTE: Adding a `%prod.` profile prefix to `quarkus.oidc.auth-server-url` ensures that `Dev Services for Keycloak` will launch a container for you when the application is run in dev mode. See the <<keycloak-dev-mode>> section below for more information. - -NOTE: By default, applications using the `quarkus-oidc` extension are marked as a `service` type application (see `quarkus.oidc.application-type`). This extension also supports `web-app` type applications, but only if the access token returned as part of the authorization code grant response is marked as a source of roles: `quarkus.oidc.roles.source=accesstoken` (`web-app` type applications check ID token roles by default). - -== Starting and Configuring the Keycloak Server - -NOTE: Do not start the Keycloak server when you run the application in dev mode - `Dev Services for Keycloak` will launch a container. See the <<keycloak-dev-mode>> section below for more information. - -To start a Keycloak Server you can use Docker and just run the following command: - -[source,bash,subs=attributes+] ---- -docker run --name keycloak -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -p 8180:8080 -p 8543:8443 quay.io/keycloak/keycloak:{keycloak.version} ---- - -You should be able to access your Keycloak Server at http://localhost:8180/auth[localhost:8180/auth] or https://localhost:8543/auth[localhost:8543/auth]. - -Log in as the `admin` user to access the Keycloak Administration Console. 
 -Username should be `admin` and password `admin`. - -Import the {quickstarts-tree-url}/security-keycloak-authorization-quickstart/config/quarkus-realm.json[realm configuration file] to create a new realm. -For more details, see the Keycloak documentation about how to https://www.keycloak.org/docs/latest/server_admin/index.html#_create-realm[create a new realm]. - -After importing the realm you can see the resource permissions: - -image::keycloak-authorization-permissions.png[alt=Keycloak Authorization Permissions,role="center"] - -This explains why the endpoint has no `@RolesAllowed` annotations - the resource access permissions are set directly in Keycloak. - -[[keycloak-dev-mode]] -== Running the Application in Dev mode - -To run the application in dev mode, use: - -include::includes/devtools/dev.adoc[] - -xref:security-openid-connect-dev-services.adoc[Dev Services for Keycloak] will launch a Keycloak container and import a `quarkus-realm.json`. - -Open the xref:dev-ui.adoc[Dev UI] available at http://localhost:8080/q/dev[/q/dev] and click on a `Provider: Keycloak` link in an `OpenID Connect` `Dev UI` card. - -You will be asked to log into a `Single Page Application` provided by `OpenID Connect Dev UI`: - - * Login as `alice` (password: `alice`) who only has a `User Permission` to access the `/api/users/me` resource - ** accessing `/api/admin` will return `403` - ** accessing `/api/users/me` will return `200` - * Logout and login as `admin` (password: `admin`) who has both `Admin Permission` to access the `/api/admin` resource and `User Permission` to access the `/api/users/me` resource - ** accessing `/api/admin` will return `200` - ** accessing `/api/users/me` will return `200` - -== Running the Application in JVM mode - -When you're done playing with dev mode, you can run it as a standard Java application. 
 - -First compile it: - -include::includes/devtools/build.adoc[] - -Then run it: - -[source,bash] ---- -java -jar target/quarkus-app/quarkus-run.jar ---- - -== Running the Application in Native Mode - -This same demo can be compiled into native code: no modifications required. - -This implies that you no longer need to install a JVM on your production environment, as the runtime technology is included in the produced binary, and optimized to run with minimal resource overhead. - -Compilation will take a bit longer, so this step is disabled by default; let's build again by enabling the `native` profile: - -include::includes/devtools/build-native.adoc[] - -After getting a cup of coffee, you'll be able to run this binary directly: - -[source,bash] ---- -./target/security-keycloak-authorization-quickstart-runner ---- - -== Testing the Application - -See the <<keycloak-dev-mode>> section above about testing your application in dev mode. - -You can test the application launched in JVM or Native modes with `curl`. - -The application uses bearer token authorization, and the first thing to do is obtain an access token from the Keycloak Server in order to access the application resources: - -[source,bash] ---- -export access_token=$(\ - curl --insecure -X POST https://localhost:8543/auth/realms/quarkus/protocol/openid-connect/token \ - --user backend-service:secret \ - -H 'content-type: application/x-www-form-urlencoded' \ - -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' \ - ) ---- - -The example above obtains an access token for user `alice`. - -Any user is allowed to access the -`http://localhost:8080/api/users/me` endpoint, -which returns a JSON payload with details about the user. - -[source,bash] ---- -curl -v -X GET \ - http://localhost:8080/api/users/me \ - -H "Authorization: Bearer "$access_token ---- - -The `http://localhost:8080/api/admin` endpoint can only be accessed by users with the `admin` role. 
 -If you try to access this endpoint with the previously issued access token, you should get a `403` response from the server. - -[source,bash] ---- - curl -v -X GET \ - http://localhost:8080/api/admin \ - -H "Authorization: Bearer "$access_token ---- - -In order to access the admin endpoint you should obtain a token for the `admin` user: - -[source,bash] ---- -export access_token=$(\ - curl --insecure -X POST https://localhost:8543/auth/realms/quarkus/protocol/openid-connect/token \ - --user backend-service:secret \ - -H 'content-type: application/x-www-form-urlencoded' \ - -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' \ - ) ---- - -== Checking Permissions Programmatically - -In some cases, you may want to programmatically check whether a request is granted access to a protected resource. By -injecting a `SecurityIdentity` instance in your beans, you can check permissions as follows: - -[source,java] ---- -import io.quarkus.security.identity.SecurityIdentity; -import io.smallrye.mutiny.Uni; - -@Path("/api/protected") -public class ProtectedResource { - - @Inject - SecurityIdentity identity; - - @GET - public Uni<List<Permission>> get() { - return identity.checkPermission(new AuthPermission("{resource_name}")).onItem() - .transform(granted -> { - if (granted) { - return identity.getAttribute("permissions"); - } - throw new ForbiddenException(); - }); - } -} ---- - -== Injecting the Authorization Client - -In some cases, you may want to use the https://www.keycloak.org/docs/latest/authorization_services/#_service_client_api[Keycloak Authorization Client Java API] to perform -specific operations like managing resources and obtaining permissions directly from Keycloak. 
For that, you can inject an -`AuthzClient` instance into your beans as follows: - -[source,java] ---- -public class ProtectedResource { - @Inject - AuthzClient authzClient; -} ---- - -== Mapping Protected Resources - -By default, the extension fetches resources on demand from Keycloak, using their `URI`s to map the resources in your application that should be protected. - -If you want to disable this behavior and fetch resources during startup, you can use the following configuration: - -[source,properties] ---- -quarkus.keycloak.policy-enforcer.lazy-load-paths=false ---- - -Note that, depending on how many resources you have in Keycloak, the time taken to fetch them may impact your application startup time. - -== More About Configuring Protected Resources - -In the default configuration, Keycloak is responsible for managing the roles and deciding who can access which routes. - -To configure the protected routes using the `@RolesAllowed` annotation or the `application.properties` file, check the xref:security-openid-connect.adoc[Using OpenID Connect Adapter to Protect JAX-RS Applications] and xref:security-authorization.adoc[Security Authorization] guides. For more details, check the xref:security.adoc[Security guide]. - -== Access to Public Resources - -If you'd like to access a public resource without `quarkus-keycloak-authorization` trying to apply its policies to it, then you need to create a `permit` HTTP Policy configuration in `application.properties` as documented in the xref:security-authorization.adoc[Security Authorization] guide. - -Disabling a policy check using a Keycloak Authorization Policy such as: - -[source,properties] ---- -quarkus.keycloak.policy-enforcer.paths.1.path=/api/public -quarkus.keycloak.policy-enforcer.paths.1.enforcement-mode=DISABLED ---- - -is no longer required. 
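The `permit` HTTP policy mentioned above could be sketched as follows, using the built-in Quarkus HTTP permission properties (the path and permission name are only an example):

[source,properties]
----
# allow anonymous access to this path without policy enforcement
quarkus.http.auth.permission.public.paths=/api/public
quarkus.http.auth.permission.public.policy=permit
----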
 - -If you'd like to block anonymous users from accessing a public resource, then you can create an enforcing Keycloak Authorization Policy: - -[source,properties] ---- -quarkus.keycloak.policy-enforcer.paths.1.path=/api/public-enforcing -quarkus.keycloak.policy-enforcer.paths.1.enforcement-mode=ENFORCING ---- - -Note that only the default tenant configuration applies when anonymous access to a public resource needs to be controlled. - -== Multi-Tenancy - -It is possible to configure multiple policy enforcer configurations, one per tenant, similarly to how it can be done for xref:security-openid-connect-multitenancy.adoc[Multi-Tenant OpenID Connect Service Applications]. - -For example: - -[source,properties] ---- -quarkus.keycloak.policy-enforcer.enable=true - -# Default Tenant -quarkus.oidc.auth-server-url=${keycloak.url}/realms/quarkus -quarkus.oidc.client-id=quarkus-app -quarkus.oidc.credentials.secret=secret - -quarkus.keycloak.policy-enforcer.enforcement-mode=PERMISSIVE -quarkus.keycloak.policy-enforcer.paths.1.name=Permission Resource -quarkus.keycloak.policy-enforcer.paths.1.path=/api/permission -quarkus.keycloak.policy-enforcer.paths.1.claim-information-point.claims.static-claim=static-claim - -# Service Tenant - -quarkus.oidc.service-tenant.auth-server-url=${keycloak.url}/realms/quarkus -quarkus.oidc.service-tenant.client-id=quarkus-app -quarkus.oidc.service-tenant.credentials.secret=secret - -quarkus.keycloak.service-tenant.policy-enforcer.enforcement-mode=PERMISSIVE -quarkus.keycloak.service-tenant.policy-enforcer.paths.1.name=Permission Resource Service -quarkus.keycloak.service-tenant.policy-enforcer.paths.1.path=/api/permission -quarkus.keycloak.service-tenant.policy-enforcer.paths.1.claim-information-point.claims.static-claim=static-claim - - -# WebApp Tenant - -quarkus.oidc.webapp-tenant.auth-server-url=${keycloak.url}/realms/quarkus -quarkus.oidc.webapp-tenant.client-id=quarkus-app -quarkus.oidc.webapp-tenant.credentials.secret=secret 
 -quarkus.oidc.webapp-tenant.application-type=web-app -quarkus.oidc.webapp-tenant.roles.source=accesstoken - -quarkus.keycloak.webapp-tenant.policy-enforcer.enforcement-mode=PERMISSIVE -quarkus.keycloak.webapp-tenant.policy-enforcer.paths.1.name=Permission Resource WebApp -quarkus.keycloak.webapp-tenant.policy-enforcer.paths.1.path=/api/permission -quarkus.keycloak.webapp-tenant.policy-enforcer.paths.1.claim-information-point.claims.static-claim=static-claim ---- - -== Configuration Reference - -The configuration is based on the official https://www.keycloak.org/docs/latest/authorization_services/index.html#_enforcer_filter[Keycloak Policy Enforcer Configuration]. If you are looking for more details about the different configuration options, please take a look at that documentation. - -include::{generated-dir}/config/quarkus-keycloak-keycloak-policy-enforcer-config.adoc[opts=optional] - -== References - -* https://www.keycloak.org/documentation.html[Keycloak Documentation] -* https://www.keycloak.org/docs/latest/authorization_services/index.html[Keycloak Authorization Services Documentation] -* https://openid.net/connect/[OpenID Connect] -* https://tools.ietf.org/html/rfc7519[JSON Web Token] -* xref:security.adoc[Quarkus Security] diff --git a/_versions/2.7/guides/security-ldap.adoc b/_versions/2.7/guides/security-ldap.adoc deleted file mode 100644 index c605a17d074..00000000000 --- a/_versions/2.7/guides/security-ldap.adoc +++ /dev/null @@ -1,252 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Security with an LDAP Realm - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use an LDAP server to authenticate and authorize your user identities. 
- - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this example, we build a very simple microservice which offers three endpoints: - -* `/api/public` -* `/api/users/me` -* `/api/admin` - -The `/api/public` endpoint can be accessed anonymously. -The `/api/admin` endpoint is protected with RBAC (Role-Based Access Control) where only users granted with the `adminRole` role can access. At this endpoint, we use the `@RolesAllowed` annotation to declaratively enforce the access constraint. -The `/api/users/me` endpoint is also protected with RBAC (Role-Based Access Control) where only users granted with the `standardRole` role can access. As a response, it returns a JSON document with details about the user. - -WARNING: By default Quarkus will restrict the use of JNDI within an application, as a precaution to try and mitigate any future vulnerabilities similar to log4shell. Because LDAP based auth requires JNDI -this protection will be automatically disabled. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `security-ldap-quickstart` {quickstarts-tree-url}/security-ldap-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: security-ldap-quickstart -:create-app-extensions: elytron-security-ldap,resteasy -include::includes/devtools/create-app.adoc[] - -This command generates a project, importing the `elytron-security-ldap` extension -which is a `wildfly-elytron-realm-ldap` adapter for Quarkus applications. 
 - -If you already have your Quarkus project configured, you can add the `elytron-security-ldap` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: elytron-security-ldap -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-elytron-security-ldap</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.quarkus:quarkus-elytron-security-ldap") ---- - -== Writing the application - -Let's start by implementing the `/api/public` endpoint. As you can see from the source code below, it is just a regular JAX-RS resource: - -[source,java] ---- -package org.acme.elytron.security.ldap; - -import javax.annotation.security.PermitAll; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/api/public") -public class PublicResource { - - @GET - @PermitAll - @Produces(MediaType.TEXT_PLAIN) - public String publicResource() { - return "public"; - } -} ---- - -The source code for the `/api/admin` endpoint is also very simple. The main difference here is that we are using a `@RolesAllowed` annotation to make sure that only users granted the `adminRole` role can access the endpoint: - -[source,java] ---- -package org.acme.elytron.security.ldap; - -import javax.annotation.security.RolesAllowed; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/api/admin") -public class AdminResource { - - @GET - @RolesAllowed("adminRole") - @Produces(MediaType.TEXT_PLAIN) - public String adminResource() { - return "admin"; - } -} ---- - -Finally, let's consider the `/api/users/me` endpoint. 
As you can see from the source code below, we are trusting only users with the `standardRole` role. -We are using `SecurityContext` to get access to the current authenticated Principal and we return the user's name. This information is loaded from the LDAP server. - -[source,java] ----- -package org.acme.elytron.security.ldap; - -import javax.annotation.security.RolesAllowed; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.core.Context; -import javax.ws.rs.core.SecurityContext; - -@Path("/api/users") -public class UserResource { - - @GET - @RolesAllowed("standardRole") - @Path("/me") - public String me(@Context SecurityContext securityContext) { - return securityContext.getUserPrincipal().getName(); - } -} ----- - -=== Configuring the Application - -[source,properties] ----- -quarkus.security.ldap.enabled=true - -quarkus.security.ldap.dir-context.principal=uid=tool,ou=accounts,o=YourCompany,c=DE -quarkus.security.ldap.dir-context.url=ldaps://ldap.server.local -quarkus.security.ldap.dir-context.password=PASSWORD - -quarkus.security.ldap.identity-mapping.rdn-identifier=uid -quarkus.security.ldap.identity-mapping.search-base-dn=ou=users,ou=tool,o=YourCompany,c=DE - -quarkus.security.ldap.identity-mapping.attribute-mappings."0".from=cn -quarkus.security.ldap.identity-mapping.attribute-mappings."0".to=groups -quarkus.security.ldap.identity-mapping.attribute-mappings."0".filter=(member=uid={0}) -quarkus.security.ldap.identity-mapping.attribute-mappings."0".filter-base-dn=ou=roles,ou=tool,o=YourCompany,c=DE ----- - -`{0}` is substituted by the `uid`, whereas `{1}` will be substituted by the `dn` of the user entry. - -The `elytron-security-ldap` extension requires a dir-context and an identity-mapping with at least one attribute-mapping to authenticate the user and its identity. - -== Testing the Application - -The application is now protected and the identities are provided by our LDAP server. 
-
-Let's start the application in dev mode:
-
-include::includes/devtools/dev.adoc[]
-
-The very first thing to check is to ensure that anonymous access works.
-
-[source,shell]
----
-$ curl -i -X GET http://localhost:8080/api/public
-HTTP/1.1 200 OK
-Content-Length: 6
-Content-Type: text/plain;charset=UTF-8
-
-public%
----
-
-Now, let's try to hit a protected resource anonymously.
-
-[source,shell]
----
-$ curl -i -X GET http://localhost:8080/api/admin
-HTTP/1.1 401 Unauthorized
-Content-Length: 14
-Content-Type: text/html;charset=UTF-8
-
-Not authorized%
----
-
-So far so good, now let's try with an allowed user.
-
-[source,shell]
----
-$ curl -i -X GET -u adminUser:adminUserPassword http://localhost:8080/api/admin
-HTTP/1.1 200 OK
-Content-Length: 5
-Content-Type: text/plain;charset=UTF-8
-
-admin%
----
-By providing the `adminUser:adminUserPassword` credentials, the extension authenticated the user and loaded their roles.
-The `adminUser` user is authorized to access the protected resources.
-
-The user `adminUser` should be forbidden to access a resource protected with `@RolesAllowed("standardRole")` because it doesn't have this role.
-
-[source,shell]
----
-$ curl -i -X GET -u adminUser:adminUserPassword http://localhost:8080/api/users/me
-HTTP/1.1 403 Forbidden
-Content-Length: 34
-Content-Type: text/html;charset=UTF-8
-
-Forbidden%
----
-
-Finally, using the user `standardUser` works and the security context contains the principal details (username for instance).
-
-
-[source,shell]
----
-$ curl -i -X GET -u standardUser:standardUserPassword http://localhost:8080/api/users/me
-HTTP/1.1 200 OK
-Content-Length: 4
-Content-Type: text/plain;charset=UTF-8
-
-user%
----
-
-[[configuration-reference]]
-== Configuration Reference
-
-include::{generated-dir}/config/quarkus-elytron-security-ldap.adoc[opts=optional, leveloffset=+1]
-
-== References
-
-* https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol[LDAP]
-* xref:security.adoc[Quarkus Security]
diff --git a/_versions/2.7/guides/security-oauth2.adoc b/_versions/2.7/guides/security-oauth2.adoc
deleted file mode 100644
index 2e7e9543930..00000000000
--- a/_versions/2.7/guides/security-oauth2.adoc
+++ /dev/null
@@ -1,459 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using OAuth2 RBAC
-
-include::./attributes.adoc[]
-:extension-name: Elytron Security OAuth2
-:extension-status: preview
-
-This guide explains how your Quarkus application can utilize OAuth2 tokens to provide secured access to its JAX-RS endpoints.
-
-OAuth2 is an authorization framework that enables applications to obtain access to an HTTP resource on behalf of a user.
-It can be used to implement an application authentication mechanism based on tokens, by delegating user authentication to an external server (the authentication server), which provides a token for the authentication context.
-
-This extension provides lightweight support for using opaque Bearer tokens and validating them by calling an introspection endpoint.
-
-If the OAuth2 authentication server provides JWT Bearer tokens, then you should consider using either the xref:security-openid-connect.adoc[OpenID Connect] or xref:security-jwt.adoc[SmallRye JWT] extension instead.
-
-The OpenID Connect extension must be used if the Quarkus application needs to authenticate users using the OIDC Authorization Code Flow; please read the xref:security-openid-connect-web-authentication.adoc[Using OpenID Connect to Protect Web Applications] guide for more information.
-
-include::./status-include.adoc[]
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone https://github.com/quarkusio/quarkus-quickstarts.git`, or download an archive.
-
-The solution is located in the `security-oauth2-quickstart` {quickstarts-tree-url}/security-oauth2-quickstart[directory].
-It contains a very simple UI to use the JAX-RS resources created here, too.
-
-== Creating the Maven project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: security-oauth2-quickstart
-:create-app-extensions: resteasy,resteasy-jackson,security-oauth2
-include::includes/devtools/create-app.adoc[]
-
-This command generates a project and imports the `elytron-security-oauth2` extension, which includes the OAuth2 opaque token support.
-
-If you don't want to use the Maven plugin, you can just include the dependency in your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-elytron-security-oauth2</artifactId>
-</dependency>
----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-implementation("io.quarkus:quarkus-elytron-security-oauth2")
----
-
-=== Examine the JAX-RS resource
-
-Create the `src/main/java/org/acme/security/oauth2/TokenSecuredResource.java` file with the following content:
-
-[source,java]
----
-package org.acme.security.oauth2;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-@Path("/secured")
-public class TokenSecuredResource {
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public String hello() {
-        return "hello";
-    }
-}
----
-
-This is a basic REST endpoint that does not have any of the {extension-name} specific features, so let's add some.
-
-We will use the JSR 250 common security annotations; they are described in the xref:security.adoc[Using Security] guide.
-
-[source,java]
----
-package org.acme.security.oauth2;
-
-import java.security.Principal;
-
-import javax.annotation.security.PermitAll;
-import javax.enterprise.context.ApplicationScoped;
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.Context;
-import javax.ws.rs.core.MediaType;
-import javax.ws.rs.core.SecurityContext;
-
-@Path("/secured")
-@ApplicationScoped
-public class TokenSecuredResource {
-
-    @GET()
-    @Path("permit-all")
-    @PermitAll // <1>
-    @Produces(MediaType.TEXT_PLAIN)
-    public String hello(@Context SecurityContext ctx) { // <2>
-        Principal caller = ctx.getUserPrincipal(); // <3>
-        String name = caller == null ?
"anonymous" : caller.getName(); - String helloReply = String.format("hello + %s, isSecure: %s, authScheme: %s", name, ctx.isSecure(), ctx.getAuthenticationScheme()); - return helloReply; // <4> - } -} ----- -<1> `@PermitAll` indicates that the given endpoint is accessible by any caller, authenticated or not. -<2> Here we inject the JAX-RS `SecurityContext` to inspect the security state of the call. -<3> Here we obtain the current request user/caller `Principal`. For an unsecured call this will be null, so we build the user name by checking `caller` against null. -<4> The reply we build up makes use of the caller name, the `isSecure()` and `getAuthenticationScheme()` states of the request `SecurityContext`. - - -=== Setting up application.properties - -You need to configure your application with the following minimal properties: - -[source, properties] ----- -quarkus.oauth2.client-id=client_id -quarkus.oauth2.client-secret=secret -quarkus.oauth2.introspection-url=http://oauth-server/introspect ----- - -You need to specify the introspection URL of your authentication server and the `client-id` / `client-secret` that your application will use to authenticate itself to the authentication server. + -The extension will then use this information to validate the token and recover the information associate with it. - -For all configuration properties, see the <> section at the end of this guide. - -== Run the application - -Now we are ready to run our application. 
Use:
-
-include::includes/devtools/dev.adoc[]
-
-Now that the REST endpoint is running, we can access it using a command line tool like curl:
-
-[source,shell]
----
-$ curl http://127.0.0.1:8080/secured/permit-all; echo
-hello + anonymous, isSecure: false, authScheme: null
----
-
-We have not provided any token in our request, so we would not expect that there is any security state seen by the endpoint, and the response is consistent with that:
-
-* user name is anonymous
-* `isSecure` is false as https is not used
-* `authScheme` is null
-
-=== Securing the endpoint
-
-So now let's actually secure something. Take a look at the new endpoint method `helloRolesAllowed` in the following:
-
-[source,java]
----
-package org.acme.security.oauth2;
-
-import java.security.Principal;
-
-import javax.annotation.security.PermitAll;
-import javax.annotation.security.RolesAllowed;
-import javax.enterprise.context.ApplicationScoped;
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.Context;
-import javax.ws.rs.core.MediaType;
-import javax.ws.rs.core.SecurityContext;
-
-@Path("/secured")
-@ApplicationScoped
-public class TokenSecuredResource {
-
-    @GET()
-    @Path("permit-all")
-    @PermitAll
-    @Produces(MediaType.TEXT_PLAIN)
-    public String hello(@Context SecurityContext ctx) {
-        Principal caller = ctx.getUserPrincipal();
-        String name = caller == null ? "anonymous" : caller.getName();
-        String helloReply = String.format("hello + %s, isSecure: %s, authScheme: %s", name, ctx.isSecure(), ctx.getAuthenticationScheme());
-        return helloReply;
-    }
-
-    @GET()
-    @Path("roles-allowed") // <1>
-    @RolesAllowed({"Echoer", "Subscriber"}) // <2>
-    @Produces(MediaType.TEXT_PLAIN)
-    public String helloRolesAllowed(@Context SecurityContext ctx) {
-        Principal caller = ctx.getUserPrincipal();
-        String name = caller == null ?
"anonymous" : caller.getName(); - String helloReply = String.format("hello + %s, isSecure: %s, authScheme: %s", name, ctx.isSecure(), ctx.getAuthenticationScheme()); - return helloReply; - } -} ----- -<1> This new endpoint will be located at `/secured/roles-allowed` -<2> `@RolesAllowed` indicates that the given endpoint is accessible by a caller if they have either a "Echoer" or "Subscriber" role assigned. - -After you make this addition to your `TokenSecuredResource`, try `curl -v http://127.0.0.1:8080/secured/roles-allowed; echo` to attempt to access the new endpoint. Your output should be: - -[source,shell] ----- -$ curl -v http://127.0.0.1:8080/secured/roles-allowed; echo -* Trying 127.0.0.1... -* TCP_NODELAY set -* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0) -> GET /secured/roles-allowed HTTP/1.1 -> Host: 127.0.0.1:8080 -> User-Agent: curl/7.54.0 -> Accept: */* -> -< HTTP/1.1 401 Unauthorized -< Connection: keep-alive -< Content-Type: text/html;charset=UTF-8 -< Content-Length: 14 -< Date: Sun, 03 Mar 2019 16:32:34 GMT -< -* Connection #0 to host 127.0.0.1 left intact -Not authorized ----- - -Excellent, we have not provided any OAuth2 token in the request, so we should not be able to access the endpoint, and we were not. Instead we received an HTTP 401 Unauthorized error. We need to obtain and pass in a valid OAuth2 token to access that endpoint. There are two steps to this, 1) configuring our {extension-name} extension with information on how to validate the token, and 2) generating a matching token with the appropriate claims. - -=== Generating a token - -You need to obtain the token from a standard OAuth2 authentication server (https://www.keycloak.org/[Keycloak] for example) using the token endpoint. 
-
-You can find below a curl example of such a call for a `client_credentials` flow:
-
-[source,bash]
----
-curl -X POST "http://oauth-server/token?grant_type=client_credentials" \
--H "Accept: application/json" -H "Authorization: Basic Y2xpZW50X2lkOmNsaWVudF9zZWNyZXQ="
----
-
-It should respond with something like this:
-
-[source,json]
----
-{"access_token":"60acf56d-9daf-49ba-b3be-7a423d9c7288","token_type":"bearer","expires_in":1799,"scope":"READER"}
----
-
-
-=== Finally, make a secured request to /secured/roles-allowed
-Now let's use this to make a secured request to the `/secured/roles-allowed` endpoint:
-
-[source,shell]
----
-$ curl -H "Authorization: Bearer 60acf56d-9daf-49ba-b3be-7a423d9c7288" http://127.0.0.1:8080/secured/roles-allowed; echo
-hello + client_id isSecure: false, authScheme: OAuth2
----
-
-Success! We now have:
-
-* a non-anonymous caller name of client_id
-* an authentication scheme of OAuth2
-
-== Roles mapping
-
-Roles are mapped from one of the claims of the introspection endpoint response. By default, it's the `scope` claim. Roles are obtained by splitting the claim with a space separator. If the claim is an array, no splitting is done; the roles are obtained directly from the array.
-
-You can customize the name of the claim to use for the roles with the `quarkus.oauth2.role-claim` property.
-
-== Package and run the application
-
-As usual, the application can be packaged using:
-
-include::includes/devtools/build.adoc[]
-
-And executed using `java -jar target/quarkus-app/quarkus-run.jar`:
-
-[source,shell,subs=attributes+]
----
-[INFO] Scanning for projects...
-...
-$ java -jar target/quarkus-app/quarkus-run.jar
-2019-03-28 14:27:48,839 INFO  [io.quarkus] (main) Quarkus {quarkus-version} started in 0.796s.
Listening on: http://[::]:8080 -2019-03-28 14:27:48,841 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jackson, security, security-oauth2] ----- - -You can also generate the native executable with: - -include::includes/devtools/build-native.adoc[] - -[source,shell] ----- -[INFO] Scanning for projects... -... -[security-oauth2-quickstart-runner:25602] universe: 493.17 ms -[security-oauth2-quickstart-runner:25602] (parse): 660.41 ms -[security-oauth2-quickstart-runner:25602] (inline): 1,431.10 ms -[security-oauth2-quickstart-runner:25602] (compile): 7,301.78 ms -[security-oauth2-quickstart-runner:25602] compile: 10,542.16 ms -[security-oauth2-quickstart-runner:25602] image: 2,797.62 ms -[security-oauth2-quickstart-runner:25602] write: 988.24 ms -[security-oauth2-quickstart-runner:25602] [total]: 43,778.16 ms -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ -[INFO] Total time: 51.500 s -[INFO] Finished at: 2019-06-28T14:30:56-07:00 -[INFO] ------------------------------------------------------------------------ - -$ ./target/security-oauth2-quickstart-runner -2019-03-28 14:31:37,315 INFO [io.quarkus] (main) Quarkus 0.20.0 started in 0.006s. Listening on: http://[::]:8080 -2019-03-28 14:31:37,316 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jackson, security, security-oauth2] ----- - -[[integration-testing]] -== Integration testing - -If you don't want to use a real OAuth2 authorization server for your integration tests, you can use the -xref:security-properties.adoc[Properties based security] extension for your test, or mock an authorization server using Wiremock. - -First of all, Wiremock needs to be added as a test dependency. 
For a Maven project that would happen like so:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
----
-<dependency>
-    <groupId>com.github.tomakehurst</groupId>
-    <artifactId>wiremock-jre8</artifactId>
-    <scope>test</scope>
-    <version>${wiremock.version}</version> <1>
-</dependency>
----
-<1> Use a proper Wiremock version. All available versions can be found link:https://search.maven.org/artifact/com.github.tomakehurst/wiremock-jre8[here].
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
----
-testImplementation("com.github.tomakehurst:wiremock-jre8:${wiremock.version}") <1>
----
-<1> Use a proper Wiremock version. All available versions can be found link:https://search.maven.org/artifact/com.github.tomakehurst/wiremock-jre8[here].
-
-In Quarkus tests, when some service needs to be started before the Quarkus tests are run, we utilize the `@io.quarkus.test.common.QuarkusTestResource`
-annotation to specify a `io.quarkus.test.common.QuarkusTestResourceLifecycleManager` which can start the service and supply configuration
-values that Quarkus will use.
-
-[NOTE]
-====
-For more details about `@QuarkusTestResource` refer to xref:getting-started-testing.adoc#quarkus-test-resource[this part of the documentation].
-====
-
-Let's create an implementation of `QuarkusTestResourceLifecycleManager` called `MockAuthorizationServerTestResource` like so:
-
-[source,java]
----
-import com.github.tomakehurst.wiremock.WireMockServer;
-import com.github.tomakehurst.wiremock.client.WireMock;
-import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
-
-import java.util.Collections;
-import java.util.Map;
-
-public class MockAuthorizationServerTestResource implements QuarkusTestResourceLifecycleManager { // <1>
-
-    private WireMockServer wireMockServer;
-
-    @Override
-    public Map<String, String> start() {
-        wireMockServer = new WireMockServer();
-        wireMockServer.start(); // <2>
-
-        // define the mock for the introspect endpoint
-        WireMock.stubFor(WireMock.post("/introspect").willReturn(WireMock.aResponse() // <3>
-                .withBody(
-                        "{\"active\":true,\"scope\":\"Echoer\",\"username\":null,\"iat\":1562315654,\"exp\":1562317454,\"expires_in\":1458,\"client_id\":\"my_client_id\"}")));
-
-        return Collections.singletonMap("quarkus.oauth2.introspection-url", wireMockServer.baseUrl() + "/introspect"); // <4>
-    }
-
-    @Override
-    public void stop() {
-        if (null != wireMockServer) {
-            wireMockServer.stop(); // <5>
-        }
-    }
-}
----
-
-<1> The `start` method is invoked by Quarkus before any test is run and returns a `Map` of configuration properties that apply during the test execution.
-<2> Launch Wiremock.
-<3> Configure Wiremock to stub the calls to `/introspect` by returning an OAuth2 introspect response. You need to customize this line to return what's needed for your application (at least the `scope` property, as roles are derived from the scope).
-<4> As the `start` method returns configuration that applies for tests, we set the `quarkus.oauth2.introspection-url` property that controls the URL of the introspect endpoint used by the OAuth2 extension.
-<5> When all tests have finished, shut down Wiremock.
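The stubbed introspection response above returns `"scope":"Echoer"`, and as described in the Roles mapping section, roles are derived by splitting the space-separated `scope` claim — which is why the `Echoer` role required by `/secured/roles-allowed` is granted. A minimal sketch of that mapping (illustrative only; the extension does this internally):

```java
import java.util.Arrays;
import java.util.List;

public class ScopeToRolesDemo {
    public static void main(String[] args) {
        // A space-separated scope claim, as returned by an introspection endpoint
        String scopeClaim = "Echoer Subscriber";

        // Roles are obtained by splitting the claim on spaces
        List<String> roles = Arrays.asList(scopeClaim.split(" "));

        System.out.println(roles); // [Echoer, Subscriber]
    }
}
```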
-
-
-Your test class needs to be annotated with `@QuarkusTestResource(MockAuthorizationServerTestResource.class)` to use this `QuarkusTestResourceLifecycleManager`.
-
-Below is an example of a test that uses the `MockAuthorizationServerTestResource`.
-
-[source,java]
----
-@QuarkusTest
-@QuarkusTestResource(MockAuthorizationServerTestResource.class) // <1>
-class TokenSecuredResourceTest {
-    // use whatever token you want as the mock OAuth server will accept all tokens
-    private static final String BEARER_TOKEN = "337aab0f-b547-489b-9dbd-a54dc7bdf20d"; // <2>
-
-    @Test
-    void testPermitAll() {
-        RestAssured.given()
-                .when()
-                .header("Authorization", "Bearer: " + BEARER_TOKEN) // <3>
-                .get("/secured/permit-all")
-                .then()
-                .statusCode(200)
-                .body(containsString("hello"));
-    }
-
-    @Test
-    void testRolesAllowed() {
-        RestAssured.given()
-                .when()
-                .header("Authorization", "Bearer: " + BEARER_TOKEN)
-                .get("/secured/roles-allowed")
-                .then()
-                .statusCode(200)
-                .body(containsString("hello"));
-    }
-}
----
-
-<1> Use the previously created `MockAuthorizationServerTestResource` as a Quarkus test resource.
-<2> Define whatever token you want; it will not be validated by the OAuth2 mock authorization server.
-<3> Use this token inside the `Authorization` header to trigger OAuth2 authentication.
-
-
-[WARNING]
-====
-`@QuarkusTestResource` applies to all tests, not just `TokenSecuredResourceTest`.
-====
-
-
-== References
-
-* https://tools.ietf.org/html/rfc6749[OAuth2]
-* xref:security.adoc[Quarkus Security]
-
-[[config-reference]]
-== Configuration Reference
-
-include::{generated-dir}/config/quarkus-elytron-security-oauth2.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/security-openid-connect-client.adoc b/_versions/2.7/guides/security-openid-connect-client.adoc
deleted file mode 100644
index e4cdf9d1434..00000000000
--- a/_versions/2.7/guides/security-openid-connect-client.adoc
+++ /dev/null
@@ -1,956 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using OpenID Connect (OIDC) and OAuth2 Client and Filters to manage access tokens
-
-include::./attributes.adoc[]
-:toc:
-
-This guide explains how to use:
-
- - `quarkus-oidc-client`, `quarkus-oidc-client-reactive-filter` and `quarkus-oidc-client-filter` extensions to acquire and refresh access tokens from OpenID Connect and OAuth 2.0 compliant Authorization Servers such as https://www.keycloak.org[Keycloak]
 - `quarkus-oidc-token-propagation` and `quarkus-oidc-token-propagation-reactive` extensions to propagate the current `Bearer` or `Authorization Code Flow` access tokens
-
-The access tokens managed by these extensions can be used as HTTP Authorization Bearer tokens to access the remote services.
-
-== OidcClient
-
-Add the following dependency:
-
-[source,xml]
----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-oidc-client</artifactId>
-</dependency>
----
-
-The `quarkus-oidc-client` extension provides a reactive `io.quarkus.oidc.client.OidcClient` which can be used to acquire and refresh tokens using SmallRye Mutiny `Uni` and `Vert.x WebClient`.
-
-`OidcClient` is initialized at build time with the IDP token endpoint URL, which can be auto-discovered or manually configured, and uses this endpoint to acquire access tokens using token grants such as `client_credentials` or `password`, and to refresh the tokens using a `refresh_token` grant.
-
-=== Token Endpoint Configuration
-
-By default, the token endpoint address is discovered by adding a `/.well-known/openid-configuration` path to the configured `quarkus.oidc-client.auth-server-url`.
-
-For example, given this Keycloak URL:
-
-[source, properties]
----
-quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus
----
-
-`OidcClient` will discover that the token endpoint URL is `http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens`.
-
-Alternatively, if the discovery endpoint is not available, or you would like to save on the discovery endpoint round trip, you can disable the discovery and configure the token endpoint address with a relative path value, for example:
-
-[source, properties]
----
-quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus
-quarkus.oidc-client.discovery-enabled=false
-# Token endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens
-quarkus.oidc-client.token-path=/protocol/openid-connect/tokens
----
-
-A more compact way to configure the token endpoint URL without the discovery is to set `quarkus.oidc-client.token-path` to an absolute URL:
-
-[source, properties]
----
-quarkus.oidc-client.token-path=http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens
----
-
-Setting `quarkus.oidc-client.auth-server-url` and `quarkus.oidc-client.discovery-enabled` is not required in this case.
-
-=== Supported Token Grants
-
-The main token grants which `OidcClient` can use to acquire the tokens are the `client_credentials` (default) and `password` grants.
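The discovery convention described above amounts to appending the well-known path to the configured base URL. A plain-string sketch of that convention (illustrative only — `OidcClient` performs the discovery itself, and the helper name is made up):

```java
public class DiscoveryUrlDemo {
    // Append the OIDC discovery path, tolerating a trailing slash in the base URL
    static String discoveryUrl(String authServerUrl) {
        String base = authServerUrl.endsWith("/")
                ? authServerUrl.substring(0, authServerUrl.length() - 1)
                : authServerUrl;
        return base + "/.well-known/openid-configuration";
    }

    public static void main(String[] args) {
        System.out.println(discoveryUrl("http://localhost:8180/auth/realms/quarkus"));
    }
}
```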
-
-==== Client Credentials Grant
-
-Here is how `OidcClient` can be configured to use the `client_credentials` grant:
-
-[source,properties]
----
-quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/
-quarkus.oidc-client.client-id=quarkus-app
-quarkus.oidc-client.credentials.secret=secret
----
-
-The `client_credentials` grant allows you to set extra parameters on the token request via `quarkus.oidc-client.grant-options.client.<param-name>=<value>`. Here is how to set the intended token recipient via the `audience` parameter:
-
-[source,properties]
----
-quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/
-quarkus.oidc-client.client-id=quarkus-app
-quarkus.oidc-client.credentials.secret=secret
-# 'client' is a shortcut for `client_credentials`
-quarkus.oidc-client.grant.type=client
-quarkus.oidc-client.grant-options.client.audience=https://example.com/api
----
-
-==== Password Grant
-
-Here is how `OidcClient` can be configured to use the `password` grant:
-
-[source,properties]
----
-quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/
-quarkus.oidc-client.client-id=quarkus-app
-quarkus.oidc-client.credentials.secret=secret
-quarkus.oidc-client.grant.type=password
-quarkus.oidc-client.grant-options.password.username=alice
-quarkus.oidc-client.grant-options.password.password=alice
----
-
-It can be further customized using the `quarkus.oidc-client.grant-options.password` configuration prefix, similarly to how the client credentials grant can be customized.
-
-==== Other Grants
-
-`OidcClient` can also help with acquiring tokens using grants which require extra input parameters that cannot be captured in the configuration. These grants are `refresh token` (with an external refresh token), `token exchange` and `authorization code`.
-
-Using the `refresh_token` grant, which exchanges an out-of-band refresh token for a new set of tokens, is required if the existing refresh token has been posted to the current Quarkus endpoint in order for it to acquire an access token. In this case `OidcClient` needs to be configured as follows:
-
-[source,properties]
----
-quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/
-quarkus.oidc-client.client-id=quarkus-app
-quarkus.oidc-client.credentials.secret=secret
-quarkus.oidc-client.grant.type=refresh
----
-
-and then you can use the `OidcClient.refreshTokens` method with a provided refresh token to get the access token.
-
-Using the `token exchange` grant may be required if you are building a complex microservices application and would like to avoid the same `Bearer` token being propagated to and used by more than one service. Please see the Token Propagation section for more details.
-
-Using `OidcClient` to support the `authorization code` grant might be required if for some reason you cannot use the xref:security-openid-connect-web-authentication.adoc[Quarkus OpenID Connect extension] to support Authorization Code Flow. If there is a very good reason for you to implement Authorization Code Flow then you can configure `OidcClient` as follows:
-
-[source,properties]
----
-quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/
-quarkus.oidc-client.client-id=quarkus-app
-quarkus.oidc-client.credentials.secret=secret
-quarkus.oidc-client.grant.type=code
----
-
-and then you can use the `OidcClient.accessTokens` method, which accepts a Map of extra properties, and pass the current `code` and `redirect_uri` parameters to exchange the authorization code for the tokens.
-
-==== Grant scopes
-
-You may need to request that a specific set of scopes is associated with an issued access token.
-Use a dedicated `quarkus.oidc-client.scopes` list property, for example: `quarkus.oidc-client.scopes=email,phone`
-
-=== Use OidcClient directly
-
-One can use `OidcClient` directly as follows:
-
-[source,java]
----
-import javax.annotation.PostConstruct;
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import io.quarkus.oidc.client.OidcClient;
-import io.quarkus.oidc.client.Tokens;
-
-@Path("/service")
-public class OidcClientResource {
-
-    @Inject
-    OidcClient client;
-
-    volatile Tokens currentTokens;
-
-    @PostConstruct
-    public void init() {
-        currentTokens = client.getTokens().await().indefinitely();
-    }
-
-    @GET
-    public String getResponse() {
-
-        Tokens tokens = currentTokens;
-        if (tokens.isAccessTokenExpired()) {
-            // Add @Blocking method annotation if this code is used with Reactive RestClient
-            tokens = client.refreshTokens(tokens.getRefreshToken()).await().indefinitely();
-            currentTokens = tokens;
-        }
-        // Use tokens.getAccessToken() to configure MP RestClient Authorization header/etc
-        return tokens.getAccessToken();
-    }
-}
----
-
-=== Inject Tokens
-
-You can inject `Tokens`, which uses `OidcClient` internally. `Tokens` can be used to acquire the access tokens and refresh them if necessary:
-
-[source,java]
----
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import io.quarkus.oidc.client.Tokens;
-
-@Path("/service")
-public class OidcClientResource {
-
-    @Inject Tokens tokens;
-
-    @GET
-    public String getResponse() {
-        // Get the access token which may have been refreshed.
-        String accessToken = tokens.getAccessToken();
-        // Use the access token to configure MP RestClient Authorization header/etc
-        return accessToken;
-    }
-}
----
-
-=== Use OidcClients
-
-`io.quarkus.oidc.client.OidcClients` is a container of ``OidcClient``s - it includes a default `OidcClient` and named clients which can be configured like this:
-
-[source,properties]
----
-quarkus.oidc-client.client-enabled=false
-
-quarkus.oidc-client.jwt-secret.auth-server-url=http://localhost:8180/auth/realms/quarkus/
-quarkus.oidc-client.jwt-secret.client-id=quarkus-app
-quarkus.oidc-client.jwt-secret.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow
----
-
-Note that in this case the default client is disabled with the `client-enabled=false` property. The `jwt-secret` client can be accessed like this:
-
-[source,java]
----
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import io.quarkus.oidc.client.OidcClient;
-import io.quarkus.oidc.client.OidcClients;
-
-@Path("/clients")
-public class OidcClientResource {
-
-    @Inject
-    OidcClients clients;
-
-    @GET
-    public String getResponse() {
-        OidcClient client = clients.getClient("jwt-secret");
-        // use this client to get the token
-        return client.getTokens().await().indefinitely().getAccessToken();
-    }
-}
----
-
-[NOTE]
-====
-If you also use xref:security-openid-connect-multitenancy.adoc[OIDC multitenancy] and each OIDC tenant has its own associated `OidcClient` then you can use a Vert.x `RoutingContext` `tenantId` attribute, for example:
-
-[source,java]
----
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import io.quarkus.oidc.client.OidcClient;
-import io.quarkus.oidc.client.OidcClients;
-import io.vertx.ext.web.RoutingContext;
-
-@Path("/clients")
-public class OidcClientResource {
-
-    @Inject
-    OidcClients clients;
-    @Inject
-    RoutingContext context;
-
-    @GET
-    public String getResponse() {
-        String tenantId = context.get("tenantId");
-        // named OIDC tenant and client configurations use
the same key:
-        OidcClient client = clients.getClient(tenantId);
-        // use this client to get the token
-        return client.getTokens().await().indefinitely().getAccessToken();
-    }
-}
----
-====
-
-If needed, you can also create a new `OidcClient` programmatically, like this:
-
-[source,java]
----
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import io.quarkus.oidc.client.OidcClient;
-import io.quarkus.oidc.client.OidcClients;
-import io.quarkus.oidc.client.OidcClientConfig;
-
-import io.smallrye.mutiny.Uni;
-
-@Path("/clients")
-public class OidcClientResource {
-
-    @Inject
-    OidcClients clients;
-
-    @GET
-    public String getResponse() {
-        OidcClientConfig cfg = new OidcClientConfig();
-        cfg.setId("myclient");
-        cfg.setAuthServerUrl("http://localhost:8081/auth/realms/quarkus/");
-        cfg.setClientId("quarkus");
-        cfg.getCredentials().setSecret("secret");
-        Uni<OidcClient> client = clients.newClient(cfg);
-        // use this client to get the token
-        return client.onItem().transformToUni(OidcClient::getTokens)
-                .await().indefinitely().getAccessToken();
-    }
-}
----
-
-[[named-oidc-clients]]
-=== Inject named OidcClient and Tokens
-
-If multiple ``OidcClient``s are configured, you can specify the `OidcClient` injection target with the extra qualifier `@NamedOidcClient` instead of working with `OidcClients`:
-
-[source,java]
----
-package io.quarkus.oidc.client;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-@Path("/clients")
-public class OidcClientResource {
-
-    @Inject
-    @NamedOidcClient("jwt-secret")
-    OidcClient client;
-
-    @GET
-    public String getResponse() {
-        // use client to get the token
-        return client.getTokens().await().indefinitely().getAccessToken();
-    }
-}
----
-
-The same qualifier can be used to specify the `OidcClient` used for a `Tokens` injection:
-
-[source,java]
----
-@Provider
-@Priority(Priorities.AUTHENTICATION)
-@RequestScoped
-public class OidcClientRequestCustomFilter implements ClientRequestFilter {
-
-    @Inject
-    @NamedOidcClient("jwt-secret")
-    Tokens tokens;
-
-    @Override
-    public void filter(ClientRequestContext requestContext) throws IOException {
-        requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " +
tokens.getAccessToken()); - } -} ----- - -[[oidc-client-reactive-filter]] -=== Use OidcClient in RestClient Reactive ClientFilter - -Add the following Maven dependency: - -[source,xml] ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-oidc-client-reactive-filter</artifactId> -</dependency> ----- - -Note it will also bring `io.quarkus:quarkus-oidc-client`. - -The `quarkus-oidc-client-reactive-filter` extension provides `io.quarkus.oidc.client.filter.OidcClientRequestReactiveFilter`. - -It works similarly to `OidcClientRequestFilter` (see <<oidc-client-filter>>) - it uses `OidcClient` to acquire the access token, refresh it if needed, and set it as an HTTP `Authorization` `Bearer` scheme value. The difference is that it works with the xref:rest-client-reactive.adoc[Reactive RestClient] and implements a non-blocking client filter which does not block the current IO thread when acquiring or refreshing the tokens. - -`OidcClientRequestReactiveFilter` delays the initial token acquisition until it is executed, to avoid blocking an IO thread, and it can currently only be registered with the `org.eclipse.microprofile.rest.client.annotation.RegisterProvider` annotation: - -[source,java] ----- -import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; -import io.quarkus.oidc.client.reactive.filter.OidcClientRequestReactiveFilter; -import io.smallrye.mutiny.Uni; - -@RegisterRestClient -@RegisterProvider(OidcClientRequestReactiveFilter.class) -@Path("/") -public interface ProtectedResourceService { - - @GET - Uni<String> getUserName(); -} ----- - -`OidcClientRequestReactiveFilter` uses a default `OidcClient` by default. A named `OidcClient` can be selected with a `quarkus.oidc-client-reactive-filter.client-name` configuration property. 
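- -For example, assuming the named `jwt-secret` client configured earlier in this guide, the reactive filter can be pointed at it like this: - -[source,properties] ----- -quarkus.oidc-client-reactive-filter.client-name=jwt-secret -----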
- - -[[oidc-client-filter]] -=== Use OidcClient in RestClient ClientFilter - -Add the following Maven dependency: - -[source,xml] ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-oidc-client-filter</artifactId> -</dependency> ----- - -Note it will also bring `io.quarkus:quarkus-oidc-client`. - -The `quarkus-oidc-client-filter` extension provides the `io.quarkus.oidc.client.filter.OidcClientRequestFilter` JAX-RS `ClientRequestFilter` which uses `OidcClient` to acquire the access token, refresh it if needed, and set it as an HTTP `Authorization` `Bearer` scheme value. - -By default, this filter will get `OidcClient` to acquire the first pair of access and refresh tokens at its initialization time. If the access tokens are short-lived and refresh tokens are not available then the token acquisition should be delayed with `quarkus.oidc-client.early-tokens-acquisition=false`. - -You can selectively register `OidcClientRequestFilter` by using either the `io.quarkus.oidc.client.filter.OidcClientFilter` or `org.eclipse.microprofile.rest.client.annotation.RegisterProvider` annotation: - -[source,java] ----- -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; -import io.quarkus.oidc.client.filter.OidcClientFilter; - -@RegisterRestClient -@OidcClientFilter -@Path("/") -public interface ProtectedResourceService { - - @GET - String getUserName(); -} ----- - -or - -[source,java] ----- -import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; -import io.quarkus.oidc.client.filter.OidcClientRequestFilter; - -@RegisterRestClient -@RegisterProvider(OidcClientRequestFilter.class) -@Path("/") -public interface ProtectedResourceService { - - @GET - String getUserName(); -} ----- - -Alternatively, `OidcClientRequestFilter` can be registered automatically with all MP Rest or JAX-RS clients if the `quarkus.oidc-client-filter.register-filter=true` property is set. - -`OidcClientRequestFilter` uses a default `OidcClient` by default. 
A named `OidcClient` can be selected with a `quarkus.oidc-client-filter.client-name` configuration property. - -=== Use Custom RestClient ClientFilter - -If you prefer, you can use your own custom filter and inject `Tokens`: - -[source,java] ----- -import io.quarkus.oidc.client.Tokens; - -@Provider -@Priority(Priorities.AUTHENTICATION) -public class OidcClientRequestCustomFilter implements ClientRequestFilter { - - @Inject - Tokens tokens; - - @Override - public void filter(ClientRequestContext requestContext) throws IOException { - requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " + tokens.getAccessToken()); - } -} ----- - -The `Tokens` producer will acquire and refresh the tokens, and the custom filter will decide how and when to use the token. - -You can also inject named `Tokens`, see <<named-oidc-clients>>. - -[[refresh-access-tokens]] -=== Refreshing Access Tokens - -`OidcClientRequestReactiveFilter`, `OidcClientRequestFilter` and `Tokens` producers will refresh the current expired access token if the refresh token is available. -Additionally, the `quarkus.oidc-client.refresh-token-time-skew` property can be used for a preemptive access token refresh, to avoid sending nearly expired access tokens which may cause HTTP 401 errors. For example, if this property is set to `3S` and the access token will expire in less than 3 seconds then the token will be auto-refreshed. - -If the access token needs to be refreshed but no refresh token is available then an attempt will be made to acquire a new token using the configured grant such as `client_credentials`. - -Please note that some OpenID Connect Providers will not return a refresh token in a `client_credentials` grant response. For example, starting from Keycloak 12 a refresh token will not be returned by default for `client_credentials`. The providers may also restrict the number of times a refresh token can be used. 
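- -For example, a minimal configuration of the preemptive token refresh described above: - -[source,properties] ----- -# Auto-refresh the access token if it expires in less than 3 seconds -quarkus.oidc-client.refresh-token-time-skew=3S -----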
- -[[oidc-client-authentication]] -=== OidcClient Authentication - -`OidcClient` has to authenticate to the OpenID Connect Provider for the `client_credentials` and other grant requests to succeed. -All the https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication[OIDC Client Authentication] options are supported, for example: - -`client_secret_basic`: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app -quarkus.oidc-client.credentials.secret=mysecret ----- - -or - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app -quarkus.oidc-client.credentials.client-secret.value=mysecret ----- - -or with the secret retrieved from a xref:credentials-provider.adoc[CredentialsProvider]: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app - -# This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider -quarkus.oidc-client.credentials.client-secret.provider.key=mysecret-key -# Set it only if more than one CredentialsProvider can be registered -quarkus.oidc-client.credentials.client-secret.provider.name=oidc-credentials-provider ----- - -`client_secret_post`: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app -quarkus.oidc-client.credentials.client-secret.value=mysecret -quarkus.oidc-client.credentials.client-secret.method=post ----- - -`client_secret_jwt`, signature algorithm is `HS256`: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app 
-quarkus.oidc-client.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow ----- - -or with the secret retrieved from a xref:credentials-provider.adoc[CredentialsProvider], signature algorithm is `HS256`: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app - -# This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider -quarkus.oidc-client.credentials.jwt.secret-provider.key=mysecret-key -# Set it only if more than one CredentialsProvider can be registered -quarkus.oidc-client.credentials.jwt.secret-provider.name=oidc-credentials-provider ----- - -`private_key_jwt` with the PEM key file, signature algorithm is `RS256`: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app -quarkus.oidc-client.credentials.jwt.key-file=privateKey.pem ----- - -`private_key_jwt` with the key store file, signature algorithm is `RS256`: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app -quarkus.oidc-client.credentials.jwt.key-store-file=keystore.jks -quarkus.oidc-client.credentials.jwt.key-store-password=mypassword -quarkus.oidc-client.credentials.jwt.key-password=mykeypassword - -# Private key alias inside the keystore -quarkus.oidc-client.credentials.jwt.key-id=mykeyAlias ----- - -Using `client_secret_jwt` or `private_key_jwt` authentication methods ensures that no client secret goes over the wire. 
- -==== Additional JWT Authentication options - -If either `client_secret_jwt` or `private_key_jwt` authentication methods are used then the JWT signature algorithm, key identifier, audience, subject and issuer can be customized, for example: - -[source,properties] ----- -# private_key_jwt client authentication - -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc-client.client-id=quarkus-app -quarkus.oidc-client.credentials.jwt.key-file=privateKey.pem - -# This is a token key identifier 'kid' header - set it if your OpenID Connect provider requires it. -# Note if the key is represented in a JSON Web Key (JWK) format with a `kid` property then -# using 'quarkus.oidc-client.credentials.jwt.token-key-id' is not necessary. -quarkus.oidc-client.credentials.jwt.token-key-id=mykey - -# Use RS512 signature algorithm instead of the default RS256 -quarkus.oidc-client.credentials.jwt.signature-algorithm=RS512 - -# The token endpoint URL is the default audience value, use the base address URL instead: -quarkus.oidc-client.credentials.jwt.audience=${quarkus.oidc-client.auth-server-url} - -# custom subject instead of the client id : -quarkus.oidc-client.credentials.jwt.subject=custom-subject - -# custom issuer instead of the client id : -quarkus.oidc-client.credentials.jwt.issuer=custom-issuer ----- - -==== Apple POST JWT - -Apple OpenID Connect Provider uses a `client_secret_post` method where a secret is a JWT produced with a `private_key_jwt` authentication method but with Apple account specific issuer and subject properties. 
- -`quarkus-oidc-client` supports a non-standard `client_secret_post_jwt` authentication method which can be configured as follows: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=${apple.url} -quarkus.oidc-client.client-id=${apple.client-id} -quarkus.oidc-client.credentials.client-secret.method=post-jwt - -quarkus.oidc-client.credentials.jwt.key-file=ecPrivateKey.pem -quarkus.oidc-client.credentials.jwt.signature-algorithm=ES256 -quarkus.oidc-client.credentials.jwt.subject=${apple.subject} -quarkus.oidc-client.credentials.jwt.issuer=${apple.issuer} ----- - -==== Mutual TLS - -Some OpenID Connect Providers may require that a client is authenticated as part of the `Mutual TLS` (`MTLS`) authentication process. - -`quarkus-oidc-client` can be configured as follows to support `MTLS`: - -[source,properties] ----- -quarkus.oidc.tls.verification=certificate-validation - -# Keystore configuration -quarkus.oidc.client.tls.key-store-file=client-keystore.jks -quarkus.oidc.client.tls.key-store-password=${key-store-password} - -# Add more keystore properties if needed: -#quarkus.oidc.client.tls.key-store-alias=keyAlias -#quarkus.oidc.client.tls.key-store-alias-password=keyAliasPassword - -# Truststore configuration -quarkus.oidc.client.tls.trust-store-file=client-truststore.jks -quarkus.oidc.client.tls.trust-store-password=${trust-store-password} -# Add more truststore properties if needed: -#quarkus.oidc.client.tls.trust-store-alias=certAlias ----- - -[[integration-testing-oidc-client]] -=== Testing - -Start by adding the following dependencies to your test project: - -[source,xml] ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-junit5</artifactId> -    <scope>test</scope> -</dependency> -<dependency> -    <groupId>org.awaitility</groupId> -    <artifactId>awaitility</artifactId> -    <scope>test</scope> -</dependency> ----- - -[[integration-testing-wiremock]] -==== Wiremock - -Add the following dependency to your test project: - -[source,xml] ----- -<dependency> -    <groupId>com.github.tomakehurst</groupId> -    <artifactId>wiremock-jre8</artifactId> -    <scope>test</scope> -</dependency> ----- - -Write a Wiremock-based `QuarkusTestResourceLifecycleManager`, for example: -[source, java] ----- 
-package io.quarkus.it.keycloak; - -import static com.github.tomakehurst.wiremock.client.WireMock.matching; -import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig; - -import java.util.HashMap; -import java.util.Map; - -import com.github.tomakehurst.wiremock.WireMockServer; -import com.github.tomakehurst.wiremock.client.WireMock; -import com.github.tomakehurst.wiremock.core.Options.ChunkedEncodingPolicy; - -import io.quarkus.test.common.QuarkusTestResourceLifecycleManager; - -public class KeycloakRealmResourceManager implements QuarkusTestResourceLifecycleManager { - private WireMockServer server; - - @Override - public Map<String, String> start() { - - server = new WireMockServer(wireMockConfig().dynamicPort().useChunkedTransferEncoding(ChunkedEncodingPolicy.NEVER)); - server.start(); - - server.stubFor(WireMock.post("/tokens") - .withRequestBody(matching("grant_type=password&username=alice&password=alice")) - .willReturn(WireMock - .aResponse() - .withHeader("Content-Type", "application/json") - .withBody( - "{\"access_token\":\"access_token_1\", \"expires_in\":4, \"refresh_token\":\"refresh_token_1\"}"))); - server.stubFor(WireMock.post("/tokens") - .withRequestBody(matching("grant_type=refresh_token&refresh_token=refresh_token_1")) - .willReturn(WireMock - .aResponse() - .withHeader("Content-Type", "application/json") - .withBody( - "{\"access_token\":\"access_token_2\", \"expires_in\":4, \"refresh_token\":\"refresh_token_1\"}"))); - - - Map<String, String> conf = new HashMap<>(); - conf.put("keycloak.url", server.baseUrl()); - return conf; - } - - @Override - public synchronized void stop() { - if (server != null) { - server.stop(); - server = null; - } - } -} ----- - -Prepare the REST test endpoints. You can have a test frontend endpoint which uses the injected MP REST client with a registered OidcClient filter to invoke a downstream endpoint which echoes the token back; for example, see `integration-tests/oidc-client-wiremock` in the `main` 
Quarkus repository. - -Set `application.properties`, for example: - -[source, properties] ----- -# Use 'keycloak.url' property set by the test KeycloakRealmResourceManager -quarkus.oidc-client.auth-server-url=${keycloak.url} -quarkus.oidc-client.discovery-enabled=false -quarkus.oidc-client.token-path=/tokens -quarkus.oidc-client.client-id=quarkus-service-app -quarkus.oidc-client.credentials.secret=secret -quarkus.oidc-client.grant.type=password -quarkus.oidc-client.grant-options.password.username=alice -quarkus.oidc-client.grant-options.password.password=alice ----- - -and finally write the test code. Given the Wiremock-based resource above, the first test invocation should return the `access_token_1` access token, which will expire in 4 seconds. Use `awaitility` to wait for about 5 seconds; the next test invocation should then return the `access_token_2` access token, which confirms that the expired `access_token_1` access token has been refreshed. - -==== Keycloak - -If you work with Keycloak then you can use the same approach as described in the xref:security-openid-connect.adoc#integration-testing-keycloak[OpenID Connect Bearer Token Integration testing] `Keycloak` section. 
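- -The test flow described above can be sketched as follows. Note this is only a sketch: the `OidcClientTest` class name and the `/frontend/echo-token` endpoint path are hypothetical and should match your own test endpoints: - -[source,java] ----- -package io.quarkus.it.keycloak; - -import static org.hamcrest.Matchers.equalTo; - -import java.time.Duration; - -import org.awaitility.Awaitility; -import org.junit.jupiter.api.Test; - -import io.quarkus.test.common.QuarkusTestResource; -import io.quarkus.test.junit.QuarkusTest; -import io.restassured.RestAssured; - -@QuarkusTest -@QuarkusTestResource(KeycloakRealmResourceManager.class) -public class OidcClientTest { - -    @Test -    public void testTokenRefreshed() { -        // The first invocation returns access_token_1, which expires in 4 seconds -        RestAssured.when().get("/frontend/echo-token") -                .then().statusCode(200).body(equalTo("access_token_1")); - -        // Wait for access_token_1 to expire; the filter must then refresh it -        Awaitility.await().pollDelay(Duration.ofSeconds(5)).atMost(Duration.ofSeconds(10)) -                .untilAsserted(() -> RestAssured.when().get("/frontend/echo-token") -                        .then().statusCode(200).body(equalTo("access_token_2"))); -    } -} -----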
- -=== How to check the errors in the logs - -Please enable `io.quarkus.oidc.client.runtime.OidcClientImpl` `TRACE` level logging to see more details about the token acquisition and refresh errors: - -[source, properties] ----- -quarkus.log.category."io.quarkus.oidc.client.runtime.OidcClientImpl".level=TRACE -quarkus.log.category."io.quarkus.oidc.client.runtime.OidcClientImpl".min-level=TRACE ----- - -Please enable `io.quarkus.oidc.client.runtime.OidcClientRecorder` `TRACE` level logging to see more details about the OidcClient initialization errors: - -[source, properties] ----- -quarkus.log.category."io.quarkus.oidc.client.runtime.OidcClientRecorder".level=TRACE -quarkus.log.category."io.quarkus.oidc.client.runtime.OidcClientRecorder".min-level=TRACE ----- - -[[token-propagation]] -== Token Propagation - -The `quarkus-oidc-token-propagation` extension provides two JAX-RS `javax.ws.rs.client.ClientRequestFilter` class implementations that simplify the propagation of authentication information. -`io.quarkus.oidc.token.propagation.AccessTokenRequestFilter` propagates the xref:security-openid-connect.adoc[Bearer] token present in the current active request or the token acquired from the xref:security-openid-connect-web-authentication.adoc[Authorization Code Flow], as the HTTP `Authorization` header's `Bearer` scheme value. -The `io.quarkus.oidc.token.propagation.JsonWebTokenRequestFilter` provides the same functionality, but in addition provides support for JWT tokens. - -When you need to propagate the current Authorization Code Flow access token then the immediate token propagation will work well - as the code flow access tokens (as opposed to ID tokens) are meant to be propagated for the current Quarkus endpoint to access the remote services on behalf of the currently authenticated user. - -However, the direct end to end Bearer token propagation should be avoided if possible. 
For example, `Client -> Service A -> Service B` where `Service B` receives a token sent by `Client` to `Service A`. In such cases `Service B` will not be able to distinguish if the token came from `Service A` or from `Client` directly. For `Service B` to verify the token came from `Service A` it should be able to assert a new issuer and audience claims. - -Additionally, a complex application may need to exchange or update the tokens before propagating them. For example, the access context might be different when `Service A` is accessing `Service B`. In this case, `Service A` might be granted a narrow or a completely different set of scopes to access `Service B`. - -The following sections show how `AccessTokenRequestFilter` and `JsonWebTokenRequestFilter` can help. - -=== RestClient AccessTokenRequestFilter - -`AccessTokenRequestFilter` treats all tokens as Strings and as such it can work with both JWT and opaque tokens. - -You can selectively register `AccessTokenRequestFilter` by using either `io.quarkus.oidc.token.propagation.AccessToken` or `org.eclipse.microprofile.rest.client.annotation.RegisterProvider`, for example: - -[source,java] ----- -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; -import io.quarkus.oidc.token.propagation.AccessToken; - -@RegisterRestClient -@AccessToken -@Path("/") -public interface ProtectedResourceService { - - @GET - String getUserName(); -} ----- -or - -[source,java] ----- -import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; -import io.quarkus.oidc.token.propagation.AccessTokenRequestFilter; - -@RegisterRestClient -@RegisterProvider(AccessTokenRequestFilter.class) -@Path("/") -public interface ProtectedResourceService { - - @GET - String getUserName(); -} ----- - -Alternatively, `AccessTokenRequestFilter` can be registered automatically with all MP Rest or JAX-RS clients if 
`quarkus.oidc-token-propagation.register-filter` property is set to `true` and `quarkus.oidc-token-propagation.json-web-token` property is set to `false` (which is a default value). - -==== Exchange Token Before Propagation - -If the current access token needs to be exchanged before propagation and you work with link:https://www.keycloak.org/docs/latest/securing_apps/#_token-exchange[Keycloak] or other OpenID Connect Provider which supports a link:https://tools.ietf.org/html/rfc8693[Token Exchange] token grant then you can configure `AccessTokenRequestFilter` like this: - -[source,properties] ----- -quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus -quarkus.oidc-client.client-id=quarkus-app -quarkus.oidc-client.credentials.secret=secret -quarkus.oidc-client.grant.type=exchange -quarkus.oidc-client.grant-options.exchange.audience=quarkus-app-exchange - -quarkus.oidc-token-propagation.exchange-token=true ----- - -Note `AccessTokenRequestFilter` will use `OidcClient` to exchange the current token and you can use `quarkus.oidc-client.grant-options.exchange` to set the additional exchange properties expected by your OpenID Connect Provider. - -`AccessTokenRequestFilter` uses a default `OidcClient` by default. A named `OidcClient` can be selected with a `quarkus.oidc-token-propagation.client-name` configuration property. - -=== RestClient JsonWebTokenRequestFilter - -Using `JsonWebTokenRequestFilter` is recommended if you work with Bearer JWT tokens where these tokens can have their claims such as `issuer` and `audience` modified and the updated tokens secured (for example, re-signed) again. It expects an injected `org.eclipse.microprofile.jwt.JsonWebToken` and therefore will not work with the opaque tokens. Also, if your OpenID Connect Provider supports a Token Exchange protocol then it is recommended to use `AccessTokenRequestFilter` instead - as both JWT and opaque bearer tokens can be securely exchanged with `AccessTokenRequestFilter`. 
- -`JsonWebTokenRequestFilter` makes it easy for `Service A` implementations to update the injected `org.eclipse.microprofile.jwt.JsonWebToken` with the new `issuer` and `audience` claim values and secure the updated token again with a new signature. The only difficult step is to ensure `Service A` has a signing key - it should be provisioned from a secure file system or from the remote secure storage such as Vault. - -You can selectively register `JsonWebTokenRequestFilter` by using either `io.quarkus.oidc.token.propagation.JsonWebToken` or `org.eclipse.microprofile.rest.client.annotation.RegisterProvider`, for example: - -[source,java] ----- -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; -import io.quarkus.oidc.token.propagation.JsonWebToken; - -@RegisterRestClient -@JsonWebToken -@Path("/") -public interface ProtectedResourceService { - - @GET - String getUserName(); -} ----- -or - -[source,java] ----- -import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; -import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; -import io.quarkus.oidc.token.propagation.JsonWebTokenRequestFilter; - -@RegisterRestClient -@RegisterProvider(JsonWebTokenRequestFilter.class) -@Path("/") -public interface ProtectedResourceService { - - @GET - String getUserName(); -} ----- - -Alternatively, `JsonWebTokenRequestFilter` can be registered automatically with all MP Rest or JAX-RS clients if both `quarkus.oidc-token-propagation.register-filter` and `quarkus.oidc-token-propagation.json-web-token` properties are set to `true`. 
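- -For example, the automatic registration described above can be enabled with: - -[source,properties] ----- -quarkus.oidc-token-propagation.register-filter=true -quarkus.oidc-token-propagation.json-web-token=true -----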
- -==== Update Token Before Propagation - -If the injected token needs to have its `iss` (issuer) and/or `aud` (audience) claims updated and secured again with a new signature then you can configure `JsonWebTokenRequestFilter` like this: - -[source,properties] ----- -quarkus.oidc-token-propagation.secure-json-web-token=true -smallrye.jwt.sign.key.location=/privateKey.pem -# Set a new issuer -smallrye.jwt.new-token.issuer=http://frontend-resource -# Set a new audience -smallrye.jwt.new-token.audience=http://downstream-resource -# Override the existing token issuer and audience claims if they are already set -smallrye.jwt.new-token.override-matching-claims=true ----- - -As already noted above, please use `AccessTokenRequestFilter` if you work with Keycloak or an OpenID Connect Provider which supports a Token Exchange protocol. - -[[integration-testing-token-propagation]] -=== Testing - -You can generate the tokens as described in the xref:security-openid-connect.adoc#integration-testing[OpenID Connect Bearer Token Integration testing] section. -Prepare the REST test endpoints. You can have a test frontend endpoint which uses the injected MP REST client with a registered token propagation filter to invoke the downstream endpoint; for example, see `integration-tests/oidc-token-propagation` in the `main` Quarkus repository. - -[[reactive-token-propagation]] -== Token Propagation Reactive - -Add the following Maven dependency: - -[source,xml] ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-oidc-token-propagation-reactive</artifactId> -</dependency> ----- - -The `quarkus-oidc-token-propagation-reactive` extension provides `io.quarkus.oidc.token.propagation.reactive.AccessTokenRequestReactiveFilter` which can be used to propagate the current `Bearer` or `Authorization Code Flow` access tokens. 
- -The `quarkus-oidc-token-propagation-reactive` extension (as opposed to the non-reactive `quarkus-oidc-token-propagation` extension) does not currently support exchanging or re-signing the tokens before propagation. -However, these features may be added in the future. - -== References - -* xref:security.adoc[Quarkus Security] -* xref:security-openid-connect.adoc[Quarkus - Using OpenID Connect to Protect Service Applications using Bearer Token Authorization] -* xref:security-openid-connect-web-authentication.adoc[Quarkus - Using OpenID Connect to Protect Web Applications using Authorization Code Flow] diff --git a/_versions/2.7/guides/security-openid-connect-dev-services.adoc b/_versions/2.7/guides/security-openid-connect-dev-services.adoc deleted file mode 100644 index 0d8ff2dcd91..00000000000 --- a/_versions/2.7/guides/security-openid-connect-dev-services.adoc +++ /dev/null @@ -1,373 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Dev Services and UI for OpenID Connect (OIDC) - -include::./attributes.adoc[] - -This guide covers the Dev Services and UI for the OpenID Connect (OIDC) Keycloak provider and explains how to support Dev Services and UI for other OpenID Connect providers. -It also describes Dev UI for all OpenID Connect providers which have already been started before Quarkus is launched in dev mode. - -== Introduction - -Quarkus introduces an experimental `Dev Services For Keycloak` feature which is enabled by default when the `quarkus-oidc` extension is started in dev mode and when the integration tests are running in test mode, but only when no `quarkus.oidc.auth-server-url` property is configured. 
-It starts a Keycloak container in dev and/or test mode and initializes it by registering an existing Keycloak realm or creating a new realm with a client and users so that you can start developing your Quarkus application secured by Keycloak immediately. It restarts the container when changes to `application.properties` or to the realm file are detected. - -Additionally, xref:dev-ui.adoc[Dev UI] available at http://localhost:8080/q/dev[/q/dev] complements this feature with a Dev UI page which helps to acquire the tokens from Keycloak and test your Quarkus application. - -If `quarkus.oidc.auth-server-url` is already set then a generic OpenID Connect Dev Console, which can be used with all OpenID Connect providers, will be activated instead; please see the section about other providers below for more information. - -== Dev Services for Keycloak - -Start your application without configuring `quarkus.oidc` properties in `application.properties` with: - -include::includes/devtools/dev.adoc[] - -You will see in the console something similar to: - -[source,shell] ----- -KeyCloak Dev Services Starting: -2021-11-02 17:14:24,864 INFO [org.tes.con.wai.str.HttpWaitStrategy] (build-10) /unruffled_agnesi: Waiting for 60 seconds for URL: http://localhost:32781/auth (where port 32781 maps to container port 8080) -2021-11-02 17:14:44,170 INFO [io.qua.oid.dep.dev.key.KeycloakDevServicesProcessor] (build-10) Dev Services for Keycloak started. ----- - -The `quay.io/keycloak/keycloak:15.0.2` image, which contains a `Keycloak` distribution powered by `WildFly`, is currently used to start a container by default. See the section below about configuring the image for more details about the image selection. - -[IMPORTANT] -==== -When logging into the Keycloak admin console, the username is `admin` and the password is `admin`. 
-==== - -Note that by default, `Dev Services for Keycloak` will not start a new container if it finds an existing container with a `quarkus-dev-service-keycloak` label; instead, it will connect to it if the label's value matches the value of the `quarkus.keycloak.devservices.service-name` property (the default value is `quarkus`). In such cases you will see a slightly different output when running: - -include::includes/devtools/dev.adoc[] - -[source,shell] ----- -2021-08-27 18:42:43,530 INFO [io.qua.dev.com.ContainerLocator] (build-15) Dev Services container found: 48fee151a31ddfe32c39965be8f61108587b25ed2f66cdc18bb926d9e2e570c5 (quay.io/keycloak/keycloak:14.0.0). Connecting to: 0.0.0.0:32797. -2021-08-27 18:42:43,600 INFO [io.qua.oid.dep.dev.key.KeycloakDevServicesProcessor] (build-15) Dev Services for Keycloak started. -... ----- - -Note that you can disable sharing the containers with `quarkus.keycloak.devservices.shared=false`. - -Now open the main link:http://localhost:8080/q/dev[Dev UI page] and you will see the `OpenID Connect Card` linking to a `Keycloak` page: - -image::dev-ui-oidc-keycloak-card.png[alt=Dev UI OpenID Connect Card,role="center"] - -Click on the `Provider: Keycloak` link and you will see a Keycloak page which will be presented slightly differently depending on how the `Dev Services for Keycloak` feature has been configured. - -[[develop-service-applications]] -=== Developing Service Applications - -By default the Keycloak page can be used to support the development of a xref:security-openid-connect.adoc[Quarkus OIDC service application]. - -[[keycloak-authorization-code-grant]] -==== Authorization Code Grant - -If you set `quarkus.oidc.devui.grant.type=code` in `application.properties` (this is the default value) then an `authorization_code` grant will be used to acquire both access and ID tokens. Using this grant is recommended to emulate a typical flow where a `Single Page Application` acquires the tokens and uses them to access Quarkus services. 
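- -Since `code` is the default grant, setting it explicitly is optional: - -[source,properties] ----- -quarkus.oidc.devui.grant.type=code -----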
- -First you will see an option to `Log into Single Page Application`: - -image::dev-ui-keycloak-sign-in-to-spa.png[alt=Dev UI OpenID Connect Keycloak Page - Log into Single Page Application,role="center"] - -Next, after you select this option, you will be redirected to Keycloak to authenticate, for example as `alice:alice`, and then returned to the page representing the SPA: - -image::dev-ui-keycloak-test-service-from-spa.png[alt=Dev UI OpenID Connect Keycloak Single Page Application,role="center"] - -You can view the acquired access and ID tokens, for example: - -image::dev-ui-keycloak-decoded-tokens.png[alt=Dev UI OpenID Connect Keycloak Decoded Tokens View,role="center"] - -This view shows the encoded JWT token on the left-hand side and highlights the headers (red colour), payload/claims (green colour) and signature (blue colour). It also shows the decoded JWT token on the right-hand side where you can see the header and claim names and their values. - -Next, test the service with either the current access or ID token. SPA usually sends the access tokens to the application endpoints but there could be cases where the ID tokens are forwarded to the application frontends for them to be aware of the user who is currently logged into the SPA. - -Finally, you can select the `Log Out` option (image:dev-ui-keycloak-logout.png[Log Out]) if you'd like to log out and authenticate to Keycloak as a different user. - -Note Keycloak may return an error when you try to `Log into Single Page Application`. For example, `quarkus.oidc.client-id` may not match the client id in the realm imported to Keycloak, or the client in this realm is not configured correctly to support the authorization code flow, etc. 
In such cases Keycloak will return an `error_description` query parameter and `Dev UI` will show this error description, for example:

image::dev-ui-keycloak-login-error.png[alt=Dev UI Keycloak Login Error,role="center"]

If the error occurs, log into Keycloak using the `Keycloak Admin` option, update the realm configuration as necessary, and also check `application.properties`.

===== Test with Swagger UI or GraphQL UI

You can avoid manually entering the service paths and test your service with `Swagger UI` or `GraphQL UI` if `quarkus-smallrye-openapi` and/or `quarkus-smallrye-graphql` are used in your project. For example, if you start Quarkus in dev mode with both `quarkus-smallrye-openapi` and `quarkus-smallrye-graphql` dependencies, you will see the following options after logging into Keycloak:

image::dev-ui-keycloak-test-service-swaggerui-graphql.png[alt=Test your service with Swagger UI or GraphQL UI,role="center"]

For example, clicking on `Swagger UI` will open `Swagger UI` in a new browser tab where you can test the service using the token acquired by Dev UI for Keycloak, and `Swagger UI` will not try to authenticate again.

Integration with `GraphQL UI` works in a similar way: the access token acquired by Dev UI for Keycloak will be used.

[NOTE]
====
You may need to register a redirect URI for the authorization code flow initiated by Dev UI for Keycloak to work, because Keycloak may enforce that authenticated users are redirected only to the configured redirect URI. Doing so is recommended in production to prevent users from being redirected to the wrong endpoints, which might happen if the `redirect_uri` parameter in the authentication request URI has been manipulated.

If Keycloak does enforce it, you will see an authentication error informing you that the `redirect_uri` value is wrong.
In this case, select the `Keycloak Admin` option in the top right corner, log in as `admin:admin`, select the test realm and the client which Dev UI for Keycloak is configured with, and add `http://localhost:8080/q/dev/io.quarkus.quarkus-oidc/provider` to `Valid Redirect URIs`. If you used `-Dquarkus.http.port` when starting Quarkus, change `8080` to the value of `quarkus.http.port`.

If the container is shared between multiple applications running on different ports, you will need to register `redirect_uri` values for each of these applications.

You can set the `redirect_uri` value to `*` for test purposes only, especially when the containers are shared between multiple applications.

A `*` `redirect_uri` value is set by `Dev Services for Keycloak` when it creates a default realm, if no custom realm is imported.
====

==== Implicit Grant

If you set `quarkus.oidc.devui.grant.type=implicit` in `application.properties` then an `implicit` grant will be used to acquire both access and ID tokens. Use this grant for emulating a `Single Page Application` only if the authorization code grant does not work (for example, when a client is configured in Keycloak to support an implicit grant).

==== Password Grant

If you set `quarkus.oidc.devui.grant.type=password` in `application.properties` then you will see a screen like this one:

image::dev-ui-keycloak-password-grant.png[alt=Dev UI OpenID Connect Keycloak Page - Password Grant,role="center"]

Enter a registered user name and user password, enter a relative service endpoint path, click on `Test Service`, and you will see a status code such as `200`, `403`, `401` or `404` printed.
If the user name is also set in the `quarkus.keycloak.devservices.users` map property containing user names and passwords, you do not have to set a password when testing the service.
But note, you do not have to initialize `quarkus.keycloak.devservices.users` to test the service using the password grant.
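Under the covers, the password grant test is essentially a form-encoded token request to the realm's `openid-connect/token` endpoint. The following standalone sketch (illustrative only, not Quarkus code; the client and user values match the Dev Services defaults mentioned in this guide) shows how such a form body is built:

[source,java]
----
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class PasswordGrantForm {

    // Encodes grant parameters as an application/x-www-form-urlencoded body
    static String encode(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) {
                sb.append('&');
            }
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
                    .append('=')
                    .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> form = new LinkedHashMap<>();
        form.put("grant_type", "password");
        form.put("client_id", "quarkus-app");
        form.put("client_secret", "secret");
        form.put("username", "alice");
        form.put("password", "alice");
        // POST this body to .../realms/quarkus/protocol/openid-connect/token
        System.out.println(encode(form));
    }
}
----

POSTing this body with a `Content-Type: application/x-www-form-urlencoded` header to the token endpoint returns a JSON response containing the `access_token`, which is what the Dev UI then forwards to your service endpoint.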
You will also see in the Dev UI console something similar to:

[source,shell]
----
2021-07-19 17:58:11,407 INFO  [io.qua.oid.dep.dev.key.KeycloakDevConsolePostHandler] (security-openid-connect-quickstart-dev.jar) (DEV Console action) Using password grant to get a token from 'http://localhost:32818/auth/realms/quarkus/protocol/openid-connect/token' for user 'alice' in realm 'quarkus' with client id 'quarkus-app'
2021-07-19 17:58:11,533 INFO  [io.qua.oid.dep.dev.key.KeycloakDevConsolePostHandler] (security-openid-connect-quickstart-dev.jar) (DEV Console action) Test token: eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJ6Z2tDazJQZ1JaYnVlVG5kcTFKSW1sVnNoZ2hhbWhtbnBNcXU0QUt5MnJBIn0.ey...
2021-07-19 17:58:11,536 INFO  [io.qua.oid.dep.dev.key.KeycloakDevConsolePostHandler] (security-openid-connect-quickstart-dev.jar) (DEV Console action) Sending token to 'http://localhost:8080/api/admin'
2021-07-19 17:58:11,674 INFO  [io.qua.oid.dep.dev.key.KeycloakDevConsolePostHandler] (security-openid-connect-quickstart-dev.jar) (DEV Console action) Result: 200
----

A token is acquired from Keycloak using a `password` grant and is sent to the service endpoint.

==== Client Credentials Grant

If you set `quarkus.oidc.devui.grant.type=client` then a `client_credentials` grant will be used to acquire a token, with the page showing no `User` field in this case:

image::dev-ui-keycloak-client-credentials-grant.png[alt=Dev UI OpenID Connect Keycloak Page - Client Credentials Grant,role="center"]

You can test the service the same way as with the `Password` grant.

[[develop-web-app-applications]]
=== Developing OpenID Connect Web App Applications

If you develop a xref:security-openid-connect-web-authentication.adoc[Quarkus OIDC web-app application] then you should set `quarkus.oidc.application-type=web-app` in `application.properties` before starting the application.
You will see a screen like this one:

image::dev-ui-keycloak-sign-in-to-service.png[alt=Dev UI OpenID Connect Keycloak Sign In,role="center"]

Set a relative service endpoint path and click on `Sign In To Service`; you will be redirected to Keycloak in a new browser tab to enter a username and password, and will then get a response from the Quarkus application.

Note that in this case Dev UI does not add much to the development experience, because it is the Quarkus OIDC `web-app` application itself which controls the authorization code flow and acquires the tokens.

To make Dev UI more useful for supporting the development of OIDC `web-app` applications, you may want to consider setting profile-specific values for `quarkus.oidc.application-type`:

[source,properties]
----
%prod.quarkus.oidc.application-type=web-app
%test.quarkus.oidc.application-type=web-app
%dev.quarkus.oidc.application-type=service
----

This ensures that all Dev UI options described in <<develop-service-applications>> will be available when your `web-app` application is run in dev mode. The limitation of this approach is that both access and ID tokens returned with the code flow and acquired with Dev UI will be sent to the endpoint as HTTP `Bearer` tokens, which will not work well if your endpoint requires the injection of `IdToken`.
However, it will work as expected if your `web-app` application only uses the access token, for example, as a source of roles or to get `UserInfo`, even if it is assumed to be a `service` application in dev mode.

=== Running the tests

You can run the tests against a Keycloak container started in test mode using xref:continuous-testing.adoc[Continuous Testing].

It is also recommended to run the integration tests against Keycloak using `Dev Services for Keycloak`.
Please see xref:security-openid-connect.adoc#integration-testing-keycloak-devservices[Testing OpenID Connect Service Applications with Dev Services] and xref:security-openid-connect-web-authentication.adoc#integration-testing-keycloak-devservices[Testing OpenID Connect WebApp Applications with Dev Services] for more information.

[[keycloak-initialization]]
=== Keycloak Initialization

The `quay.io/keycloak/keycloak-x:16.0.0` image, which contains a `Keycloak-X` distribution powered by `Quarkus`, is used to start a container by default.
`quarkus.keycloak.devservices.image-name` can be used to change the Keycloak image name. For example, set it to `quay.io/keycloak/keycloak:16.0.0` to use a `Keycloak` distribution powered by `WildFly`.

`Dev Services for Keycloak` will then initialize the launched Keycloak server.

By default, a `quarkus` realm, a `quarkus-app` client with a `secret` password, `alice` and `bob` users (with the passwords matching the names), and `user` and `admin` roles are created, with `alice` given both `admin` and `user` roles and `bob` the `user` role.

Usernames, secrets and their roles can be customized with `quarkus.keycloak.devservices.users` (the map which contains usernames and secrets) and `quarkus.keycloak.devservices.roles` (the map which contains usernames and comma-separated role values).

For example:

[source,properties]
----
%dev.quarkus.keycloak.devservices.users.duke=dukePassword
%dev.quarkus.keycloak.devservices.roles.duke=reader
%dev.quarkus.keycloak.devservices.users.john=johnPassword
%dev.quarkus.keycloak.devservices.roles.john=reader,writer
----

This configuration creates two users:

* `duke` with a `dukePassword` password and a `reader` role
* `john` with a `johnPassword` password and `reader` and `writer` roles

`quarkus.oidc.client-id` and `quarkus.oidc.credentials.secret` can be used to customize the client id and secret.
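For example, the following sketch (the values are illustrative assumptions, not defaults) overrides the client id and secret used in dev mode:

[source,properties]
----
%dev.quarkus.oidc.client-id=custom-client
%dev.quarkus.oidc.credentials.secret=custom-client-secret
----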
However, your Keycloak configuration is likely to be more complex, requiring more properties to be set.

This is why `quarkus.keycloak.devservices.realm-path` is always checked first, before Keycloak is initialized with the default or configured realm, client, user and role properties. If the realm file exists on the file system or classpath, only this realm will be used to initialize Keycloak.

The Keycloak page also offers an option to `Sign In To Keycloak To Configure Realms` using a `Keycloak Admin` option in the top right corner:

image::dev-ui-keycloak-admin.png[alt=Dev UI OpenID Connect Keycloak Page - Keycloak Admin,role="center"]

Sign in to Keycloak as `admin:admin` in order to further customize the realm properties, create or import a new realm, or export the realm.

Note that even if you initialize Keycloak from a realm file, you still need to set the `quarkus.keycloak.devservices.users` property if a `password` grant is used to acquire the tokens to test the OIDC `service` applications.

== Disable Dev Services for Keycloak

`Dev Services for Keycloak` will not be activated if either `quarkus.oidc.auth-server-url` is already initialized or the default OIDC tenant is disabled with `quarkus.oidc.tenant-enabled=false`, irrespective of whether you work with Keycloak or not.

If you prefer not to have a `Dev Services for Keycloak` container started, or do not work with Keycloak, then you can also disable this feature with `quarkus.keycloak.devservices.enabled=false` - this is only necessary if you expect to start `quarkus:dev` without `quarkus.oidc.auth-server-url`.
The main Dev UI page will include an empty `OpenID Connect Card` when `Dev Services for Keycloak` is disabled and the `quarkus.oidc.auth-server-url` property has not been initialized:

image::dev-ui-oidc-card.png[alt=Dev UI OpenID Connect Card,role="center"]

If `quarkus.oidc.auth-server-url` is already set, a generic OpenID Connect Dev Console which can be used with all OpenID Connect providers may be activated; see <<dev-ui-all-oidc-providers>> for more information.

[[dev-ui-all-oidc-providers]]
== Dev UI for all OpenID Connect Providers

If `quarkus.oidc.auth-server-url` points to an already started OpenID Connect provider (which can be Keycloak or another provider), `quarkus.oidc.application-type` is set to `service` (which is the default value), and at least `quarkus.oidc.client-id` is set, then `Dev UI for all OpenID Connect Providers` will be activated.

Setting `quarkus.oidc.credentials.secret` will most likely be required for Keycloak and other providers for the authorization code flow initiated from Dev UI to complete, unless the client identified with `quarkus.oidc.client-id` is configured as a public client in your OpenID Connect provider's administration console.

Run:

include::includes/devtools/dev.adoc[]

And you will see the following message:

[source,shell]
----
...
2021-09-07 15:53:42,697 INFO  [io.qua.oid.dep.dev.OidcDevConsoleProcessor] (build-41) OIDC Dev Console: discovering the provider metadata at http://localhost:8180/auth/realms/quarkus/.well-known/openid-configuration
...
----

If the provider metadata discovery has been successful, then, after you open the main link:http://localhost:8080/q/dev[Dev UI page], you will see the `OpenID Connect Card` page linking to `Dev Console`:

image::dev-ui-oidc-devconsole-card.png[alt=Generic Dev UI OpenID Connect Card,role="center"]

Follow the link and you'll be able to log in to your provider, get the tokens and test the application.
The experience will be the same as described in the <<develop-service-applications>> section, where the `Dev Services for Keycloak` container has been started, especially if you work with Keycloak (please also pay attention to the `redirect_uri` note in that section).

If you work with other providers, the Dev UI experience described in the <<develop-service-applications>> section might differ slightly. For example, an access token may not be in JWT format, so it won't be possible to show its internal content, though all providers should return an ID token as JWT.

[NOTE]
====
The current access token is used by default to test the service with `Swagger UI` or `GraphQL UI`. If the provider (other than Keycloak) returns a binary access token, it will be used with `Swagger UI` or `GraphQL UI` only if this provider has a token introspection endpoint; otherwise, an `IdToken`, which is always in JWT format, will be passed to `Swagger UI` or `GraphQL UI`. In such cases you can verify with the manual Dev UI test that `401` will always be returned for the current binary access token. Also note that using `IdToken` as a fallback with either of these UIs is only possible with the authorization code flow.
====

Some providers, such as `Auth0`, do not support a standard RP-initiated logout, so the provider-specific logout properties will have to be configured for a logout option to be visible; please see xref:security-openid-connect-web-authentication.adoc#user-initiated-logout[OpenID Connect User-Initiated Logout] for more information.
Similarly, if you'd like to use a `password` or `client_credentials` grant for Dev UI to acquire the tokens, you may have to configure some extra provider-specific properties, for example:

[source,properties]
----
quarkus.oidc.devui.grant.type=password
quarkus.oidc.devui.grant-options.password.audience=http://localhost:8080
----

== Dev Services and UI Support for other OpenID Connect Providers

Your custom extension would need to extend `quarkus-oidc` and add the dependencies required to support your provider to the extension's `deployment` module only.

The build step dealing with the `Dev Services` should additionally register two runtime properties into the "io.quarkus.quarkus-oidc" namespace: `oidcProviderName` (for example, `Google`) and `oidcProviderUrlBase` (for example: `mycompany.devservices-google`) for the `OpenID Connect Card` to link to the Dev UI page representing your provider, for example:

[source,java]
----
package io.quarkus.oidc.okta.runtime;

import java.util.function.Supplier;

import io.quarkus.runtime.annotations.Recorder;

// This simple recorder is the only code which will be located in the extension's `runtime` module
@Recorder
public class OktaDevServicesRecorder {

    public Supplier<String> getProviderName() {
        return new Supplier<String>() {

            @Override
            public String get() {
                return "OKTA";
            }
        };
    }

    public Supplier<String> getProviderUrlBase() {
        return new Supplier<String>() {

            @Override
            public String get() {
                return "io.quarkus" + "."
                        + "quarkus-oidc-okta";
            }
        };
    }
}


package io.quarkus.oidc.okta.deployment.devservices;

import static io.quarkus.deployment.annotations.ExecutionTime.RUNTIME_INIT;

import java.util.Optional;

import io.quarkus.deployment.IsDevelopment;
import io.quarkus.deployment.annotations.BuildProducer;
import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.deployment.annotations.Consume;
import io.quarkus.deployment.annotations.Record;
import io.quarkus.deployment.builditem.RuntimeConfigSetupCompleteBuildItem;
import io.quarkus.devconsole.spi.DevConsoleRouteBuildItem;
import io.quarkus.devconsole.spi.DevConsoleRuntimeTemplateInfoBuildItem;

public class OktaDevConsoleProcessor {

    @BuildStep(onlyIf = IsDevelopment.class)
    @Record(value = RUNTIME_INIT)
    public void setOidcProviderProperties(BuildProducer<DevConsoleRuntimeTemplateInfoBuildItem> provider,
            OktaDevServicesRecorder recorder,
            Optional configProps) {
        if (configProps.isPresent()) {
            provider.produce(new DevConsoleRuntimeTemplateInfoBuildItem("io.quarkus", "quarkus-oidc", "oidcProviderName",
                    recorder.getProviderName()));
            provider.produce(new DevConsoleRuntimeTemplateInfoBuildItem("io.quarkus", "quarkus-oidc", "oidcProviderUrlBase",
                    recorder.getProviderUrlBase()));
        }
    }
}

----

Additionally, the extension should produce an `io.quarkus.oidc.deployment.devservices.OidcProviderBuildItem` to disable the default `Dev Services for Keycloak`, instead of the users having to type `quarkus.keycloak.devservices.enabled=false`.

Please follow the xref:dev-ui.adoc[Dev UI] tutorial as well as check the `extensions/oidc/deployment` sources for more ideas.

== Non Application Root Path Considerations

This document refers to the `http://localhost:8080/q/dev` Dev UI URL in several places, where `q` is the default non-application root path.
If you customize the `quarkus.http.root-path` and/or `quarkus.http.non-application-root-path` properties, replace `q` accordingly; see https://quarkus.io/blog/path-resolution-in-quarkus/[Path Resolution in Quarkus] for more information.

== References

* xref:dev-ui.adoc[Dev UI]
* https://www.keycloak.org/documentation.html[Keycloak Documentation]
* https://openid.net/connect/[OpenID Connect]
* xref:security-openid-connect.adoc[Quarkus - Using OpenID Connect to Protect Service Applications using Bearer Token Authorization]
* xref:security-openid-connect-web-authentication.adoc[Quarkus - Using OpenID Connect to Protect Web Applications using Authorization Code Flow]
* xref:security.adoc[Quarkus Security]

diff --git a/_versions/2.7/guides/security-openid-connect-multitenancy.adoc b/_versions/2.7/guides/security-openid-connect-multitenancy.adoc
deleted file mode 100644
index f7606b83d9b..00000000000
--- a/_versions/2.7/guides/security-openid-connect-multitenancy.adoc
+++ /dev/null
@@ -1,469 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Using OpenID Connect (OIDC) Multi-Tenancy

include::./attributes.adoc[]
:toc:

This guide demonstrates how your OpenID Connect (OIDC) application can support multi-tenancy so that you can serve multiple tenants from a single application. Tenants can be distinct realms or security domains within the same OpenID Provider, or even distinct OpenID Providers.

When serving multiple customers from the same application (for example, in a SaaS model), each customer is a tenant. Enabling multi-tenancy support allows you to define distinct authentication policies for each tenant, even if that means authenticating against different OpenID Providers, such as Keycloak and Google.
Please read the xref:security-openid-connect.adoc[Using OpenID Connect to Protect Service Applications] guide if you need to authorize a tenant using Bearer Token Authorization.

Please read the xref:security-openid-connect-web-authentication.adoc[Using OpenID Connect to Protect Web Applications] guide if you need to authenticate and authorize a tenant using the OpenID Connect Authorization Code Flow.

== Prerequisites

:prerequisites-docker:
include::includes/devtools/prerequisites.adoc[]
* https://stedolan.github.io/jq/[jq tool]

== Architecture

In this example, we build a very simple application which offers a single landing page:

* `/{tenant}`

The landing page is served by a JAX-RS resource and shows information obtained from the OpenID Provider about the authenticated user and the current tenant.

== Solution

We recommend that you follow the instructions in the next sections and create the application step by step.
However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].

The solution is located in the `security-openid-connect-multi-tenancy-quickstart` {quickstarts-tree-url}/security-openid-connect-multi-tenancy-quickstart[directory].

== Creating the Maven Project

First, we need a new project.
Create a new project with the following command:

:create-app-artifact-id: security-openid-connect-multi-tenancy-quickstart
:create-app-extensions: oidc,resteasy-jackson
include::includes/devtools/create-app.adoc[]

If you already have your Quarkus project configured, you can add the `oidc` extension
to your project by running the following command in your project base directory:

:add-extension-extensions: oidc
include::includes/devtools/extension-add.adoc[]

This will add the following to your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-oidc</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-oidc")
----

== Writing the application

Let's start by implementing the `/{tenant}` endpoint. As you can see from the source code below, it is just a regular JAX-RS resource:

[source,java]
----
package org.acme.quickstart.oidc;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.jwt.JsonWebToken;

import io.quarkus.oidc.IdToken;

@Path("/{tenant}")
public class HomeResource {

    /**
     * Injection point for the ID Token issued by the OpenID Connect Provider
     */
    @Inject
    @IdToken
    JsonWebToken idToken;

    /**
     * Returns the tokens available to the application. This endpoint exists only for demonstration purposes, you should not
     * expose these tokens in a real application.
     *
     * @return the landing page HTML
     */
    @GET
    public String getHome() {
        StringBuilder response = new StringBuilder().append("<html>").append("<body>");

        response.append("<h2>Welcome, ").append(this.idToken.getClaim("email").toString()).append("</h2>\n");
        response.append("<h3>You are accessing the application within tenant <b>").append(idToken.getIssuer()).append("</b> boundaries</h3>");

        return response.append("</body>").append("</html>").toString();
    }
}
----

In order to resolve the tenant from incoming requests and map it to a specific `quarkus-oidc` tenant configuration in `application.properties`, you need to create an implementation for the `io.quarkus.oidc.TenantResolver` interface.

[source,java]
----
package org.acme.quickstart.oidc;

import javax.enterprise.context.ApplicationScoped;

import io.quarkus.oidc.TenantResolver;
import io.vertx.ext.web.RoutingContext;

@ApplicationScoped
public class CustomTenantResolver implements TenantResolver {

    @Override
    public String resolve(RoutingContext context) {
        String path = context.request().path();
        String[] parts = path.split("/");

        if (parts.length == 0) {
            // resolve to default tenant configuration
            return null;
        }

        return parts[1];
    }
}
----

From the implementation above, tenants are resolved from the request path; if no tenant can be inferred, `null` is returned to indicate that the default tenant configuration should be used.

[NOTE]
====
When the current tenant represents an OIDC `web-app` application, the current `io.vertx.ext.web.RoutingContext` will already contain a `tenant-id` attribute by the time the custom tenant resolver is called, both for the requests completing the code authentication flow and for already authenticated requests, when either a tenant-specific state or session cookie exists.
Therefore, when working with multiple OpenID Connect Providers, you only need a path-specific check to resolve a tenant id if the `RoutingContext` does not have the `tenant-id` attribute set, for example:

[source,java]
----
package org.acme.quickstart.oidc;

import javax.enterprise.context.ApplicationScoped;

import io.quarkus.oidc.TenantResolver;
import io.vertx.ext.web.RoutingContext;

@ApplicationScoped
public class CustomTenantResolver implements TenantResolver {

    @Override
    public String resolve(RoutingContext context) {
        String tenantId = context.get("tenant-id");
        if (tenantId != null) {
            return tenantId;
        } else {
            // Initial login request
            String path = context.request().path();
            String[] parts = path.split("/");

            if (parts.length == 0) {
                // resolve to default tenant configuration
                return null;
            }
            return parts[1];
        }
    }
}
----
====

[NOTE]
====
If you also use xref:hibernate-orm.adoc#multitenancy[Hibernate ORM multitenancy] and both OIDC and Hibernate ORM tenant IDs are the same and must be extracted from the Vert.x `RoutingContext`, you can pass the tenant id from the OIDC Tenant Resolver to the Hibernate ORM Tenant Resolver as a `RoutingContext` attribute, for example:

[source,java]
----
public class CustomTenantResolver implements TenantResolver {

    @Override
    public String resolve(RoutingContext context) {
        String tenantId = extractTenantId(context);
        context.put("tenantId", tenantId);
        return tenantId;
    }
}
----
====

== Configuring the application

[source,properties]
----
# Default Tenant Configuration
quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus
quarkus.oidc.client-id=multi-tenant-client
quarkus.oidc.application-type=web-app

# Tenant A Configuration
quarkus.oidc.tenant-a.auth-server-url=http://localhost:8180/auth/realms/tenant-a
quarkus.oidc.tenant-a.client-id=multi-tenant-client
quarkus.oidc.tenant-a.application-type=web-app

# HTTP Security Configuration
quarkus.http.auth.permission.authenticated.paths=/*
quarkus.http.auth.permission.authenticated.policy=authenticated
----

The first configuration is the default tenant configuration that should be used when the tenant cannot be inferred from the request. This configuration uses a Keycloak instance to authenticate users.

The second configuration is the configuration that will be used when an incoming request is mapped to the tenant `tenant-a`.

Note that both configurations map to the same Keycloak server instance while using distinct `realms`.

You can define multiple tenants in your configuration file; just make sure they have a unique alias so that you can map them properly when resolving a tenant from your `TenantResolver` implementation.

=== Google OpenID Provider Configuration

In order to set up the `tenant-b` configuration to use the Google OpenID Provider, you need to create a project as described https://developers.google.com/identity/protocols/OpenIDConnect[here].
Once you create the project and have your project's `client_id` and `client_secret`, you can try to configure a tenant as follows:

[source,properties]
----
# Tenant configuration using Google OpenID Provider
quarkus.oidc.tenant-b.auth-server-url=https://accounts.google.com
quarkus.oidc.tenant-b.application-type=web-app
quarkus.oidc.tenant-b.client-id={GOOGLE_CLIENT_ID}
quarkus.oidc.tenant-b.credentials.secret={GOOGLE_CLIENT_SECRET}
quarkus.oidc.tenant-b.token.issuer=https://accounts.google.com
quarkus.oidc.tenant-b.authentication.scopes=email,profile,openid
----

== Starting and Configuring the Keycloak Server

To start a Keycloak server, use Docker and run the following command:

[source,bash,subs=attributes+]
----
docker run --name keycloak -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version}
----

You should be able to access your Keycloak server at http://localhost:8180/auth[localhost:8180/auth].

Log in as the `admin` user to access the Keycloak Administration Console. The username should be `admin` and the password `admin`.

Now, follow the steps below to import the realms for the two tenants:

* Import the {quickstarts-tree-url}/security-openid-connect-multi-tenancy-quickstart/config/default-tenant-realm.json[default-tenant-realm.json] to create the default realm
* Import the {quickstarts-tree-url}/security-openid-connect-multi-tenancy-quickstart/config/tenant-a-realm.json[tenant-a-realm.json] to create the realm for the tenant `tenant-a`.

For more details, see the Keycloak documentation about how to https://www.keycloak.org/docs/latest/server_admin/index.html#_create-realm[create a new realm].
== Running and Using the Application

=== Running in Developer Mode

To run the microservice in dev mode, use:

include::includes/devtools/dev.adoc[]

=== Running in JVM Mode

When you're done playing with dev mode, you can run it as a standard Java application.

First compile it:

include::includes/devtools/build.adoc[]

Then run it:

[source,bash]
----
java -jar target/quarkus-app/quarkus-run.jar
----

=== Running in Native Mode

This same demo can be compiled into native code: no modifications required.

This implies that you no longer need to install a JVM on your
production environment, as the runtime technology is included in
the produced binary, and optimized to run with minimal resource overhead.

Compilation will take a bit longer, so this step is disabled by default;
let's build again by enabling the native build:

include::includes/devtools/build-native.adoc[]

After getting a cup of coffee, you'll be able to run this binary directly:

[source,bash]
----
./target/security-openid-connect-multi-tenancy-quickstart-runner
----

== Testing the Application

To test the application, open your browser and access the following URL:

* http://localhost:8080/default[http://localhost:8080/default]

If everything is working as expected, you should be redirected to the Keycloak server to authenticate. Note that the requested path
defines a `default` tenant which we don't have mapped in the configuration file. In this case, the default configuration will be used.

In order to authenticate to the application, type the following credentials when at the Keycloak login page:

* Username: *alice*
* Password: *alice*

After clicking the `Login` button you should be redirected back to the application.

If you now try to access the application at the following URL:

* http://localhost:8080/tenant-a[http://localhost:8080/tenant-a]

You should be redirected again to the login page at Keycloak.
However, now you are going to authenticate using a different `realm`.

In both cases, if the user is successfully authenticated, the landing page will show the user's name and e-mail. Even though
user `alice` exists in both tenants, for the application they are distinct users belonging to different realms/tenants.

== Programmatically Resolving Tenants Configuration

If you need a more dynamic configuration for the different tenants you want to support and don't want to end up with multiple
entries in your configuration file, you can use the `io.quarkus.oidc.TenantConfigResolver`.

This interface allows you to dynamically create tenant configurations at runtime:

[source,java]
----
package io.quarkus.it.keycloak;

import javax.enterprise.context.ApplicationScoped;
import java.util.function.Supplier;

import io.smallrye.mutiny.Uni;
import io.quarkus.oidc.OidcTenantConfig;
import io.quarkus.oidc.TenantConfigResolver;
import io.vertx.ext.web.RoutingContext;

@ApplicationScoped
public class CustomTenantConfigResolver implements TenantConfigResolver {

    @Override
    public Uni<OidcTenantConfig> resolve(RoutingContext context, TenantConfigResolver.TenantConfigRequestContext requestContext) {
        String path = context.request().path();
        String[] parts = path.split("/");

        if (parts.length == 0) {
            // resolve to default tenant configuration
            return null;
        }

        if ("tenant-c".equals(parts[1])) {
            // Do 'return requestContext.runBlocking(createTenantConfig());'
            // if a blocking call is required to create a tenant config
            return Uni.createFrom().item(createTenantConfig());
        }

        // resolve to default tenant configuration
        return null;
    }

    private Supplier<OidcTenantConfig> createTenantConfig() {
        final OidcTenantConfig config = new OidcTenantConfig();

        config.setTenantId("tenant-c");
        config.setAuthServerUrl("http://localhost:8180/auth/realms/tenant-c");
        config.setClientId("multi-tenant-client");
        OidcTenantConfig.Credentials credentials = new
OidcTenantConfig.Credentials(); - - credentials.setSecret("my-secret"); - - config.setCredentials(credentials); - - // any other setting support by the quarkus-oidc extension - - return () -> config; - } -} ----- - -The `OidcTenantConfig` returned from this method is the same used to parse the `oidc` namespace configuration from the `application.properties`. You can populate it using any of the settings supported by the `quarkus-oidc` extension. - -== Tenant Resolution for OIDC 'web-app' applications - -Several options are available for selecting the tenant configuration which should be used to secure the current HTTP request for both `service` and `web-app` OIDC applications, such as: - -- Check URL paths, for example, a `tenant-service` configuration has to be used for the "/service" paths, while a `tenant-manage` configuration - for the "/management" paths -- Check HTTP headers, for example, with a URL path always being '/service', a header such as "Realm: service" or "Realm: management" can help selecting between the `tenant-service` and `tenant-manage` configurations -- Check URL query parameters - it can work similarly to the way the headers are used to select the tenant configuration - -All these options can be easily implemented with the custom `TenantResolver` and `TenantConfigResolver` implementations for the OIDC `service` applications. 
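The path- and header-based resolution options above boil down to a small pure function. Here is a plain-Java sketch you can unit test without a container; the class and method names are illustrative, not part of the Quarkus API:

```java
class TenantIdExtractor {

    // Treat the first path segment as the tenant id; null means "use the default tenant"
    static String fromPath(String path) {
        String[] parts = path.split("/");
        return (parts.length > 1 && !parts[1].isEmpty()) ? parts[1] : null;
    }

    // Prefer the path hint; fall back to a "Realm"-style header value, which may be null
    static String resolve(String path, String realmHeader) {
        String tenant = fromPath(path);
        return tenant != null ? tenant : realmHeader;
    }

    public static void main(String[] args) {
        System.out.println(resolve("/tenant-a/hello", null));
        System.out.println(resolve("/", "management"));
    }
}
```

A real `TenantResolver` would wrap this kind of logic around `RoutingContext` accessors such as `context.request().path()` shown in the example above.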
- -However, due to an HTTP redirect required to complete the code authentication flow for the OIDC `web-app` applications, a custom HTTP cookie may be needed to select the same tenant configuration before and after this redirect request because: - -- URL path may not be the same after the redirect request if a single redirect URL has been registered in the OIDC Provider - the original request path can be restored but only after the tenant configuration is resolved -- HTTP headers used during the original request are not available after the redirect -- Custom URL query parameters are restored after the redirect but only after the tenant configuration is resolved - -One option to ensure the information for resolving the tenant configurations for `web-app` applications is available before and after the redirect is to use a cookie, for example: - -[source,java] ---- -package org.acme.quickstart.oidc; - -import java.util.List; - -import javax.enterprise.context.ApplicationScoped; - -import io.quarkus.oidc.TenantResolver; -import io.vertx.core.http.Cookie; -import io.vertx.ext.web.RoutingContext; - -@ApplicationScoped -public class CustomTenantResolver implements TenantResolver { - - @Override - public String resolve(RoutingContext context) { - List<String> tenantIdQuery = context.queryParam("tenantId"); - if (!tenantIdQuery.isEmpty()) { - String tenantId = tenantIdQuery.get(0); - context.addCookie(Cookie.cookie("tenant", tenantId)); - return tenantId; - } else if (context.cookieMap().containsKey("tenant")) { - return context.getCookie("tenant").getValue(); - } - - return null; - } -} ---- - -[[disable-tenant]] -== Disabling Tenant Configurations - -Custom `TenantResolver` and `TenantConfigResolver` implementations may return `null` if no tenant can be inferred from the current request and a fallback to the default tenant configuration is required. - -If it is expected that the custom resolvers will always infer a tenant then the default tenant configuration is not needed. 
One can disable it with the `quarkus.oidc.tenant-enabled=false` setting. - -Note that tenant-specific configurations can also be disabled, for example: `quarkus.oidc.tenant-a.tenant-enabled=false`. - -== Configuration Reference - -include::{generated-dir}/config/quarkus-oidc.adoc[opts=optional] - -== References - -* https://www.keycloak.org/documentation.html[Keycloak Documentation] -* https://openid.net/connect/[OpenID Connect] -* https://tools.ietf.org/html/rfc7519[JSON Web Token] -* https://developers.google.com/identity/protocols/OpenIDConnect[Google OpenID Connect] -* xref:security.adoc[Quarkus Security] diff --git a/_versions/2.7/guides/security-openid-connect-web-authentication.adoc b/_versions/2.7/guides/security-openid-connect-web-authentication.adoc deleted file mode 100644 index 0492d0ece22..00000000000 --- a/_versions/2.7/guides/security-openid-connect-web-authentication.adoc +++ /dev/null @@ -1,1312 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using OpenID Connect (OIDC) to Protect Web Applications using Authorization Code Flow - -include::./attributes.adoc[] -:toc: - -This guide demonstrates how to use the Quarkus OpenID Connect (OIDC) extension to protect your Quarkus HTTP endpoints using the OpenID Connect Authorization Code Flow supported by OpenID Connect compliant Authorization Servers such as https://www.keycloak.org[Keycloak]. - -The extension allows you to easily authenticate the users of your web application by redirecting them to the OpenID Connect Provider (e.g.: Keycloak) to log in and, once the authentication is complete, return them back with a code confirming the successful authentication. The extension will request ID and access tokens from the OpenID Connect Provider using an authorization code grant and verify these tokens in order to authorize access to the application. 
- -Please read the xref:security-openid-connect.adoc[Using OpenID Connect to Protect Service Applications] guide if you need to protect your applications using Bearer Token Authorization. - -Please read the xref:security-openid-connect-multitenancy.adoc[Using OpenID Connect Multi-Tenancy] guide to learn how to support multiple tenants. - -== Quickstart - -=== Prerequisites - -:prerequisites-docker: -include::includes/devtools/prerequisites.adoc[] - -=== Architecture - -In this example, we build a very simple web application with a single page: - -* `/index.html` - -This page is protected and can only be accessed by authenticated users. - -=== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `security-openid-connect-web-authentication-quickstart` {quickstarts-tree-url}/security-openid-connect-web-authentication-quickstart[directory]. - -=== Creating the Maven Project - -First, we need a new project. 
Create a new project with the following command: - -:create-app-artifact-id: security-openid-connect-web-authentication-quickstart -:create-app-extensions: resteasy,oidc -include::includes/devtools/create-app.adoc[] - -If you already have your Quarkus project configured, you can add the `oidc` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: oidc -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-oidc</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.quarkus:quarkus-oidc") ---- - -=== Writing the application - -Let's write a simple JAX-RS resource which has all the tokens returned in the authorization code grant response injected: - -[source,java] ---- -package org.acme.security.openid.connect.web.authentication; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -import org.eclipse.microprofile.jwt.JsonWebToken; - -import io.quarkus.oidc.IdToken; -import io.quarkus.oidc.RefreshToken; - -@Path("/tokens") -public class TokenResource { - - /** - * Injection point for the ID Token issued by the OpenID Connect Provider - */ - @Inject - @IdToken - JsonWebToken idToken; - - /** - * Injection point for the Access Token issued by the OpenID Connect Provider - */ - @Inject - JsonWebToken accessToken; - - /** - * Injection point for the Refresh Token issued by the OpenID Connect Provider - */ - @Inject - RefreshToken refreshToken; - - /** - * Returns the tokens available to the application. This endpoint exists only for demonstration purposes, you should not - * expose these tokens in a real application. 
- * - * @return a map containing the tokens available to the application - */ - @GET - public String getTokens() { - StringBuilder response = new StringBuilder().append("<html>") - .append("<body>") - .append("<ul>"); - - Object userName = this.idToken.getClaim("preferred_username"); - - if (userName != null) { - response.append("<li>username: ").append(userName.toString()).append("</li>"); - } - - Object scopes = this.accessToken.getClaim("scope"); - - if (scopes != null) { - response.append("<li>scopes: ").append(scopes.toString()).append("</li>"); - } - - response.append("<li>refresh_token: ").append(refreshToken.getToken() != null).append("</li>"); - - return response.append("</ul>").append("</body>").append("</html>").toString(); - } -} ---- - -This endpoint has the ID, access and refresh tokens injected. It returns a `preferred_username` claim from the ID token, a `scope` claim from the access token and also a refresh token availability status. - -Note that you do not have to inject the tokens - it is only required if the endpoint needs to use the ID token to interact with the currently authenticated user or use the access token to access a downstream service on behalf of this user. - -Please see the <<access_id_and_access_tokens>> section below for more information. - -=== Configuring the application - -The OpenID Connect extension allows you to define the configuration using the `application.properties` file which should be located in the `src/main/resources` directory. - -[source,properties] ---- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus -quarkus.oidc.client-id=frontend -quarkus.oidc.credentials.secret=secret -quarkus.oidc.application-type=web-app -quarkus.http.auth.permission.authenticated.paths=/* -quarkus.http.auth.permission.authenticated.policy=authenticated ---- - -This is the simplest configuration you can have when enabling authentication to your application. - -The `quarkus.oidc.client-id` property references the `client_id` issued by the OpenID Connect Provider and the `quarkus.oidc.credentials.secret` property sets the client secret. - -The `quarkus.oidc.application-type` property is set to `web-app` in order to tell Quarkus that you want to enable the OpenID Connect Authorization Code Flow, so that your users are redirected to the OpenID Connect Provider to authenticate. - -Finally, the `quarkus.http.auth.permission.authenticated` permission is set to tell Quarkus about the paths you want to protect. In this case, -all paths are being protected by a policy that ensures that only `authenticated` users are allowed to access them. For more details check the xref:security-authorization.adoc[Security Authorization Guide]. 
- -=== Starting and Configuring the Keycloak Server - -To start a Keycloak Server you can use Docker and just run the following command: - -[source,bash,subs=attributes+] ---- -docker run --name keycloak -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak_version} ---- - -You should be able to access your Keycloak Server at http://localhost:8180/auth[localhost:8180/auth]. - -Log in as the `admin` user to access the Keycloak Administration Console. The username should be `admin` and the password `admin`. - -Import the {quickstarts-tree-url}/security-openid-connect-web-authentication-quickstart/config/quarkus-realm.json[realm configuration file] to create a new realm. For more details, see the Keycloak documentation about how to https://www.keycloak.org/docs/latest/server_admin/index.html#_create-realm[create a new realm]. - -=== Running the Application in Dev and JVM modes - -To run the application in dev mode, use: - -include::includes/devtools/dev.adoc[] - -When you're done playing with dev mode, you can run it as a standard Java application. - -First compile it: - -include::includes/devtools/build.adoc[] - -Then run it: - -[source,bash] ---- -java -jar target/quarkus-app/quarkus-run.jar ---- - -=== Running the Application in Native Mode - -This same demo can be compiled into native code: no modifications required. - -This implies that you no longer need to install a JVM in your -production environment, as the runtime technology is included in -the produced binary, and optimized to run with minimal resource overhead. 
- -Compilation will take a bit longer, so this step is disabled by default; -let's build again by enabling the native build: - -include::includes/devtools/build-native.adoc[] - -After getting a cup of coffee, you'll be able to run this binary directly: - -[source,bash] ---- -./target/security-openid-connect-web-authentication-quickstart-runner ---- - -=== Testing the Application - -To test the application, you should open your browser and access the following URL: - -* http://localhost:8080[http://localhost:8080] - -If everything is working as expected, you should be redirected to the Keycloak server to authenticate. - -In order to authenticate to the application, type the following credentials at the Keycloak login page: - -* Username: *alice* -* Password: *alice* - -After clicking the `Login` button you should be redirected back to the application. - -Please also see the <> section below about writing the integration tests which depend on `Dev Services for Keycloak`. - -== Reference Guide - -[[access_id_and_access_tokens]] -=== Accessing ID and Access Tokens - -The OIDC Code Authentication Mechanism acquires three tokens during the authorization code flow: https://openid.net/specs/openid-connect-core-1_0.html#IDToken[IDToken], Access Token and Refresh Token. - -The ID Token is always a JWT token and represents the user authentication, with its JWT claims describing the user. -One can access ID Token claims by injecting `JsonWebToken` with an `IdToken` qualifier: - -[source, java] ---- -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -import org.eclipse.microprofile.jwt.JsonWebToken; -import io.quarkus.oidc.IdToken; -import io.quarkus.security.Authenticated; - -@Path("/web-app") -@Authenticated -public class ProtectedResource { - - @Inject - @IdToken - JsonWebToken idToken; - - @GET - public String getUserName() { - return idToken.getName(); - } -} ---- - -The Access Token is usually used by the OIDC `web-app` application to access other endpoints on behalf of the currently logged-in user. 
The raw access token can be accessed as follows: - -[source, java] ---- -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -import org.eclipse.microprofile.jwt.JsonWebToken; -import io.quarkus.oidc.AccessTokenCredential; -import io.quarkus.security.Authenticated; - -@Path("/web-app") -@Authenticated -public class ProtectedResource { - - @Inject - JsonWebToken accessToken; - - // or - // @Inject - // AccessTokenCredential accessTokenCredential; - - @GET - public String getReservationOnBehalfOfUser() { - String rawAccessToken = accessToken.getRawToken(); - //or - //String rawAccessToken = accessTokenCredential.getToken(); - - // Use the raw access token to access a remote endpoint - return getReservationFromRemoteEndpoint(rawAccessToken); - } -} ---- - -Note that `AccessTokenCredential` will have to be used if the Access Token issued to the Quarkus `web-app` application is opaque (binary) and can not be parsed to `JsonWebToken`. - -Injection of the `JsonWebToken` and `AccessTokenCredential` is supported in both `@RequestScoped` and `@ApplicationScoped` contexts. - -The Refresh Token is only used to refresh the current ID and access tokens as part of the <<session-management>> process. - -[[user-info]] -=== User Info - -If the IdToken does not provide enough information about the currently authenticated user then you can set a `quarkus.oidc.authentication.user-info-required=true` property for a https://openid.net/specs/openid-connect-core-1_0.html#UserInfo[UserInfo] JSON object from the OIDC userinfo endpoint to be requested. - -A request will be sent to the OpenID Provider UserInfo endpoint using the access token returned with the authorization code grant response and an `io.quarkus.oidc.UserInfo` (a simple `javax.json.JsonObject` wrapper) object will be created. `io.quarkus.oidc.UserInfo` can be either injected or accessed as a SecurityIdentity `userinfo` attribute. 
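Enabling the UserInfo request is a single switch; a minimal `application.properties` fragment, building on the quickstart configuration above, would look like:

```properties
# Ask Quarkus to call the provider's UserInfo endpoint after the code flow
# completes; io.quarkus.oidc.UserInfo then becomes injectable in your beans
quarkus.oidc.authentication.user-info-required=true
```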
- -[[config-metadata]] -=== Configuration Metadata - -The current tenant's discovered link:https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata[OpenID Connect Configuration Metadata] is represented by `io.quarkus.oidc.OidcConfigurationMetadata` and can be either injected or accessed as a `SecurityIdentity` `configuration-metadata` attribute. - -The default tenant's `OidcConfigurationMetadata` is injected if the endpoint is public. - -[[token-claims-roles]] -=== Token Claims And SecurityIdentity Roles - -The way the roles are mapped to the SecurityIdentity roles from the verified tokens is identical to how it is done for the xref:security-openid-connect.adoc#token-claims-and-securityidentity-roles[bearer tokens], with the only difference being that the https://openid.net/specs/openid-connect-core-1_0.html#IDToken[ID Token] is used as a source of the roles by default. - -Note that if you use Keycloak, you should set a MicroProfile JWT client scope for the ID token to contain a `groups` claim; please see the https://www.keycloak.org/docs/latest/server_admin/#protocol[Keycloak Server Administration Guide] for more information. - -If only the access token contains the roles and this access token is not meant to be propagated to the downstream endpoints then set `quarkus.oidc.roles.source=accesstoken`. - -If UserInfo is the source of the roles then set `quarkus.oidc.authentication.user-info-required=true` and `quarkus.oidc.roles.source=userinfo`, and if needed, `quarkus.oidc.roles.role-claim-path`. - -Additionally, a custom `SecurityIdentityAugmentor` can also be used to add the roles as documented xref:security.adoc#security-identity-customization[here]. - -[[token-verification-introspection]] -=== Token Verification And Introspection - -Please see xref:security-openid-connect.adoc#token-verification-introspection[Token Verification And Introspection] for details about how the tokens are verified and introspected. 
- -Note that in case of `web-app` applications only the `IdToken` is verified by default, since the access token is not used by default to access the current Quarkus `web-app` endpoint; instead it is meant to be propagated to the services expecting this access token, for example, to the OpenID Connect Provider's UserInfo endpoint. However, if you expect the access token to contain the roles required to access the current Quarkus endpoint (`quarkus.oidc.roles.source=accesstoken`) then it will also be verified. - -[[token-introspection-userinfo-cache]] -=== Token Introspection and UserInfo Cache - -Code flow access tokens are not introspected unless they are expected to be the source of roles, but they will be used to get `UserInfo`. So there will be one or two remote calls with the code flow access token, if the token introspection and/or `UserInfo` are required. - -Please see xref:security-openid-connect.adoc#token-introspection-userinfo-cache[Token Introspection and UserInfo cache] for more information about using a default token cache or registering a custom cache implementation. - -[[jwt-claim-verification]] -=== JSON Web Token Claim Verification - -Please see the xref:security-openid-connect.adoc#jwt-claim-verification[JSON Web Token Claim verification] section about the claim verification, including the `iss` (issuer) claim. -It applies to ID tokens but also to access tokens in a JWT format if the `web-app` application has requested the access token verification. - -=== Redirection - -When the user is redirected to the OpenID Connect Provider to authenticate, the redirect URL includes a `redirect_uri` query parameter which indicates to the Provider where the user has to be redirected to once the authentication has been completed. - -Quarkus will set this parameter to the current request URL by default. 
For example, if the user is trying to access a Quarkus service endpoint at `http://localhost:8080/service/1` then the `redirect_uri` parameter will be set to `http://localhost:8080/service/1`. Similarly, if the request URL is `http://localhost:8080/service/2` then the `redirect_uri` parameter will be set to `http://localhost:8080/service/2`, etc. - -OpenID Connect Providers may be configured to require the `redirect_uri` parameter to have the same value (e.g. `http://localhost:8080/service/callback`) for all the redirect URLs. -In such cases a `quarkus.oidc.authentication.redirect-path` property has to be set, for example, `quarkus.oidc.authentication.redirect-path=/service/callback`, and Quarkus will set the `redirect_uri` parameter to an absolute URL such as `http://localhost:8080/service/callback` which will be the same regardless of the current request URL. - -If `quarkus.oidc.authentication.redirect-path` is set but the original request URL has to be restored after the user has been redirected back to a callback URL such as `http://localhost:8080/service/callback` then a `quarkus.oidc.authentication.restore-path-after-redirect` property has to be set to `true`, which will restore the request URL such as `http://localhost:8080/service/1`, etc. - -[[oidc-cookies]] -=== Dealing with Cookies - -The OIDC adapter uses cookies to keep the session, code flow and post-logout state. - -The `quarkus.oidc.authentication.cookie-path` property is used to ensure the cookies are visible, especially when you access the protected resources with overlapping or different roots, for example: - -* `/index.html` and `/web-app/service` -* `/web-app/service1` and `/web-app/service2` -* `/web-app1/service` and `/web-app2/service` - -`quarkus.oidc.authentication.cookie-path` is set to `/` by default but can be narrowed to a more specific root path such as `/web-app`. 
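The redirect and cookie settings described above typically end up in `application.properties` side by side; the fragment below is an illustrative combination, not taken from the quickstart:

```properties
# Always redirect back to a single registered callback, then restore the original path
quarkus.oidc.authentication.redirect-path=/service/callback
quarkus.oidc.authentication.restore-path-after-redirect=true

# Narrow the session cookie path when all protected resources live under /web-app
quarkus.oidc.authentication.cookie-path=/web-app
```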
- -You can also set a `quarkus.oidc.authentication.cookie-path-header` property if the cookie path needs to be set dynamically. -For example, setting `quarkus.oidc.authentication.cookie-path-header=X-Forwarded-Prefix` means that the value of the HTTP `X-Forwarded-Prefix` header will be used to set the cookie path. - -If `quarkus.oidc.authentication.cookie-path-header` is set but no configured HTTP header is available in the current request then `quarkus.oidc.authentication.cookie-path` will be checked. - -If your application is deployed across multiple domains, make sure to set a `quarkus.oidc.authentication.cookie-domain` property for the session cookie to be visible to all protected Quarkus services, for example, if you have 2 services deployed at: - -* https://whatever.wherever.company.net/ -* https://another.address.company.net/ - -then the `quarkus.oidc.authentication.cookie-domain` property must be set to `company.net`. - -=== Logout - -By default, the logout is based on the expiration time of the ID Token issued by the OpenID Connect Provider. When the ID Token expires, the current user session at the Quarkus endpoint is invalidated and the user is redirected to the OpenID Connect Provider again to authenticate. If the session at the OpenID Connect Provider is still active, users are automatically re-authenticated without having to provide their credentials again. - -The current user session may be automatically extended by enabling the `quarkus.oidc.token.refresh-expired` property. If it is set to `true` then, when the current ID Token expires, a Refresh Token Grant will be used to refresh the ID Token as well as the Access and Refresh Tokens. - -[[user-initiated-logout]] -==== User-Initiated Logout - -Users can request a logout by sending a request to the Quarkus endpoint logout path set with a `quarkus.oidc.logout.path` property. 
- -For example, if the endpoint address is `https://application.com/webapp` and the `quarkus.oidc.logout.path` is set to "/logout" then the logout request has to be sent to `https://application.com/webapp/logout`. - -This logout request will start an https://openid.net/specs/openid-connect-session-1_0.html#RPLogout[RP-Initiated Logout] and the user will be redirected to the OpenID Connect Provider to log out, where the user may be asked to confirm that the logout is indeed intended. - -The user will be returned to the endpoint post-logout page once the logout has been completed if the `quarkus.oidc.logout.post-logout-path` property is set. For example, if the endpoint address is `https://application.com/webapp` and the `quarkus.oidc.logout.post-logout-path` is set to "/signin" then the user will be returned to `https://application.com/webapp/signin` (note this URI must be registered as a valid `post_logout_redirect_uri` in the OpenID Connect Provider). - -If the `quarkus.oidc.logout.post-logout-path` is set then a `q_post_logout` cookie will be created and a matching `state` query parameter will be added to the logout redirect URI, and the OpenID Connect Provider will return this `state` once the logout has been completed. It is recommended for Quarkus `web-app` applications to check that the `state` query parameter matches the value of the `q_post_logout` cookie, which can be done, for example, in a JAX-RS filter. - -Note that the cookie name will vary when using xref:security-openid-connect-multitenancy.adoc[OpenID Connect Multi-Tenancy]. For example, it will be named `q_post_logout_tenant_1` for a tenant with a `tenant_1` id, etc. 
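The recommended state check is a plain string comparison plus a tenant-aware cookie name. This standalone sketch shows the logic a JAX-RS filter could apply; the class and method names are hypothetical, not part of the Quarkus API:

```java
class PostLogoutStateCheck {

    // The post-logout cookie is named q_post_logout for the default tenant and
    // q_post_logout_<tenant-id> for a named tenant, as described above
    static String postLogoutCookieName(String tenantId) {
        return tenantId == null ? "q_post_logout" : "q_post_logout_" + tenantId;
    }

    // The provider echoes back the 'state' query parameter; accept the redirect
    // only when it matches the cookie value saved before the logout redirect
    static boolean stateMatches(String stateQueryParam, String cookieValue) {
        return stateQueryParam != null && stateQueryParam.equals(cookieValue);
    }

    public static void main(String[] args) {
        System.out.println(postLogoutCookieName("tenant_1"));
        System.out.println(stateMatches("abc", "abc"));
    }
}
```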
- -Here is an example of how to configure an RP-initiated logout flow: - -[source,properties] ---- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus -quarkus.oidc.client-id=frontend -quarkus.oidc.credentials.secret=secret -quarkus.oidc.application-type=web-app - -quarkus.oidc.logout.path=/logout -quarkus.oidc.logout.post-logout-path=/welcome.html - -# Only the authenticated users can initiate a logout: -quarkus.http.auth.permission.authenticated.paths=/logout -quarkus.http.auth.permission.authenticated.policy=authenticated - -# Logged out users should be returned to the /welcome.html site which will offer an option to re-login: -quarkus.http.auth.permission.public.paths=/welcome.html -quarkus.http.auth.permission.public.policy=permit ---- - -You may also need to set `quarkus.oidc.authentication.cookie-path` to a path value common to all of the application resources, which is `/` in this example. -See <<oidc-cookies>> for more information. - -Note that some OpenID Connect providers do not support the https://openid.net/specs/openid-connect-session-1_0.html#RPLogout[RP-Initiated Logout] specification (possibly because it is still technically a draft) and do not return an OpenID Connect well-known `end_session_endpoint` metadata property. However, this should not be a problem, since these providers' specific logout mechanisms may only differ in how the logout URL query parameters are named. - -According to the https://openid.net/specs/openid-connect-session-1_0.html#RPLogout[RP-Initiated Logout] specification, the `quarkus.oidc.logout.post-logout-path` property is represented as a `post_logout_redirect_uri` query parameter which will not be recognized by the providers which do not support this specification. - -You can use `quarkus.oidc.logout.post-logout-uri-param` to work around this issue. You can also request more logout query parameters added with `quarkus.oidc.logout.extra-params`. 
For example, here is how you can support a logout with `Auth0`: - -[source,properties] ---- -quarkus.oidc.auth-server-url=https://dev-xxx.us.auth0.com -quarkus.oidc.client-id=redacted -quarkus.oidc.credentials.secret=redacted -quarkus.oidc.application-type=web-app - -quarkus.oidc.logout.path=/logout -quarkus.oidc.logout.post-logout-path=/welcome.html - -# Auth0 does not return the `end_session_endpoint` metadata property, configure it instead -quarkus.oidc.end-session-path=v2/logout -# Auth0 will not recognize the 'post_logout_redirect_uri' query parameter so make sure it is named as 'returnTo' -quarkus.oidc.logout.post-logout-uri-param=returnTo - -# Set more properties if needed. -# For example, if 'client_id' is provided then a valid logout URI should be set as an Auth0 Application property, without it - as an Auth0 Tenant property. -quarkus.oidc.logout.extra-params.client_id=${quarkus.oidc.client-id} ---- - -[[local-logout]] -==== Local Logout - -If you work with a social provider such as Google and are concerned that the users can be logged out from all their Google applications with the <<user-initiated-logout>> which redirects the users to the provider's logout endpoint, then you can support a local logout with the help of the <<oidc-session>> which only clears the local session cookie, for example: - -[source,java] ---- -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -import io.quarkus.oidc.OidcSession; - -@Path("/service") -public class ServiceResource { - - @Inject - OidcSession oidcSession; - - @GET - @Path("logout") - public String logout() { - oidcSession.logout().await().indefinitely(); - return "You are logged out"; - } -} ---- - - -[[session-management]] -=== Session Management - -If you have a xref:security-openid-connect.adoc#single-page-applications[Single Page Application for Service Applications] where your OpenID Connect Provider script such as `keycloak.js` is managing an authorization code flow then that script will also control the SPA authentication session lifespan. - -If you work with a Quarkus OIDC `web-app` application then it is the Quarkus OIDC Code Authentication mechanism that manages the user session lifespan. - -The session age is calculated by adding the lifespan value of the current IDToken and the values of the `quarkus.oidc.authentication.session-age-extension` and `quarkus.oidc.token.lifespan-grace` properties. Of the last two properties only `quarkus.oidc.authentication.session-age-extension` should be used to significantly extend the session lifespan if required, since `quarkus.oidc.token.lifespan-grace` is only meant for taking some small clock skews into consideration. - -When the currently authenticated user returns to the protected Quarkus endpoint and the ID token associated with the session cookie has expired then, by default, the user will be auto-redirected to the OIDC Authorization endpoint to re-authenticate. Most likely the OIDC provider will challenge the user again, though not necessarily if the session between the user and this OIDC provider is still active, which may happen if it is configured to last longer than the ID token. - -If `quarkus.oidc.token.refresh-expired` is set to `true` then the expired ID token (as well as the access token) will be refreshed using the refresh token returned with the authorization code grant response. This refresh token may also be recycled (refreshed) itself as part of this process. As a result a new session cookie will be created and the session will be extended. - -Note, `quarkus.oidc.authentication.session-age-extension` can be important when dealing with expired ID tokens, when the user is not very active. 
In such cases, if the ID token expires, then the session cookie may not be returned back to the Quarkus endpoint during the next user request and Quarkus will assume it is the first authentication request. Therefore using `quarkus.oidc.authentication.session-age-extension` is important if you need to have even the expired ID tokens refreshed. - -You can also complement refreshing the expired ID tokens by proactively refreshing the valid ID tokens which are about to expire within the `quarkus.oidc.token.refresh-token-time-skew` value. If, during the current user request, it is calculated that the current ID token will expire within this `quarkus.oidc.token.refresh-token-time-skew` then it will be refreshed and a new session cookie will be created. This property should be set to a value which is less than the ID token lifespan; the closer it is to this lifespan value the more often the ID token will be refreshed. - -You can have this process further optimized by having a simple JavaScript function periodically emulating the user activity by pinging your Quarkus endpoint, thus minimizing the window during which the user may have to be re-authenticated. - -Note this user session cannot be extended forever - the returning user with the expired ID token will have to re-authenticate at the OIDC provider endpoint once the refresh token has expired. - -[[oidc-session]] -==== OidcSession - -`io.quarkus.oidc.OidcSession` is a wrapper around the current `IdToken`. It can help to perform a <<local-logout,local logout>>, retrieve the current session's tenant identifier and check when the session will expire. More useful methods will be added to it over time. - -==== TokenStateManager - -The OIDC `CodeAuthenticationMechanism` uses the default `io.quarkus.oidc.TokenStateManager` interface implementation to keep the ID, access and refresh tokens returned in the authorization code or refresh grant responses in a session cookie. It makes Quarkus OIDC endpoints completely stateless. 
- -Note that some endpoints do not require the access token. An access token is only required if the endpoint needs to retrieve `UserInfo` or access the downstream service with this access token or use the roles associated with the access token (the roles in the ID token are checked by default). In such cases you can set either `quarkus.oidc.token-state-manager.strategy=id-refresh-token` (keep ID and refresh tokens only) or `quarkus.oidc.token-state-manager.strategy=id-token` (keep ID token only). - -If the ID, access and refresh tokens are JWT tokens then combining all of them (if the strategy is the default `keep-all-tokens`) or only ID and refresh tokens (if the strategy is `id-refresh-token`) may produce a session cookie value larger than 4KB and the browsers may not be able to keep this cookie. -In such cases, you can use `quarkus.oidc.token-state-manager.split-tokens=true` to have a unique session token for each of these tokens. - -Register your own `io.quarkus.oidc.TokenStateManager` implementation as an `@ApplicationScoped` CDI bean if you need to customize the way the tokens are associated with the session cookie. For example, you may want to keep the tokens in a database and have only a database pointer stored in a session cookie. Note though that it may present some challenges in making the tokens available across multiple microservices nodes.
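For example, an endpoint that only needs the ID token and wants smaller cookies could combine the two settings described above (a sketch; adjust the strategy to your endpoint's needs):

[source,properties]
----
# Keep only the ID and refresh tokens; drop the access token from the session cookie
quarkus.oidc.token-state-manager.strategy=id-refresh-token

# Store each remaining token in its own cookie to stay under the ~4KB browser cookie limit
quarkus.oidc.token-state-manager.split-tokens=true
----

These settings only change how the default `TokenStateManager` stores the tokens; no custom implementation is required for them.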
- -Here is a simple example: - -[source, java] ----- -package io.quarkus.oidc.test; - -import javax.enterprise.context.ApplicationScoped; -import javax.inject.Inject; - -import io.quarkus.arc.AlternativePriority; -import io.quarkus.oidc.AuthorizationCodeTokens; -import io.quarkus.oidc.OidcTenantConfig; -import io.quarkus.oidc.TokenStateManager; -import io.quarkus.oidc.runtime.DefaultTokenStateManager; -import io.smallrye.mutiny.Uni; -import io.vertx.ext.web.RoutingContext; - -@ApplicationScoped -@AlternativePriority(1) -public class CustomTokenStateManager implements TokenStateManager { - - @Inject - DefaultTokenStateManager tokenStateManager; - - @Override - public Uni<String> createTokenState(RoutingContext routingContext, OidcTenantConfig oidcConfig, - AuthorizationCodeTokens sessionContent, TokenStateManager.CreateTokenStateRequestContext requestContext) { - return tokenStateManager.createTokenState(routingContext, oidcConfig, sessionContent, requestContext) - .map(t -> (t + "|custom")); - } - - @Override - public Uni<AuthorizationCodeTokens> getTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, - String tokenState, TokenStateManager.GetTokensRequestContext requestContext) { - if (!tokenState.endsWith("|custom")) { - throw new IllegalStateException(); - } - String defaultState = tokenState.substring(0, tokenState.length() - 7); - return tokenStateManager.getTokens(routingContext, oidcConfig, defaultState, requestContext); - } - - @Override - public Uni<Void> deleteTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, - TokenStateManager.DeleteTokensRequestContext requestContext) { - if (!tokenState.endsWith("|custom")) { - throw new IllegalStateException(); - } - String defaultState = tokenState.substring(0, tokenState.length() - 7); - return tokenStateManager.deleteTokens(routingContext, oidcConfig, defaultState, requestContext); - } -} ----- - -=== Listening to important authentication events - -One can register an `@ApplicationScoped` bean which will
observe important OIDC authentication events. The listener will be updated when a user has logged in for the first time or re-authenticated, as well as when the session has been refreshed. More events may be reported in the future. For example: - -[source, java] ----- -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.event.Observes; - -import io.quarkus.oidc.IdTokenCredential; -import io.quarkus.oidc.SecurityEvent; -import io.quarkus.security.identity.AuthenticationRequestContext; -import io.vertx.ext.web.RoutingContext; - -@ApplicationScoped -public class SecurityEventListener { - - public void event(@Observes SecurityEvent event) { - String tenantId = event.getSecurityIdentity().getAttribute("tenant-id"); - RoutingContext vertxContext = event.getSecurityIdentity().getCredential(IdTokenCredential.class).getRoutingContext(); - vertxContext.put("listener-message", String.format("event:%s,tenantId:%s", event.getEventType().name(), tenantId)); - } -} ----- - -=== Single Page Applications - -Please check if implementing SPAs the way it is suggested in the xref:security-openid-connect.adoc#single-page-applications[Single Page Applications for Service Applications] section can meet your requirements. - -If you prefer to use a SPA and JavaScript APIs such as `Fetch` or `XMLHttpRequest` (XHR) with Quarkus web applications, please be aware that OpenID Connect Providers may not support CORS for Authorization endpoints where the users are authenticated after a redirect from Quarkus. This will lead to authentication failures if the Quarkus application and the OpenID Connect Provider are hosted on different HTTP domains/ports. - -In such cases, set the `quarkus.oidc.authentication.java-script-auto-redirect` property to `false` which will instruct Quarkus to return a `499` status code and `WWW-Authenticate` header with the `OIDC` value.
The browser script also needs to be updated to set the `X-Requested-With` header with the `JavaScript` value and reload the last requested page in case of `499`, for example: - -[source,javascript] ----- -async function callQuarkusService() { -    const response = await fetch("https://localhost:443/serviceCall", { -        headers: { "X-Requested-With": "JavaScript" } -    }); -    if (response.status === 499) { -        window.location.assign("https://localhost:443/serviceCall"); -    } -} ----- - -=== Cross Origin Resource Sharing - -If you plan to consume this application from a Single Page Application running on a different domain, you will need to configure CORS (Cross-Origin Resource Sharing). Please read the xref:http-reference.adoc#cors-filter[HTTP CORS documentation] for more details. - -=== Integration with GitHub and other OAuth2 providers - -Some well-known providers such as `GitHub` or `LinkedIn` are not `OpenID Connect` but `OAuth2` providers which support the `authorization code flow`, for example, link:https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps[GitHub OAuth2] and link:https://docs.microsoft.com/en-us/linkedin/shared/authentication/authorization-code-flow[LinkedIn OAuth2]. - -The main difference between `OpenID Connect` and `OAuth2` providers is that `OpenID Connect` providers, by building on top of `OAuth2`, return an `ID Token` representing a user authentication, in addition to the standard authorization code flow `access` and `refresh` tokens returned by `OAuth2` providers. - -`OAuth2` providers such as `GitHub` do not return an `IdToken`; the fact of the user authentication is implicit and is indirectly represented by the `access` token which represents an authenticated user authorizing the current Quarkus `web-app` application to access some data on behalf of the authenticated user.
- -For example, when working with `GitHub`, the Quarkus endpoint can acquire an `access` token which will allow it to request a `GitHub` profile of the current user. -In fact this is exactly how a standard `OpenID Connect` `UserInfo` acquisition also works - by authenticating into your `OpenID Connect` provider you also give permission to the Quarkus application to acquire your <> on your behalf - and it also shows what is meant by `OpenID Connect` being built on top of `OAuth2`. - -In order to support the integration with such `OAuth2` servers, `quarkus-oidc` needs to be configured to allow the authorization code flow responses without `IdToken`: `quarkus.oidc.authentication.id-token-required=false`. - -It is required because `quarkus-oidc` expects that not only `access` and `refresh` tokens but also `IdToken` will be returned once the authorization code flow completes. - -Note, even though you will configure the extension to support the authorization code flows without `IdToken`, an internal `IdToken` will be generated to support the way `quarkus-oidc` operates where an `IdToken` is used to support the authentication session and to avoid redirecting the user to the provider such as `GitHub` on every request. In this case the session lifespan is set to 5 minutes which can be extended further as described in the <> section. - -The next step is to ensure that the returned access token can be useful to the current Quarkus endpoint. -If the `OAuth2` provider supports the introspection endpoint then you may be able to use this access token as a source of roles with `quarkus.oidc.roles.source=accesstoken`. If no introspection endpoint is available then at the very least it should be possible to request <> from this provider with `quarkus.oidc.authentication.user-info-required` - this is the case with `GitHub`. - -Configuring the endpoint to request <> is the only way `quarkus-oidc` can be integrated with providers such as `GitHub`.
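Putting the pieces above together, a minimal configuration sketch for such an OAuth2-only provider (values shown are placeholders) could be:

[source,properties]
----
# The provider returns no IdToken, so do not require one
quarkus.oidc.authentication.id-token-required=false
# Request UserInfo to represent the authenticated user
quarkus.oidc.authentication.user-info-required=true
# If the provider offers an introspection endpoint, roles may instead be sourced from the access token:
# quarkus.oidc.roles.source=accesstoken
----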
- -Note that requiring <> involves making a remote call on every request - therefore you may want to consider caching `UserInfo` data. - -Also, OAuth2 servers may not support a well-known configuration endpoint, in which case the discovery has to be disabled and the authorization, token, introspection and/or UserInfo endpoint paths have to be configured manually. - -Here is how you can integrate `quarkus-oidc` with `GitHub` after you have link:https://docs.github.com/en/developers/apps/building-oauth-apps/creating-an-oauth-app[created a GitHub OAuth application]. Configure your Quarkus endpoint like this: - -[source,properties] ----- -quarkus.oidc.provider=github -quarkus.oidc.client-id=github_app_clientid -quarkus.oidc.credentials.secret=github_app_clientsecret - -# user:email scope is requested by default, use 'quarkus.oidc.authentication.scopes' to request different scopes such as `read:user`. -# See https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps for more information.
- - -# Consider enabling UserInfo Cache -# quarkus.oidc.token-cache.max-size=1000 -# quarkus.oidc.token-cache.time-to-live=5M ----- - -This is all that is needed for an endpoint like this one to return the currently authenticated user's profile with `GET http://localhost:8080/github/userinfo` and access it as the individual `UserInfo` properties: - -[source,java] ----- -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; - -import io.quarkus.oidc.UserInfo; -import io.quarkus.security.Authenticated; - -@Path("/github") -@Authenticated -public class TokenResource { - - @Inject - UserInfo userInfo; - - @GET - @Path("/userinfo") - @Produces("application/json") - public String getUserInfo() { - return userInfo.getUserInfoString(); - } -} ----- - -If you support more than one social provider with the help of xref:security-openid-connect-multitenancy.adoc[OpenID Connect Multi-Tenancy], for example, `Google` which is an OpenID Connect Provider returning `IdToken` and `GitHub` which is an `OAuth2` provider returning no `IdToken` and only allowing access to `UserInfo`, then you can have your endpoint working with only the injected `SecurityIdentity` for both `Google` and `GitHub` flows.
A simple augmentation of `SecurityIdentity` will be required where a principal created with the internally generated `IdToken` will be replaced with the `UserInfo` based principal when the GitHub flow is active: - -[source,java] ----- -package io.quarkus.it.keycloak; - -import java.security.Principal; - -import javax.enterprise.context.ApplicationScoped; - -import io.quarkus.oidc.UserInfo; -import io.quarkus.security.identity.AuthenticationRequestContext; -import io.quarkus.security.identity.SecurityIdentity; -import io.quarkus.security.identity.SecurityIdentityAugmentor; -import io.quarkus.security.runtime.QuarkusSecurityIdentity; -import io.smallrye.mutiny.Uni; -import io.vertx.ext.web.RoutingContext; - -@ApplicationScoped -public class CustomSecurityIdentityAugmentor implements SecurityIdentityAugmentor { - - @Override - public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { - RoutingContext routingContext = identity.getAttribute(RoutingContext.class.getName()); - if (routingContext != null && routingContext.normalizedPath().endsWith("/github")) { - QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity); - UserInfo userInfo = identity.getAttribute("userinfo"); - builder.setPrincipal(new Principal() { - - @Override - public String getName() { - return userInfo.getString("preferred_username"); - } - - }); - identity = builder.build(); - } - return Uni.createFrom().item(identity); - } - -} ----- - -Now, the following code will work when the user signs in to your application with either `Google` or `GitHub`: - -[source,java] ----- -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; - -import io.quarkus.security.Authenticated; -import io.quarkus.security.identity.SecurityIdentity; - -@Path("/service") -@Authenticated -public class TokenResource { - - @Inject - SecurityIdentity identity; - - @GET - @Path("/google") - @Produces("application/json") - 
public String getGoogleUserName() { - return identity.getPrincipal().getName(); - } - - @GET - @Path("/github") - @Produces("application/json") - public String getGitHubUserName() { - return identity.getPrincipal().getName(); - } -} ----- - -Possibly a simpler alternative is to inject both `@IdToken JsonWebToken` and `UserInfo` and use `JsonWebToken` when dealing with the providers returning `IdToken`, and `UserInfo` - with the providers which do not return `IdToken`. - -The last important point is to make sure the callback path you enter in the GitHub OAuth application configuration matches the endpoint path where you'd like the user to be redirected to after a successful GitHub authentication and application authorization; in this case it has to be set to `http://localhost:8080/github/userinfo`. - -=== Cloud Services - -==== Google Cloud - -You can have Quarkus OIDC `web-app` applications access **Google Cloud services** such as **BigQuery** on behalf of the currently authenticated users who have enabled OpenID Connect (Authorization Code Flow) permissions to such services in their Google Developer Consoles.
- -It is super easy to do with https://github.com/quarkiverse[Quarkiverse] https://github.com/quarkiverse/quarkiverse-google-cloud-services[Google Cloud Services] - just add -the https://github.com/quarkiverse/quarkiverse-google-cloud-services/releases/latest[latest tag] service dependency, for example: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkiverse.googlecloudservices</groupId> -    <artifactId>quarkus-google-cloud-bigquery</artifactId> -    <version>${quarkiverse.googlecloudservices.version}</version> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkiverse.googlecloudservices:quarkus-google-cloud-bigquery:${quarkiverse.googlecloudservices.version}") ----- - -and configure Google OIDC properties: - -[source, properties] ----- -quarkus.oidc.provider=google -quarkus.oidc.client-id={GOOGLE_CLIENT_ID} -quarkus.oidc.credentials.secret={GOOGLE_CLIENT_SECRET} -quarkus.oidc.token.issuer=https://accounts.google.com ----- - -=== Provider Endpoint configuration - -An OIDC `web-app` application needs to know the OpenID Connect provider's authorization, token, `JsonWebKey` (JWK) set and possibly `UserInfo`, introspection and end session (RP-initiated logout) endpoint addresses. - -By default they are discovered by adding a `/.well-known/openid-configuration` path to the configured `quarkus.oidc.auth-server-url`.
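To illustrate, with the Keycloak base URL used elsewhere in this guide, the discovery document would be requested from the following address (shown as comments; no extra configuration is needed when discovery is enabled):

[source,properties]
----
quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus
# Discovery document fetched at startup from:
# http://localhost:8180/auth/realms/quarkus/.well-known/openid-configuration
----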
- -Alternatively, if the discovery endpoint is not available or you would like to save on the discovery endpoint roundtrip, you can disable the discovery and configure them with relative path values, for example: - -[source, properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus -quarkus.oidc.discovery-enabled=false -# Authorization endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/auth -quarkus.oidc.authorization-path=/protocol/openid-connect/auth -# Token endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/token -quarkus.oidc.token-path=/protocol/openid-connect/token -# JWK set endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/certs -quarkus.oidc.jwks-path=/protocol/openid-connect/certs -# UserInfo endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/userinfo -quarkus.oidc.user-info-path=/protocol/openid-connect/userinfo -# Token Introspection endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/token/introspect -quarkus.oidc.introspection-path=/protocol/openid-connect/token/introspect -# End session endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/logout -quarkus.oidc.end-session-path=/protocol/openid-connect/logout ----- - -=== Token Propagation -Please see xref:security-openid-connect-client.adoc#token-propagation[Token Propagation] section about the Authorization Code Flow access token propagation to the downstream services. - -[[oidc-provider-client-authentication]] -=== Oidc Provider Client Authentication - -`quarkus.oidc.runtime.OidcProviderClient` is used when a remote request to an OpenID Connect Provider has to be done. It has to authenticate to the OpenID Connect Provider when the authorization code has to be exchanged for the ID, access and refresh tokens, when the ID and access tokens have to be refreshed or introspected. 
- -All the https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication[OIDC Client Authentication] options are supported, for example: - -`client_secret_basic`: - -[source,properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app -quarkus.oidc.credentials.secret=mysecret ----- - -or - -[source,properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app -quarkus.oidc.credentials.client-secret.value=mysecret ----- - -or with the secret retrieved from a xref:credentials-provider.adoc[CredentialsProvider]: - -[source,properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app - -# This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider -quarkus.oidc.credentials.client-secret.provider.key=mysecret-key -# Set it only if more than one CredentialsProvider can be registered -quarkus.oidc.credentials.client-secret.provider.name=oidc-credentials-provider ----- - -`client_secret_post`: - -[source,properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app -quarkus.oidc.credentials.client-secret.value=mysecret -quarkus.oidc.credentials.client-secret.method=post ----- - -`client_secret_jwt`, signature algorithm is HS256: - -[source,properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app -quarkus.oidc.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow ----- - -or with the secret retrieved from a xref:credentials-provider.adoc[CredentialsProvider]: - -[source,properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app - -# This is a key which will 
be used to retrieve a secret from the map of credentials returned from CredentialsProvider -quarkus.oidc.credentials.jwt.secret-provider.key=mysecret-key -# Set it only if more than one CredentialsProvider can be registered -quarkus.oidc.credentials.jwt.secret-provider.name=oidc-credentials-provider ----- - -`private_key_jwt` with the PEM key file, signature algorithm is RS256: - -[source,properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app -quarkus.oidc.credentials.jwt.key-file=privateKey.pem ----- - -`private_key_jwt` with the key store file, signature algorithm is RS256: - -[source,properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app -quarkus.oidc.credentials.jwt.key-store-file=keystore.jks -quarkus.oidc.credentials.jwt.key-store-password=mypassword -quarkus.oidc.credentials.jwt.key-password=mykeypassword - -# Private key alias inside the keystore -quarkus.oidc.credentials.jwt.key-id=mykeyAlias ----- - -Using `client_secret_jwt` or `private_key_jwt` authentication methods ensures that no client secret goes over the wire. - -==== Additional JWT Authentication options - -If the `client_secret_jwt` or `private_key_jwt` authentication method is used, or the `Apple` `post_jwt` method, then the JWT signature algorithm, key identifier, audience, subject and issuer can be customized, for example: - -[source,properties] ----- -# private_key_jwt client authentication - -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus/ -quarkus.oidc.client-id=quarkus-app -quarkus.oidc.credentials.jwt.key-file=privateKey.pem - -# This is a token key identifier 'kid' header - set it if your OpenID Connect provider requires it. -# Note if the key is represented in a JSON Web Key (JWK) format with a `kid` property then -# using 'quarkus.oidc.credentials.jwt.token-key-id' is not necessary.
-quarkus.oidc.credentials.jwt.token-key-id=mykey - -# Use RS512 signature algorithm instead of the default RS256 -quarkus.oidc.credentials.jwt.signature-algorithm=RS512 - -# The token endpoint URL is the default audience value, use the base address URL instead: -quarkus.oidc.credentials.jwt.audience=${quarkus.oidc-client.auth-server-url} - -# custom subject instead of the client id : -quarkus.oidc.credentials.jwt.subject=custom-subject - -# custom issuer instead of the client id : -quarkus.oidc.credentials.jwt.issuer=custom-issuer ----- - -==== Apple POST JWT - -Apple OpenID Connect Provider uses a `client_secret_post` method where a secret is a JWT produced with a `private_key_jwt` authentication method but with Apple account specific issuer and subject claims. - -`quarkus-oidc` supports a non-standard `client_secret_post_jwt` authentication method which can be configured as follows: - -[source,properties] ----- -# Apple provider configuration sets a 'client_secret_post_jwt' authentication method -quarkus.oidc.provider=apple - -quarkus.oidc.client-id=${apple.client-id} -quarkus.oidc.credentials.jwt.key-file=ecPrivateKey.pem -quarkus.oidc.credentials.jwt.token-key-id=${apple.key-id} -# Apple provider configuration sets ES256 signature algorithm - -quarkus.oidc.credentials.jwt.subject=${apple.subject} -quarkus.oidc.credentials.jwt.issuer=${apple.issuer} ----- - -==== Mutual TLS - -Some OpenID Connect Providers may require that a client is authenticated as part of the `Mutual TLS` (`MTLS`) authentication process. 
- -`quarkus-oidc` can be configured as follows to support `MTLS`: - -[source,properties] ----- -quarkus.oidc.tls.verification=certificate-validation - -# Keystore configuration -quarkus.oidc.tls.key-store-file=client-keystore.jks -quarkus.oidc.tls.key-store-password=${key-store-password} - -# Add more keystore properties if needed: -#quarkus.oidc.tls.key-store-alias=keyAlias -#quarkus.oidc.tls.key-store-alias-password=keyAliasPassword - -# Truststore configuration -quarkus.oidc.tls.trust-store-file=client-truststore.jks -quarkus.oidc.tls.trust-store-password=${trust-store-password} -# Add more truststore properties if needed: -#quarkus.oidc.tls.trust-store-alias=certAlias ----- - -[[integration-testing]] -=== Testing - -Start by adding the following dependencies to your test project: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>net.sourceforge.htmlunit</groupId> -    <artifactId>htmlunit</artifactId> -    <exclusions> -        <exclusion> -            <groupId>org.eclipse.jetty</groupId> -            <artifactId>*</artifactId> -        </exclusion> -    </exclusions> -    <scope>test</scope> -</dependency> -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-junit5</artifactId> -    <scope>test</scope> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -testImplementation("net.sourceforge.htmlunit:htmlunit") -testImplementation("io.quarkus:quarkus-junit5") ----- - -[[integration-testing-wiremock]] -==== Wiremock - -Add the following dependency: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-test-oidc-server</artifactId> -    <scope>test</scope> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -testImplementation("io.quarkus:quarkus-test-oidc-server") ----- - -Prepare the REST test endpoints, set `application.properties`, for example: - -[source, properties] ----- -# keycloak.url is set by OidcWiremockTestResource -quarkus.oidc.auth-server-url=${keycloak.url}/realms/quarkus/ -quarkus.oidc.client-id=quarkus-web-app -quarkus.oidc.credentials.secret=secret -quarkus.oidc.application-type=web-app ----- - -and
finally write the test code, for example: - -[source, java] ----- -import static org.junit.jupiter.api.Assertions.assertEquals; - -import org.junit.jupiter.api.Test; - -import com.gargoylesoftware.htmlunit.SilentCssErrorHandler; -import com.gargoylesoftware.htmlunit.WebClient; -import com.gargoylesoftware.htmlunit.html.HtmlForm; -import com.gargoylesoftware.htmlunit.html.HtmlPage; - -import io.quarkus.test.common.QuarkusTestResource; -import io.quarkus.test.junit.QuarkusTest; -import io.quarkus.test.oidc.server.OidcWiremockTestResource; - -@QuarkusTest -@QuarkusTestResource(OidcWiremockTestResource.class) -public class CodeFlowAuthorizationTest { - - @Test - public void testCodeFlow() throws Exception { - try (final WebClient webClient = createWebClient()) { - // the test REST endpoint listens on '/code-flow' - HtmlPage page = webClient.getPage("http://localhost:8081/code-flow"); - - HtmlForm form = page.getFormByName("form"); - // user 'alice' has the 'user' role - form.getInputByName("username").type("alice"); - form.getInputByName("password").type("alice"); - - page = form.getInputByValue("login").click(); - - assertEquals("alice", page.getBody().asText()); - } - } - - private WebClient createWebClient() { - WebClient webClient = new WebClient(); - webClient.setCssErrorHandler(new SilentCssErrorHandler()); - return webClient; - } -} ----- - -`OidcWiremockTestResource` recognizes `alice` and `admin` users. The user `alice` has the `user` role only by default - it can be customized with a `quarkus.test.oidc.token.user-roles` system property. The user `admin` has the `user` and `admin` roles by default - it can be customized with a `quarkus.test.oidc.token.admin-roles` system property. - -Additionally, `OidcWiremockTestResource` sets the token issuer and audience to `https://service.example.com` which can be customized with the `quarkus.test.oidc.token.issuer` and `quarkus.test.oidc.token.audience` system properties.
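One way to set these system properties is through the Maven Surefire plugin configuration; this is a sketch and the role and issuer values are illustrative only:

[source,xml]
----
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <systemPropertyVariables>
            <quarkus.test.oidc.token.user-roles>user,admin</quarkus.test.oidc.token.user-roles>
            <quarkus.test.oidc.token.issuer>https://issuer.example.com</quarkus.test.oidc.token.issuer>
        </systemPropertyVariables>
    </configuration>
</plugin>
----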
- -`OidcWiremockTestResource` can be used to emulate all OpenID Connect providers. - -[[integration-testing-keycloak-devservices]] -==== Dev Services for Keycloak - -Using xref:security-openid-connect-dev-services.adoc[Dev Services for Keycloak] is recommended for the integration testing against Keycloak. -`Dev Services for Keycloak` will launch and initialize a test container: it will create a `quarkus` realm, a `quarkus-app` client (`secret` secret) and add `alice` (`admin` and `user` roles) and `bob` (`user` role) users, where all of these properties can be customized. - -First prepare `application.properties`. You can start with a completely empty `application.properties` as `Dev Services for Keycloak` will register `quarkus.oidc.auth-server-url` pointing to the running test container as well as `quarkus.oidc.client-id=quarkus-app` and `quarkus.oidc.credentials.secret=secret`. - -But if you already have all the required `quarkus-oidc` properties configured then you only need to associate `quarkus.oidc.auth-server-url` with the `prod` profile for `Dev Services for Keycloak` to start a container, for example: - -[source,properties] ----- -%prod.quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus ----- - -If a custom realm file has to be imported into Keycloak before running the tests then you can configure `Dev Services for Keycloak` as follows: - -[source,properties] ----- -%prod.quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus -quarkus.keycloak.devservices.realm-path=quarkus-realm.json ----- - -Finally, write the test code the same way as it is described in the <> section above. -The only difference is that `@QuarkusTestResource` is no longer needed: - -[source, java] ----- -@QuarkusTest -public class CodeFlowAuthorizationTest { -} ----- - -[[integration-testing-keycloak]] -==== KeycloakTestResourceLifecycleManager - -If you need to do the integration testing against Keycloak then you are encouraged to do it with <>.
-Use `KeycloakTestResourceLifecycleManager` for your tests only if there is a good reason not to use `Dev Services for Keycloak`. - -Start by adding the following dependency: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-test-keycloak-server</artifactId> -    <scope>test</scope> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -testImplementation("io.quarkus:quarkus-test-keycloak-server") ----- - -which provides `io.quarkus.test.keycloak.server.KeycloakTestResourceLifecycleManager` - an implementation of `io.quarkus.test.common.QuarkusTestResourceLifecycleManager` which starts a Keycloak container. - -And configure the Maven Surefire plugin as follows: - -[source,xml] ----- -<plugin> -    <artifactId>maven-surefire-plugin</artifactId> -    <configuration> -        <systemPropertyVariables> -            <keycloak.docker.image>${keycloak.docker.image}</keycloak.docker.image> -        </systemPropertyVariables> -    </configuration> -</plugin> ----- - -(and similarly the Maven Failsafe plugin when testing in native image). - -And now set the configuration and write the test code the same way as it is described in the <> section above. -The only difference is the name of `QuarkusTestResource`: - -[source, java] ----- -import io.quarkus.test.keycloak.server.KeycloakTestResourceLifecycleManager; - -@QuarkusTest -@QuarkusTestResource(KeycloakTestResourceLifecycleManager.class) -public class CodeFlowAuthorizationTest { -} ----- - -`KeycloakTestResourceLifecycleManager` registers `alice` and `admin` users. The user `alice` has the `user` role only by default - it can be customized with a `keycloak.token.user-roles` system property. The user `admin` has the `user` and `admin` roles by default - it can be customized with a `keycloak.token.admin-roles` system property. - -By default, `KeycloakTestResourceLifecycleManager` uses HTTPS to initialize a Keycloak instance which can be disabled with `keycloak.use.https=false`.
-The default realm name is `quarkus` and the client id is `quarkus-web-app`; set the `keycloak.realm` and `keycloak.web-app.client` system properties to customize the values if needed. - -[[integration-testing-security-annotation]] -==== TestSecurity annotation - -Please see the xref:security-openid-connect.adoc#integration-testing-security-annotation[Use TestingSecurity with injected JsonWebToken] section for more information about using the `@TestSecurity` and `@OidcSecurity` annotations for testing the `web-app` application endpoint code which depends on the injected ID and access `JsonWebToken` as well as `UserInfo` and `OidcConfigurationMetadata`. - -=== How to check the errors in the logs - -Please enable `io.quarkus.oidc.runtime.OidcProvider` `TRACE` level logging to see more details about the token verification errors: - -[source, properties] ----- -quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".level=TRACE -quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".min-level=TRACE ----- - -Please enable `io.quarkus.oidc.runtime.OidcRecorder` `TRACE` level logging to see more details about the OidcProvider client initialization errors: - -[source, properties] ----- -quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".level=TRACE -quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".min-level=TRACE ----- - -=== Running behind a reverse proxy - -The OIDC authentication mechanism can be affected if your Quarkus application is running behind a reverse proxy/gateway/firewall where the HTTP `Host` header may be reset to the internal IP address, the HTTPS connection may be terminated, etc. For example, an authorization code flow `redirect_uri` parameter may be set to the internal host instead of the expected external one. - -In such cases configuring Quarkus to recognize the original headers forwarded by the proxy will be required, see the xref:http-reference.adoc#reverse-proxy[Running behind a reverse proxy] Vert.x documentation section for more information.
- -For example, if your Quarkus endpoint runs in a cluster behind Kubernetes Ingress then a redirect from the OpenID Connect Provider back to this endpoint may not work since the calculated `redirect_uri` parameter may point to the internal endpoint address. This problem can be resolved with the following configuration: - -[source,properties] ----- -quarkus.http.proxy.proxy-address-forwarding=true -quarkus.http.proxy.allow-forwarded=false -quarkus.http.proxy.enable-forwarded-host=true -quarkus.http.proxy.forwarded-host-header=X-ORIGINAL-HOST ----- - -where `X-ORIGINAL-HOST` is set by Kubernetes Ingress to represent the external endpoint address. - -The `quarkus.oidc.authentication.force-redirect-https-scheme` property may also be used when the Quarkus application is running behind an SSL-terminating reverse proxy. - -=== External and Internal Access to OpenID Connect Provider - -Note that the OpenID Connect Provider's externally accessible authorization, logout and other endpoints may have different HTTP(S) URLs compared to the URLs auto-discovered or configured relative to the `quarkus.oidc.auth-server-url` internal URL. -In such cases an issuer verification failure may be reported by the endpoint and redirects to the externally accessible OpenID Connect Provider endpoints may fail. - -In such cases, if you work with Keycloak then please start it with a `KEYCLOAK_FRONTEND_URL` system property set to the externally accessible base URL. -If you work with other OpenID Connect providers then please check your provider's documentation. - -=== Customize authentication requests - -By default, only the `response_type` (set to `code`), `scope` (set to `openid`), `client_id`, `redirect_uri` and `state` properties are passed as HTTP query parameters to the OpenID Connect provider's authorization endpoint when the user is redirected to it to authenticate. - -You can add more properties to it with `quarkus.oidc.authentication.extra-params`.
For example, some OpenID Connect providers may choose to return the authorization code as part of the redirect URI's fragment which would break the authentication process - it can be fixed as follows: - -[source,properties] ----- -quarkus.oidc.authentication.extra-params.response_mode=query ----- - -== Configuration Reference - -include::{generated-dir}/config/quarkus-oidc.adoc[opts=optional] - -== References - -* https://www.keycloak.org/documentation.html[Keycloak Documentation] -* https://openid.net/connect/[OpenID Connect] -* https://tools.ietf.org/html/rfc7519[JSON Web Token] -* xref:security-openid-connect-client.adoc[Quarkus - Using OpenID Connect and OAuth2 Client and Filters to manage access tokens] -* xref:security-openid-connect-dev-services.adoc[Dev Services for Keycloak] -* xref:security.adoc#oidc-jwt-oauth2-comparison[Summary of Quarkus OIDC, JWT and OAuth2 features] -* xref:security.adoc[Quarkus Security] diff --git a/_versions/2.7/guides/security-openid-connect.adoc b/_versions/2.7/guides/security-openid-connect.adoc deleted file mode 100644 index 66fb43d070d..00000000000 --- a/_versions/2.7/guides/security-openid-connect.adoc +++ /dev/null @@ -1,1151 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using OpenID Connect (OIDC) to Protect Service Applications using Bearer Token Authorization - -include::./attributes.adoc[] -:toc: - -This guide demonstrates how to use Quarkus OpenID Connect (OIDC) Extension to protect your JAX-RS applications using Bearer Token Authorization where Bearer Tokens are issued by OpenID Connect and OAuth 2.0 compliant Authorization Servers such as https://www.keycloak.org[Keycloak]. 
- -Bearer Token Authorization is the process of authorizing HTTP requests based on the existence and validity of a Bearer Token which provides valuable information to determine the subject of the call as well as whether or not an HTTP resource can be accessed. - -Please read the xref:security-openid-connect-web-authentication.adoc[Using OpenID Connect to Protect Web Applications] guide if you need to authenticate and authorize the users using the OpenID Connect Authorization Code Flow. - -If you use Keycloak and Bearer tokens then also see the xref:security-keycloak-authorization.adoc[Using Keycloak to Centralize Authorization] guide. - -Please read the xref:security-openid-connect-multitenancy.adoc[Using OpenID Connect Multi-Tenancy] guide to learn how to support multiple tenants. - -== Quickstart - -=== Prerequisites - -:prerequisites-docker: -include::includes/devtools/prerequisites.adoc[] -* https://stedolan.github.io/jq/[jq tool] - -=== Architecture - -In this example, we build a very simple microservice which offers two endpoints: - -* `/api/users/me` -* `/api/admin` - -These endpoints are protected and can only be accessed if a client is sending a bearer token along with the request, which must be valid (e.g. signature, expiration and audience) and trusted by the microservice. - -The bearer token is issued by a Keycloak Server and represents the subject for which the token was issued. Because Keycloak is also an OAuth 2.0 Authorization Server, the token additionally references the client acting on behalf of the user. - -The `/api/users/me` endpoint can be accessed by any user with a valid token. As a response, it returns a JSON document with details about the user, where these details are obtained from the information carried by the token. - -The `/api/admin` endpoint is protected with RBAC (Role-Based Access Control), where only users granted the `admin` role can access it. At this endpoint, we use the `@RolesAllowed` annotation to declaratively enforce the access constraint.
- -=== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `security-openid-connect-quickstart` {quickstarts-tree-url}/security-openid-connect-quickstart[directory]. - -=== Creating the Maven Project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: security-openid-connect-quickstart -:create-app-extensions: resteasy,oidc,resteasy-jackson -include::includes/devtools/create-app.adoc[] - -This command generates a Maven project, importing the `oidc` extension -which provides all the necessary capabilities to integrate with an OpenID Connect server such as Keycloak and perform bearer token authorization. - -If you already have your Quarkus project configured, you can add the `oidc` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: oidc -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-oidc</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-oidc") ----- - -=== Writing the application - -Let's start by implementing the `/api/users/me` endpoint.
As you can see from the source code below, it is just a regular JAX-RS resource: - -[source,java] ----- -package org.acme.security.openid.connect; - -import javax.annotation.security.RolesAllowed; -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -import org.jboss.resteasy.annotations.cache.NoCache; -import io.quarkus.security.identity.SecurityIdentity; - -@Path("/api/users") -public class UsersResource { - - @Inject - SecurityIdentity securityIdentity; - - @GET - @Path("/me") - @RolesAllowed("user") - @NoCache - public User me() { - return new User(securityIdentity); - } - - public static class User { - - private final String userName; - - User(SecurityIdentity securityIdentity) { - this.userName = securityIdentity.getPrincipal().getName(); - } - - public String getUserName() { - return userName; - } - } -} ----- - -The source code for the `/api/admin` endpoint is also very simple. The main difference here is that we are using a `@RolesAllowed` annotation to make sure that only users granted the `admin` role can access the endpoint: - -[source,java] ----- -package org.acme.security.openid.connect; - -import javax.annotation.security.RolesAllowed; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/api/admin") -public class AdminResource { - - @GET - @RolesAllowed("admin") - @Produces(MediaType.TEXT_PLAIN) - public String admin() { - return "granted"; - } -} ----- - -Injection of the `SecurityIdentity` is supported in both `@RequestScoped` and `@ApplicationScoped` contexts. - -=== Configuring the application - -The OpenID Connect extension allows you to define the adapter configuration using the `application.properties` file, which should be located in the `src/main/resources` directory.
- -include::{generated-dir}/config/quarkus-oidc.adoc[opts=optional, leveloffset=+1] - -Example configuration: - -[source,properties] ----- -%prod.quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus -quarkus.oidc.client-id=backend-service -quarkus.oidc.client-secret=secret - -# Tell Dev Services for Keycloak to import the realm file -# This property is not effective when running the application in JVM or Native modes - -quarkus.keycloak.devservices.realm-path=quarkus-realm.json ----- - -NOTE: Adding a `%prod.` profile prefix to `quarkus.oidc.auth-server-url` ensures that `Dev Services for Keycloak` will launch a container for you when the application is run in dev mode. See the <> section below for more information. - -=== Starting and Configuring the Keycloak Server - -NOTE: Do not start the Keycloak server when you run the application in dev mode - `Dev Services for Keycloak` will launch a container. See the <> section below for more information. - -To start a Keycloak Server you can use Docker and just run the following command: - -[source,bash,subs=attributes+] ----- -docker run --name keycloak -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} ----- - -You should be able to access your Keycloak Server at http://localhost:8180/auth[localhost:8180/auth]. - -Log in as the `admin` user to access the Keycloak Administration Console. The username and password are both `admin`. - -Import the {quickstarts-tree-url}/security-openid-connect-quickstart/config/quarkus-realm.json[realm configuration file] to create a new realm. For more details, see the Keycloak documentation about how to https://www.keycloak.org/docs/latest/server_admin/index.html#_create-realm[create a new realm]. - -NOTE: If you want to use the Keycloak Admin Client to configure your server from your application you need to include the -`quarkus-keycloak-admin-client` extension.
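If you do opt for the Keycloak Admin Client, the extension is added like any other dependency. A sketch (coordinates follow the usual `io.quarkus` naming; the version is managed by the Quarkus BOM):

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-keycloak-admin-client</artifactId>
</dependency>
```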
- -[[keycloak-dev-mode]] -=== Running the Application in Dev mode - -To run the application in dev mode, use: - -include::includes/devtools/dev.adoc[] - -xref:security-openid-connect-dev-services.adoc[Dev Services for Keycloak] will launch a Keycloak container and import a `quarkus-realm.json`. - -Open the xref:dev-ui.adoc[Dev UI] available at http://localhost:8080/q/dev[/q/dev] and click on the `Provider: Keycloak` link in the `OpenID Connect` `Dev UI` card. - -You will be asked to log in to a `Single Page Application` provided by `OpenID Connect Dev UI`: - - * Login as `alice` (password: `alice`) who has a `user` role - ** accessing `/api/admin` will return `403` - ** accessing `/api/users/me` will return `200` - * Logout and login as `admin` (password: `admin`) who has both `admin` and `user` roles - ** accessing `/api/admin` will return `200` - ** accessing `/api/users/me` will return `200` - -=== Running the Application in JVM mode - -When you're done playing with dev mode, you can run it as a standard Java application. - -First compile it: - -include::includes/devtools/build.adoc[] - -Then run it: - -[source,bash] ----- -java -jar target/quarkus-app/quarkus-run.jar ----- - -=== Running the Application in Native Mode - -This same demo can be compiled into native code: no modifications required. - -This implies that you no longer need to install a JVM on your -production environment, as the runtime technology is included in -the produced binary, and optimized to run with minimal resource overhead. - -Compilation will take a bit longer, so this step is disabled by default; -let's build again by enabling the `native` profile: - -include::includes/devtools/build-native.adoc[] - -After getting a cup of coffee, you'll be able to run this binary directly: - -[source,bash] ----- -./target/security-openid-connect-quickstart-runner ----- - -=== Testing the Application - -See the <> section above about testing your application in dev mode.
- -You can test the application launched in JVM or Native modes with `curl`. - -The application is using bearer token authorization and the first thing to do is obtain an access token from the Keycloak Server in -order to access the application resources: - -[source,bash] ----- -export access_token=$(\ - curl --insecure -X POST https://localhost:8543/auth/realms/quarkus/protocol/openid-connect/token \ - --user backend-service:secret \ - -H 'content-type: application/x-www-form-urlencoded' \ - -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' \ - ) ----- - -The example above obtains an access token for user `alice`. - -Any user is allowed to access the -`http://localhost:8080/api/users/me` endpoint -which basically returns a JSON payload with details about the user. - -[source,bash] ----- -curl -v -X GET \ - http://localhost:8080/api/users/me \ - -H "Authorization: Bearer "$access_token ----- - -The `http://localhost:8080/api/admin` endpoint can only be accessed by users with the `admin` role. If you try to access this endpoint with the - previously issued access token, you should get a `403` response - from the server. - -[source,bash] ----- -curl -v -X GET \ - http://localhost:8080/api/admin \ - -H "Authorization: Bearer "$access_token ----- - -In order to access the admin endpoint you should obtain a token for the `admin` user: - -[source,bash] ----- -export access_token=$(\ - curl --insecure -X POST https://localhost:8543/auth/realms/quarkus/protocol/openid-connect/token \ - --user backend-service:secret \ - -H 'content-type: application/x-www-form-urlencoded' \ - -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' \ - ) ----- - -Please also see the <> section below about writing the integration tests which depend on `Dev Services for Keycloak`. 
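While testing with `curl` it can be handy to peek inside the access token before sending it. A JWT is three dot-separated base64 segments (`header.payload.signature`), so the claims can be decoded locally. A sketch (the snippet builds a toy unsigned token so it is self-contained; in practice you would reuse the `$access_token` exported above):

```shell
# Build a toy unsigned JWT: header.payload. (empty signature segment)
header=$(printf '%s' '{"alg":"none"}' | base64 | tr -d '\n')
payload=$(printf '%s' '{"preferred_username":"alice"}' | base64 | tr -d '\n')
access_token="$header.$payload."

# Cut out the payload segment and decode it to see the claims.
printf '%s' "$access_token" | cut -d. -f2 | base64 -d
# prints {"preferred_username":"alice"}
```

Note that real tokens use base64url encoding without padding, so you may need to translate `-_` to `+/` and re-pad before decoding; piping the result to `jq .` (already a prerequisite above) pretty-prints the claims.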
- -== Reference Guide - -=== Accessing JWT claims - -If you need to access JWT token claims then you can inject `JsonWebToken`: - -[source,java] ----- -package org.acme.security.openid.connect; - -import org.eclipse.microprofile.jwt.JsonWebToken; -import javax.inject.Inject; -import javax.annotation.security.RolesAllowed; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/api/admin") -public class AdminResource { - - @Inject - JsonWebToken jwt; - - @GET - @RolesAllowed("admin") - @Produces(MediaType.TEXT_PLAIN) - public String admin() { - return "Access for subject " + jwt.getSubject() + " is granted"; - } -} ----- - -Injection of `JsonWebToken` is supported in `@ApplicationScoped`, `@Singleton` and `@RequestScoped` scopes however the use of `@RequestScoped` is required if the individual claims are injected as simple types, please see xref:security-jwt.adoc#supported-injection-scopes[Support Injection Scopes for JsonWebToken and Claims] for more details. - -[[user-info]] -=== User Info - -Set `quarkus.oidc.authentication.user-info-required=true` if a UserInfo JSON object from the OIDC userinfo endpoint has to be requested. -A request will be sent to the OpenID Provider UserInfo endpoint and an `io.quarkus.oidc.UserInfo` (a simple `javax.json.JsonObject` wrapper) object will be created. -`io.quarkus.oidc.UserInfo` can be either injected or accessed as a SecurityIdentity `userinfo` attribute. - -[[config-metadata]] -=== Configuration Metadata - -The current tenant's discovered link:https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata[OpenID Connect Configuration Metadata] is represented by `io.quarkus.oidc.OidcConfigurationMetadata` and can be either injected or accessed as a `SecurityIdentity` `configuration-metadata` attribute. - -The default tenant's `OidcConfigurationMetadata` is injected if the endpoint is public. 
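As a sketch of how the two objects from this section can be consumed together (assuming a Quarkus `service` endpoint; `quarkus.oidc.authentication.user-info-required=true` must be set for the `UserInfo` injection to work, and the resource path and claim name are illustrative):

```java
package org.acme.security.openid.connect;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import io.quarkus.oidc.OidcConfigurationMetadata;
import io.quarkus.oidc.UserInfo;

@Path("/api/oidc-info")
public class OidcInfoResource {

    // Requires quarkus.oidc.authentication.user-info-required=true
    @Inject
    UserInfo userInfo;

    // Discovered provider metadata for the current tenant
    @Inject
    OidcConfigurationMetadata configMetadata;

    @GET
    public String info() {
        return userInfo.getString("preferred_username")
                + " authenticated via " + configMetadata.getIssuer();
    }
}
```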
- -=== Token Claims And SecurityIdentity Roles - -SecurityIdentity roles can be mapped from the verified JWT access tokens as follows: - -* If the `quarkus.oidc.roles.role-claim-path` property is set and matching array or string claims are found then the roles are extracted from these claims. - For example, `customroles`, `customroles/array`, `scope`, `"http://namespace-qualified-custom-claim"/roles`, `"http://namespace-qualified-roles"`, etc. -* If the `groups` claim is available then its value is used. -* If the `realm_access/roles` or `resource_access/client_id/roles` (where `client_id` is the value of the `quarkus.oidc.client-id` property) claim is available then its value is used. - This check supports the tokens issued by Keycloak. - -If the token is opaque (binary) then a `scope` property from the remote token introspection response will be used. - -If `UserInfo` is the source of the roles then set `quarkus.oidc.authentication.user-info-required=true` and `quarkus.oidc.roles.source=userinfo`, and if needed, `quarkus.oidc.roles.role-claim-path`. - -Additionally, a custom `SecurityIdentityAugmentor` can also be used to add the roles, as documented xref:security.adoc#security-identity-customization[here]. - -[[token-verification-introspection]] -=== Token Verification And Introspection - -If the token is a JWT token then, by default, it will be verified with a `JsonWebKey` (JWK) key from a local `JsonWebKeySet` retrieved from the OpenID Connect Provider's JWK endpoint. The token's key identifier `kid` header value will be used to find the matching JWK key. -If no matching `JWK` is available locally then `JsonWebKeySet` will be refreshed by fetching the current key set from the JWK endpoint. The `JsonWebKeySet` refresh can be repeated only after the `quarkus.oidc.token.forced-jwk-refresh-interval` (default is 10 minutes) expires.
-If no matching `JWK` is available after the refresh then the JWT token will be sent to the OpenID Connect Provider's token introspection endpoint. - -If the token is opaque (it can be a binary token or an encrypted JWT token) then it will always be sent to the OpenID Connect Provider's token introspection endpoint. - -If you work with JWT tokens only and expect that a matching `JsonWebKey` will always be available (possibly after a key set refresh) then you should disable the token introspection: - -[source, properties] ----- -quarkus.oidc.token.allow-jwt-introspection=false -quarkus.oidc.token.allow-opaque-token-introspection=false ----- - -However, there could be cases where JWT tokens must be verified via introspection only. This can be forced by configuring an introspection endpoint address only; for example, in the case of Keycloak you can do it like this: - -[source, properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus -quarkus.oidc.discovery-enabled=false -# Token Introspection endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens/introspect -quarkus.oidc.introspection-path=/protocol/openid-connect/tokens/introspect ----- - -Note that an `io.quarkus.oidc.TokenIntrospection` (a simple `javax.json.JsonObject` wrapper) object will be created and can be either injected or accessed as a SecurityIdentity `introspection` attribute if either a JWT or opaque token has been successfully introspected. - -[[token-introspection-userinfo-cache]] -=== Token Introspection and UserInfo Cache - -All opaque access tokens, and sometimes JWT Bearer access tokens, have to be remotely introspected. If `UserInfo` is also required then the same access token will be used to do a remote call to the OpenID Connect Provider again.
So, if `UserInfo` is required and the current access token is opaque then for every such token two remote calls will be made - one to introspect it and one to get `UserInfo` with it; if the token is a JWT then usually only a single remote call will be needed - to get `UserInfo` with it. - -The cost of making up to two remote calls per every incoming bearer or code flow access token can sometimes be problematic. - -If this is the case in your production environment then it is recommended to cache the token introspection and `UserInfo` data for a short period of time, for example, for 3 or 5 minutes. - -`quarkus-oidc` provides the `quarkus.oidc.TokenIntrospectionCache` and `quarkus.oidc.UserInfoCache` interfaces which can be used to implement an `@ApplicationScoped` cache that stores and retrieves `quarkus.oidc.TokenIntrospection` and/or `quarkus.oidc.UserInfo` objects, for example: - -[source, java] ----- -@ApplicationScoped -@AlternativePriority(1) -public class CustomIntrospectionUserInfoCache implements TokenIntrospectionCache, UserInfoCache { -... -} ----- - -Each OIDC tenant can either permit or deny storing its `quarkus.oidc.TokenIntrospection` and/or `quarkus.oidc.UserInfo` data with the boolean `quarkus.oidc."tenant".allow-token-introspection-cache` and `quarkus.oidc."tenant".allow-user-info-cache` properties. - -Additionally, `quarkus-oidc` provides a simple default memory-based token cache which implements both the `quarkus.oidc.TokenIntrospectionCache` and `quarkus.oidc.UserInfoCache` interfaces. - -It can be activated and configured as follows: - -[source, properties] ----- -# 'max-size' is 0 by default so the cache can be activated by setting 'max-size' to a positive value. -quarkus.oidc.token-cache.max-size=1000 -# 'time-to-live' specifies how long a cache entry can be valid for and will be used by a clean-up timer.
-quarkus.oidc.token-cache.time-to-live=3M -# 'clean-up-timer-interval' is not set by default so the clean up timer can be activated by setting 'clean-up-timer-interval'. -quarkus.oidc.token-cache.clean-up-timer-interval=1M ----- - -The default cache uses a token as a key and each entry can have `TokenIntrospection` and/or `UserInfo`. It will only keep up to a `max-size` number of entries. If the cache is full when a new entry is to be added then an attempt will be made to find a space for it by removing a single expired entry. Additionally, the clean up timer, if activated, will periodically check for the expired entries and remove them. - -Please experiment with the default cache implementation or register a custom one. - -[[jwt-claim-verification]] -=== JSON Web Token Claim Verification - -Once the bearer JWT token's signature has been verified and its `expires at` (`exp`) claim has been checked, the `iss` (`issuer`) claim value is verified next. - -By default, the `iss` claim value is compared to the `issuer` property which may have been discovered in the well-known provider configuration. -But if `quarkus.oidc.token.issuer` property is set then the `iss` claim value is compared to it instead. - -In some cases, this `iss` claim verification may not work. For example, if the discovered `issuer` property contains an internal HTTP/IP address while the token `iss` claim value contains an external HTTP/IP address. Or when a discovered `issuer` property contains the template tenant variable but the token `iss` claim value has the complete tenant-specific issuer value. - -In such cases you may want to consider skipping the issuer verification by setting `quarkus.oidc.token.issuer=any`. 
Please note that it is not recommended and should be avoided unless no other options are available: - -- If you work with Keycloak and observe issuer verification errors due to the different host addresses then configure Keycloak with a `KEYCLOAK_FRONTEND_URL` property to ensure the same host address is used. -- If the `iss` property is tenant specific in a multi-tenant deployment then you can use the `SecurityIdentity` `tenant-id` attribute to check the issuer is correct in the endpoint itself or in a custom JAX-RS filter, for example: - -[source, java] ----- -import javax.inject.Inject; -import javax.ws.rs.container.ContainerRequestContext; -import javax.ws.rs.container.ContainerRequestFilter; -import javax.ws.rs.core.Response; -import javax.ws.rs.ext.Provider; - -import org.eclipse.microprofile.jwt.JsonWebToken; -import io.quarkus.oidc.OidcConfigurationMetadata; -import io.quarkus.security.identity.SecurityIdentity; - -@Provider -public class IssuerValidator implements ContainerRequestFilter { - @Inject - OidcConfigurationMetadata configMetadata; - - @Inject JsonWebToken jwt; - @Inject SecurityIdentity identity; - - public void filter(ContainerRequestContext requestContext) { - String issuer = configMetadata.getIssuer().replace("{tenant-id}", identity.getAttribute("tenant-id")); - if (!issuer.equals(jwt.getIssuer())) { - requestContext.abortWith(Response.status(401).build()); - } - } -} ----- - -Note it is also recommended to use the `quarkus.oidc.token.audience` property to verify the token `aud` (`audience`) claim value. - -[[single-page-applications]] -=== Single Page Applications - -A Single Page Application (SPA) typically uses `XMLHttpRequest` (XHR) and the JavaScript utility code provided by the OpenID Connect provider to acquire a bearer token and use it -to access Quarkus `service` applications.
- -For example, here is a minimal sketch of how you can use `keycloak.js` to authenticate the users and refresh the expired tokens from the SPA (the realm, client id and URLs are illustrative): - -[source,html] ----- -<html> -<head> -    <title>keycloak-spa</title> -    <script src="http://localhost:8180/auth/js/keycloak.js"></script> -    <script> -        // Configure the adapter against the Keycloak realm and the public SPA client -        var keycloak = new Keycloak({ -            url: 'http://localhost:8180/auth', -            realm: 'quarkus', -            clientId: 'frontend' -        }); -        keycloak.init({ onLoad: 'login-required' }).success(function () { -            console.log('User is now authenticated.'); -        }).error(function () { -            window.location.reload(); -        }); -        function invokeService() { -            // Refresh the token if it expires within 30 seconds, then call the Quarkus service -            keycloak.updateToken(30).success(function () { -                var request = new XMLHttpRequest(); -                request.open('GET', 'http://localhost:8080/api/users/me'); -                request.setRequestHeader('Authorization', 'Bearer ' + keycloak.token); -                request.onload = function () { alert(request.responseText); }; -                request.send(); -            }); -        } -    </script> -</head> -<body> -    <button onclick="invokeService()">Invoke the service</button> -</body> -</html> ----- - -=== Cross Origin Resource Sharing - -If you plan to consume your OpenID Connect `service` application from a Single Page Application running on a different domain, you will need to configure CORS (Cross-Origin Resource Sharing). Please read the xref:http-reference.adoc#cors-filter[HTTP CORS documentation] for more details. - -=== Provider Endpoint configuration - -An OIDC `service` application needs to know the OpenID Connect provider's token, `JsonWebKey` (JWK) set and possibly `UserInfo` and introspection endpoint addresses. - -By default they are discovered by adding a `/.well-known/openid-configuration` path to the configured `quarkus.oidc.auth-server-url`. - -Alternatively, if the discovery endpoint is not available or you would like to save on the discovery endpoint roundtrip, you can disable the discovery and configure them with relative path values, for example: - -[source, properties] ----- -quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus -quarkus.oidc.discovery-enabled=false -# Token endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/token -quarkus.oidc.token-path=/protocol/openid-connect/token -# JWK set endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/certs -quarkus.oidc.jwks-path=/protocol/openid-connect/certs -# UserInfo endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/userinfo -quarkus.oidc.user-info-path=/protocol/openid-connect/userinfo -# Token Introspection endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens/introspect -quarkus.oidc.introspection-path=/protocol/openid-connect/tokens/introspect ----- - -=== Token Propagation - -Please see the xref:security-openid-connect-client.adoc#token-propagation[Token Propagation] section about the
Bearer access token propagation to the downstream services. - -[[oidc-provider-authentication]] -=== Oidc Provider Client Authentication - -`quarkus.oidc.runtime.OidcProviderClient` is used when a remote request to an OpenID Connect Provider has to be made. If the bearer token has to be introspected then `OidcProviderClient` has to authenticate to the OpenID Connect Provider. Please see xref:security-openid-connect-web-authentication.adoc#oidc-provider-client-authentication[OidcProviderClient Authentication] for more information about all the supported authentication options. - -[[integration-testing]] -=== Testing - -Start by adding the following dependencies to your test project: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.rest-assured</groupId> -    <artifactId>rest-assured</artifactId> -    <scope>test</scope> -</dependency> -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-junit5</artifactId> -    <scope>test</scope> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -testImplementation("io.rest-assured:rest-assured") -testImplementation("io.quarkus:quarkus-junit5") ----- - -[[integration-testing-wiremock]] -==== Wiremock - -Add the following dependencies to your test project: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-test-oidc-server</artifactId> -    <scope>test</scope> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -testImplementation("io.quarkus:quarkus-test-oidc-server") ----- - -Prepare the REST test endpoint and set `application.properties`, for example: - -[source, properties] ----- -# keycloak.url is set by OidcWiremockTestResource -quarkus.oidc.auth-server-url=${keycloak.url}/realms/quarkus/ -quarkus.oidc.client-id=quarkus-service-app -quarkus.oidc.application-type=service ----- - -and finally write the test code, for example: - -[source, java] ----- -import static org.hamcrest.Matchers.equalTo; - -import java.util.Arrays; -import java.util.HashSet;
-import java.util.Set; - -import org.hamcrest.Matchers; -import org.junit.jupiter.api.Test; - -import io.quarkus.test.common.QuarkusTestResource; -import io.quarkus.test.junit.QuarkusTest; -import io.quarkus.test.oidc.server.OidcWiremockTestResource; -import io.restassured.RestAssured; -import io.smallrye.jwt.build.Jwt; - -@QuarkusTest -@QuarkusTestResource(OidcWiremockTestResource.class) -public class BearerTokenAuthorizationTest { - - @Test - public void testBearerToken() { - RestAssured.given().auth().oauth2(getAccessToken("alice", new HashSet<>(Arrays.asList("user")))) - .when().get("/api/users/preferredUserName") - .then() - .statusCode(200) - // the test endpoint returns the name extracted from the injected SecurityIdentity Principal - .body("userName", equalTo("alice")); - } - - private String getAccessToken(String userName, Set<String> groups) { - return Jwt.preferredUserName(userName) - .groups(groups) - .issuer("https://server.example.com") - .audience("https://service.example.com") - .sign(); - } -} ----- - -Note that the `quarkus-test-oidc-server` extension includes a signing RSA private key file in a `JSON Web Key` (`JWK`) format and points to it with a `smallrye.jwt.sign.key.location` configuration property. This allows you to use a no-argument `sign()` operation to sign the token. - -Testing your `quarkus-oidc` `service` application with `OidcWiremockTestResource` provides the best coverage as even the communication channel is tested against the Wiremock HTTP stubs. -`OidcWiremockTestResource` will be enhanced going forward to support more complex Bearer token test scenarios.
- -If there is an immediate need for a test to define Wiremock stubs not currently supported by `OidcWiremockTestResource` -one can do so via a `WireMockServer` instance injected into the test class, for example: - -[source, java] ----- -package io.quarkus.it.keycloak; - -import static com.github.tomakehurst.wiremock.client.WireMock.matching; -import static org.hamcrest.Matchers.equalTo; - -import org.junit.jupiter.api.Test; - -import com.github.tomakehurst.wiremock.WireMockServer; -import com.github.tomakehurst.wiremock.client.WireMock; - -import io.quarkus.test.common.QuarkusTestResource; -import io.quarkus.test.junit.QuarkusTest; -import io.quarkus.test.oidc.server.OidcWireMock; -import io.quarkus.test.oidc.server.OidcWiremockTestResource; -import io.restassured.RestAssured; - -@QuarkusTest -@QuarkusTestResource(OidcWiremockTestResource.class) -public class CustomOidcWireMockStubTest { - - @OidcWireMock - WireMockServer wireMockServer; - - @Test - public void testInvalidBearerToken() { - wireMockServer.stubFor(WireMock.post("/auth/realms/quarkus/protocol/openid-connect/token/introspect") - .withRequestBody(matching(".*token=invalid_token.*")) - .willReturn(WireMock.aResponse().withStatus(400))); - - RestAssured.given().auth().oauth2("invalid_token").when() - .get("/api/users/me/bearer") - .then() - .statusCode(401) - .header("WWW-Authenticate", equalTo("Bearer")); - } -} ----- - -[[integration-testing-keycloak-devservices]] -==== Dev Services for Keycloak - -Using xref:security-openid-connect-dev-services.adoc[Dev Services for Keycloak] is recommended for the integration testing against Keycloak. -`Dev Services for Keycloak` will launch and initialize a test container: it will create a `quarkus` realm, a `quarkus-app` client (`secret` secret) and add `alice` (`admin` and `user` roles) and `bob` (`user` role) users, where all of these properties can be customized. 
First you need to add the following dependency:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-keycloak-server</artifactId>
    <scope>test</scope>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
testImplementation("io.quarkus:quarkus-test-keycloak-server")
----

which provides the utility class `io.quarkus.test.keycloak.client.KeycloakTestClient` that you can use in tests for acquiring access tokens.

Next prepare your `application.properties`. You can start with a completely empty `application.properties` file as `Dev Services for Keycloak` will register `quarkus.oidc.auth-server-url` pointing to the running test container as well as `quarkus.oidc.client-id=quarkus-app` and `quarkus.oidc.credentials.secret=secret`.

But if you already have all the required `quarkus-oidc` properties configured then you only need to associate `quarkus.oidc.auth-server-url` with the `prod` profile for `Dev Services for Keycloak` to start a container, for example:

[source,properties]
----
%prod.quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus
----

If a custom realm file has to be imported into Keycloak before running the tests then you can configure `Dev Services for Keycloak` as follows:

[source,properties]
----
%prod.quarkus.oidc.auth-server-url=http://localhost:8180/auth/realms/quarkus
quarkus.keycloak.devservices.realm-path=quarkus-realm.json
----

Finally, write your test which will be executed in JVM mode:

[source,java]
----
package org.acme.security.openid.connect;

import io.quarkus.test.junit.QuarkusTest;
import io.quarkus.test.keycloak.client.KeycloakTestClient;
import io.restassured.RestAssured;
import org.junit.jupiter.api.Test;

@QuarkusTest
public class BearerTokenAuthenticationTest {

    KeycloakTestClient keycloakClient = new KeycloakTestClient();

    @Test
    public void testAdminAccess() {
        RestAssured.given().auth().oauth2(getAccessToken("alice"))
                .when().get("/api/admin")
                .then()
                .statusCode(200);
        RestAssured.given().auth().oauth2(getAccessToken("bob"))
                .when().get("/api/admin")
                .then()
                .statusCode(403);
    }

    protected String getAccessToken(String userName) {
        return keycloakClient.getAccessToken(userName);
    }
}
----

and in native mode:

[source,java]
----
package org.acme.security.openid.connect;

import io.quarkus.test.junit.QuarkusIntegrationTest;

@QuarkusIntegrationTest
public class NativeBearerTokenAuthenticationIT extends BearerTokenAuthenticationTest {
}
----

Please see xref:security-openid-connect-dev-services.adoc[Dev Services for Keycloak] for more information about the way it is initialized and configured.

[[integration-testing-keycloak]]
==== KeycloakTestResourceLifecycleManager

If you need to do some integration testing against Keycloak then you are encouraged to do it with <<integration-testing-keycloak-devservices,Dev Services for Keycloak>>.
Use `KeycloakTestResourceLifecycleManager` for your tests only if there is a good reason not to use `Dev Services for Keycloak`.

Start with adding the following dependency:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-keycloak-server</artifactId>
    <scope>test</scope>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
testImplementation("io.quarkus:quarkus-test-keycloak-server")
----

which provides `io.quarkus.test.keycloak.server.KeycloakTestResourceLifecycleManager` - an implementation of `io.quarkus.test.common.QuarkusTestResourceLifecycleManager` which starts a Keycloak container.

And configure the Maven Surefire plugin as follows:

[source,xml]
----
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <systemPropertyVariables>
            <keycloak.docker.image>${keycloak.docker.image}</keycloak.docker.image>
        </systemPropertyVariables>
    </configuration>
</plugin>
----

(and similarly the `maven-failsafe-plugin` when testing in native image).
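For native image testing, the corresponding `maven-failsafe-plugin` entry might look like this sketch, assuming the same `keycloak.docker.image` property is what your integration tests need:

[source,xml]
----
<plugin>
    <artifactId>maven-failsafe-plugin</artifactId>
    <configuration>
        <systemPropertyVariables>
            <!-- same property as the Surefire configuration above -->
            <keycloak.docker.image>${keycloak.docker.image}</keycloak.docker.image>
        </systemPropertyVariables>
    </configuration>
</plugin>
----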
Prepare the REST test endpoint and set `application.properties`, for example:

[source, properties]
----
# keycloak.url is set by KeycloakTestResourceLifecycleManager
quarkus.oidc.auth-server-url=${keycloak.url}/realms/quarkus/
quarkus.oidc.client-id=quarkus-service-app
quarkus.oidc.credentials.secret=secret
quarkus.oidc.application-type=service
----

and finally write the test code, for example:

[source, java]
----
import static io.quarkus.test.keycloak.server.KeycloakTestResourceLifecycleManager.getAccessToken;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

import io.quarkus.test.common.QuarkusTestResource;
import io.quarkus.test.junit.QuarkusTest;
import io.quarkus.test.keycloak.server.KeycloakTestResourceLifecycleManager;
import io.restassured.RestAssured;

@QuarkusTest
@QuarkusTestResource(KeycloakTestResourceLifecycleManager.class)
public class BearerTokenAuthorizationTest {

    @Test
    public void testBearerToken() {
        RestAssured.given().auth().oauth2(getAccessToken("alice"))
                .when().get("/api/users/preferredUserName")
                .then()
                .statusCode(200)
                // the test endpoint returns the name extracted from the injected SecurityIdentity Principal
                .body("userName", equalTo("alice"));
    }

}
----

`KeycloakTestResourceLifecycleManager` registers the `alice` and `admin` users. The user `alice` has only the `user` role by default - this can be customized with the `keycloak.token.user-roles` system property. The user `admin` has the `user` and `admin` roles by default - this can be customized with the `keycloak.token.admin-roles` system property.

By default, `KeycloakTestResourceLifecycleManager` uses HTTPS to initialize a Keycloak instance, which can be disabled with `keycloak.use.https=false`.
The default realm name is `quarkus` and the client id is `quarkus-service-app` - set the `keycloak.realm` and `keycloak.service.client` system properties to customize these values if needed.
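The system properties mentioned above can be passed to the tests via the Surefire plugin configuration shown earlier; a possible sketch, with illustrative values, looks like this:

[source,xml]
----
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <systemPropertyVariables>
            <!-- sketch: customize the roles of the default users -->
            <keycloak.token.user-roles>user,tester</keycloak.token.user-roles>
            <keycloak.token.admin-roles>user,admin,tester</keycloak.token.admin-roles>
            <!-- sketch: disable HTTPS for the test Keycloak instance -->
            <keycloak.use.https>false</keycloak.use.https>
        </systemPropertyVariables>
    </configuration>
</plugin>
----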
[[integration-testing-public-key]]
==== Local Public Key

You can also use a local inlined public key for testing your `quarkus-oidc` `service` applications:

[source,properties]
----
quarkus.oidc.client-id=test
quarkus.oidc.public-key=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAlivFI8qB4D0y2jy0CfEqFyy46R0o7S8TKpsx5xbHKoU1VWg6QkQm+ntyIv1p4kE1sPEQO73+HY8+Bzs75XwRTYL1BmR1w8J5hmjVWjc6R2BTBGAYRPFRhor3kpM6ni2SPmNNhurEAHw7TaqszP5eUF/F9+KEBWkwVta+PZ37bwqSE4sCb1soZFrVz/UT/LF4tYpuVYt3YbqToZ3pZOZ9AX2o1GCG3xwOjkc4x0W7ezbQZdC9iftPxVHR8irOijJRRjcPDtA6vPKpzLl6CyYnsIYPd99ltwxTHjr3npfv/3Lw50bAkbT4HeLFxTx4flEoZLKO/g0bAoV2uqBhkA9xnQIDAQAB

smallrye.jwt.sign.key.location=/privateKey.pem
----

Copy `privateKey.pem` from the `integration-tests/oidc-tenancy` directory in the `main` Quarkus repository and use test code similar to the one in the `WireMock` section above to generate JWT tokens. You can use your own test keys if preferred.

This approach provides more limited coverage compared to the WireMock approach - for example, the remote communication code is not covered.

[[integration-testing-security-annotation]]
==== TestSecurity annotation

You can use the `@TestSecurity` and `@OidcSecurity` annotations for testing the `service` application endpoint code which depends on the injected `JsonWebToken` as well as `UserInfo` and `OidcConfigurationMetadata`.
Add the following dependency:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-security-oidc</artifactId>
    <scope>test</scope>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
testImplementation("io.quarkus:quarkus-test-security-oidc")
----

and write test code like this:

[source, java]
----
import static org.hamcrest.Matchers.is;

import org.junit.jupiter.api.Test;

import io.quarkus.test.common.http.TestHTTPEndpoint;
import io.quarkus.test.junit.QuarkusTest;
import io.quarkus.test.security.TestSecurity;
import io.quarkus.test.security.oidc.Claim;
import io.quarkus.test.security.oidc.ConfigMetadata;
import io.quarkus.test.security.oidc.OidcSecurity;
import io.quarkus.test.security.oidc.UserInfo;
import io.restassured.RestAssured;

@QuarkusTest
@TestHTTPEndpoint(ProtectedResource.class)
public class TestSecurityAuthTest {

    @Test
    @TestSecurity(user = "userOidc", roles = "viewer")
    public void testOidc() {
        RestAssured.when().get("test-security-oidc").then()
                .body(is("userOidc:viewer"));
    }

    @Test
    @TestSecurity(user = "userOidc", roles = "viewer")
    @OidcSecurity(claims = {
            @Claim(key = "email", value = "user@gmail.com")
    }, userinfo = {
            @UserInfo(key = "sub", value = "subject")
    }, config = {
            @ConfigMetadata(key = "issuer", value = "issuer")
    })
    public void testOidcWithClaimsUserInfoAndMetadata() {
        RestAssured.when().get("test-security-oidc-claims-userinfo-metadata").then()
                .body(is("userOidc:viewer:user@gmail.com:subject:issuer"));
    }

}
----

where the `ProtectedResource` class may look like this:

[source, java]
----
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.jwt.JsonWebToken;

import io.quarkus.oidc.OidcConfigurationMetadata;
import io.quarkus.oidc.UserInfo;
import io.quarkus.security.Authenticated;

@Path("/service")
@Authenticated
public class ProtectedResource {

    @Inject
    JsonWebToken accessToken;
    @Inject
    UserInfo userInfo;
    @Inject
    OidcConfigurationMetadata configMetadata;

    @GET
    @Path("test-security-oidc")
    public String testSecurityOidc() {
        return accessToken.getName() + ":" + accessToken.getGroups().iterator().next();
    }

    @GET
    @Path("test-security-oidc-claims-userinfo-metadata")
    public String testSecurityOidcWithClaimsUserInfoMetadata() {
        return accessToken.getName() + ":" + accessToken.getGroups().iterator().next()
                + ":" + accessToken.getClaim("email")
                + ":" + userInfo.getString("sub")
                + ":" + configMetadata.get("issuer");
    }
}
----

Note that the `@TestSecurity` annotation must always be used; its `user` property is returned as `JsonWebToken.getName()` and its `roles` property as `JsonWebToken.getGroups()`.
The `@OidcSecurity` annotation is optional and can be used to set additional token claims, as well as `UserInfo` and `OidcConfigurationMetadata` properties.
Additionally, if the `quarkus.oidc.token.issuer` property is configured then it is used as the `OidcConfigurationMetadata` `issuer` property value.
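For example, assuming your tests expect a fixed `issuer` value in the configuration metadata, it can be set in `application.properties` (the URL below is illustrative):

[source,properties]
----
# Used as the OidcConfigurationMetadata 'issuer' value in @TestSecurity/@OidcSecurity tests
quarkus.oidc.token.issuer=https://server.example.com
----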
If you work with opaque tokens then you can test them as follows:

[source, java]
----
import static org.hamcrest.Matchers.is;

import org.junit.jupiter.api.Test;

import io.quarkus.test.common.http.TestHTTPEndpoint;
import io.quarkus.test.junit.QuarkusTest;
import io.quarkus.test.security.TestSecurity;
import io.quarkus.test.security.oidc.OidcSecurity;
import io.quarkus.test.security.oidc.TokenIntrospection;
import io.restassured.RestAssured;

@QuarkusTest
@TestHTTPEndpoint(ProtectedResource.class)
public class TestSecurityAuthTest {

    @Test
    @TestSecurity(user = "userOidc", roles = "viewer")
    @OidcSecurity(introspectionRequired = true,
        introspection = {
            @TokenIntrospection(key = "email", value = "user@gmail.com")
        }
    )
    public void testOidcWithTokenIntrospection() {
        RestAssured.when().get("test-security-oidc-opaque-token").then()
                .body(is("userOidc:viewer:userOidc:viewer:user@gmail.com"));
    }

}
----

where the `ProtectedResource` class may look like this:

[source, java]
----
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import io.quarkus.oidc.TokenIntrospection;
import io.quarkus.security.Authenticated;
import io.quarkus.security.identity.SecurityIdentity;

@Path("/service")
@Authenticated
public class ProtectedResource {

    @Inject
    SecurityIdentity securityIdentity;
    @Inject
    TokenIntrospection introspection;

    @GET
    @Path("test-security-oidc-opaque-token")
    public String testSecurityOidcOpaqueToken() {
        return securityIdentity.getPrincipal().getName() + ":" + securityIdentity.getRoles().iterator().next()
                + ":" + introspection.getString("username")
                + ":" + introspection.getString("scope")
                + ":" + introspection.getString("email");
    }
}
----

Note that the `@TestSecurity` `user` and `roles` attributes are available as the `TokenIntrospection` `username` and `scope` properties, and you can use `io.quarkus.test.security.oidc.TokenIntrospection` to add additional introspection response properties such as `email`, etc.
=== How to check the errors in the logs

Enable `io.quarkus.oidc.runtime.OidcProvider` `TRACE` level logging to see more details about token verification errors:

[source, properties]
----
quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".level=TRACE
quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".min-level=TRACE
----

Enable `io.quarkus.oidc.runtime.OidcRecorder` `TRACE` level logging to see more details about the `OidcProvider` client initialization errors:

[source, properties]
----
quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".level=TRACE
quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".min-level=TRACE
----

=== External and Internal Access to OpenID Connect Provider

Note that the OpenID Connect Provider's externally accessible token and other endpoints may have different HTTP(S) URLs compared to the URLs auto-discovered or configured relative to the internal `quarkus.oidc.auth-server-url` URL. For example, if your SPA acquires a token from an external token endpoint address and sends it to Quarkus as a Bearer token then the endpoint may report an issuer verification failure.

In such cases, if you work with Keycloak then start it with the `KEYCLOAK_FRONTEND_URL` system property set to the externally accessible base URL.
If you work with other OpenID Connect providers then check your provider's documentation.

=== How to use the 'client-id' property

The `quarkus.oidc.client-id` property identifies the OpenID Connect Client which requested the current bearer token. It can be an SPA application running in a browser or a Quarkus `web-app` confidential client application propagating the access token to the Quarkus `service` application.

This property is required if the `service` application is expected to introspect the tokens remotely - which is always the case for opaque tokens.
This property is optional if only local JSON Web Key token verification is used.

Nonetheless, setting this property is encouraged even if the endpoint does not require access to the remote introspection endpoint. The reason is that `client-id`, if set, can be used to verify the token audience, and it is also included in the logs when token verification fails, giving better traceability of the tokens issued to specific clients when they are analyzed over a longer period of time.

For example, if your OpenID Connect provider sets a token audience then the following configuration pattern is recommended:

[source, properties]
----
# Set client-id
quarkus.oidc.client-id=quarkus-app
# Token audience claim must contain 'quarkus-app'
quarkus.oidc.token.audience=${quarkus.oidc.client-id}
----

If you set `quarkus.oidc.client-id` but your endpoint does not require remote access to one of the OpenID Connect Provider endpoints (introspection, token acquisition, etc.) then do not set a client secret with the `quarkus.oidc.credentials` or similar properties as it will not be used.

Note that Quarkus `web-app` applications always require the `quarkus.oidc.client-id` property.
== References

* https://www.keycloak.org/documentation.html[Keycloak Documentation]
* https://openid.net/connect/[OpenID Connect]
* https://tools.ietf.org/html/rfc7519[JSON Web Token]
* xref:security-openid-connect-client.adoc[Quarkus - Using OpenID Connect and OAuth2 Client and Filters to manage access tokens]
* xref:security-openid-connect-dev-services.adoc[Dev Services for Keycloak]
* xref:security-jwt-build.adoc[Sign and encrypt JWT tokens with SmallRye JWT Build]
* xref:security.adoc#oidc-jwt-oauth2-comparison[Summary of Quarkus OIDC, JWT and OAuth2 features]
* xref:security.adoc[Quarkus Security]

diff --git a/_versions/2.7/guides/security-properties.adoc b/_versions/2.7/guides/security-properties.adoc
deleted file mode 100644
index 8c0418027ae..00000000000
--- a/_versions/2.7/guides/security-properties.adoc
+++ /dev/null
@@ -1,147 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Using Security with .properties File

include::./attributes.adoc[]

Quarkus provides support for properties-file-based authentication that is intended for
development and testing purposes. It is not recommended to use this in production as, at present, only
plaintext and MD5-hashed passwords are supported, and properties files are generally too limited for production use.

Add the following to your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-elytron-security-properties-file</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-elytron-security-properties-file")
----

== Configuration

The elytron-security-properties-file extension currently supports two different realms for the storage of authentication
and authorization information.
Both support storage of this information in properties files. The following sections
detail the specific configuration properties.

include::{generated-dir}/config/quarkus-elytron-security.adoc[opts=optional, leveloffset=+2]

=== Properties Files Realm Configuration

The properties files realm supports mapping of users to passwords and users to roles with a combination of properties files. They are configured with properties starting with `quarkus.security.users.file`.

.example application.properties file section for property files realm
[source,properties]
----
quarkus.security.users.file.enabled=true
quarkus.security.users.file.users=test-users.properties
quarkus.security.users.file.roles=test-roles.properties
quarkus.security.users.file.realm-name=MyRealm
quarkus.security.users.file.plain-text=true
----

==== Users.properties

The `quarkus.security.users.file.users` configuration property specifies a classpath resource which is a properties file with a user to password mapping, one per line. The following <<test-users-example>> illustrates the format:

[#test-users-example]
.example test-users.properties file
[source,properties]
----
scott=jb0ss <1>
jdoe=p4ssw0rd <2>
stuart=test
noadmin=n0Adm1n
----
<1> User `scott` has the password `jb0ss`
<2> User `jdoe` has the password `p4ssw0rd`

This file has the usernames and passwords stored in plain text, which is not recommended. If `plain-text` is set to false
(or omitted) in the config then passwords must be stored in the form `MD5 ( username : realm : password )`. The hash for the first user above can
be generated by running the command `echo -n scott:MyRealm:jb0ss | md5` (`md5sum` on Linux) from the command line.
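The `MD5 ( username : realm : password )` form can also be produced in plain Java, which is handy on platforms without an `md5` command; the following is a small illustrative sketch (the class name is ours, not part of Quarkus):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5PasswordHash {

    // Computes MD5(username:realm:password) as a lowercase hex string
    static String hash(String username, String realm, String password) throws NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest((username + ":" + realm + ":" + password).getBytes(StandardCharsets.UTF_8));
        // %032x pads with leading zeros so the result is always 32 hex characters
        return String.format("%032x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Same input as `echo -n scott:MyRealm:jb0ss | md5`
        System.out.println(hash("scott", "MyRealm", "jb0ss"));
    }
}
```

The printed value is what you would place on the right-hand side of the `scott=` entry when `plain-text` is false.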
==== Roles.properties

.example test-roles.properties file
[source,properties]
----
scott=Admin,admin,Tester,user <1>
jdoe=NoRolesUser <2>
stuart=admin,user <3>
noadmin=user
----
<1> User `scott` has been assigned the roles `Admin`, `admin`, `Tester` and `user`
<2> User `jdoe` has been assigned the role `NoRolesUser`
<3> User `stuart` has been assigned the roles `admin` and `user`.

=== Embedded Realm Configuration

The embedded realm also supports mapping of users to password and users to roles. It uses the main `application.properties` Quarkus configuration file to embed this information. They are configured with properties starting with `quarkus.security.users.embedded`.

The following is an example application.properties file section illustrating the embedded realm configuration:

.example application.properties file section for embedded realm
[source,properties]
----
quarkus.security.users.embedded.enabled=true
quarkus.security.users.embedded.plain-text=true
quarkus.security.users.embedded.users.scott=jb0ss
quarkus.security.users.embedded.users.stuart=test
quarkus.security.users.embedded.users.jdoe=p4ssw0rd
quarkus.security.users.embedded.users.noadmin=n0Adm1n
quarkus.security.users.embedded.roles.scott=Admin,admin,Tester,user
quarkus.security.users.embedded.roles.stuart=admin,user
quarkus.security.users.embedded.roles.jdoe=NoRolesUser
quarkus.security.users.embedded.roles.noadmin=user
----

As with the first example, this file has the usernames and passwords stored in plain text, which is not recommended. If `plain-text` is set to false
(or omitted) in the config then passwords must be stored in the form `MD5 ( username : realm : password )`. This can
be generated for the first example above by running the command `echo -n scott:MyRealm:jb0ss | md5` from the command line.
==== Embedded Users

The user to password mappings are specified in the `application.properties` file by properties keys of the form `quarkus.security.users.embedded.users.<user>=<password>`. The following <<password-example>> illustrates the syntax with 4 user to password mappings:

[#password-example]
.Example Passwords
[source,properties,linenums]
----
quarkus.security.users.embedded.users.scott=jb0ss # <1>
quarkus.security.users.embedded.users.stuart=test # <2>
quarkus.security.users.embedded.users.jdoe=p4ssw0rd
quarkus.security.users.embedded.users.noadmin=n0Adm1n
----
<1> User `scott` has password `jb0ss`
<2> User `stuart` has password `test`

==== Embedded Roles

The user to role mappings are specified in the `application.properties` file by properties keys of the form `quarkus.security.users.embedded.roles.<user>=role1[,role2[,role3[,...]]]`. The following <<roles-example>> illustrates the syntax with 4 user to role mappings:

[#roles-example]
.Example Roles
[source,properties,linenums]
----
quarkus.security.users.embedded.roles.scott=Admin,admin,Tester,user # <1>
quarkus.security.users.embedded.roles.stuart=admin,user # <2>
quarkus.security.users.embedded.roles.jdoe=NoRolesUser
quarkus.security.users.embedded.roles.noadmin=user
----
<1> User `scott` has roles `Admin`, `admin`, `Tester`, and `user`
<2> User `stuart` has roles `admin` and `user`

== References

* xref:security.adoc[Quarkus Security]

diff --git a/_versions/2.7/guides/security-testing.adoc b/_versions/2.7/guides/security-testing.adoc
deleted file mode 100644
index 08c1796135a..00000000000
--- a/_versions/2.7/guides/security-testing.adoc
+++ /dev/null
@@ -1,112 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Security Testing

include::./attributes.adoc[]

This document describes how to test Quarkus Security.
[[configuring-user-information]]
== Configuring User Information

You can use xref:security-properties.adoc[quarkus-elytron-security-properties-file] for testing security. This supports both embedding user info in `application.properties` and standalone properties files.

For example, the following configuration uses xref:config.adoc#configuration-profiles[Configuration Profiles] to configure embedded users for development mode, while OAuth2 is required in production:

[source,properties]
----
# Configure embedded authentication
%dev.quarkus.security.users.embedded.enabled=true
%dev.quarkus.security.users.embedded.plain-text=true
%dev.quarkus.security.users.embedded.users.scott=reader
%dev.quarkus.security.users.embedded.users.stuart=writer
%dev.quarkus.security.users.embedded.roles.scott=READER
%dev.quarkus.security.users.embedded.roles.stuart=READER,WRITER

# Configure OAuth2
quarkus.oauth2.enabled=true
%dev.quarkus.oauth2.enabled=false
quarkus.oauth2.client-id=client-id
quarkus.oauth2.client-secret=client-secret
quarkus.oauth2.introspection-url=http://host:port/introspect
----

[#testing-security]
== Test Security Extension

Quarkus provides explicit support for testing with different users, and with the security subsystem disabled. To use
this you must include the `quarkus-test-security` dependency:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-security</artifactId>
    <scope>test</scope>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
testImplementation("io.quarkus:quarkus-test-security")
----

This artifact provides the `io.quarkus.test.security.TestSecurity` annotation, which can be applied to test methods and
test classes to control the security context that the test is run with.
This allows you to do two things: you can disable
authorization so that tests can access secured endpoints without needing to be authenticated, and you can specify the identity
that you want the tests to run under.

A test that runs with authorization disabled can just set the `authorizationEnabled` property to false:

[source,java]
----
@Test
@TestSecurity(authorizationEnabled = false)
void someTestMethod() {
...
}
----

This will disable all access checks, which allows the test to access secured endpoints without needing to authenticate.

You can also use this to configure the current user that the test will run as:

[source,java]
----
@Test
@TestSecurity(user = "testUser", roles = {"admin", "user"})
void someTestMethod() {
...
}
----

This will run the test with an identity with the given username and roles. Note that these can be combined, so you can
disable authorization while also providing an identity to run the test under, which can be useful if the endpoint expects an
identity to be present.

See xref:security-openid-connect.adoc#integration-testing-security-annotation[OpenID Connect Bearer Token Integration testing], xref:security-openid-connect-web-authentication.adoc#integration-testing-security-annotation[OpenID Connect Authorization Code Flow Integration testing] and xref:security-jwt.adoc#integration-testing-security-annotation[SmallRye JWT Integration testing] for more details about testing the endpoint code which depends on the injected `JsonWebToken`.

[WARNING]
====
The feature is only available for `@QuarkusTest` and will **not** work on a `@NativeImageTest`.
====

=== Mixing security tests

If it becomes necessary to test security features using both `@TestSecurity` and Basic Auth (which is the fallback auth
mechanism when none is defined), then Basic Auth needs to be enabled explicitly,
for example by setting `quarkus.http.auth.basic=true` or `%test.quarkus.http.auth.basic=true`.
== Use WireMock for Integration Testing

You can also use WireMock to mock the authorization OAuth2 and OIDC services:
see xref:security-oauth2.adoc#integration-testing[OAuth2 Integration testing], xref:security-openid-connect.adoc#integration-testing-wiremock[OpenID Connect Bearer Token Integration testing], xref:security-openid-connect-web-authentication.adoc#integration-testing-wiremock[OpenID Connect Authorization Code Flow Integration testing] and xref:security-jwt.adoc#integration-testing-wiremock[SmallRye JWT Integration testing] for more details.

== References

* xref:security.adoc[Quarkus Security]

diff --git a/_versions/2.7/guides/security.adoc b/_versions/2.7/guides/security.adoc
deleted file mode 100644
index 2d4fae1cb20..00000000000
--- a/_versions/2.7/guides/security.adoc
+++ /dev/null
@@ -1,369 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Security Architecture and Guides

include::./attributes.adoc[]

Quarkus Security provides the architecture, multiple authentication and authorization mechanisms, and other tools for developers to build production-quality security for their Quarkus applications.

This document provides a brief overview of Quarkus Security and links to the individual guides.

== Architecture

`HttpAuthenticationMechanism` is the main entry point into Quarkus HTTP Security.

The Quarkus Security Manager uses `HttpAuthenticationMechanism` to extract the authentication credentials from the HTTP request and delegates to `IdentityProvider` to
complete the conversion of these credentials to `SecurityIdentity`.

For example, the credentials may come with the HTTP `Authorization` header, client HTTPS certificates or cookies.
`IdentityProvider` verifies the authentication credentials and maps them to `SecurityIdentity` which contains the username, roles, the original authentication credentials, and other attributes.

For every authenticated resource, you can inject a `SecurityIdentity` instance to get the authenticated identity information.

In some other contexts you may have other parallel representations of the same information (or parts of it) such as `SecurityContext`
for JAX-RS or `JsonWebToken` for JWT.

== Authentication mechanisms

Quarkus supports several sources to load authentication information from.

=== Basic and Form Authentication Mechanisms

Basic and Form HTTP-based authentication mechanisms are the core authentication mechanisms supported in Quarkus.
Please see xref:security-built-in-authentication.adoc#basic-auth[Basic HTTP Authentication] and xref:security-built-in-authentication.adoc#form-auth[Form HTTP Authentication] for more information.

=== Mutual TLS Authentication

Quarkus provides Mutual TLS authentication so that you can authenticate users based on their X.509 certificates.

Please see xref:security-built-in-authentication.adoc#mutual-tls[Mutual TLS Authentication] for more information.

=== OpenID Connect

`quarkus-oidc` extension provides a reactive, interoperable, multi-tenant enabled OpenID Connect adapter which supports `Bearer Token` and `Authorization Code Flow` authentication mechanisms.

`Bearer Token` mechanism extracts the token from the HTTP `Authorization` header.
`Authorization Code Flow` mechanism uses the OpenID Connect Authorization Code flow. It redirects the user to the IDP to authenticate and completes the authentication process after the user has been redirected back to Quarkus by exchanging the provided code grant for ID, access and refresh tokens.

ID and access `JWT` tokens are verified with the refreshable `JWK` key set but both JWT and opaque (binary) tokens can be introspected remotely.
See the xref:security-openid-connect.adoc[Using OpenID Connect to Protect Service Applications] guide for more information about the `Bearer Token` authentication mechanism.

See the xref:security-openid-connect-web-authentication.adoc[Using OpenID Connect to Protect Web Application] guide for more information about the `Authorization Code Flow` authentication mechanism.

[NOTE]
====
Both `quarkus-oidc` `Bearer` and `Authorization Code Flow` Authentication mechanisms use <<smallrye-jwt>> to represent JWT tokens as MicroProfile JWT `org.eclipse.microprofile.jwt.JsonWebToken`.
====

See xref:security-openid-connect-multitenancy.adoc[Using OpenID Connect Multi-Tenancy] for more information about multiple tenants which can support the `Bearer` or `Authorization Code Flow` authentication mechanism and be configured statically or dynamically.

[NOTE]
====
If you would like to have the Quarkus OIDC extension enabled at runtime then set `quarkus.oidc.tenant-enabled=false` at build time and re-enable it at runtime using a system property.
See also xref:security-openid-connect-multitenancy.adoc#disable-tenant[Disabling Tenant Configurations] for more information about managing the individual tenant configurations in multi-tenant OIDC deployments.
====

If you use Keycloak and Bearer tokens then also see the xref:security-keycloak-authorization.adoc[Using Keycloak to Centralize Authorization] guide.

[NOTE]
====
If you need to configure Keycloak programmatically then consider using the https://www.keycloak.org/docs/latest/server_development/#admin-rest-api[Keycloak Admin REST API] with the help of the `quarkus-keycloak-admin-client` extension.
====

=== OpenID Connect Client and Filters

`quarkus-oidc-client` extension provides `OidcClient` for acquiring and refreshing access tokens from OpenID Connect and OAuth2 providers which support the `client-credentials`, `password` and `refresh_token` token grants.
-
-The `quarkus-oidc-client-filter` extension depends on the `quarkus-oidc-client` extension and provides a JAX-RS `OidcClientRequestFilter` that sets the access token acquired by `OidcClient` as the HTTP `Authorization` header's `Bearer` scheme value. This filter can be registered with MP RestClient implementations injected into the current Quarkus endpoint, but it is not related to the authentication requirements of this service endpoint. For example, it can be a public endpoint, or it can be protected with MTLS; the important point is that this Quarkus endpoint does not have to be protected itself with the Quarkus OpenID Connect adapter.
-
-The `quarkus-oidc-token-propagation` extension depends on the `quarkus-oidc` extension and provides a JAX-RS `TokenCredentialRequestFilter` that sets the OpenID Connect Bearer or Authorization Code Flow access token as the HTTP `Authorization` header's `Bearer` scheme value. This filter can be registered with MP RestClient implementations injected into the current Quarkus endpoint, and the Quarkus endpoint itself must be protected with the Quarkus OpenID Connect adapter. This filter can be used to propagate the access token to downstream services.
-
-See the xref:security-openid-connect-client.adoc[Using OpenID Connect and OAuth2 Client] guide for more information.
-
-[[smallrye-jwt]]
-=== SmallRye JWT
-
-`quarkus-smallrye-jwt` provides a MicroProfile JWT 1.1.1 implementation and many more options to verify signed and encrypted `JWT` tokens and represent them as `org.eclipse.microprofile.jwt.JsonWebToken`.
-
-It provides an alternative to the `quarkus-oidc` Bearer Token authentication mechanism. It can currently verify only `JWT` tokens, using either PEM keys or a refreshable `JWK` key set.
-
-Additionally, it provides a `JWT Generation API` for easily creating `signed`, `inner-signed` and/or `encrypted` `JWT` tokens.
-
-See the xref:security-jwt.adoc[Using SmallRye JWT] guide for more information.
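Verification is typically driven by the standard MicroProfile JWT properties; a minimal sketch (the key location and issuer are placeholders):

[source,properties]
----
# Placeholder values for illustration only
mp.jwt.verify.publickey.location=META-INF/publicKey.pem
mp.jwt.verify.issuer=https://example.com/issuer
----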
-
-=== OAuth2
-
-`quarkus-elytron-security-oauth2` provides an alternative to the `quarkus-oidc` Bearer Token authentication mechanism. It is based on `Elytron` and is primarily meant for remotely introspecting opaque tokens.
-
-See the xref:security-oauth2.adoc[Using OAuth2] guide for more information.
-
-[[oidc-jwt-oauth2-comparison]]
-=== Choosing between OpenID Connect, SmallRye JWT and OAuth2 extensions
-
-The `quarkus-oidc` extension requires an OpenID Connect provider such as Keycloak, which can be used to verify Bearer tokens or authenticate end users with the Authorization Code flow. In both cases `quarkus-oidc` requires a connection to this OpenID Connect provider.
-
-`quarkus-oidc` is the only option when user authentication via the Authorization Code flow or support for multiple tenants is required. It can also request UserInfo using both Authorization Code Flow and Bearer access tokens.
-
-When Bearer tokens have to be verified, `quarkus-oidc`, `quarkus-smallrye-jwt` and `quarkus-elytron-security-oauth2` can all be used.
-
-If you have Bearer tokens in JWT format, all three extensions can be used. Both `quarkus-oidc` and `quarkus-smallrye-jwt` support refreshing the JsonWebKey (JWK) set when the OpenID Connect provider rotates the keys; therefore, `quarkus-oidc` or `quarkus-smallrye-jwt` should be used for verifying JWT tokens if remote token introspection has to be avoided or is not supported by the provider.
-
-`quarkus-smallrye-jwt` does not support remote introspection of opaque or even JWT tokens; it always relies on the locally available keys, possibly fetched from the OpenID Connect provider. So if you need to introspect JWT tokens remotely, both `quarkus-oidc` and `quarkus-elytron-security-oauth2` will work. Both extensions also support verification of opaque/binary tokens via remote introspection.
-
-`quarkus-oidc` and `quarkus-smallrye-jwt` can have both JWT and opaque tokens injected into the endpoint code; the injected JWT tokens may offer richer information about the user. All extensions can have the tokens injected as a `Principal`.
-
-`quarkus-smallrye-jwt` supports more key formats than `quarkus-oidc`. The latter will only use JWK-formatted keys that are part of a JWK set, while the former can also work with PEM keys.
-
-`quarkus-smallrye-jwt` can locally handle not only signed but also inner-signed-and-encrypted or encrypted-only tokens. In fact, `quarkus-oidc` and `quarkus-elytron-security-oauth2` can verify such tokens too, but only by treating them as opaque tokens and verifying them via remote introspection.
-
-`quarkus-elytron-security-oauth2` is the best choice if you need a lightweight library for the remote introspection of either opaque or JWT tokens.
-
-Note that the choice between the opaque and JWT token formats is often driven by architectural considerations. Opaque tokens are usually much shorter than JWT tokens, but they require most of the token-associated state to be maintained in the provider database; opaque tokens are effectively database pointers. JWT tokens are significantly longer than opaque tokens, but the provider effectively delegates storing most of the token-associated state to the client by storing it as token claims and signing and/or encrypting them.
-
-Below is a summary of the options.
-
-|===
-| | quarkus-oidc| quarkus-smallrye-jwt | quarkus-elytron-security-oauth2
-
-|Bearer JWT verification is required
-|Local Verification or Introspection
-|Local Verification
-|Introspection
-
-|Bearer Opaque Token verification is required
-|Introspection
-|No
-|Introspection
-
-|Refreshing JsonWebKey set for verifying JWT tokens
-|Yes
-|Yes
-|No
-
-|Represent token as Principal
-|Yes
-|Yes
-|Yes
-
-|Inject JWT as MP JWT JsonWebToken
-|Yes
-|Yes
-|No
-
-|Authorization Code Flow
-|Yes
-|No
-|No
-
-|Multi-tenancy
-|Yes
-|No
-|No
-
-|UserInfo support
-|Yes
-|No
-|No
-
-|PEM key format support
-|No
-|Yes
-|No
-
-|SecretKey support
-|No
-|In JsonWebKey format
-|No
-
-|InnerSigned/Encrypted or Encrypted tokens
-|Introspection
-|Local Verification
-|Introspection
-
-|Custom Token Verification
-|No
-|With Injected JWTParser
-|No
-
-|Accept JWT as cookie
-|No
-|Yes
-|No
-|===
-
-=== LDAP
-
-Please see the xref:security-ldap.adoc[Authenticate with LDAP] guide for more information about the LDAP authentication mechanism.
-
-[[identity-providers]]
-== Identity Providers
-
-`IdentityProvider` converts the authentication credentials provided by `HttpAuthenticationMechanism` to `SecurityIdentity`.
-
-Some extensions, such as `OIDC`, `OAuth2`, `SmallRye JWT` and `LDAP`, have inlined `IdentityProvider` implementations that are specific to the supported authentication flow.
-For example, `quarkus-oidc` uses its own `IdentityProvider` to convert a token to `SecurityIdentity`.
-
-If you use `Basic` or `Form` HTTP-based authentication, you have to add an `IdentityProvider` that can convert a username and password to `SecurityIdentity`.
-
-See xref:security-jpa.adoc[JPA IdentityProvider] and xref:security-jdbc.adoc[JDBC IdentityProvider] for more information.
-You can also use xref:security-testing.adoc#configuring-user-information[User Properties IdentityProvider] for testing.
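For example, a test-only user store can be sketched with the embedded users properties (the user, password and role below are made up for illustration):

[source,properties]
----
# Test/illustration values only - do not use plain-text users in production
quarkus.security.users.embedded.enabled=true
quarkus.security.users.embedded.plain-text=true
quarkus.security.users.embedded.users.alice=password
quarkus.security.users.embedded.roles.alice=admin
----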
-
-== Combining Authentication Mechanisms
-
-Multiple authentication mechanisms can be combined if they get their authentication credentials from different sources.
-For example, combining the built-in `Basic` and `quarkus-oidc` `Bearer` authentication mechanisms is allowed, but combining the `quarkus-oidc` `Bearer` and `smallrye-jwt` authentication mechanisms is not, because both will attempt to verify the token extracted from the HTTP `Authorization Bearer` scheme.
-
-=== Path Specific Authentication Mechanism
-
-You can enforce that only a single authentication mechanism is selected for a given request path, for example:
-[source,properties]
-----
-quarkus.http.auth.permission.basic-or-bearer.paths=/service
-quarkus.http.auth.permission.basic-or-bearer.policy=authenticated
-
-quarkus.http.auth.permission.basic.paths=/basic-only
-quarkus.http.auth.permission.basic.policy=authenticated
-quarkus.http.auth.permission.basic.auth-mechanism=basic
-
-quarkus.http.auth.permission.bearer.paths=/bearer-only
-quarkus.http.auth.permission.bearer.policy=authenticated
-quarkus.http.auth.permission.bearer.auth-mechanism=bearer
-----
-
-The value of the `auth-mechanism` property must match the authentication scheme supported by `HttpAuthenticationMechanism`, such as `basic`, `bearer` or `form`.
-
-== Proactive Authentication
-
-By default, Quarkus does what we call proactive authentication. This means that if an incoming request has a
-credential, that request will always be authenticated (even if the target page does not require authentication).
-
-See xref:security-built-in-authentication.adoc#proactive-authentication[Proactive Authentication] for more information.
-
-== Authorization
-
-See xref:security-authorization.adoc[Security Authorization] for more information about Role-Based Access Control and other authorization options.
-
-== Customization and other useful tips
-
-Quarkus Security is highly customizable.
One can register custom ``HttpAuthenticationMechanism``s, ``IdentityProvider``s and ``SecurityIdentityAugmentor``s.
-
-See xref:security-customization.adoc[Security Customization] for more information about customizing Quarkus Security and other useful tips about reactive security, registering the security providers, etc.
-
-== Secure connections with SSL
-
-See the xref:http-reference.adoc#ssl[Supporting secure connections with SSL] guide for more information.
-
-== Cross-Origin Resource Sharing
-
-If you plan to make your Quarkus application accessible to another application running on a different domain, you will need to configure CORS (Cross-Origin Resource Sharing). Please read the xref:http-reference.adoc#cors-filter[HTTP CORS documentation] for more information.
-
-== SameSite cookies
-
-Please see xref:http-reference.adoc#same-site-cookie[SameSite cookies] for information about adding a https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite[SameSite] cookie property to any of the cookies set by a Quarkus endpoint.
-
-== Testing
-
-See xref:security-testing.adoc[Security Testing] for more information about testing Quarkus Security.
-
-== Secret Engines
-=== Vault
-Quarkus provides very comprehensive HashiCorp Vault support; please see the link:{vault-guide}[Quarkus and HashiCorp Vault] documentation for more information.
-
-== Secure serialization
-
-When using Security along with RESTEasy Reactive and Jackson, Quarkus can limit the fields that are included in JSON serialization based on the configured security. See the xref:resteasy-reactive.adoc#secure-serialization[RESTEasy Reactive documentation] for details.
-
-== National Vulnerability Database
-
-Most Quarkus tags have been registered in the link:https://nvd.nist.gov[National Vulnerability Database] (NVD) using the Common Platform Enumeration (CPE) name format.
-All registered Quarkus CPE names can be found using link:https://nvd.nist.gov/products/cpe/search/results?namingFormat=2.3&keyword=quarkus[this search query].
-If a Quarkus tag represented by the given CPE name entry is affected by some CVE, you will be able to follow a provided link to that CVE.
-
-We will be asking the NVD CPE team to update the list, as well as link Quarkus CPE name entries with the related CVEs, on a regular basis.
-If you work with the link:https://jeremylong.github.io/DependencyCheck/dependency-check-maven/[OWASP Dependency Check Plugin], which uses NVD feeds to detect vulnerabilities at application build time, and see a false positive reported, please re-open link:https://github.com/quarkusio/quarkus/issues/2611[this issue] and provide the details.
-
-You can add `OWASP Dependency Check Plugin` to your project's `pom.xml` like this:
-
-[source,xml]
-----
-<plugin>
-    <groupId>org.owasp</groupId>
-    <artifactId>dependency-check-maven</artifactId>
-    <version>${owasp-dependency-check-plugin.version}</version>
-    <configuration>
-        <failBuildOnCVSS>7</failBuildOnCVSS>
-        <suppressionFiles>
-            <suppressionFile>${project.basedir}/dependency-cpe-suppression.xml</suppressionFile>
-        </suppressionFiles>
-    </configuration>
-</plugin>
-----
-
-You can change the `failBuildOnCVSS` value to detect less severe issues as well.
-
-A suppression list may vary depending on whether you'd like to keep checking the false positives to avoid missing something or not.
-For example, it can look like this:
-
-[source,xml]
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<suppressions xmlns="https://jeremylong.github.io/DependencyCheck/dependency-suppression.1.3.xsd">
-    <suppress>
-        <notes>Suppress the false positive reported for netty-tcnative-classes</notes>
-        <gav regex="true">^io\.netty:netty-tcnative-classes.*:.*$</gav>
-        <cpe>cpe:/a:netty:netty</cpe>
-    </suppress>
-    <suppress>
-        <notes>Suppress the false positive reported for quarkus-mutiny</notes>
-        <gav regex="true">^io\.quarkus:quarkus-mutiny.*:.*$</gav>
-        <cpe>cpe:/a:mutiny:mutiny</cpe>
-    </suppress>
-    <suppress>
-        <notes>Suppress the false positive reported for mutiny</notes>
-        <gav regex="true">^io\.smallrye.reactive:mutiny.*:.*$</gav>
-        <cpe>cpe:/a:mutiny:mutiny</cpe>
-    </suppress>
-    <suppress>
-        <notes>Suppress the false positive reported for smallrye-mutiny</notes>
-        <gav regex="true">^io\.smallrye.reactive:smallrye-mutiny.*:.*$</gav>
-        <cpe>cpe:/a:mutiny:mutiny</cpe>
-    </suppress>
-    <suppress>
-        <notes>Suppress the false positive reported for vertx-mutiny</notes>
-        <gav regex="true">^io\.smallrye.reactive:vertx-mutiny.*:.*$</gav>
-        <cpe>cpe:/a:mutiny:mutiny</cpe>
-    </suppress>
-    <suppress>
-        <notes>Suppress the false positive reported for graal-sdk</notes>
-        <gav regex="true">^org\.graalvm\.sdk:graal-sdk:.*$</gav>
-        <cpe>cpe:/a:oracle:graalvm</cpe>
-    </suppress>
-</suppressions>
-----
-
-Such a suppression list has to be carefully prepared and revisited from time to time. You should consider making individual suppressions time limited by adding an `until` attribute, for example: `<suppress until="2022-01-01Z">...</suppress>`.
It will let you double-check that only the same known false positives are reported when the suppression period expires, and after reviewing the report you can set a new expiry date.
-
-Note that link:https://jeremylong.github.io/DependencyCheck/dependency-check-maven/[OWASP Dependency Check Plugin] `6.5.3` or later should be used with Quarkus.
diff --git a/_versions/2.7/guides/smallrye-fault-tolerance.adoc b/_versions/2.7/guides/smallrye-fault-tolerance.adoc
deleted file mode 100644
index 8d50d3e8ff9..00000000000
--- a/_versions/2.7/guides/smallrye-fault-tolerance.adoc
+++ /dev/null
@@ -1,533 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= SmallRye Fault Tolerance
-
-include::./attributes.adoc[]
-
-One of the challenges brought by the distributed nature of microservices is that communication with external systems is
-inherently unreliable. This increases the demand on application resiliency. To simplify making more resilient
-applications, Quarkus provides https://github.com/smallrye/smallrye-fault-tolerance/[SmallRye Fault Tolerance], an
-implementation of the https://github.com/eclipse/microprofile-fault-tolerance/[MicroProfile Fault Tolerance]
-specification.
-
-In this guide, we demonstrate the usage of MicroProfile Fault Tolerance annotations such as `@Timeout`, `@Fallback`,
-`@Retry` and `@CircuitBreaker`.
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-
-== The Scenario
-
-The application built in this guide simulates a simple backend for a gourmet coffee e-shop. It implements a REST
-endpoint providing information about the coffee samples we have in store.
-
-Let's imagine, although it's not implemented as such, that some of the methods in our endpoint require communication
-to external services like a database or an external microservice, which introduces a factor of unreliability.
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `microprofile-fault-tolerance-quickstart` {quickstarts-tree-url}/microprofile-fault-tolerance-quickstart[directory].
-
-== Creating the Maven Project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: microprofile-fault-tolerance-quickstart
-:create-app-extensions: resteasy,smallrye-fault-tolerance,resteasy-jackson
-include::includes/devtools/create-app.adoc[]
-
-This command generates a project, importing the extensions for RESTEasy/JAX-RS and SmallRye Fault Tolerance.
-
-If you already have your Quarkus project configured, you can add the `smallrye-fault-tolerance` extension
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: smallrye-fault-tolerance
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-smallrye-fault-tolerance</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-smallrye-fault-tolerance")
-----
-
-== Preparing an Application: REST Endpoint and CDI Bean
-
-In this section we create a skeleton of our application, so that we have something that we can extend and to which
-we can add fault tolerance features later on.
-
-First, create a simple entity representing a coffee sample in our store:
-
-[source,java]
-----
-package org.acme.microprofile.faulttolerance;
-
-public class Coffee {
-
-    public Integer id;
-    public String name;
-    public String countryOfOrigin;
-    public Integer price;
-
-    public Coffee() {
-    }
-
-    public Coffee(Integer id, String name, String countryOfOrigin, Integer price) {
-        this.id = id;
-        this.name = name;
-        this.countryOfOrigin = countryOfOrigin;
-        this.price = price;
-    }
-}
-----
-
-Let's continue with a simple CDI bean that will work as a repository of our coffee samples.
-
-[source,java]
-----
-package org.acme.microprofile.faulttolerance;
-
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.stream.Collectors;
-import javax.enterprise.context.ApplicationScoped;
-
-@ApplicationScoped
-public class CoffeeRepositoryService {
-
-    private Map<Integer, Coffee> coffeeList = new HashMap<>();
-
-    public CoffeeRepositoryService() {
-        coffeeList.put(1, new Coffee(1, "Fernandez Espresso", "Colombia", 23));
-        coffeeList.put(2, new Coffee(2, "La Scala Whole Beans", "Bolivia", 18));
-        coffeeList.put(3, new Coffee(3, "Dak Lak Filter", "Vietnam", 25));
-    }
-
-    public List<Coffee> getAllCoffees() {
-        return new ArrayList<>(coffeeList.values());
-    }
-
-    public Coffee getCoffeeById(Integer id) {
-        return coffeeList.get(id);
-    }
-
-    public List<Coffee> getRecommendations(Integer id) {
-        if (id == null) {
-            return Collections.emptyList();
-        }
-        return coffeeList.values().stream()
-                .filter(coffee -> !id.equals(coffee.id))
-                .limit(2)
-                .collect(Collectors.toList());
-    }
-}
-----
-
-Finally, create the `org.acme.microprofile.faulttolerance.CoffeeResource` class as follows:
-
-[source,java]
-----
-package org.acme.microprofile.faulttolerance;
-
-import java.util.List;
-import java.util.Random;
-import java.util.concurrent.atomic.AtomicLong;
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.jboss.logging.Logger;
-
-@Path("/coffee")
-public class CoffeeResource {
-
-    private static final Logger LOGGER = Logger.getLogger(CoffeeResource.class);
-
-    @Inject
-    CoffeeRepositoryService coffeeRepository;
-
-    private AtomicLong counter = new AtomicLong(0);
-
-    @GET
-    public List<Coffee> coffees() {
-        final Long invocationNumber = counter.getAndIncrement();
-
-        maybeFail(String.format("CoffeeResource#coffees() invocation #%d failed", invocationNumber));
-
-        LOGGER.infof("CoffeeResource#coffees() invocation #%d returning successfully", invocationNumber);
-        return coffeeRepository.getAllCoffees();
-    }
-
-    private void maybeFail(String failureLogMessage) {
-        if (new Random().nextBoolean()) {
-            LOGGER.error(failureLogMessage);
-            throw new RuntimeException("Resource failure.");
-        }
-    }
-}
-----
-
-At this point, we expose a single REST method that will show a list of coffee samples in JSON format. Note
-that we introduced some fault-making code in our `CoffeeResource#maybeFail()` method, which is going to cause failures
-in the `CoffeeResource#coffees()` endpoint method in about 50% of requests.
-
-Why not check that our application works? Run the Quarkus development server with:
-
-include::includes/devtools/dev.adoc[]
-
-and open `http://localhost:8080/coffee` in your browser. Make a couple of requests (remember, we expect some of them
-to fail). At least some of the requests should show us the list of our coffee samples in JSON, the rest will fail
-with a `RuntimeException` thrown in `CoffeeResource#maybeFail()`.
-
-Congratulations, you've just made a working (although somewhat unreliable) Quarkus application!
-
-== Adding Resiliency: Retries
-
-Keep the Quarkus development server running, and in your IDE add the `@Retry` annotation to the `CoffeeResource#coffees()`
-method as follows and save the file:
-
-[source,java]
-----
-import org.eclipse.microprofile.faulttolerance.Retry;
-...
-
-public class CoffeeResource {
-    ...
-    @GET
-    @Retry(maxRetries = 4)
-    public List<Coffee> coffees() {
-        ...
-    }
-    ...
-}
-----
-
-Hit refresh in your browser. The Quarkus development server will automatically detect the changes
-and recompile the app for you, so there's no need to restart it.
-
-You can hit refresh a couple more times. Practically all requests should now be succeeding. The `CoffeeResource#coffees()`
-method is still in fact failing about 50% of the time, but every time it happens, the platform will automatically retry
-the call!
-
-To see that the failures still happen, check the output of the development server. The log messages should be
-similar to these:
-
-[source]
-----
-2019-03-06 12:17:41,725 INFO  [org.acm.fau.CoffeeResource] (XNIO-1 task-1) CoffeeResource#coffees() invocation #5 returning successfully
-2019-03-06 12:17:44,187 INFO  [org.acm.fau.CoffeeResource] (XNIO-1 task-1) CoffeeResource#coffees() invocation #6 returning successfully
-2019-03-06 12:17:45,166 ERROR [org.acm.fau.CoffeeResource] (XNIO-1 task-1) CoffeeResource#coffees() invocation #7 failed
-2019-03-06 12:17:45,172 ERROR [org.acm.fau.CoffeeResource] (XNIO-1 task-1) CoffeeResource#coffees() invocation #8 failed
-2019-03-06 12:17:45,176 INFO  [org.acm.fau.CoffeeResource] (XNIO-1 task-1) CoffeeResource#coffees() invocation #9 returning successfully
-----
-
-You can see that every time an invocation fails, it's immediately followed by another invocation, until one succeeds.
-Since we allowed 4 retries, five invocations would have to fail in a row for the user to actually be exposed
-to a failure, which is fairly unlikely to happen.
-
-== Adding Resiliency: Timeouts
-
-So what else have we got in MicroProfile Fault Tolerance? Let's look into timeouts.
-
-Add the following two methods to our `CoffeeResource` endpoint. Again, no need to restart the server, just paste the code
-and save the file.
-
-[source,java]
-----
-import org.jboss.resteasy.annotations.jaxrs.PathParam;
-import org.eclipse.microprofile.faulttolerance.Timeout;
-...
-public class CoffeeResource {
-    ...
-    @GET
-    @Path("/{id}/recommendations")
-    @Timeout(250)
-    public List<Coffee> recommendations(@PathParam int id) {
-        long started = System.currentTimeMillis();
-        final long invocationNumber = counter.getAndIncrement();
-
-        try {
-            randomDelay();
-            LOGGER.infof("CoffeeResource#recommendations() invocation #%d returning successfully", invocationNumber);
-            return coffeeRepository.getRecommendations(id);
-        } catch (InterruptedException e) {
-            LOGGER.errorf("CoffeeResource#recommendations() invocation #%d timed out after %d ms",
-                    invocationNumber, System.currentTimeMillis() - started);
-            return null;
-        }
-    }
-
-    private void randomDelay() throws InterruptedException {
-        Thread.sleep(new Random().nextInt(500));
-    }
-}
-----
-
-We added some new functionality. We want to be able to recommend some related coffees based on a coffee that a user
-is currently looking at. It's not critical functionality; it's a nice-to-have. When the system is overloaded and the
-logic behind obtaining recommendations takes too long to execute, we would rather time out and render the UI without
-recommendations.
-
-Note that the timeout was configured to 250 ms, and a random artificial delay between 0 and 500 ms was introduced
-into the `CoffeeResource#recommendations()` method.
-
-In your browser, go to `http://localhost:8080/coffee/2/recommendations` and hit refresh a couple of times.
-
-You should see some requests time out with `org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException`.
-Requests that do not time out should show two recommended coffee samples in JSON.
-
-== Adding Resiliency: Fallbacks
-
-Let's make our recommendations feature even better by providing a fallback (and presumably faster) way of getting related
-coffees.
-
-Add a fallback method to `CoffeeResource` and a `@Fallback` annotation to the `CoffeeResource#recommendations()` method
-as follows:
-
-[source,java]
-----
-import java.util.Collections;
-import org.jboss.resteasy.annotations.jaxrs.PathParam;
-import org.eclipse.microprofile.faulttolerance.Fallback;
-
-...
-public class CoffeeResource {
-    ...
-    @Fallback(fallbackMethod = "fallbackRecommendations")
-    public List<Coffee> recommendations(@PathParam int id) {
-        ...
-    }
-
-    public List<Coffee> fallbackRecommendations(int id) {
-        LOGGER.info("Falling back to RecommendationResource#fallbackRecommendations()");
-        // safe bet, return something that everybody likes
-        return Collections.singletonList(coffeeRepository.getCoffeeById(1));
-    }
-    ...
-}
-----
-
-Hit refresh several times on `http://localhost:8080/coffee/2/recommendations`.
-The `TimeoutException` should not appear anymore. Instead, in case of a timeout, the page will
-display a single recommendation that we hardcoded in our fallback method `fallbackRecommendations()`, rather than
-the two recommendations returned by the original method.
-
-Check the server output to see that the fallback is really happening:
-
-[source]
-----
-2020-01-09 13:21:34,250 INFO  [org.acm.fau.CoffeeResource] (executor-thread-1) CoffeeResource#recommendations() invocation #1 returning successfully
-2020-01-09 13:21:36,354 ERROR [org.acm.fau.CoffeeResource] (executor-thread-1) CoffeeResource#recommendations() invocation #2 timed out after 250 ms
-2020-01-09 13:21:36,355 INFO  [org.acm.fau.CoffeeResource] (executor-thread-1) Falling back to RecommendationResource#fallbackRecommendations()
-----
-
-NOTE: The fallback method is required to have the same parameters as the original method.
-
-== Adding Resiliency: Circuit Breaker
-
-A circuit breaker is useful for limiting the number of failures happening in the system when part of the system becomes
-temporarily unstable.
The circuit breaker records successful and failed invocations of a method, and when the ratio
-of failed invocations reaches the specified threshold, the circuit breaker _opens_ and blocks all further invocations
-of that method for a given time.
-
-Add the following code into the `CoffeeRepositoryService` bean, so that we can demonstrate a circuit breaker in action:
-
-[source,java]
-----
-import java.util.Random;
-import java.util.concurrent.atomic.AtomicLong;
-import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
-...
-
-public class CoffeeRepositoryService {
-    ...
-
-    private AtomicLong counter = new AtomicLong(0);
-
-    @CircuitBreaker(requestVolumeThreshold = 4)
-    public Integer getAvailability(Coffee coffee) {
-        maybeFail();
-        return new Random().nextInt(30);
-    }
-
-    private void maybeFail() {
-        // introduce some artificial failures
-        final Long invocationNumber = counter.getAndIncrement();
-        if (invocationNumber % 4 > 1) { // alternate 2 successful and 2 failing invocations
-            throw new RuntimeException("Service failed.");
-        }
-    }
-}
-----
-
-And add the code below into the `CoffeeResource` endpoint:
-
-[source,java]
-----
-public class CoffeeResource {
-    ...
-    @Path("/{id}/availability")
-    @GET
-    public Response availability(@PathParam int id) {
-        final Long invocationNumber = counter.getAndIncrement();
-
-        Coffee coffee = coffeeRepository.getCoffeeById(id);
-        // check that coffee with given id exists, return 404 if not
-        if (coffee == null) {
-            return Response.status(Response.Status.NOT_FOUND).build();
-        }
-
-        try {
-            Integer availability = coffeeRepository.getAvailability(coffee);
-            LOGGER.infof("CoffeeResource#availability() invocation #%d returning successfully", invocationNumber);
-            return Response.ok(availability).build();
-        } catch (RuntimeException e) {
-            String message = e.getClass().getSimpleName() + ": " + e.getMessage();
-            LOGGER.errorf("CoffeeResource#availability() invocation #%d failed: %s", invocationNumber, message);
-            return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
-                    .entity(message)
-                    .type(MediaType.TEXT_PLAIN_TYPE)
-                    .build();
-        }
-    }
-    ...
-}
-----
-
-We added another piece of functionality: the application can return the number of remaining packages of a given coffee
-in our store (just a random number).
-
-This time an artificial failure was introduced in the CDI bean: the `CoffeeRepositoryService#getAvailability()` method is
-going to alternate between two successful and two failed invocations.
-
-We also added a `@CircuitBreaker` annotation with `requestVolumeThreshold = 4`. `CircuitBreaker.failureRatio` is
-0.5 by default, and `CircuitBreaker.delay` is 5 seconds by default. That means that the circuit breaker will open
-when 2 of the last 4 invocations failed, and it will stay open for 5 seconds.
-
-To test this out, do the following:
-
-1. Go to `http://localhost:8080/coffee/2/availability` in your browser. You should see a number being returned.
-2. Hit refresh; this second request should again be successful and return a number.
-3. Refresh two more times.
Both times you should see the text "RuntimeException: Service failed.", which is the exception
-   thrown by `CoffeeRepositoryService#getAvailability()`.
-4. Refresh a couple more times. Unless you waited too long, you should again see an exception, but this time it's
-   "CircuitBreakerOpenException: getAvailability". This exception indicates that the circuit breaker opened
-   and the `CoffeeRepositoryService#getAvailability()` method is not being called anymore.
-5. Give it 5 seconds, during which the circuit breaker should close, and you should be able to make two successful requests
-   again.
-
-== Runtime configuration
-
-You can override the annotation parameters at runtime in your `application.properties` file.
-
-If we take the retry example that we already saw:
-
-[source,java]
-----
-package org.acme;
-
-import org.eclipse.microprofile.faulttolerance.Retry;
-...
-
-public class CoffeeResource {
-    ...
-    @GET
-    @Retry(maxRetries = 4)
-    public List<Coffee> coffees() {
-        ...
-    }
-    ...
-}
-----
-
-We can override the `maxRetries` parameter, using 6 retries instead of 4, with the following configuration item:
-
-[source,properties]
-----
-org.acme.CoffeeResource/coffees/Retry/maxRetries=6
-----
-
-NOTE: The format is `fully-qualified-class-name/method-name/annotation-name/property-name=value`.
-You can also configure a property for the annotation globally via `annotation-name/property-name=value`.
-
-== Conclusion
-
-SmallRye Fault Tolerance allows you to improve the resiliency of your application without impacting the complexity
-of your business logic.
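Conceptually, the combination of `@Retry` and `@Fallback` boils down to very simple control flow. The following self-contained sketch is not Quarkus or SmallRye code, just a plain-Java illustration of the semantics (all names in it are made up):

```java
import java.util.function.Supplier;

public class RetryWithFallbackSketch {

    // Roughly what @Retry(maxRetries = N) + @Fallback(fallbackMethod = ...) do:
    // invoke the action, retry up to maxRetries extra times on failure,
    // and only fall back when every attempt has failed.
    static <T> T invoke(Supplier<T> action, int maxRetries, Supplier<T> fallback) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                // swallow and retry; the real implementation also honors
                // retryOn/abortOn filters, delays and jitter
            }
        }
        return fallback.get();
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice, then succeeds; with maxRetries = 4 the caller never notices.
        String result = invoke(() -> {
            if (calls[0]++ < 2) {
                throw new RuntimeException("transient failure");
            }
            return "coffees";
        }, 4, () -> "fallback recommendation");
        System.out.println(result); // prints "coffees"
    }
}
```

The key point the sketch makes is that the fallback only runs after the last retry is exhausted, which is why in the guide the `TimeoutException` disappears once `@Fallback` is added.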
-
-All that is needed to enable the fault tolerance features in Quarkus is:
-
-:devtools-wrapped:
-* adding the `smallrye-fault-tolerance` Quarkus extension to your project using the `quarkus-maven-plugin`:
-+
-include::includes/devtools/extension-add.adoc[]
-* or simply adding the following Maven dependency:
-+
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-smallrye-fault-tolerance</artifactId>
-</dependency>
-----
-+
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-smallrye-fault-tolerance")
-----
-:!devtools-wrapped:
-
-== Additional resources
-
-SmallRye Fault Tolerance has more features than shown here.
-Please check the link:https://smallrye.io/docs/smallrye-fault-tolerance/5.2.0/index.html[SmallRye Fault Tolerance documentation] to learn about them.
-
-In Quarkus, you can use the SmallRye Fault Tolerance optional features out of the box.
-
-Support for Mutiny is present, so your asynchronous methods can return `Uni` in addition to `CompletionStage`.
-
-MicroProfile Context Propagation is integrated with Fault Tolerance, so existing contexts are automatically propagated to your asynchronous methods.
-
-[NOTE]
-====
-This also applies to the CDI request context: if it is active on the original thread, it is propagated to the new thread, but if it's not, then the new thread won't have it either.
-This is contrary to the MicroProfile Fault Tolerance specification, which states that the request context must be active during the `@Asynchronous` method invocation.
-
-We believe that in the presence of MicroProfile Context Propagation, this requirement should not apply.
-The entire point of context propagation is to make sure the new thread has the same contexts as the original thread.
-==== - -Non-compatible mode is enabled by default, so methods that return `CompletionStage` (or `Uni`) have asynchronous fault tolerance applied without any `@Asynchronous`, `@Blocking` or `@NonBlocking` annotation. - -[NOTE] -==== -This mode is not compatible with the MicroProfile Fault Tolerance specification, albeit the incompatibility is very small. -To restore full compatibility, add this configuration property: - -[source,properties] ----- -smallrye.faulttolerance.mp-compatibility=true ----- -==== diff --git a/_versions/2.7/guides/smallrye-graphql-client.adoc b/_versions/2.7/guides/smallrye-graphql-client.adoc deleted file mode 100644 index 72a3ceef280..00000000000 --- a/_versions/2.7/guides/smallrye-graphql-client.adoc +++ /dev/null @@ -1,352 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= SmallRye GraphQL Client - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use the GraphQL client library. -The client is implemented by the https://github.com/smallrye/smallrye-graphql/[SmallRye GraphQL] project. -This guide is specifically geared towards the client side, so if you need an introduction to GraphQL in -general, first refer to the xref:smallrye-graphql.adoc[SmallRye GraphQL guide], which provides an introduction -to the GraphQL query language, general concepts and server-side development. - -The guide will walk you through developing and running a simple application that uses both supported -types of GraphQL clients to retrieve data from a remote resource, that being a database related to Star Wars. -It's available at https://graphql.org/swapi-graphql[this webpage] if you want to experiment with it manually. -The web UI allows you to write and execute GraphQL queries against it. 
- -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== GraphQL client types introduction - -Two types of GraphQL clients are supported. - -The *typesafe* client works very much like the MicroProfile REST Client adjusted for calling GraphQL endpoints. -A client instance is basically a proxy that you can call like a regular Java object, but under the hood, -the call will be translated to a GraphQL operation. It works with domain classes directly. -Any input and output objects for the operation will be translated to/from their representations -in the GraphQL query language. - -The *dynamic* client, on the other hand, works rather like an equivalent of the JAX-RS client -from the `javax.ws.rs.client` package. It does not require the domain classes to work, it works with -abstract representations of GraphQL documents instead. Documents are built using a domain-specific language (DSL). -The exchanged objects are treated as an abstract `JsonObject`, but, when necessary, -it is possible to convert them to concrete model objects (if suitable model classes are available). - -The typesafe client can be viewed as a rather high-level and more declarative approach designed for ease of use, -whereas the dynamic client is lower-level, more imperative, somewhat more verbose to use, but allows finer grained -control over operations and responses. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `microprofile-graphql-client-quickstart` {quickstarts-tree-url}/microprofile-graphql-client-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project. 
Create a new project with the following command: - -:create-app-artifact-id: microprofile-graphql-client-quickstart -:create-app-extensions: resteasy-reactive-jsonb,graphql-client,rest-client-reactive -include::includes/devtools/create-app.adoc[] - -NOTE: The typesafe GraphQL client depends on the REST client, so we included the `rest-client-reactive` extension -in the `extensions` list. You may also switch to the traditional non-reactive `rest-client` if the rest of -your application depends on the non-reactive RESTEasy stack (you can't mix reactive and non-reactive RESTEasy). -If you're only going to use the dynamic GraphQL client and don't use RESTEasy in your application, -you may leave out the REST client dependency completely. -This command generates a project, importing the `smallrye-graphql-client` extension. - -If you already have your Quarkus project configured, you can add the `smallrye-graphql-client` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: graphql-client,rest-client-reactive -include::includes/devtools/extension-add.adoc[] - -Again, you may leave out `rest-client-reactive` if you're only going to use the dynamic client. - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-smallrye-graphql-client</artifactId> -</dependency> -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-rest-client-reactive</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-smallrye-graphql-client") -implementation("io.quarkus:quarkus-rest-client-reactive") ----- - -== The application - -The application we will build makes use of both types of GraphQL clients. 
In both cases, -they will connect to the Star Wars service at https://graphql.org/swapi-graphql[SWAPI] and -query it for a list of Star Wars films, and, for each film, the names of the planets which -appear in that film. - -The corresponding GraphQL query looks like this: - -[source] ----- -{ - allFilms { - films { - title - planetConnection { - planets { - name - } - } - } - } -} ----- - -You may go to https://graphql.org/swapi-graphql[the webpage] to execute this query manually. - -== Using the Typesafe client - -To use the typesafe client, we need the corresponding model classes that are compatible with -the schema. There are two ways to obtain them. First is to use the client generator offered by SmallRye GraphQL, -which generates classes from the schema document and a list of queries. This generator is considered highly -experimental for now, and is not covered in this example. If interested, refer to the -https://github.com/smallrye/smallrye-graphql/tree/main/client/generator[Client Generator] and its documentation. - -In this example, we will create a slimmed down version of the model classes manually, with only the fields -that we need, and ignore all the stuff that we don't need. We will need the classes for `Film` and `Planet`. -But, the service is also using specific wrappers named `FilmConnection` and `PlanetConnection`, which, -for our purpose, will serve just to contain the actual list of `Film` and `Planet` instances, respectively. 
- -Let's create all the model classes and put them into the `org.acme.microprofile.graphql.client.model` package: - -[source,java] ----- -public class FilmConnection { - - private List<Film> films; - - public List<Film> getFilms() { - return films; - } - - public void setFilms(List<Film> films) { - this.films = films; - } -} - -public class Film { - - private String title; - - private PlanetConnection planetConnection; - - public String getTitle() { - return title; - } - - public void setTitle(String title) { - this.title = title; - } - - public PlanetConnection getPlanetConnection() { - return planetConnection; - } - - public void setPlanetConnection(PlanetConnection planetConnection) { - this.planetConnection = planetConnection; - } -} - -public class PlanetConnection { - - private List<Planet> planets; - - public List<Planet> getPlanets() { - return planets; - } - - public void setPlanets(List<Planet> planets) { - this.planets = planets; - } - -} - -public class Planet { - - private String name; - - public String getName() { - return name; - } - - public void setName(String name) { - this.name = name; - } -} ----- - -Now that we have the model classes, we can create the interface that represents the actual set -of operations we want to call on the remote GraphQL service. - -[source,java] ----- -@GraphQLClientApi(configKey = "star-wars-typesafe") -public interface StarWarsClientApi { - - FilmConnection allFilms(); - -} ----- - -For simplicity, we're only calling the query named `allFilms`. We named our corresponding method -`allFilms` too. If we named the method differently, we would need to annotate it with -`@Query(value="allFilms")` to specify the name of the query that should be executed when this -method is called. - -The client also needs some configuration, namely at least the URL of the remote service. 
We can either -specify that within the `@GraphQLClientApi` annotation (by setting the `endpoint` parameter), -or move this over to the configuration file, `application.properties`: - ----- -quarkus.smallrye-graphql-client.star-wars-typesafe.url=https://swapi-graphql.netlify.app/.netlify/functions/index ----- - -`star-wars-typesafe` is the name of the configured client instance, and corresponds to the `configKey` -in the `@GraphQLClientApi` annotation. If you don't want to specify a custom name, you can leave -out the `configKey`, and then refer to it by using the fully qualified name of the interface. - -Now that we have the client instance properly configured, we need a way to have it -perform something when we start the application. For that, we will use a REST endpoint that, -when called by a user, obtains the client instance and lets it execute the query. - -[source, java] ----- -@Path("/") -public class StarWarsResource { - @Inject - StarWarsClientApi typesafeClient; - - @GET - @Path("/typesafe") - @Produces(MediaType.APPLICATION_JSON) - @Blocking - public List<Film> getAllFilmsUsingTypesafeClient() { - return typesafeClient.allFilms().getFilms(); - } -} ----- - -With this REST endpoint included in your application, you can simply send a GET request to `/typesafe`, -and the application will use an injected typesafe client instance to call the remote service, obtain -the films and planets, and return the JSON representation of the resulting list. - -== Using the Dynamic client - -For the dynamic client, the model classes are optional, because we can work with abstract -representations of the GraphQL types and documents. The client API interface is not needed at all. - -We still need to configure the URL for the client, so let's put this into `application.properties`: ----- -quarkus.smallrye-graphql-client.star-wars-dynamic.url=https://swapi-graphql.netlify.app/.netlify/functions/index ----- - -We decided to name the client `star-wars-dynamic`. 
We will use this name when injecting a dynamic client -to properly qualify the injection point. - -If you need to add an authorization header, or any other custom HTTP header (in our case -it's not required), this can be done by: ----- -quarkus.smallrye-graphql-client.star-wars-dynamic.header.HEADER-KEY=HEADER-VALUE ----- - -Add this to the `StarWarsResource` created earlier: - -[source,java] ----- -import static io.smallrye.graphql.client.core.Document.document; -import static io.smallrye.graphql.client.core.Field.field; -import static io.smallrye.graphql.client.core.Operation.operation; - -// .... - -@Inject -@GraphQLClient("star-wars-dynamic") // <1> -DynamicGraphQLClient dynamicClient; - -@GET -@Path("/dynamic") -@Produces(MediaType.APPLICATION_JSON) -@Blocking -public List<Film> getAllFilmsUsingDynamicClient() throws Exception { - Document query = document( // <2> - operation( - field("allFilms", - field("films", - field("title"), - field("planetConnection", - field("planets", - field("name") - ) - ) - ) - ) - ) - ); - Response response = dynamicClient.executeSync(query); // <3> - return response.getObject(FilmConnection.class, "allFilms").getFilms(); // <4> -} ----- - -<1> Qualifies the injection point so that we know which named client needs to be injected here. - -<2> Here we build a document representing the GraphQL query, using the provided DSL language. -We use static imports to make the code easier to read. The DSL is designed in a way that -it looks quite similar to writing a GraphQL query as a string. - -<3> Execute the query and block while waiting for the response. There is also an asynchronous -variant that returns a `Uni`. - -<4> Here we did the optional step of converting the response to instances of our model classes, -because we have the classes available. If you don't have the classes available or don't want to -use them, simply calling `response.getData()` would get you a `JsonObject` representing -all the returned data. 
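To make the DSL's effect concrete, here is a dependency-free sketch in plain Java. The `field` helper below is a hypothetical stand-in for the SmallRye DSL (not its actual implementation); it only assembles the query text that the document built above represents:

```java
public class QuerySketch {

    // Hypothetical helper mimicking the selection-set text a DSL field(...) call
    // stands for; this is NOT the SmallRye API itself.
    static String field(String name, String... children) {
        return children.length == 0
                ? name
                : name + " { " + String.join(" ", children) + " }";
    }

    // Assembles the same query as the dynamic client example above.
    static String buildQuery() {
        return "query { " + field("allFilms",
                field("films",
                        field("title"),
                        field("planetConnection",
                                field("planets", field("name"))))) + " }";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery());
    }
}
```

Serializing the real DSL document produces essentially this nested selection set, which is what is sent to the remote `/graphql` endpoint.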
- -== Running the application - -Launch the application in dev mode using: - -include::includes/devtools/dev.adoc[] - -To execute the queries, you need to send GET requests to our REST endpoint: -[source,bash] ----- -curl -s http://localhost:8080/dynamic # to use the dynamic client -curl -s http://localhost:8080/typesafe # to use the typesafe client ----- - -Whether you use dynamic or typesafe, the result should be the same. -If the JSON document is hard to read, you might want to run it through a tool that -formats it for better readability by humans, for example by piping the output through `jq`. - -== Conclusion - -This example showed how to use both the dynamic and typesafe GraphQL clients to call an external -GraphQL service and explained the difference between the client types. diff --git a/_versions/2.7/guides/smallrye-graphql.adoc b/_versions/2.7/guides/smallrye-graphql.adoc deleted file mode 100644 index 5b59e4ae6cd..00000000000 --- a/_versions/2.7/guides/smallrye-graphql.adoc +++ /dev/null @@ -1,853 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= SmallRye GraphQL - -include::./attributes.adoc[] - -This guide demonstrates how your Quarkus application can use https://github.com/smallrye/smallrye-graphql/[SmallRye GraphQL], -an implementation of the https://github.com/eclipse/microprofile-graphql/[MicroProfile GraphQL] specification. - -As the https://www.graphql.org/[GraphQL] specification website states: - -[quote,] -GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. -GraphQL provides a complete and understandable description of the data in your API, -gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, -and enables powerful developer tools. 
- -**GraphQL** was originally developed by **Facebook** in 2012 and has been -an open standard since 2015. - -GraphQL is not a replacement for the REST API specification but merely an -alternative. Unlike REST, GraphQL APIs can benefit the client by: - -Preventing Over-fetching and Under-fetching:: - REST APIs return server-driven, fixed data responses that cannot be shaped by - the client. Even when the client does not require all the fields, it - must retrieve all the data, hence `Over-fetching`. A client may also require - multiple REST API calls, depending on the results of the first call (HATEOAS), to retrieve - all the data it requires, thereby `Under-fetching`. - -API Evolution:: - Since a GraphQL API returns only the data requested by the client, adding - fields and capabilities to an existing API will not create breaking changes for existing - clients. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this guide, we build a simple GraphQL application that exposes a GraphQL API -at `/graphql`. - -This example was inspired by a popular GraphQL API. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `microprofile-graphql-quickstart` {quickstarts-tree-url}/microprofile-graphql-quickstart[directory]. - -== Creating the Maven Project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: microprofile-graphql-quickstart -:create-app-extensions: resteasy,graphql -include::includes/devtools/create-app.adoc[] - -This command generates a project, importing the `smallrye-graphql` extension. 
- -If you already have your Quarkus project configured, you can add the `smallrye-graphql` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: graphql -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-smallrye-graphql</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-smallrye-graphql") ----- - -== Preparing an Application: GraphQL API - -In this section we will start creating the GraphQL API. - -First, create the following entities representing a film from a galaxy far far away: - -[source,java] ----- -package org.acme.microprofile.graphql; - -import java.time.LocalDate; -import java.util.ArrayList; -import java.util.List; - -public class Film { - - public String title; - public Integer episodeID; - public String director; - public LocalDate releaseDate; - -} - -public class Hero { - - public String name; - public String surname; - public Double height; - public Integer mass; - public Boolean darkSide; - public LightSaber lightSaber; - public List<Integer> episodeIds = new ArrayList<>(); - -} - -enum LightSaber { - RED, BLUE, GREEN -} ----- - -NOTE: For readability we use classes with public fields, but classes with private fields with public getters and setters will also work. - -The classes we have just created describe the GraphQL schema, which is the -set of possible data (objects, fields, relationships) that a client can access. 
- -Let's continue with an example CDI bean that will act as a repository: - -[source,java] ----- -@ApplicationScoped -public class GalaxyService { - - private List<Hero> heroes = new ArrayList<>(); - - private List<Film> films = new ArrayList<>(); - - public GalaxyService() { - - Film aNewHope = new Film(); - aNewHope.title = "A New Hope"; - aNewHope.releaseDate = LocalDate.of(1977, Month.MAY, 25); - aNewHope.episodeID = 4; - aNewHope.director = "George Lucas"; - - Film theEmpireStrikesBack = new Film(); - theEmpireStrikesBack.title = "The Empire Strikes Back"; - theEmpireStrikesBack.releaseDate = LocalDate.of(1980, Month.MAY, 21); - theEmpireStrikesBack.episodeID = 5; - theEmpireStrikesBack.director = "George Lucas"; - - Film returnOfTheJedi = new Film(); - returnOfTheJedi.title = "Return Of The Jedi"; - returnOfTheJedi.releaseDate = LocalDate.of(1983, Month.MAY, 25); - returnOfTheJedi.episodeID = 6; - returnOfTheJedi.director = "George Lucas"; - - films.add(aNewHope); - films.add(theEmpireStrikesBack); - films.add(returnOfTheJedi); - - Hero luke = new Hero(); - luke.name = "Luke"; - luke.surname = "Skywalker"; - luke.height = 1.7; - luke.mass = 73; - luke.lightSaber = LightSaber.GREEN; - luke.darkSide = false; - luke.episodeIds.addAll(Arrays.asList(4, 5, 6)); - - Hero leia = new Hero(); - leia.name = "Leia"; - leia.surname = "Organa"; - leia.height = 1.5; - leia.mass = 51; - leia.darkSide = false; - leia.episodeIds.addAll(Arrays.asList(4, 5, 6)); - - Hero vader = new Hero(); - vader.name = "Darth"; - vader.surname = "Vader"; - vader.height = 1.9; - vader.mass = 89; - vader.darkSide = true; - vader.lightSaber = LightSaber.RED; - vader.episodeIds.addAll(Arrays.asList(4, 5, 6)); - - heroes.add(luke); - heroes.add(leia); - heroes.add(vader); - - } - - public List<Film> getAllFilms() { - return films; - } - - public Film getFilm(int id) { - return films.get(id); - } - - public List<Hero> getHeroesByFilm(Film film) { - return heroes.stream() - .filter(hero -> hero.episodeIds.contains(film.episodeID)) - .collect(Collectors.toList()); - } - - public void addHero(Hero hero) { - heroes.add(hero); - } - - public Hero deleteHero(int id) { - return heroes.remove(id); - } - - public List<Hero> getHeroesBySurname(String surname) { - return heroes.stream() - .filter(hero -> hero.surname.equals(surname)) - .collect(Collectors.toList()); - } -} ----- - -Now, let's create our first GraphQL API. - -Edit the `org.acme.microprofile.graphql.FilmResource` class as follows: - -[source,java] ----- -@GraphQLApi // <1> -public class FilmResource { - - @Inject - GalaxyService service; - - @Query("allFilms") // <2> - @Description("Get all Films from a galaxy far far away") // <3> - public List<Film> getAllFilms() { - return service.getAllFilms(); - } -} ----- - -<1> `@GraphQLApi` annotation indicates that the CDI bean will be a GraphQL endpoint -<2> `@Query` annotation defines that this method will be queryable with the name `allFilms` -<3> Documentation of the queryable method - -TIP: The value of the `@Query` annotation is optional and implicitly -defaults to the method name if absent. - -This way we have created our first queryable API, which we will later expand. - -== Launch - -Launch the Quarkus application in dev mode: - -include::includes/devtools/dev.adoc[] - -== Introspect - -The full schema of the GraphQL API can be retrieved by calling the following: - -[source,bash] ----- -curl http://localhost:8080/graphql/schema.graphql ----- - -The server will return the complete schema of the GraphQL API. - -[[ui]] -== GraphiQL UI - -NOTE: Experimental - not included in the MicroProfile specification - -GraphiQL UI is a great tool that permits easy interaction with your GraphQL APIs. - -The Quarkus `smallrye-graphql` extension ships with `GraphiQL` and enables it by default in `dev` and `test` modes, -but it can also be explicitly configured for `production` mode as well. - -GraphiQL can be accessed from http://localhost:8080/q/graphql-ui/ . 
- -image:graphql-ui-screenshot01.png[alt=GraphQL UI] - -Have a look at the link:security-authorization[Authorization of Web Endpoints] Guide on how to add/remove security for the GraphQL UI. - -== Query the GraphQL API - -Now visit the GraphiQL page that has been deployed in `dev` mode. - -Enter the following query into GraphiQL and press the `play` button: - -[source, graphql] ----- -query allFilms { - allFilms { - title - director - releaseDate - episodeID - } -} ----- - -Since our query contains all the fields in the `Film` class -we will retrieve all the fields in our response. Since GraphQL API -responses are client-determined, the client can choose which fields -it will require. - -Let's assume that our client only requires `title` and `releaseDate`, -making the previous call to the API an example of `Over-fetching` unnecessary -data. - -Enter the following query into GraphiQL and hit the `play` button: - -[source, graphql] ----- -query allFilms { - allFilms { - title - releaseDate - } -} ----- - -Notice in the response we have only retrieved the required fields. -Therefore, we have prevented `Over-fetching`. - -Let's continue to expand our GraphQL API by adding the following to the -`FilmResource` class. - -[source,java] ----- - @Query - @Description("Get a Film from a galaxy far far away") - public Film getFilm(@Name("filmId") int id) { - return service.getFilm(id); - } ----- - -WARNING: Notice how we have excluded the value in the `@Query` annotation. -Therefore, the name of the query is implicitly set as the method name -excluding the `get`. - -This query will allow the client to retrieve the film by id, and the `@Name` annotation on the parameter -changes the parameter name to `filmId` rather than the default `id` that it would be if you omit the `@Name` annotation. - -Enter the following into `GraphiQL` and make a request. 
- -[source, graphql] ----- -query getFilm { - film(filmId: 1) { - title - director - releaseDate - episodeID - } -} ----- - -As in our previous example, the client determines which fields -of the `film` query to request. This way we can retrieve individual -film information. - -However, say our client requires both films with filmId `0` and `1`. -In a REST API the client would have to make two calls to the API. -Therefore, the client would be `Under-fetching`. - -In GraphQL it is possible to make multiple queries at once. - -Enter the following into GraphiQL to retrieve two films: - -[source, graphql] ----- -query getFilms { - film0: film(filmId: 0) { - title - director - releaseDate - episodeID - } - film1: film(filmId: 1) { - title - director - releaseDate - episodeID - } -} ----- - -This enables the client to fetch the required data in a single request. - -== Expanding the API - -Until now, we have created a GraphQL API to retrieve film data. -We now want to enable the clients to retrieve the `Hero` data of the `Film`. - -Add the following to our `FilmResource` class: - -[source,java] ----- - public List<Hero> heroes(@Source Film film) { // <1> - return service.getHeroesByFilm(film); - } ----- - -<1> Enable `List<Hero>` data to be added to queries that respond with `Film` - -By adding this method we have effectively changed the schema of the GraphQL API. -Although the schema has changed, the previous queries will still work, -since we only expanded the API to be able to retrieve the `Hero` data of the `Film`. - -Enter the following into GraphiQL to retrieve the film and hero data. - -[source,graphql] ----- -query getFilmHeroes { - film(filmId: 1) { - title - director - releaseDate - episodeID - heroes { - name - height - mass - darkSide - lightSaber - } - } -} ----- - -The response now includes the heroes of the film. 
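Conceptually, the `@Source` method acts as a per-film field resolver: for each `Film` selected in a response, the runtime invokes `heroes(film)` to fill in the `heroes` field. The following dependency-free sketch (simplified data shapes and hypothetical names, not the SmallRye runtime) illustrates that resolution step:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class HeroResolver {

    // Simplified stand-in for getHeroesByFilm: episode id -> hero names.
    static List<String> heroesByEpisode(int episodeId, Map<Integer, List<String>> data) {
        return data.getOrDefault(episodeId, List.of());
    }

    // What the runtime conceptually does when a response contains several films:
    // resolve the heroes field once per selected film.
    static Map<Integer, List<String>> resolveAll(List<Integer> episodeIds,
                                                 Map<Integer, List<String>> data) {
        return episodeIds.stream()
                .collect(Collectors.toMap(id -> id, id -> heroesByEpisode(id, data)));
    }
}
```

Calling the resolver once per parent object is exactly the access pattern that the batching form in the next section is designed to avoid.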
- -=== Batching - -When you are exposing a `Collection` return type like our `getAllFilms`, you might want to use the batch form of the above to fetch -the heroes more efficiently: - -[source,java] ----- - public List<List<Hero>> heroes(@Source List<Film> films) { // <1> - // Here fetch all hero lists - } ----- - -<1> Here receive the films as a batch, allowing you to fetch the corresponding heroes. - -=== Reactive - -Queries can be made reactive by using `Uni`, or `CompletionStage` as a return type, for example: - -[source,java] ----- - @Query - @Description("Get a Film from a galaxy far far away") - public Uni<Film> getFilm(int filmId) { - // ... - } ----- - -NOTE: Due to the underlying library, graphql-java, a `Uni` is converted to a `CompletionStage` under the hood. - -Or you can use `CompletionStage`: - -[source,java] ----- - @Query - @Description("Get a Film from a galaxy far far away") - public CompletionStage<Film> getFilm(int filmId) { - // ... - } ----- - -Using `Uni` or `CompletionStage` means that when a request contains more than one query, they will be executed concurrently. - -For instance, the query below will fetch `film0` and `film1` concurrently: - -[source, graphql] ----- -query getFilms { - film0: film(filmId: 0) { - title - director - releaseDate - episodeID - } - film1: film(filmId: 1) { - title - director - releaseDate - episodeID - } -} ----- - -== Mutations - -Mutations are used when data is created, updated or deleted. - -Let's now add the ability to add and delete heroes to our GraphQL API. 
- -Add the following to our `FilmResource` class: - -[source,java] ----- - @Mutation - public Hero createHero(Hero hero) { - service.addHero(hero); - return hero; - } - - @Mutation - public Hero deleteHero(int id) { - return service.deleteHero(id); - } ----- - -Enter the following into `GraphiQL` to insert a `Hero`: - -[source,graphql] ----- -mutation addHero { - createHero(hero: { - name: "Han", - surname: "Solo" - height: 1.85 - mass: 80 - darkSide: false - episodeIds: [4, 5, 6] - } - ) - { - name - surname - } -} ----- - -By using this mutation we have created a `Hero` entity in our service. - -Notice how in the response we have retrieved the `name` and `surname` -of the created Hero. This is because we chose to retrieve -these fields in the response within the `{ }` in the mutation query. -This can easily be a server-side-generated field that the client may require. - -Let's now try deleting an entry: - -[source,graphql] ----- -mutation DeleteHero { - deleteHero(id: 3) { - name - surname - } -} ----- - -Similar to the `createHero` mutation method, we also retrieve the `name` and -`surname` of the hero we have deleted, which is defined in `{ }`. - -== Subscriptions - -Subscriptions allow you to subscribe to a query and receive events. - -NOTE: Subscription is currently still considered experimental. 
- -Example: We want to know when new Heroes are being created: - -[source,java] ----- - - BroadcastProcessor<Hero> processor = BroadcastProcessor.create(); // <1> - - @Mutation - public Hero createHero(Hero hero) { - service.addHero(hero); - processor.onNext(hero); // <2> - return hero; - } - - @Subscription - public Multi<Hero> heroCreated() { - return processor; // <3> - } - ----- - -<1> The `Multi` processor that will broadcast any new Heroes -<2> When adding a new Hero, also broadcast it -<3> Make the stream available in the schema and as a WebSocket during runtime - - -Any client that now connects to the `/graphql` WebSocket connection will receive events on new Heroes being created: - -[source,graphql] ----- - -subscription ListenForNewHeroes { - heroCreated { - name - surname - } -} - ----- - -== Creating Queries by fields - -Queries can also be done on individual fields. For example, let's -create a method to query heroes by their last name. - -Add the following to our `FilmResource` class: - -[source,java] ----- - @Query - public List<Hero> getHeroesWithSurname(@DefaultValue("Skywalker") String surname) { - return service.getHeroesBySurname(surname); - } ----- - -By using the `@DefaultValue` annotation we have determined that the surname value -will be `Skywalker` when the parameter is not provided. 
- -Test the following queries with GraphiQL: - -[source,graphql] ----- -query heroWithDefaultSurname { - heroesWithSurname { - name - surname - lightSaber - } -} -query heroWithSurnames { - heroesWithSurname(surname: "Vader") { - name - surname - lightSaber - } -} ----- - -== Context - -You can get information about the GraphQL request anywhere in your code, using this experimental, SmallRye-specific feature: - -[source,java] ----- -@Inject -Context context; ----- - -or as a parameter in your method if you are in the `GraphQLApi` class, for instance: - -[source,java] ----- - @Query - @Description("Get a Film from a galaxy far far away") - public Film getFilm(Context context, int filmId) { - // ... - } ----- - -The context object allows you to get: - -- the original request (Query/Mutation) -- the arguments -- the path -- the selected fields -- any variables - -This allows you to optimize the downstream queries to the datastore. - -See the https://javadoc.io/doc/io.smallrye/smallrye-graphql-api/latest/io/smallrye/graphql/api/Context.html[JavaDoc] for more details. - -=== GraphQL-Java - -This context object also allows you to drop down to the underlying https://www.graphql-java.com/[graphql-java] features by using the leaky abstraction: - -[source,java] ----- -DataFetchingEnvironment dfe = context.unwrap(DataFetchingEnvironment.class); ----- - -You can also get access to the underlying `graphql-java` during schema generation, to add your own features directly: - -[source,java] ----- -public GraphQLSchema.Builder addMyOwnEnum(@Observes GraphQLSchema.Builder builder) { - - // Here add your own features directly, example adding an Enum - GraphQLEnumType myOwnEnum = GraphQLEnumType.newEnum() - .name("SomeEnum") - .description("Adding some enum type") - .value("value1") - .value("value2").build(); - - return builder.additionalType(myOwnEnum); -} ----- - -By using `@Observes` you can add anything to the Schema builder. 

NOTE: For the observer to work, you need to enable events. In `application.properties`, add the following: `quarkus.smallrye-graphql.events.enabled=true`.

== Map to Scalar

Another SmallRye-specific experimental feature allows you to map an existing scalar (that is mapped by the implementation to a certain Java type) to another type,
or to map a complex object, that would typically create a `Type` or `Input` in GraphQL, to an existing scalar.

=== Mapping an existing Scalar to another type:

[source,java]
----
public class Movie {

    @ToScalar(Scalar.Int.class)
    Long idLongThatShouldChangeToInt;

    // ....
}
----

The above will map the `Long` Java type to an `Int` Scalar type, rather than the https://download.eclipse.org/microprofile/microprofile-graphql-1.0/microprofile-graphql.html#scalars[default] `BigInteger`.

=== Mapping a complex object to a Scalar type:

[source,java]
----
public class Person {

    @ToScalar(Scalar.String.class)
    Phone phone;

    // ....
}
----

This will, rather than creating a `Type` or `Input` in GraphQL, map to a String scalar.

To be able to do the above, the `Phone` object needs to have a constructor that takes a String (or `Int` / `Date` / etc.),
or have a setter method for the String (or `Int` / `Date` / etc.),
or have a `fromString` (or `fromInt` / `fromDate` - depending on the Scalar type) static method.

For example:

[source,java]
----
public class Phone {

    private String number;

    // Getters and setters....

    public static Phone fromString(String number) {
        Phone phone = new Phone();
        phone.setNumber(number);
        return phone;
    }
}
----

See more about the `@ToScalar` feature in the https://javadoc.io/static/io.smallrye/smallrye-graphql-api/1.0.6/index.html?io/smallrye/graphql/api/ToScalar.html[JavaDoc].
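The `fromString` contract is easy to verify in isolation. Below is a self-contained, runnable version of the `Phone` sketch, with the elided getter and setter filled in (their exact shape in the original is assumed) so the round trip from `String` can be exercised:

```java
public class Phone {

    private String number;

    public String getNumber() {
        return number;
    }

    public void setNumber(String number) {
        this.number = number;
    }

    // The static factory the runtime can use to coerce a String scalar
    // back into a Phone instance.
    public static Phone fromString(String number) {
        Phone phone = new Phone();
        phone.setNumber(number);
        return phone;
    }
}
```

A value arriving as a GraphQL `String` can then be reconstructed with `Phone.fromString("555-0100")`, and the original text is recoverable via `getNumber()`.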

== Error code

You can add an error code on the error output in the GraphQL response by using the (SmallRye-specific) `@ErrorCode`:

[source,java]
----
@ErrorCode("some-business-error-code")
public class SomeBusinessException extends RuntimeException {
    // ...
}
----

When `SomeBusinessException` occurs, the error output will contain the error code:

[source,graphql]
----
{
    "errors": [
        {
            "message": "Unexpected failure in the system. Jarvis is working to fix it.",
            "locations": [
                {
                    "line": 2,
                    "column": 3
                }
            ],
            "path": [
                "annotatedCustomBusinessException"
            ],
            "extensions": {
                "exception": "io.smallrye.graphql.test.apps.error.api.ErrorApi$AnnotatedCustomBusinessException",
                "classification": "DataFetchingException",
                "code": "some-business-error-code" <1>
            }
        }
    ],
    "data": {
        ...
    }
}
----

<1> The error code

== Additional Notes

If you are using the `smallrye-graphql` extension and the `micrometer` metrics extension is present and metrics are
enabled, you may encounter a `java.lang.NoClassDefFoundError` as some versions of the `smallrye-graphql` extension
have runtime requirements on the MicroProfile Metrics API. Add the following MicroProfile Metrics API dependency
to resolve the issue:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>org.eclipse.microprofile.metrics</groupId>
    <artifactId>microprofile-metrics-api</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("org.eclipse.microprofile.metrics:microprofile-metrics-api")
----

== Conclusion

SmallRye GraphQL enables clients to retrieve exactly the data they
require, preventing over-fetching and under-fetching.

The GraphQL API can be expanded without breaking previous queries, enabling easy
API evolution.

[[configuration-reference]]
== Configuration Reference

include::{generated-dir}/config/quarkus-smallrye-graphql.adoc[leveloffset=+1, opts=optional]
diff --git a/_versions/2.7/guides/smallrye-health.adoc b/_versions/2.7/guides/smallrye-health.adoc
deleted file mode 100644
index 61c9f1b1a05..00000000000
--- a/_versions/2.7/guides/smallrye-health.adoc
+++ /dev/null
@@ -1,447 +0,0 @@
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= SmallRye Health

include::./attributes.adoc[]

This guide demonstrates how your Quarkus application can use https://github.com/smallrye/smallrye-health/[SmallRye Health],
an implementation of the https://github.com/eclipse/microprofile-health/[MicroProfile Health] specification.

SmallRye Health allows applications to provide information about their state
to external viewers, which is typically useful in cloud environments where automated
processes must be able to determine whether the application should be discarded
or restarted.

== Prerequisites

include::includes/devtools/prerequisites.adoc[]

== Architecture

In this guide, we build a simple REST application that exposes MicroProfile Health
functionality at the `/q/health/live` and `/q/health/ready` endpoints according to the
specification.

== Solution

We recommend that you follow the instructions in the next sections and create the
application step by step. However, you can go right to the completed example.

Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an
{quickstarts-archive-url}[archive].

The solution is located in the `microprofile-health-quickstart`
{quickstarts-tree-url}/microprofile-health-quickstart[directory].

== Creating the Maven Project

First, we need a new project.
Create a new project with the following command:

:create-app-artifact-id: microprofile-health-quickstart
:create-app-extensions: smallrye-health
include::includes/devtools/create-app.adoc[]

This command generates a project, importing the `smallrye-health` extension.

If you already have your Quarkus project configured, you can add the `smallrye-health` extension
to your project by running the following command in your project base directory:

:add-extension-extensions: smallrye-health
include::includes/devtools/extension-add.adoc[]

This will add the following to your build file:

[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
</dependency>
----

[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-smallrye-health")
----

== Running the health check

Importing the `smallrye-health` extension directly exposes the following REST endpoints:

- `/q/health/live` - The application is up and running.
- `/q/health/ready` - The application is ready to serve requests.
- `/q/health/started` - The application is started.
- `/q/health` - Accumulating all health check procedures in the application.

To check that the `smallrye-health` extension is working as expected:

:devtools-wrapped:

* start your Quarkus application with:
+
include::includes/devtools/dev.adoc[]
* access the `http://localhost:8080/q/health/live` endpoint using your browser or
`curl http://localhost:8080/q/health/live`

:!devtools-wrapped:

All of the health REST endpoints return a simple JSON object with two fields:

* `status` -- the overall result of all the health check procedures
* `checks` -- an array of individual checks

The general `status` of the health check is computed as a logical AND of all the
declared health check procedures.
The `checks` array is empty as we have not specified -any health check procedure yet so let's define some. - -== Creating your first health check - -In this section, we create our first simple health check procedure. - -Create the `org.acme.microprofile.health.SimpleHealthCheck` class: - -[source,java] ----- -package org.acme.microprofile.health; - -import org.eclipse.microprofile.health.HealthCheck; -import org.eclipse.microprofile.health.HealthCheckResponse; -import org.eclipse.microprofile.health.Liveness; - -import javax.enterprise.context.ApplicationScoped; - -@Liveness -@ApplicationScoped <1> <2> -public class SimpleHealthCheck implements HealthCheck { - - @Override - public HealthCheckResponse call() { - return HealthCheckResponse.up("Simple health check"); - } -} ----- -<1> It's recommended to annotate the health check class with `@ApplicationScoped` or the `@Singleton` scope so that a single bean instance is used for all health check requests. -<2> If a bean class annotated with one of the health check annotations declares no scope then the `@Singleton` scope is used automatically. - -As you can see, the health check procedures are defined as CDI beans that implement the `HealthCheck` interface and are annotated with one of the health check qualifiers, such as: - -- `@Liveness` - the liveness check accessible at `/q/health/live` -- `@Readiness` - the readiness check accessible at `/q/health/ready` - -`HealthCheck` is a functional interface whose single method `call` returns a -`HealthCheckResponse` object which can be easily constructed by the fluent builder -API shown in the example. - -As we have started our Quarkus application in dev mode simply repeat the request -to `http://localhost:8080/q/health/live` by refreshing your browser window or by -using `curl http://localhost:8080/q/health/live`. 
Because we defined our health check -to be a liveness procedure (with `@Liveness` qualifier) the new health check procedure -is now present in the `checks` array. - -Congratulations! You've created your first Quarkus health check procedure. Let's -continue by exploring what else can be done with SmallRye Health. - -== Adding a readiness health check procedure - -In the previous section, we created a simple liveness health check procedure which states -whether our application is running or not. In this section, we will create a readiness -health check which will be able to state whether our application is able to process -requests. - -We will create another health check procedure that simulates a connection to -an external service provider such as a database. For starters, we will always return -the response indicating the application is ready. - -Create `org.acme.microprofile.health.DatabaseConnectionHealthCheck` class: - -[source,java] ----- -package org.acme.microprofile.health; - -import org.eclipse.microprofile.health.HealthCheck; -import org.eclipse.microprofile.health.HealthCheckResponse; -import org.eclipse.microprofile.health.Readiness; - -import javax.enterprise.context.ApplicationScoped; - -@Readiness -@ApplicationScoped -public class DatabaseConnectionHealthCheck implements HealthCheck { - - @Override - public HealthCheckResponse call() { - return HealthCheckResponse.up("Database connection health check"); - } -} - ----- - -If you now rerun the health check at `http://localhost:8080/q/health/live` the `checks` -array will contain only the previously defined `SimpleHealthCheck` as it is the only -check defined with the `@Liveness` qualifier. However, if you access -`http://localhost:8080/q/health/ready` (in the browser or with -`curl http://localhost:8080/q/health/ready`) you will see only the -`Database connection health check` as it is the only health check defined with the -`@Readiness` qualifier as the readiness health check procedure. 
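As noted earlier, the overall `status` is the logical AND of the individual procedures. That aggregation can be captured in a few lines of plain Java (an illustrative sketch with invented names, not the SmallRye implementation):

```java
import java.util.List;

public class HealthAggregationSketch {

    enum Status { UP, DOWN }

    record Check(String name, Status status) { }

    // The overall status is UP only if every individual check reports UP;
    // a single DOWN check takes the whole endpoint DOWN.
    static Status overall(List<Check> checks) {
        return checks.stream().allMatch(c -> c.status() == Status.UP)
                ? Status.UP
                : Status.DOWN;
    }
}
```

So once the readiness endpoint aggregates several checks, one failing database check is enough to make the whole `/q/health/ready` response report DOWN.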

NOTE: If you access `http://localhost:8080/q/health` you will get back both checks.

More information about which health check procedures should be used in which situation
is detailed in the MicroProfile Health specification. Generally, the liveness
procedures determine whether the application should be restarted while readiness
procedures determine whether it makes sense to contact the application with requests.

== Adding a startup health check procedure

The third and final type of health check procedure is startup. Startup procedures are defined as an option for slow-starting containers (they should not be needed in Quarkus) to delay the invocation of the liveness probe, which takes over from startup once the startup probe responds UP for the first time. Startup health checks are defined with the `@Startup` qualifier.

NOTE: Please make sure that you import the MicroProfile `org.eclipse.microprofile.health.Startup` annotation since there is an unfortunate clash with `io.quarkus.runtime.Startup`.

Create the `org.acme.microprofile.health.StartupHealthCheck` class:

[source,java]
----
package org.acme.microprofile.health;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Startup;

import javax.enterprise.context.ApplicationScoped;

@Startup
@ApplicationScoped
public class StartupHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.up("Startup health check");
    }
}
----

The startup health check will be available either at `http://localhost:8080/q/health/started` or together with the other health check procedures at `http://localhost:8080/q/health`.
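The takeover described above -- the startup probe is consulted until its first UP, after which only the liveness probe is -- can be modelled as a tiny state machine. This is an illustrative sketch of the orchestrator's behaviour with invented names, not Quarkus or Kubernetes code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ProbeSequencer {

    enum Probe { STARTUP, LIVENESS }

    // Given the successive results of the startup probe, report which probe
    // the orchestrator consults at each polling step: STARTUP until the
    // first "up" result, LIVENESS from then on.
    static List<Probe> schedule(List<Boolean> startupResults, int steps) {
        List<Probe> consulted = new ArrayList<>();
        Iterator<Boolean> results = startupResults.iterator();
        boolean started = false;
        for (int i = 0; i < steps; i++) {
            if (!started) {
                consulted.add(Probe.STARTUP);
                started = results.hasNext() && results.next();
            } else {
                consulted.add(Probe.LIVENESS); // startup is never polled again
            }
        }
        return consulted;
    }
}
```

With startup results `[false, true]` over four steps, the startup probe is polled twice and the liveness probe takes over for the remaining steps -- which is why a slow-starting application is not killed by a liveness failure during boot.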
- -== Negative health check procedures - -In this section, we extend our `Database connection health check` with the option of -stating that our application is not ready to process requests as the underlying -database connection cannot be established. For simplicity reasons, we only determine -whether the database is accessible or not by a configuration property. - -Update the `org.acme.microprofile.health.DatabaseConnectionHealthCheck` class: - -[source,java] ----- -package org.acme.microprofile.health; - -import org.eclipse.microprofile.config.inject.ConfigProperty; -import org.eclipse.microprofile.health.HealthCheck; -import org.eclipse.microprofile.health.HealthCheckResponse; -import org.eclipse.microprofile.health.HealthCheckResponseBuilder; -import org.eclipse.microprofile.health.Readiness; - -import javax.enterprise.context.ApplicationScoped; - -@Readiness -@ApplicationScoped -public class DatabaseConnectionHealthCheck implements HealthCheck { - - @ConfigProperty(name = "database.up", defaultValue = "false") - private boolean databaseUp; - - @Override - public HealthCheckResponse call() { - - HealthCheckResponseBuilder responseBuilder = HealthCheckResponse.named("Database connection health check"); - - try { - simulateDatabaseConnectionVerification(); - responseBuilder.up(); - } catch (IllegalStateException e) { - // cannot access the database - responseBuilder.down(); - } - - return responseBuilder.build(); - } - - private void simulateDatabaseConnectionVerification() { - if (!databaseUp) { - throw new IllegalStateException("Cannot contact database"); - } - } -} ----- - -NOTE: Until now we used a simplified method of building a `HealthCheckResponse` -through the `HealthCheckResponse#up(String)` (there is also -`HealthCheckResponse#down(String)`) which will directly build the response object. -From now on, we utilize the full builder capabilities provided by the -`HealthCheckResponseBuilder` class. 
- -If you now rerun the readiness health check (at `http://localhost:8080/q/health/ready`) -the overall `status` should be DOWN. You can also check the liveness check at -`http://localhost:8080/q/health/live` which will return the overall `status` UP because -it isn't influenced by the readiness checks. - -As we shouldn't leave this application with a readiness check in a DOWN state and -because we are running Quarkus in dev mode you can add `database.up=true` in -`src/main/resources/application.properties` and rerun the readiness health check again --- it should be up again. - - -== Adding user-specific data to the health check response - -In previous sections, we saw how to create simple health checks with only the minimal -attributes, namely, the health check name and its status (UP or DOWN). However, the -MicroProfile Health specification also provides a way for the applications to supply -arbitrary data in the form of key-value pairs sent to the consuming end. This can be -done by using the `withData(key, value)` method of the health check response -builder API. - -Let's create a new health check procedure `org.acme.microprofile.health.DataHealthCheck`: - -[source,java] ----- -package org.acme.microprofile.health; - -import org.eclipse.microprofile.health.Liveness; -import org.eclipse.microprofile.health.HealthCheck; -import org.eclipse.microprofile.health.HealthCheckResponse; - -import javax.enterprise.context.ApplicationScoped; - -@Liveness -@ApplicationScoped -public class DataHealthCheck implements HealthCheck { - - @Override - public HealthCheckResponse call() { - return HealthCheckResponse.named("Health check with data") - .up() - .withData("foo", "fooValue") - .withData("bar", "barValue") - .build(); - } -} ----- - -If you rerun the liveness health check procedure by accessing the `/q/health/live` -endpoint you can see that the new health check `Health check with data` is present -in the `checks` array. 
This check contains a new attribute called `data` which is a
JSON object consisting of the properties we have defined in our health check procedure.

This functionality is specifically useful in failure scenarios where you can pass the
error along with the health check response.


[source,java]
----
    try {
        simulateDatabaseConnectionVerification();
        responseBuilder.up();
    } catch (IllegalStateException e) {
        // cannot access the database
        responseBuilder.down()
                .withData("error", e.getMessage()); // pass the exception message
    }
----

== Context propagation into the health check invocations

For performance reasons, the context (e.g., CDI or security context) is not propagated into each health check invocation. However, if you need to enable this functionality you can set the config property `quarkus.smallrye-health.context-propagation=true` to allow the context propagation into every health check call.

== Reactive health checks

MicroProfile Health currently doesn't support returning reactive types, but SmallRye Health does.

If you want to provide a reactive health check, you can implement the `io.smallrye.health.api.AsyncHealthCheck` interface instead of the `org.eclipse.microprofile.health.HealthCheck` one.
The `io.smallrye.health.api.AsyncHealthCheck` interface allows you to return a `Uni<HealthCheckResponse>`.

The following example shows a reactive liveness check:

[source,java]
----
import java.time.Duration;

import io.smallrye.health.api.AsyncHealthCheck;
import io.smallrye.mutiny.Uni;

import org.eclipse.microprofile.health.Liveness;
import org.eclipse.microprofile.health.HealthCheckResponse;

import javax.enterprise.context.ApplicationScoped;

@Liveness
@ApplicationScoped
public class LivenessAsync implements AsyncHealthCheck {

    @Override
    public Uni<HealthCheckResponse> call() {
        return Uni.createFrom().item(HealthCheckResponse.up("liveness-reactive"))
                .onItem().delayIt().by(Duration.ofMillis(10));
    }
}
----

== Extension health checks

Some extensions provide default health checks; including such an extension automatically registers its health checks.

For example, `quarkus-agroal`, which is used to manage Quarkus datasources, automatically registers a readiness health check
that validates each datasource: xref:datasource.adoc#datasource-health-check[Datasource Health Check].

You can disable the extension health checks via the property `quarkus.health.extensions.enabled` so that none are registered automatically.

[[ui]]
== Health UI

NOTE: Experimental - not included in the MicroProfile specification

`health-ui` allows you to see your Health Checks in a Web GUI.

The Quarkus `smallrye-health` extension ships with `health-ui` and enables it by default in dev and test modes, but it can also be explicitly configured for production mode.

`health-ui` can be accessed from http://localhost:8080/q/health-ui/ .

image:health-ui-screenshot01.png[alt=Health UI]

== Conclusion

SmallRye Health provides a way for your application to publish information
about its health state, indicating whether or not it is able to function properly.
Liveness checks are utilized to tell whether the application should be restarted and
readiness checks are used to tell whether the application is able to process requests.

All that is needed to enable the SmallRye Health features in Quarkus is:

:devtools-wrapped:
* adding the `smallrye-health` Quarkus extension to your project using the
`quarkus-maven-plugin`:
+
include::includes/devtools/extension-add.adoc[]
:!devtools-wrapped:

* or simply adding the following Maven dependency:
+
[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
</dependency>
----
+
[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.quarkus:quarkus-smallrye-health")
----

== Configuration Reference

include::{generated-dir}/config/quarkus-smallrye-health.adoc[opts=optional, leveloffset=+1]
diff --git a/_versions/2.7/guides/smallrye-kafka-incoming.adoc b/_versions/2.7/guides/smallrye-kafka-incoming.adoc
deleted file mode 100644
index 7a2afa50bdb..00000000000
--- a/_versions/2.7/guides/smallrye-kafka-incoming.adoc
+++ /dev/null
@@ -1,175 +0,0 @@
.Incoming Attributes of the 'smallrye-kafka' connector
[cols="25, 30, 15, 20",options="header"]
|===
|Attribute (_alias_) | Description | Mandatory | Default

| [.no-hyphens]#*bootstrap.servers*#

[.no-hyphens]#_(kafka.bootstrap.servers)_# | A comma-separated list of host:port to use for establishing the initial connection to the Kafka cluster.

Type: _string_ | false | `localhost:9092`

| [.no-hyphens]#*topic*# | The consumed / populated Kafka topic.
If neither this property nor the `topics` property is set, the channel name is used.

Type: _string_ | false |

| [.no-hyphens]#*health-enabled*# | Whether health reporting is enabled (default) or disabled

Type: _boolean_ | false | `true`

| [.no-hyphens]#*health-readiness-enabled*# | Whether readiness health reporting is enabled (default) or disabled

Type: _boolean_ | false | `true`

| [.no-hyphens]#*health-readiness-topic-verification*# | _deprecated_ - Whether the readiness check should verify that topics exist on the broker. Defaults to `false`. Enabling it requires an admin connection. Deprecated: Use 'health-topic-verification-enabled' instead.

Type: _boolean_ | false |

| [.no-hyphens]#*health-readiness-timeout*# | _deprecated_ - During the readiness health check, the connector connects to the broker and retrieves the list of topics. This attribute specifies the maximum duration (in ms) for the retrieval. If exceeded, the channel is considered not-ready. Deprecated: Use 'health-topic-verification-timeout' instead.

Type: _long_ | false |

| [.no-hyphens]#*health-topic-verification-enabled*# | Whether the startup and readiness check should verify that topics exist on the broker. Defaults to `false`. Enabling it requires an admin client connection.

Type: _boolean_ | false | `false`

| [.no-hyphens]#*health-topic-verification-timeout*# | During the startup and readiness health check, the connector connects to the broker and retrieves the list of topics. This attribute specifies the maximum duration (in ms) for the retrieval. If exceeded, the channel is considered not-ready.

Type: _long_ | false | `2000`

| [.no-hyphens]#*tracing-enabled*# | Whether tracing is enabled (default) or disabled

Type: _boolean_ | false | `true`

| [.no-hyphens]#*cloud-events*# | Enables (default) or disables Cloud Event support. If enabled on an _incoming_ channel, the connector analyzes the incoming records and tries to create Cloud Event metadata.
If enabled on an _outgoing_ channel, the connector sends the outgoing messages as a Cloud Event if the message includes Cloud Event metadata.

Type: _boolean_ | false | `true`

| [.no-hyphens]#*kafka-configuration*# | Identifier of a CDI bean that provides the default Kafka consumer/producer configuration for this channel. The channel configuration can still override any attribute. The bean must have a type of Map<String, Object> and must use the @io.smallrye.common.annotation.Identifier qualifier to set the identifier.

Type: _string_ | false |

| [.no-hyphens]#*topics*# | A comma-separated list of topics to be consumed. Cannot be used with the `topic` or `pattern` properties

Type: _string_ | false |

| [.no-hyphens]#*pattern*# | Indicate that the `topic` property is a regular expression. Must be used with the `topic` property. Cannot be used with the `topics` property

Type: _boolean_ | false | `false`

| [.no-hyphens]#*key.deserializer*# | The deserializer classname used to deserialize the record's key

Type: _string_ | false | `org.apache.kafka.common.serialization.StringDeserializer`

| [.no-hyphens]#*value.deserializer*# | The deserializer classname used to deserialize the record's value

Type: _string_ | true |

| [.no-hyphens]#*fetch.min.bytes*# | The minimum amount of data the server should return for a fetch request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive.

Type: _int_ | false | `1`

| [.no-hyphens]#*group.id*# | A unique string that identifies the consumer group the application belongs to.

If not set, defaults to the application name as set by the `quarkus.application.name` configuration property.

If that is not set either, a unique, generated id is used.

It is recommended to always define a `group.id`; the automatic generation is only a convenience feature for development.
You can explicitly ask for an automatically generated unique id by setting this property to `${quarkus.uuid}`.

Type: _string_ | false |

| [.no-hyphens]#*enable.auto.commit*# | If enabled, the consumer's offset will be periodically committed in the background by the underlying Kafka client, ignoring the actual processing outcome of the records. It is recommended to NOT enable this setting and let Reactive Messaging handle the commit.

Type: _boolean_ | false | `false`

| [.no-hyphens]#*retry*# | Whether or not the connection to the broker is re-attempted in case of failure

Type: _boolean_ | false | `true`

| [.no-hyphens]#*retry-attempts*# | The maximum number of reconnections before failing. -1 means infinite retry

Type: _int_ | false | `-1`

| [.no-hyphens]#*retry-max-wait*# | The max delay (in seconds) between two reconnects

Type: _int_ | false | `30`

| [.no-hyphens]#*broadcast*# | Whether the Kafka records should be dispatched to multiple consumers

Type: _boolean_ | false | `false`

| [.no-hyphens]#*auto.offset.reset*# | What to do when there is no initial offset in Kafka. Accepted values are earliest, latest and none

Type: _string_ | false | `latest`

| [.no-hyphens]#*failure-strategy*# | Specify the failure strategy to apply when a message produced from a record is acknowledged negatively (nack). Values can be `fail` (default), `ignore`, or `dead-letter-queue`

Type: _string_ | false | `fail`

| [.no-hyphens]#*commit-strategy*# | Specify the commit strategy to apply when a message produced from a record is acknowledged. Values can be `latest`, `ignore` or `throttled`. If `enable.auto.commit` is true then the default is `ignore` otherwise it is `throttled`

Type: _string_ | false |

| [.no-hyphens]#*throttled.unprocessed-record-max-age.ms*# | While using the `throttled` commit-strategy, specify the max age in milliseconds that an unprocessed message can be before the connector is marked as unhealthy.
Setting this attribute to 0 disables this monitoring.

Type: _int_ | false | `60000`

| [.no-hyphens]#*dead-letter-queue.topic*# | When the `failure-strategy` is set to `dead-letter-queue`, indicates on which topic the record is sent. Default is `dead-letter-topic-$channel`

Type: _string_ | false |

| [.no-hyphens]#*dead-letter-queue.key.serializer*# | When the `failure-strategy` is set to `dead-letter-queue`, indicates the key serializer to use. If not set, the serializer associated with the key deserializer is used

Type: _string_ | false |

| [.no-hyphens]#*dead-letter-queue.value.serializer*# | When the `failure-strategy` is set to `dead-letter-queue`, indicates the value serializer to use. If not set, the serializer associated with the value deserializer is used

Type: _string_ | false |

| [.no-hyphens]#*partitions*# | The number of partitions to be consumed concurrently. The connector creates the specified amount of Kafka consumers. It should match the number of partitions of the targeted topic

Type: _int_ | false | `1`

| [.no-hyphens]#*requests*# | When `partitions` is greater than 1, this attribute allows configuring how many records are requested by each consumer every time.

Type: _int_ | false | `128`

| [.no-hyphens]#*consumer-rebalance-listener.name*# | The name set in `@Identifier` of a bean that implements `io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener`. If set, this rebalance listener is applied to the consumer.

Type: _string_ | false |

| [.no-hyphens]#*key-deserialization-failure-handler*# | The name set in `@Identifier` of a bean that implements `io.smallrye.reactive.messaging.kafka.DeserializationFailureHandler`. If set, deserialization failures that happen when deserializing keys are delegated to this handler, which may retry or provide a fallback value.

Type: _string_ | false |

| [.no-hyphens]#*value-deserialization-failure-handler*# | The name set in `@Identifier` of a bean that implements `io.smallrye.reactive.messaging.kafka.DeserializationFailureHandler`. If set, deserialization failures that happen when deserializing values are delegated to this handler, which may retry or provide a fallback value.

Type: _string_ | false |

| [.no-hyphens]#*fail-on-deserialization-failure*# | When no deserialization failure handler is set and a deserialization failure happens, report the failure and mark the application as unhealthy. If set to `false` and a deserialization failure happens, a `null` value is forwarded.

Type: _boolean_ | false | `true`

| [.no-hyphens]#*graceful-shutdown*# | Whether or not a graceful shutdown should be attempted when the application terminates.

Type: _boolean_ | false | `true`

| [.no-hyphens]#*poll-timeout*# | The polling timeout in milliseconds. When polling records, the poll will wait at most that duration before returning records. Default is 1000ms

Type: _int_ | false | `1000`

| [.no-hyphens]#*pause-if-no-requests*# | Whether the polling must be paused when the application does not request items and resumed when it does. This allows implementing back-pressure based on the application capacity. Note that polling is not stopped, but will not retrieve any records when paused.

Type: _boolean_ | false | `true`

| [.no-hyphens]#*batch*# | Whether the Kafka records are consumed in batch. The channel injection point must consume a compatible type, such as `List<Payload>` or `KafkaRecordBatch<Key, Payload>`.

Type: _boolean_ | false | `false`

| [.no-hyphens]#*max-queue-size-factor*# | Multiplier factor to determine the maximum number of records queued for processing, using `max.poll.records` * `max-queue-size-factor`. Defaults to 2. In `batch` mode `max.poll.records` is considered `1`.

Type: _int_ | false | `2`

|===
diff --git a/_versions/2.7/guides/smallrye-kafka-outgoing.adoc b/_versions/2.7/guides/smallrye-kafka-outgoing.adoc
deleted file mode 100644
index 551110c5b09..00000000000
--- a/_versions/2.7/guides/smallrye-kafka-outgoing.adoc
+++ /dev/null
@@ -1,148 +0,0 @@
.Outgoing Attributes of the 'smallrye-kafka' connector
[cols="25, 30, 15, 20",options="header"]
|===
|Attribute (_alias_) | Description | Mandatory | Default

| [.no-hyphens]#*acks*# | The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. Accepted values are: 0, 1, all

Type: _string_ | false | `1`

| [.no-hyphens]#*bootstrap.servers*#

[.no-hyphens]#_(kafka.bootstrap.servers)_# | A comma-separated list of host:port to use for establishing the initial connection to the Kafka cluster.

Type: _string_ | false | `localhost:9092`

| [.no-hyphens]#*buffer.memory*# | The total bytes of memory the producer can use to buffer records waiting to be sent to the server.

Type: _long_ | false | `33554432`

| [.no-hyphens]#*close-timeout*# | The number of milliseconds to wait for a graceful shutdown of the Kafka producer

Type: _int_ | false | `10000`

| [.no-hyphens]#*cloud-events*# | Enables (default) or disables Cloud Event support. If enabled on an _incoming_ channel, the connector analyzes the incoming records and tries to create Cloud Event metadata. If enabled on an _outgoing_ channel, the connector sends the outgoing messages as a Cloud Event if the message includes Cloud Event metadata.

Type: _boolean_ | false | `true`

| [.no-hyphens]#*cloud-events-data-content-type*#

[.no-hyphens]#_(cloud-events-default-data-content-type)_# | Configure the default `datacontenttype` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`.
This value is used if the message does not configure the `datacontenttype` attribute itself - -Type: _string_ | false | - -| [.no-hyphens]#*cloud-events-data-schema*# - -[.no-hyphens]#_(cloud-events-default-data-schema)_# | Configure the default `dataschema` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `dataschema` attribute itself - -Type: _string_ | false | - -| [.no-hyphens]#*cloud-events-insert-timestamp*# - -[.no-hyphens]#_(cloud-events-default-timestamp)_# | Whether or not the connector should automatically insert the `time` attribute into the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `time` attribute itself - -Type: _boolean_ | false | `true` - -| [.no-hyphens]#*cloud-events-mode*# | The Cloud Event mode (`structured` or `binary` (default)). Indicates how the Cloud Events are written in the outgoing record - -Type: _string_ | false | `binary` - -| [.no-hyphens]#*cloud-events-source*# - -[.no-hyphens]#_(cloud-events-default-source)_# | Configure the default `source` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `source` attribute itself - -Type: _string_ | false | - -| [.no-hyphens]#*cloud-events-subject*# - -[.no-hyphens]#_(cloud-events-default-subject)_# | Configure the default `subject` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`. This value is used if the message does not configure the `subject` attribute itself - -Type: _string_ | false | - -| [.no-hyphens]#*cloud-events-type*# - -[.no-hyphens]#_(cloud-events-default-type)_# | Configure the default `type` attribute of the outgoing Cloud Event. Requires `cloud-events` to be set to `true`.
This value is used if the message does not configure the `type` attribute itself - -Type: _string_ | false | - -| [.no-hyphens]#*health-enabled*# | Whether health reporting is enabled (default) or disabled - -Type: _boolean_ | false | `true` - -| [.no-hyphens]#*health-readiness-enabled*# | Whether readiness health reporting is enabled (default) or disabled - -Type: _boolean_ | false | `true` - -| [.no-hyphens]#*health-readiness-timeout*# | _deprecated_ - During the readiness health check, the connector connects to the broker and retrieves the list of topics. This attribute specifies the maximum duration (in ms) for the retrieval. If exceeded, the channel is considered not-ready. Deprecated: Use 'health-topic-verification-timeout' instead. - -Type: _long_ | false | - -| [.no-hyphens]#*health-readiness-topic-verification*# | _deprecated_ - Whether the readiness check should verify that topics exist on the broker. Defaults to `false`. Enabling it requires an admin connection. Deprecated: Use 'health-topic-verification-enabled' instead. - -Type: _boolean_ | false | - -| [.no-hyphens]#*health-topic-verification-enabled*# | Whether the startup and readiness check should verify that topics exist on the broker. Defaults to `false`. Enabling it requires an admin client connection. - -Type: _boolean_ | false | `false` - -| [.no-hyphens]#*health-topic-verification-timeout*# | During the startup and readiness health check, the connector connects to the broker and retrieves the list of topics. This attribute specifies the maximum duration (in ms) for the retrieval. If exceeded, the channel is considered not-ready. - -Type: _long_ | false | `2000` - -| [.no-hyphens]#*kafka-configuration*# | Identifier of a CDI bean that provides the default Kafka consumer/producer configuration for this channel. The channel configuration can still override any attribute. The bean must have a type of `Map` and must use the `@io.smallrye.common.annotation.Identifier` qualifier to set the identifier.
- -Type: _string_ | false | - -| [.no-hyphens]#*key*# | A key to use when writing the record - -Type: _string_ | false | - -| [.no-hyphens]#*key-serialization-failure-handler*# | The name set in `@Identifier` of a bean that implements `io.smallrye.reactive.messaging.kafka.SerializationFailureHandler`. If set, serialization failures happening when serializing keys are delegated to this handler, which may provide a fallback value. - -Type: _string_ | false | - -| [.no-hyphens]#*key.serializer*# | The serializer classname used to serialize the record's key - -Type: _string_ | false | `org.apache.kafka.common.serialization.StringSerializer` - -| [.no-hyphens]#*max-inflight-messages*# | The maximum number of messages to be written to Kafka concurrently. It limits the number of messages waiting to be written and acknowledged by the broker. You can set this attribute to `0` to remove the limit - -Type: _long_ | false | `1024` - -| [.no-hyphens]#*merge*# | Whether the connector should allow multiple upstreams - -Type: _boolean_ | false | `false` - -| [.no-hyphens]#*partition*# | The target partition id. -1 to let the client determine the partition - -Type: _int_ | false | `-1` - -| [.no-hyphens]#*propagate-record-key*# | Propagate incoming record key to the outgoing record - -Type: _boolean_ | false | `false` - -| [.no-hyphens]#*retries*# | If set to a positive number, the connector will try to resend any record that was not delivered successfully (with a potentially transient error) until the number of retries is reached. If set to 0, retries are disabled. If not set, the connector tries to resend any record that failed to be delivered (because of a potentially transient error) during an amount of time configured by `delivery.timeout.ms`. - -Type: _long_ | false | `2147483647` - -| [.no-hyphens]#*topic*# | The consumed / populated Kafka topic.
If neither this property nor the `topics` property is set, the channel name is used - -Type: _string_ | false | - -| [.no-hyphens]#*tracing-enabled*# | Whether tracing is enabled (default) or disabled - -Type: _boolean_ | false | `true` - -| [.no-hyphens]#*value-serialization-failure-handler*# | The name set in `@Identifier` of a bean that implements `io.smallrye.reactive.messaging.kafka.SerializationFailureHandler`. If set, serialization failures happening when serializing values are delegated to this handler, which may provide a fallback value. - -Type: _string_ | false | - -| [.no-hyphens]#*value.serializer*# | The serializer classname used to serialize the payload - -Type: _string_ | true | - -| [.no-hyphens]#*waitForWriteCompletion*# | Whether the client waits for Kafka to acknowledge the written record before acknowledging the message - -Type: _boolean_ | false | `true` - -|=== diff --git a/_versions/2.7/guides/smallrye-metrics.adoc b/_versions/2.7/guides/smallrye-metrics.adoc deleted file mode 100644 index 3c06698fcbe..00000000000 --- a/_versions/2.7/guides/smallrye-metrics.adoc +++ /dev/null @@ -1,225 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// - -= SmallRye Metrics - -include::./attributes.adoc[] - -The following guide demonstrates how a Quarkus application can use link:https://github.com/smallrye/smallrye-metrics/[SmallRye Metrics], -an implementation of the link:https://github.com/eclipse/microprofile-metrics/[MicroProfile Metrics] specification. - -SmallRye Metrics allows applications to gather metrics and statistics that provide insights into what is happening inside an application. The metrics can be read remotely using the JSON or OpenMetrics format to be processed by additional tools such as Prometheus and stored for analysis and visualization.
- -Apart from application-specific metrics described in this guide, you may also use built-in metrics exposed by various Quarkus extensions. These are described in the guide for each particular extension that supports built-in metrics. - -IMPORTANT: xref:micrometer.adoc[Micrometer] is the recommended approach to metrics for Quarkus. Use the SmallRye Metrics extension when it is required to retain MicroProfile specification compatibility. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this example, we build a very simple microservice that offers one REST endpoint. This endpoint determines whether a number is prime. The implementation class is annotated with certain metric annotations so that while responding to users' requests, certain metrics are gathered. The meaning of each metric is explained later. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. However, you can skip to the completed example. - -. Clone the Git repository: -+ -[source,bash,subs=attributes+] ----- -git clone {quickstarts-clone-url} ----- - -* Alternatively, download a {quickstarts-archive-url}[Quickstarts archive]. The solution is located in the `microprofile-metrics-quickstart` {quickstarts-tree-url}/microprofile-metrics-quickstart[directory]; then continue with the xref:running-and-using-the-application_{context}[] section. - -[id="creating-a-maven-project_{context}"] -== Creating a Maven project - -To create a new project: - -:create-app-artifact-id: microprofile-metrics-quickstart -:create-app-extensions: resteasy,smallrye-metrics -include::includes/devtools/create-app.adoc[] - -This command generates a Quarkus project that uses the `smallrye-metrics` extension.
- -If you already have your Quarkus project configured, you can add the `smallrye-metrics` extension to your project by running the following command in your project base directory: - -:add-extension-extensions: smallrye-metrics -include::includes/devtools/extension-add.adoc[] - -This adds the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-smallrye-metrics</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-smallrye-metrics") ----- - -[id="writing-an-application_{context}"] -== Writing an application - -The following procedures create a Quarkus application that consists of a single class that implements an algorithm for checking whether a number is prime. This algorithm is exposed over a REST interface. Additionally, specific annotations are required to ensure that the desired metrics are calculated over time and can be exported for manual analysis or processing by additional tooling. - -The application will gather the following metrics: - -* `performedChecks`: A counter that increases by one each time the user asks about a number. -* `highestPrimeNumberSoFar`: A gauge that stores the highest number asked about by the user if the number was determined to be prime. -* `checksTimer`: A compound metric that benchmarks how much time the primality tests take. Additional details are provided later.
- -The full source code looks as follows: -[source,java] ----- -package org.acme.microprofile.metrics; - -import org.eclipse.microprofile.metrics.MetricUnits; -import org.eclipse.microprofile.metrics.annotation.Counted; -import org.eclipse.microprofile.metrics.annotation.Gauge; -import org.eclipse.microprofile.metrics.annotation.Timed; -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/") -public class PrimeNumberChecker { - - private long highestPrimeNumberSoFar = 2; - - @GET - @Path("/{number}") - @Produces(MediaType.TEXT_PLAIN) - @Counted(name = "performedChecks", description = "How many primality checks have been performed.") - @Timed(name = "checksTimer", description = "A measure of how long it takes to perform the primality test.", unit = MetricUnits.MILLISECONDS) - public String checkIfPrime(@PathParam long number) { - if (number < 1) { - return "Only natural numbers can be prime numbers."; - } - if (number == 1) { - return "1 is not prime."; - } - if (number == 2) { - return "2 is prime."; - } - if (number % 2 == 0) { - return number + " is not prime, it is divisible by 2."; - } - for (int i = 3; i < Math.floor(Math.sqrt(number)) + 1; i = i + 2) { - if (number % i == 0) { - return number + " is not prime, is divisible by " + i + "."; - } - } - if (number > highestPrimeNumberSoFar) { - highestPrimeNumberSoFar = number; - } - return number + " is prime."; - } - - @Gauge(name = "highestPrimeNumberSoFar", unit = MetricUnits.NONE, description = "Highest prime number so far.") - public Long highestPrimeNumberSoFar() { - return highestPrimeNumberSoFar; - } - -} ----- - -[id="running-and-using-the-application_{context}"] -== Running and using the application - -To execute the application created in xref:writing-an-application_{context}[], do the following: - -:devtools-wrapped: -. 
Run the microservice in dev mode: -+ -include::includes/devtools/dev.adoc[] -:!devtools-wrapped: - -. Generate values for the metrics. - -.. Query the endpoint to determine whether some numbers are prime numbers: -+ -[source,bash] ----- -curl localhost:8080/350 ----- -+ -The application will respond that 350 is not a prime number because it can be divided by 2. - -* For large prime numbers, the test takes more time. -+ -[source,bash] ----- -curl localhost:8080/629521085409773 ----- -+ -The application will respond that 629521085409773 is a prime number. - -.. Perform additional calls with numbers of your choice. - -. Review the generated metrics: -+ -[source,bash] ----- -curl -H"Accept: application/json" localhost:8080/q/metrics/application ----- -+ -You will receive a response such as: -+ -[source,json] ----- -{ - "org.acme.microprofile.metrics.PrimeNumberChecker.checksTimer" : { <1> - "p50": 217.231273, <2> - "p75": 217.231273, - "p95": 217.231273, - "p98": 217.231273, - "p99": 217.231273, - "p999": 217.231273, - "min": 0.58961, <3> - "mean": 112.15909190834819, <4> - "max": 217.231273, <5> - "stddev": 108.2721053982776, <6> - "count": 2, <7> - "meanRate": 0.04943519091742238, <8> - "oneMinRate": 0.2232140583080189, - "fiveMinRate": 0.3559527083952095, - "fifteenMinRate": 0.38474303050928976 - }, - "org.acme.microprofile.metrics.PrimeNumberChecker.performedChecks" : 2, <9> - "org.acme.microprofile.metrics.PrimeNumberChecker.highestPrimeNumberSoFar" : 629521085409773 <10> -} ----- - -<1> `checksTimer`: A compound metric that benchmarks how much time the primality tests take. All durations are measured in milliseconds. It consists of the values below. -<2> `p50, p75, p95, p98, p99, p999`: Percentiles of the durations. For example, the value in `p95` means that 95% of the measurements were faster than this duration. -<3> `min`: The shortest duration it took to perform a primality test; it was probably performed for a small number.
-<4> `mean`: The mean value of the measured durations. -<5> `max`: The longest duration, probably measured with a large prime number. -<6> `stddev`: The standard deviation. -<7> `count`: The number of observations, the value of which is the same as `performedChecks`. -<8> `meanRate, oneMinRate, fiveMinRate, fifteenMinRate`: Mean throughput and one-, five-, and fifteen-minute exponentially-weighted moving average throughput. -<9> `performedChecks`: A counter which is increased by one each time the user asks about a number. -<10> `highestPrimeNumberSoFar`: A gauge that stores the highest number that was asked about by the user and which was determined to be prime. - -NOTE: If you prefer an OpenMetrics export rather than the JSON format, remove the `-H"Accept: application/json"` argument from your command line. - -.Configuration Reference - -include::{generated-dir}/config/quarkus-smallrye-metrics.adoc[opts=optional, leveloffset=+1] diff --git a/_versions/2.7/guides/software-transactional-memory.adoc b/_versions/2.7/guides/software-transactional-memory.adoc deleted file mode 100644 index 8d5831d75ee..00000000000 --- a/_versions/2.7/guides/software-transactional-memory.adoc +++ /dev/null @@ -1,244 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Software Transactional Memory in Quarkus - -include::./attributes.adoc[] -:extension-status: preview - -Software Transactional Memory (STM) has been around in research environments since the late -1990s and has relatively recently started to appear in products and various programming -languages. We won't go into all of the details behind STM here, but the interested reader could look at https://groups.csail.mit.edu/tds/papers/Shavit/ShavitTouitou-podc95.pdf[this paper].
-However, suffice it to say that STM offers an approach to developing transactional applications in a highly -concurrent environment with some of the same characteristics of ACID transactions, which you've probably already used -through JTA. Importantly though, the Durability property is relaxed (removed) within STM implementations, -or at least made optional. This is not the situation with JTA, where state changes are made durable -to a relational database which supports https://pubs.opengroup.org/onlinepubs/009680699/toc.pdf[the X/Open XA -standard]. - -Note, the STM implementation provided by Quarkus is based on the https://narayana.io/docs/project/index.html#d0e16066[Narayana STM] implementation. This document isn't meant to be a replacement for that project's documentation so you may want -to look at that for more detail. However, we will try to focus more on how you can combine some of the key capabilities -into Quarkus when developing Kubernetes native applications and microservices. - -== Why use STM with Quarkus? - -Now you may still be asking yourself "Why STM instead of JTA?" or "What are the benefits -to STM that I don't get from JTA?" Let's try to answer those or similar questions, with -a particular focus on why we think they're great for Quarkus, microservices and Kubernetes -native applications. So in no specific order ... - -* The goal of STM is to simplify object reads and writes from multiple threads/protect -state from concurrent updates. The Quarkus STM implementation will safely manage any conflicts between -these threads using whatever isolation model has been chosen to protect that specific state -instance (object in the case of Quarkus). 
In Quarkus STM, there are two isolation implementations, -pessimistic (the default), which would cause conflicting threads to be blocked until the original -has completed its updates (committed or aborted the transaction); then there's the optimistic -approach which allows all of the threads to proceed and checks for conflicts at commit time, where -one or more of the threads may be forced to abort if there have been conflicting updates. - -* STM objects have state but it doesn't need to be persistent (durable). In fact the -default behaviour is for objects managed within transactional memory to be volatile, such that -if the service or microservice within which they are being used crashes or is spawned elsewhere, e.g., -by a scheduler, all state in memory is lost and the objects start from scratch. But surely you get this and more -with JTA (and a suitable transactional datastore) and don't need to worry about restarting your application? -Not quite. There's a trade-off here: we're doing away -with persistent state and the overhead of reading from and then writing (and sync-ing) to the datastore during each -transaction. This makes updates to (volatile) state very fast but you still get the benefits of atomic updates -across multiple STM objects (e.g., objects your team wrote then calling objects you inherited from another team and requiring -them to make all-or-nothing updates), as well as consistency -and isolation in the presence of concurrent threads/users (common in distributed microservices architectures). -Furthermore, not all stateful applications need to be durable - even when JTA transactions are used, it tends to be the -exception and not the rule. And as you'll see later, because applications can optionally start and control transactions, it's possible to build microservices which can undo state changes and try alternative paths. - -* Another benefit of STM is composability and modularity. 
You can write concurrent Quarkus objects/services that -can be easily composed with any other services built using STM, without exposing the details of how the objects/services -are implemented. As we discussed earlier, this ability to compose objects you wrote with those other teams may have -written weeks, months or years earlier, and have A, C and I properties can be hugely beneficial. Furthermore, some -STM implementations, including the one Quarkus uses, support nested transactions and these allow changes made within -the context of a nested (sub) transaction to later be rolled back by the parent transaction. - -* Although the default for STM object state is volatile, it is possible to configure the STM implementation -such that an object's state is durable. Although it's possible to configure Narayana such that different -backend datastores can be used, including relational databases, the default is the local operating system -file system, which means you don't need to configure anything else with Quarkus such as a database. - -* Many STM implementations allow "plain old language objects" to be made STM-aware with little or no changes to -the application code. You can build, test and deploy applications without wanting them to be STM-aware and -then later add those capabilities if they become necessary and without much development overhead at all. - -== Building STM applications - -There is also a fully worked example in the quickstarts which you may access by cloning the -Git repository: `git clone {quickstarts-clone-url}`, or by downloading an {quickstarts-archive-url}[archive]. -Look for the `software-transactional-memory-quickstart` example. This will help to understand how you -can build STM-aware applications with Quarkus. However, before we do so there are a few basic concepts -which we need to cover. - -Note, as you will see, STM in Quarkus relies on a number of annotations to define behaviours. 
The lack -of these annotations causes sensible defaults to be assumed, but it is important for the developer to -understand what these may be. Please refer to the https://narayana.io/docs/project/index.html#d0e16066[Narayana STM manual] -and the https://narayana.io//docs/project/index.html#d0e16133[STM annotations guide] for more details on -all of the annotations Narayana STM provides. - -include::./status-include.adoc[] - -== Setting it up - -To use the extension, include it as a dependency in your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-narayana-stm</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-narayana-stm") ----- - -== Defining STM-aware classes - -In order for the STM subsystem to have knowledge about which classes are to be managed within the context -of transactional memory it is necessary to provide a minimal level of instrumentation. This occurs by -categorising STM-aware and STM-unaware classes through an interface boundary; specifically all STM-aware objects -must be instances of classes which inherit from interfaces that themselves have been annotated to identify them -as STM-aware. Any other objects (and their classes) which do not follow this rule will not be managed by the -STM subsystem and hence any of their state changes will not be rolled back, for example. - -The specific annotation that STM-aware application interfaces must use is `org.jboss.stm.annotations.Transactional`.
-For example: - -[source,java] ----- -@Transactional -public interface FlightService { - int getNumberOfBookings(); - void makeBooking(String details); -} ----- - -Classes which implement this interface are able to use additional annotations from Narayana to tell the STM -subsystem about things such as whether a method will modify the state of the object, or what state variables -within the class should be managed transactionally, e.g., some instance variables may not need to be rolled back -if a transaction aborts. As mentioned earlier, if those annotations are not present then defaults are chosen to -guarantee safety, such as assuming all methods will modify state. - -[source,java] ----- -public class FlightServiceImpl implements FlightService { - @ReadLock - public int getNumberOfBookings() { ... } - public void makeBooking(String details) {...} - - @NotState - private int timesCalled; -} ----- - -For example, by using the `@ReadLock` annotation on the `getNumberOfBookings` method, we are able to tell the -STM subsystem that no state modifications will occur in this object when it is used in the transactional -memory. Also, the `@NotState` annotation tells the system to ignore `timesCalled` when transactions commit or -abort, so this value only changes due to application code. - -Please refer to the Narayana guide for details of how to exert finer grained control over the transactional -behaviour of objects that implement interfaces marked with the `@Transactional` annotation. - -== Creating STM objects - -The STM subsystem needs to be told about which objects it should be managing. The Quarkus (aka Narayana) STM implementation -does this by providing containers of transactional memory within which these object instances reside. Until an object -is placed within one of these STM containers it cannot be managed within transactions and any state changes will -not possess the A, C, I (or even D) properties. 
- -Note, the term "container" was defined within the STM implementation years before Linux containers came along. It may -be confusing to use especially in a Kubernetes native environment such as Quarkus, but hopefully -the reader can do the mental mapping. - -The default STM container (`org.jboss.stm.Container`) provides support for volatile objects that can only be shared between -threads in the same microservice/JVM instance. When an STM-aware object is placed into the container it returns a handle -through which that object should then be used in the future. It is important to use this handle as continuing to access -the object through the original reference will not allow the STM subsystem to track access and manage state and -concurrency control. - -[source,java] ----- - import org.jboss.stm.Container; - - ... - - Container<FlightService> container = new Container<>(); <1> - FlightServiceImpl instance = new FlightServiceImpl(); <2> - FlightService flightServiceProxy = container.create(instance); <3> ----- - -<1> You need to tell each Container about the type of objects for which it will be responsible. In this example - it will be instances that implement the FlightService interface. -<2> Then you create an instance that implements `FlightService`. You should not use it directly at this stage because - access to it is not being managed by the STM subsystem. -<3> To obtain a managed instance, pass the original object to the STM `container` which then returns a reference - through which you will be able to perform transactional operations. This reference can be used safely from multiple threads. - -== Defining transaction boundaries - -Once an object is placed within an STM container the application developer can manage the scope of transactions -within which it is used. There are some annotations which can be applied to the STM-aware class to have the -container automatically create a transaction whenever a specific method is invoked.
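As a hedged sketch of such container-managed boundaries (the service class is illustrative; the `@Nested` and `@NestedTopLevel` annotations come from the `org.jboss.stm.annotations` package documented in the Narayana STM guides), a method-level transaction might be declared like this:

```java
// Illustrative sketch only: assumes the Narayana STM annotations
// org.jboss.stm.annotations.Nested, NestedTopLevel and Transactional.
import org.jboss.stm.annotations.Nested;
import org.jboss.stm.annotations.NestedTopLevel;
import org.jboss.stm.annotations.Transactional;

@Transactional
interface AuditedFlightService {
    void makeBooking(String details);
    void writeAuditRecord(String entry);
}

class AuditedFlightServiceImpl implements AuditedFlightService {

    @Nested          // nested inside the caller's transaction; the parent can roll it back
    public void makeBooking(String details) { /* update STM-managed state */ }

    @NestedTopLevel  // independent top-level transaction; commits or aborts on its own
    public void writeAuditRecord(String entry) { /* update STM-managed state */ }
}
```

Invoking `makeBooking` through a container-created handle would then start and attempt to commit a transaction around the call, without an explicit `AtomicAction`.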
- -=== Declarative approach - -If the `@NestedTopLevel` or `@Nested` annotation is placed on a method signature then the STM container will -start a new transaction when that method is invoked and attempt to commit it when the method returns. If there is -a transaction already associated with the calling thread then each of these annotations behaves slightly differently: -the former annotation will always create a new top-level transaction within which the method will execute, so the enclosing -transaction does not behave as a parent, i.e., the nested top-level transaction will commit or abort independently; the -latter annotation will create a transaction which is properly nested within the calling transaction, i.e., that -transaction acts as the parent of this newly created transaction. - -=== Programmatic approach - -The application can programmatically start a transaction before accessing the methods of STM objects: - -[source,java] ----- -AtomicAction aa = new AtomicAction(); <1> - -aa.begin(); <2> -{ - try { - flightService.makeBooking("BA123 ..."); - taxiService.makeBooking("East Coast Taxis ..."); <3> - <4> - aa.commit(); - <5> - } catch (Exception e) { - aa.abort(); <6> - } -} ----- - -<1> An object for manually controlling transaction boundaries (AtomicAction and many other useful - classes are included in the extension). - Refer https://narayana.io//docs/api/com/arjuna/ats/arjuna/AtomicAction.html[to the javadoc] for more detail. -<2> Programmatically begin a transaction. -<3> Notice that object updates can be composed which means that updates to multiple objects can be committed together as a single action. - [Note that it is also possible to begin nested transactions so that you can perform speculative work which may then be abandoned - without abandoning other work performed by the outer transaction]. -<4> Since the transaction has not yet been committed, the changes made by the flight and taxi services are not visible outside of the transaction.
-<5> Since the commit was successful, the changes made by the flight and taxi services are now visible to other threads. - Note that other transactions that relied on the old state may or may not now incur conflicts when they commit (the STM library - provides a number of features for managing conflicting behaviour and these are covered in the Narayana STM manual). -<6> Programmatically decide to abort the transaction, which means that the changes made by the flight and taxi services are discarded. - -== Distributed transactions - -Sharing a transaction between multiple services is possible but is currently -an advanced use case only and the Narayana documentation should be consulted -if this behaviour is required. In particular, STM does not yet support the features -described in the xref:context-propagation.adoc[Context Propagation guide]. diff --git a/_versions/2.7/guides/spring-boot-properties.adoc b/_versions/2.7/guides/spring-boot-properties.adoc deleted file mode 100644 index e6b99ce0938..00000000000 --- a/_versions/2.7/guides/spring-boot-properties.adoc +++ /dev/null @@ -1,397 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Accessing application properties with Spring Boot properties API - -include::./attributes.adoc[] - -If you prefer to use a Spring Boot `@ConfigurationProperties`-annotated class to access application properties instead of -a Quarkus native `@ConfigProperties` or a MicroProfile `@ConfigProperty` approach, you can do that with this extension. - -IMPORTANT: Spring Boot `@ConfigurationProperties` has a few limitations. For instance, `Map` injection is not -supported. Consider using xref:config-mappings.adoc[Mapping configuration to objects].
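For comparison with that recommended approach, a minimal `@ConfigMapping` sketch (the interface name is illustrative; `io.smallrye.config.ConfigMapping` is the annotation the linked guide describes) for a `greeting.text` property might look like:

```java
// Illustrative sketch of the interface-based alternative;
// GreetingConfig is a hypothetical name chosen for this example.
import io.smallrye.config.ConfigMapping;

@ConfigMapping(prefix = "greeting")
public interface GreetingConfig {
    String text(); // bound to the greeting.text property
}
```

Injecting `GreetingConfig` then gives type-safe access to the configuration, including `Map` members that `@ConfigurationProperties` cannot bind.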
- -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `spring-boot-properties-quickstart` {quickstarts-tree-url}/spring-boot-properties-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: spring-boot-properties-quickstart -:create-app-extensions: resteasy,spring-boot-properties -include::includes/devtools/create-app.adoc[] - -This command generates a project and imports the `spring-boot-properties` extension. - -If you already have your Quarkus project configured, you can add the `spring-boot-properties` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: spring-boot-properties -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-spring-boot-properties</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-spring-boot-properties") ----- - -== GreetingController - -First, create a `GreetingResource` JAX-RS resource in the -`src/main/java/org/acme/spring/boot/properties/GreetingResource.java` file that looks like: - -[source,java] ----- -package org.acme.spring.boot.properties; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/hello") -public class GreetingResource { - - @GET - @Produces(MediaType.TEXT_PLAIN) -
public String hello() { - return "hello"; - } -} ----- - -== Injecting properties - -Create a new class `src/main/java/org/acme/spring/boot/properties/GreetingProperties.java` with a message field: - -[source,java] ----- -package org.acme.spring.boot.properties; - -import org.springframework.boot.context.properties.ConfigurationProperties; - -@ConfigurationProperties("greeting") -public class GreetingProperties { - - public String text; -} ----- - -Here `text` field is public, but it could also be a private field with getter and setter or just a public getter in an interface. -Because `text` does not have a default value it is considered required and unless it is defined in a configuration file (`application.properties` by default) your application will fail to start. -Define this property in your `src/main/resources/application.properties` file: - -[source,properties] ----- -# Your configuration properties -greeting.text = hello ----- - -Now modify `GreetingResource` to start using the `GreetingProperties`: - -[source,java] ----- -package org.acme.spring.boot.properties; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/greeting") -public class GreetingResource { - - @Inject - GreetingProperties properties; - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return properties.text; - } -} ----- - -Run the tests to verify that application still functions correctly. - -== Package and run the application - -Run the application in dev mode with: - -include::includes/devtools/dev.adoc[] - -Open your browser to http://localhost:8080/greeting. - -Changing the configuration file is immediately reflected. - -As usual, the application can be packaged using: - -include::includes/devtools/build.adoc[] - -And executed using `java -jar target/quarkus-app/quarkus-run.jar`. 
- -You can also generate the native executable with: - -include::includes/devtools/build-native.adoc[] - -== Default values - -Now let's add a suffix for a greeting for which we'll set a default value. - - -Properties with default values can be configured in a configuration file just like any other property. -However, the default value will be used if the property was not defined in a configuration file. - -Go ahead and add the new field to the `GreetingProperties` class: - -[source,java] ----- -package org.acme.spring.boot.properties; - -import org.springframework.boot.context.properties.ConfigurationProperties; - -@ConfigurationProperties("greeting") -public class GreetingProperties { - - public String text; - - public String suffix = "!"; -} ----- - -And update the `GreetingResource` and its test `GreetingResourceTest`: - -[source,java] ----- -package org.acme.spring.boot.properties; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/greeting") -public class GreetingResource { - - @Inject - GreetingProperties properties; - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return properties.text + properties.suffix; - } -} ----- - -[source,java] ----- -package org.acme.spring.boot.properties; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; - -@QuarkusTest -public class GreetingResourceTest { - - @Test - public void testHelloEndpoint() { - given() - .when().get("/greeting") - .then() - .statusCode(200) - .body(is("hello!")); - } -} ----- - -Run the tests to verify the change. - -== Optional values - -Properties with optional values are the middle-ground between standard and properties with default values. 
-While a missing property in a configuration file will not cause your application to fail, it will nevertheless not have a value set. -We use the `java.util.Optional` type to define such properties. - -Add an optional `name` property to the `GreetingProperties`: - -[source,java] ----- -package org.acme.spring.boot.properties; - -import java.util.Optional; - -import org.springframework.boot.context.properties.ConfigurationProperties; - -@ConfigurationProperties("greeting") -public class GreetingProperties { - - public String text; - - public String suffix = "!"; - - public Optional<String> name; -} ----- - -And update the `GreetingResource` and its test `GreetingResourceTest`: - -[source,java] ----- -package org.acme.spring.boot.properties; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/greeting") -public class GreetingResource { - - @Inject - GreetingProperties properties; - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return properties.text + ", " + properties.name.orElse("You") + properties.suffix; - } -} ----- - -[source,java] ----- -package org.acme.spring.boot.properties; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; - -@QuarkusTest -public class GreetingResourceTest { - - @Test - public void testHelloEndpoint() { - given() - .when().get("/greeting") - .then() - .statusCode(200) - .body(is("hello, You!")); - } -} ----- - -Run the tests to verify the change. - -== Grouping properties - -Now we have three properties in our `GreetingProperties` class. -While `name` could be considered more of a runtime property (and maybe could be passed as an HTTP query parameter in the future), `text` and `suffix` are used to define a message template.
-Let's group these two properties in a separate inner class: - -[source,java] ----- -package org.acme.spring.boot.properties; - -import java.util.Optional; - -import org.springframework.boot.context.properties.ConfigurationProperties; - -@ConfigurationProperties("greeting") -public class GreetingProperties { - - public Message message; - - public Optional<String> name; - - public static class Message { - - public String text; - - public String suffix = "!"; - } -} ----- - -Here the `Message` properties class is defined as an inner class, but it could also be a top-level class. - -Having such property groups brings more structure to your configuration. -This is especially useful when the number of properties grows. - -Because of the additional class, our property names have changed. -Let's update the properties file and the `GreetingResource` class. - -[source,properties] ----- -# Your configuration properties -greeting.message.text = hello ----- - -[source,java] ----- -package org.acme.spring.boot.properties; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/greeting") -public class GreetingResource { - - @Inject - GreetingProperties properties; - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return properties.message.text + ", " + properties.name.orElse("You") + properties.message.suffix; - } -} ----- - -== More Spring guides - -Quarkus has more Spring compatibility features.
See the following guides for more details: - -* xref:spring-di.adoc[Quarkus - Extension for Spring DI] -* xref:spring-web.adoc[Quarkus - Extension for Spring Web] -* xref:spring-data-jpa.adoc[Quarkus - Extension for Spring Data JPA] -* xref:spring-data-rest.adoc[Quarkus - Extension for Spring Data REST] -* xref:spring-security.adoc[Quarkus - Extension for Spring Security] -* xref:spring-cloud-config-client.adoc[Quarkus - Reading properties from Spring Cloud Config Server] -* xref:spring-cache.adoc[Quarkus - Extension for Spring Cache] -* xref:spring-scheduled.adoc[Quarkus - Extension for Spring Scheduled] diff --git a/_versions/2.7/guides/spring-cache.adoc b/_versions/2.7/guides/spring-cache.adoc deleted file mode 100644 index 709d6c9508f..00000000000 --- a/_versions/2.7/guides/spring-cache.adoc +++ /dev/null @@ -1,270 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Quarkus Extension for Spring Cache API - -include::./attributes.adoc[] - -While users are encouraged to use xref:cache.adoc[Quarkus annotations for caching], Quarkus nevertheless provides a compatibility layer for Spring Cache annotations in the form of the `spring-cache` extension. - -This guide explains how a Quarkus application can leverage the well known Spring Cache annotations to enable application data caching for their Spring beans. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] -* Some familiarity with the Spring DI extension - -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: spring-cache-quickstart -:create-app-extensions: resteasy,spring-di,spring-cache -include::includes/devtools/create-app.adoc[] - -This command generates a project which imports the `spring-cache` and `spring-di` extensions. 
- -If you already have your Quarkus project configured, you can add the `spring-cache` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: spring-cache -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-spring-cache</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-spring-cache") ----- - -== Creating the REST API - -Let's start by creating a service which will simulate an extremely slow call to an external meteorological service. -Create `src/main/java/org/acme/spring/cache/WeatherForecastService.java` with the following content: - -[source,java] ----- -package org.acme.spring.cache; - -import java.time.LocalDate; - -import org.springframework.stereotype.Component; - -@Component -public class WeatherForecastService { - - public String getDailyForecast(LocalDate date, String city) { - try { - Thread.sleep(2000L); // <1> - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - } - return date.getDayOfWeek() + " will be " + getDailyResult(date.getDayOfMonth() % 4) + " in " + city; - } - - private String getDailyResult(int dayOfMonthModuloFour) { - switch (dayOfMonthModuloFour) { - case 0: - return "sunny"; - case 1: - return "cloudy"; - case 2: - return "chilly"; - case 3: - return "rainy"; - default: - throw new IllegalArgumentException(); - } - } -} ----- -<1> This is where the slowness comes from. - -We also need a class which contains the response sent to the users when they ask for the next three days' weather forecast.
-Create `src/main/java/org/acme/spring/cache/WeatherForecast.java` this way: - -[source,java] ----- -package org.acme.spring.cache; - -import java.util.List; - -public class WeatherForecast { - - private List<String> dailyForecasts; - - private long executionTimeInMs; - - public WeatherForecast(List<String> dailyForecasts, long executionTimeInMs) { - this.dailyForecasts = dailyForecasts; - this.executionTimeInMs = executionTimeInMs; - } - - public List<String> getDailyForecasts() { - return dailyForecasts; - } - - public long getExecutionTimeInMs() { - return executionTimeInMs; - } -} ----- - -Now, we just need to create the `src/main/java/org/acme/spring/cache/WeatherForecastResource.java` class to use the service and response: - -[source,java] ----- -package org.acme.spring.cache; - -import java.time.LocalDate; -import java.util.Arrays; -import java.util.List; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; - -import org.jboss.resteasy.annotations.jaxrs.QueryParam; - -@Path("/weather") -public class WeatherForecastResource { - - @Inject - WeatherForecastService service; - - @GET - public WeatherForecast getForecast(@QueryParam String city, @QueryParam long daysInFuture) { // <1> - long executionStart = System.currentTimeMillis(); - List<String> dailyForecasts = Arrays.asList( - service.getDailyForecast(LocalDate.now().plusDays(daysInFuture), city), - service.getDailyForecast(LocalDate.now().plusDays(daysInFuture + 1L), city), - service.getDailyForecast(LocalDate.now().plusDays(daysInFuture + 2L), city) - ); - long executionEnd = System.currentTimeMillis(); - return new WeatherForecast(dailyForecasts, executionEnd - executionStart); - } -} ----- -<1> If the `daysInFuture` query parameter is omitted, the three-day weather forecast will start from the current day. -Otherwise, it will start from the current day plus the `daysInFuture` value. - -We're all done! Let's check if everything's working.
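Before running the full application, the day-of-month rotation used by `getDailyForecast` can be sanity-checked in isolation; the snippet below simply copies the switch from `WeatherForecastService` into a runnable class:

```java
public class ForecastRotation {

    // Same day-of-month mapping as WeatherForecastService#getDailyResult.
    static String dailyResult(int dayOfMonthModuloFour) {
        switch (dayOfMonthModuloFour) {
            case 0: return "sunny";
            case 1: return "cloudy";
            case 2: return "chilly";
            case 3: return "rainy";
            default: throw new IllegalArgumentException("expected a value in 0..3");
        }
    }

    public static void main(String[] args) {
        // Days of month 1..4 cycle through the four possible forecasts.
        for (int day = 1; day <= 4; day++) {
            System.out.println(day + " -> " + dailyResult(day % 4));
        }
    }
}
```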
- -First, run the application using: - -include::includes/devtools/dev.adoc[] - -Then, call `http://localhost:8080/weather?city=Raleigh` from a browser. -After six long seconds, the application will answer something like this: - -[source] ----- -{"dailyForecasts":["MONDAY will be cloudy in Raleigh","TUESDAY will be chilly in Raleigh","WEDNESDAY will be rainy in Raleigh"],"executionTimeInMs":6001} ----- - -[TIP] -==== -The response content may vary depending on the day you run the code. -==== - -You can try calling the same URL again and again; it will always take six seconds to answer. - -== Enabling the cache - -Now that your Quarkus application is up and running, let's tremendously improve its response time by caching the external meteorological service responses. -Update the `WeatherForecastService` class as follows: - -[source,java] ----- -package org.acme.spring.cache; - -import java.time.LocalDate; - -import org.springframework.cache.annotation.Cacheable; -import org.springframework.stereotype.Component; - -@Component -public class WeatherForecastService { - - @Cacheable("weather-cache") // <1> - public String getDailyForecast(LocalDate date, String city) { - try { - Thread.sleep(2000L); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - } - return date.getDayOfWeek() + " will be " + getDailyResult(date.getDayOfMonth() % 4) + " in " + city; - } - - private String getDailyResult(int dayOfMonthModuloFour) { - switch (dayOfMonthModuloFour) { - case 0: - return "sunny"; - case 1: - return "cloudy"; - case 2: - return "chilly"; - case 3: - return "rainy"; - default: - throw new IllegalArgumentException(); - } - } -} ----- -<1> We only added this annotation (and the associated import, of course). - -Let's try to call `http://localhost:8080/weather?city=Raleigh` again. -You're still waiting a long time before receiving an answer. -This is normal since the server just restarted and the cache was empty. - -Wait a second!
The server restarted by itself after the `WeatherForecastService` update? -Yes, this is one of Quarkus' amazing features for developers, called `live coding`. - -Now that the cache was loaded during the previous call, try calling the same URL. -This time, you should get a super-fast answer with an `executionTimeInMs` value close to 0. - -Let's see what happens if we start from one day in the future using the `http://localhost:8080/weather?city=Raleigh&daysInFuture=1` URL. -You should get an answer two seconds later since two of the requested days were already loaded in the cache. - -You can also try calling the same URL with a different city and see the cache in action again. -The first call will take six seconds and the following ones will be answered immediately. - -Congratulations! You just added application data caching to your Quarkus application with a single line of code! - -== Supported features - -Quarkus provides compatibility with the following Spring Cache annotations: - -* `@Cacheable` -* `@CachePut` -* `@CacheEvict` - -Note that in this first version of the Spring Cache annotations extension, not all features of these annotations are supported -(with proper errors being logged when trying to use an unsupported feature). -However, additional features are planned for future releases. - -== More Spring guides - -Quarkus has more Spring compatibility features.
See the following guides for more details: - -* xref:spring-di.adoc[Quarkus - Extension for Spring DI] -* xref:spring-web.adoc[Quarkus - Extension for Spring Web] -* xref:spring-security.adoc[Quarkus - Extension for Spring Security] -* xref:spring-data-jpa.adoc[Quarkus - Extension for Spring Data JPA] -* xref:spring-data-rest.adoc[Quarkus - Extension for Spring Data REST] -* xref:spring-cloud-config-client.adoc[Quarkus - Reading properties from Spring Cloud Config Server] -* xref:spring-boot-properties.adoc[Quarkus - Extension for Spring Boot properties] -* xref:spring-scheduled.adoc[Quarkus - Extension for Spring Scheduled] diff --git a/_versions/2.7/guides/spring-cloud-config-client.adoc b/_versions/2.7/guides/spring-cloud-config-client.adoc deleted file mode 100644 index 211054d44ed..00000000000 --- a/_versions/2.7/guides/spring-cloud-config-client.adoc +++ /dev/null @@ -1,155 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Reading properties from Spring Cloud Config Server - -include::./attributes.adoc[] - -This guide explains how your Quarkus application can read configuration properties at runtime from the https://cloud.spring.io/spring-cloud-config[Spring Cloud Config Server]. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. - -== Stand up a Config Server - -To stand up the Config Server required for this guide, please follow the instructions outlined https://github.com/spring-guides/gs-centralized-configuration#stand-up-a-config-server[here]. -The end result of that process is a running Config Server that will provide the `Hello world` value for a configuration property named `message` when the application querying the server is named `a-bootiful-client`. 
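Behind that HTTP contract, the server returns an ordered list of property sources for the requested application and profile, and the client resolves each key from the first source that contains it. The sketch below reduces that precedence rule to plain maps; the source contents and values are assumptions modelled on this guide, not the actual client implementation:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class PropertySourcePrecedence {

    // Resolves a key against ordered property sources, highest precedence first.
    static Optional<String> resolve(List<Map<String, String>> sources, String key) {
        for (Map<String, String> source : sources) {
            if (source.containsKey(key)) {
                return Optional.of(source.get(key));
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // e.g. a profile-specific source followed by the application defaults.
        List<Map<String, String>> sources = List.of(
                Map.of("message", "Hello world"),
                Map.of("message", "hello default"));
        System.out.println(resolve(sources, "message").orElse("n/a")); // Hello world
    }
}
```

The same first-match-wins rule is why a value served by the Config Server can override a default declared locally in `application.properties`.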
- -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: spring-cloud-config-quickstart -:create-app-extensions: spring-cloud-config-client -include::includes/devtools/create-app.adoc[] - -This command generates a project which imports the `spring-cloud-config-client` extension. - -If you already have your Quarkus project configured, you can add the `spring-cloud-config-client` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: spring-cloud-config-client -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-spring-cloud-config-client</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-spring-cloud-config-client") ----- - -== GreetingController - -First, create a simple `GreetingResource` JAX-RS resource in the -`src/main/java/org/acme/spring/cloud/config/client/GreetingResource.java` file that looks like: - -[source,java] ----- -package org.acme.spring.cloud.config.client; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/hello") -public class GreetingResource { - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return "hello"; - } -} ----- - -As we want to use configuration properties obtained from the Config Server, we will update the `GreetingResource` to inject the `message` property.
The updated code will look like this: - -[source,java] ----- -package org.acme.spring.cloud.config.client; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import org.eclipse.microprofile.config.inject.ConfigProperty; - -@Path("/hello") -public class GreetingResource { - - @ConfigProperty(name = "message", defaultValue="hello default") - String message; - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return message; - } -} ----- - -== Configuring the application - -Quarkus provides various configuration knobs under the `quarkus.spring-cloud-config` root. For the purposes of this guide, our Quarkus application is going to be configured in `application.properties` as follows: - -[source,properties] ----- -# use the same name as the application name that was configured when standing up the Config Server -quarkus.application.name=a-bootiful-client -# enable retrieval of configuration from the Config Server - this is off by default -quarkus.spring-cloud-config.enabled=true -# configure the URL where the Config Server listens to HTTP requests - this could have been left out since http://localhost:8888 is the default -quarkus.spring-cloud-config.url=http://localhost:8888 ----- - -== Package and run the application - -Run the application with: - -include::includes/devtools/dev.adoc[] - -Open your browser to http://localhost:8080/hello. - -The result should be `Hello world`, as it is the value obtained from the Spring Cloud Config server. - -== Run the application as a native executable - -You can, of course, create a native image using the instructions of the xref:building-native-image.adoc[Building a native executable guide]. - -== More Spring guides - -Quarkus has more Spring compatibility features.
See the following guides for more details: - -* xref:spring-di.adoc[Quarkus - Extension for Spring DI] -* xref:spring-web.adoc[Quarkus - Extension for Spring Web] -* xref:spring-data-jpa.adoc[Quarkus - Extension for Spring Data JPA] -* xref:spring-data-rest.adoc[Quarkus - Extension for Spring Data REST] -* xref:spring-security.adoc[Quarkus - Extension for Spring Security] -* xref:spring-boot-properties.adoc[Quarkus - Extension for Spring Boot properties] -* xref:spring-cache.adoc[Quarkus - Extension for Spring Cache] -* xref:spring-scheduled.adoc[Quarkus - Extension for Spring Scheduled] - -[[spring-cloud-config-client-configuration-reference]] -== Spring Cloud Config Client Reference - -include::{generated-dir}/config/quarkus-spring-cloud-config-client.adoc[leveloffset=+1, opts=optional] - diff --git a/_versions/2.7/guides/spring-data-jpa.adoc b/_versions/2.7/guides/spring-data-jpa.adoc deleted file mode 100644 index e87c78448ac..00000000000 --- a/_versions/2.7/guides/spring-data-jpa.adoc +++ /dev/null @@ -1,625 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Extension for Spring Data API - -include::./attributes.adoc[] - -While users are encouraged to use Hibernate ORM with Panache for Relational Database access, Quarkus provides a compatibility layer for -Spring Data JPA repositories in the form of the `spring-data-jpa` extension. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `spring-data-jpa-quickstart` {quickstarts-tree-url}/spring-data-jpa-quickstart[directory]. 
- -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: spring-data-jpa-quickstart -:create-app-extensions: resteasy,spring-data-jpa,resteasy-jackson,quarkus-jdbc-postgresql -include::includes/devtools/create-app.adoc[] - -This command generates a Maven project with a REST endpoint and imports the `spring-data-jpa` extension. - -If you already have your Quarkus project configured, you can add the `spring-data-jpa` extension -to your project by running the following command in your project base directory: - -[source,bash] ----- -./mvnw quarkus:add-extension -Dextensions="spring-data-jpa" ----- - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-spring-data-jpa</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-spring-data-jpa") ----- - -== Define the Entity - -Throughout the course of this guide, the following JPA Entity will be used: - -[source,java] ----- -package org.acme.spring.data.jpa; - -import javax.persistence.Entity; -import javax.persistence.GeneratedValue; -import javax.persistence.Id; - -@Entity -public class Fruit { - - @Id - @GeneratedValue - private Long id; - - private String name; - - private String color; - - - public Fruit() { - } - - public Fruit(String name, String color) { - this.name = name; - this.color = color; - } - - public Long getId() { - return id; - } - - public void setId(Long id) { - this.id = id; - } - - public String getName() { - return name; - } - - public void setName(String name) { - this.name = name; - } - - public String getColor() { - return color; - } - - public void setColor(String color) { - this.color = color; - } -} ----- - - -== Configure database access properties - -Add the following properties to
`application.properties` to configure access to a local PostgreSQL instance. - -[source,properties] ----- -quarkus.datasource.db-kind=postgresql -quarkus.datasource.username=quarkus_test -quarkus.datasource.password=quarkus_test -quarkus.datasource.jdbc.url=jdbc:postgresql:quarkus_test -quarkus.datasource.jdbc.max-size=8 -quarkus.datasource.jdbc.min-size=2 -quarkus.hibernate-orm.database.generation=drop-and-create ----- - -This configuration assumes that PostgreSQL will be running locally. - -A very easy way to accomplish that is by using the following Docker command: - -[source,bash] ----- -docker run -it --rm=true --name quarkus_test -e POSTGRES_USER=quarkus_test -e POSTGRES_PASSWORD=quarkus_test -e POSTGRES_DB=quarkus_test -p 5432:5432 postgres:14.1 ----- - -If you plan on using a different setup, please change your `application.properties` accordingly. - -== Prepare the data - -To make it easier to showcase some capabilities of Spring Data JPA on Quarkus, some test data should be inserted into the database -by adding the following content to a new file named `src/main/resources/import.sql`: - -[source,sql] ----- -INSERT INTO fruit(id, name, color) VALUES (1, 'Cherry', 'Red'); -INSERT INTO fruit(id, name, color) VALUES (2, 'Apple', 'Red'); -INSERT INTO fruit(id, name, color) VALUES (3, 'Banana', 'Yellow'); -INSERT INTO fruit(id, name, color) VALUES (4, 'Avocado', 'Green'); -INSERT INTO fruit(id, name, color) VALUES (5, 'Strawberry', 'Red'); ----- - -Hibernate ORM will execute these queries on application startup. - -== Define the repository - -It is now time to define the repository that will be used to access `Fruit`. 
-In typical Spring Data fashion, create a repository like so: - -[source,java] ----- -package org.acme.spring.data.jpa; - -import org.springframework.data.repository.CrudRepository; - -import java.util.List; - -public interface FruitRepository extends CrudRepository<Fruit, Long> { - - List<Fruit> findByColor(String color); -} ----- - -The `FruitRepository` above extends Spring Data's `org.springframework.data.repository.CrudRepository` which means that all of the latter's methods are -available to `FruitRepository`. -Additionally, `findByColor` is defined, whose purpose is to return all `Fruit` entities that match the specified color. - -== Update the JAX-RS resource - -With the repository in place, the next order of business is to create the JAX-RS resource that will use the `FruitRepository`. -Create `FruitResource` with the following content: - -[source,java] ----- -package org.acme.spring.data.jpa; - -import javax.ws.rs.DELETE; -import javax.ws.rs.GET; -import javax.ws.rs.POST; -import javax.ws.rs.PUT; -import javax.ws.rs.Path; - -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -import java.util.List; -import java.util.Optional; - -@Path("/fruits") -public class FruitResource { - - private final FruitRepository fruitRepository; - - public FruitResource(FruitRepository fruitRepository) { - this.fruitRepository = fruitRepository; - } - - @GET - public Iterable<Fruit> findAll() { - return fruitRepository.findAll(); - } - - - @DELETE - @Path("{id}") - public void delete(@PathParam long id) { - fruitRepository.deleteById(id); - } - - @POST - @Path("/name/{name}/color/{color}") - public Fruit create(@PathParam String name, @PathParam String color) { - return fruitRepository.save(new Fruit(name, color)); - } - - @PUT - @Path("/id/{id}/color/{color}") - public Fruit changeColor(@PathParam Long id, @PathParam String color) { - Optional<Fruit> optional = fruitRepository.findById(id); - if (optional.isPresent()) { - Fruit fruit = optional.get(); - fruit.setColor(color); - return fruitRepository.save(fruit); - } - - throw new IllegalArgumentException("No Fruit with id " + id + " exists"); - } - - @GET - @Path("/color/{color}") - public List<Fruit> findByColor(@PathParam String color) { - return fruitRepository.findByColor(color); - } -} - ----- - -`FruitResource` now provides a few REST endpoints that can be used to perform CRUD operations on `Fruit`. - -=== Note on Spring Web - -The JAX-RS resource can also be substituted with a Spring Web controller as Quarkus supports REST endpoint definition using Spring controllers. -See the xref:spring-web.adoc[Spring Web guide] for more details. - -== Update the test - -To test the capabilities of `FruitRepository`, proceed to update the content of `FruitResourceTest` to: - -[source,java] ----- -package org.acme.spring.data.jpa; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.containsString; -import static org.hamcrest.CoreMatchers.is; -import static org.hamcrest.CoreMatchers.notNullValue; -import static org.hamcrest.core.IsNot.not; - -@QuarkusTest -class FruitResourceTest { - - @Test - void testListAllFruits() { - //List all, should contain the fruits the database was seeded with: - given() - .when().get("/fruits") - .then() - .statusCode(200) - .body( - containsString("Cherry"), - containsString("Apple"), - containsString("Banana") - ); - - //Delete the Cherry: - given() - .when().delete("/fruits/1") - .then() - .statusCode(204) - ; - - //List all, cherry should be missing now: - given() - .when().get("/fruits") - .then() - .statusCode(200) - .body( - not(containsString("Cherry")), - containsString("Apple"), - containsString("Banana") - ); - - //Create a new Fruit - given() - .when().post("/fruits/name/Orange/color/Orange") - .then() - .statusCode(200) - .body(containsString("Orange")) - .body("id", notNullValue()) - .extract().body().jsonPath().getString("id"); - - //List all, Orange
should be present now: - given() - .when().get("/fruits") - .then() - .statusCode(200) - .body( - not(containsString("Cherry")), - containsString("Apple"), - containsString("Orange") - ); - } - - @Test - void testFindByColor() { - //Find by color that no fruit has - given() - .when().get("/fruits/color/Black") - .then() - .statusCode(200) - .body("size()", is(0)); - - //Find by color that multiple fruits have - given() - .when().get("/fruits/color/Red") - .then() - .statusCode(200) - .body( - containsString("Apple"), - containsString("Strawberry") - ); - - //Find by color that matches - given() - .when().get("/fruits/color/Green") - .then() - .statusCode(200) - .body("size()", is(1)) - .body(containsString("Avocado")); - - //Update color of Avocado - given() - .when().put("/fruits/id/4/color/Black") - .then() - .statusCode(200) - .body(containsString("Black")); - - //Find by color that Avocado now has - given() - .when().get("/fruits/color/Black") - .then() - .statusCode(200) - .body("size()", is(1)) - .body( - containsString("Black"), - containsString("Avocado") - ); - } - -} ----- - -The test can be easily run by issuing: - -include::includes/devtools/test.adoc[] - -== Package and run the application - -Quarkus dev mode works with the defined repositories just like with any other Quarkus extension, greatly enhancing your productivity during the dev cycle. -The application can be started in dev mode as usual using: - -include::includes/devtools/dev.adoc[] - -== Run the application as a native binary - -You can of course create a native executable following the instructions of the xref:building-native-image.adoc[this guide]. - -== Supported Spring Data JPA functionalities - -Quarkus currently supports a subset of Spring Data JPA's features, namely the most useful and most commonly used features. - -An important part of this support is that all repository generation is done at build time thus ensuring that all supported features work correctly in native mode. 
-Moreover, developers know at build time whether or not their repository method names can be converted to proper JPQL queries. -This also means that if a method name indicates that a field should be used that is not part of the Entity, developers will get -the relevant error at build time. - -=== What is supported - -The following sections describe the most important supported features of Spring Data JPA. - -==== Automatic repository implementation generation - -Interfaces that extend any of the following Spring Data repositories are automatically implemented: - -* `org.springframework.data.repository.Repository` -* `org.springframework.data.repository.CrudRepository` -* `org.springframework.data.repository.PagingAndSortingRepository` -* `org.springframework.data.jpa.repository.JpaRepository` - -The generated repositories are also registered as beans so they can be injected into any other bean. -Furthermore, the methods that update the database are automatically annotated with `@Transactional`. - -==== Fine tuning of repository definition - -This allows user defined repository interfaces to cherry-pick methods from any of the supported Spring Data repository interfaces without having to extend those interfaces. -This is particularly useful when, for example, a repository needs to use some methods from `CrudRepository` but it's undesirable to expose the full list of methods of said interface. - -Assume, for example, a `PersonRepository` that shouldn't extend `CrudRepository` but would like to use the `save` and `findById` methods defined in that interface.
-In such a case, `PersonRepository` would look like so: - -[source,java] ----- -package org.acme.spring.data.jpa; - -import java.util.Optional; - -import org.springframework.data.repository.Repository; - -public interface PersonRepository extends Repository<Person, Long> { - - Person save(Person entity); - - Optional<Person> findById(Long id); -} ----- - -==== Customizing individual repositories using repository fragments - -Repositories can be enriched with additional functionality or override the default implementation of methods of the supported Spring Data repositories. -This is best shown with an example. - -A repository fragment is defined as follows: - -[source,java] ----- -public interface PersonFragment { - - // custom findAll - List<Person> findAll(); - - void makeNameUpperCase(Person person); -} ----- - -The implementation of that fragment looks like this: - -[source,java] ----- -import java.util.List; - -import io.quarkus.hibernate.orm.panache.runtime.JpaOperations; - -public class PersonFragmentImpl implements PersonFragment { - - @Override - public List<Person> findAll() { - // do something here - return (List<Person>) JpaOperations.findAll(Person.class).list(); - } - - @Override - public void makeNameUpperCase(Person person) { - person.setName(person.getName().toUpperCase()); - } -} ----- - -Then the actual `PersonRepository` interface to be used would look like: - -[source,java] ----- -public interface PersonRepository extends JpaRepository<Person, Long>, PersonFragment { - -} ----- - -==== Derived query methods - -Methods of repository interfaces that follow the Spring Data conventions can be automatically implemented (unless they fall into one of the unsupported cases listed later on).
-This means that methods like the following will all work: - -[source,java] ----- -public interface PersonRepository extends CrudRepository<Person, Long> { - - List<Person> findByName(String name); - - Person findByNameBySsn(String ssn); - - Optional<Person> findByNameBySsnIgnoreCase(String ssn); - - boolean existsBookByYearOfBirthBetween(Integer start, Integer end); - - List<Person> findByName(String name, Sort sort); - - Page<Person> findByNameOrderByJoined(String name, Pageable pageable); - - List<Person> findByNameOrderByAge(String name); - - List<Person> findByNameOrderByAgeDesc(String name, Pageable pageable); - - List<Person> findByAgeBetweenAndNameIsNotNull(int lowerAgeBound, int upperAgeBound); - - List<Person> findByAgeGreaterThanEqualOrderByAgeAsc(int age); - - List<Person> queryByJoinedIsAfter(Date date); - - Collection<Person> readByActiveTrueOrderByAgeDesc(); - - Long countByActiveNot(boolean active); - - List<Person> findTop3ByActive(boolean active, Sort sort); - - Stream<Person> findPersonByNameAndSurnameAllIgnoreCase(String name, String surname); -} ----- - -==== User defined queries - -User supplied queries are contained in the `@Query` annotation.
For example, things like the following all work: - -[source,java] ----- -public interface MovieRepository extends CrudRepository<Movie, Long> { - - Movie findFirstByOrderByDurationDesc(); - - @Query("select m from Movie m where m.rating = ?1") - Iterator<Movie> findByRating(String rating); - - @Query("from Movie where title = ?1") - Movie findByTitle(String title); - - @Query("select m from Movie m where m.duration > :duration and m.rating = :rating") - List<Movie> withRatingAndDurationLargerThan(@Param("duration") int duration, @Param("rating") String rating); - - @Query("from Movie where title like concat('%', ?1, '%')") - List<Movie> someFieldsWithTitleLike(String title, Sort sort); - - @Modifying - @Query("delete from Movie where rating = :rating") - void deleteByRating(@Param("rating") String rating); - - @Modifying - @Query("delete from Movie where title like concat('%', ?1, '%')") - Long deleteByTitleLike(String title); - - @Modifying - @Query("update Movie m set m.rating = :newName where m.rating = :oldName") - int changeRatingToNewName(@Param("newName") String newName, @Param("oldName") String oldName); - - @Modifying - @Query("update Movie set rating = null where title = ?1") - void setRatingToNullForTitle(String title); - - @Query("from Movie order by length(title)") - Slice<Movie> orderByTitleLength(Pageable pageable); -} ----- -All methods that are annotated with `@Modifying` will automatically be annotated with `@Transactional`. - -TIP: In Quarkus, `@Param` is optional when parameter names have been compiled to bytecode (which is active by default in generated projects). - -==== Naming Strategies - -Hibernate ORM maps property names using a physical naming strategy and an implicit naming strategy.
If you wish to use Spring Boot's default naming strategies, the following properties need to be set: - -[source, properties] ----- -quarkus.hibernate-orm.physical-naming-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy -quarkus.hibernate-orm.implicit-naming-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy ----- - -==== More examples - -An extensive list of examples can be seen in the https://github.com/quarkusio/quarkus/tree/main/integration-tests/spring-data-jpa[integration tests] directory which is located inside the Quarkus source code. - -=== What is currently unsupported - -* Methods of the `org.springframework.data.repository.query.QueryByExampleExecutor` interface - if any of these are invoked, a Runtime exception will be thrown. -* QueryDSL support. No attempt will be made to generate implementations of any of the QueryDSL related repositories. -* Customizing the base repository for all repository interfaces in the code base. -** In Spring Data JPA this is done by registering a class that extends `org.springframework.data.jpa.repository.support.SimpleJpaRepository` however in Quarkus this class -is not used at all (since all the necessary plumbing is done at build time). Similar support might be added to Quarkus in the future. -* Using `java.util.concurrent.Future` and classes that extend it as return types of repository methods. -* Native and named queries when using `@Query` -* https://github.com/spring-projects/spring-data-jpa/blob/main/src/main/asciidoc/jpa.adoc#entity-state-detection-strategies[Entity State-detection Strategies] -via `EntityInformation`. - -The Quarkus team is exploring various alternatives to bridging the gap between the JPA and Reactive worlds. - -== Important Technical Note - -Please note that the Spring support in Quarkus does not start a Spring Application Context nor are any Spring infrastructure classes run. 
-Spring classes and annotations are only used for reading metadata and / or are used as user code method return types or parameter types. - -== More Spring guides - -Quarkus has more Spring compatibility features. See the following guides for more details: - -* xref:spring-di.adoc[Quarkus - Extension for Spring DI] -* xref:spring-web.adoc[Quarkus - Extension for Spring Web] -* xref:spring-security.adoc[Quarkus - Extension for Spring Security] -* xref:spring-cloud-config-client.adoc[Quarkus - Reading properties from Spring Cloud Config Server] -* xref:spring-boot-properties.adoc[Quarkus - Extension for Spring Boot properties] -* xref:spring-cache.adoc[Quarkus - Extension for Spring Cache] -* xref:spring-scheduled.adoc[Quarkus - Extension for Spring Scheduled] -* xref:spring-data-rest.adoc[Quarkus - Extension for Spring Data REST] diff --git a/_versions/2.7/guides/spring-data-rest.adoc b/_versions/2.7/guides/spring-data-rest.adoc deleted file mode 100644 index 5270aba63c5..00000000000 --- a/_versions/2.7/guides/spring-data-rest.adoc +++ /dev/null @@ -1,444 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Extension for Spring Data REST - -include::./attributes.adoc[] -:extension-status: preview - -While users are encouraged to use REST Data with Panache for the REST data access endpoints generation, -Quarkus provides a compatibility layer for Spring Data REST in the form of the `spring-data-rest` extension. - - -include::./status-include.adoc[] - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. 
- -The solution is located in the `spring-data-rest-quickstart` {quickstarts-tree-url}/spring-data-rest-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: spring-data-rest-quickstart -:create-app-extensions: spring-data-rest,resteasy-jackson,quarkus-jdbc-postgresql -include::includes/devtools/create-app.adoc[] - -This command generates a project with the `spring-data-rest` extension. - -If you already have your Quarkus project configured, you can add the `spring-data-rest` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: spring-data-rest -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-spring-data-rest</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-spring-data-rest") ----- - -For the tests you will also need REST Assured. Add it to the build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.rest-assured</groupId> -    <artifactId>rest-assured</artifactId> -    <scope>test</scope> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -testImplementation("io.rest-assured:rest-assured") ----- - -Note: both `resteasy-jackson` and `resteasy-jsonb` are supported and can be interchanged.
- -== Define the Entity - -Throughout the course of this guide, the following JPA Entity will be used: - -[source,java] ----- -package org.acme.spring.data.rest; - -import javax.persistence.Entity; -import javax.persistence.GeneratedValue; -import javax.persistence.Id; - -@Entity -public class Fruit { - - @Id - @GeneratedValue - private Long id; - - private String name; - - private String color; - - - public Fruit() { - } - - public Fruit(String name, String color) { - this.name = name; - this.color = color; - } - - public Long getId() { - return id; - } - - public void setId(Long id) { - this.id = id; - } - - public String getName() { - return name; - } - - public void setName(String name) { - this.name = name; - } - - public String getColor() { - return color; - } - - public void setColor(String color) { - this.color = color; - } -} ----- - - -== Configure database access properties - -Add the following properties to `application.properties` to configure access to a local PostgreSQL instance. - -[source,properties] ----- -quarkus.datasource.db-kind=postgresql -quarkus.datasource.username=quarkus_test -quarkus.datasource.password=quarkus_test -quarkus.datasource.jdbc.url=jdbc:postgresql:quarkus_test -quarkus.datasource.jdbc.max-size=8 -quarkus.hibernate-orm.database.generation=drop-and-create ----- - -This configuration assumes that PostgreSQL will be running locally. - -A very easy way to accomplish that is by using the following Docker command: - -[source,bash] ----- -docker run -it --rm=true --name quarkus_test -e POSTGRES_USER=quarkus_test -e POSTGRES_PASSWORD=quarkus_test -e POSTGRES_DB=quarkus_test -p 5432:5432 postgres:14.1 ----- - -If you plan on using a different setup, please change your `application.properties` accordingly. 
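For example, if your PostgreSQL instance runs on a different host or port, the JDBC URL takes the standard `jdbc:postgresql://host:port/database` form. The host, port and database name below are illustrative placeholders, not values from this guide:

[source,properties]
----
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=quarkus_test
quarkus.datasource.password=quarkus_test
# hypothetical remote instance; substitute your own host, port and database name
quarkus.datasource.jdbc.url=jdbc:postgresql://db.example.internal:5433/quarkus_test
----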
- -== Prepare the data - -To make it easier to showcase some capabilities of Spring Data REST on Quarkus, some test data should be inserted into the database -by adding the following content to a new file named `src/main/resources/import.sql`: - -[source,sql] ----- -INSERT INTO fruit(id, name, color) VALUES (1, 'Cherry', 'Red'); -INSERT INTO fruit(id, name, color) VALUES (2, 'Apple', 'Red'); -INSERT INTO fruit(id, name, color) VALUES (3, 'Banana', 'Yellow'); -INSERT INTO fruit(id, name, color) VALUES (4, 'Avocado', 'Green'); -INSERT INTO fruit(id, name, color) VALUES (5, 'Strawberry', 'Red'); ----- - -Hibernate ORM will execute these queries on application startup. - -== Define the repository - -It is now time to define the repository that will be used to access `Fruit`. -In a typical Spring Data fashion, create a repository like so: - -[source,java] ----- -package org.acme.spring.data.rest; - -import org.springframework.data.repository.CrudRepository; - -public interface FruitsRepository extends CrudRepository<Fruit, Long> { -} ----- - -The `FruitsRepository` above extends Spring Data's `org.springframework.data.repository.CrudRepository` which means that all of the latter's methods are -available to `FruitsRepository`. - -The `spring-data-jpa` extension will generate an implementation for this repository. Then the `spring-data-rest` extension will generate a REST CRUD resource for it.
- -== Update the test - -To test the capabilities of `FruitsRepository` proceed to update the content of `FruitsRepositoryTest` to: - -[source,java] ----- -package org.acme.spring.data.rest; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.containsString; -import static org.hamcrest.CoreMatchers.notNullValue; -import static org.hamcrest.core.IsNot.not; - -@QuarkusTest -class FruitsRepositoryTest { - - @Test - void testListAllFruits() { - //List all, should have all 3 fruits the database has initially: - given() - .accept("application/json") - .when().get("/fruits") - .then() - .statusCode(200) - .body( - containsString("Cherry"), - containsString("Apple"), - containsString("Banana") - ); - - //Delete the Cherry: - given() - .when().delete("/fruits/1") - .then() - .statusCode(204); - - //List all, cherry should be missing now: - given() - .accept("application/json") - .when().get("/fruits") - .then() - .statusCode(200) - .body( - not(containsString("Cherry")), - containsString("Apple"), - containsString("Banana") - ); - - //Create a new Fruit - given() - .contentType("application/json") - .accept("application/json") - .body("{\"name\": \"Orange\", \"color\": \"Orange\"}") - .when().post("/fruits") - .then() - .statusCode(201) - .body(containsString("Orange")) - .body("id", notNullValue()) - .extract().body().jsonPath().getString("id"); - - //List all, Orange should be present now: - given() - .accept("application/json") - .when().get("/fruits") - .then() - .statusCode(200) - .body( - not(containsString("Cherry")), - containsString("Apple"), - containsString("Orange") - ); - } -} - ----- - -The test can be easily run by issuing: - -include::includes/devtools/test.adoc[] - -== Package and run the application - -Quarkus dev mode works with the defined repositories just like with any other Quarkus extension, greatly enhancing your productivity 
during the dev cycle. -The application can be started in dev mode as usual using: - -include::includes/devtools/dev.adoc[] - -== Run the application as a native binary - -You can of course create a native executable following the instructions of the xref:building-native-image.adoc[Building native executables] guide. - -== Supported Spring Data REST functionalities - -Quarkus currently supports a subset of Spring Data REST features, namely the most useful and most commonly used features. - -=== What is supported - -The following sections describe the most important supported features of Spring Data REST. - -==== Automatic REST endpoint generation - -Interfaces that extend any of the following Spring Data repositories get automatically generated REST endpoints: - -* `org.springframework.data.repository.CrudRepository` -* `org.springframework.data.repository.PagingAndSortingRepository` -* `org.springframework.data.jpa.repository.JpaRepository` - -Endpoints generated from the above repositories expose five common REST operations: - -* `GET /fruits` - lists all entities or returns a page if `PagingAndSortingRepository` or `JpaRepository` is used. -* `GET /fruits/:id` - returns an entity by ID. -* `POST /fruits` - creates a new entity. -* `PUT /fruits/:id` - updates an existing entity or creates a new one with a specified ID (if allowed by the entity definition). -* `DELETE /fruits/:id` - deletes an entity by ID. - -There are two supported data types: `application/json` and `application/hal+json`. -The former is used by default, but it is highly recommended to specify which one you prefer with an `Accept` header. - -==== Exposing many entities - -If a database contains many entities, it might not be a great idea to return them all at once. -`PagingAndSortingRepository` allows the `spring-data-rest` extension to access data in chunks. 
- -Replace the `CrudRepository` with `PagingAndSortingRepository` in the `FruitsRepository`: - -[source,java] ----- -package org.acme.spring.data.rest; - -import org.springframework.data.repository.PagingAndSortingRepository; - -public interface FruitsRepository extends PagingAndSortingRepository<Fruit, Long> { -} ----- - -Now the `GET /fruits` will accept three new query parameters: `sort`, `page` and `size`. - -|=== -| Query parameter | Description | Default value | Example values - -| `sort` -| Sorts the entities that are returned by the list operation -| "" -| `?sort=name` (ascending name), `?sort=name,-color` (ascending name and descending color) - -| `page` -| Zero indexed page number. An invalid value is interpreted as 0. -| 0 -| 0, 11, 100 - -| `size` -| Page size. The minimal accepted value is 1. Any lower value is interpreted as 1. -| 20 -| 1, 11, 100 -|=== - -For paged responses, `spring-data-rest` also returns a set of link headers that can be used to access other pages: first, previous, next and last. - -==== Fine tuning endpoints generation - -This allows users to specify which methods should be exposed and what path should be used to access them. -Spring Data REST provides two annotations that can be used: `@RepositoryRestResource` and `@RestResource`. -The `spring-data-rest` extension supports the `exported`, `path` and `collectionResourceRel` attributes of these annotations. - -Assume, for example, that the fruits repository should be accessible by a `/my-fruits` path and only allow the `GET` operation.
-In such a case, `FruitsRepository` would look like so: - -[source,java] ----- -package org.acme.spring.data.rest; - -import java.util.Optional; - -import org.springframework.data.repository.CrudRepository; -import org.springframework.data.rest.core.annotation.RepositoryRestResource; -import org.springframework.data.rest.core.annotation.RestResource; - -@RepositoryRestResource(exported = false, path = "/my-fruits") -public interface FruitsRepository extends CrudRepository<Fruit, Long> { - - @RestResource(exported = true) - Optional<Fruit> findById(Long id); - - @RestResource(exported = true) - Iterable<Fruit> findAll(); -} ----- - -`spring-data-rest` uses only a subset of the repository methods for data access. -It is important to annotate the correct method in order to customize its REST endpoint: - -|=== -|REST operation |CrudRepository |PagingAndSortingRepository and JpaRepository - -|Get by ID -|`Optional<T> findById(ID id)` -|`Optional<T> findById(ID id)` - -|List -|`Iterable<T> findAll()` -|`Page<T> findAll(Pageable pageable)` - -|Create -|`<S extends T> S save(S entity)` -|`<S extends T> S save(S entity)` - -|Update -|`<S extends T> S save(S entity)` -|`<S extends T> S save(S entity)` - -|Delete -|`void deleteById(ID id)` -|`void deleteById(ID id)` -|=== - -=== What is currently unsupported - -* Only the repository methods listed above are supported. No other standard or custom methods are supported. -* Only the `exported`, `path` and `collectionResourceRel` annotation properties are supported. - -== Important Technical Note - -Please note that the Spring support in Quarkus does not start a Spring Application Context nor are any Spring infrastructure classes run. -Spring classes and annotations are only used for reading metadata and / or are used as user code method return types or parameter types. - -== More Spring guides - -Quarkus has more Spring compatibility features.
See the following guides for more details: - -* xref:spring-data-jpa.adoc[Quarkus - Extension for Spring Data JPA] -* xref:spring-di.adoc[Quarkus - Extension for Spring DI] -* xref:spring-web.adoc[Quarkus - Extension for Spring Web] -* xref:spring-security.adoc[Quarkus - Extension for Spring Security] -* xref:spring-cloud-config-client.adoc[Quarkus - Reading properties from Spring Cloud Config Server] -* xref:spring-boot-properties.adoc[Quarkus - Extension for Spring Boot properties] -* xref:spring-cache.adoc[Quarkus - Extension for Spring Cache] -* xref:spring-scheduled.adoc[Quarkus - Extension for Spring Scheduled] diff --git a/_versions/2.7/guides/spring-di.adoc b/_versions/2.7/guides/spring-di.adoc deleted file mode 100644 index 2cee0cf3c16..00000000000 --- a/_versions/2.7/guides/spring-di.adoc +++ /dev/null @@ -1,339 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Quarkus Extension for Spring DI API - -include::./attributes.adoc[] - -While users are encouraged to use CDI annotations for injection, Quarkus provides a compatibility layer for Spring dependency injection in the form of the `spring-di` extension. - -This guide explains how a Quarkus application can leverage the well known Dependency Injection annotations included in the Spring Framework. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `spring-di-quickstart` {quickstarts-tree-url}/spring-di-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. 
Create a new project with the following command: - -:create-app-artifact-id: spring-di-quickstart -:create-app-extensions: resteasy,spring-di -include::includes/devtools/create-app.adoc[] - -This command generates a project which imports the `spring-di` extension. - -If you already have your Quarkus project configured, you can add the `spring-di` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: spring-di -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-spring-di</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-spring-di") ----- - -== Add beans using Spring annotations - -Let's proceed to create some beans using various Spring annotations. - -First we will create a `StringFunction` interface that some of our beans will implement and which will be injected into another bean later on. -Create a `src/main/java/org/acme/spring/di/StringFunction.java` file and set the following content: - -[source,java] ----- -package org.acme.spring.di; - -import java.util.function.Function; - -public interface StringFunction extends Function<String, String> { - -} ----- - -With the interface in place, we will add an `AppConfiguration` class which will use Spring's Java Config style for defining a bean. -It will be used to create a `StringFunction` bean that will capitalize the text passed as parameter.
-Create a `src/main/java/org/acme/spring/di/AppConfiguration.java` file with the following content: - -[source,java] ----- -package org.acme.spring.di; - -import org.springframework.context.annotation.Bean; -import org.springframework.context.annotation.Configuration; - -@Configuration -public class AppConfiguration { - - @Bean(name = "capitalizeFunction") - public StringFunction capitalizer() { - return String::toUpperCase; - } -} ----- -As a Spring developer, you might be tempted to add the `@ComponentScan` annotation in order to define specific packages to scan for additional beans. Do note that `@ComponentScan` is entirely unnecessary since Quarkus performs xref:cdi-reference.adoc#bean_discovery[bean discovery] only in `annotated` mode with no visibility boundaries. Moreover, note that the bean discovery in Quarkus happens at build time. -In the same vein, Quarkus does not support the Spring `@Import` annotation. - -Now we define another bean that will implement `StringFunction` using Spring's stereotype annotation style. -This bean will effectively be a no-op bean that simply returns the input as is. -Create a `src/main/java/org/acme/spring/di/NoOpSingleStringFunction.java` file and set the following content: - -[source,java] ----- -package org.acme.spring.di; - -import org.springframework.stereotype.Component; - -@Component("noopFunction") -public class NoOpSingleStringFunction implements StringFunction { - - @Override - public String apply(String s) { - return s; - } -} ----- - -Quarkus also provides support for injecting configuration values using Spring's `@Value` annotation. 
-To see that in action, first edit the `src/main/resources/application.properties` with the following content: - -[source,properties] ----- -# Your configuration properties -greeting.message = hello ----- - -Next create a new Spring bean in `src/main/java/org/acme/spring/di/MessageProducer.java` with the following content: - - -[source,java] ----- -package org.acme.spring.di; - -import org.springframework.beans.factory.annotation.Value; -import org.springframework.stereotype.Service; - -@Service -public class MessageProducer { - - @Value("${greeting.message}") - String message; - - public String getPrefix() { - return message; - } -} ----- - -The final bean we will create ties together all the previous beans. -Create a `src/main/java/org/acme/spring/di/GreeterBean.java` file and copy the following content: - -[source,java] ----- -package org.acme.spring.di; - -import org.springframework.beans.factory.annotation.Autowired; -import org.springframework.beans.factory.annotation.Qualifier; -import org.springframework.beans.factory.annotation.Value; -import org.springframework.stereotype.Component; - -@Component -public class GreeterBean { - - private final MessageProducer messageProducer; - - @Autowired - @Qualifier("noopFunction") - StringFunction noopStringFunction; - - @Autowired - @Qualifier("capitalizeFunction") - StringFunction capitalizerStringFunction; - - @Value("${greeting.suffix:!}") - String suffix; - - public GreeterBean(MessageProducer messageProducer) { - this.messageProducer = messageProducer; - } - - public String greet(String name) { - final String initialValue = messageProducer.getPrefix() + " " + name + suffix; - return noopStringFunction.andThen(capitalizerStringFunction).apply(initialValue); - } -} ----- - -In the code above, we see that both field injection and constructor injection are being used (note that constructor injection does not need the `@Autowired` annotation since there is a single constructor). 
-Furthermore, the `@Value` annotation on `suffix` has also a default value defined, which in this case will be used since we have not defined `greeting.suffix` in `application.properties`. - - -=== Create the JAX-RS resource - -Create the `src/main/java/org/acme/spring/di/GreeterResource.java` file with the following content: - -[source,java] ----- -package org.acme.spring.di; - -import org.springframework.beans.factory.annotation.Autowired; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/greeting") -public class GreeterResource { - - @Autowired - GreeterBean greeterBean; - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return greeterBean.greet("world"); - } -} ----- - -== Update the test - -We also need to update the functional test to reflect the changes made to the endpoint. -Edit the `src/test/java/org/acme/spring/di/GreetingResourceTest.java` file and change the content of the `testHelloEndpoint` method to: - - -[source, java] ----- -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; - -@QuarkusTest -public class GreetingResourceTest { - - @Test - public void testHelloEndpoint() { - given() - .when().get("/greeting") - .then() - .statusCode(200) - .body(is("HELLO WORLD!")); - } - -} ----- - -== Package and run the application - -Run the application with: - -include::includes/devtools/dev.adoc[] - -Open your browser to http://localhost:8080/greeting. - -The result should be: `HELLO WORLD!`. - -== Run the application as a native - -You can of course create a native image using instructions similar to xref:building-native-image.adoc[this] guide. - -== Important Technical Note - -Please note that the Spring support in Quarkus does not start a Spring Application Context nor are any Spring infrastructure classes run. 
-Spring classes and annotations are only used for reading metadata and / or are used as user code method return types or parameter types. -What that means for end users is that adding arbitrary Spring libraries will not have any effect. Moreover, Spring infrastructure -classes (such as `org.springframework.beans.factory.config.BeanPostProcessor` and `org.springframework.context.ApplicationContext`) will not be executed. -Regarding the dependency injection in particular, Quarkus uses a Dependency Injection mechanism (called ArC) based on the https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html[Contexts and Dependency Injection for Java 2.0] specification. If you want to learn more about it, we recommend reading the xref:cdi.adoc[Quarkus introduction to CDI] and the xref:cdi-reference.adoc#arc-configuration-reference[CDI reference guide]. -The various Spring Boot test features are not supported by Quarkus. For testing purposes, please check the xref:getting-started-testing.adoc[Quarkus testing guide]. - -== Conversion Table - -The following table shows how Spring DI annotations can be converted to CDI and / or MicroProfile annotations. - -|=== -|Spring |CDI / MicroProfile |Comments - -|@Autowired -|@Inject -| - -|@Qualifier -|@Named -| - -|@Value -|@ConfigProperty -|@ConfigProperty doesn't support an expression language the way @Value does, but makes the typical use cases much easier to handle - -|@Component -|@Singleton -|By default Spring stereotype annotations are singleton beans - -|@Service -|@Singleton -|By default Spring stereotype annotations are singleton beans - -|@Repository -|@Singleton -|By default Spring stereotype annotations are singleton beans - -|@Configuration -|@ApplicationScoped -|In CDI a producer bean isn't limited to the application scope, it could just as well be @Singleton or @Dependent - -|@Bean -|@Produces -| - -|@Scope -| -|Doesn't have a one-to-one mapping to a CDI annotation.
Depending on the value of @Scope, one of the @Singleton, @ApplicationScoped, @SessionScoped, @RequestScoped, @Dependent could be used - -|@ComponentScan -| -|Doesn't have a one-to-one mapping to a CDI annotation. It is not used in Quarkus because Quarkus does all classpath scanning at build time. - -|@Import -| -|Doesn't have a one-to-one mapping to a CDI annotation. -|=== - -== More Spring guides - -Quarkus has more Spring compatibility features. See the following guides for more details: - -* xref:spring-web.adoc[Quarkus - Extension for Spring Web] -* xref:spring-data-jpa.adoc[Quarkus - Extension for Spring Data JPA] -* xref:spring-data-rest.adoc[Quarkus - Extension for Spring Data REST] -* xref:spring-security.adoc[Quarkus - Extension for Spring Security] -* xref:spring-cloud-config-client.adoc[Quarkus - Reading properties from Spring Cloud Config Server] -* xref:spring-boot-properties.adoc[Quarkus - Extension for Spring Boot properties] -* xref:spring-cache.adoc[Quarkus - Extension for Spring Cache] -* xref:spring-scheduled.adoc[Quarkus - Extension for Spring Scheduled] diff --git a/_versions/2.7/guides/spring-scheduled.adoc b/_versions/2.7/guides/spring-scheduled.adoc deleted file mode 100644 index 9ba863c1027..00000000000 --- a/_versions/2.7/guides/spring-scheduled.adoc +++ /dev/null @@ -1,243 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Quarkus Extension for Spring Scheduling API - -include::./attributes.adoc[] - -While users are encouraged to use xref:scheduler.adoc#standard-scheduling[regular Quarkus scheduler], Quarkus provides a compatibility layer for Spring Scheduled in the form of the `spring-scheduled` extension. - -This guide explains how a Quarkus application can leverage the well known Spring Scheduled annotation to configure and schedule tasks. 
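Before diving into the guide, it can help to see what a fixed-rate schedule does conceptually: a task runs repeatedly at a fixed period and updates shared state. The following self-contained sketch uses only the plain JDK `ScheduledExecutorService` (no Spring or Quarkus involved; the class and method names are invented for illustration, this is not the extension's implementation):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-JDK illustration of fixed-rate scheduling: a counter is
// incremented every `periodMillis` for roughly `durationMillis`.
public class FixedRateSketch {

    public static int runForMillis(long durationMillis, long periodMillis) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Equivalent in spirit to @Scheduled(fixedRate = periodMillis)
        scheduler.scheduleAtFixedRate(counter::incrementAndGet, 0, periodMillis, TimeUnit.MILLISECONDS);
        Thread.sleep(durationMillis);
        scheduler.shutdownNow();
        return counter.get();
    }
}
```

The `spring-scheduled` extension lets you express the same intent declaratively with `@Scheduled`, as shown in the rest of this guide, instead of managing an executor by hand.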
-
-== Prerequisites
-
-include::includes/devtools/prerequisites.adoc[]
-* Some familiarity with the Spring Web extension
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `spring-scheduled-quickstart` {quickstarts-tree-url}/spring-scheduled-quickstart[directory].
-
-== Creating the Maven project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: spring-scheduler-quickstart
-:create-app-extensions: resteasy,spring-scheduled
-include::includes/devtools/create-app.adoc[]
-
-This command generates a Maven project with a REST endpoint and adds the `spring-scheduled` extension.
-
-If you already have your Quarkus project configured, you can add the `spring-scheduled` extension
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: spring-scheduled
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-spring-scheduled</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-spring-scheduled")
-----
-
-== Creating a scheduled job
-
-In the `org.acme.spring.scheduler` package, create the `CounterBean` class, with the following content:
-
-[source,java]
-----
-package org.acme.spring.scheduler;
-
-import org.springframework.scheduling.annotation.Scheduled;
-
-import java.util.concurrent.atomic.AtomicInteger;
-import javax.enterprise.context.ApplicationScoped;
-
-@ApplicationScoped // <1>
-public class CounterBean {
-
-    private AtomicInteger counter = new AtomicInteger();
-
-    public int get() { // <2>
-        return counter.get();
-    }
-
-    @Scheduled(cron="*/5 * * * * ?") // <3>
-    void cronJob() {
-        counter.incrementAndGet(); //<4>
-        System.out.println("Cron expression hardcoded");
-    }
-
-    @Scheduled(cron = "{cron.expr}") //<5>
-    void cronJobWithExpressionInConfig() {
-        counter.incrementAndGet();
-        System.out.println("Cron expression configured in application.properties");
-    }
-
-    @Scheduled(fixedRate = 1000) //<6>
-    void jobAtFixedRate() {
-        counter.incrementAndGet();
-        System.out.println("Fixed Rate expression");
-    }
-
-    @Scheduled(fixedRateString = "${fixedRate.expr}") //<7>
-    void jobAtFixedRateInConfig() {
-        counter.incrementAndGet();
-        System.out.println("Fixed Rate expression configured in application.properties");
-    }
-}
-----
-<1> Declare the bean in the _application_ scope. Spring only detects `@Scheduled` annotations in beans.
-<2> The `get()` method allows retrieving the current value.
-<3> Use the Spring `@Scheduled` annotation with a cron-like expression to instruct Quarkus to schedule this method to run. In this example we're scheduling a task to be executed every 5 seconds.
-<4> The code is pretty straightforward. Every 5 seconds, the counter is incremented.
-<5> Define a job with a cron-like expression `cron.expr` which is configurable in `application.properties`.
-<6> Define a method to be executed at a fixed interval of time. The period is expressed in milliseconds.
-<7> Define a job to be executed at a fixed interval of time `fixedRate.expr` which is configurable in `application.properties`.
-
-== Updating the application configuration file
-
-Edit the `application.properties` file and add the `cron.expr` and the `fixedRate.expr` configuration:
-[source,properties]
-----
-# The cron expression syntax used by Spring is the same as that used by the regular Quarkus scheduler.
-cron.expr=*/5 * * * * ?
-fixedRate.expr=1000
-----
-
-== Creating the resource and the test
-
-Create the `CountResource` class with the following content:
-
-[source,java]
-----
-package org.acme.spring.scheduler;
-
-import javax.inject.Inject;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-@Path("/count")
-public class CountResource {
-
-    @Inject
-    CounterBean counter; // <1>
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public String hello() {
-        return "count: " + counter.get(); // <2>
-    }
-}
-----
-<1> Inject the `CounterBean`.
-<2> Send back the current counter value.
-
-We also need to update the tests. Edit the `CountResourceTest` class to match:
-
-[source, java]
-----
-package org.acme.spring.scheduler;
-
-import static io.restassured.RestAssured.given;
-import static org.hamcrest.CoreMatchers.containsString;
-
-import org.junit.jupiter.api.Test;
-
-import io.quarkus.test.junit.QuarkusTest;
-
-@QuarkusTest
-public class CountResourceTest {
-
-    @Test
-    public void testHelloEndpoint() {
-        given()
-            .when().get("/count")
-            .then()
-            .statusCode(200)
-            .body(containsString("count")); // <1>
-    }
-
-}
-----
-<1> Ensure that the response contains `count`.
-
-== Package and run the application
-
-Run the application with:
-
-include::includes/devtools/dev.adoc[]
-
-In another terminal, run `curl localhost:8080/count` to check the counter value.
-After a few seconds, re-run `curl localhost:8080/count` to verify the counter has been incremented.
-
-Observe the console to verify that the following messages have been displayed:
-
-- `Cron expression hardcoded`
-- `Cron expression configured in application.properties`
-- `Fixed Rate expression`
-- `Fixed Rate expression configured in application.properties`
-
-These messages indicate that the executions of methods annotated with `@Scheduled` have been triggered.
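Rather than sleeping a fixed amount of time between the two `curl` calls, a test can poll until the value advances. The helper below is a hypothetical plain-Java sketch of that pattern (it is not part of the quickstart; the class and method names are invented, and dedicated libraries such as Awaitility provide the same idea with many more features):

```java
import java.util.function.IntSupplier;

// Hypothetical helper: poll a value until it increases past its starting
// point, the way a test might wait for the scheduled counter to advance.
public class AwaitIncrement {

    public static boolean awaitIncrease(IntSupplier value, long timeoutMillis) throws InterruptedException {
        int start = value.getAsInt();
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (value.getAsInt() > start) {
                return true; // the counter advanced
            }
            Thread.sleep(20); // small poll interval
        }
        return false; // timed out without observing an increment
    }
}
```

In a test, the supplier could parse the body returned by `/count`; polling with a deadline keeps the test fast when the scheduler fires early and robust when it fires late.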
-
-As usual, the application can be packaged using:
-
-include::includes/devtools/build.adoc[]
-
-And executed using `java -jar target/quarkus-app/quarkus-run.jar`.
-
-You can also generate the native executable with:
-
-include::includes/devtools/build-native.adoc[]
-
-== Using Property Expressions
-
-Quarkus supports the use of property expressions in the `application.properties` file. To externalize the configuration of your tasks, store the values in `application.properties` and reference them through the `fixedRateString` and `initialDelayString` parameters respectively.
-
-Note that this is build time configuration: the property expression is resolved at build time.
-
-== Unsupported Spring Scheduled functionalities
-
-Quarkus currently only supports a subset of the functionalities that Spring `@Scheduled` provides, with more features being planned.
-Currently, the `fixedDelay` and `fixedDelayString` parameters are not supported; in other words, `@Scheduled` methods are always executed independently.
-
-== Important Technical Note
-
-Please note that the Spring support in Quarkus does not start a Spring Application Context nor are any Spring infrastructure classes run.
-Spring classes and annotations are only used for reading metadata and/or are used as user code method return types or parameter types.
-What that means for end users is that adding arbitrary Spring libraries will not have any effect. Moreover, Spring infrastructure
-classes (such as `org.springframework.beans.factory.config.BeanPostProcessor`) will not be executed.
-
-
-== More Spring guides
-
-Quarkus has more Spring compatibility features.
See the following guides for more details: - -* xref:spring-di.adoc[Quarkus - Extension for Spring DI] -* xref:spring-web.adoc[Quarkus - Extension for Spring Web] -* xref:spring-data-jpa.adoc[Quarkus - Extension for Spring Data JPA] -* xref:spring-data-rest.adoc[Quarkus - Extension for Spring Data REST] -* xref:spring-cloud-config-client.adoc[Quarkus - Reading properties from Spring Cloud Config Server] -* xref:spring-boot-properties.adoc[Quarkus - Extension for Spring Boot properties] -* xref:spring-cache.adoc[Quarkus - Extension for Spring Cache] -* xref:spring-security.adoc[Quarkus - Extension for Spring Security] diff --git a/_versions/2.7/guides/spring-security.adoc b/_versions/2.7/guides/spring-security.adoc deleted file mode 100644 index d834d124305..00000000000 --- a/_versions/2.7/guides/spring-security.adoc +++ /dev/null @@ -1,441 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Quarkus Extension for Spring Security API - -include::./attributes.adoc[] - -While users are encouraged to use xref:security.adoc#standard-security-annotations[Java standard annotations for security authorizations], Quarkus provides a compatibility layer for Spring Security in the form of the `spring-security` extension. - -This guide explains how a Quarkus application can leverage the well known Spring Security annotations to define authorizations on RESTful services using roles. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] -* Some familiarity with the Spring Web extension - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. 
-
-The solution is located in the `spring-security-quickstart` {quickstarts-tree-url}/spring-security-quickstart[directory].
-
-== Creating the Maven project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: spring-security-quickstart
-:create-app-extensions: spring-web,spring-security,quarkus-elytron-security-properties-file,resteasy-jackson
-include::includes/devtools/create-app.adoc[]
-
-This command generates a project which imports the `spring-web`, `spring-security` and `security-properties-file` extensions.
-
-If you already have your Quarkus project configured, you can add the `spring-web`, `spring-security` and `security-properties-file` extensions
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: spring-web,spring-security,quarkus-elytron-security-properties-file,resteasy-jackson
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-spring-web</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-spring-security</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-elytron-security-properties-file</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-jackson</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-spring-web")
-implementation("io.quarkus:quarkus-spring-security")
-implementation("io.quarkus:quarkus-elytron-security-properties-file")
-implementation("io.quarkus:quarkus-resteasy-jackson")
-----
-
-For more information about `security-properties-file`, you can check out the guide of the xref:security-properties.adoc[quarkus-elytron-security-properties-file] extension.
-
-== GreetingController
-
-Create the `src/main/java/org/acme/spring/security/GreetingController.java` file, a controller that uses the Spring Web annotations (instead of the JAX-RS ones used by default) to define our REST endpoint, as follows:
-
-[source,java]
-----
-package org.acme.spring.security;
-
-import org.springframework.web.bind.annotation.GetMapping;
-import org.springframework.web.bind.annotation.RequestMapping;
-import org.springframework.web.bind.annotation.RestController;
-
-@RestController
-@RequestMapping("/greeting")
-public class GreetingController {
-
-    @GetMapping
-    public String hello() {
-        return "hello";
-    }
-}
-----
-
-== GreetingControllerTest
-
-Note that a test for the controller has been created as well:
-
-[source, java]
-----
-package org.acme.spring.security;
-
-import io.quarkus.test.junit.QuarkusTest;
-import org.junit.jupiter.api.Test;
-
-import static io.restassured.RestAssured.given;
-import static org.hamcrest.CoreMatchers.is;
-
-@QuarkusTest
-public class GreetingControllerTest {
-
-    @Test
-    public void testHelloEndpoint() {
-        given()
-            .when().get("/greeting")
-            .then()
-            .statusCode(200)
-            .body(is("hello"));
-    }
-
-}
-----
-
-== Package and run the application
-
-Run the application with:
-
-include::includes/devtools/dev.adoc[]
-
-Open your browser to http://localhost:8080/greeting.
-
-The result should be: `hello`.
-
-== Modify the controller to secure the `hello` method
-
-To restrict access to the `hello` method to users with certain roles, we will use the `@Secured` annotation.
-The updated controller will be: - -[source,java] ----- -package org.acme.spring.security; - -import org.springframework.security.access.annotation.Secured; -import org.springframework.web.bind.annotation.GetMapping; -import org.springframework.web.bind.annotation.RequestMapping; -import org.springframework.web.bind.annotation.RestController; - -@RestController -@RequestMapping("/greeting") -public class GreetingController { - - @Secured("admin") - @GetMapping - public String hello() { - return "hello"; - } -} ----- - -The easiest way to setup users and roles for our example is to use the `security-properties-file` extension. This extension essentially allows users and roles to be defined in the main Quarkus configuration file - `application.properties`. -For more information about this extension check xref:security-properties.adoc[the associated guide]. -An example configuration would be the following: - -[source,properties] ----- -quarkus.security.users.embedded.enabled=true -quarkus.security.users.embedded.plain-text=true -quarkus.security.users.embedded.users.scott=jb0ss -quarkus.security.users.embedded.roles.scott=admin,user -quarkus.security.users.embedded.users.stuart=test -quarkus.security.users.embedded.roles.stuart=user ----- - -Note that the test also needs to be updated. 
It could look like:
-
-[source, java]
-----
-package org.acme.spring.security;
-
-import io.quarkus.test.junit.QuarkusTest;
-import org.junit.jupiter.api.Test;
-
-import static io.restassured.RestAssured.given;
-import static org.hamcrest.CoreMatchers.is;
-
-@QuarkusTest
-public class GreetingControllerTest {
-
-    @Test
-    public void testHelloEndpointForbidden() {
-        given().auth().preemptive().basic("stuart", "test")
-            .when().get("/greeting")
-            .then()
-            .statusCode(403);
-    }
-
-    @Test
-    public void testHelloEndpoint() {
-        given().auth().preemptive().basic("scott", "jb0ss")
-            .when().get("/greeting")
-            .then()
-            .statusCode(200)
-            .body(is("hello"));
-    }
-
-}
-----
-
-== Test the changes
-
-Access allowed::
-
-Open your browser again to http://localhost:8080/greeting and enter `scott` / `jb0ss` in the authentication dialog.
-+
-The word `hello` should be displayed.
-
-Access forbidden::
-
-Open your browser again to http://localhost:8080/greeting and leave the authentication dialog empty.
-+
-The result should be:
-+
-[source]
-----
-Access to localhost was denied
-You don't have authorization to view this page.
-HTTP ERROR 403
-----
-
-== Run the application as a native executable
-
-You can generate the native executable with:
-
-include::includes/devtools/build-native.adoc[]
-
-== Supported Spring Security functionalities
-
-Quarkus currently only supports a subset of the functionalities that Spring Security provides, with more features being planned. More specifically, Quarkus supports the security-related features of role-based authorization semantics
-(think of `@Secured` instead of `@RolesAllowed`).
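Conceptually, the role-based checks behind `@Secured` and the `hasRole` / `hasAnyRole` expressions described below reduce to membership tests against the authenticated user's roles. A plain-Java sketch of those semantics (illustrative only; the class and method names are invented, this is not the extension's implementation):

```java
import java.util.Arrays;
import java.util.Set;

// Illustrative role-check semantics: hasRole is a membership test,
// hasAnyRole succeeds if any required role is held by the user.
public class RoleChecks {

    public static boolean hasRole(Set<String> userRoles, String required) {
        return userRoles.contains(required);
    }

    public static boolean hasAnyRole(Set<String> userRoles, String... required) {
        return Arrays.stream(required).anyMatch(userRoles::contains);
    }
}
```

With the embedded users configured above, `scott` holds `{admin, user}` so an `admin`-secured method is allowed, while `stuart` holds only `{user}` and gets a 403.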
- -=== Annotations - -The table below summarizes the supported annotations: - -.Supported Spring Security annotations -|=== -|Name|Comments - -|@Secured -| - -|@PreAuthorize -|See next section for more details - -|=== - -==== @PreAuthorize - -Quarkus provides support for some of the most used features of Spring Security's `@PreAuthorize` annotation. -The expressions that are supported are the following: - -hasRole:: -+ -To test if the current user has a specific role, the `hasRole` expression can be used inside `@PreAuthorize`. -+ -Some examples are: `@PreAuthorize("hasRole('admin')")`, `@PreAuthorize("hasRole(@roles.USER)")` where the `roles` is a bean that could be defined like so: -+ -[source, java] ----- -import org.springframework.stereotype.Component; - -@Component -public class Roles { - - public final String ADMIN = "admin"; - public final String USER = "user"; -} ----- - -hasAnyRole:: - -In the same fashion as `hasRole`, users can use `hasAnyRole` to check if the logged in user has any of the specified roles. -+ -Some examples are: `@PreAuthorize("hasAnyRole('admin')")`, `@PreAuthorize("hasAnyRole(@roles.USER, 'view')")` - -permitAll:: Adding `@PreAuthorize("permitAll()")` to a method will ensure that that method is accessible by any user (including anonymous users). Adding it to a class will ensure that all public methods -of the class that are not annotated with any other Spring Security annotation will be accessible. - -denyAll:: Adding `@PreAuthorize("denyAll()")` to a method will ensure that that method is not accessible by any user. Adding it to a class will ensure that all public methods -of the class that are not annotated with any other Spring Security annotation will not be accessible to any user. - -isAnonymous:: When annotating a bean method with `@PreAuthorize("isAnonymous()")` the method will only be accessible if the current user is anonymous - i.e. a non logged in user. 
-
-isAuthenticated:: When annotating a bean method with `@PreAuthorize("isAuthenticated()")` the method will only be accessible if the current user is a logged-in user. Essentially, the
-method is unavailable only to anonymous users.
-
-#paramName == authentication.principal.username:: This syntax allows users to check if a parameter (or a field of the parameter) of the secured method is equal to the logged-in username.
-+
-Examples of this use case are:
-+
-[source,java]
-----
-public class Person {
-
-    private final String name;
-
-    public Person(String name) {
-        this.name = name;
-    }
-
-    public String getName() {
-        return name;
-    }
-}
-
-@Component
-public class MyComponent {
-
-    @PreAuthorize("#username == authentication.principal.username") // <1>
-    public void doSomething(String username, String other){
-
-    }
-
-    @PreAuthorize("#person.name == authentication.principal.username") // <2>
-    public void doSomethingElse(Person person){
-
-    }
-}
-----
-<1> `doSomething` can be executed if the current logged-in user is the same as the `username` method parameter
-<2> `doSomethingElse` can be executed if the current logged-in user is the same as the `name` field of the `person` method parameter
-+
-TIP: The use of `authentication.` is optional, so using `principal.username` has the same result.
-
-#paramName != authentication.principal.username:: This is similar to the previous expression, with the difference being that the method parameter must be different from the logged-in username.
-
-@beanName.method():: This syntax allows developers to specify that the execution of a method of a specific bean will determine if the current user can access the secured method.
-+
-The syntax is best explained with an example.
-Let's assume that a `MyComponent` bean has been created like so:
-+
-[source,java]
-----
-@Component
-public class MyComponent {
-
-    @PreAuthorize("@personChecker.check(#person, authentication.principal.username)")
-    public void doSomething(Person person){
-
-    }
-}
-----
-+
-The `doSomething` method has been annotated with `@PreAuthorize` using an expression that indicates that method `check` of a bean named `personChecker` needs
-to be invoked to determine whether the current user is authorized to invoke the `doSomething` method.
-+
-An example of the `PersonChecker` could be:
-+
-[source,java]
-----
-@Component
-public class PersonChecker {
-
-    public boolean check(Person person, String username) {
-        return person.getName().equals(username);
-    }
-}
-----
-+
-Note that for the `check` method the parameter types must match what is specified in `@PreAuthorize` and that the return type must be a `boolean`.
-
-===== Combining expressions
-
-The `@PreAuthorize` annotation allows combining expressions using logical `AND` / `OR`. Currently, there is a limitation where only a single
-logical operation can be used (meaning mixing `AND` and `OR` isn't allowed).
-
-Some examples of allowed expressions are:
-
-[source,java]
-----
-    @PreAuthorize("hasAnyRole('user', 'admin') AND #user == principal.username")
-    public void allowedForUser(String user) {
-
-    }
-
-    @PreAuthorize("hasRole('user') OR hasRole('admin')")
-    public void allowedForUserOrAdmin() {
-
-    }
-
-    @PreAuthorize("hasAnyRole('view1', 'view2') OR isAnonymous() OR hasRole('test')")
-    public void allowedForAdminOrAnonymous() {
-
-    }
-----
-
-Note also that parentheses are currently not supported; expressions are evaluated from left to right.
-
-== Important Technical Note
-
-Please note that the Spring support in Quarkus does not start a Spring Application Context nor are any Spring infrastructure classes run.
-Spring classes and annotations are only used for reading metadata and / or are used as user code method return types or parameter types. -What that means for end users, is that adding arbitrary Spring libraries will not have any effect. Moreover Spring infrastructure -classes (like `org.springframework.beans.factory.config.BeanPostProcessor` for example) will not be executed. - -== Conversion Table - -The following table shows how Spring Security annotations can be converted to JAX-RS annotations. - -|=== -|Spring |JAX-RS |Comments - -|@Secured("admin") -|@RolesAllowed("admin") -| - -|=== - -== More Spring guides - -Quarkus has more Spring compatibility features. See the following guides for more details: - -* xref:spring-di.adoc[Quarkus - Extension for Spring DI] -* xref:spring-web.adoc[Quarkus - Extension for Spring Web] -* xref:spring-data-jpa.adoc[Quarkus - Extension for Spring Data JPA] -* xref:spring-data-rest.adoc[Quarkus - Extension for Spring Data REST] -* xref:spring-cloud-config-client.adoc[Quarkus - Reading properties from Spring Cloud Config Server] -* xref:spring-boot-properties.adoc[Quarkus - Extension for Spring Boot properties] -* xref:spring-cache.adoc[Quarkus - Extension for Spring Cache] -* xref:spring-scheduled.adoc[Quarkus - Extension for Spring Scheduled] diff --git a/_versions/2.7/guides/spring-web.adoc b/_versions/2.7/guides/spring-web.adoc deleted file mode 100644 index b7477e2f34b..00000000000 --- a/_versions/2.7/guides/spring-web.adoc +++ /dev/null @@ -1,530 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Quarkus Extension for Spring Web API - -include::./attributes.adoc[] - -While users are encouraged to use JAX-RS annotation for defining REST endpoints, Quarkus provides a compatibility layer for Spring Web in the form of the `spring-web` extension. 
-
-This guide explains how a Quarkus application can leverage the well known Spring Web annotations to define RESTful services.
-
-== Prerequisites
-
-To complete this guide, you need:
-
-include::includes/devtools/prerequisites.adoc[]
-
-== Solution
-
-We recommend that you follow the instructions in the next sections and create the application step by step.
-However, you can go right to the completed example.
-
-Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
-
-The solution is located in the `spring-web-quickstart` {quickstarts-tree-url}/spring-web-quickstart[directory].
-
-== Creating the Maven project
-
-First, we need a new project. Create a new project with the following command:
-
-:create-app-artifact-id: spring-web-quickstart
-:create-app-extensions: spring-web,resteasy-jackson
-include::includes/devtools/create-app.adoc[]
-
-This command generates a project which imports the `spring-web` extension.
-
-If you already have your Quarkus project configured, you can add the `spring-web` extension
-to your project by running the following command in your project base directory:
-
-:add-extension-extensions: spring-web,resteasy-jackson
-include::includes/devtools/extension-add.adoc[]
-
-This will add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-spring-web</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-resteasy-jackson</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-spring-web")
-implementation("io.quarkus:quarkus-resteasy-jackson")
-----
-
-[IMPORTANT]
-====
-`quarkus-spring-web` needs to be complemented with either `quarkus-resteasy-jackson` or `quarkus-resteasy-reactive-jackson` in order to work.
-====
-
-== GreetingController
-
-Create the `src/main/java/org/acme/spring/web/GreetingController.java` file, a controller with the Spring Web annotations to define our REST endpoint, as follows:
-
-[source,java]
-----
-package org.acme.spring.web;
-
-import org.springframework.web.bind.annotation.GetMapping;
-import org.springframework.web.bind.annotation.RequestMapping;
-import org.springframework.web.bind.annotation.RestController;
-
-@RestController
-@RequestMapping("/greeting")
-public class GreetingController {
-
-    @GetMapping
-    public String hello() {
-        return "hello";
-    }
-}
-----
-
-== GreetingControllerTest
-
-Note that a test for the controller has been created as well:
-
-[source, java]
-----
-package org.acme.spring.web;
-
-import io.quarkus.test.junit.QuarkusTest;
-import org.junit.jupiter.api.Test;
-
-import static io.restassured.RestAssured.given;
-import static org.hamcrest.CoreMatchers.is;
-
-@QuarkusTest
-public class GreetingControllerTest {
-
-    @Test
-    public void testHelloEndpoint() {
-        given()
-            .when().get("/greeting")
-            .then()
-            .statusCode(200)
-            .body(is("hello"));
-    }
-
-}
-----
-
-== Package and run the application
-
-Run the application with:
-
-include::includes/devtools/dev.adoc[]
-
-Open your browser to http://localhost:8080/greeting.
-
-The result should be: `hello`.
-
-== Run the application as a native executable
-
-You can generate the native executable with:
-
-include::includes/devtools/build-native.adoc[]
-
-== Going further with an endpoint returning JSON
-
-The `GreetingController` above was an example of a very simple endpoint. In many cases, however, it is required to return JSON content.
-The following example illustrates how that could be achieved using a Spring RestController: - -[source, java] ----- -import org.springframework.web.bind.annotation.GetMapping; -import org.springframework.web.bind.annotation.PathVariable; -import org.springframework.web.bind.annotation.RequestMapping; -import org.springframework.web.bind.annotation.RestController; - -@RestController -@RequestMapping("/greeting") -public class GreetingController { - - @GetMapping("/{name}") - public Greeting hello(@PathVariable(name = "name") String name) { - return new Greeting("hello " + name); - } - - public static class Greeting { - private final String message; - - public Greeting(String message) { - this.message = message; - } - - public String getMessage(){ - return message; - } - } -} ----- - -The corresponding test could look like: - -[source, java] ----- -package org.acme.spring.web; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; - -@QuarkusTest -public class GreetingControllerTest { - - @Test - public void testHelloEndpoint() { - given() - .when().get("/greeting/quarkus") - .then() - .statusCode(200) - .body("message", is("hello quarkus")); - } - -} ----- - -It should be noted that when using the Spring Web support in Quarkus, link:https://github.com/FasterXML/jackson[Jackson] is automatically added to the classpath and properly setup. - -== Adding OpenAPI and Swagger-UI - -You can add support for link:https://www.openapis.org/[OpenAPI] and link:https://swagger.io/tools/swagger-ui/[Swagger-UI] by using the `quarkus-smallrye-openapi` extension. 
- -Add the extension by running this command: - -[source,bash] ----- -./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-smallrye-openapi" ----- - -This will add the following to your `pom.xml`: - -[source,xml] ----- - - io.quarkus - quarkus-smallrye-openapi - ----- - -This is enough to generate a basic OpenAPI schema document from your REST Endpoints: - -[source,bash] ----- -curl http://localhost:8080/q/openapi ----- - -You will see the generated OpenAPI schema document: - -[source, yaml] ----- ---- -openapi: 3.0.1 -info: - title: Generated API - version: "1.0" -paths: - /greeting: - get: - responses: - "200": - description: OK - content: - '*/*': - schema: - type: string - /greeting/{name}: - get: - parameters: - - name: name - in: path - required: true - schema: - type: string - responses: - "200": - description: OK - content: - 'application/json': - schema: - $ref: '#/components/schemas/Greeting' -components: - schemas: - Greeting: - type: object - properties: - message: - type: string ----- - -Also see xref:openapi-swaggerui.adoc[the OpenAPI Guide] - -=== Adding MicroProfile OpenAPI Annotations - -You can use link:https://github.com/eclipse/microprofile-open-api[MicroProfile OpenAPI] to better document your schema, -example, adding the following to the class level of the `GreetingController`: - -[source, java] ----- -@OpenAPIDefinition( - info = @Info( - title="Greeting API", - version = "1.0.1", - contact = @Contact( - name = "Greeting API Support", - url = "http://exampleurl.com/contact", - email = "techsupport@example.com"), - license = @License( - name = "Apache 2.0", - url = "https://www.apache.org/licenses/LICENSE-2.0.html")) -) ----- - -And describe your endpoints like this: - -[source, java] ----- -@Tag(name = "Hello", description = "Just say hello") -@GetMapping(produces=MediaType.TEXT_PLAIN_VALUE) -public String hello() { - return "hello"; -} - -@GetMapping(value = "/{name}", produces=MediaType.APPLICATION_JSON_VALUE) -@Tag(name = "Hello 
to someone", description = "Just say hello to someone")
-public Greeting hello(@PathVariable(name = "name") String name) {
-    return new Greeting("hello " + name);
-}
-----
-
-will generate this OpenAPI schema:
-
-[source, yaml]
-----
----
-openapi: 3.0.1
-info:
-  title: Greeting API
-  contact:
-    name: Greeting API Support
-    url: http://exampleurl.com/contact
-    email: techsupport@example.com
-  license:
-    name: Apache 2.0
-    url: https://www.apache.org/licenses/LICENSE-2.0.html
-  version: 1.0.1
-tags:
-- name: Hello
-  description: Just say hello
-- name: Hello to someone
-  description: Just say hello to someone
-paths:
-  /greeting:
-    get:
-      tags:
-      - Hello
-      responses:
-        "200":
-          description: OK
-          content:
-            '*/*':
-              schema:
-                type: string
-  /greeting/{name}:
-    get:
-      tags:
-      - Hello to someone
-      parameters:
-      - name: name
-        in: path
-        required: true
-        schema:
-          type: string
-      responses:
-        "200":
-          description: OK
-          content:
-            '*/*':
-              schema:
-                $ref: '#/components/schemas/Greeting'
-components:
-  schemas:
-    Greeting:
-      type: object
-      properties:
-        message:
-          type: string
-----
-
-=== Using Swagger UI
-
-Swagger UI is included by default when running in `Dev` or `Test` mode, and can optionally be added to `Prod` mode.
-See xref:openapi-swaggerui.adoc#use-swagger-ui-for-development[the Swagger UI guide] for more details.
-
-Navigate to link:http://localhost:8080/q/swagger-ui/[localhost:8080/q/swagger-ui/] and you will see the Swagger UI screen:
-
-image:spring-web-guide-screenshot01.png[alt=Swagger UI]
-
-== Supported Spring Web functionalities
-
-Quarkus currently supports a subset of the functionalities that Spring Web provides. More specifically, Quarkus supports the REST-related features of Spring Web
-(think of `@RestController` instead of `@Controller`).
-
-=== Annotations
-
-The table below summarizes the supported annotations:
-
-.Supported Spring Web annotations
-|===
-|Name|Comments
-
-|@RestController
-|
-
-|@RequestMapping
-|
-|@GetMapping
-|
-|@PostMapping
-|
-|@PutMapping
-|
-|@DeleteMapping
-|
-|@PatchMapping
-|
-|@RequestParam
-|
-|@RequestHeader
-|
-|@MatrixVariable
-|
-|@PathVariable
-|
-|@CookieValue
-|
-|@RequestBody
-|
-|@ResponseStatus
-|
-|@ExceptionHandler
-|Can only be used in a @RestControllerAdvice class, not on a per-controller basis
-|@RestControllerAdvice
-|Only the @ExceptionHandler capability is supported
-|===
-
-=== Controller method return types
-
-The following method return types are supported:
-
-* Primitive types
-* String (which will be used as a literal, no Spring MVC view support is provided)
-* POJO classes which will be serialized via JSON
-* `org.springframework.http.ResponseEntity`
-
-=== Controller method parameter types
-
-In addition to the method parameters that can be annotated with the appropriate Spring Web annotations from the previous table,
-`javax.servlet.http.HttpServletRequest` and `javax.servlet.http.HttpServletResponse` are also supported.
-For this to function, however, users need to add the `quarkus-undertow` dependency.
-
-=== Exception handler method return types
-
-The following method return types are supported:
-
-* `org.springframework.http.ResponseEntity`
-* `java.util.Map`
-
-Other return types mentioned in the Spring https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/bind/annotation/ExceptionHandler.html[ExceptionHandler javadoc] are not supported.
-
-=== Exception handler method parameter types
-
-The following parameter types are supported, in arbitrary order:
-
-* An exception argument: declared as a general `Exception` or as a more specific exception. This also serves as a mapping hint if the annotation itself does not narrow the exception types through its `value()`.
-* Request and/or response objects (typically from the Servlet API). You may choose any specific request/response type, e.g. `ServletRequest` / `HttpServletRequest`. To use the Servlet API, the `quarkus-undertow` dependency needs to be added.
-
-Other parameter types mentioned in the Spring https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/bind/annotation/ExceptionHandler.html[ExceptionHandler javadoc] are not supported.
-
-== Important Technical Note
-
-Please note that the Spring support in Quarkus does not start a Spring Application Context, nor are any Spring infrastructure classes run.
-Spring classes and annotations are only used for reading metadata and/or are used as user code method return types or parameter types.
-What that means for end users is that adding arbitrary Spring libraries will not have any effect. Moreover, Spring infrastructure
-classes (like `org.springframework.beans.factory.config.BeanPostProcessor`, for example) will not be executed.
-
-== Conversion Table
-
-The following table shows how Spring Web annotations can be converted to JAX-RS annotations.
-
-|===
-|Spring |JAX-RS |Comments
-
-|@RestController
-|
-|There is no equivalent in JAX-RS. Annotating a class with @Path suffices
-
-|@RequestMapping(path="/api")
-|@Path("/api")
-|
-
-|@RequestMapping(consumes="application/json")
-|@Consumes("application/json")
-|
-
-|@RequestMapping(produces="application/json")
-|@Produces("application/json")
-|
-
-|@RequestParam
-|@QueryParam
-|
-
-|@PathVariable
-|@PathParam
-|
-
-|@RequestBody
-|
-|No equivalent in JAX-RS. Method parameters corresponding to the body of the request are handled in JAX-RS without requiring any annotation
-
-|@RestControllerAdvice
-|
-|No equivalent in JAX-RS
-
-|@ResponseStatus
-|
-|No equivalent in JAX-RS
-
-|@ExceptionHandler
-|
-|No equivalent annotation in JAX-RS. 
Exceptions are handled by implementing `javax.ws.rs.ext.ExceptionMapper`
-|===
-
-== More Spring guides
-
-Quarkus has more Spring compatibility features. See the following guides for more details:
-
-* xref:spring-di.adoc[Quarkus - Extension for Spring DI]
-* xref:spring-data-jpa.adoc[Quarkus - Extension for Spring Data JPA]
-* xref:spring-data-rest.adoc[Quarkus - Extension for Spring Data REST]
-* xref:spring-security.adoc[Quarkus - Extension for Spring Security]
-* xref:spring-cloud-config-client.adoc[Quarkus - Reading properties from Spring Cloud Config Server]
-* xref:spring-boot-properties.adoc[Quarkus - Extension for Spring Boot properties]
-* xref:spring-cache.adoc[Quarkus - Extension for Spring Cache]
-* xref:spring-scheduled.adoc[Quarkus - Extension for Spring Scheduled]
diff --git a/_versions/2.7/guides/status-include.adoc b/_versions/2.7/guides/status-include.adoc
deleted file mode 100644
index 621cb9c7170..00000000000
--- a/_versions/2.7/guides/status-include.adoc
+++ /dev/null
@@ -1,20 +0,0 @@
-[NOTE]
-====
-This technology is considered {extension-status}.
-
-ifeval::["{extension-status}" == "experimental"]
-In _experimental_ mode, early feedback is requested to mature the idea.
-There is no guarantee of stability nor long-term presence in the platform until the solution matures.
-Feedback is welcome on our https://groups.google.com/d/forum/quarkus-dev[mailing list] or as issues in our https://github.com/quarkusio/quarkus/issues[GitHub issue tracker].
-endif::[]
-ifeval::["{extension-status}" == "preview"]
-In _preview_, backward compatibility and presence in the ecosystem are not guaranteed.
-Specific improvements might require changing configuration or APIs, and plans to become _stable_ are under way.
-Feedback is welcome on our https://groups.google.com/d/forum/quarkus-dev[mailing list] or as issues in our https://github.com/quarkusio/quarkus/issues[GitHub issue tracker].
-endif::[]
-ifeval::["{extension-status}" == "stable"]
-Being _stable_, backward compatibility and presence in the ecosystem are taken very seriously.
-endif::[]
-
-For a full list of possible statuses, check our https://quarkus.io/faq/#extension-status[FAQ entry].
-====
diff --git a/_versions/2.7/guides/stork-reference.adoc b/_versions/2.7/guides/stork-reference.adoc
deleted file mode 100644
index c08622dbb64..00000000000
--- a/_versions/2.7/guides/stork-reference.adoc
+++ /dev/null
@@ -1,347 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Stork Reference Guide
-
-include::./attributes.adoc[]
-
-This guide is the companion to the xref:stork.adoc[Stork Getting Started Guide].
-It explains the configuration and usage of the SmallRye Stork integration in Quarkus.
-
-== Supported clients
-
-The current integration of Stork supports:
-
-* the Reactive REST Client
-* the gRPC clients
-
-WARNING: The gRPC client integration does not support statistics-based load balancers.
-
-== Available service discovery and selection
-
-Check the https://smallrye.io/smallrye-stork[SmallRye Stork website] to learn more about the provided service discovery and selection strategies.
-
-== Using Stork in Kubernetes
-
-Stork provides service discovery support for Kubernetes, which goes beyond what Kubernetes provides by default.
-It looks for all the pods backing a Kubernetes service, but instead of applying round-robin (as Kubernetes would do), it gives you the option to select the pod using a Stork load-balancer.
-
-To use this feature, add the following dependency to your project:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.smallrye.stork</groupId>
-    <artifactId>stork-service-discovery-kubernetes</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.smallrye.stork:stork-service-discovery-kubernetes")
-----
-
-For each service expected to be exposed as a Kubernetes Service, configure the lookup:
-
-[source, properties]
-----
-stork.my-service.service-discovery=kubernetes
-stork.my-service.service-discovery.k8s-namespace=my-namespace
-----
-
-Stork looks for the Kubernetes Service with the given name (`my-service` in the previous example) in the specified namespace.
-Instead of using the Kubernetes Service IP directly and letting Kubernetes handle the selection and balancing, Stork inspects the service and retrieves the list of pods providing the service. Then, it can select the instance.
-
-== Implementing a custom service discovery
-
-Stork is extensible, and you can implement your own service discovery mechanism.
-
-=== Dependency
-To implement your Service Discovery Provider, make sure your project depends on Core and Configuration Generator. The former brings the classes necessary to implement a custom discovery mechanism; the latter contains an annotation processor that generates classes needed by Stork.
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.smallrye.stork</groupId>
-    <artifactId>stork-core</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.smallrye.stork</groupId>
-    <artifactId>stork-configuration-generator</artifactId>
-    <scope>provided</scope>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.smallrye.stork:stork-core")
-compileOnly("io.smallrye.stork:stork-configuration-generator")
-----
-
-[NOTE]
-====
-If the provider is located in an extension, the configuration generator should be declared in the
-`annotationProcessorPaths` section of the runtime module using the default scope:
-
-[source,xml]
-----
-<annotationProcessorPaths>
-    ...
-    <path>
-        <groupId>io.smallrye.stork</groupId>
-        <artifactId>stork-configuration-generator</artifactId>
-    </path>
-</annotationProcessorPaths>
-----
-====
-
-=== Implementing a service discovery provider
-
-The custom provider is a factory that creates an `io.smallrye.stork.api.ServiceDiscovery` instance for each configured service using this service discovery provider.
-A type, for example `acme`, identifies each provider.
-This type is used in the configuration to reference the provider:
-
-[source, properties]
-----
-stork.my-service.service-discovery=acme
-----
-
-The first step consists of implementing the `io.smallrye.stork.spi.ServiceDiscoveryProvider` interface:
-
-[source, java]
-----
-package examples;
-
-import io.smallrye.stork.api.ServiceDiscovery;
-import io.smallrye.stork.api.config.ServiceConfig;
-import io.smallrye.stork.api.config.ServiceDiscoveryAttribute;
-import io.smallrye.stork.api.config.ServiceDiscoveryType;
-import io.smallrye.stork.spi.StorkInfrastructure;
-import io.smallrye.stork.spi.ServiceDiscoveryProvider;
-
-@ServiceDiscoveryType("acme") // <1>
-@ServiceDiscoveryAttribute(name = "host",
-        description = "Host name of the service discovery server.", required = true) // <2>
-@ServiceDiscoveryAttribute(name = "port",
-        description = "Port of the service discovery server.", required = false)
-public class AcmeServiceDiscoveryProvider // <3>
-        implements ServiceDiscoveryProvider<AcmeServiceDiscoveryProviderConfiguration> {
-
-    // <4>
-    @Override
-    public ServiceDiscovery createServiceDiscovery(AcmeServiceDiscoveryProviderConfiguration config,
-            String serviceName,
-            ServiceConfig serviceConfig,
-            StorkInfrastructure storkInfrastructure) {
-        return new AcmeServiceDiscovery(config);
-    }
-}
-----
-
-This implementation is straightforward.
-
-<1> The `@ServiceDiscoveryType` annotation defines the type of the service discovery provider. For each `ServiceDiscoveryProvider` annotated with this annotation, a configuration class will be generated. The name of the configuration class is constructed by appending `Configuration` to the name of the provider.
-<2> Use `@ServiceDiscoveryAttribute` to define configuration properties for services configured with this service discovery provider. Configuration properties are gathered from all properties of the form `stork.my-service.service-discovery.attr=value`.
-<3> The provider needs to implement `ServiceDiscoveryProvider` typed by the configuration class.
-<4> The `createServiceDiscovery` method is the factory method. It receives the configuration and access to the name of the service and the available infrastructure.
-
-Then, we need to implement the `ServiceDiscovery` interface:
-
-[source, java]
-----
-package examples;
-
-import java.util.Collections;
-import java.util.List;
-
-import io.smallrye.mutiny.Uni;
-import io.smallrye.stork.api.ServiceDiscovery;
-import io.smallrye.stork.api.ServiceInstance;
-import io.smallrye.stork.impl.DefaultServiceInstance;
-import io.smallrye.stork.utils.ServiceInstanceIds;
-
-public class AcmeServiceDiscovery implements ServiceDiscovery {
-
-    private final String host;
-    private final int port;
-
-    public AcmeServiceDiscovery(AcmeServiceDiscoveryProviderConfiguration configuration) {
-        this.host = configuration.getHost();
-        this.port = Integer.parseInt(configuration.getPort());
-    }
-
-    @Override
-    public Uni<List<ServiceInstance>> getServiceInstances() {
-        // Proceed to the lookup...
-
-        // Here, we just return a DefaultServiceInstance with the configured host and port
-        // The last parameter specifies whether the communication with the instance should happen over a secure connection
-        DefaultServiceInstance instance =
-                new DefaultServiceInstance(ServiceInstanceIds.next(), host, port, false);
-        return Uni.createFrom().item(() -> Collections.singletonList(instance));
-    }
-}
----- 
-
-Again, this implementation is simplistic.
-Typically, instead of creating a service instance with values from the configuration, you would connect to a service discovery backend, look up the service, and build the list of service instances accordingly.
-That's why the method returns a `Uni`: most of the time, the lookup is a remote operation.
-
-=== Using your service discovery
-
-In the project using it, don't forget to add the dependency on the module providing your implementation.
-Then, in the configuration, just add:
-
-[source, properties]
-----
-stork.my-service.service-discovery=acme
-stork.my-service.service-discovery.host=localhost
-stork.my-service.service-discovery.port=1234
-----
-
-Then, Stork will use your implementation to locate the `my-service` service.
-
-== Implementing a custom service selection / load-balancer
-
-Stork is extensible, and you can implement your own service selection (load-balancer) mechanism.
-
-=== Dependency
-To implement your Load Balancer Provider, make sure your project depends on Core and Configuration Generator. The former brings the classes necessary to implement a custom load balancer; the latter contains an annotation processor that generates classes needed by Stork.
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.smallrye.stork</groupId>
-    <artifactId>stork-core</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.smallrye.stork</groupId>
-    <artifactId>stork-configuration-generator</artifactId>
-    <scope>provided</scope>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.smallrye.stork:stork-core")
-compileOnly("io.smallrye.stork:stork-configuration-generator")
-----
-
-[NOTE]
-====
-Similar to custom discovery providers, if the provider is located in an extension, the configuration generator should be declared in the `annotationProcessorPaths` section of the runtime module using the default scope.
-====
-
-=== Implementing a load balancer provider
-
-A load balancer implementation consists of three elements:
-
-- `LoadBalancer` which is responsible for selecting service instances for a single Stork service
-- `LoadBalancerProvider` which creates instances of `LoadBalancer` for a given load balancer _type_
-- `LoadBalancerProviderConfiguration` which is a configuration for the load balancer
-
-
-A _type_, for example `acme`, identifies each provider.
-This _type_ is used in the configuration to reference the provider:
-
-[source, properties]
-----
-stork.my-service.load-balancer=acme
----- 
-
-Similarly to `ServiceDiscoveryProvider`, a `LoadBalancerProvider` implementation needs to be annotated with `@LoadBalancerType` that defines the _type_.
-Any configuration properties that the provider expects should be defined with `@LoadBalancerAttribute` annotations placed on the provider.
-[source, java]
-----
-package examples;
-
-import io.smallrye.stork.api.LoadBalancer;
-import io.smallrye.stork.api.ServiceDiscovery;
-import io.smallrye.stork.api.config.LoadBalancerAttribute;
-import io.smallrye.stork.api.config.LoadBalancerType;
-import io.smallrye.stork.spi.LoadBalancerProvider;
-
-@LoadBalancerType("acme")
-@LoadBalancerAttribute(name = "my-attribute",
-        description = "Attribute that alters the behavior of the LoadBalancer")
-public class AcmeLoadBalancerProvider implements
-        LoadBalancerProvider<AcmeLoadBalancerProviderConfiguration> {
-
-    @Override
-    public LoadBalancer createLoadBalancer(AcmeLoadBalancerProviderConfiguration config,
-            ServiceDiscovery serviceDiscovery) {
-        return new AcmeLoadBalancer(config);
-    }
-}
----- 
-
-Note that, similarly to the `ServiceDiscoveryProvider`, the `LoadBalancerProvider` interface takes a configuration class as a parameter. This configuration class is generated automatically by the _Configuration Generator_.
-Its name is created by appending `Configuration` to the name of the provider class.
-
-The next step is to implement the `LoadBalancer` interface:
-
-[source, java]
-----
-package examples;
-
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Random;
-
-import io.smallrye.stork.api.LoadBalancer;
-import io.smallrye.stork.api.NoServiceInstanceFoundException;
-import io.smallrye.stork.api.ServiceInstance;
-
-public class AcmeLoadBalancer implements LoadBalancer {
-
-    private final Random random;
-
-    public AcmeLoadBalancer(AcmeLoadBalancerProviderConfiguration config) {
-        random = new Random();
-    }
-
-    @Override
-    public ServiceInstance selectServiceInstance(Collection<ServiceInstance> serviceInstances) {
-        if (serviceInstances.isEmpty()) {
-            throw new NoServiceInstanceFoundException("No services found.");
-        }
-        int index = random.nextInt(serviceInstances.size());
-        return new ArrayList<>(serviceInstances).get(index);
-    }
-}
---- -
-
-Again, this implementation is simplistic and just picks a random instance from the received list.
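A random pick is one selection strategy; the classic alternative is round-robin. Stripped of Stork types, the wrap-around index logic behind round-robin can be sketched in plain Java (class and method names here are ours for illustration, not Stork API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// JDK-only sketch of round-robin selection over a list of instances.
public class RoundRobinSketch {

    private final AtomicInteger counter = new AtomicInteger();

    // Select the next instance, wrapping around when the end of the list is reached.
    public String select(List<String> instances) {
        if (instances.isEmpty()) {
            throw new IllegalStateException("No services found.");
        }
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch();
        List<String> instances = List.of("localhost:9000", "localhost:9001");
        System.out.println(lb.select(instances)); // localhost:9000
        System.out.println(lb.select(instances)); // localhost:9001
        System.out.println(lb.select(instances)); // localhost:9000
    }
}
```

The `AtomicInteger` keeps the strategy safe under concurrent calls, which matters because a load balancer is shared by all requests to a service.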
-
-
-To make the provider discoverable, register it via the `ServiceLoader` mechanism: create a `META-INF/services/io.smallrye.stork.spi.LoadBalancerProvider` resource containing the fully qualified name of the provider class:
-
-[source, text]
----- 
-examples.AcmeLoadBalancerProvider
----- 
-
-=== Using your load balancer
-
-In the project using it, don't forget to add the dependency on the module providing your implementation.
-Then, in the configuration, just add:
-
-[source, properties]
----- 
-stork.my-service.service-discovery=...
-stork.my-service.load-balancer=acme
----- 
-
-Then, Stork will use your implementation to select the `my-service` service instance.
-
-
-
-
diff --git a/_versions/2.7/guides/stork.adoc b/_versions/2.7/guides/stork.adoc
deleted file mode 100644
index 099d3391220..00000000000
--- a/_versions/2.7/guides/stork.adoc
+++ /dev/null
@@ -1,381 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Getting Started with SmallRye Stork
-:extension-status: preview
-
-// Temporary until Stork is in the BOM
-:stork-version: 1.0.0.Beta1
-include::./attributes.adoc[]
-
-The essence of distributed systems resides in the interaction between services.
-In modern architecture, you often have multiple instances of your service to share the load or improve resilience through redundancy.
-But how do you select the best instance of your service?
-That's where https://smallrye.io/smallrye-stork[SmallRye Stork] helps.
-Stork is going to choose the most appropriate instance.
-It offers: - -* Extensible service discovery mechanisms -* Built-in support for Consul and Kubernetes -* Customizable client load-balancing strategies - -include::./status-include.adoc[] - -== Prerequisites - -:prerequisites-docker: -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this guide, we will build an application composed of: - -* A simple blue service exposed on port 9000 -* A simple red service exposed on port 9001 -* A REST Client calling the blue or red service (the selection is delegated to Stork) -* A REST endpoint using the REST client and calling the services -* The blue and red services are registered in https://www.consul.io/[Consul]. - -image::stork-getting-started-architecture.png[Architecture of the application,width=50%, align=center] - -For the sake of simplicity, everything (except Consul) will be running in the same Quarkus application. -Of course, each component will run in its own process in the real world. - -== Solution - -We recommend that you follow the instructions in the next sections and create the applications step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `stork-quickstart` {quickstarts-tree-url}/stork-quickstart[directory]. - -== Discovery and selection - -Before going further, we need to discuss discovery vs. selection. - -- Service discovery is the process of locating service instances. -It produces a list of service instances that is potentially empty (if no service matches the request) or contains multiple service instances. - -- Service selection, also called load-balancing, chooses the best instance from the list returned by the discovery process. -The result is a single service instance or an exception when no suitable instance can be found. - -Stork handles both discovery and selection. 
-However, it does not handle the communication with the service but only provides a service instance.
-The various integrations in Quarkus extract the location of the service from that service instance.
-
-image::stork-process.png[Discovery and Selection of services,width=50%, align=center]
-
-== Bootstrapping the project
-
-Create a Quarkus project importing the quarkus-rest-client-reactive and quarkus-resteasy-reactive extensions using your favorite approach:
-
-:create-app-artifact-id: stork-quickstart
-:create-app-extensions: quarkus-rest-client-reactive,quarkus-resteasy-reactive
-include::includes/devtools/create-app.adoc[]
-
-In the generated project, also add the following dependencies:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.smallrye.stork</groupId>
-    <artifactId>stork-service-discovery-consul</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.smallrye.reactive</groupId>
-    <artifactId>smallrye-mutiny-vertx-consul-client</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.smallrye.stork:stork-service-discovery-consul")
-implementation("io.smallrye.reactive:smallrye-mutiny-vertx-consul-client")
-----
-
-`stork-service-discovery-consul` provides an implementation of service discovery for Consul.
-`smallrye-mutiny-vertx-consul-client` is a Consul client which we will use to register our services in Consul.
-
-== The Blue and Red services
-
-Let's start with the very beginning: the services we will discover, select and call.
- -Create the `src/main/java/org/acme/services/BlueService.java` with the following content: - -[source, java] ----- -package org.acme.services; - -import io.quarkus.runtime.StartupEvent; -import io.vertx.mutiny.core.Vertx; -import org.eclipse.microprofile.config.inject.ConfigProperty; - -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.event.Observes; - -@ApplicationScoped -public class BlueService { - - @ConfigProperty(name = "blue-service-port", defaultValue = "9000") int port; - - /** - * Start an HTTP server for the blue service. - * - * Note: this method is called on a worker thread, and so it is allowed to block. - */ - public void init(@Observes StartupEvent ev, Vertx vertx) { - vertx.createHttpServer() - .requestHandler(req -> req.response().endAndForget("Hello from Blue!")) - .listenAndAwait(port); - } -} ----- - -It creates a new HTTP server (using Vert.x) and implements our simple service when the application starts. -For each HTTP request, it sends a response with "Hello from Blue!" as the body. - -Following the same logic, create the `src/main/java/org/acme/services/RedService.java` with the following content: - -[source, java] ----- - -package org.acme.services; - -import io.quarkus.runtime.StartupEvent; -import io.vertx.mutiny.core.Vertx; -import org.eclipse.microprofile.config.inject.ConfigProperty; - -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.event.Observes; - -@ApplicationScoped -public class RedService { - @ConfigProperty(name = "red-service-port", defaultValue = "9001") int port; - - /** - * Start an HTTP server for the red service. - * - * Note: this method is called on a worker thread, and so it is allowed to block. - */ - public void init(@Observes StartupEvent ev, Vertx vertx) { - vertx.createHttpServer() - .requestHandler(req -> req.response().endAndForget("Hello from Red!")) - .listenAndAwait(port); - } - -} ----- - -This time, it writes "Hello from Red!". 
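If you want to see the request/response shape of these two services without Vert.x, the same behavior can be sketched with the JDK's built-in HTTP server and client (purely illustrative, not part of the quickstart code):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// JDK-only sketch of a "color" service: every request gets a fixed body.
public class ColorServiceSketch {

    // Start a server on an ephemeral port, answer every request with `body`,
    // perform one GET against it, and return what the client received.
    static String roundTrip(String body) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/")).build(),
                    HttpResponse.BodyHandlers.ofString());
            return response.body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("Hello from Blue!")); // Hello from Blue!
        System.out.println(roundTrip("Hello from Red!"));  // Hello from Red!
    }
}
```

The Vert.x variant in the guide does the same thing non-blockingly; the quickstart uses it because it integrates with the Quarkus event loop.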
-
-== Service registration in Consul
-
-Now that we have implemented our services, we need to register them in Consul.
-
-NOTE: Stork is not limited to Consul and integrates with other service discovery mechanisms.
-
-Create the `src/main/java/org/acme/services/Registration.java` file with the following content:
-
-[source, java]
-----
-package org.acme.services;
-
-import io.quarkus.runtime.StartupEvent;
-import io.vertx.ext.consul.ServiceOptions;
-import io.vertx.mutiny.ext.consul.ConsulClient;
-import io.vertx.ext.consul.ConsulClientOptions;
-import io.vertx.mutiny.core.Vertx;
-import org.eclipse.microprofile.config.inject.ConfigProperty;
-
-import javax.enterprise.context.ApplicationScoped;
-import javax.enterprise.event.Observes;
-
-@ApplicationScoped
-public class Registration {
-
-    @ConfigProperty(name = "consul.host") String host;
-    @ConfigProperty(name = "consul.port") int port;
-
-    @ConfigProperty(name = "blue-service-port", defaultValue = "9000") int blue;
-    @ConfigProperty(name = "red-service-port", defaultValue = "9001") int red;
-
-    /**
-     * Register our two services in Consul.
-     *
-     * Note: this method is called on a worker thread, and so it is allowed to block.
-     */
-    public void init(@Observes StartupEvent ev, Vertx vertx) {
-        ConsulClient client = ConsulClient.create(vertx, new ConsulClientOptions().setHost(host).setPort(port));
-
-        client.registerServiceAndAwait(
-                new ServiceOptions().setPort(blue).setAddress("localhost").setName("my-service").setId("blue"));
-        client.registerServiceAndAwait(
-                new ServiceOptions().setPort(red).setAddress("localhost").setName("my-service").setId("red"));
-
-    }
-}
----- 
-
-When the application starts, it connects to Consul using the Vert.x Consul Client and registers our two instances.
-Both registrations use the same name (`my-service`) but different ids, indicating that these are two instances of the same _service_.
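Under the hood, the Vert.x Consul client turns each `ServiceOptions` into a JSON document sent to Consul's `PUT /v1/agent/service/register` endpoint. The payload can be sketched by hand (the builder below is ours, for illustration; the field names follow Consul's agent API):

```java
// Sketch of the JSON payload Consul's agent API expects for service registration.
public class ConsulRegistrationSketch {

    // Build the registration document for one service instance.
    static String payload(String id, String name, String address, int port) {
        return String.format(
                "{\"ID\":\"%s\",\"Name\":\"%s\",\"Address\":\"%s\",\"Port\":%d}",
                id, name, address, port);
    }

    public static void main(String[] args) {
        // Two instances sharing the same Name but carrying distinct IDs.
        System.out.println(payload("blue", "my-service", "localhost", 9000));
        System.out.println(payload("red", "my-service", "localhost", 9001));
    }
}
```

Sharing the `Name` while varying the `ID` is exactly what makes Consul (and therefore Stork) see one service with two instances.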
-
-== The REST Client interface and the front-end API
-
-So far, we haven't used Stork; we have just scaffolded the services we will be discovering, selecting, and calling.
-
-We will call the services using the Reactive REST Client.
-Create the `src/main/java/org/acme/MyService.java` file with the following content:
-
-[source, java]
----- 
-package org.acme;
-
-import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-/**
- * The REST Client interface.
- *
- * Notice the `baseUri`. It uses `stork://` as the URL scheme, indicating that the called service uses Stork to locate and
- * select the service instance. The `my-service` part is the service name. This is used to configure Stork discovery
- * and selection in the `application.properties` file.
- */
-@RegisterRestClient(baseUri = "stork://my-service")
-public interface MyService {
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    String get();
-}
----- 
-
-It's a straightforward REST client interface containing a single method. However, note the `baseUri` attribute.
-It starts with `stork://`.
-It instructs the REST client to delegate the discovery and selection of the service instances to Stork.
-Notice the `my-service` part in the URL.
-It is the service name we will be using in the application configuration.
-
-Stork does not change how the REST client is used.
-Create the `src/main/java/org/acme/FrontendApi.java` file with the following content:
-
-[source, java]
----- 
-package org.acme;
-
-import org.eclipse.microprofile.rest.client.inject.RestClient;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.core.MediaType;
-
-/**
- * A frontend API using our REST Client (which uses Stork to locate and select the service instance on each call).
- */
-@Path("/api")
-public class FrontendApi {
-
-    @RestClient MyService service;
-
-    @GET
-    @Produces(MediaType.TEXT_PLAIN)
-    public String invoke() {
-        return service.get();
-    }
-
-}
----- 
-
-It injects and uses the REST client as usual.
-
-== Stork configuration
-
-The system is almost complete. We only need to configure Stork and the `Registration` bean.
-
-In `src/main/resources/application.properties`, add:
-
-[source, properties]
----- 
-consul.host=localhost
-consul.port=8500
-
-stork.my-service.service-discovery=consul
-stork.my-service.service-discovery.consul-host=localhost
-stork.my-service.service-discovery.consul-port=8500
-stork.my-service.load-balancer=round-robin
----- 
-
-The first two lines provide the Consul location used by the `Registration` bean.
-
-The other properties are related to Stork.
-`stork.my-service.service-discovery` indicates which type of service discovery we will be using to locate the `my-service` service.
-In our case, it's `consul`.
-`stork.my-service.service-discovery.consul-host` and `stork.my-service.service-discovery.consul-port` configure the access to Consul.
-Finally, `stork.my-service.load-balancer` configures the service selection.
-In our case, we use `round-robin`.
-
-== Running the application
-
-We're done!
-So, let's see if it works.
-
-First, start Consul:
-
-[source, shell script]
----- 
-docker run --rm --name consul -p 8500:8500 -p 8501:8501 consul:1.7 agent -dev -ui -client=0.0.0.0 -bind=0.0.0.0 --https-port=8501
----- 
-
-If you start Consul differently, do not forget to edit the application configuration.
-
-Then, package the application:
-
-include::includes/devtools/build.adoc[]
-
-And run it:
-
-[source, shell script]
----- 
-> java -jar target/quarkus-app/quarkus-run.jar
----- 
-
-In another terminal, run:
-
-[source, shell script]
----- 
-> curl http://localhost:8080/api
-...
-> curl http://localhost:8080/api
-...
-> curl http://localhost:8080/api
-...
----- - -The responses alternate between `Hello from Red!` and `Hello from Blue!`. - -You can compile this application into a native executable: - -include::includes/devtools/build-native.adoc[] - -And start it with: - -[source, shell script] ----- -> ./target/stork-getting-started-1.0.0-SNAPSHOT-runner ----- - -== Going further - -This guide has shown how to use SmallRye Stork to discover and select your services. -You can find more about Stork in: - -- the xref:stork-reference.adoc[Stork reference guide], -- the https://smallrye.io/smallrye-stork[SmallRye Stork website]. diff --git a/_versions/2.7/guides/stylesheet/asciidoc-tabs.css b/_versions/2.7/guides/stylesheet/asciidoc-tabs.css deleted file mode 100644 index 8fd9ebe2fe8..00000000000 --- a/_versions/2.7/guides/stylesheet/asciidoc-tabs.css +++ /dev/null @@ -1,31 +0,0 @@ -.asciidoc-tabs-hidden { - display: none; -} - -.asciidoc-tabs-switch { - border-width: 1px 0 0 1px; - border-style: solid; - border-color: #aaa; - margin-bottom: -1px; - display: inline-block; -} - -.asciidoc-tabs-switch--item.selected { - background-color: #fff; - color: #0D1C2C; - font-weight: 600; - border-bottom: 1px solid #fff; -} - -.asciidoc-tabs-switch--item { - padding: 0.75rem 2.5rem; - background-color: #e4edf7; - color: #0D1C2C; - display: inline-block; - cursor: pointer; - border-right: 1px solid #aaa; -} - -.asciidoc-tabs-switch ~ .content pre.highlight { - margin-top: 0; -} \ No newline at end of file diff --git a/_versions/2.7/guides/stylesheet/config.css b/_versions/2.7/guides/stylesheet/config.css deleted file mode 100644 index f7b70fabae1..00000000000 --- a/_versions/2.7/guides/stylesheet/config.css +++ /dev/null @@ -1,159 +0,0 @@ -table.configuration-reference.tableblock { - border-collapse: separate; - border-spacing: 1px; - border: none; -} - -table.configuration-reference.tableblock span.icon { - color: #0D1C2C; -} - -table.configuration-reference.tableblock > thead > tr > th.tableblock { - background-color: 
transparent; - border: none; - color: white; - font-weight: bold; -} - -table.configuration-reference.tableblock > tbody > tr:nth-child(even) > th { - background: transparent; -} - -table.configuration-reference.tableblock > tbody > tr > th { - background-color: transparent; - font-size: 1rem; - height: 60px; - border: none; - border-bottom: 1px solid #4695eb; - vertical-align: bottom; -} - -table.configuration-reference.tableblock > tbody > tr:first-child > th { - height: 30px; -} -table.configuration-reference.tableblock > tbody > tr > th:nth-child(2), -table.configuration-reference.tableblock > tbody > tr > th:nth-child(3), -table.configuration-reference.tableblock > tbody > tr > td:nth-child(2), -table.configuration-reference.tableblock > tbody > tr > td:nth-child(3) { - text-align: right; -} - -table.configuration-reference.tableblock > tbody > tr > th:nth-child(2) p, -table.configuration-reference.tableblock > tbody > tr > th:nth-child(3) p { - font-weight: normal; - color: black; -} - -table.configuration-reference.tableblock > tbody > tr > th > p { - font-weight: bold; -} - -table.configuration-reference.tableblock > tbody > tr > td { - padding-left: 30px; - border: none; -} - -table.configuration-reference.tableblock > tbody > tr > td > .content > .paragraph .icon { - margin-left: -19px; - margin-top: 5px; - float: left; -} - -table.configuration-reference.tableblock .hidden { - display: none; -} - -table.configuration-reference.tableblock .configuration-highlight { - background-color: #4695eb; - color: black; -} - -table.configuration-reference.tableblock caption { - color: inherit; -} - -table.configuration-reference.tableblock .configuration-legend input { - float: center; -} - -table.configuration-reference.tableblock .description-collapsed { - height: 19px; - overflow: hidden; -} - -table.configuration-reference.tableblock .description-decoration { - height: 10px; - margin: 0px; - padding: 0px; - text-align: center; - - cursor: pointer; -} - 
-table.configuration-reference.tableblock .description-decoration i { - margin-right: 5px; -} - -table.configuration-reference.tableblock a.link-collapsible { - float: right; -} - -table.configuration-reference.tableblock a.link-collapsible i.fa { - margin: 0 4px; - font-size: 8px; -} - -table.configuration-reference.tableblock tr.row-collapsible td { - cursor: pointer; -} - -table.configuration-reference.tableblock td.tableblock > .content > :last-child { - margin-bottom: inherit; -} - -input#config-search-0 { - -webkit-appearance: none; - display: block; - width: 100%; - margin-top: 10px; - padding: 12px 24px; - transition: transform 250ms ease-in-out; - font-size: 14px; - line-height: 18px; - color: color(#4695eb a(0.8)); - background-color: transparent; - background-image: url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath stroke='white' fill='white' d='M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z'/%3E%3Cpath d='M0 0h24v24H0z' fill='none'/%3E%3C/svg%3E"); - background-repeat: no-repeat; - background-size: 18px 18px; - background-position: 98% center; - border: 1px solid #4695eb; - transition: all 250ms ease-in-out; - backface-visibility: hidden; - transform-style: preserve-3d; - letter-spacing: 1.5px; -} - -input#config-search-0:hover, -input#config-search-0:focus { - padding: 12px 2px; - outline: 0; - border: 1px solid transparent; - border-bottom: 1px dashed #4695eb; - border-radius: 0; - background-image: none; -} - -table.configuration-reference .configuration-legend p, -table.configuration-reference .configuration-legend input, -table.configuration-reference p, -table.configuration-reference pre code { - font-size: 1.2rem; -} - -table.configuration-reference.tableblock 
.description-decoration span, -table.configuration-reference.tableblock .description-decoration i { - font-size: 0.7rem; - color: #4695eb; -} - -.configuration-legend span.icon { color: #0D1C2C; } diff --git a/_versions/2.7/guides/tests-with-coverage.adoc b/_versions/2.7/guides/tests-with-coverage.adoc deleted file mode 100644 index 07a0660ce3f..00000000000 --- a/_versions/2.7/guides/tests-with-coverage.adoc +++ /dev/null @@ -1,456 +0,0 @@ - -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Measuring the coverage of your tests - -include::./attributes.adoc[] - -:toc: macro -:toclevels: 4 -:doctype: book -:icons: font -:docinfo1: - -:numbered: -:sectnums: -:sectnumlevels: 4 - - -Learn how to measure the test coverage of your application. This guide covers: - -* Measuring the coverage of your Unit Tests -* Measuring the coverage of your Integration Tests -* Separating the execution of your Unit Tests and Integration Tests -* Consolidating the coverage for all your tests - -Please note that code coverage is not supported in native mode. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] -* Having completed the xref:getting-started-testing.adoc[Testing your application guide] - -== Architecture - -The application built in this guide is just a JAX-RS endpoint (hello world) that relies on dependency injection to use a service. -The service will be tested with JUnit 5 and the endpoint will be annotated via a `@QuarkusTest` annotation. - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. However, you can go right to the completed example. -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. 
- -The solution is located in the `tests-with-coverage-quickstart` {quickstarts-tree-url}/tests-with-coverage-quickstart[directory]. - -== Starting from a simple project and two tests - -Let's start from an empty application created with the Quarkus Maven plugin: - -:create-app-artifact-id: tests-with-coverage-quickstart -:create-app-extensions: resteasy -include::includes/devtools/create-app.adoc[] - -Now we'll be adding all the elements necessary to have an application that is properly covered with tests. - -First, a JAX-RS resource serving a hello endpoint: - -[source,java] ----- -package org.acme.testcoverage; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.PathParam; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/hello") -public class GreetingResource { - - private final GreetingService service; - - @Inject - public GreetingResource(GreetingService service) { - this.service = service; - } - - @GET - @Produces(MediaType.TEXT_PLAIN) - @Path("/greeting/{name}") - public String greeting(@PathParam("name") String name) { - return service.greeting(name); - } - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello() { - return "hello"; - } -} ----- - -This endpoint uses a greeting service: - -[source,java] ----- -package org.acme.testcoverage; - -import javax.enterprise.context.ApplicationScoped; - -@ApplicationScoped -public class GreetingService { - - public String greeting(String name) { - return "hello " + name; - } - -} ----- - -The project will also need a test: - -[source,java] ----- -package org.acme.testcoverage; - -import io.quarkus.test.junit.QuarkusTest; -import org.junit.jupiter.api.Test; -import org.junit.jupiter.api.Tag; - -import java.util.UUID; - -import static io.restassured.RestAssured.given; -import static org.hamcrest.CoreMatchers.is; - -@QuarkusTest -public class GreetingResourceTest { - - @Test - public void testHelloEndpoint() { - given() - 
  .when().get("/hello")
-          .then()
-             .statusCode(200)
-             .body(is("hello"));
-    }
-
-    @Test
-    public void testGreetingEndpoint() {
-        String uuid = UUID.randomUUID().toString();
-        given()
-          .pathParam("name", uuid)
-          .when().get("/hello/greeting/{name}")
-          .then()
-             .statusCode(200)
-             .body(is("hello " + uuid));
-    }
-}
-----
-
-== Setting up Jacoco
-
-Now we need to add Jacoco to our project. To do this, add the following to the build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-jacoco</artifactId>
-    <scope>test</scope>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-testImplementation("io.quarkus:quarkus-jacoco")
-----
-
-This Quarkus extension takes care of everything that would usually be done via the Jacoco Maven plugin, so no additional
-config is required.
-
-WARNING: Using both the extension and the plugin requires special configuration; if you add both, you will get lots of errors about classes
-already being instrumented. The configuration needed is detailed below.
-
-== Running the tests with coverage
-
-Run `mvn verify`; the tests will be run and the results will end up in `target/jacoco-reports`. This is all that is needed:
-the `quarkus-jacoco` extension allows Jacoco to just work out of the box.
-
-There are some config options that affect this:
-
-include::{generated-dir}/config/quarkus-jacoco-jacoco-config.adoc[opts=optional, leveloffset=+1]
-
-== Coverage for tests not using @QuarkusTest
-
-The Quarkus automatic Jacoco config will only work for tests that are annotated with `@QuarkusTest`. If you want to check
-the coverage of other tests as well, you will need to fall back to the Jacoco Maven plugin. 
-
-In addition to including the `quarkus-jacoco` extension in your `pom.xml`, you will need the following config:
-
-[role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-****
-[source,xml]
-----
-<build>
-    <plugins>
-        ...
-        <plugin>
-            <groupId>org.jacoco</groupId>
-            <artifactId>jacoco-maven-plugin</artifactId>
-            <executions>
-                <execution>
-                    <id>default-prepare-agent</id>
-                    <goals>
-                        <goal>prepare-agent</goal>
-                    </goals>
-                    <configuration>
-                        <exclClassLoaders>*QuarkusClassLoader</exclClassLoaders> <1>
-                        <destFile>${project.build.directory}/jacoco-quarkus.exec</destFile>
-                        <append>true</append>
-                    </configuration>
-                </execution>
-                <execution>
-                    <id>default-prepare-agent-integration</id> <2>
-                    <goals>
-                        <goal>prepare-agent-integration</goal>
-                    </goals>
-                    <configuration>
-                        <exclClassLoaders>*QuarkusClassLoader</exclClassLoaders>
-                        <destFile>${project.build.directory}/jacoco-quarkus.exec</destFile>
-                        <append>true</append>
-                    </configuration>
-                </execution>
-            </executions>
-        </plugin>
-    </plugins>
-</build>
-----
-<1> This config tells it to ignore `@QuarkusTest`-related classes, as they are loaded by `QuarkusClassLoader`
-<2> This is only needed if you are using Failsafe to run integration tests
-****
-
-[role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-****
-[source,gradle,subs=attributes+]
-----
-plugins {
-    id 'jacoco' <1>
-}
-
-test {
-    finalizedBy jacocoTestReport
-    jacoco {
-        excludeClassLoaders = ["*QuarkusClassLoader"] <2>
-        destinationFile = layout.buildDirectory.file("jacoco-quarkus.exec").get().asFile <2>
-    }
-    jacocoTestReport.enabled = false <3>
-}
-----
-<1> Add the `jacoco` Gradle plugin
-<2> This config tells it to ignore `@QuarkusTest`-related classes, as they are loaded by `QuarkusClassLoader`
-<3> Set this config to `false` if you are also using the `quarkus-jacoco` extension and have at least one `@QuarkusTest`. The default `jacocoTestReport` task can be skipped, since `quarkus-jacoco` will generate the combined report of regular unit tests and `@QuarkusTest` classes: the execution data is recorded in the same file.
-****
-
-WARNING: This config will only work if at least one `@QuarkusTest` is being run. If you are not using `@QuarkusTest`, then
-you can simply use the Jacoco plugin in the standard manner with no additional config. 
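To make the distinction concrete, here is the kind of test this section is about: a plain JUnit 5 test that never starts Quarkus. The class name is hypothetical; it exercises the `GreetingService` from the earlier example, and its coverage is only picked up through the `jacoco-maven-plugin` configuration above, not through the automatic `quarkus-jacoco` instrumentation.

```java
package org.acme.testcoverage;

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// A plain unit test: no @QuarkusTest annotation, so the application is not
// started and the quarkus-jacoco extension does not instrument this run.
public class GreetingServiceUnitTest {

    @Test
    void greetingShouldPrependHello() {
        GreetingService service = new GreetingService();
        assertEquals("hello Bill", service.greeting("Bill"));
    }
}
```

Such a test runs with Surefire like any other; only the coverage recording path differs.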
-
-=== Coverage for Integration Tests
-
-To get code coverage data from integration tests, the following requirements need to be met:
-
-* The built artifact is a jar (and not a container or native binary).
-* Jacoco needs to be configured in your build tool.
-* The application must have been built with `quarkus.package.write-transformed-bytecode-to-build-output` set to `true`.
-
-WARNING: Setting `quarkus.package.write-transformed-bytecode-to-build-output=true` should be done with caution, and only if subsequent builds are done in a clean environment, i.e. the build tool's output directory has been completely cleaned.
-
-In the `pom.xml`, you can add the following plugin configuration for Jacoco. This will append integration test data into the same destination file as unit tests,
-re-build the Jacoco report after the integration tests are complete, and thus produce a comprehensive code-coverage report.
-
-[source, xml]
-----
-<project>
-    ...
-    <build>
-        <plugins>
-            ...
-            <plugin>
-                <groupId>org.jacoco</groupId>
-                <artifactId>jacoco-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>default-prepare-agent-integration</id>
-                        <goals>
-                            <goal>prepare-agent-integration</goal>
-                        </goals>
-                        <configuration>
-                            <destFile>${project.build.directory}/jacoco-quarkus.exec</destFile>
-                            <append>true</append>
-                        </configuration>
-                    </execution>
-                    <execution>
-                        <id>report-it</id>
-                        <phase>post-integration-test</phase>
-                        <goals>
-                            <goal>report</goal>
-                        </goals>
-                        <configuration>
-                            <dataFile>${project.build.directory}/jacoco-quarkus.exec</dataFile>
-                            <outputDirectory>${project.build.directory}/jacoco-report</outputDirectory>
-                        </configuration>
-                    </execution>
-                </executions>
-            </plugin>
-            ...
-        </plugins>
-    </build>
-    ...
-</project>
-----
-
-In order to run the integration tests as a jar with the Jacoco agent, add the following to your `pom.xml`:
-
-[source, xml]
-----
-<project>
-    ...
-    <build>
-        <plugins>
-            ...
-            <plugin>
-                <artifactId>maven-failsafe-plugin</artifactId>
-                <version>${surefire-plugin.version}</version>
-                <executions>
-                    <execution>
-                        <goals>
-                            <goal>integration-test</goal>
-                            <goal>verify</goal>
-                        </goals>
-                        <configuration>
-                            <systemPropertyVariables>
-                                <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
-                                <maven.home>${maven.home}</maven.home>
-                                <quarkus.test.arg-line>${argLine}</quarkus.test.arg-line>
-                            </systemPropertyVariables>
-                        </configuration>
-                    </execution>
-                </executions>
-            </plugin>
-            ...
-        </plugins>
-    </build>
-    ...
-</project>
-----
-
-WARNING: Sharing the same value for `quarkus.test.arg-line` might break integration test runs that test different types of Quarkus artifacts. In such cases, the use of Maven profiles is advised.
-
-== Setting coverage thresholds
-
-You can set thresholds for code coverage using the Jacoco Maven plugin. 
Note the `<dataFile>` element, `${project.build.directory}/jacoco-quarkus.exec`.
-You must set it to match your choice for `quarkus.jacoco.data-file`.
-
-[role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-****
-[source,xml]
-----
-<build>
-    <plugins>
-        ...
-        <plugin>
-            <groupId>org.jacoco</groupId>
-            <artifactId>jacoco-maven-plugin</artifactId>
-            <version>${jacoco.version}</version>
-            <executions>
-                <execution>
-                    <id>jacoco-check</id>
-                    <goals>
-                        <goal>check</goal>
-                    </goals>
-                    <phase>test</phase>
-                    <configuration>
-                        <dataFile>${project.build.directory}/jacoco-quarkus.exec</dataFile>
-                        <rules>
-                            <rule>
-                                <element>BUNDLE</element>
-                                <limits>
-                                    <limit>
-                                        <counter>LINE</counter>
-                                        <value>COVEREDRATIO</value>
-                                        <minimum>0.8</minimum>
-                                    </limit>
-                                    <limit>
-                                        <counter>BRANCH</counter>
-                                        <value>COVEREDRATIO</value>
-                                        <minimum>0.72</minimum>
-                                    </limit>
-                                </limits>
-                            </rule>
-                        </rules>
-                    </configuration>
-                </execution>
-            </executions>
-        </plugin>
-        ...
-    </plugins>
-</build>
-----
-****
-
-[role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-****
-[source, gradle]
-----
-jacocoTestCoverageVerification {
-    executionData.setFrom("$project.buildDir/jacoco-quarkus.exec")
-    violationRules {
-        rule {
-            limit {
-                counter = 'INSTRUCTION'
-                value = 'COVEREDRATIO'
-                minimum = 0.80
-            }
-            limit {
-                counter = 'BRANCH'
-                value = 'COVEREDRATIO'
-                minimum = 0.72
-            }
-        }
-    }
-}
-check.dependsOn jacocoTestCoverageVerification
-----
-
-Excluding classes from the verification task can be configured as follows:
-
-[source,gradle]
-----
-jacocoTestCoverageVerification {
-    afterEvaluate { <1>
-        classDirectories.setFrom(files(classDirectories.files.collect { <2>
-            fileTree(dir: it, exclude: [
-                "org/example/package/**/*" <3>
-            ])
-        }))
-    }
-}
-----
-<1> `classDirectories` needs to be read after the evaluation phase in Gradle
-<2> Currently, there is a bug in Gradle JaCoCo which requires the `excludes` to be specified in this manner - https://github.com/gradle/gradle/issues/14760. Once this issue is fixed, excludes will be configurable directly on the task.
-<3> Exclude all classes in the `org/example/package` package
-****
-
-== Conclusion
-
-You now have all the information you need to study the coverage of your tests!
-But remember: code that is not covered is certainly not well tested, and code that is covered is not necessarily *well* tested. Make sure to write good tests! 
diff --git a/_versions/2.7/guides/tooling.adoc b/_versions/2.7/guides/tooling.adoc
deleted file mode 100644
index 534b118be5d..00000000000
--- a/_versions/2.7/guides/tooling.adoc
+++ /dev/null
@@ -1,34 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using our Tooling
-
-include::./attributes.adoc[]
-
-Quarkus comes with a toolchain that supports developers all the way from live reload to deploying a Kubernetes application. In addition, there are plugins and extensions for all major IDEs.
-
-In this guide, we will explore:
-
-* how to use xref:maven-tooling.adoc[Maven] as a build tool
-* how to use xref:gradle-tooling.adoc[Gradle] as a build tool
-* how to use the xref:cli-tooling.adoc[CLI] for your toolchain
-* how to create and scaffold a new project
-* how to deal with extensions
-* how to enable live reload
-* how to develop your application in your IDE
-* how to compile your application natively
-* how to set up Quarkus tools in xref:ide-tooling.adoc[Visual Studio Code, Eclipse IDE, Eclipse Che and IntelliJ]
-
-[[build-tool]]
-== Choosing your build tool
-
-Quarkus comes with a toolchain to help you at all development stages.
-You can use Maven or Gradle as your build tool.
-And we offer a CLI that is convenient to use (coming soon). 
-
-* xref:maven-tooling.adoc[Maven]
-* xref:gradle-tooling.adoc[Gradle]
-* xref:cli-tooling.adoc[CLI]
-* xref:ide-tooling.adoc[IDE]
diff --git a/_versions/2.7/guides/transaction.adoc b/_versions/2.7/guides/transaction.adoc
deleted file mode 100644
index 8d98c8f7f01..00000000000
--- a/_versions/2.7/guides/transaction.adoc
+++ /dev/null
@@ -1,247 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-= Using Transactions in Quarkus
-
-include::./attributes.adoc[]
-
-Quarkus comes with a Transaction Manager and uses it to coordinate and expose transactions to your applications.
-Each extension dealing with persistence will integrate with it for you,
-and you will explicitly interact with transactions via CDI.
-This guide will walk you through all that.
-
-== Setting it up
-
-You don't need to worry about setting it up most of the time, as extensions needing it will simply add it as a dependency.
-Hibernate ORM, for example, will include the transaction manager and set it up properly.
-
-You might need to add it as a dependency explicitly if you are using transactions directly, without Hibernate ORM for example.
-Add the following to your build file:
-
-[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
-.pom.xml
-----
-<dependency>
-    <groupId>io.quarkus</groupId>
-    <artifactId>quarkus-narayana-jta</artifactId>
-</dependency>
-----
-
-[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
-.build.gradle
-----
-implementation("io.quarkus:quarkus-narayana-jta")
-----
-
-== Starting and stopping transactions: defining your boundaries
-
-You can define your transaction boundaries the easy way, or the less easy way :)
-
-=== Declarative approach
-
-The easiest way to define your transaction boundaries is to use the `@Transactional` annotation on your entry method (`javax.transaction.Transactional`). 
-
-[source,java]
-----
-@ApplicationScoped
-public class SantaClausService {
-
-    @Inject ChildDAO childDAO;
-    @Inject SantaClausDAO santaDAO;
-
-    @Transactional // <1>
-    public void getAGiftFromSanta(Child child, String giftDescription) {
-        // some transaction work
-        Gift gift = childDAO.addToGiftList(child, giftDescription);
-        if (gift == null) {
-            throw new OMGGiftNotRecognizedException(); // <2>
-        }
-        else {
-            santaDAO.addToSantaTodoList(gift);
-        }
-    }
-}
-----
-
-<1> This annotation defines your transaction boundaries and will wrap this call within a transaction.
-<2> A `RuntimeException` crossing the transaction boundaries will roll back the transaction.
-
-`@Transactional` can be used to control transaction boundaries on any CDI bean at the method level or at the class level to ensure every method is transactional.
-That includes REST endpoints.
-
-You can control whether and how the transaction is started with parameters on `@Transactional`:
-
-* `@Transactional(REQUIRED)` (default): starts a transaction if none was started; stays with the existing one otherwise.
-* `@Transactional(REQUIRES_NEW)`: starts a transaction if none was started; if an existing one was started, suspends it and starts a new one for the boundary of that method.
-* `@Transactional(MANDATORY)`: fails if no transaction was started; works within the existing transaction otherwise.
-* `@Transactional(SUPPORTS)`: if a transaction was started, joins it; otherwise works with no transaction.
-* `@Transactional(NOT_SUPPORTED)`: if a transaction was started, suspends it and works with no transaction for the boundary of the method; otherwise works with no transaction.
-* `@Transactional(NEVER)`: if a transaction was started, raises an exception; otherwise works with no transaction.
-
-`REQUIRED` or `NOT_SUPPORTED` are probably the most useful ones.
-This is how you decide whether a method is to run within or outside a transaction. 
-Make sure to check the JavaDoc for the precise semantics.
-
-The transaction context is propagated to all calls nested in the `@Transactional` method as you would expect (in this example `childDAO.addToGiftList()` and `santaDAO.addToSantaTodoList()`).
-The transaction will commit unless a runtime exception crosses the method boundary.
-You can override whether an exception forces the rollback or not by using `@Transactional(dontRollbackOn=SomeException.class)` (or `rollbackOn`).
-
-You can also programmatically ask for a transaction to be marked for rollback.
-Inject a `TransactionManager` for this.
-
-[source,java]
-----
-@ApplicationScoped
-public class SantaClausService {
-
-    @Inject TransactionManager tm; // <1>
-    @Inject ChildDAO childDAO;
-    @Inject SantaClausDAO santaDAO;
-
-    @Transactional
-    public void getAGiftFromSanta(Child child, String giftDescription) {
-        // some transaction work
-        Gift gift = childDAO.addToGiftList(child, giftDescription);
-        if (gift == null) {
-            tm.setRollbackOnly(); // <2>
-        }
-        else {
-            santaDAO.addToSantaTodoList(gift);
-        }
-    }
-}
-----
-
-<1> Inject the `TransactionManager` to be able to activate the `setRollbackOnly` semantics.
-<2> Programmatically decide to mark the transaction for rollback.
-
-
-=== Transaction Configuration
-
-Advanced configuration of the transaction is possible with the `@TransactionConfiguration` annotation, which is set in addition to the standard `@Transactional` annotation on your entry method or at the class level.
-
-The `@TransactionConfiguration` annotation allows you to set a timeout property, in seconds, that applies to transactions created within the annotated method.
-
-This annotation may only be placed on the top-level method delineating the transaction.
-Annotated nested methods will throw an exception if invoked once a transaction has already started.
-
-If defined on a class, it is equivalent to defining it on all the methods of the class marked with `@Transactional`. 
-The configuration defined on a method takes precedence over the configuration defined on a class.
-
-=== Reactive extensions
-
-If your `@Transactional`-annotated method returns a reactive value, such as:
-
-- `CompletionStage` (from the JDK)
-- `Publisher` (from Reactive-Streams)
-- Any type which can be converted to one of the two previous types using Reactive Type Converters
-
-then the behaviour is a bit different, because the transaction will not be terminated until the
-returned reactive value is terminated. In effect, the returned reactive value is listened to:
-if it terminates exceptionally, the transaction is marked for rollback, and it is committed
-or rolled back only at the termination of the reactive value.
-
-This allows your reactive methods to keep working on the transaction asynchronously until their
-work is really done, and not just until the reactive method returns.
-
-If you need to propagate your transaction context across your reactive pipeline, please see the
-xref:context-propagation.adoc[Context Propagation guide].
-
-=== API approach
-
-The less easy way is to inject a `UserTransaction` and use the various transaction demarcation methods.
-
-[source,java]
-----
-@ApplicationScoped
-public class SantaClausService {
-
-    @Inject ChildDAO childDAO;
-    @Inject SantaClausDAO santaDAO;
-    @Inject UserTransaction transaction;
-
-    public void getAGiftFromSanta(Child child, String giftDescription) {
-        // some transaction work
-        try {
-            transaction.begin();
-            Gift gift = childDAO.addToGiftList(child, giftDescription);
-            santaDAO.addToSantaTodoList(gift);
-            transaction.commit();
-        }
-        catch(SomeException e) {
-            // do something on Tx failure
-            transaction.rollback();
-        }
-    }
-}
-----
-
-[NOTE]
-====
-You cannot use `UserTransaction` in a method where a transaction was started by a `@Transactional` call. 
-====
-
-== Configuring the transaction timeout
-You can configure the default transaction timeout, the timeout that applies to all transactions managed by the transaction manager, via the property `quarkus.transaction-manager.default-transaction-timeout`, specified as a duration.
-
-include::duration-format-note.adoc[]
-
-The default value is 60 seconds.
-
-== Configuring the transaction node name identifier
-
-Narayana, as the underlying transaction manager, has a concept of a unique node identifier.
-This is important if you consider using XA transactions that involve multiple resources.
-
-The node name identifier plays a crucial part in the identification of a transaction.
-The node name identifier is forged into the transaction id when the transaction is created.
-Based on the node name identifier, the transaction manager is capable of recognizing the XA transaction
-counterparts created in the database or JMS broker. The identifier makes it possible for the transaction manager
-to roll back the transaction counterparts during recovery.
-
-The node name identifier needs to be unique per transaction manager deployment,
-and it needs to be stable across transaction manager restarts.
-
-The node name identifier may be configured via the property `quarkus.transaction-manager.node-name`.
-
-== Why always have a transaction manager?
-
-Does it work everywhere I want to?::
-
-Yep, it works in your Quarkus application, in your IDE, in your tests, because all of these are Quarkus applications.
-JTA has some bad press for some people.
-I don't know why.
-Let's just say that this is not your grandpa's JTA implementation.
-What we have is perfectly embeddable and lean.
-
-Does it do two-phase commit and slow down my app?::
-
-No, this is an old folk tale.
-Let's assume it essentially comes for free and lets you scale to more complex cases involving several datasources as needed.
-
-I don't need transactions when I do read-only operations; it's faster.::
-
-Wrong. 
+
-First off, just disable the transaction by marking your transaction boundary with `@Transactional(NOT_SUPPORTED)` (or `NEVER` or `SUPPORTS` depending on the semantics you want). +
-Second, it's again a fairy tale that not using transactions is faster.
-The answer is: it depends on your DB and how many SQL SELECTs you are making.
-No transaction means the DB still has a single-operation transaction context anyway. +
-Third, when you do several SELECTs, it's better to wrap them in a single transaction because they will all be consistent with one another.
-Say your DB represents your car dashboard: you can see the number of kilometers remaining and the fuel gauge level.
-By reading both in one transaction, they will be consistent.
-If you read one and the other from two different transactions, they can be inconsistent.
-It can be more dramatic if you read data related to rights and access management, for example.
-
-Why do you prefer JTA vs Hibernate's transaction management API?::
-
-Managing transactions manually via `entityManager.getTransaction().begin()` and friends leads to butt-ugly code with tons of try/catch/finally blocks that people get wrong.
-Transactions are also about JMS and other database access, so one API makes more sense.
-
-It's a mess because I don't know if my JPA persistence unit is using `JTA` or `Resource-level` Transaction::
-
-It's not a mess in Quarkus :)
-Resource-level was introduced to support JPA in a non-managed environment.
-But Quarkus is both lean and a managed environment, so we can safely always assume we are in JTA mode.
-The end result is that the difficulties of running Hibernate ORM + CDI + a transaction manager in Java SE mode are solved by Quarkus. 
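To close the loop on the Transaction Configuration section above, here is a sketch of a timeout in practice. This is illustrative, not taken from the guide: the bean and method names are made up, and `io.quarkus.narayana.jta.runtime.TransactionConfiguration` is assumed to be the annotation's location; double-check the package against your Quarkus version.

```java
package org.acme;

import javax.enterprise.context.ApplicationScoped;
import javax.transaction.Transactional;

import io.quarkus.narayana.jta.runtime.TransactionConfiguration;

@ApplicationScoped
public class ReportService {

    // The transaction wrapping this method is limited to 10 seconds;
    // if it runs longer, the transaction manager aborts it and the
    // transaction is rolled back.
    @Transactional
    @TransactionConfiguration(timeout = 10)
    public void generateYearlyReport() {
        // long-running transactional work
    }
}
```

Remember the rule stated earlier: the annotation goes on the top-level method delineating the transaction, not on nested methods.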
diff --git a/_versions/2.7/guides/upx.adoc b/_versions/2.7/guides/upx.adoc
deleted file mode 100644
index ec3c32a767b..00000000000
--- a/_versions/2.7/guides/upx.adoc
+++ /dev/null
@@ -1,72 +0,0 @@
-////
-This guide is maintained in the main Quarkus repository
-and pull requests should be submitted there:
-https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
-////
-
-= Compressing native executables using UPX
-
-include::./attributes.adoc[]
-
-https://upx.github.io/[Ultimate Packer for eXecutables (UPX)] is a compression tool that reduces the size of executables.
-Quarkus can compress the produced native executable to reduce its size.
-Such compression is interesting when:
-
-* building CLI tools, and you want to reduce the disk footprint,
-* building small container images.
-
-Note that UPX compression:
-
-1. increases your build time, especially if you use high compression levels
-2. increases the startup RSS usage of the application
-
-== System vs. Container
-
-The UPX compression requires:
-
-* the `upx` command to be available in the system `PATH`;
-* or to have built the native executable using an in-container build.
-
-If you have the `upx` command available on your `PATH`, Quarkus uses it.
-Otherwise, if you built the native image using an in-container build (using `quarkus.native.container-build=true`) and if the builder image provides the `upx` command, Quarkus compresses the executable from inside the container.
-
-If you are not in one of these cases, the compression fails.
-
-[IMPORTANT]
-.upx is cross-platform.
-====
-`upx` can compress executables using a different architecture and OS than your host machine. For example, `upx` on a macOS machine can compress a Linux 64-bit executable. 
-==== - -== Configuring the UPX compression - -Then, in your application configuration, enable the compression by configuring the _compression level_ you want: - -[source, properties] ----- -quarkus.native.compression.level=5 ----- - -If the compression level is not set, the compression is disabled. -The compression will happen once the native executable is built and will replace the executable. - -== Compression level - -The compression level goes from 1 to 10: - -* `1`: faster compression -* `9`: better compression -* `10`: best compression (can be slow for big files) - -== Extra parameters - -You can pass extra parameter to upx, such as `--brute` or `--ultra-brute` using the `quarkus.native.compression.additional-args` parameter. -The value is a comma-separated list of arguments: - -[source, properties] ----- -quarkus.native.compression.level=3 -quarkus.native.compression.additional-args=--ultra-brute,-v ----- - -The exhaustive list of parameters can be found in https://github.com/upx/upx/blob/devel/doc/upx.pod[the UPX documentation]. \ No newline at end of file diff --git a/_versions/2.7/guides/validation.adoc b/_versions/2.7/guides/validation.adoc deleted file mode 100644 index 87e5a610fc7..00000000000 --- a/_versions/2.7/guides/validation.adoc +++ /dev/null @@ -1,439 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Validation with Hibernate Validator - -include::./attributes.adoc[] - -This guide covers how to use Hibernate Validator/Bean Validation for: - - * validating the input/output of your REST services; - * validating the parameters and return values of the methods of your business services. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -The application built in this guide is quite simple. The user fills a form on a web page. 
-The web page sends the form content to the `BookResource` as JSON (using Ajax). The `BookResource` validates the user input and returns the -_result_ as JSON. - -image:validation-guide-architecture.png[alt=Architecture] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `validation-quickstart` {quickstarts-tree-url}/validation-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: validation-quickstart -:create-app-extensions: resteasy,resteasy-jackson,hibernate-validator -include::includes/devtools/create-app.adoc[] - -This command generates a Maven structure importing the RESTEasy/JAX-RS, Jackson and Hibernate Validator/Bean Validation extensions. - -If you already have your Quarkus project configured, you can add the `hibernate-validator` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: hibernate-validator -include::includes/devtools/extension-add.adoc[] - -The result of this command is dependent on your build tool: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.quarkus</groupId> -    <artifactId>quarkus-hibernate-validator</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-hibernate-validator") ----- - -== Constraints - -In this application, we are going to test an elementary object, but we support complicated constraints and can validate graphs of objects.
-Create the `org.acme.validation.Book` class with the following content: - -[source, java] ----- -package org.acme.validation; - -import javax.validation.constraints.NotBlank; -import javax.validation.constraints.Min; - -public class Book { - - @NotBlank(message="Title may not be blank") - public String title; - - @NotBlank(message="Author may not be blank") - public String author; - - @Min(message="Author has been very lazy", value=1) - public double pages; -} ----- - -Constraints are added on fields, and when an object is validated, the values are checked. -The getter and setter methods are also used for JSON mapping. - -== JSON mapping and validation - -Create the following REST resource as `org.acme.validation.BookResource`: - -[source,java] ----- -package org.acme.validation; - -import java.util.Set; - -import javax.inject.Inject; -import javax.validation.ConstraintViolation; -import javax.validation.Validator; -import javax.ws.rs.POST; -import javax.ws.rs.Path; - -@Path("/books") -public class BookResource { - - @Inject - Validator validator; <1> - - @Path("/manual-validation") - @POST - public Result tryMeManualValidation(Book book) { - Set<ConstraintViolation<Book>> violations = validator.validate(book); - if (violations.isEmpty()) { - return new Result("Book is valid! It was validated by manual validation."); - } else { - return new Result(violations); - } - } -} ----- -<1> The `Validator` instance is injected via CDI. - -Yes, it does not compile: `Result` is missing, but we will add it very soon. - -The method parameter (`book`) is created from the JSON payload automatically. - -The method uses the `Validator` instance to check the payload. -It returns a set of violations. -If this set is empty, it means the object is valid. -In case of failures, the messages are concatenated and sent back to the browser.
- -Let's now create the `Result` class as an inner class: - -[source, java] ----- -public static class Result { - - Result(String message) { - this.success = true; - this.message = message; - } - - Result(Set<ConstraintViolation<Book>> violations) { - this.success = false; - this.message = violations.stream() - .map(cv -> cv.getMessage()) - .collect(Collectors.joining(", ")); - } - - private String message; - private boolean success; - - public String getMessage() { - return message; - } - - public boolean isSuccess() { - return success; - } - -} ----- - -The class is very simple and only contains two fields and the associated getters. -Because we indicate that we produce JSON, the mapping to JSON is made automatically. - -== REST end point validation - -While using the `Validator` manually might be useful for some advanced usage, -if you simply want to validate the parameters or the return value of your REST end point, -you can annotate it directly, either with constraints (`@NotNull`, `@Digits`...) -or with `@Valid` (which will cascade the validation to the bean). - -Let's create an end point validating the `Book` provided in the request: - -[source, java] ----- -@Path("/end-point-method-validation") -@POST -@Produces(MediaType.APPLICATION_JSON) -@Consumes(MediaType.APPLICATION_JSON) -public Result tryMeEndPointMethodValidation(@Valid Book book) { - return new Result("Book is valid! It was validated by end point method validation."); -} ----- - -As you can see, we don't have to manually validate the provided `Book` anymore as it is automatically validated. - -If a validation error is triggered, a violation report is generated and serialized as JSON as our end point produces a JSON output. -It can be extracted and manipulated to display a proper error message.
- -The best option is then to annotate a method of your business service with your constraints (or in our particular case with `@Valid`): - -[source, java] ----- -package org.acme.validation; - -import javax.enterprise.context.ApplicationScoped; -import javax.validation.Valid; - -@ApplicationScoped -public class BookService { - - public void validateBook(@Valid Book book) { - // your business logic here - } -} ----- - -Calling the service in your rest end point triggers the `Book` validation automatically: - -[source, java] ----- -@Inject BookService bookService; - -@Path("/service-method-validation") -@POST -public Result tryMeServiceMethodValidation(Book book) { - try { - bookService.validateBook(book); - return new Result("Book is valid! It was validated by service method validation."); - } catch (ConstraintViolationException e) { - return new Result(e.getConstraintViolations()); - } -} ----- - -Note that, if you want to push the validation errors to the frontend, you have to catch the exception and push the information yourselves -as they will not be automatically pushed to the JSON output. - -Keep in mind that you usually don't want to expose to the public the internals of your services -- and especially not the validated value contained in the violation object. - -== A frontend - -Now let's add the simple web page to interact with our `BookResource`. -Quarkus automatically serves static resources contained in the `META-INF/resources` directory. -In the `src/main/resources/META-INF/resources` directory, replace the `index.html` file with the content from this {quickstarts-blob-url}/validation-quickstart/src/main/resources/META-INF/resources/index.html[index.html] file in it. - -== Run the application - -Now, let's see our application in action. Run it with: - -include::includes/devtools/dev.adoc[] - -Then, open your browser to http://localhost:8080/: - -1. Enter the book details (valid or invalid) -2. 
Click on the _Try me..._ buttons to check if your data is valid using one of the methods we presented above. - -image:validation-guide-screenshot.png[alt=Application] - -The application can be packaged using: - -include::includes/devtools/build.adoc[] - -and executed using `java -jar target/quarkus-app/quarkus-run.jar`. - -You can also build the native executable using: - -include::includes/devtools/build-native.adoc[] - -== Going further - -=== Hibernate Validator extension and CDI - -The Hibernate Validator extension is tightly integrated with CDI. - -==== Configuring the ValidatorFactory - -Sometimes, you might need to configure the behavior of the `ValidatorFactory`, for instance to use a specific `ParameterNameProvider`. - -While the `ValidatorFactory` is instantiated by Quarkus itself, -you can very easily tweak it by declaring replacement beans that will be injected in the configuration. - -If you create a bean of the following types in your application, it will automatically be injected into the `ValidatorFactory` configuration: - - * `javax.validation.ClockProvider` - * `javax.validation.ConstraintValidator` - * `javax.validation.ConstraintValidatorFactory` - * `javax.validation.MessageInterpolator` - * `javax.validation.ParameterNameProvider` - * `javax.validation.TraversableResolver` - * `org.hibernate.validator.spi.properties.GetterPropertySelectionStrategy` - * `org.hibernate.validator.spi.nodenameprovider.PropertyNodeNameProvider` - * `org.hibernate.validator.spi.scripting.ScriptEvaluatorFactory` - -You don't have to wire anything. - -[WARNING] -==== -Obviously, for each listed type, you can declare only one bean. - -Most of the time, these beans should be declared as `@ApplicationScoped`. 
- -However, in the case of ``ConstraintValidator``s that are dependent on attributes of the constraint annotation -(typically when implementing the `initialize(A constraintAnnotation)` method), -use the `@Dependent` scope to make sure each annotation context has a separate instance of the `ConstraintValidator` bean. -==== - -==== Constraint validators as beans - -You can declare your constraint validators as CDI beans: - -[source,java] ----- -@ApplicationScoped -public class MyConstraintValidator implements ConstraintValidator<MyConstraint, String> { - - @Inject - MyService service; - - @Override - public boolean isValid(String value, ConstraintValidatorContext context) { - if (value == null) { - return true; - } - - return service.validate(value); - } -} ----- - -When initializing a constraint validator of a given type, -Quarkus will check if a bean of this type is available and, if so, it will use it instead of instantiating one. - -Thus, as demonstrated in our example, you can fully use injection in your constraint validator beans. - -[NOTE] -==== -Except in very specific situations, it is recommended to make the said beans `@ApplicationScoped`. -==== - -=== Validation and localization - -By default, constraint violation messages will be returned in the build system locale.
- -You can configure this behavior by adding the following configuration in your `application.properties`: - -[source, properties] ----- -# The default locale to use -quarkus.default-locale=fr-FR ----- - -If you are using RESTEasy, in the context of a JAX-RS endpoint, Hibernate Validator will automatically resolve the optimal locale to use from the `Accept-Language` HTTP header, -provided the supported locales have been properly specified in the `application.properties`: - -[source, properties] ----- -# The list of all the supported locales -quarkus.locales=en-US,es-ES,fr-FR ----- - -=== Validation groups for REST endpoint or service method validation - -It's sometimes necessary to enable different validation constraints -for the same class when it's passed to a different method. - -For example, a `Book` may need to have a `null` identifier when passed to the `post` method -(because the identifier will be generated), -but a non-`null` identifier when passed to the `put` method -(because the method needs the identifier to know what to update). - -To address this, you can take advantage of validation groups. -Validation groups are markers that you put on your constraints in order to enable or disable them at will. - -First, define the `Post` and `Put` groups, which are just Java interfaces. - -[source, java] ----- -public interface ValidationGroups { - interface Post extends Default { // <1> - } - interface Put extends Default { // <1> - } -} ----- -<1> Make the custom groups extend the `Default` group. -This means that whenever these groups are enabled, the `Default` group is also enabled. -This is useful if you have constraints that you want validated in both the `Post` and `Put` method: -you can simply use the default group on those constraints, like on the `title` property below. 
- -Then add the relevant constraints to `Book`, assigning the right group to each constraint: - -[source, java] ----- -public class Book { - - @Null(groups = ValidationGroups.Post.class) - @NotNull(groups = ValidationGroups.Put.class) - public Long id; - - @NotBlank - public String title; - -} ----- - -Finally, add a `@ConvertGroup` annotation next to your `@Valid` annotation in your validated method. - -[source, java] ----- -@Path("/") -@POST -@Consumes(MediaType.APPLICATION_JSON) -public void post(@Valid @ConvertGroup(to = ValidationGroups.Post.class) Book book) { // <1> - // ... -} - -@Path("/") -@PUT -@Consumes(MediaType.APPLICATION_JSON) -public void put(@Valid @ConvertGroup(to = ValidationGroups.Put.class) Book book) { // <2> - // ... -} ----- -<1> Enable the `Post` group, meaning only constraints assigned to the `Post` (and `Default`) groups -will be validated for the `book` parameter of the `post` method. -In this case, it means `Book.id` must be `null` and `Book.title` must not be blank. -<2> Enable the `Put` group, meaning only constraints assigned to the `Put` (and `Default`) groups -will be validated for the `book` parameter of the `put` method. -In this case, it means `Book.id` must not be `null` and `Book.title` must not be blank. - -[[configuration-reference]] -== Hibernate Validator Configuration Reference - -include::{generated-dir}/config/quarkus-hibernate-validator.adoc[leveloffset=+1, opts=optional] diff --git a/_versions/2.7/guides/vertx-reference.adoc b/_versions/2.7/guides/vertx-reference.adoc deleted file mode 100644 index a12dfab3462..00000000000 --- a/_versions/2.7/guides/vertx-reference.adoc +++ /dev/null @@ -1,1040 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Vert.x Reference Guide - -include::./attributes.adoc[] - -https://vertx.io[Vert.x] is a toolkit for building reactive applications. 
-As described in the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture], Quarkus uses Vert.x underneath. - -This guide is the companion to the xref:vertx.adoc[Using Eclipse Vert.x API from a Quarkus Application] guide. -It provides more advanced details about the usage and the configuration of the Vert.x instance used by Quarkus. - - -[#vertx-access] -== Accessing the Vert.x instance - -To access the managed Vert.x instance, add the `quarkus-vertx` extension to your project. -Note that this dependency may already be installed (as a transitive dependency). - -With this extension, you can retrieve the managed instance of Vert.x using either field or constructor injection: - -[source, java] ----- -@ApplicationScoped -public class MyBean { - -    // Field injection -    @Inject Vertx vertx; - -    // Constructor injection -    MyBean(Vertx vertx) { -        // ... -    } -} ----- - -You can inject either the: - -* `io.vertx.core.Vertx` instance exposing the _bare_ Vert.x API -* `io.vertx.mutiny.core.Vertx` instance exposing the _Mutiny_ API - -We recommend using the Mutiny variant as it integrates with the other reactive APIs provided by Quarkus. - -[TIP] -.Mutiny -==== -If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library]. -==== - -Documentation about the Vert.x Mutiny variant is available at https://smallrye.io/smallrye-mutiny-vertx-bindings. - -[[vertx-config]] -== Configuring the Vert.x instance - -You can configure the Vert.x instance from the `application.properties` file. -The following table lists the supported properties: - -include::{generated-dir}/config/quarkus-vertx-core.adoc[opts=optional, leveloffset=+1] - - -[[using-vertx-clients]] -== Using Vert.x clients - -In addition to Vert.x core, you can use most Vert.x ecosystem libraries. -Some Quarkus extensions already wrap Vert.x libraries. - -=== Available APIs - -The following table lists the most used libraries from the Vert.x ecosystem.
-To access these APIs, add the indicated extension or dependency to your project. -Refer to the associated documentation to learn how to use them. - -[cols="1,1,1",stripes=even,options=headers] -|=== -|API -|Extension or Dependency -|Documentation - -|AMQP Client -|`io.quarkus:quarkus-smallrye-reactive-messaging-amqp` (extension) -|https://quarkus.io/guides/amqp - -|Circuit Breaker -|`io.smallrye.reactive:smallrye-mutiny-vertx-circuit-breaker` (external dependency) -|https://vertx.io/docs/vertx-circuit-breaker/java/ - -|Consul Client -|`io.smallrye.reactive:smallrye-mutiny-vertx-consul-client` (external dependency) -|https://vertx.io/docs/vertx-consul-client/java/ - -|DB2 Client -|`io.quarkus:quarkus-reactive-db2-client` (extension) -|https://quarkus.io/guides/reactive-sql-clients - -|Kafka Client -|`io.quarkus:quarkus-smallrye-reactive-messaging-kafka` (extension) -|https://quarkus.io/guides/kafka - -|Mail Client -|`io.quarkus:quarkus-mailer` (extension) -|https://quarkus.io/guides/mailer - -|MQTT Client -|`io.quarkus:quarkus-smallrye-reactive-messaging-mqtt` (extension) -|https://quarkus.io/guides/mqtt - -|MS SQL Client -|`io.quarkus:quarkus-reactive-mssql-client` (extension) -|https://quarkus.io/guides/reactive-sql-clients - -|MySQL Client -|`io.quarkus:quarkus-reactive-mysql-client` (extension) -|https://quarkus.io/guides/reactive-sql-clients - -|Oracle Client -|`io.quarkus:quarkus-reactive-oracle-client` (extension) -|https://quarkus.io/guides/reactive-sql-clients - -|PostgreSQL Client -|`io.quarkus:quarkus-reactive-pg-client` (extension) -|https://quarkus.io/guides/reactive-sql-clients - -|RabbitMQ Client -|`io.smallrye.reactive:smallrye-mutiny-vertx-rabbitmq-client` (external dependency) -|https://vertx.io/docs/vertx-rabbitmq-client/java - -|Redis Client -|`io.quarkus:quarkus-redis-client` (extension) -|https://quarkus.io/guides/redis - -|Web Client -|`io.smallrye.reactive:smallrye-mutiny-vertx-web-client` (external dependency) 
-|https://vertx.io/docs/vertx-web-client/java/ - -|=== - -To learn more about the usage of the Vert.x Mutiny API, refer to https://smallrye.io/smallrye-mutiny-vertx-bindings. - -=== Example of usage - -This section gives an example using the Vert.x `WebClient`. -As indicated in the table above, add the following dependency to your project: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- -<dependency> -    <groupId>io.smallrye.reactive</groupId> -    <artifactId>smallrye-mutiny-vertx-web-client</artifactId> -</dependency> ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.smallrye.reactive:smallrye-mutiny-vertx-web-client") ----- - -Now, in your code, you can create an instance of `WebClient`: - -[source, java] ----- -package org.acme.vertx; - - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import io.smallrye.mutiny.Uni; -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -import io.vertx.mutiny.core.Vertx; -import io.vertx.mutiny.ext.web.client.WebClient; -import io.vertx.core.json.JsonObject; - -@Path("/fruit-data") -public class ResourceUsingWebClient { - - private final WebClient client; - - @Inject - ResourceUsingWebClient(Vertx vertx) { - this.client = WebClient.create(vertx); - } - - @GET - @Produces(MediaType.APPLICATION_JSON) - @Path("/{name}") - public Uni<JsonObject> getFruitData(@PathParam("name") String name) { - return client.getAbs("https://.../api/fruit/" + name) - .send() - .onItem().transform(resp -> { - if (resp.statusCode() == 200) { - return resp.bodyAsJsonObject(); - } else { - return new JsonObject() - .put("code", resp.statusCode()) - .put("message", resp.bodyAsString()); - } - }); - } - -} - ----- - -This resource creates a `WebClient` and, upon request, uses this client to invoke a remote HTTP API.
-Depending on the result, the response is forwarded as received, or it creates a JSON object wrapping the error. -The `WebClient` is asynchronous (and non-blocking), so the endpoint returns a `Uni`. - -The application can also run as a native executable. -But, first, we need to instruct Quarkus to enable _ssl_ (if the remote API uses HTTPS). -Open the `src/main/resources/application.properties` and add: - -[source,properties] ----- -quarkus.ssl.native=true ----- - -Then, create the native executable with: - -include::includes/devtools/build-native.adoc[] - -[#using-vert-x-json] -== Using Vert.x JSON - -Vert.x APIs often rely on JSON. -Vert.x provides two convenient classes to manipulate JSON documents: `io.vertx.core.json.JsonObject` and `io.vertx.core.json.JsonArray`. - -`JsonObject` can be used to map an object into its JSON representation and build an object from a JSON document: - -[source, java] ----- -// Map an object into JSON -Person person = ...; -JsonObject json = JsonObject.mapFrom(person); - -// Build an object from JSON -json = new JsonObject(); -person = json.mapTo(Person.class); ----- - -Note that these features use the mapper managed by the `quarkus-jackson` extension. -Refer to xref:rest-json.adoc#json[Jackson configuration] to customize the mapping. - - -JSON Object and JSON Array are both supported as Quarkus HTTP endpoint requests and response bodies (using classic RESTEasy and RESTEasy Reactive).
-Consider these endpoints: - - -[source,java] ----- -package org.acme.vertx; - -import io.vertx.core.json.JsonObject; -import io.vertx.core.json.JsonArray; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -@Path("/hello") -@Produces(MediaType.APPLICATION_JSON) -public class VertxJsonResource { - - @GET - @Path("{name}/object") - public JsonObject jsonObject(@PathParam String name) { - return new JsonObject().put("Hello", name); - } - - @GET - @Path("{name}/array") - public JsonArray jsonArray(@PathParam String name) { - return new JsonArray().add("Hello").add(name); - } -} ----- - -http://localhost:8080/hello/Quarkus/object returns: - -[source, text] ----- -{"Hello":"Quarkus"} ----- - -http://localhost:8080/hello/Quarkus/array returns: - -[source, text] ----- -["Hello","Quarkus"] ----- - -This works equally well when the JSON content is a request body or is wrapped in a `Uni`, `Multi`, `CompletionStage` or `Publisher`. - -== Using verticles - -link:https://vertx.io/docs/vertx-core/java/#_verticles[Verticles] is "a simple, scalable, actor-like deployment and concurrency model" provided by _Vert.x_. -This model does not claim to be a strict actor-model implementation, but it shares similarities, especially concerning concurrency, scaling, and deployment. -To use this model, you write and _deploy_ verticles, communicating by sending messages on the event bus. - -You can deploy _verticles_ in Quarkus. -It supports: - -* _bare_ verticle - Java classes extending `io.vertx.core.AbstractVerticle` -* _Mutiny_ verticle - Java classes extending `io.smallrye.mutiny.vertx.core.AbstractVerticle` - -=== Deploying verticles - -To deploy verticles, use the `deployVerticle` method: - -[source, java] ----- -@Inject Vertx vertx; - -// ... 
-vertx.deployVerticle(MyVerticle.class.getName(), ar -> { }); -vertx.deployVerticle(new MyVerticle(), ar -> { }); ----- - -If you use the Mutiny variant of Vert.x, be aware that the `deployVerticle` method returns a `Uni`, and you would need to trigger a subscription to make the actual deployment. - -NOTE: An example explaining how to deploy verticles during the initialization of the application will follow. - -=== Using @ApplicationScoped Beans as Verticle - -In general, Vert.x verticles are not CDI beans, and so cannot use injection. -However, in Quarkus, you can deploy verticles as beans. -Note that in this case, CDI (Arc in Quarkus) is responsible for creating the instance. - -The following snippet provides an example: - -[source, java] ----- -package io.quarkus.vertx.verticles; - -import io.smallrye.mutiny.Uni; -import io.smallrye.mutiny.vertx.core.AbstractVerticle; -import org.eclipse.microprofile.config.inject.ConfigProperty; - -import javax.enterprise.context.ApplicationScoped; - -@ApplicationScoped -public class MyBeanVerticle extends AbstractVerticle { - - @ConfigProperty(name = "address") String address; - - @Override - public Uni<Void> asyncStart() { - return vertx.eventBus().consumer(address) - .handler(m -> m.replyAndForget("hello")) - .completionHandler(); - } -} ----- - -You don't have to inject the `vertx` instance; instead, leverage the protected field from `AbstractVerticle`.
- -Then, deploy the verticle instances with: - -[source, java] ----- -package io.quarkus.vertx.verticles; - -import io.quarkus.runtime.StartupEvent; -import io.vertx.mutiny.core.Vertx; - -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.event.Observes; - -@ApplicationScoped -public class VerticleDeployer { - - public void init(@Observes StartupEvent e, Vertx vertx, MyBeanVerticle verticle) { - vertx.deployVerticle(verticle).await().indefinitely(); - } -} ----- - -If you want to deploy every exposed `AbstractVerticle`, you can use: - -[source,java] ----- -public void init(@Observes StartupEvent e, Vertx vertx, Instance verticles) { - for (AbstractVerticle verticle : verticles) { - vertx.deployVerticle(verticle).await().indefinitely(); - } -} ----- - -=== Using multiple verticles instances - -When using `@ApplicationScoped`, you will get a single instance for your verticle. -Having multiple instances of verticles can be helpful to share the load among them. -Each of them will be associated with a different I/O thread (Vert.x event loop). 
- -To deploy multiple instances of your verticle, use the `@Dependent` scope instead of `@ApplicationScoped`: - -[source, java] ----- -package org.acme.verticle; - -import io.smallrye.mutiny.Uni; -import io.smallrye.mutiny.vertx.core.AbstractVerticle; - -import javax.enterprise.context.Dependent; -import javax.inject.Inject; - -@Dependent -public class MyVerticle extends AbstractVerticle { - - @Override - public Uni asyncStart() { - return vertx.eventBus().consumer("address") - .handler(m -> m.reply("Hello from " + this)) - .completionHandler(); - } -} ----- - -Then, deploy your verticle as follows: - -[source, java] ----- -package org.acme.verticle; - -import io.quarkus.runtime.StartupEvent; -import io.vertx.core.DeploymentOptions; -import io.vertx.mutiny.core.Vertx; - -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.event.Observes; -import javax.enterprise.inject.Instance; -import javax.inject.Inject; - -@ApplicationScoped -public class MyApp { - - void init(@Observes StartupEvent ev, Vertx vertx, Instance verticles) { - vertx - .deployVerticle(verticles::get, new DeploymentOptions().setInstances(2)) - .await().indefinitely(); - } -} - ----- - -The `init` method receives an `Instance`. -Then, you pass a supplier to the `deployVerticle` method. -The supplier is just calling the `get()` method. -Thanks to the `@Dependent` scope, it returns a new instance on every call. -Finally, you pass the desired number of instances to the `DeploymentOptions`, such as two in the previous example. -It will call the supplier twice, which will create two instances of your verticle. - -[#eventbus] -== Using the event bus - -Vert.x comes with a built-in https://vertx.io/docs/vertx-core/java/#event_bus[event bus] that you can use from your Quarkus application. -So, your application components (CDI beans, resources...) can interact using asynchronous events, thus promoting loose-coupling. - -With the event bus, you send _messages_ to _virtual addresses_. 
-The event bus offers three types of delivery mechanisms: - -- point-to-point - send the message, one consumer receives it. If several consumers listen to the address, a round-robin is applied; -- publish/subscribe - publish a message; all the consumers listening to the address receive the message; -- request/reply - send the message and expect a response. The receiver can respond to the message in an asynchronous fashion. - -All these delivery mechanisms are non-blocking and provide one of the fundamental building blocks of reactive applications. - -=== Consuming events - -While you can use the Vert.x API to register consumers, Quarkus comes with declarative support. -To consume events, use the `io.quarkus.vertx.ConsumeEvent` annotation: - -[source, java] ----- -package org.acme.vertx; - -import io.quarkus.vertx.ConsumeEvent; - -import javax.enterprise.context.ApplicationScoped; - -@ApplicationScoped -public class GreetingService { - - @ConsumeEvent // <1> - public String consume(String name) { // <2> - return name.toUpperCase(); - } -} ----- -<1> If not set, the address is the fully qualified name of the bean; for instance, in this snippet, it's `org.acme.vertx.GreetingService`. -<2> The method parameter is the message body. If the method returns _something_, it's the message response. - -=== Configuring the address - -The `@ConsumeEvent` annotation can be configured to set the address: - -[source, java] ----- -@ConsumeEvent("greeting") // <1> -public String consume(String name) { - return name.toUpperCase(); -} ----- -<1> Receive the messages sent to the `greeting` address - -=== Asynchronous processing - -The previous examples use synchronous processing.
-Asynchronous processing is also possible by returning either an `io.smallrye.mutiny.Uni` or a `java.util.concurrent.CompletionStage`: - -[source,java] ----- -package org.acme.vertx; - -import io.quarkus.vertx.ConsumeEvent; - -import javax.enterprise.context.ApplicationScoped; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.CompletionStage; -import io.smallrye.mutiny.Uni; - -@ApplicationScoped -public class GreetingService { - - @ConsumeEvent - public CompletionStage consume(String name) { - // return a CompletionStage completed when the processing is finished. - // You can also fail the CompletionStage explicitly - } - - @ConsumeEvent - public Uni process(String name) { - // return an Uni completed when the processing is finished. - // You can also fail the Uni explicitly - } -} ----- - -[TIP] -.Mutiny -==== -The previous example uses Mutiny reactive types. -If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive reactive programming library]. -==== - -=== Blocking processing - -By default, the code consuming the event must be _non-blocking_, as it's called on an I/O thread. -If your processing is blocking, use the `@io.smallrye.common.annotation.Blocking` annotation: - -[source, java] ----- -@ConsumeEvent(value = "blocking-consumer") -@Blocking -void consumeBlocking(String message) { - // Something blocking -} ----- - -Alternatively, you can use the `blocking` attribute from the `@ConsumeEvent` annotation: - -[source, java] ----- -@ConsumeEvent(value = "blocking-consumer", blocking = true) -void consumeBlocking(String message) { - // Something blocking -} ----- - -When using `@Blocking`, it ignores the value of the `blocking` attribute of `@ConsumeEvent`. - -=== Replying to messages - -The _return_ value of a method annotated with `@ConsumeEvent` is used to respond to the incoming message. -For instance, in the following snippet, the returned `String` is the response. 
- -[source, java] ----- -@ConsumeEvent("greeting") -public String consume(String name) { - return name.toUpperCase(); -} ----- - -You can also return a `Uni` or a `CompletionStage` to handle asynchronous reply: - -[source, java] ----- -@ConsumeEvent("greeting") -public Uni consume2(String name) { - return Uni.createFrom().item(() -> name.toUpperCase()).emitOn(executor); -} ----- - -[NOTE] -==== -You can inject an `executor` if you use the Context Propagation extension: -[source, code] ----- -@Inject Executor executor; ----- -==== - -=== Implementing fire and forget interactions - -You don't have to reply to received messages. -Typically, for a _fire and forget_ interaction, the messages are consumed, and the sender does not need to know about it. -To implement this pattern, your consumer method returns `void`. - -[source,java] ----- -@ConsumeEvent("greeting") -public void consume(String event) { - // Do something with the event -} ----- - -=== Dealing with messages - -Unlike the previous example using the _payloads_ directly, you can also use `Message` directly: - -[source, java] ----- -@ConsumeEvent("greeting") -public void consume(Message msg) { - System.out.println(msg.address()); - System.out.println(msg.body()); -} ----- - -=== Handling Failures - -If a method annotated with `@ConsumeEvent` throws an exception, then: - -* if a reply handler is set, then the failure is propagated back to the sender via an `io.vertx.core.eventbus.ReplyException` with code `ConsumeEvent#FAILURE_CODE` and the exception message, -* if no reply handler is set, then the exception is rethrown (and wrapped in a `RuntimeException` if necessary) and can be handled by the default exception handler, i.e. `io.vertx.core.Vertx#exceptionHandler()`. 
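-The failure propagation described above can also be handled on the sender side. The following is a minimal sketch of both sides of that interaction, assuming the Mutiny `EventBus` variant; the `validate` address, the `ValidationService` bean, and the fallback message are hypothetical names chosen for this example:

```java
import io.quarkus.vertx.ConsumeEvent;
import io.smallrye.mutiny.Uni;
import io.vertx.core.eventbus.ReplyException;
import io.vertx.mutiny.core.eventbus.EventBus;
import io.vertx.mutiny.core.eventbus.Message;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class ValidationService {

    @Inject
    EventBus bus;

    // Consumer side: throwing here propagates the failure back to the sender
    // as an io.vertx.core.eventbus.ReplyException carrying the exception message.
    @ConsumeEvent("validate")
    public String validate(String input) {
        if (input.isBlank()) {
            throw new IllegalArgumentException("input must not be blank");
        }
        return input.toUpperCase();
    }

    // Sender side: recover from the propagated failure with a fallback value
    // instead of letting the ReplyException reach the caller.
    public Uni<String> tryValidate(String input) {
        return bus.<String>request("validate", input)
                .onItem().transform(Message::body)
                .onFailure(ReplyException.class)
                .recoverWithItem(t -> "validation failed: " + t.getMessage());
    }
}
```

Note that `onFailure(ReplyException.class)` narrows the recovery to event bus reply failures; any other failure would still be propagated downstream.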
- -=== Sending messages - -Messages are sent and published using the Vert.x event bus: - -[source, java] ---- -package org.acme.vertx; - -import io.smallrye.mutiny.Uni; -import io.vertx.mutiny.core.eventbus.EventBus; -import io.vertx.mutiny.core.eventbus.Message; -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/async") -public class EventResource { - - @Inject - EventBus bus; // <1> - - @GET - @Produces(MediaType.TEXT_PLAIN) - @Path("{name}") - public Uni<String> greeting(@PathParam String name) { - return bus.<String>request("greeting", name) // <2> - .onItem().transform(Message::body); - } -} ---- -<1> Inject the Event bus -<2> Send a message to the address `greeting`. The message payload is `name` - -The `EventBus` object provides methods to: - -1. `send` a message to a specific address - one single consumer receives the message. -2. `publish` a message to a specific address - all consumers receive the message. -3. `request` a message and expect a reply - -[source, java] ---- -// Case 1 -bus.sendAndForget("greeting", name) -// Case 2 -bus.publish("greeting", name) -// Case 3 -Uni<String> response = bus.<String>request("address", "hello, how are you?") - .onItem().transform(Message::body); ---- - -=== Using codecs - -The https://vertx.io/docs/vertx-core/java/#event_bus[Vert.x Event Bus] uses codecs to _serialize_ and _deserialize_ objects. -Quarkus provides a default codec for local delivery.
-So you can exchange objects as follows: - -[source, java] ---- -@GET -@Produces(MediaType.TEXT_PLAIN) -@Path("{name}") -public Uni<String> greeting(@PathParam String name) { - return bus.<String>request("greeting", new MyName(name)) - .onItem().transform(Message::body); -} - -@ConsumeEvent(value = "greeting") -Uni<String> greeting(MyName name) { - return Uni.createFrom().item(() -> "Hello " + name.getName()); -} ---- - -If you want to use a specific codec, you need to set it on both ends explicitly: - -[source, java] ---- -@GET -@Produces(MediaType.TEXT_PLAIN) -@Path("{name}") -public Uni<String> greeting(@PathParam String name) { - return bus.<String>request("greeting", name, - new DeliveryOptions().setCodecName(MyNameCodec.class.getName())) // <1> - .onItem().transform(Message::body); -} - -@ConsumeEvent(value = "greeting", codec = MyNameCodec.class) // <2> -Uni<String> greeting(MyName name) { - return Uni.createFrom().item(() -> "Hello " + name.getName()); -} ---- -<1> Set the name of the codec to use to send the message -<2> Set the codec to use to receive the message - -=== Combining HTTP and the event bus - -Let's revisit a greeting HTTP endpoint and use asynchronous message passing to delegate the call to a separate bean. -It uses the request/reply dispatching mechanism. -Instead of implementing the business logic inside the JAX-RS endpoint, we are sending a message. -Another bean consumes this message, and the response is sent using the _reply_ mechanism.
- -In your HTTP endpoint class, inject the event bus and use the `request` method to send a message to the event bus and expect a response: - -[source,java] ---- -package org.acme.vertx; - -import io.smallrye.mutiny.Uni; -import io.vertx.mutiny.core.eventbus.EventBus; -import io.vertx.mutiny.core.eventbus.Message; -import org.jboss.resteasy.annotations.jaxrs.PathParam; - -import javax.inject.Inject; -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.Produces; -import javax.ws.rs.core.MediaType; - -@Path("/bus") -public class EventResource { - - @Inject - EventBus bus; - - @GET - @Produces(MediaType.TEXT_PLAIN) - @Path("{name}") - public Uni<String> greeting(@PathParam String name) { - return bus.<String>request("greeting", name) // <1> - .onItem().transform(Message::body); // <2> - } -} ---- -<1> Send the `name` to the `greeting` address and request a response -<2> When we get the response, extract the body and send it to the user - -NOTE: The HTTP method returns a `Uni`. -If you are using RESTEasy Reactive, `Uni` support is built-in. -If you are using _classic_ RESTEasy, you need to add the `quarkus-resteasy-mutiny` extension to your project. - -We need a consumer listening on the `greeting` address. -This consumer can be in the same class or another bean such as: - -[source, java] ---- -package org.acme.vertx; - -import io.quarkus.vertx.ConsumeEvent; - -import javax.enterprise.context.ApplicationScoped; - -@ApplicationScoped -public class GreetingService { - - @ConsumeEvent("greeting") - public String greeting(String name) { - return "Hello " + name; - } - -} ---- - -This bean receives the name and returns the greeting message. - -With this in place, every HTTP request on `/bus/quarkus` sends a message to the event bus, waits for a reply, and when it arrives, writes the HTTP response: - -[source,text] ---- -Hello Quarkus ---- - -To better understand, let's detail how the HTTP request/response has been handled: - -1.
The request is received by the `greeting` method -2. A message containing the _name_ is sent to the event bus -3. Another bean receives this message and computes the response -4. This response is sent back using the reply mechanism -5. Once the reply is received by the sender, the content is written to the HTTP response - - -=== Bi-directional communication with browsers using SockJS - -The SockJS bridge provided by Vert.x allows browser applications and Quarkus applications to communicate using the event bus. -It connects both sides. -So, messages sent from one side can be received on the other. -It supports the three delivery mechanisms. - -SockJS negotiates the communication channel between the Quarkus application and the browser. -If WebSockets are supported, it uses them; otherwise, it degrades to SSE, long polling, etc. - -To use SockJS, you need to configure the bridge, especially the addresses that will be used to communicate: - -[source, java] ---- -package org.acme.vertx; - -import io.vertx.core.Vertx; -import io.vertx.ext.bridge.PermittedOptions; -import io.vertx.ext.web.Router; -import io.vertx.ext.web.handler.sockjs.SockJSBridgeOptions; -import io.vertx.ext.web.handler.sockjs.SockJSHandler; - -import javax.enterprise.context.ApplicationScoped; -import javax.enterprise.event.Observes; -import javax.inject.Inject; - -@ApplicationScoped -public class SockJsExample { - - @Inject - Vertx vertx; - - public void init(@Observes Router router) { - SockJSHandler sockJSHandler = SockJSHandler.create(vertx); - sockJSHandler.bridge(new SockJSBridgeOptions() - .addOutboundPermitted(new PermittedOptions().setAddress("ticks"))); - router.route("/eventbus/*").handler(sockJSHandler); - } - -} ---- - -This code configures the SockJS bridge to send all the messages targeting the `ticks` address to the connected browsers.
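Note that nothing in this configuration emits the ticks themselves; with Vert.x you would typically publish them periodically, for example with `vertx.setPeriodic(...)` and `eventBus.publish("ticks", ...)`. The following plain-JDK sketch (hypothetical code, no Vert.x involved) shows the shape of such a periodic publisher, with an in-memory list standing in for the bus subscribers:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical plain-JDK analogy of a periodic tick publisher.
class TickPublisherSketch {

    // Emits `count` increasing ticks at a fixed period and returns them once done,
    // standing in for eventBus.publish("ticks", counter.incrementAndGet()).
    static List<Integer> publishTicks(int count, long periodMillis) {
        List<Integer> received = new CopyOnWriteArrayList<>();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger counter = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(count);
        scheduler.scheduleAtFixedRate(() -> {
            if (counter.get() < count) {
                received.add(counter.incrementAndGet()); // the "publish" step
                done.countDown();
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
        try {
            done.await(); // wait until all ticks were published
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            scheduler.shutdownNow();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(publishTicks(3, 10)); // [1, 2, 3]
    }
}
```

In the real application the scheduler would run for the lifetime of the app and each tick would go out over the bridge to every connected browser.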
-More detailed explanations about the configuration can be found in https://vertx.io/docs/vertx-web/java/#_sockjs_event_bus_bridge[the Vert.x SockJS Bridge documentation]. - -The browser must use the `vertx-eventbus` JavaScript library to consume the message: - -[source, html] ---- -<!doctype html> -<html> -<head> - <meta charset="utf-8"/> - <title>SockJS example - Quarkus</title> - <!-- load the SockJS and vertx-eventbus client libraries, for example from a CDN or your static resources --> - <script src="sockjs.min.js"></script> - <script src="vertx-eventbus.js"></script> -</head> -<body> - -<h1>SockJS Examples</h1> - -<p><strong>Last Tick:</strong> <span id="tick"></span></p> - -<script> - var eb = new EventBus("/eventbus"); - eb.onopen = function () { - eb.registerHandler("ticks", function (error, message) { - document.getElementById("tick").innerText = message.body; - }); - }; -</script> -</body> -</html> ---- - -[#native-transport] -== Native Transport - -IMPORTANT: Native transports are not supported in GraalVM produced binaries. - -Vert.x is capable of using https://netty.io/wiki/native-transports.html[Netty's native transports], which offer performance improvements on specific platforms. To enable them, you must include the appropriate dependency for your platform. It's usually a good idea to include both to keep your application platform-agnostic. Netty is smart enough to use the correct one, including none at all on unsupported platforms: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> - <groupId>io.netty</groupId> - <artifactId>netty-transport-native-epoll</artifactId> - <classifier>linux-x86_64</classifier> -</dependency> - -<dependency> - <groupId>io.netty</groupId> - <artifactId>netty-transport-native-kqueue</artifactId> - <classifier>osx-x86_64</classifier> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.netty:netty-transport-native-epoll::linux-x86_64") - -implementation("io.netty:netty-transport-native-kqueue::osx-x86_64") ---- - -You will also have to explicitly configure Vert.x to use the native transport.
-In `application.properties` add: - -[source,properties] ---- -quarkus.vertx.prefer-native-transport=true ---- - -Or in `application.yml`: - -[source,yml] ---- -quarkus: - vertx: - prefer-native-transport: true ---- - -If all is well, Quarkus will log: - ---- -[io.qua.ver.cor.run.VertxCoreRecorder] (main) Vertx has Native Transport Enabled: true ---- - -=== Native Linux Transport - -On Linux you can enable the following socket options: - -* SO_REUSEPORT ---- -quarkus.http.so-reuse-port=true ---- -* TCP_QUICKACK ---- -quarkus.http.tcp-quick-ack=true ---- -* TCP_CORK ---- -quarkus.http.tcp-cork=true ---- -* TCP_FASTOPEN ---- -quarkus.http.tcp-fast-open=true ---- - -=== Native MacOS Transport - -On MacOS Sierra and above you can enable the following socket options: - -* SO_REUSEPORT ---- -quarkus.http.so-reuse-port=true ---- - - -== Listening to a Unix Domain Socket - -Listening on a Unix domain socket allows us to dispense with the overhead of TCP if the connection to the Quarkus service is established from the same host. This can happen if access to the service goes through a proxy, which is often the case if you're setting up a service mesh with a proxy like Envoy. - -IMPORTANT: This will only work on platforms that support <<native-transport,native transports>>. - -Enable the appropriate <<native-transport,native transport>> and set the following environment property: - ---- -quarkus.http.domain-socket=/var/run/io.quarkus.app.socket -quarkus.http.domain-socket-enabled=true ---- - -By itself this will not disable the TCP socket, which by default will open on `0.0.0.0:8080`. It can be explicitly disabled: - ---- -quarkus.http.host-enabled=false ---- - -These properties can be set through Java's `-D` command line parameter or in `application.properties`.
- -== Read-only deployment environments - -In environments with read-only file systems you may receive errors of the form: - -[source] ---- -java.lang.IllegalStateException: Failed to create cache dir ---- - -Assuming `/tmp/` is writable, this can be fixed by setting the `vertx.cacheDirBase` property to point to a directory in `/tmp/`. For instance, in OpenShift, create an environment variable `JAVA_OPTS` with the value `-Dvertx.cacheDirBase=/tmp/vertx`. \ No newline at end of file diff --git a/_versions/2.7/guides/vertx.adoc b/_versions/2.7/guides/vertx.adoc deleted file mode 100644 index 17c36a9d00d..00000000000 --- a/_versions/2.7/guides/vertx.adoc +++ /dev/null @@ -1,402 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using Eclipse Vert.x API from a Quarkus Application - -include::./attributes.adoc[] - -https://vertx.io[Vert.x] is a toolkit for building reactive applications. -As described in the xref:quarkus-reactive-architecture.adoc[Quarkus Reactive Architecture], Quarkus uses Vert.x underneath. - -image::quarkus-reactive-core.png[Quarkus Reactive Core,width=50%, align=center] - -Quarkus applications can access and use the Vert.x APIs. - -This guide presents how you can build a Quarkus application using: - -* the managed instance of Vert.x -* the Vert.x event bus -* the Vert.x Web Client - -It's an introductory guide. -The xref:vertx-reference.adoc[Vert.x reference guide] covers more advanced features such as verticles and native transports. - -== Architecture - -We are going to build a simple application exposing four HTTP endpoints: - -1. `/vertx/lorem` returns the content from a small file -2. `/vertx/book` returns the content from a large file (a book) -3. `/vertx/hello` uses the Vert.x event bus to produce the response -4.
`/vertx/web` uses the Vert.x Web Client to retrieve data from Wikipedia - -image::quarkus-vertx-guide-architecture.png[Architecture of the Vert.x guide,width=50%, align=center] - -== Solution - -We recommend that you follow the instructions in the following sections and create the application step by step. -However, you can go right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. - -The solution is located in the `vertx-quickstart` {quickstarts-tree-url}/vertx-quickstart[directory]. - -[TIP] -.Mutiny -==== -This guide uses the Mutiny API. -If you are not familiar with Mutiny, check xref:mutiny-primer.adoc[Mutiny - an intuitive, reactive programming library]. -==== - - -== Bootstrapping the application - -Click on https://code.quarkus.io/?a=quarkus-getting-started-vertx&nc=true&e=resteasy-reactive-jackson&e=vertx[this link] to configure your application. -It selected a few extensions: - -* `resteasy-reactive-jackson`, which also brings `resteasy-reactive`. We are going to use it to expose our HTTP endpoints. -* `vertx`, which provides access to the underlying managed Vert.x - -Click on the `Generate your application` button, download the zip file and unzip it. -Then, open the project in your favorite IDE. 
- -If you open the generated build file, you can see the selected extensions: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-resteasy-reactive-jackson</artifactId> -</dependency> -<dependency> - <groupId>io.quarkus</groupId> - <artifactId>quarkus-vertx</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.quarkus:quarkus-resteasy-reactive-jackson") -implementation("io.quarkus:quarkus-vertx") ---- - -While you are in your build file, add the following dependency: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ---- -<dependency> - <groupId>io.smallrye.reactive</groupId> - <artifactId>smallrye-mutiny-vertx-web-client</artifactId> -</dependency> ---- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ---- -implementation("io.smallrye.reactive:smallrye-mutiny-vertx-web-client") ---- - -This dependency provides the Vert.x Web Client, which we will be using to implement the `/web` endpoint. - - -== Accessing the managed Vert.x instance - -Create the `src/main/java/org/acme/VertxResource.java` file. -It will contain our HTTP endpoints. - -In this file, copy the following code: - -[source, java] ---- -package org.acme; - -import io.vertx.mutiny.core.Vertx; - -import javax.inject.Inject; -import javax.ws.rs.Path; - -@Path("/vertx") // <1> -public class VertxResource { - - private final Vertx vertx; - - @Inject // <2> - public VertxResource(Vertx vertx) { // <3> - this.vertx = vertx; // <4> - } -} ---- -<1> Declare the root HTTP path. -<2> We use constructor injection to receive the managed Vert.x instance. Field injection works too. -<3> Receives the Vert.x instance as a constructor parameter -<4> Store the managed Vert.x instance into a field. - -With this, we can start implementing the endpoints. - -== Using Vert.x Core API - -The injected Vert.x instance provides a set of APIs you can use. -The one we are going to use in this section is the Vert.x File System.
-It provides a non-blocking API to access files. - - -In the `src/main/resources` directory, create a `lorem.txt` file with the following content: - -[source, text] ---- -Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. ---- - -Then, in the `VertxResource` file add the following method: - -[source, java] ---- -@GET // <1> -@Path("/lorem") -public Uni<String> readShortFile() { // <2> - return vertx.fileSystem().readFile("lorem.txt") // <3> - .onItem().transform(content -> content.toString(StandardCharsets.UTF_8)); // <4> -} ---- -<1> This endpoint handles HTTP `GET` requests on path `/lorem` (so the full path will be `vertx/lorem`) -<2> As the Vert.x API is asynchronous, our method returns a `Uni`. The content is written into the HTTP response when the asynchronous operation represented by the Uni completes. -<3> We use the Vert.x file system API to read the created file -<4> Once the file is read, the content is stored in an in-memory buffer. We transform this buffer into a String. - -In a terminal, navigate to the root of the project and run: - -include::includes/devtools/dev.adoc[] - -In another terminal, run: - -[source, bash] ---- -> curl http://localhost:8080/vertx/lorem ---- - -You should see the content of the file printed in the console. - -IMPORTANT: Quarkus provides other ways to serve static files. This is an example made for the guide.
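The read-then-transform shape used above is not specific to Vert.x. As a rough plain-JDK analogy (hypothetical code, not the Quarkus or Vert.x API), the same pattern can be written with `CompletableFuture`: the file is read off the caller thread, and the raw bytes are transformed into a `String` once the asynchronous operation completes.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;

// Hypothetical plain-JDK analogy of the Uni-based file endpoint.
class AsyncFileReadSketch {

    // Counterpart of vertx.fileSystem().readFile(...).onItem().transform(...):
    // complete asynchronously, then turn the raw bytes (the "buffer") into a String.
    static CompletableFuture<String> readShortFile(Path file) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                byte[] content = Files.readAllBytes(file);
                return new String(content, StandardCharsets.UTF_8);
            } catch (IOException e) {
                throw new UncheckedIOException(e); // fails the future, like a failed Uni
            }
        });
    }

    // Convenience helper: write a temp file, read it back asynchronously.
    static String roundTrip(String content) {
        try {
            Path tmp = Files.createTempFile("lorem", ".txt");
            Files.writeString(tmp, content);
            String result = readShortFile(tmp).join();
            Files.delete(tmp);
            return result;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("Lorem ipsum dolor sit amet"));
    }
}
```

The key difference is that Vert.x performs the read on its event loop with non-blocking I/O, while this sketch merely moves a blocking read onto another thread.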
- -== Using Vert.x stream capability - -Reading a file and storing the content in memory works for small files, but not big ones. -In this section, we will see how you can use the Vert.x streaming capability. - -First, download https://www.gutenberg.org/files/2600/2600-0.txt[War and Peace] and store it in `src/main/resources/book.txt`. -It's a 3.2 MB file, which, while not being huge, illustrates the purpose of streams. -This time, we will not accumulate the file's content in memory and write it in one batch, but read it chunk by chunk and write these chunks into the HTTP response one by one. - -So, you should have the following files in your project: - - -[source, text] ---- -. -├── mvnw -├── mvnw.cmd -├── pom.xml -├── README.md -├── src -│ └── main -│ ├── docker -│ │ ├── ... -│ ├── java -│ │ └── org -│ │ └── acme -│ │ └── VertxResource.java -│ └── resources -│ ├── application.properties -│ ├── book.txt -│ └── lorem.txt ---- - -Add the following method to the `VertxResource` class: - -[source, java] ---- -@GET -@Path("/book") -public Multi<String> readLargeFile() { // <1> - return vertx.fileSystem().open("book.txt", // <2> - new OpenOptions().setRead(true) - ) - .onItem().transformToMulti(file -> file.toMulti()) // <3> - .onItem().transform(content -> content.toString(StandardCharsets.UTF_8) // <4> - + "\n------------\n"); // <5> -} ---- -<1> This time, we return a `Multi` as we want to stream the chunks -<2> We open the file using the `open` method. It returns a `Uni<AsyncFile>` -<3> When the file is opened, we retrieve a `Multi<Buffer>` which will contain the chunks. -<4> For each chunk, we produce a String -<5> To visually see the chunks in the response, we append a separator - -Then, in a terminal, run: - -[source, bash] ---- -> curl http://localhost:8080/vertx/book ---- - -It should retrieve the book content. -In the output, you should see separators like: - -[source, text] ---- -... -The little princess had also left the tea table and followed Hélène.
- -“Wait a moment, I’ll get my work.... Now then, what ------------- - are you -thinking of?” she went on, turning to Prince Hippolyte. “Fetch me my -workbag.” -... ---- - -== Using the event bus - -One of the core features of Vert.x is the https://vertx.io/docs/vertx-core/java/#event_bus[event bus]. -It provides a message-based backbone to your application. -So, you can have components interact using asynchronous message passing, thereby decoupling them. -You can send a message to a single consumer, or dispatch to multiple consumers, or implement a request-reply interaction, where you send a message (request) and expect a response. -This is what we are going to use in this section. -Our `VertxResource` will send a message containing a name to the `greetings` address. -Another component will receive the message and produce the "hello $name" response. -The `VertxResource` will receive the response and return it as the HTTP response. - -So, first, let's extend our `VertxResource` class with the following code: - - -[source, java] ---- -@Inject -EventBus bus; // <1> - -@GET -@Path("/hello") -public Uni<String> hello(@QueryParam("name") String name) { // <2> - return bus.<String>request("greetings", name) // <3> - .onItem().transform(response -> response.body()); // <4> -} ---- -<1> Inject the event bus. Alternatively you can use `vertx.eventBus()`. -<2> We receive a _name_ as a query parameter -<3> We use the `request` method to initiate the request-reply interaction. We send the name to the "greetings" address. -<4> When the response is received, we extract the body and return it as the HTTP response - -Now, we need the other side: the component receiving the name and replying.
-Create the `src/main/java/org/acme/GreetingService.java` file with the following content: - -[source, java] ---- -package org.acme; - -import io.quarkus.vertx.ConsumeEvent; - -import javax.enterprise.context.ApplicationScoped; - -@ApplicationScoped // <1> -public class GreetingService { - - @ConsumeEvent("greetings") // <2> - public String hello(String name) { // <3> - return "Hello " + name; // <4> - } -} ---- -<1> Declaring a CDI Bean in the Application scope. Quarkus will create a single instance of this class. -<2> Use the `@ConsumeEvent` annotation to declare a consumer. It is possible to use the Vert.x API https://vertx.io/docs/vertx-core/java/#_acknowledging_messages_sending_replies[directly] too. -<3> Receive the message payload as a method parameter. The returned object will be the reply. -<4> Return the response. This response is sent back to the `VertxResource` class - -Let's try this. -In a terminal, run: - - -[source, bash] ---- -> curl "http://localhost:8080/vertx/hello?name=bob" ---- - -You should get the expected `Hello bob` message back. - -== Using Vert.x Clients - -So far, we have used the Vert.x Core API. -Vert.x offers much more. -It provides a vast ecosystem. -In this section, we will see how you can use the Vert.x Web Client, a reactive HTTP client. - -Note that some Quarkus extensions wrap Vert.x clients and manage them for you. -That's the case for the reactive data sources, Redis, mail... -That's not the case with the Web Client. - -Remember, at the beginning of the guide, we added the `smallrye-mutiny-vertx-web-client` dependency to our `pom.xml` file. -It's now time to use it. - -First, we need to create an instance of `WebClient`.
- -Extend the `VertxResource` class with the `client` field and the creation of the web client in the constructor: - -[source, java] ---- -private final Vertx vertx; -private final WebClient client; // <1> - -@Inject -public VertxResource(Vertx vertx) { - this.vertx = vertx; - this.client = WebClient.create(vertx); // <2> -} ---- -<1> Store the `WebClient`, so we will be able to use it in our HTTP endpoint -<2> Create the `WebClient`. Be sure to use the `io.vertx.mutiny.ext.web.client.WebClient` class - -Let's now implement a new HTTP endpoint that queries the Wikipedia API to retrieve the pages about Quarkus in the different languages. -Add the following method to the `VertxResource` class: - -[source, java] ---- -private static final String URL = "https://en.wikipedia.org/w/api.php?action=parse&page=Quarkus&format=json&prop=langlinks"; - -@GET -@Path("/web") -public Uni<JsonArray> retrieveDataFromWikipedia() { // <1> - return client.getAbs(URL).send() // <2> - .onItem().transform(HttpResponse::bodyAsJsonObject) // <3> - .onItem().transform(json -> json.getJsonObject("parse") // <4> - .getJsonArray("langlinks")); -} ---- -<1> This endpoint returns a JSON Array. Vert.x provides a convenient way to manipulate JSON Object and Array. More details about these can be found in xref:vertx-reference.adoc#using-vert-x-json[the reference guide]. -<2> Send a `GET` request to the Wikipedia API -<3> Once the response is received, extract it as a JSON Object -<4> Extract the `langlinks` array from the response. - -Then, invoke the endpoint using: - -[source, bash] ---- -> curl http://localhost:8080/vertx/web -[{"lang":"de","url":"https://de.wikipedia.org/wiki/Quarkus","langname":"German","autonym":"Deutsch","*":"Quarkus"},{"lang":"fr","url":"https://fr.wikipedia.org/wiki/Quarkus","langname":"French","autonym":"français","*":"Quarkus"}] ---- - -The response indicates that, in addition to the English page, there are German and French pages about Quarkus on Wikipedia.
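For comparison, the same asynchronous GET-then-transform shape can be sketched with the JDK's built-in `java.net.http` client (hypothetical code, not part of this guide's stack). The request construction is split out so it can be inspected without any network access:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

// Hypothetical plain-JDK analogy of the Vert.x Web Client call.
class WikipediaClientSketch {

    static final String URL =
        "https://en.wikipedia.org/w/api.php?action=parse&page=Quarkus&format=json&prop=langlinks";

    // Builds the GET request without sending it.
    static HttpRequest buildRequest() {
        return HttpRequest.newBuilder(URI.create(URL)).GET().build();
    }

    // Analogous to client.getAbs(URL).send().onItem().transform(...).
    static CompletableFuture<String> fetchBody(HttpClient client) {
        return client.sendAsync(buildRequest(), HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
    }

    public static void main(String[] args) {
        HttpRequest request = buildRequest();
        System.out.println(request.method() + " " + request.uri().getHost());
        // Uncomment to actually call Wikipedia (requires network access):
        // System.out.println(fetchBody(HttpClient.newHttpClient()).join());
    }
}
```

Unlike the Vert.x Web Client, this sketch returns the raw body as a `String`; the JDK client has no built-in JSON support, which is one reason the guide uses Vert.x's `JsonObject`/`JsonArray` helpers instead.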
- -== Going further - -This guide introduced how you can use Vert.x APIs from a Quarkus application. -It's just a brief overview. -If you want to know more, check the xref:vertx-reference.adoc[reference guide about Vert.x in Quarkus]. - -As we have seen, the event bus is the connecting tissue of Vert.x applications. -Quarkus integrates it so different beans can interact with asynchronous messages. -This part is covered in the xref:reactive-event-bus.adoc[event bus documentation]. - -Learn how to implement highly performant, low-overhead database applications on Quarkus with the xref:reactive-sql-clients.adoc[Reactive SQL Clients]. diff --git a/_versions/2.7/guides/websockets.adoc b/_versions/2.7/guides/websockets.adoc deleted file mode 100644 index dcdc31adc77..00000000000 --- a/_versions/2.7/guides/websockets.adoc +++ /dev/null @@ -1,242 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Using WebSockets - -include::./attributes.adoc[] - -This guide explains how your Quarkus application can utilize web sockets to create interactive web applications. -Because it's the _canonical_ web socket application, we are going to create a simple chat application. - -== Prerequisites - -include::includes/devtools/prerequisites.adoc[] - -== Architecture - -In this guide, we create a straightforward chat application using web sockets to receive and send messages to the other connected users. - -image:websocket-guide-architecture.png[alt=Architecture] - -== Solution - -We recommend that you follow the instructions in the next sections and create the application step by step. -However, you can skip right to the completed example. - -Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive]. 
- -The solution is located in the `websockets-quickstart` {quickstarts-tree-url}/websockets-quickstart[directory]. - -== Creating the Maven project - -First, we need a new project. Create a new project with the following command: - -:create-app-artifact-id: websockets-quickstart -:create-app-extensions: websockets -include::includes/devtools/create-app.adoc[] - -This command generates the project (without any classes) and imports the `websockets` extension. - -If you already have your Quarkus project configured, you can add the `websockets` extension -to your project by running the following command in your project base directory: - -:add-extension-extensions: websockets -include::includes/devtools/extension-add.adoc[] - -This will add the following to your build file: - -[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"] -.pom.xml ----- - - io.quarkus - quarkus-websockets - ----- - -[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"] -.build.gradle ----- -implementation("io.quarkus:quarkus-websockets") ----- - -NOTE: If you only want to use the WebSocket client you should include `quarkus-websockets-client` instead. - -== Handling web sockets - -Our application contains a single class that handles the web sockets. -Create the `org.acme.websockets.ChatSocket` class in the `src/main/java` directory. 
-Copy the following content into the created file: - -[source,java] ---- -package org.acme.websockets; - -import java.util.Map; -import java.util.concurrent.ConcurrentHashMap; - -import javax.enterprise.context.ApplicationScoped; -import javax.websocket.OnClose; -import javax.websocket.OnError; -import javax.websocket.OnMessage; -import javax.websocket.OnOpen; -import javax.websocket.server.PathParam; -import javax.websocket.server.ServerEndpoint; -import javax.websocket.Session; - -@ServerEndpoint("/chat/{username}") // <1> -@ApplicationScoped -public class ChatSocket { - - Map<String, Session> sessions = new ConcurrentHashMap<>(); // <2> - - @OnOpen - public void onOpen(Session session, @PathParam("username") String username) { - sessions.put(username, session); - } - - @OnClose - public void onClose(Session session, @PathParam("username") String username) { - sessions.remove(username); - broadcast("User " + username + " left"); - } - - @OnError - public void onError(Session session, @PathParam("username") String username, Throwable throwable) { - sessions.remove(username); - broadcast("User " + username + " left on error: " + throwable); - } - - @OnMessage - public void onMessage(String message, @PathParam("username") String username) { - if (message.equalsIgnoreCase("_ready_")) { - broadcast("User " + username + " joined"); - } else { - broadcast(">> " + username + ": " + message); - } - } - - private void broadcast(String message) { - sessions.values().forEach(s -> { - s.getAsyncRemote().sendObject(message, result -> { - if (result.getException() != null) { - System.out.println("Unable to send message: " + result.getException()); - } - }); - }); - } - -} ---- -<1> Configures the web socket URL -<2> Stores the currently opened web sockets - -== A slick web frontend - -All chat applications need a _nice_ UI; well, this one may not be that nice, but it does the job. -Quarkus automatically serves static resources contained in the `META-INF/resources` directory.
-Create the `src/main/resources/META-INF/resources` directory and copy this {quickstarts-blob-url}/websockets-quickstart/src/main/resources/META-INF/resources/index.html[index.html] file in it. - -== Run the application - -Now, let's see our application in action. Run it with: - -include::includes/devtools/dev.adoc[] - -Then open two browser windows to http://localhost:8080/: - -1. Enter a name in the top text area (use two different names). -2. Click on connect -3. Send and receive messages - -image:websocket-guide-screenshot.png[alt=Application] - -As usual, the application can be packaged using: - -include::includes/devtools/build.adoc[] - -And executed using `java -jar target/quarkus-app/quarkus-run.jar`. - -You can also build the native executable using: - -include::includes/devtools/build-native.adoc[] - -You can also test your web socket applications using the approach detailed {quickstarts-blob-url}/websockets-quickstart/src/test/java/org/acme/websockets/ChatTest.java[here]. - -== WebSocket Clients - -Quarkus also contains a WebSocket client. You can call `ContainerProvider.getWebSocketContainer().connectToServer` to create a websocket connection. By default, the `quarkus-websockets` artifact includes both client and server support; however, if you only want the client, you can include `quarkus-websockets-client` instead. - -When you connect to the server, you can either pass in the Class of the annotated client endpoint you want to use, or an instance of `javax.websocket.Endpoint`. If you are using the annotated endpoint, you can use the exact same annotations as on the server, except it must be annotated with `@ClientEndpoint` instead of `@ServerEndpoint`. - -The example below shows the client being used to test the chat endpoint above.
- -[source,java] ---- -package org.acme.websockets; - -import java.net.URI; -import java.util.concurrent.LinkedBlockingDeque; -import java.util.concurrent.TimeUnit; - -import javax.websocket.ClientEndpoint; -import javax.websocket.ContainerProvider; -import javax.websocket.OnMessage; -import javax.websocket.OnOpen; -import javax.websocket.Session; - -import org.junit.jupiter.api.Assertions; -import org.junit.jupiter.api.Test; - -import io.quarkus.test.common.http.TestHTTPResource; -import io.quarkus.test.junit.QuarkusTest; - -@QuarkusTest -public class ChatTest { - - private static final LinkedBlockingDeque<String> MESSAGES = new LinkedBlockingDeque<>(); - - @TestHTTPResource("/chat/stu") - URI uri; - - @Test - public void testWebsocketChat() throws Exception { - try (Session session = ContainerProvider.getWebSocketContainer().connectToServer(Client.class, uri)) { - Assertions.assertEquals("CONNECT", MESSAGES.poll(10, TimeUnit.SECONDS)); - Assertions.assertEquals("User stu joined", MESSAGES.poll(10, TimeUnit.SECONDS)); - session.getAsyncRemote().sendText("hello world"); - Assertions.assertEquals(">> stu: hello world", MESSAGES.poll(10, TimeUnit.SECONDS)); - } - } - - @ClientEndpoint - public static class Client { - - @OnOpen - public void open(Session session) { - MESSAGES.add("CONNECT"); - // Send a message to indicate that we are ready, - // as the message handler may not be registered immediately after this callback. - session.getAsyncRemote().sendText("_ready_"); - } - - @OnMessage - void message(String msg) { - MESSAGES.add(msg); - } - - } - -} ---- - - -== More WebSocket Information - -The Quarkus WebSocket implementation is an implementation of link:https://eclipse-ee4j.github.io/websocket-api/[Jakarta Websockets].
 - - diff --git a/_versions/2.7/guides/writing-extensions.adoc b/_versions/2.7/guides/writing-extensions.adoc deleted file mode 100644 index 20eafc6d2d5..00000000000 --- a/_versions/2.7/guides/writing-extensions.adoc +++ /dev/null @@ -1,3142 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Writing Your Own Extension - -:numbered: -:sectnums: -:sectnumlevels: 4 -:toc: - -include::./attributes.adoc[] - -Quarkus extensions add new developer-focused behavior to the core offering, and consist of two distinct parts: build-time augmentation and a runtime container. The augmentation part is responsible for all metadata processing, such as reading annotations, XML descriptors etc. The output of this augmentation phase is recorded bytecode which is responsible for directly instantiating the relevant runtime services. - -This means that metadata is only processed once at build time, which both saves on startup time, and also on memory -usage as the classes etc. that are used for processing are not loaded (or even present) in the runtime JVM. - -NOTE: This is in-depth documentation; see the xref:building-my-first-extension.adoc[building my first extension] guide if you need an introduction. - -== Extension philosophy - -This section is a work in progress and gathers the philosophy under which extensions should be designed and written. - -=== Why an extension framework - -Quarkus’s mission is to transform your entire application, including the libraries it uses, into an artifact that uses significantly less resources than traditional approaches. These can then be used to build native applications using GraalVM. -To do this you need to analyze and understand the full "closed world" of the application. -Without the full and complete context, the best that can be achieved is partial and limited generic support.
-By using the Quarkus extension approach, we can bring Java applications in line with memory-footprint-constrained environments like Kubernetes or cloud platforms. - -The Quarkus extension framework results in significantly improved resource utilization even when GraalVM is not used (e.g. in HotSpot). -Let’s list the actions an extension performs: - -* Gather build time metadata and generate code -** This part has nothing to do with GraalVM; it is how Quarkus starts frameworks “at build time” -** The extension framework facilitates reading metadata, scanning classes as well as generating classes as needed -** A small part of the extension work is executed at runtime via the generated classes, while the bulk of the work is done at build time (called deployment time) -* Enforce opinionated and sensible defaults based on the closed world view of the application (e.g. an application with no `@Entity` does not need to start Hibernate ORM) -* An extension hosts Substrate VM code substitution so that libraries can run on GraalVM -** Most changes are pushed upstream to help the underlying library run on GraalVM -** Not all changes can be pushed upstream; extensions host Substrate VM substitutions - which is a form of code patching - so that libraries can run -* Host Substrate VM code substitution to help dead code elimination based on the application needs -** This is application dependent and cannot really be shared in the library itself -** For example, Quarkus optimizes the Hibernate code because it knows it only needs a specific connection pool and cache provider -* Send metadata to GraalVM, for example classes in need of reflection -** This information is not static per library (e.g.
Hibernate) but the framework has the semantic knowledge and knows which classes need to have reflection (for example @Entity classes) - -=== Favor build time work over runtime work - -As much as possible favor doing work at build time (deployment part of the extension) as opposed to letting the framework do work at startup time (runtime). -The more that is done at build time, the smaller Quarkus applications using that extension will be and the faster they will load. - -=== How to expose configuration - -Quarkus simplifies the most common usages. -This means that its defaults might be different from those of the library it integrates. - -To keep the simple experience simple, unify the configuration in `application.properties` via SmallRye Config. -Avoid library-specific configuration files, or at least make them optional: e.g. `persistence.xml` for Hibernate ORM is optional. - -Extensions should see the configuration holistically as a Quarkus application instead of focusing on the library experience. -For example `quarkus.database.url` and friends are shared between extensions as defining a database access is a shared task (instead of a `hibernate.` property for example). -The most useful configuration options should be exposed as `quarkus.[extension].` instead of the natural namespace of the library. -Less common properties can live in the library namespace. - -To fully enable the closed world assumptions that Quarkus can optimize best, it is better to consider configuration options as settled at build time vs. overridable at runtime. -Of course properties like host, port, password should be overridable at runtime. -But many properties like enabling caching or setting the JDBC driver can safely require a rebuild of the application. - -==== Static Init Config - -If the extension provides additional Config Sources and if these are required during Static Init, these must be registered with `StaticInitConfigSourceProviderBuildItem`.
Configuration in Static Init does not scan for additional sources to avoid double initialization at application startup time. - -//// -=== API - -TODO: Describe where to put APIs -I wonder if that content should be in the technical aspects - -=== Substitution and recorders - -TODO: Describe where Substitutions and recorders should live -//// - -=== Expose your components via CDI - -Since CDI is the central programming model when it comes to component composition, frameworks and extensions should expose their components as beans that are easily consumable by user applications. -For example, Hibernate ORM exposes `EntityManagerFactory` and `EntityManager` beans, the connection pool exposes `DataSource` beans etc. -Extensions must register these bean definitions at build time. - -==== Beans backed by classes - -An extension can produce an `AdditionalBeanBuildItem` to instruct the container to read a bean definition from a class as if it were part of the original application: - -.Bean Class Registered by `AdditionalBeanBuildItem` -[source%nowrap,java] ---- -@Singleton <1> -public class Echo { - - public String echo(String val) { - return val; - } -} ---- -<1> If a bean registered by an `AdditionalBeanBuildItem` does not specify a scope then `@Dependent` is assumed. - -All other beans can inject such a bean: - -.Bean Injecting a Bean Produced by an `AdditionalBeanBuildItem` -[source%nowrap,java] ---- -@Path("/hello") -public class ExampleResource { - - @Inject - Echo echo; - - @GET - @Produces(MediaType.TEXT_PLAIN) - public String hello(String foo) { - return echo.echo(foo); - } -} ---- - -And vice versa - the extension bean can inject application beans and beans provided by other extensions: - -.Extension Bean Injection Example -[source%nowrap,java] ---- -@Singleton -public class Echo { - - @Inject - DataSource dataSource; <1> - - @Inject - Instance<List<String>> listsOfStrings; <2> - - //... -} ---- -<1> Inject a bean provided by another extension. -<2> Inject all beans matching the type `List<String>`.
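For completeness, the registration itself happens in a build step in the extension's deployment module. The sketch below is self-contained rather than authoritative: the `@BuildStep` annotation and `AdditionalBeanBuildItem` class here are simplified stand-ins for the real `io.quarkus.deployment.annotations.BuildStep` and `io.quarkus.arc.deployment.AdditionalBeanBuildItem` types (the real build item also offers an `unremovableOf` factory), and `EchoProcessor` is a hypothetical processor class:

```java
// Simplified stand-ins for the Quarkus deployment API (illustration only).
@interface BuildStep {}

final class AdditionalBeanBuildItem {
    private final String beanClassName;

    private AdditionalBeanBuildItem(String beanClassName) {
        this.beanClassName = beanClassName;
    }

    // The real Quarkus build item exposes a factory with this name as well.
    static AdditionalBeanBuildItem unremovableOf(Class<?> beanClass) {
        return new AdditionalBeanBuildItem(beanClass.getName());
    }

    String getBeanClassName() {
        return beanClassName;
    }
}

// The bean class the extension wants to contribute (no scope: @Dependent assumed).
class Echo {
    String echo(String val) {
        return val;
    }
}

// A deployment-module processor class; the class and method names are invented.
class EchoProcessor {

    @BuildStep
    AdditionalBeanBuildItem registerEchoBean() {
        // Instructs ArC to read the Echo class as a bean definition,
        // as if it were part of the original application.
        return AdditionalBeanBuildItem.unremovableOf(Echo.class);
    }
}
```

Because `Echo` declares no scope annotation, `@Dependent` would be assumed, exactly as the callout above describes.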
 - -[[bean_init]] -==== Bean initialization - -Some components may require additional initialization based on information collected during augmentation. -The most straightforward solution is to obtain a bean instance and call a method directly from a build step. -However, it is _illegal_ to obtain a bean instance during the augmentation phase. -The reason is that the CDI container is not started yet. -It's started during the bootstrap (see <<bootstrap-three-phases>>). - -TIP: `BUILD_AND_RUN_TIME_FIXED` and `RUN_TIME` config roots can be injected in any bean. `RUN_TIME` config roots should only be injected after the bootstrap though. - -It is possible to invoke a bean method from a recorder though. -If you need to access a bean in a `@Record(STATIC_INIT)` build step then it must either depend on the `BeanContainerBuildItem` or wrap the logic in a `BeanContainerListenerBuildItem`. -The reason is simple - we need to make sure the CDI container is fully initialized and started. -However, it is safe to expect that the CDI container is fully initialized and running in a `@Record(RUNTIME_INIT)` build step. -You can obtain a reference to the container via `CDI.current()` or Quarkus-specific `Arc.container()`. - -IMPORTANT: Don't forget to make sure the bean state guarantees visibility, e.g. via the `volatile` keyword. - -NOTE: There is one significant drawback of this "late initialization" approach. -An _uninitialized_ bean may be accessed by other extensions or application components that are instantiated during bootstrap. -We'll cover a more robust solution in the <<synthetic_beans,synthetic beans>> section. - -==== Default beans - -A very useful pattern for creating such beans, while also giving application code the ability to easily override some of them with custom implementations, is to use -the `@DefaultBean` annotation that Quarkus provides. -This is best explained with an example. - -Let us assume that the Quarkus extension needs to provide a `Tracer` bean which application code is meant to inject into its own beans.
 - -[source%nowrap,java] ---- -@Dependent -public class TracerConfiguration { - - @Produces - public Tracer tracer(Reporter reporter, Configuration configuration) { - return new Tracer(reporter, configuration); - } - - @Produces - @DefaultBean - public Configuration configuration() { - // create a Configuration - } - - @Produces - @DefaultBean - public Reporter reporter(){ - // create a Reporter - } -} ---- - -If, for example, application code wants to use `Tracer` but also needs a custom `Reporter` bean, such a requirement can easily be met with something like: - - -[source%nowrap,java] ---- -@Dependent -public class CustomTracerConfiguration { - - @Produces - public Reporter reporter(){ - // create a custom Reporter - } -} ---- - -==== How to Override a Bean Defined by a Library/Quarkus Extension that doesn't use @DefaultBean - -Although `@DefaultBean` is the recommended approach, it is also possible for application code to override beans provided by an extension by marking beans as a CDI `@Alternative` and adding the `@Priority` annotation. -Let's show a simple example. -Suppose we work on an imaginary "quarkus-parser" extension and we have a default bean implementation: - -[source%nowrap,java] ---- -@Dependent -class Parser { - - String[] parse(String expression) { - return expression.split("::"); - } -} ---- - -And our extension also consumes this parser: - -[source%nowrap,java] ---- -@ApplicationScoped -class ParserService { - - @Inject - Parser parser; - - //... -} ---- - -Now, if a user or even some other extension needs to override the default implementation of the `Parser`, the simplest solution is to use CDI `@Alternative` + `@Priority`: - -[source%nowrap,java] ---- -@Alternative <1> -@Priority(1) <2> -@Singleton -class MyParser extends Parser { - - String[] parse(String expression) { - // my super impl... - } -} ---- -<1> `MyParser` is an alternative bean. -<2> Enables the alternative.
Any priority value is enough to override the default bean, but if there are multiple alternatives the highest priority wins. - -NOTE: CDI alternatives are only considered during injection and type-safe resolution. For example, the default implementation would still receive observer notifications. - -[[synthetic_beans]] -==== Synthetic beans - -Sometimes it is very useful to be able to register a synthetic bean. -Bean attributes of a synthetic bean are not derived from a Java class, method, or field. -Instead, the attributes are specified by an extension. - -NOTE: Since the CDI container does not control the instantiation of a synthetic bean, dependency injection and other services (such as interceptors) are not supported. -In other words, it's up to the extension to provide all required services to a synthetic bean instance. - -There are several ways to register a synthetic bean in Quarkus. -In this chapter, we will cover a use case that can be used to initialize extension beans in a safe manner (compared to <<bean_init,bean initialization>>). - -The `SyntheticBeanBuildItem` can be used to register a synthetic bean: - -* whose instance can be easily produced through a recorder, -* to provide a "context" bean that holds all the information collected during augmentation so that the real components do not need any "late initialization" because they can inject the context bean directly. - -.Instance Produced Through Recorder -[source%nowrap,java] ---- -@BuildStep -@Record(STATIC_INIT) -SyntheticBeanBuildItem syntheticBean(TestRecorder recorder) { - return SyntheticBeanBuildItem.configure(Foo.class).scope(Singleton.class) - .runtimeValue(recorder.createFoo("parameters are recorded in the bytecode")) <1> - .done(); -} ---- -<1> The string value is recorded in the bytecode and used to initialize the instance of `Foo`.
 - -."Context" Holder -[source%nowrap,java] ---- -@BuildStep -@Record(STATIC_INIT) -SyntheticBeanBuildItem syntheticBean(TestRecorder recorder) { - return SyntheticBeanBuildItem.configure(TestContext.class).scope(Singleton.class) - .runtimeValue(recorder.createContext("parameters are recorded in the bytecode")) <1> - .done(); -} ---- -<1> The "real" components can inject the `TestContext` directly. - -=== Some types of extensions - -There exist multiple stereotypes of extensions; let's list a few. - -Bare library running:: -This is the least sophisticated type of extension. -It consists of a set of patches to make sure a library runs on GraalVM. -If possible, contribute these patches upstream, not in extensions. -Second best is to write Substrate VM substitutions, which are patches applied during native image compilation. - -Get a framework running:: -A framework at runtime typically reads configuration, scans the classpath and classes for metadata (annotations, getters, etc.), builds a metamodel on top of which it runs, finds options via the service loader pattern, prepares invocation calls (reflection), proxies interfaces, etc. + -These operations should be done at build time and the metamodel passed to the recorder DSL that will generate classes that will be executed at runtime and boot the framework. - -Get a CDI portable extension running:: -The CDI portable extension model is very flexible. -Too flexible to benefit from the build time boot promoted by Quarkus. -Most extensions we have seen do not make use of this extreme flexibility. -The way to port a CDI extension to Quarkus is to rewrite it as a Quarkus extension which will define the various beans at build time (deployment time in extension parlance). - -== Technical aspect - -[[bootstrap-three-phases]] -=== Three Phases of Bootstrap and Quarkus Philosophy - -There are three distinct bootstrap phases of a Quarkus app: - -Augmentation:: - This is the first phase, and is done by the build step processors.
These processors have access to Jandex annotation - information and can parse any descriptors and read annotations, but should not attempt to load any application classes. The output of these - build steps is some recorded bytecode, using an extension of the ObjectWeb ASM project called Gizmo(ext/gizmo), that is used to actually bootstrap the application at runtime. Depending on the `io.quarkus.deployment.annotations.ExecutionTime` value of the `@io.quarkus.deployment.annotations.Record` annotation associated with the build step, - the step may be run in a different JVM based on the following two modes. - -Static Init:: - If bytecode is recorded with `@Record(STATIC_INIT)` then it will be executed from a static init method on the main - class. For a native executable build, this code is executed in a normal JVM as part of the native build - process, and any retained objects that are produced in this stage will be directly serialized into the native executable via an image mapped file. - This means that if a framework can boot in this phase then it will have its booted state directly written to the - image, and so the boot code does not need to be executed when the image is started. -+ -There are some restrictions on what can be done in this stage as the Substrate VM disallows some objects in the native executable. For example you should not attempt to listen on a port or start threads in this phase. In addition, it is disallowed to read run time configuration during static initialization. -+ -In non-native pure JVM mode, there is no real difference between Static and Runtime Init, except that Static Init is always executed first. This mode benefits from the same build phase augmentation as native mode as the descriptor parsing and annotation scanning are done -at build time and any associated class/framework dependencies can be removed from the build output jar. 
In servers like -WildFly, deployment related classes such as XML parsers hang around for the life of the application, using up valuable -memory. Quarkus aims to eliminate this, so that the only classes loaded at runtime are actually used at runtime. -+ -As an example, the only reason that a Quarkus application should load an XML parser is if the user is using XML in their -application. Any XML parsing of configuration should be done in the Augmentation phase. - -Runtime Init:: - If bytecode is recorded with `@Record(RUNTIME_INIT)` then it is executed from the application's main method. This code - will be run on native executable boot. In general as little code as possible should be executed in this phase, and should - be restricted to code that needs to open ports etc. - -Pushing as much as possible into the `@Record(STATIC_INIT)` phase allows for two different optimizations: - -1. In both native executable and pure JVM mode this allows the app to start as fast as possible since processing was done during build time. This also minimizes the classes/native code needed in the application to pure runtime related behaviors. - -2. Another benefit with native executable mode is that Substrate can more easily eliminate features that are not used. If features are directly initialized via bytecode, Substrate can detect that a method is never called and eliminate -that method. If config is read at runtime, Substrate cannot reason about the contents of the config and so needs to keep all features in case they are required. - - -=== Project setup - -Your extension project should be setup as a multi-module project with two submodules: - -1. A deployment time submodule that handles the build time processing and bytecode recording. - -2. A runtime submodule that contains the runtime behavior that will provide the extension behavior in the native executable or runtime JVM. 
 - -Your runtime artifact should depend on `io.quarkus:quarkus-core`, and possibly the runtime artifacts of other Quarkus -modules if you want to use functionality provided by them. -Your deployment time module should depend on `io.quarkus:quarkus-core-deployment`, your runtime artifact, -and possibly the deployment artifacts of other Quarkus modules if you want to use functionality provided by them. - -[WARNING] -==== -Under no circumstances can the runtime module depend on a deployment artifact. This would result -in pulling all the deployment time code into runtime scope, which defeats the purpose of having the split. -==== - -==== Using Maven - -You will need to include the `io.quarkus:quarkus-bootstrap-maven-plugin` to generate the Quarkus extension descriptor included in the runtime artifact. If you are using the Quarkus parent POM, it will automatically inherit the correct configuration. -Furthermore, you'll need to configure the `maven-compiler-plugin` to detect the `quarkus-extension-processor` annotation processor. - -TIP: You may want to use the `create-extension` mojo of `io.quarkus.platform:quarkus-maven-plugin` to create these Maven modules - see the next section. - -NOTE: By convention the deployment time artifact has the `-deployment` suffix, and the runtime artifact -has no suffix (and is what the end user adds to their project). - -[source%nowrap,xml]
----
<dependencies>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-core</artifactId>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-bootstrap-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>extension-descriptor</goal>
                    </goals>
                    <configuration>
                        <deployment>${project.groupId}:${project.artifactId}-deployment:${project.version}</deployment>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <annotationProcessorPaths>
                    <path>
                        <groupId>io.quarkus</groupId>
                        <artifactId>quarkus-extension-processor</artifactId>
                    </path>
                </annotationProcessorPaths>
            </configuration>
        </plugin>
    </plugins>
</build>
----
 - -NOTE: The above `maven-compiler-plugin` configuration requires version 3.5+. - -You will also need to configure the `maven-compiler-plugin` of the deployment module to detect the `quarkus-extension-processor` annotation processor.
 - -[source%nowrap,xml]
----
<dependencies>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-core-deployment</artifactId>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <annotationProcessorPaths>
                    <path>
                        <groupId>io.quarkus</groupId>
                        <artifactId>quarkus-extension-processor</artifactId>
                    </path>
                </annotationProcessorPaths>
            </configuration>
        </plugin>
    </plugins>
</build>
----
 - -===== Create new Quarkus Core extension modules using Maven - -Quarkus provides a `create-extension` Maven Mojo to initialize your extension project. - -It will try to auto-detect its options: - -* from `quarkus` (Quarkus Core) or `quarkus/extensions` directory, it will use the 'Quarkus Core' extension layout and defaults. -* with `-DgroupId=io.quarkiverse.[extensionId]`, it will use the 'Quarkiverse' extension layout and defaults. -* in other cases it will use the 'Standalone' extension layout and defaults. -* we may introduce other layout types in the future. - -TIP: You may omit all parameters to use the interactive mode: `mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create-extension -N` - -As an example, let's add a new extension called `my-ext` to the Quarkus source tree: - -[source,bash, subs=attributes+] ---- -git clone https://github.com/quarkusio/quarkus.git -cd quarkus -mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create-extension -N \ - -DextensionId=my-ext \ - -Dname="My Extension" ---- - -NOTE: By default, the `groupId`, `version`, `quarkusVersion`, `namespaceId`, and `namespaceName` will be consistent with other Quarkus core extensions. - -The above sequence of commands does the following: - -* Creates four new Maven modules: -** `quarkus-my-ext-parent` in the `extensions/my-ext` directory -** `quarkus-my-ext` in the `extensions/my-ext/runtime` directory -** `quarkus-my-ext-deployment` in the `extensions/my-ext/deployment` directory; a basic `MyExtProcessor` class is generated in this module. -** `quarkus-my-ext-integration-test` in the `integration-tests/my-ext/deployment` directory; an empty JAX-RS Resource class and two test classes (for JVM mode and native mode) are generated in this module.
-* Links these four modules where necessary: -** `quarkus-my-ext-parent` is added to the `<modules>` of `quarkus-extensions-parent` -** `quarkus-my-ext` is added to the `<dependencyManagement>` of the Quarkus BOM (Bill of Materials) `bom/application/pom.xml` -** `quarkus-my-ext-deployment` is added to the `<dependencyManagement>` of the Quarkus BOM (Bill of Materials) `bom/application/pom.xml` -** `quarkus-my-ext-integration-test` is added to the `<modules>` of `quarkus-integration-tests-parent` - -NOTE: You also have to fill in the `quarkus-extension.yaml` file that describes your extension, inside the runtime module's `src/main/resources/META-INF` folder. - -This is the `quarkus-extension.yaml` of the `quarkus-agroal` extension; you can use it as an example: - -[source,yaml] ---- -name: "Agroal - Database connection pool" -metadata: - keywords: - - "agroal" - - "database-connection-pool" - - "datasource" - - "jdbc" - guide: "https://quarkus.io/guides/datasource" - categories: - - "data" - status: "stable" ---- - -TIP: The `name` parameter of the mojo is optional. -If you do not specify it on the command line, the plugin will derive it from `extensionId` by replacing dashes with spaces and uppercasing each token. -So you may consider omitting an explicit `name` in some cases. - -// The following link should point to the mojo page once https://github.com/quarkusio/quarkusio.github.io/issues/265 is fixed -Please refer to https://github.com/quarkusio/quarkus/blob/{quarkus-version}/devtools/maven/src/main/java/io/quarkus/maven/CreateExtensionMojo.java[CreateExtensionMojo JavaDoc] for all the available options of the mojo. - -==== Using Gradle - -You will need to apply the `io.quarkus.extension` plugin in the `runtime` module of your extension project. -The plugin includes the `extensionDescriptor` task that will generate `META-INF/quarkus-extension.properties` and `META-INF/quarkus-extension.yml` files. -The plugin also enables the `io.quarkus:quarkus-extension-processor` annotation processor in both `deployment` and `runtime` modules.
-The name of the deployment module can be configured in the plugin by setting the `deploymentArtifact` property. The property is set to `deployment` by default: - -[source,groovy,subs=attributes+] ---- -plugins { - id 'java' - id 'io.quarkus.extension' -} - -quarkusExtension { - deploymentArtifact = 'deployment' -} - -dependencies { - implementation platform('io.quarkus:quarkus-bom:{quarkus-version}') -} ---- - -[WARNING] -==== -This plugin is still experimental; it does not validate the extension dependencies as the equivalent Maven plugin does. -==== - -=== Build Step Processors - -Work is done at augmentation time by _build steps_ which produce and consume _build items_. The build steps found in -the deployment modules that correspond to the extensions in the project build are automatically wired together and executed -to produce the final build artifact(s). - -==== Build steps - -A _build step_ is a non-static method which is annotated with the `@io.quarkus.deployment.annotations.BuildStep` annotation. -Each build step may <<consuming-values,consume>> items that are produced by earlier stages, and <<producing-values,produce>> items that can be consumed by later stages. Build steps are normally only run when they produce a build item that is -ultimately consumed by another step. - -Build steps are normally placed on plain classes within an extension's deployment module. The classes are automatically -instantiated during the augment process and utilize <<injection,injection>>. - -[id='build-items'] -==== Build items - -Build items are concrete, final subclasses of the abstract `io.quarkus.builder.item.BuildItem` class. Each build item represents -some unit of information that must be passed from one stage to another. The base `BuildItem` class may not itself be directly -subclassed; rather, there are abstract subclasses for each of the kinds of build items that _may_ be created: -<<simple-build-items,simple build items>>, <<multi-build-items,multi build items>>, and <<empty-build-items,empty build items>>. - -Think of build items as a way for different extensions to communicate with one another.
For example, a build item can: - -- expose the fact that a database configuration exists -- consume that database configuration (e.g. a connection pool extension or an ORM extension) -- ask an extension to do work for another extension: e.g. an extension wanting to define a new CDI bean and asking the ArC extension -to do so - -This is a very flexible mechanism. - -NOTE: `BuildItem` instances should be immutable, as the producer/consumer model does not allow for mutation to be correctly -ordered. This is not enforced but failure to adhere to this rule can result in race conditions. - -[id='simple-build-items'] -===== Simple build items - -Simple build items are final classes which extend `io.quarkus.builder.item.SimpleBuildItem`. Simple build items may only -be produced by one step in a given build; if multiple steps in a build declare that they produce the same simple build item, -an error is raised. Any number of build steps may consume a simple build item. A build step which consumes a simple -build item will always run _after_ the build step which produced that item. - -.Example of a single build item -[source%nowrap,java] ----- -/** - * The build item which represents the Jandex index of the application, - * and would normally be used by many build steps to find usages - * of annotations. - */ -public final class ApplicationIndexBuildItem extends SimpleBuildItem { - - private final Index index; - - public ApplicationIndexBuildItem(Index index) { - this.index = index; - } - - public Index getIndex() { - return index; - } -} ----- - - -[id='multi-build-items'] -===== Multi build items - -Multiple or "multi" build items are final classes which extend `io.quarkus.builder.item.MultiBuildItem`. Any number of -multi build items of a given class may be produced by any number of steps, but any steps which consume multi build items -will only run _after_ every step which can produce them has run. 
 - -.Example of a multiple build item -[source%nowrap,java] ---- -public final class ServiceWriterBuildItem extends MultiBuildItem { - private final String serviceName; - private final List<String> implementations; - - public ServiceWriterBuildItem(String serviceName, String... implementations) { - this.serviceName = serviceName; - // Make sure it's immutable - this.implementations = Collections.unmodifiableList( - Arrays.asList( - implementations.clone() - ) - ); - } - - public String getServiceName() { - return serviceName; - } - - public List<String> getImplementations() { - return implementations; - } -} ---- - -.Example of multiple build item usage -[source%nowrap,java] ---- -/** - * This build step produces a single multi build item that declares two - * providers of one configuration-related service. - */ -@BuildStep -public ServiceWriterBuildItem registerOneService() { - return new ServiceWriterBuildItem( - Converter.class.getName(), - MyFirstConfigConverterImpl.class.getName(), - MySecondConfigConverterImpl.class.getName() - ); -} - -/** - * This build step produces several multi build items that declare multiple - * providers of multiple configuration-related services. - */ -@BuildStep -public void registerSeveralServices( - BuildProducer<ServiceWriterBuildItem> providerProducer -) { - providerProducer.produce(new ServiceWriterBuildItem( - Converter.class.getName(), - MyThirdConfigConverterImpl.class.getName(), - MyFourthConfigConverterImpl.class.getName() - )); - providerProducer.produce(new ServiceWriterBuildItem( - ConfigSource.class.getName(), - MyConfigSourceImpl.class.getName() - )); -} - -/** - * This build step aggregates all the produced service providers - * and outputs them as resources.
 - */ -@BuildStep -public void produceServiceFiles( - List<ServiceWriterBuildItem> items, - BuildProducer<GeneratedResourceBuildItem> resourceProducer -) throws IOException { - // Aggregate all of the providers - - Map<String, Set<String>> map = new HashMap<>(); - for (ServiceWriterBuildItem item : items) { - String serviceName = item.getServiceName(); - for (String implName : item.getImplementations()) { - map.computeIfAbsent( - serviceName, - k -> new LinkedHashSet<>() - ).add(implName); - } - } - - // Now produce the resource(s) for the SPI files - for (Map.Entry<String, Set<String>> entry : map.entrySet()) { - String serviceName = entry.getKey(); - try (ByteArrayOutputStream os = new ByteArrayOutputStream()) { - try (OutputStreamWriter w = new OutputStreamWriter(os, StandardCharsets.UTF_8)) { - for (String implName : entry.getValue()) { - w.write(implName); - w.write(System.lineSeparator()); - } - w.flush(); - } - resourceProducer.produce( - new GeneratedResourceBuildItem( - "META-INF/services/" + serviceName, - os.toByteArray() - ) - ); - } - } -} ---- - -[id='empty-build-items'] -===== Empty build items - -Empty build items are final (usually empty) classes which extend `io.quarkus.builder.item.EmptyBuildItem`. -They represent build items that don't actually carry any data, and allow such items to be produced and consumed -without having to instantiate empty classes. They cannot themselves be instantiated. - -.Example of an empty build item -[source%nowrap,java] ---- -public final class NativeImageBuildItem extends EmptyBuildItem { - // empty -} ---- - -Empty build items can represent "barriers" which can impose ordering between steps. They can also be used in -the same way that popular build systems use "pseudo-targets", which is to say that the build item can represent a -conceptual goal that does not have a concrete representation. - -.Example of usage of an empty build item in a "pseudo-target" style -[source%nowrap,java] ---- -/** - * Contrived build step that produces the native image on disk.
The main augmentation
 * step (which is run by Maven or Gradle) would be declared to consume this empty item,
 * causing this step to be run.
 */
@BuildStep
@Produce(NativeImageBuildItem.class)
void produceNativeImage() {
    // ...
    // (produce the native image)
    // ...
}
----

.Example of usage of an empty build item in a "barrier" style
[source%nowrap,java]
----
/**
 * This would always run after {@link #produceNativeImage()} completes, producing
 * an instance of {@code SomeOtherBuildItem}.
 */
@BuildStep
@Consume(NativeImageBuildItem.class)
SomeOtherBuildItem secondBuildStep() {
    return new SomeOtherBuildItem("foobar");
}
----

[id='injection']
==== Injection

Classes which contain build steps support the following types of injection:

- Constructor parameter injection
- Field injection
- Method parameter injection (for build step methods only)

Build step classes are instantiated and injected for each build step invocation, and are discarded afterwards. State
should only be communicated between build steps by way of build items, even if the steps are on the same class.

NOTE: Final fields are not considered for injection, but can be populated by way of constructor parameter injection
if desired. Static fields are never considered for injection.

The types of values that can be injected include:

- Simple build items produced by previous build steps
- `BuildProducer` instances to produce items for subsequent build steps
- <<configuration,Configuration>> types
- Recorder objects for <<bytecode-recording,bytecode recording>>

WARNING: Objects which are injected into a build step method or its class _must not_ be used outside of that method's
execution.

NOTE: Injection is resolved at compile time via an annotation processor,
and the resulting code does not have permission to inject private fields or invoke private methods.
[id='producing-values']
==== Producing values

A build step may produce values for subsequent steps in several possible ways:

- By returning a simple or multi build item instance
- By returning a `List` of a multi build item class
- By injecting a `BuildProducer` of a simple or multi build item class
- By annotating the method with `@io.quarkus.deployment.annotations.Produce`, giving the class name of an
<<empty-build-items,empty build item>>

If a simple build item is declared on a build step, it _must_ be produced during that build step, otherwise an error
will result. Build producers which are injected into steps _must not_ be used outside of that step.

Note that a `@BuildStep` method will only be called if it produces something that another consumer or the final output
requires. If there is no consumer for a particular item then it will not be produced. What is required will depend on
the final target that is being produced. For example, when running in developer mode the final output will not ask
for GraalVM-specific build items such as `ReflectiveClassBuildItem`, so methods that only produce these
items will not be invoked.

[id='consuming-values']
==== Consuming values

A build step may consume values from previous steps in the following ways:

- By injecting a simple build item
- By injecting an `Optional` of a simple build item class
- By injecting a `List` of a multi build item class
- By annotating the method with `@io.quarkus.deployment.annotations.Consume`, giving the class name of an
<<empty-build-items,empty build item>>

Normally it is an error for a step which is included to consume a simple build item that is not produced by any other
step. In this way, it is guaranteed that all of the declared values will be present and non-`null` when a step is run.

Sometimes a value isn't necessary for the build to complete, but might inform some behavior of the build step if it is
present. In this case, the value can be optionally injected.

NOTE: Multi build values are always considered _optional_.
If not present, an empty list will be injected.

[id='producing-weak-values']
===== Weak value production

Normally a build step is included whenever it produces any build item which is in turn consumed by any other build step. In this way,
only the steps necessary to produce the final artifact(s) are included, and steps which pertain to extensions which are
not installed or which only produce build items which are not relevant for the given artifact type are excluded.

For cases where this is not desired behavior, the `@io.quarkus.deployment.annotations.Weak` annotation may be used. This
annotation indicates that the build step should not automatically be included solely on the basis of producing the annotated value.

.Example of producing a build item weakly
[source%nowrap,java]
----
/**
 * This build step is only run if something consumes the ExecutorClassBuildItem.
 */
@BuildStep
void createExecutor(
        @Weak BuildProducer<GeneratedClassBuildItem> classConsumer,
        BuildProducer<ExecutorClassBuildItem> executorClassConsumer) {
    ClassWriter cw = new ClassWriter(Gizmo.ASM_API_VERSION);
    String className = generateClassThatCreatesExecutor(cw); // <1>
    classConsumer.produce(new GeneratedClassBuildItem(true, className, cw.toByteArray()));
    executorClassConsumer.produce(new ExecutorClassBuildItem(className));
}
----
<1> This method (not provided in this example) would generate the class using the ASM API.

Certain types of build items are generally always consumed, such as generated classes or resources.
An extension might produce a build item along with a generated class to facilitate the usage
of that build item. Such a build step would use the `@Weak` annotation on the generated class build item, while normally
producing the other build item. If the other build item is ultimately consumed by something, then the step would run
and the class would be generated. If nothing consumes the other build item, the step would not be included in the build
process.
In the example above, `GeneratedClassBuildItem` would only be produced if `ExecutorBuildItem` is consumed by
some other build step.

Note that when using <<bytecode-recording,bytecode recorders>>, the implicitly generated class can be declared to be weak by
using the `optional` attribute of the `@io.quarkus.deployment.annotations.Record` annotation.

.Example of using a bytecode recorder where the generated class is weakly produced
[source%nowrap,java]
----
/**
 * This build step is only run if something consumes the ExecutorBuildItem.
 */
@BuildStep
@Record(value = ExecutionTime.RUNTIME_INIT, optional = true) // <1>
ExecutorBuildItem createExecutor( // <2>
        ExecutorTemplate executorTemplate,
        ThreadPoolConfig threadPoolConfig,
        ShutdownContextBuildItem shutdownContextBuildItem,
        LaunchModeBuildItem launchModeBuildItem) {

    return new ExecutorBuildItem(
        executorTemplate.setupRunTime(
            shutdownContextBuildItem,
            threadPoolConfig,
            launchModeBuildItem.getLaunchMode()
        )
    );
}
----
<1> Note the `optional` attribute.
<2> This example is using recorder proxies; see the section on <<bytecode-recording,bytecode recording>> for more information.

==== Application Archives

The `@BuildStep` annotation can also register marker files that determine which archives on the class path are considered
to be 'Application Archives', and will therefore get indexed. This is done via the `applicationArchiveMarkers` attribute. For
example the ArC extension registers `META-INF/beans.xml`, which means that all archives on the class path with a `beans.xml`
file will be indexed.

==== Using the Thread Context Class Loader

The build step will be run with a TCCL that can load user classes from the deployment in a transformer-safe way.
This class loader only lasts for the life of the augmentation, and is discarded afterwards.
The classes will be loaded again in a different class loader at runtime.
This means that loading a class during augmentation does not stop it from being transformed when running in development/test mode.
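The set-and-restore discipline behind such a scoped class loader can be sketched in plain Java. This is an illustration only: the `TcclScope` helper and the throwaway `URLClassLoader` are invented for the example, and Quarkus manages the augmentation class loader itself.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class TcclScope {
    /**
     * Runs an action with the given class loader as the thread's context
     * class loader, restoring the previous one afterwards.
     */
    public static void runWith(ClassLoader scoped, Runnable action) {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(scoped);
        try {
            action.run();
        } finally {
            // once the scope ends, the loader no longer leaks via the thread
            current.setContextClassLoader(previous);
        }
    }

    public static void main(String[] args) {
        ClassLoader original = Thread.currentThread().getContextClassLoader();
        // a throwaway loader standing in for the augmentation class loader
        ClassLoader augmentation = new URLClassLoader(new URL[0], original);
        runWith(augmentation, () ->
                System.out.println(Thread.currentThread().getContextClassLoader() == augmentation));
        System.out.println(Thread.currentThread().getContextClassLoader() == original);
    }
}
```

Classes loaded through the scoped loader are distinct from the classes of the same name loaded by the runtime loader, which is why a class transformed during augmentation can still be transformed again at runtime.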
==== Adding external JARs to the indexer with IndexDependencyBuildItem

The index of scanned classes will not automatically include your external class dependencies.
To add dependencies, create a `@BuildStep` that produces `IndexDependencyBuildItem` objects, for a `groupId` and `artifactId`.

NOTE: It is important to specify all the required artifacts to be added to the indexer. No artifacts are implicitly added transitively.

The `Amazon Alexa` extension adds dependent libraries from the Alexa SDK that are used in Jackson JSON transformations, so that the reflective classes are identified and included at `BUILD_TIME`.

[source%nowrap,java]
----
@BuildStep
void addDependencies(BuildProducer<IndexDependencyBuildItem> indexDependency) {
    indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk"));
    indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-runtime"));
    indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-model"));
    indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-lambda-support"));
    indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-servlet-support"));
    indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-dynamodb-persistence-adapter"));
    indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-apache-client"));
    indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-model-runtime"));
}
----

With the artifacts added to the `Jandex` indexer, you can now search the index to identify classes implementing an interface, sub-classes of a specific class, or classes with a target annotation.

For example, the `Jackson` extension uses code like the following to search for annotations used in JSON deserialization,
and add them to the reflective hierarchy for `BUILD_TIME` analysis.
[source%nowrap,java]
----
DotName JSON_DESERIALIZE = DotName.createSimple(JsonDeserialize.class.getName());

IndexView index = combinedIndexBuildItem.getIndex();

// handle the various @JsonDeserialize cases
for (AnnotationInstance deserializeInstance : index.getAnnotations(JSON_DESERIALIZE)) {
    AnnotationTarget annotationTarget = deserializeInstance.target();
    if (CLASS.equals(annotationTarget.kind())) {
        DotName dotName = annotationTarget.asClass().name();
        Type jandexType = Type.create(dotName, Type.Kind.CLASS);
        reflectiveHierarchyClass.produce(new ReflectiveHierarchyBuildItem(jandexType));
    }
}
----

==== Visualizing build step dependencies

It can occasionally be useful to see a visual representation of the interactions between the various build steps. For such cases, adding `-Djboss.builder.graph-output=build.dot` when building an application
will result in the creation of the `build.dot` file in the project's root directory. See link:https://graphviz.org/resources/[this list] for software that can open the file and show the actual visual representation.

[[configuration]]
=== Configuration

Configuration in Quarkus is based on SmallRye Config, an implementation of the MicroProfile Config specification.
All of the standard features of MP-Config are supported; in addition, there are several extensions which are made available
by the SmallRye Config project as well as by Quarkus itself.

The values of these properties are configured in an `application.properties` file that follows the MicroProfile Config format.

Configuration of Quarkus extensions is injection-based, using annotations.

==== Configuration Keys

Leaf configuration keys are mapped to non-`private` fields via the `@io.quarkus.runtime.annotations.ConfigItem` annotation.
- -NOTE: Though the SmallRye Config project is used for implementation, the standard `@ConfigProperty` annotation does not have the -same semantics that are needed to support configuration within extensions. - -Configuration keys are normally derived from the field names that they are tied to. This is done by de-camel-casing the name and then -joining the segments with hyphens (`-`). Some examples: - -* `bindAddress` becomes `bind-address` -* `keepAliveTime` becomes `keep-alive-time` -* `requestDNSTimeout` becomes `request-dns-timeout` - -The name can also be explicitly specified by giving a `name` attribute to the `@ConfigItem` annotation. - -NOTE: Though it is possible to override the configuration key name using the `name` attribute of `@ConfigItem`, -normally this should only be done in cases where (for example) the configuration key name is the same as a Java keyword. - -==== Configuration Value types - -The type of the field with the `@ConfigItem` annotation determines the conversion that is applied to it. 
Quarkus extensions may use the full range of configuration types made available by SmallRye Config, which includes:

* All primitive types and primitive wrapper types
* `String`
* Any type which has a constructor accepting a single argument of type `String` or `CharSequence`
* Any type which has a static method named `of` which accepts a single argument of type `String`
* Any type which has a static method named `valueOf` or `parse` which accepts a single argument of type `CharSequence` or `String`
* `java.time.Duration`
* `java.util.regex.Pattern`
* `java.nio.file.Path`
* `io.quarkus.runtime.configuration.MemorySize` to represent data sizes
* `java.net.InetSocketAddress`, `java.net.InetAddress` and `org.wildfly.common.net.CidrAddress`
* `java.util.Locale` where the string value is an IETF BCP 47 language tag
* `java.nio.charset.Charset` where the string value is a canonical name or an alias
* `java.time.ZoneId` where the string value is parsed via `java.time.ZoneId.of(String)`
* A `List` or `Optional` of any of the above types
* `OptionalInt`, `OptionalLong`, `OptionalDouble`

In addition, custom converters may be registered by adding their fully qualified class name to the file
`META-INF/services/org.eclipse.microprofile.config.spi.Converter`.

Though these implicit converters use reflection, Quarkus will automatically ensure that they are loaded at the appropriate time.

===== Optional Values

If the configuration type is one of the optional types, then empty values are allowed for the configuration key; otherwise,
specifying an empty value results in a configuration error which prevents the application from starting. This
is especially relevant to configuration properties of inherently emptiable types such as `List`, `Set`, and `String`. Such
values will never be empty; if an empty value is given, an empty `Optional` is used instead.
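The constructor/`of`/`valueOf`/`parse` conventions above can be sketched with plain reflection. This is an illustration of the lookup rules only: the `ImplicitConvert` class and the exact resolution order are assumptions made for the example, and SmallRye Config's real implementation differs in detail.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.time.Duration;

public class ImplicitConvert {
    /**
     * Converts a raw String using the conventions listed above:
     * a String constructor, then static of/valueOf/parse factories.
     */
    public static <T> T convert(Class<T> type, String value) throws Exception {
        try { // 1. constructor taking a String
            Constructor<T> ctor = type.getConstructor(String.class);
            return ctor.newInstance(value);
        } catch (NoSuchMethodException ignored) { }
        // 2. static factory methods taking a String or CharSequence
        for (String factory : new String[] { "of", "valueOf", "parse" }) {
            for (Class<?> arg : new Class<?>[] { String.class, CharSequence.class }) {
                try {
                    Method m = type.getMethod(factory, arg);
                    if (Modifier.isStatic(m.getModifiers())) {
                        return type.cast(m.invoke(null, value));
                    }
                } catch (NoSuchMethodException ignored) { }
            }
        }
        throw new IllegalArgumentException("No implicit conversion for " + type);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(convert(Integer.class, "42"));       // Integer(String) constructor
        System.out.println(convert(Duration.class, "PT5S"));    // Duration.parse(CharSequence)
        System.out.println(convert(StringBuilder.class, "hi")); // StringBuilder(String) constructor
    }
}
```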
- -==== Configuration Default Values - -A configuration item can be marked to have a default value. The default value is used when no matching configuration key -is specified in the configuration. - -Configuration items with a primitive type (such as `int` or `boolean`) implicitly use a default value of `0` or `false`. The -sole exception to this rule is the `char` type which does not have an implicit default value. - -A property with a default value is not implicitly optional. If a non-optional configuration item with a default value -is explicitly specified to have an empty value, the application will report a configuration error and will not start. If -it is desired for a property to have a default value and also be optional, it must have an `Optional` type as described above. - -==== Configuration Groups - -Configuration values are always collected into grouping classes which are marked with the `@io.quarkus.runtime.annotations.ConfigGroup` -annotation. These classes contain a field for each key within its group. In addition, configuration groups can be nested. - -===== Optional Configuration Groups - -A nested configuration group may be wrapped with an `Optional` type. In this case, the group is not populated unless one -or more properties within that group are specified in the configuration. If the group is populated, then any required -properties in the group must also be specified otherwise a configuration error will be reported and the application will -not start. - -==== Configuration Maps - -A `Map` can be used for configuration at any position where a configuration group would be allowed. The key type of such a -map *must* be `String`, and its value may be either a configuration group class or a valid leaf type. The configuration -key segment following the map's key segment will be used as the key for map values. 
[id='configuration-roots']
==== Configuration Roots

Configuration roots are configuration groups that appear in the root of the configuration tree. A configuration property's full
name is determined by joining the string `quarkus.` with the hyphenated names of the fields that form the path from the root to the
leaf field. For example, if I define a configuration root group called `ThreadPool`, with a nested group in a field named `sizing`
that in turn contains a field called `minSize`, the final configuration property will be called `quarkus.thread-pool.sizing.min-size`.

A configuration root's name can be given with the `name` property, or it can be inferred from the class name. If the latter,
then the configuration key will be the class name, minus any `Config` or `Configuration` suffix, broken up by camel-case,
lowercased, and re-joined using hyphens (`-`).

A configuration root's class name can contain an extra suffix segment for the case where there are configuration
roots for multiple configuration phases. Classes which correspond to the `BUILD_TIME` and `BUILD_AND_RUN_TIME_FIXED` phases
may end with `BuildTimeConfig` or `BuildTimeConfiguration`, classes which correspond to the `RUN_TIME` phase
may end with `RuntimeConfig`, `RunTimeConfig`, `RuntimeConfiguration` or `RunTimeConfiguration`, while classes which correspond
to the `BOOTSTRAP` configuration may end with `BootstrapConfig` or `BootstrapConfiguration`.

NOTE: The current implementation still uses the injection site to determine the root set, so to avoid migration problems, it
is recommended that the injection site (field or parameter) have the same name as the configuration root class until
this change is complete.

===== Configuration Root Phases

Configuration roots are strictly bound by configuration phase, and attempting to access a configuration root from outside of its corresponding phase will result in an error.
A configuration root dictates when its contained keys are read from configuration, and when they are available to applications. The phases defined by `io.quarkus.runtime.annotations.ConfigPhase` are as follows:

[cols="<3m,^1,^1,^1,^1,<8",options="header"]
|===
| Phase name
| Read & avail. at build time
| Avail. at run time
| Read during static init
| Re-read during startup (native executable)
| Notes

| BUILD_TIME
| ✓
| ✗
| ✗
| ✗
| Appropriate for things which affect build.

| BUILD_AND_RUN_TIME_FIXED
| ✓
| ✓
| ✗
| ✗
| Appropriate for things which affect build and must be visible for run time code. Not read from config at run time.

| BOOTSTRAP
| ✗
| ✓
| ✗
| ✓
| Used when runtime configuration needs to be obtained from an external system (like `Consul`), but details of that system need to be configurable (for example Consul's URL). The high level way this works is by using the standard Quarkus config sources (such as properties files, system properties, etc.) and producing `ConfigSourceProvider` objects which are subsequently taken into account by Quarkus when creating the final runtime `Config` object.

| RUN_TIME
| ✗
| ✓
| ✓
| ✓
| Not available at build, read at start in all modes.

|===

For all cases other than the `BUILD_TIME` case, the configuration root class and all of the configuration groups and types contained therein must be located in, or reachable from, the extension's run time artifact. Configuration roots of phase `BUILD_TIME` may be located in or reachable from either of the extension's run time or deployment artifacts.

IMPORTANT: _Bootstrap_ configuration steps are executed during runtime-init *before* any other runtime steps. This means that code executed as part of this step cannot access anything that gets initialized in runtime init steps (runtime synthetic CDI beans being one such example).
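The naming rules described above (suffix stripping plus de-camel-casing) can be sketched in plain Java. This is an illustration of the convention only, not the actual Quarkus implementation; the `ConfigNames` class and its regex-based approach are assumptions made for the example.

```java
public class ConfigNames {
    /** De-camel-cases a name and joins the segments with hyphens, e.g. requestDNSTimeout -> request-dns-timeout. */
    public static String hyphenate(String name) {
        return name
                .replaceAll("([a-z0-9])([A-Z])", "$1-$2")    // split at lower/digit -> upper boundaries
                .replaceAll("([A-Z]+)([A-Z][a-z])", "$1-$2") // keep acronyms together: DNSTimeout -> DNS-Timeout
                .toLowerCase();
    }

    /** Derives the root key from a config root class name by stripping known suffixes, then hyphenating. */
    public static String rootKey(String className) {
        for (String suffix : new String[] {
                "BuildTimeConfiguration", "BuildTimeConfig",
                "RunTimeConfiguration", "RunTimeConfig",
                "RuntimeConfiguration", "RuntimeConfig",
                "Configuration", "Config" }) {
            if (className.endsWith(suffix)) {
                className = className.substring(0, className.length() - suffix.length());
                break;
            }
        }
        return "quarkus." + hyphenate(className);
    }

    public static void main(String[] args) {
        // The ThreadPool example from above: root + nested group field + leaf field
        System.out.println(rootKey("ThreadPool") + "." + hyphenate("sizing") + "." + hyphenate("minSize"));
    }
}
```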
==== Configuration Example

[source%nowrap,java]
----
import io.quarkus.runtime.annotations.ConfigGroup;
import io.quarkus.runtime.annotations.ConfigItem;
import io.quarkus.runtime.annotations.ConfigPhase;
import io.quarkus.runtime.annotations.ConfigRoot;

import java.io.File;
import java.util.logging.Level;

@ConfigGroup <1>
public class FileConfig {

    /**
     * Enable logging to a file.
     */
    @ConfigItem(defaultValue = "true")
    boolean enable;

    /**
     * The log format.
     */
    @ConfigItem(defaultValue = "%d{yyyy-MM-dd HH:mm:ss,SSS} %h %N[%i] %-5p [%c{1.}] (%t) %s%e%n")
    String format;

    /**
     * The level of logs to be written into the file.
     */
    @ConfigItem(defaultValue = "ALL")
    Level level;

    /**
     * The name of the file in which logs will be written.
     */
    @ConfigItem(defaultValue = "application.log")
    File path;

}

/**
 * Logging configuration.
 */
@ConfigRoot(phase = ConfigPhase.RUN_TIME) <2>
public class LogConfiguration {

    // ...

    /**
     * Configuration properties for the logging file handler.
     */
    FileConfig file;
}

public class LoggingProcessor {
    // ...

    /**
     * Logging configuration.
     */
    LogConfiguration config; <3>
}
----

A configuration property name can be split into segments. For example, a property name like
`quarkus.log.file.enable` can be split into the following segments:

* `quarkus` - a namespace claimed by Quarkus which is a prefix for all `@ConfigRoot` classes,
* `log` - a name segment which corresponds to the `LogConfiguration` class annotated with `@ConfigRoot`,
* `file` - a name segment which corresponds to the `file` field in this class,
* `enable` - a name segment which corresponds to the `enable` field in the `FileConfig` class annotated with `@ConfigGroup`.

<1> The `FileConfig` class is annotated with `@ConfigGroup` to indicate that this is an aggregate
configuration object containing a collection of configurable properties, rather than being a simple configuration
key type.
<2> The `@ConfigRoot` annotation indicates that this object is a configuration root group, in this case one which
corresponds to the `log` segment. The class name is used to link the configuration root group with the segment of the
property name: the `Configuration` part is stripped off from the `LogConfiguration` class name and the remaining `Log`
is lowercased to become `log`. Since all `@ConfigRoot` annotated classes use `quarkus` as a prefix, this finally
becomes `quarkus.log` and represents the properties whose names begin with `quarkus.log.*`.
<3> Here the `LoggingProcessor` injects a `LogConfiguration` instance automatically by detecting the `@ConfigRoot`
annotation.

A corresponding `application.properties` for the above example could be:

[source%nowrap,properties]
----
quarkus.log.file.enable=true
quarkus.log.file.level=DEBUG
quarkus.log.file.path=/tmp/debug.log
----

Since `format` is not defined in these properties, the default value from `@ConfigItem` will be used instead.


==== Enhanced conversion
You can use enhanced conversion of a config item by using the `@ConvertWith` annotation, which accepts a `Converter` class object.
If the annotation is present on a config item, the implicit or custom built-in converter otherwise in use is overridden by the provided converter.
To do this, see the example below, which converts `YES` or `NO` values to `boolean`.
[source%nowrap,java]
----
@ConfigRoot
public class SomeConfig {
    /**
     * Config item with enhanced converter
     */
    @ConvertWith(YesNoConverter.class) // <1>
    @ConfigItem(defaultValue = "NO")
    Boolean answer;

    public static class YesNoConverter implements Converter<Boolean> {

        public YesNoConverter() {}

        @Override
        public Boolean convert(String s) {
            if (s == null || s.isEmpty()) {
                return false;
            }

            switch (s) {
                case "YES":
                    return true;
                case "NO":
                    return false;
            }

            throw new IllegalArgumentException("Unsupported value " + s + " given");
        }
    }
}
----
<1> Override the default `Boolean` converter and use the provided converter, which accepts `YES` or `NO` config values.


The corresponding `application.properties` will look like:
[source%nowrap,properties]
----
quarkus.some.answer=YES
----

[NOTE]
=====
Enum values (config items) are translated to their hyphenated (kebab-case) equivalents by default. The table below shows enum names and their canonical equivalents:

|===
|Java enum| Canonical equivalent

|DISCARD
|discard

|READ_UNCOMMITTED
|read-uncommitted

|SIGUSR1
|sigusr1

|JavaEnum
|java-enum

|MAKING_LifeDifficult
|making-life-difficult

|YeOldeJBoss
|ye-olde-jboss

|camelCaseEnum
|camel-case-enum

|=== 

To use the default behaviour, which is based on the implicit converter or a custom-defined one, add the `@DefaultConverter` annotation to the configuration item:
[source%nowrap,java]
----
@ConfigRoot
public class SomeLogConfig {
    /**
     * The level of logs to be written into the file.
     */
    @DefaultConverter // <1>
    @ConfigItem(defaultValue = "ALL")
    Level level;
}
----
<1> Use the default converter (built-in or a custom converter) to convert the `Level.class` enum.
=====


=== Conditional Step Inclusion

It is possible to only include a given `@BuildStep` under certain conditions. The `@BuildStep` annotation
has two optional parameters: `onlyIf` and `onlyIfNot`.
These parameters can be set to one or more classes
which implement `BooleanSupplier`. The build step will only be included when the method returns
`true` (for `onlyIf`) or `false` (for `onlyIfNot`).

The condition class can inject <<configuration-roots,configuration roots>> as long as they belong to
a build-time phase. Run time configuration is not available for condition classes.

The condition class may also inject a value of type `io.quarkus.runtime.LaunchMode`.
Constructor parameter and field injection is supported.

.An example of a conditional build step
[source%nowrap,java]
----
@BuildStep(onlyIf = IsDevMode.class)
LogCategoryBuildItem enableDebugLogging() {
    return new LogCategoryBuildItem("org.your.quarkus.extension", Level.DEBUG);
}

static class IsDevMode implements BooleanSupplier {
    LaunchMode launchMode;

    public boolean getAsBoolean() {
        return launchMode == LaunchMode.DEVELOPMENT;
    }
}
----

If you need to make your build step conditional on the presence or absence of another extension, you can
use extension capabilities for that.

[id='bytecode-recording']
=== Bytecode Recording

One of the main outputs of the build process is recorded bytecode. This bytecode actually sets up the runtime environment. For example, in order to start Undertow, the resulting application will have some bytecode that directly registers all
Servlet instances and then starts Undertow.

As writing bytecode directly is complex, this is instead done via bytecode recorders. At deployment time,
invocations are made on recorder objects that contain the actual runtime logic, but instead of these invocations
proceeding as normal they are intercepted and recorded (hence the name). This recording is then used to generate bytecode
that performs the same sequence of invocations at runtime. This is essentially a form of deferred execution where invocations
made at deployment time get deferred until runtime.

Let's look at the classic 'Hello World' type example.
To do this the Quarkus way, we would create a recorder as follows:

[source%nowrap,java]
----
@Recorder
class HelloRecorder {

    public void sayHello(String name) {
        System.out.println("Hello " + name);
    }

}
----

And then create a build step that uses this recorder:

[source%nowrap,java]
----
@Record(RUNTIME_INIT)
@BuildStep
public void helloBuildStep(HelloRecorder recorder) {
    recorder.sayHello("World");
}
----

When this build step is run, nothing is printed to the console. This is because the `HelloRecorder` that is injected is
actually a proxy that records all invocations. Instead, if we run the resulting Quarkus program, we will see 'Hello World'
printed to the console.

Methods on a recorder can return a value, which must be proxiable (if you want to return a non-proxiable item, wrap it
in `io.quarkus.runtime.RuntimeValue`). These proxies may not be invoked directly; however, they can be passed
into other recorder methods. This can be any recorder method, including ones from other `@BuildStep` methods, so a common pattern
is to produce `BuildItem` instances that wrap the results of these recorder invocations.

For instance, in order to make arbitrary changes to a Servlet deployment, Undertow has a `ServletExtensionBuildItem`,
which is a `MultiBuildItem` that wraps a `ServletExtension` instance. I can return a `ServletExtension` from a recorder
in another module, and Undertow will consume it and pass it into the recorder method that starts Undertow.

At runtime the bytecode will be invoked in the order it is generated. This means that build step dependencies implicitly
control the order in which generated bytecode is run. In the example above we know that the bytecode that produces a
`ServletExtensionBuildItem` will be run before the bytecode that consumes it.
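The record-then-replay idea can be sketched in plain Java with a dynamic proxy. This is purely illustrative: the `RecordingDemo` class is invented for the example, and Quarkus generates bytecode rather than replaying a list of invocations at runtime. Still, it shows why the recorder proxy prints nothing at "deployment time":

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class RecordingDemo {
    public interface Greeter {
        void sayHello(String name);
    }

    /** A recorded invocation: which method was called, with which arguments. */
    public static final class Invocation {
        public final Method method;
        public final Object[] args;
        public Invocation(Method method, Object[] args) { this.method = method; this.args = args; }
    }

    /** "Deployment time": returns a proxy that records invocations instead of executing them. */
    public static <T> T recorder(Class<T> type, List<Invocation> recording) {
        return type.cast(Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[] { type },
                (proxy, method, args) -> {
                    recording.add(new Invocation(method, args));
                    return null; // the real logic is deferred
                }));
    }

    /** "Run time": replays the recorded invocations against the real object. */
    public static void replay(List<Invocation> recording, Object target) throws Exception {
        for (Invocation inv : recording) {
            inv.method.invoke(target, inv.args);
        }
    }

    public static void main(String[] args) throws Exception {
        List<Invocation> recording = new ArrayList<>();
        Greeter proxy = recorder(Greeter.class, recording);
        proxy.sayHello("World"); // nothing happens yet: the call is only recorded
        replay(recording, (Greeter) name -> System.out.println("Hello " + name)); // prints "Hello World"
    }
}
```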
The following objects can be passed to recorders:

- Primitives
- `String`
- `Class` objects
- Objects returned from a previous recorder invocation
- Objects with a no-arg constructor and getters/setters for all properties (or public fields)
- Objects with a constructor annotated with `@RecordableConstructor` whose parameter names match field names
- Any arbitrary object via the `io.quarkus.deployment.recording.RecorderContext#registerSubstitution(Class, Class, Class)` mechanism
- Arrays, `List`s and `Map`s of the above

==== Injecting Configuration into Recorders

Configuration objects with phase `RUNTIME` or `BUILD_AND_RUNTIME_FIXED` can be injected into recorders via constructor
injection. Just create a constructor that takes the configuration objects the recorder needs. If the recorder has multiple
constructors, you can annotate the one you want Quarkus to use with `@Inject`. If the recorder wants to inject runtime config
but is also used at static init time, then it needs to inject a `RuntimeValue` of the config instead; this value will only be set
when the runtime methods are being invoked.

==== RecorderContext

`io.quarkus.deployment.recording.RecorderContext` provides some convenience methods to enhance bytecode recording.
This includes the ability to register creation functions for classes without no-arg constructors, to register an object
substitution (basically a transformer from a non-serializable object to a serializable one and vice versa), and to create
a class proxy. This interface can be directly injected as a method parameter into any `@Record` method.

Calling `classProxy` with a given class name will create a `Class` instance that can be passed into recorder
methods, and at runtime will be substituted with the class whose name was passed in to `classProxy`. This is basically a
convenience to avoid the need to explicitly load classes in the recorders.
==== Printing step execution time

At times, it can be useful to know exactly how much time each startup task (which is the result of each bytecode recording) takes when the application is run.
The simplest way to determine this information is to launch the Quarkus application with the `-Dquarkus.debug.print-startup-times=true` system property.
The output will look something like:

[source%nowrap]
----
Build step LoggingResourceProcessor.setupLoggingRuntimeInit completed in: 42ms
Build step ConfigGenerationBuildStep.checkForBuildTimeConfigChange completed in: 4ms
Build step SyntheticBeansProcessor.initRuntime completed in: 0ms
Build step ConfigBuildStep.validateConfigProperties completed in: 1ms
Build step ResteasyStandaloneBuildStep.boot completed in: 95ms
Build step VertxHttpProcessor.initializeRouter completed in: 1ms
Build step VertxHttpProcessor.finalizeRouter completed in: 4ms
Build step LifecycleEventsBuildStep.startupEvent completed in: 1ms
Build step VertxHttpProcessor.openSocket completed in: 93ms
Build step ShutdownListenerBuildStep.setupShutdown completed in: 1ms
----

////
TODO: config integration
////
=== Contexts and Dependency Injection

==== Extension Points

As a CDI-based runtime, Quarkus extensions often make CDI beans available as part of the extension behavior.
However, the Quarkus DI solution does not support CDI Portable Extensions.
Instead, Quarkus extensions can make use of the various xref:cdi-reference.adoc[Build Time Extension Points].

=== Quarkus Dev UI

You can make your extension support the xref:dev-ui.adoc[Quarkus Dev UI] for a greater developer experience.

=== Extension-defined endpoints

Your extension can add additional, non-application endpoints to be served alongside endpoints
for Health, Metrics, OpenAPI, Swagger UI, etc.
Use a `NonApplicationRootPathBuildItem` to define an endpoint:

[source%nowrap,java]
----
@BuildStep
RouteBuildItem myExtensionRoute(NonApplicationRootPathBuildItem nonApplicationRootPathBuildItem) {
    return nonApplicationRootPathBuildItem.routeBuilder()
            .route("custom-endpoint")
            .handler(new MyCustomHandler())
            .displayOnNotFoundPage()
            .build();
}
----

Note that the path above does not start with a '/', indicating it is a relative path. The
endpoint will be served relative to the configured non-application endpoint root. The non-application
endpoint root is `/q` by default, which means the resulting endpoint will be found at `/q/custom-endpoint`.

Absolute paths are handled differently. If the example above called `route("/custom-endpoint")`, the resulting
endpoint would be found at `/custom-endpoint`.

If an extension needs nested non-application endpoints:

[source%nowrap,java]
----
@BuildStep
RouteBuildItem myNestedExtensionRoute(NonApplicationRootPathBuildItem nonApplicationRootPathBuildItem) {
    return nonApplicationRootPathBuildItem.routeBuilder()
            .nestedRoute("custom-endpoint", "deep")
            .handler(new MyCustomHandler())
            .displayOnNotFoundPage()
            .build();
}
----

Given a default non-application endpoint root of `/q`, this will create an endpoint at `/q/custom-endpoint/deep`.

Absolute paths also have an impact on nested endpoints. If the example above called `nestedRoute("custom-endpoint", "/deep")`,
the resulting endpoint would be found at `/deep`.

Refer to the xref:all-config.adoc#quarkus-vertx-http_quarkus.http.non-application-root-path[Quarkus Vertx HTTP configuration reference]
for details on how the non-application root path is configured.

=== Extension Health Check

Health checks are provided via the `quarkus-smallrye-health` extension. It provides both liveness and readiness check capabilities.
When writing an extension, it's beneficial to provide health checks that can be automatically included without the developer needing to write their own.

To provide a health check, you should do the following:

- Import the `quarkus-smallrye-health` extension as an **optional** dependency in your runtime module so that it will not impact the size of the application if the health check is not included.
- Create your health check following the xref:smallrye-health.adoc[SmallRye Health] guide. We advise providing only a readiness check for an extension (a liveness check is designed to express the fact that an application is up and needs to be lightweight).
- Import the `quarkus-smallrye-health-spi` library in your deployment module.
- Add a build step in your deployment module that produces a `HealthBuildItem`.
- Add a way to disable the extension health check via a config item `quarkus.<extension-name>.health.enabled` that should be enabled by default.

The following example from the Agroal extension provides a `DataSourceHealthCheck` to validate the readiness of a datasource.

[source%nowrap,java]
----
@BuildStep
HealthBuildItem addHealthCheck(AgroalBuildTimeConfig agroalBuildTimeConfig) {
    return new HealthBuildItem("io.quarkus.agroal.runtime.health.DataSourceHealthCheck",
            agroalBuildTimeConfig.healthEnabled);
}
----

=== Extension Metrics

The `quarkus-micrometer` extension and the `quarkus-smallrye-metrics` extension provide support for collecting metrics.
As a compatibility note, the `quarkus-micrometer` extension adapts the MP Metrics API to Micrometer library primitives, so the `quarkus-micrometer` extension can be enabled without breaking code that relies on the MP Metrics API.
Note that the metrics emitted by Micrometer are different; see the `quarkus-micrometer` extension documentation for more information.

NOTE: The compatibility layer for MP Metrics APIs will move to a different extension in the future.
There are two broad patterns that extensions can use to interact with an optional metrics extension to add their own metrics:

* Consumer pattern: An extension declares a `MetricsFactoryConsumerBuildItem` and uses that to provide a bytecode recorder to the metrics extension. When the metrics extension has initialized, it will iterate over registered consumers to initialize them with a `MetricsFactory`. This factory can be used to declare API-agnostic metrics, which can be a good fit for extensions that provide an instrumentable object for gathering statistics (e.g. Hibernate's `Statistics` class).

* Binder pattern: An extension can opt to use completely different gathering implementations depending on the metrics system. An `Optional<MetricsCapabilityBuildItem> metricsCapability` build step parameter can be used to declare or otherwise initialize API-specific metrics based on the active metrics extension (e.g. "smallrye-metrics" or "micrometer"). This pattern can be combined with the consumer pattern by using `MetricsFactory::metricsSystemSupported()` to test the active metrics extension within the recorder.

Remember that support for metrics is optional. Extensions can use an `Optional<MetricsCapabilityBuildItem> metricsCapability` parameter in their build step to test for the presence of an enabled metrics extension. Consider using additional configuration to control the behavior of metrics. Datasource metrics can be expensive, for example, so additional configuration flags are used to enable metrics collection on individual datasources.

When adding metrics for your extension, you may find yourself in one of the following situations:

1. An underlying library used by the extension uses a specific metrics API directly (either MP Metrics, Micrometer, or some other).
2. An underlying library uses its own mechanism for collecting metrics and makes them available at runtime using its own API, e.g. Hibernate's `Statistics` class, or Vert.x `MetricsOptions`.
3. An underlying library does not provide metrics (or there is no library at all) and you want to add instrumentation.

==== Case 1: The library uses a metrics library directly

If the library directly uses a metrics API, there are two options:

- Use an `Optional<MetricsCapabilityBuildItem> metricsCapability` parameter to test which metrics API is supported (e.g. "smallrye-metrics" or "micrometer") in your build step, and use that to selectively declare or initialize API-specific beans or build items.

- Create a separate build step that consumes a `MetricsFactory`, and use the `MetricsFactory::metricsSystemSupported()` method within the bytecode recorder to initialize required resources if the desired metrics API is supported (e.g. "smallrye-metrics" or "micrometer").

Extensions may need to provide a fallback if there is no active metrics extension or the extension doesn't support the API required by the library.

==== Case 2: The library provides its own metric API

There are two examples of a library providing its own metrics API:

- The extension defines an instrumentable object, as Agroal does with `io.agroal.api.AgroalDataSourceMetrics`, or
- The extension provides its own abstraction of metrics, as Jaeger does with `io.jaegertracing.spi.MetricsFactory`.

===== Observing instrumentable objects

Let's take the instrumentable object (`io.agroal.api.AgroalDataSourceMetrics`) case first. In this case, you can do the following:

- Define a `BuildStep` that produces a `MetricsFactoryConsumerBuildItem` that uses a `RUNTIME_INIT` or `STATIC_INIT` recorder to define a `MetricsFactory` consumer.
For example, the following creates a `MetricsFactoryConsumerBuildItem` if and only if metrics are enabled both for Agroal generally, and for a datasource specifically:
+
[source%nowrap,java]
----
@BuildStep
@Record(ExecutionTime.RUNTIME_INIT)
void registerMetrics(AgroalMetricsRecorder recorder,
        DataSourcesBuildTimeConfig dataSourcesBuildTimeConfig,
        BuildProducer<MetricsFactoryConsumerBuildItem> datasourceMetrics,
        List<AggregatedDataSourceBuildTimeConfigBuildItem> aggregatedDataSourceBuildTimeConfigs) {

    for (AggregatedDataSourceBuildTimeConfigBuildItem aggregatedDataSourceBuildTimeConfig : aggregatedDataSourceBuildTimeConfigs) {
        // Create a MetricsFactory consumer to register metrics for a data source
        // IFF metrics are enabled globally and for the data source
        // (they are enabled for each data source by default if they are also enabled globally)
        if (dataSourcesBuildTimeConfig.metricsEnabled &&
                aggregatedDataSourceBuildTimeConfig.getJdbcConfig().enableMetrics.orElse(true)) {
            datasourceMetrics.produce(new MetricsFactoryConsumerBuildItem(
                    recorder.registerDataSourceMetrics(aggregatedDataSourceBuildTimeConfig.getName())));
        }
    }
}
----

- The associated recorder should use the provided `MetricsFactory` to register metrics. For Agroal, this means using the `MetricsFactory` API to observe `io.agroal.api.AgroalDataSourceMetrics` methods. For example:
+
[source%nowrap,java]
----
/* RUNTIME_INIT */
public Consumer<MetricsFactory> registerDataSourceMetrics(String dataSourceName) {
    return new Consumer<MetricsFactory>() {
        @Override
        public void accept(MetricsFactory metricsFactory) {
            String tagValue = DataSourceUtil.isDefault(dataSourceName) ? "default" : dataSourceName;
            AgroalDataSourceMetrics metrics = getDataSource(dataSourceName).getMetrics();

            // When using MP Metrics, the builder uses the VENDOR registry by default.
            metricsFactory.builder("agroal.active.count")
                    .description("Number of active connections. These connections are in use and not available to be acquired.")
                    .tag("datasource", tagValue)
                    .buildGauge(metrics::activeCount);
            ....
----

The `MetricsFactory` provides a fluent builder for registering metrics, with the final step constructing gauges or counters based on a `Supplier` or `ToDoubleFunction`. Timers can either wrap `Callable`, `Runnable`, or `Supplier` implementations, or can use a `TimeRecorder` to accumulate chunks of time. The underlying metrics extension will create appropriate artifacts to observe or measure the defined functions.

===== Using a Metrics API-specific implementation

Using metrics API-specific implementations may be preferred in some cases. Jaeger, for example, defines its own metrics interface, `io.jaegertracing.spi.MetricsFactory`, that it uses to define counters and gauges. A direct mapping from that interface to the metrics system will be the most efficient. In this case, it is important to isolate these specialized implementations and to avoid eager classloading to ensure the metrics API remains an optional, compile-time dependency.

`Optional<MetricsCapabilityBuildItem> metricsCapability` can be used in the build step to selectively control initialization of beans or the production of other build items.
The Jaeger extension, for example, can use the following to control initialization of specialized Metrics API adapters:
+
[source%nowrap,java]
----
/* RUNTIME_INIT */
@BuildStep
@Record(ExecutionTime.RUNTIME_INIT)
void setupTracer(JaegerDeploymentRecorder jdr, JaegerBuildTimeConfig buildTimeConfig, JaegerConfig jaeger,
        ApplicationConfig appConfig, Optional<MetricsCapabilityBuildItem> metricsCapability,
        BuildProducer<ExtensionSslNativeSupportBuildItem> extensionSslNativeSupport) {

    // Indicates that this extension would like the SSL support to be enabled
    extensionSslNativeSupport.produce(new ExtensionSslNativeSupportBuildItem(Feature.JAEGER.getName()));

    if (buildTimeConfig.enabled) {
        // To avoid dependency creep, use two separate recorder methods for the two metrics systems
        if (buildTimeConfig.metricsEnabled && metricsCapability.isPresent()) {
            if (metricsCapability.get().metricsSupported(MetricsFactory.MICROMETER)) {
                jdr.registerTracerWithMicrometerMetrics(jaeger, appConfig);
            } else {
                jdr.registerTracerWithMpMetrics(jaeger, appConfig);
            }
        } else {
            jdr.registerTracerWithoutMetrics(jaeger, appConfig);
        }
    }
}
----

A recorder consuming a `MetricsFactory` can use `MetricsFactory::metricsSystemSupported()` to control initialization of metrics objects during bytecode recording in a similar way.

==== Case 3: It is necessary to collect metrics within the extension code

To define your own metrics from scratch, you have two basic options: use the generic `MetricsFactory` builders, or follow the binder pattern and create instrumentation specific to the enabled metrics extension.

To use the extension-agnostic `MetricsFactory` API:

- Your processor can define a `BuildStep` that produces a `MetricsFactoryConsumerBuildItem` that uses a `RUNTIME_INIT` or `STATIC_INIT` recorder to define a `MetricsFactory` consumer.
+
[source%nowrap,java]
----
@BuildStep
@Record(ExecutionTime.RUNTIME_INIT)
MetricsFactoryConsumerBuildItem registerMetrics(MyExtensionRecorder recorder) {
    return new MetricsFactoryConsumerBuildItem(recorder.registerMetrics());
}
----

- The associated recorder should use the provided `MetricsFactory` to register metrics, for example:
+
[source%nowrap,java]
----
final LongAdder extensionCounter = new LongAdder();

/* RUNTIME_INIT */
public Consumer<MetricsFactory> registerMetrics() {
    return new Consumer<MetricsFactory>() {
        @Override
        public void accept(MetricsFactory metricsFactory) {
            metricsFactory.builder("my.extension.counter")
                    .buildGauge(extensionCounter::longValue);
            ....
----

Remember that metrics extensions are optional. Keep metrics-related initialization isolated from other setup for your extension, and structure your code to avoid eager imports of metrics APIs. Gathering metrics can also be expensive. Consider using additional extension-specific configuration to control the behavior of metrics if the presence/absence of metrics support isn't sufficient.

=== Customizing JSON handling from an extension

Extensions often need to register serializers and/or deserializers for types the extension provides.

For this, both the Jackson and JSON-B extensions provide a way to register a serializer/deserializer from within an
extension deployment module.

Keep in mind that not everybody will need JSON, so you need to make it optional.

If an extension intends to provide JSON-related customization,
it is strongly advised to provide customization for both Jackson and JSON-B.

==== Customizing Jackson

First, add an *optional* dependency to `quarkus-jackson` on your extension's runtime module.

[source%nowrap,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jackson</artifactId>
    <optional>true</optional>
</dependency>
----

Then create a serializer or a deserializer (or both) for Jackson, an example of which can be seen in the `mongodb-panache` extension.
[source%nowrap,java]
----
public class ObjectIdSerializer extends StdSerializer<ObjectId> {
    public ObjectIdSerializer() {
        super(ObjectId.class);
    }

    @Override
    public void serialize(ObjectId objectId, JsonGenerator jsonGenerator, SerializerProvider serializerProvider)
            throws IOException {
        if (objectId != null) {
            jsonGenerator.writeString(objectId.toString());
        }
    }
}
----

Add a dependency to `quarkus-jackson-spi` on your extension's deployment module.

[source%nowrap,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jackson-spi</artifactId>
</dependency>
----

Add a build step to your processor to register a Jackson module via the `JacksonModuleBuildItem`.
You need to name your module in a way that is unique across all Jackson modules.

[source%nowrap,java]
----
@BuildStep
JacksonModuleBuildItem registerJacksonSerDeser() {
    return new JacksonModuleBuildItem.Builder("ObjectIdModule")
            .add(io.quarkus.mongodb.panache.jackson.ObjectIdSerializer.class.getName(),
                    io.quarkus.mongodb.panache.jackson.ObjectIdDeserializer.class.getName(),
                    ObjectId.class.getName())
            .build();
}
----

The Jackson extension will then use the produced build item to register a module within Jackson automatically.

If you need more customization capabilities than registering a module,
you can produce a CDI bean that implements `io.quarkus.jackson.ObjectMapperCustomizer` via an `AdditionalBeanBuildItem`.
More info about customizing Jackson can be found in the JSON guide: xref:rest-json.adoc#configuring-json-support[Configuring JSON support].

==== Customizing JSON-B

First, add an *optional* dependency to `quarkus-jsonb` on your extension's runtime module.

[source%nowrap,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jsonb</artifactId>
    <optional>true</optional>
</dependency>
----

Then create a serializer and/or a deserializer for JSON-B, an example of which can be seen in the `mongodb-panache` extension.
[source%nowrap,java]
----
public class ObjectIdSerializer implements JsonbSerializer<ObjectId> {
    @Override
    public void serialize(ObjectId obj, JsonGenerator generator, SerializationContext ctx) {
        if (obj != null) {
            generator.write(obj.toString());
        }
    }
}
----

Add a dependency to `quarkus-jsonb-spi` on your extension's deployment module.

[source%nowrap,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jsonb-spi</artifactId>
</dependency>
----

Add a build step to your processor to register the serializer via the `JsonbSerializerBuildItem`.

[source%nowrap,java]
----
@BuildStep
JsonbSerializerBuildItem registerJsonbSerializer() {
    return new JsonbSerializerBuildItem(io.quarkus.mongodb.panache.jsonb.ObjectIdSerializer.class.getName());
}
----

The JSON-B extension will then use the produced build item to register your serializer/deserializer automatically.

If you need more customization capabilities than registering a serializer or a deserializer,
you can produce a CDI bean that implements `io.quarkus.jsonb.JsonbConfigCustomizer` via an `AdditionalBeanBuildItem`.
More info about customizing JSON-B can be found in the JSON guide: xref:rest-json.adoc#configuring-json-support[Configuring JSON support].

=== Integrating with Development Mode

There are various APIs that you can use to integrate with development mode and to get information about the current state.

==== Handling restarts

When Quarkus starts, the `io.quarkus.deployment.builditem.LiveReloadBuildItem` is guaranteed to be present, and it gives
information about this start, in particular:

- Is this a clean start or a live reload?
- If this is a live reload, which changed files / classes triggered the reload?

It also provides a global context map you can use to store information between restarts, without needing to resort to
static fields.
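As a sketch of how a build step could consume this information (the `MyExtensionState` payload class is hypothetical):

[source,java]
----
@BuildStep
void handleRestart(LiveReloadBuildItem liveReload) {
    if (liveReload.isLiveReload()) {
        // files/classes that triggered this reload
        Set<String> changed = liveReload.getChangedResources();
        // retrieve state stored by a previous start from the global context map
        MyExtensionState state = liveReload.getContextObject(MyExtensionState.class);
        // ...
    } else {
        // clean start: initialize state for subsequent live reloads to pick up
        liveReload.setContextObject(MyExtensionState.class, new MyExtensionState());
    }
}
----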
==== Triggering Live Reload

Live reload is generally triggered by an HTTP request; however, not all applications are HTTP applications, and some extensions
may want to trigger live reload based on other events. To do this you need to implement `io.quarkus.dev.spi.HotReplacementSetup`
in your runtime module, and add a `META-INF/services/io.quarkus.dev.spi.HotReplacementSetup` file that lists your implementation.

On startup the `setupHotDeployment` method will be called, and you can use the provided `io.quarkus.dev.spi.HotReplacementContext`
to initiate a scan for changed files.

=== Testing Extensions

Testing of Quarkus extensions should be done with the `io.quarkus.test.QuarkusUnitTest` JUnit 5 extension.
This extension allows for Arquillian-style tests that test specific functionalities.
It is not intended for testing user applications, as this should be done via `io.quarkus.test.junit.QuarkusTest`.
The main difference is that `QuarkusTest` simply boots the application once at the start of the run, while `QuarkusUnitTest` deploys a custom
Quarkus application for each test class.

These tests should be placed in the deployment module; if additional Quarkus modules are required for testing,
their deployment modules should also be added as test-scoped dependencies.

Note that `QuarkusUnitTest` is in the `quarkus-junit5-internal` module.
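In a Maven-based extension, the dependency would typically look like the following (the version is assumed to be managed by your Quarkus BOM):

[source,xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5-internal</artifactId>
    <scope>test</scope>
</dependency>
----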
An example test class may look like:

[source,java]
----
package io.quarkus.health.test;

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import javax.enterprise.inject.Instance;
import javax.inject.Inject;

import org.eclipse.microprofile.health.Liveness;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import io.quarkus.test.QuarkusUnitTest;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

import io.restassured.RestAssured;

public class FailingUnitTest {

    @RegisterExtension // <1>
    static final QuarkusUnitTest config = new QuarkusUnitTest()
            .setArchiveProducer(() ->
                    ShrinkWrap.create(JavaArchive.class) // <2>
                            .addClasses(FailingHealthCheck.class)
                            .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml")
            );

    @Inject // <3>
    @Liveness
    Instance<HealthCheck> checks;

    @Test
    public void testHealthServlet() {
        RestAssured.when().get("/q/health").then().statusCode(503); // <4>
    }

    @Test
    public void testHealthBeans() {
        List<HealthCheck> check = new ArrayList<>(); // <5>
        for (HealthCheck i : checks) {
            check.add(i);
        }
        assertEquals(1, check.size());
        assertEquals(HealthCheckResponse.Status.DOWN, check.get(0).call().getStatus());
    }
}
----

<1> The `QuarkusUnitTest` extension must be used with a static field. If used with a non-static field, the test application is not started.
<2> This producer is used to build the application to be tested.
It uses Shrinkwrap to create a JavaArchive to test.
<3> It is possible to inject beans from our test deployment directly into the test case.
<4> This method directly invokes the health check Servlet and verifies the response.
<5> This method uses the injected health check bean to verify it is returning the expected result.

If you want to test that an extension properly fails at build time, use the `setExpectedException` method:

[source,java]
----
package io.quarkus.hibernate.orm;

import io.quarkus.runtime.configuration.ConfigurationException;
import io.quarkus.test.QuarkusUnitTest;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

public class PersistenceAndQuarkusConfigTest {

    @RegisterExtension
    static QuarkusUnitTest runner = new QuarkusUnitTest()
            .setExpectedException(ConfigurationException.class) // <1>
            .withApplicationRoot((jar) -> jar
                    .addAsManifestResource("META-INF/some-persistence.xml", "persistence.xml")
                    .addAsResource("application.properties"));

    @Test
    public void testPersistenceAndConfigTest() {
        // should not be called, deployment exception should happen first:
        // it's illegal to have Hibernate configuration properties in both the
        // application.properties and in the persistence.xml
        Assertions.fail();
    }

}
----

<1> This tells JUnit that the Quarkus deployment should fail with a specific exception.


=== Testing hot reload

It is also possible to write tests that verify an extension works correctly in development mode and can correctly
handle updates.

For most extensions this will just work 'out of the box'; however, it is still a good idea to have a smoke test to
verify that this functionality is working as expected.
To test this we use `QuarkusDevModeTest`:


[source,java]
----
public class ServletChangeTestCase {

    @RegisterExtension
    final static QuarkusDevModeTest test = new QuarkusDevModeTest()
            .setArchiveProducer(new Supplier<JavaArchive>() {
                @Override
                public JavaArchive get() {
                    return ShrinkWrap.create(JavaArchive.class) // <1>
                            .addClass(DevServlet.class)
                            .addAsManifestResource(new StringAsset("Hello Resource"), "resources/file.txt");
                }
            });

    @Test
    public void testServletChange() throws InterruptedException {
        RestAssured.when().get("/dev").then()
                .statusCode(200)
                .body(is("Hello World"));

        test.modifySourceFile("DevServlet.java", new Function<String, String>() { // <2>

            @Override
            public String apply(String s) {
                return s.replace("Hello World", "Hello Quarkus");
            }
        });

        RestAssured.when().get("/dev").then()
                .statusCode(200)
                .body(is("Hello Quarkus"));
    }

    @Test
    public void testAddServlet() throws InterruptedException {
        RestAssured.when().get("/new").then()
                .statusCode(404);

        test.addSourceFile(NewServlet.class); // <3>

        RestAssured.when().get("/new").then()
                .statusCode(200)
                .body(is("A new Servlet"));
    }

    @Test
    public void testResourceChange() throws InterruptedException {
        RestAssured.when().get("/file.txt").then()
                .statusCode(200)
                .body(is("Hello Resource"));

        test.modifyResourceFile("META-INF/resources/file.txt", new Function<String, String>() { // <4>

            @Override
            public String apply(String s) {
                return "A new resource";
            }
        });

        RestAssured.when().get("/file.txt").then()
                .statusCode(200)
                .body(is("A new resource"));
    }

    @Test
    public void testAddResource() throws InterruptedException {

        RestAssured.when().get("/new.txt").then()
                .statusCode(404);

        test.addResourceFile("META-INF/resources/new.txt", "New File"); // <5>

        RestAssured.when().get("/new.txt").then()
                .statusCode(200)
                .body(is("New File"));

    }
}
----

<1> This starts the deployment; your test can modify it as part of the test suite.
Quarkus will be restarted between
each test method so every method starts with a clean deployment.

<2> This method allows you to modify the source of a class file. The old source is passed into the function, and the updated
source is returned.

<3> This method adds a new class file to the deployment. The source that is used will be the original source that is part
of the current project.

<4> This method modifies a static resource.

<5> This method adds a new static resource.

=== Native Executable Support

Quarkus provides a number of build items that control aspects of the native executable build. These allow extensions
to programmatically perform tasks such as registering classes for reflection or adding static resources to the native
executable. Some of these build items are listed below:

`io.quarkus.deployment.builditem.nativeimage.NativeImageResourceBuildItem`::
Includes static resources in the native executable.

`io.quarkus.deployment.builditem.nativeimage.NativeImageResourceDirectoryBuildItem`::
Includes a directory's static resources in the native executable.

`io.quarkus.deployment.builditem.nativeimage.RuntimeReinitializedClassBuildItem`::
A class that will be reinitialized at runtime by GraalVM. This will result in the static initializer running twice.

`io.quarkus.deployment.builditem.nativeimage.NativeImageSystemPropertyBuildItem`::
A system property that will be set at native executable build time.

`io.quarkus.deployment.builditem.nativeimage.NativeImageResourceBundleBuildItem`::
Includes a resource bundle in the native executable.

`io.quarkus.deployment.builditem.nativeimage.ReflectiveClassBuildItem`::
Registers a class for reflection in the native image. Constructors are always registered, while methods and fields are optional.

`io.quarkus.deployment.builditem.nativeimage.RuntimeInitializedClassBuildItem`::
A class that will be initialized at runtime rather than build time.
This will cause the build to fail if the class is initialized as part of the native executable build process, so care must be taken.

`io.quarkus.deployment.builditem.nativeimage.NativeImageConfigBuildItem`::
A convenience feature that allows you to control most of the above features from a single build item.

`io.quarkus.deployment.builditem.NativeImageEnableAllCharsetsBuildItem`::
Indicates that all charsets should be enabled in the native image.

`io.quarkus.deployment.builditem.ExtensionSslNativeSupportBuildItem`::
A convenient way to tell Quarkus that the extension requires SSL and that it should be enabled during the native image build.
When using this feature, remember to add your extension to the list of extensions that offer SSL support automatically in the https://github.com/quarkusio/quarkus/blob/main/docs/src/main/asciidoc/native-and-ssl.adoc[native and ssl guide].

=== IDE support tips

==== Writing Quarkus extensions in Eclipse

The only particular aspect of writing Quarkus extensions in Eclipse is that APT (Annotation Processing Tool) is required as part of extension builds, which means you need to:

- Install `m2e-apt` from https://marketplace.eclipse.org/content/m2e-apt
- Define the `m2e.apt.activation` property with the value `jdt_apt` in your `pom.xml`, although if you rely on `io.quarkus:quarkus-build-parent` you will get it for free.
- If you have the `io.quarkus:quarkus-extension-processor` project open at the same time in your IDE (for example, if you have the Quarkus sources checked out and open in your IDE) you will need to close that project. Otherwise, Eclipse will not invoke the APT plugin that it contains.
- If you just closed the extension processor project, be sure to do `Maven > Update Project` on the other projects in order for Eclipse to pick up the extension processor from the Maven repository.
=== Troubleshooting / Debugging Tips

// This id was previously used for the "Dump the Generated Classes to the File System" section
[[dump-the-generated-classes-to-the-file-system]]
==== Inspecting the Generated/Transformed Classes

Quarkus generates a lot of classes during the build phase and in many cases also transforms existing classes.
It is often extremely useful to see the generated bytecode and transformed classes during the development of an extension.

If you set the `quarkus.package.fernflower.enabled` property to `true`, Quarkus will download and invoke the https://github.com/JetBrains/intellij-community/tree/master/plugins/java-decompiler/engine[Fernflower decompiler] and dump the result in the `decompiled` directory of the build tool output (`target/decompiled` for Maven, for example).

NOTE: This property only works during a normal production build (i.e. not in dev mode or tests) and when the `fast-jar` packaging type is used (the default behavior).

There are also three system properties that allow you to dump the generated/transformed classes to the filesystem and inspect them later, for example via a decompiler in your IDE:

- `quarkus.debug.generated-classes-dir` - to dump the generated classes, such as bean metadata
- `quarkus.debug.transformed-classes-dir` - to dump the transformed classes, e.g. Panache entities
- `quarkus.debug.generated-sources-dir` - to dump the ZIG files; a ZIG file is a textual representation of the generated code that is referenced in the stack traces

These properties are especially useful in development mode or when running tests, where the generated/transformed classes are only held in memory in a class loader.
For example, you can specify the `quarkus.debug.generated-classes-dir` system property to have these classes written out to disk for inspection in development mode:

[source,bash]
----
./mvnw quarkus:dev -Dquarkus.debug.generated-classes-dir=dump-classes
----

NOTE: The property value can be either an absolute path, such as `/home/foo/dump` on a Linux machine, or a path relative to the user working directory, i.e. `dump` corresponds to `{user.dir}/target/dump` in dev mode and `{user.dir}/dump` when running the tests.

You should see a line in the log for each class written to the directory:

[source,text]
----
INFO [io.qua.run.boo.StartupActionImpl] (main) Wrote /path/to/my/app/target/dump-classes/io/quarkus/arc/impl/ActivateRequestContextInterceptor_Bean.class
----

The property is also honored when running tests:

[source,bash]
----
./mvnw clean test -Dquarkus.debug.generated-classes-dir=target/dump-generated-classes
----

Analogously, you can use the `quarkus.debug.transformed-classes-dir` and `quarkus.debug.generated-sources-dir` properties to dump the relevant output.

==== Multi-module Maven Projects and the Development Mode

It's not uncommon to develop an extension in a multi-module Maven project that also contains an "example" module.
However, if you want to run the example in development mode, the `-DnoDeps` system property must be used in order to exclude the local project dependencies.
Otherwise, Quarkus attempts to monitor the extension classes and this may result in weird class loading issues.

[source,bash]
----
./mvnw compile quarkus:dev -DnoDeps
----

==== Indexer does not include your external dependency

Remember to produce `IndexDependencyBuildItem` artifacts from your `@BuildStep`.

=== Sample Test Extension

We have an extension that is used to test for regressions in the extension processing. It is located in the {quarkus-tree-url}/core/test-extension directory.
In this section we touch on some of the tasks an extension -author will typically need to perform, using the test-extension code to illustrate how each task can be done. - -==== Features and Capabilities - -===== Features - -A _feature_ represents a functionality provided by an extension. -The name of the feature gets displayed in the log during application bootstrap. - -.Example Startup Lines -[source,text] ----- -2019-03-22 14:02:37,884 INFO [io.quarkus] (main) Quarkus 999-SNAPSHOT started in 0.061s. -2019-03-22 14:02:37,884 INFO [io.quarkus] (main) Installed features: [cdi, test-extension] <1> ----- -<1> A list of features installed in the runtime image - -A feature can be registered in a <> method that produces a `FeatureBuildItem`: - -.TestProcessor#feature() -[source,java] ----- - @BuildStep - FeatureBuildItem feature() { - return new FeatureBuildItem("test-extension"); - } ----- - -The name of the feature should only contain lowercase characters, with words separated by dashes; e.g. `security-jpa`. -An extension should provide at most one feature and the name must be unique. -If multiple extensions register a feature of the same name, the build fails.
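The naming rule above (lowercase words separated by single dashes) is mechanical enough to check with a regular expression. A minimal sketch of such a check (the `FeatureNameCheck` helper is hypothetical, not part of the Quarkus API):

```java
public class FeatureNameCheck {
    // Matches lowercase alphanumeric words separated by single dashes, e.g. "security-jpa".
    private static final java.util.regex.Pattern VALID =
            java.util.regex.Pattern.compile("[a-z0-9]+(-[a-z0-9]+)*");

    public static boolean isValidFeatureName(String name) {
        return VALID.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidFeatureName("security-jpa"));  // true
        System.out.println(isValidFeatureName("Security_JPA"));  // false
        System.out.println(isValidFeatureName("security--jpa")); // false
    }
}
```

Quarkus itself enforces only the uniqueness constraint at build time; the regex is just a convenient way to keep your own feature names consistent.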
- -The feature name should also map to a label in the extension's `devtools/common/src/main/filtered/extensions.json` entry so that -the feature name displayed by the startup line matches a label that one can use to select the extension when creating a project -using the Quarkus Maven plugin, as shown in this example taken from the xref:rest-json.adoc[Writing JSON REST Services] guide where the `resteasy-jackson` feature is referenced: - -[source,bash,subs=attributes+] ----- -mvn io.quarkus.platform:quarkus-maven-plugin:{quarkus-version}:create \ - -DprojectGroupId=org.acme \ - -DprojectArtifactId=rest-json \ - -DclassName="org.acme.rest.json.FruitResource" \ - -Dpath="/fruits" \ - -Dextensions="resteasy,resteasy-jackson" -cd rest-json ----- - -===== Capabilities - -A _capability_ represents a technical capability that can be queried by other extensions. -An extension may provide multiple capabilities and multiple extensions can provide the same capability. -By default, capabilities are not displayed to users. -Capabilities should be used to check for the presence of an extension rather than relying on classpath-based checks. - -Capabilities can be registered in a <> method that produces a `CapabilityBuildItem`: - -.TestProcessor#capability() -[source,java] ----- - @BuildStep - void capabilities(BuildProducer<CapabilityBuildItem> capabilityProducer) { - capabilityProducer.produce(new CapabilityBuildItem("org.acme.test-transactions")); - capabilityProducer.produce(new CapabilityBuildItem("org.acme.test-metrics")); - } ----- - -Extensions can consume registered capabilities using the `Capabilities` build item: - -.TestProcessor#doSomeCoolStuff() -[source,java] ----- - @BuildStep - void doSomeCoolStuff(Capabilities capabilities) { - if (capabilities.isPresent(Capability.TRANSACTIONS)) { - // do something only if JTA transactions are in... - } - } ----- - -Capabilities should follow the naming conventions of Java packages; e.g. `io.quarkus.security.jpa`.
-Capabilities provided by core extensions should be listed in the `io.quarkus.deployment.Capability` enum and their name should always start with the `io.quarkus` prefix. - -==== Bean Defining Annotations -The CDI layer processes CDI beans that are either explicitly registered or that it discovers based on bean defining annotations as defined in https://jakarta.ee/specifications/cdi/2.0/cdi-spec-2.0.html#bean_defining_annotations[2.5.1. Bean defining annotations]. You can expand this set of annotations to include annotations your extension processes using a `BeanDefiningAnnotationBuildItem` as shown in this `TestProcessor#registerBeanDefinningAnnotations` example: - -.Register a Bean Defining Annotation -[source,java] ----- -import javax.enterprise.context.ApplicationScoped; -import org.jboss.jandex.DotName; -import io.quarkus.extest.runtime.TestAnnotation; - -public final class TestProcessor { - static DotName TEST_ANNOTATION = DotName.createSimple(TestAnnotation.class.getName()); - static DotName TEST_ANNOTATION_SCOPE = DotName.createSimple(ApplicationScoped.class.getName()); - -... - - @BuildStep - BeanDefiningAnnotationBuildItem registerX() { - return new BeanDefiningAnnotationBuildItem(TEST_ANNOTATION, TEST_ANNOTATION_SCOPE); <1> - } -... -} - -/** - * Marker annotation for test configuration target beans - */ -@Target({ TYPE }) -@Retention(RUNTIME) -@Documented -@Inherited -public @interface TestAnnotation { -} - -/** - * A sample bean - */ -@TestAnnotation <2> -public class ConfiguredBean implements IConfigConsumer { - -... ----- -<1> Register the annotation class and CDI default scope using the Jandex `DotName` class. -<2> `ConfiguredBean` will be processed by the CDI layer the same as a bean annotated with the CDI standard `@ApplicationScoped`. - -==== Parsing Config to Objects -One of the main things an extension is likely to do is completely separate the configuration phase from the runtime phase.
Frameworks often parse and load configuration on startup; doing this at build time instead both removes runtime dependencies on frameworks like XML parsers and avoids the startup cost of the parsing. - -An example of parsing an XML config file using JAXB is shown in the `TestProcessor#parseServiceXmlConfig` method: - -.Parsing an XML Configuration into Runtime XmlConfig Instance -[source,java] ----- - @BuildStep - @Record(STATIC_INIT) - RuntimeServiceBuildItem parseServiceXmlConfig(TestRecorder recorder) throws JAXBException { - RuntimeServiceBuildItem serviceBuildItem = null; - JAXBContext context = JAXBContext.newInstance(XmlConfig.class); - Unmarshaller unmarshaller = context.createUnmarshaller(); - InputStream is = getClass().getResourceAsStream("/config.xml"); <1> - if (is != null) { - log.info("Have XmlConfig, loading"); - XmlConfig config = (XmlConfig) unmarshaller.unmarshal(is); <2> -... - } - return serviceBuildItem; - } - ----- -<1> Look for a `config.xml` classpath resource -<2> If found, parse using the JAXB context for `XmlConfig.class` - -[NOTE] -==== -If there was no `/config.xml` resource available in the build environment, then a null `RuntimeServiceBuildItem` would be returned and no subsequent logic based on a `RuntimeServiceBuildItem` being produced would execute. -==== - -Typically one is loading a configuration to create some runtime component/service, as `parseServiceXmlConfig` does. We will come back to the rest of the behavior in `parseServiceXmlConfig` in the following <> section. - -If for some reason you need to parse the config and use it in other build steps in an extension processor, you would need to create an `XmlConfigBuildItem` to pass the parsed XmlConfig instance around. - -[TIP] -==== -If you look at the XmlConfig code you will see that it does carry around the JAXB annotations.
If you don't want these in the runtime image, you could clone the XmlConfig instance into some POJO object graph and then replace XmlConfig with the POJO class. We will do this in <>. -==== - -==== Scanning Deployments Using Jandex -If your extension defines annotations or interfaces that mark beans needing to be processed, you can locate these beans using the Jandex API, a Java annotation indexer and offline reflection library. The following `TestProcessor#scanForBeans` method shows how to find the beans annotated with our `@TestAnnotation` that also implement the `IConfigConsumer` interface: - -.Example Jandex Usage -[source,java] ----- - static DotName TEST_ANNOTATION = DotName.createSimple(TestAnnotation.class.getName()); -... - - @BuildStep - @Record(STATIC_INIT) - void scanForBeans(TestRecorder recorder, BeanArchiveIndexBuildItem beanArchiveIndex, <1> - BuildProducer<TestBeanBuildItem> testBeanProducer) { - IndexView indexView = beanArchiveIndex.getIndex(); <2> - Collection<AnnotationInstance> testBeans = indexView.getAnnotations(TEST_ANNOTATION); <3> - for (AnnotationInstance ann : testBeans) { - ClassInfo beanClassInfo = ann.target().asClass(); - try { - boolean isConfigConsumer = beanClassInfo.interfaceNames() - .stream() - .anyMatch(dotName -> dotName.equals(DotName.createSimple(IConfigConsumer.class.getName()))); <4> - if (isConfigConsumer) { - Class<IConfigConsumer> beanClass = (Class<IConfigConsumer>) Class.forName(beanClassInfo.name().toString(), false, Thread.currentThread().getContextClassLoader()); - testBeanProducer.produce(new TestBeanBuildItem(beanClass)); <5> - log.infof("Configured bean: %s", beanClass); - } - } catch (ClassNotFoundException e) { - log.warn("Failed to load bean class", e); - } - } - } ----- -<1> Depend on a `BeanArchiveIndexBuildItem` to have the build step be run after the deployment has been indexed. -<2> Retrieve the index. -<3> Find all beans annotated with `@TestAnnotation`. -<4> Determine which of these beans also implement the `IConfigConsumer` interface.
-<5> Save the bean class in a `TestBeanBuildItem` for use in a later RUNTIME_INIT build step that will interact with the bean instances. - -==== Interacting With Extension Beans -You can use the `io.quarkus.arc.runtime.BeanContainer` interface to interact with your extension beans. The following `configureBeans` methods illustrate interacting with the beans scanned for in the previous section: - -.Using CDI BeanContainer Interface -[source,java] ----- -// TestProcessor#configureBeans - @BuildStep - @Record(RUNTIME_INIT) - void configureBeans(TestRecorder recorder, List<TestBeanBuildItem> testBeans, <1> - BeanContainerBuildItem beanContainer, <2> - TestRunTimeConfig runTimeConfig) { - - for (TestBeanBuildItem testBeanBuildItem : testBeans) { - Class<IConfigConsumer> beanClass = testBeanBuildItem.getConfigConsumer(); - recorder.configureBeans(beanContainer.getValue(), beanClass, buildAndRunTimeConfig, runTimeConfig); <3> - } - } - -// TestRecorder#configureBeans - public void configureBeans(BeanContainer beanContainer, Class<IConfigConsumer> beanClass, - TestBuildAndRunTimeConfig buildTimeConfig, - TestRunTimeConfig runTimeConfig) { - log.info("Begin BeanContainerListener callback\n"); - IConfigConsumer instance = beanContainer.instance(beanClass); <4> - instance.loadConfig(buildTimeConfig, runTimeConfig); <5> - log.infof("configureBeans, instance=%s\n", instance); - } ----- -<1> Consume the `TestBeanBuildItem`s produced from the scanning build step. -<2> Consume the `BeanContainerBuildItem` to order this build step to run after the CDI bean container has been created. -<3> Call the runtime recorder to record the bean interactions. -<4> Runtime recorder retrieves the bean using its type. -<5> Runtime recorder invokes the `IConfigConsumer#loadConfig(...)` method passing in the configuration objects with runtime information. - -==== Manage Non-CDI Service -A common purpose for an extension is to integrate a non-CDI-aware service into the CDI-based Quarkus runtime.
Step 1 of this task is to load any configuration needed in a STATIC_INIT build step as we did in <>. Now we need to create an instance of the service using the configuration. Let's return to the `TestProcessor#parseServiceXmlConfig` method to see how this can be done. - -.Creating a Non-CDI Service -[source,java] ----- -// TestProcessor#parseServiceXmlConfig - @BuildStep - @Record(STATIC_INIT) - RuntimeServiceBuildItem parseServiceXmlConfig(TestRecorder recorder) throws JAXBException { - RuntimeServiceBuildItem serviceBuildItem = null; - JAXBContext context = JAXBContext.newInstance(XmlConfig.class); - Unmarshaller unmarshaller = context.createUnmarshaller(); - InputStream is = getClass().getResourceAsStream("/config.xml"); - if (is != null) { - log.info("Have XmlConfig, loading"); - XmlConfig config = (XmlConfig) unmarshaller.unmarshal(is); - log.info("Loaded XmlConfig, creating service"); - RuntimeValue<RuntimeXmlConfigService> service = recorder.initRuntimeService(config); //<1> - serviceBuildItem = new RuntimeServiceBuildItem(service); //<3> - } - return serviceBuildItem; - } - -// TestRecorder#initRuntimeService - public RuntimeValue<RuntimeXmlConfigService> initRuntimeService(XmlConfig config) { - RuntimeXmlConfigService service = new RuntimeXmlConfigService(config); //<2> - return new RuntimeValue<>(service); - } - -// RuntimeServiceBuildItem - public final class RuntimeServiceBuildItem extends SimpleBuildItem { - private RuntimeValue<RuntimeXmlConfigService> service; - - public RuntimeServiceBuildItem(RuntimeValue<RuntimeXmlConfigService> service) { - this.service = service; - } - - public RuntimeValue<RuntimeXmlConfigService> getService() { - return service; - } -} ----- -<1> Call into the runtime recorder to record the creation of the service. -<2> Using the parsed `XmlConfig` instance, create an instance of `RuntimeXmlConfigService` and wrap it in a `RuntimeValue`. Use a `RuntimeValue` wrapper for non-interface objects that are non-proxiable. -<3> Wrap the returned service value in a `RuntimeServiceBuildItem` for use in a RUNTIME_INIT build step that will start the service.
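Stripped of the Quarkus plumbing, `RuntimeValue` is essentially a typed holder created in one phase and dereferenced in a later one. A plain-Java analogue may make the hand-off easier to picture (illustrative only; the real `io.quarkus.runtime.RuntimeValue` and the service type here are stand-ins):

```java
import java.util.Objects;

// Minimal analogue of the RuntimeValue<T>-style hand-off: a "STATIC_INIT"
// step records a value, a later "RUNTIME_INIT" step retrieves it.
public class RuntimeValueSketch {
    static final class Holder<T> {
        private final T value;
        Holder(T value) { this.value = Objects.requireNonNull(value); }
        T getValue() { return value; }
    }

    // Stand-in for a service created from parsed configuration.
    static final class XmlConfigService {
        private final String address;
        XmlConfigService(String address) { this.address = address; }
        String describe() { return "service@" + address; }
    }

    public static String demo() {
        // build phase: create and record the service
        Holder<XmlConfigService> recorded = new Holder<>(new XmlConfigService("localhost:9090"));
        // later phase: dereference the recorded value and use it
        return recorded.getValue().describe();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // service@localhost:9090
    }
}
```

The real wrapper additionally lets recorded bytecode refer to objects that cannot be proxied, which is why the guide recommends it for non-interface types.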
- -===== Starting a Service -Now that you have recorded the creation of a service during the build phase, you need to record how to start the service at runtime during booting. You do this with a RUNTIME_INIT build step as shown in the `TestProcessor#startRuntimeService` method. - -.Starting/Stopping a Non-CDI Service -[source,java] ----- -// TestProcessor#startRuntimeService - @BuildStep - @Record(RUNTIME_INIT) - ServiceStartBuildItem startRuntimeService(TestRecorder recorder, ShutdownContextBuildItem shutdownContextBuildItem, // <1> - RuntimeServiceBuildItem serviceBuildItem) throws IOException { // <2> - if (serviceBuildItem != null) { - log.info("Registering service start"); - recorder.startRuntimeService(shutdownContextBuildItem, serviceBuildItem.getService()); // <3> - } else { - log.info("No RuntimeServiceBuildItem seen, check config.xml"); - } - return new ServiceStartBuildItem("RuntimeXmlConfigService"); //<4> - } - -// TestRecorder#startRuntimeService - public void startRuntimeService(ShutdownContext shutdownContext, RuntimeValue<RuntimeXmlConfigService> runtimeValue) - throws IOException { - RuntimeXmlConfigService service = runtimeValue.getValue(); - service.startService(); //<5> - shutdownContext.addShutdownTask(service::stopService); //<6> - } ----- -<1> We consume a `ShutdownContextBuildItem` to register the service shutdown. -<2> We consume the previously initialized service captured in `RuntimeServiceBuildItem`. -<3> Call the runtime recorder to record the service start invocation. -<4> Produce a `ServiceStartBuildItem` to indicate the startup of a service. See <> for details. -<5> Runtime recorder retrieves the service instance reference and calls its `startService` method. -<6> Runtime recorder registers an invocation of the service instance `stopService` method with the Quarkus `ShutdownContext`.
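The start/stop pairing above can be sketched without any Quarkus types. This sketch assumes last-started/first-stopped ordering, which is how a stack of shutdown tasks naturally behaves (an illustration of the pattern, not a statement about `ShutdownContext` internals):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class ShutdownContextSketch {
    // Tasks are pushed as services start and popped at shutdown,
    // so the last service started is the first one stopped.
    private final Deque<Runnable> tasks = new ArrayDeque<>();

    void addShutdownTask(Runnable task) { tasks.push(task); }

    void runShutdown() {
        while (!tasks.isEmpty()) {
            tasks.pop().run();
        }
    }

    public static List<String> demo() {
        List<String> log = new ArrayList<>();
        ShutdownContextSketch ctx = new ShutdownContextSketch();
        log.add("start A");
        ctx.addShutdownTask(() -> log.add("stop A"));
        log.add("start B");
        ctx.addShutdownTask(() -> log.add("stop B"));
        ctx.runShutdown();
        return log;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [start A, start B, stop B, stop A]
    }
}
```

Registering the stop callback at the same point where the service is started (as `startRuntimeService` does with `service::stopService`) keeps the two halves of the lifecycle in one place.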
- -The code for the `RuntimeXmlConfigService` can be viewed here: -{quarkus-blob-url}/core/test-extension/runtime/src/main/java/io/quarkus/extest/runtime/RuntimeXmlConfigService.java[RuntimeXmlConfigService.java] - -The testcase for validating that the `RuntimeXmlConfigService` has started can be found in the `testRuntimeXmlConfigService` test of `ConfiguredBeanTest` and `NativeImageIT`. - -==== Startup and Shutdown Events -The Quarkus container supports startup and shutdown lifecycle events to notify components of the container startup -and shutdown. The CDI events that components can observe are illustrated in this example: - -.Observing Container Startup -[source,java] ----- -import javax.enterprise.event.Observes; - -import io.quarkus.runtime.ShutdownEvent; -import io.quarkus.runtime.StartupEvent; - -public class SomeBean { - /** - * Called when the runtime has started - * @param event - */ - void onStart(@Observes StartupEvent event) { // <1> - System.out.printf("onStart, event=%s%n", event); - } - - /** - * Called when the runtime is shutting down - * @param event - */ - void onStop(@Observes ShutdownEvent event) { // <2> - System.out.printf("onStop, event=%s%n", event); - } -} ----- -<1> Observe a `StartupEvent` to be notified when the runtime has started. -<2> Observe a `ShutdownEvent` to be notified when the runtime is going to shut down. - -What is the relevance of startup and shutdown events for extension authors? We have already seen the use of a `ShutdownContext` -to register a callback to perform shutdown tasks in the <> section. These shutdown tasks would be called -after a `ShutdownEvent` had been sent. - -A `StartupEvent` is fired after all `io.quarkus.deployment.builditem.ServiceStartBuildItem` producers have been consumed.
-The implication of this is that if an extension has services that application components would expect to have been -started when they observe a `StartupEvent`, the build steps that invoke the runtime code to start those services need -to produce a `ServiceStartBuildItem` to ensure that the runtime code is run before the `StartupEvent` is sent. Recall that -we saw the production of a `ServiceStartBuildItem` in the previous section, and it is repeated here for clarity: - -.Example of Producing a ServiceStartBuildItem -[source,java] ----- -// TestProcessor#startRuntimeService - @BuildStep - @Record(RUNTIME_INIT) - ServiceStartBuildItem startRuntimeService(TestRecorder recorder, ShutdownContextBuildItem shutdownContextBuildItem, - RuntimeServiceBuildItem serviceBuildItem) throws IOException { -... - return new ServiceStartBuildItem("RuntimeXmlConfigService"); //<1> - } ----- -<1> Produce a `ServiceStartBuildItem` to indicate that this is a service starting step that needs to run before the `StartupEvent` is sent. - -==== Register Resources for Use in Native Image -Not all configuration or resources can be consumed at build time. If you have classpath resources that the runtime needs to access, you need to inform the build phase that these resources need to be copied into the native image. This is done by producing one or more `NativeImageResourceBuildItem` or `NativeImageResourceBundleBuildItem` in the case of resource bundles.
Examples of this are shown in this sample `registerNativeImageResources` build step: - -.Registering Resources and ResourceBundles -[source,java] ----- -public final class MyExtProcessor { - @Inject - BuildProducer<NativeImageResourceBuildItem> resource; - @Inject - BuildProducer<NativeImageResourceBundleBuildItem> resourceBundle; - - @BuildStep - void registerNativeImageResources() { - resource.produce(new NativeImageResourceBuildItem("/security/runtime.keys")); //<1> - - resource.produce(new NativeImageResourceBuildItem( - "META-INF/my-descriptor.xml")); //<2> - - resourceBundle.produce(new NativeImageResourceBundleBuildItem("javax.xml.bind.Messages")); //<3> - } -} ----- -<1> Indicate that the `/security/runtime.keys` classpath resource should be copied into the native image. -<2> Indicate that the `META-INF/my-descriptor.xml` resource should be copied into the native image. -<3> Indicate that the `javax.xml.bind.Messages` resource bundle should be copied into the native image. - -==== Service files - -If you are using `META-INF/services` files you need to register the files as resources so that your native image can find them, -but you also need to register each listed class for reflection so they can be instantiated or inspected at run-time: - -[source,java] ----- -public final class MyExtProcessor { - - @BuildStep - void registerNativeImageResources(BuildProducer<ServiceProviderBuildItem> services) { - String service = "META-INF/services/" + io.quarkus.SomeService.class.getName(); - - // find out all the implementation classes listed in the service files - Set<String> implementations = - ServiceUtil.classNamesNamedIn(Thread.currentThread().getContextClassLoader(), - service); - - // register every listed implementation class so they can be instantiated - // in native-image at run-time - services.produce( - new ServiceProviderBuildItem(io.quarkus.SomeService.class.getName(), - implementations.toArray(new String[0]))); - } -} ----- - -WARNING: `ServiceProviderBuildItem` takes a list of service implementation classes as parameters: if -you are not reading them from the service
file, make sure that they correspond to the service file contents -because the service file will still be read and used at run-time. This is not a substitute for writing a service -file. - -NOTE: This only registers the implementation classes for instantiation via reflection (you will not be able -to inspect their fields and methods). If you need to do that, you can do it this way: - -[source,java] ----- -public final class MyExtProcessor { - - @BuildStep - void registerNativeImageResources(BuildProducer<NativeImageResourceBuildItem> resource, - BuildProducer<ReflectiveClassBuildItem> reflectionClasses) { - String service = "META-INF/services/" + io.quarkus.SomeService.class.getName(); - - // register the service file so it is visible in native-image - resource.produce(new NativeImageResourceBuildItem(service)); - - // register every listed implementation class so they can be inspected/instantiated - // in native-image at run-time - Set<String> implementations = - ServiceUtil.classNamesNamedIn(Thread.currentThread().getContextClassLoader(), - service); - reflectionClasses.produce( - new ReflectiveClassBuildItem(true, true, implementations.toArray(new String[0]))); - } -} ----- - -While this is the easiest way to get your services running natively, it's less efficient than scanning the implementation -classes at build time and generating code that registers them at static-init time instead of relying on reflection.
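Conceptually, reading a `META-INF/services` file just means collecting its non-blank, non-comment lines, each of which names an implementation class. A plain-Java sketch of that parsing (the real `ServiceUtil.classNamesNamedIn` reads from the classpath and may differ in details):

```java
import java.util.ArrayList;
import java.util.List;

public class ServiceFileParser {
    // Parse the contents of a META-INF/services file: one class name per line,
    // '#' starts a comment, blank lines are ignored.
    public static List<String> classNames(String contents) {
        List<String> names = new ArrayList<>();
        for (String line : contents.split("\n")) {
            int comment = line.indexOf('#');
            if (comment >= 0) {
                line = line.substring(0, comment);
            }
            line = line.trim();
            if (!line.isEmpty()) {
                names.add(line);
            }
        }
        return names;
    }

    public static void main(String[] args) {
        String file = "# implementations of io.quarkus.SomeService\n"
                + "org.acme.FirstImpl\n"
                + "\n"
                + "org.acme.SecondImpl # default\n";
        System.out.println(classNames(file)); // [org.acme.FirstImpl, org.acme.SecondImpl]
    }
}
```

Whichever registration strategy you pick (reflection or a static-init recorder), it is this list of class names that ends up being registered.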
- -You can achieve that by adapting the previous build step to use a static-init recorder instead of registering -classes for reflection: - -[source,java] ----- -public final class MyExtProcessor { - - @BuildStep - @Record(ExecutionTime.STATIC_INIT) - void registerNativeImageResources(RecorderContext recorderContext, - SomeServiceRecorder recorder) { - String service = "META-INF/services/" + io.quarkus.SomeService.class.getName(); - - // read the implementation classes - Collection<Class<? extends io.quarkus.SomeService>> implementationClasses = new LinkedHashSet<>(); - Set<String> implementations = ServiceUtil.classNamesNamedIn(Thread.currentThread().getContextClassLoader(), - service); - for (String implementation : implementations) { - implementationClasses.add((Class<? extends io.quarkus.SomeService>) - recorderContext.classProxy(implementation)); - } - - // produce a static-initializer with those classes - recorder.configure(implementationClasses); - } -} - -@Recorder -public class SomeServiceRecorder { - - public void configure(Collection<Class<? extends io.quarkus.SomeService>> implementations) { - // configure our service statically - SomeServiceProvider serviceProvider = SomeServiceProvider.instance(); - SomeServiceBuilder builder = serviceProvider.getSomeServiceBuilder(); - - List<io.quarkus.SomeService> services = new ArrayList<>(implementations.size()); - // instantiate the service implementations - for (Class<? extends io.quarkus.SomeService> implementationClass : implementations) { - try { - services.add(implementationClass.getConstructor().newInstance()); - } catch (Exception e) { - throw new IllegalArgumentException("Unable to instantiate service " + implementationClass, e); - } - } - - // build our service - builder.withSomeServices(services.toArray(new io.quarkus.SomeService[0])); - ServiceManager serviceManager = builder.build(); - - // register it - serviceProvider.registerServiceManager(serviceManager, Thread.currentThread().getContextClassLoader()); - } -} ----- - - -==== Object Substitution -Objects created during the build phase that are passed into the runtime need to have a default constructor in order for them to
be created and configured at startup of the runtime from the build time state. If an object does not have a default constructor you will see an error similar to the following during generation of the augmented artifacts: - -.DSAPublicKey Serialization Error -[source,text] ----- - [error]: Build step io.quarkus.deployment.steps.MainClassBuildStep#build threw an exception: java.lang.RuntimeException: Unable to serialize objects of type class sun.security.provider.DSAPublicKeyImpl to bytecode as it has no default constructor - at io.quarkus.builder.Execution.run(Execution.java:123) - at io.quarkus.builder.BuildExecutionBuilder.execute(BuildExecutionBuilder.java:136) - at io.quarkus.deployment.QuarkusAugmentor.run(QuarkusAugmentor.java:110) - at io.quarkus.runner.RuntimeRunner.run(RuntimeRunner.java:99) - ... 36 more ----- - -There is an `io.quarkus.runtime.ObjectSubstitution` interface that can be implemented to tell Quarkus how to handle such classes. An example implementation for the `DSAPublicKey` is shown here: - -.DSAPublicKeyObjectSubstitution Example -[source,java] ----- -package io.quarkus.extest.runtime.subst; - -import java.security.KeyFactory; -import java.security.NoSuchAlgorithmException; -import java.security.interfaces.DSAPublicKey; -import java.security.spec.InvalidKeySpecException; -import java.security.spec.X509EncodedKeySpec; -import java.util.logging.Logger; - -import io.quarkus.runtime.ObjectSubstitution; - -public class DSAPublicKeyObjectSubstitution implements ObjectSubstitution<DSAPublicKey, KeyProxy> { - private static final Logger log = Logger.getLogger("DSAPublicKeyObjectSubstitution"); - @Override - public KeyProxy serialize(DSAPublicKey obj) { //<1> - log.info("DSAPublicKeyObjectSubstitution.serialize"); - byte[] encoded = obj.getEncoded(); - KeyProxy proxy = new KeyProxy(); - proxy.setContent(encoded); - return proxy; - } - - @Override - public DSAPublicKey deserialize(KeyProxy obj) { //<2> - log.info("DSAPublicKeyObjectSubstitution.deserialize"); - byte[]
encoded = obj.getContent(); - X509EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(encoded); - DSAPublicKey dsaPublicKey = null; - try { - KeyFactory kf = KeyFactory.getInstance("DSA"); - dsaPublicKey = (DSAPublicKey) kf.generatePublic(publicKeySpec); - - } catch (NoSuchAlgorithmException | InvalidKeySpecException e) { - e.printStackTrace(); - } - return dsaPublicKey; - } -} ----- -<1> The serialize method takes the object without a default constructor and creates a `KeyProxy` that contains the information necessary to recreate the `DSAPublicKey`. -<2> The deserialize method uses the `KeyProxy` to recreate the `DSAPublicKey` from its encoded form using the key factory. - -An extension registers this substitution by producing an `ObjectSubstitutionBuildItem` as shown in this `TestProcessor#loadDSAPublicKey` fragment: - -.Registering an Object Substitution -[source,java] ----- - @BuildStep - @Record(STATIC_INIT) - PublicKeyBuildItem loadDSAPublicKey(TestRecorder recorder, - BuildProducer<ObjectSubstitutionBuildItem> substitutions) throws IOException, GeneralSecurityException { -... - // Register how to serialize DSAPublicKey - ObjectSubstitutionBuildItem.Holder holder = new ObjectSubstitutionBuildItem.Holder( - DSAPublicKey.class, KeyProxy.class, DSAPublicKeyObjectSubstitution.class); - ObjectSubstitutionBuildItem keysub = new ObjectSubstitutionBuildItem(holder); - substitutions.produce(keysub); - - log.info("loadDSAPublicKey run"); - return new PublicKeyBuildItem(publicKey); - } ----- - -==== Replacing Classes in the Native Image -The Graal SDK supports substitutions of classes in the native image.
An example of how one could replace the `XmlConfig/XmlData` classes with versions that have no JAXB annotation dependencies is shown in these example classes: - -.Substitution of XmlConfig/XmlData Classes Example -[source,java] ----- -package io.quarkus.extest.runtime.graal; -import java.util.ArrayList; -import java.util.Date; -import com.oracle.svm.core.annotate.Substitute; -import com.oracle.svm.core.annotate.TargetClass; -import io.quarkus.extest.runtime.config.XmlConfig; -import io.quarkus.extest.runtime.config.XmlData; - -@TargetClass(XmlConfig.class) -@Substitute -public final class Target_XmlConfig { - - @Substitute - private String address; - @Substitute - private int port; - @Substitute - private ArrayList<XmlData> dataList; - - @Substitute - public String getAddress() { - return address; - } - - @Substitute - public int getPort() { - return port; - } - - @Substitute - public ArrayList<XmlData> getDataList() { - return dataList; - } - - @Substitute - @Override - public String toString() { - return "Target_XmlConfig{" + - "address='" + address + '\'' + - ", port=" + port + - ", dataList=" + dataList + - '}'; - } -} - -@TargetClass(XmlData.class) -@Substitute -public final class Target_XmlData { - - @Substitute - private String name; - @Substitute - private String model; - @Substitute - private Date date; - - @Substitute - public String getName() { - return name; - } - - @Substitute - public String getModel() { - return model; - } - - @Substitute - public Date getDate() { - return date; - } - - @Substitute - @Override - public String toString() { - return "Target_XmlData{" + - "name='" + name + '\'' + - ", model='" + model + '\'' + - ", date='" + date + '\'' + - '}'; - } -} ----- - -== Configuration reference documentation - -The configuration is an important part of each extension and therefore needs to be properly documented. -Each configuration property should have a proper Javadoc comment. - -While it is handy to have the documentation available when coding, this configuration documentation must also be available in the extension guides.
-The Quarkus build automatically generates the configuration documentation for you based on the Javadoc comments but you need to explicitly include it in your guide. - -In this section, we will explain everything you need to know about the configuration reference documentation. - -=== Writing the documentation - -For each configuration property, you need to write some Javadoc explaining its purpose. - -[TIP] -==== -Always make the first sentence meaningful and self-contained as it is included in the summary table. -==== - -You can either use standard Javadoc comments or Asciidoc directly as a Javadoc comment. - -We assume you are familiar with writing Javadoc comments, so let's focus on our Asciidoc support. -While standard Javadoc comments are perfectly fine for simple documentation (recommended even), -if you want to include tips, source code extracts, lists... Asciidoc comes in handy. - -Here is a typical configuration property commented with Asciidoc: - -[source,java] ----- -/** - * Class name of the Hibernate ORM dialect. The complete list of bundled dialects is available in the - * https://docs.jboss.org/hibernate/stable/orm/javadocs/org/hibernate/dialect/package-summary.html[Hibernate ORM JavaDoc]. - * - * [NOTE] - * ==== - * Not all the dialects are supported in GraalVM native executables: we currently provide driver extensions for PostgreSQL, - * MariaDB, Microsoft SQL Server and H2. - * ==== - * - * @asciidoclet - */ -@ConfigItem -public Optional<String> dialect; ----- - -This is the simple case: you just have to write Asciidoc and mark the comment with the `@asciidoclet` tag. -This tag has two purposes: it is used as a marker for our generation tool but it is also used by the `javadoc` process for proper Javadoc generation. - -Now let's consider a more complicated example: - -[source,java] ----- -// @formatter:off -/** - * Name of the file containing the SQL statements to execute when Hibernate ORM starts.
- * Its default value differs depending on the Quarkus launch mode: - * - * * In dev and test modes, it defaults to `import.sql`. - * Simply add an `import.sql` file in the root of your resources directory - * and it will be picked up without having to set this property. - * Pass `no-file` to force Hibernate ORM to ignore the SQL import file. - * * In production mode, it defaults to `no-file`. - * It means Hibernate ORM won't try to execute any SQL import file by default. - * Pass an explicit value to force Hibernate ORM to execute the SQL import file. - * - * If you need different SQL statements between dev mode, test (`@QuarkusTest`) and production, use the Quarkus - * https://quarkus.io/guides/config#configuration-profiles[configuration profiles facility]. - * - * [source,property] - * .application.properties - * ---- - * %dev.quarkus.hibernate-orm.sql-load-script = import-dev.sql - * %test.quarkus.hibernate-orm.sql-load-script = import-test.sql - * %prod.quarkus.hibernate-orm.sql-load-script = no-file - * ---- - * - * [NOTE] - * ==== - * Quarkus supports `.sql` files with SQL statements or comments spread over multiple lines. - * Each SQL statement must be terminated by a semicolon. - * ==== - * - * @asciidoclet - */ -// @formatter:on -@ConfigItem -public Optional<String> sqlLoadScript; ----- - -A few comments on this one: - - * Every time you need the indentation to be respected in the Javadoc comment (think list items spread over multiple lines or indented source code), - you need to temporarily disable the automatic Eclipse formatter - (even if you don't use Eclipse, as the formatter is included in our build). - To do so, use the `// @formatter:off`/`// @formatter:on` markers. - Note that they are separate comments and there is a space after the `//` marker. This is required.
- * As you can see, you can use the full power of Asciidoctor (except for the limitation below).
-
-[WARNING]
-====
-You cannot use open blocks (`--`) in your Asciidoctor documentation.
-All the other types of blocks (source, admonitions...) are supported.
-====
-
-[TIP]
-====
-By default, the doc generator will use the hyphenated field name as the key of a `java.util.Map` configuration item.
-To override this default and have a user-friendly key (independent of implementation details), you may use the `io.quarkus.runtime.annotations.ConfigDocMapKey` annotation.
-See the following example:
-[source,java]
-----
-@ConfigRoot
-public class SomeConfig {
-    /**
-     * Namespace configuration.
-     */
-    @ConfigItem(name = ConfigItem.PARENT)
-    @ConfigDocMapKey("cache-name") <1>
-    Map<String, String> namespace;
-}
-----
-<1> This will generate a configuration map key named `quarkus.some."cache-name"` instead of `quarkus.some."namespace"`.
-====
-
-=== Writing section documentation
-
-If you wish to generate a configuration section for a given `@ConfigGroup`, Quarkus has got you covered with the `@ConfigDocSection` annotation.
-See the code example below:
-[source,java]
-----
-/**
- * Config group related configuration.
- * Amazing introduction here
- */
-@ConfigItem
-@ConfigDocSection <1>
-public ConfigGroupConfig configGroup;
-----
-<1> This will add a section documentation for the `configGroup` config item in the generated documentation.
-The section's title and introduction are derived from the Javadoc of the configuration item: the first sentence is used as the section title and the remaining sentences as the section introduction.
-You can also use the `@asciidoclet` tag as shown above.
-
-=== Generating the documentation
-
-Generating the documentation is easy:
-
- * Running `./mvnw clean install -DskipTests -DskipITs` will do.
- * You can either do it globally or in a specific extension directory (e.g. `extensions/mailer`).
-
-The documentation is generated in the global `target/asciidoc/generated/config/` directory located at the root of the project.
-
-=== Including the documentation in the extension guide
-
-Now that you have generated the configuration reference documentation for your extension, you need to include it in your guide (and review it).
-
-This is simple: include the generated documentation in your guide:
-
-[source,asciidoc]
-----
-\include::{generated-dir}/config/quarkus-your-extension.adoc[opts=optional, leveloffset=+1]
-----
-
-If you are interested in including the generated documentation for a config group, you can use the include statement below:
-[source,asciidoc]
-----
-\include::{generated-dir}/config/hyphenated-config-group-class-name-with-runtime-or-deployment-namespace-replaced-by-config-group-namespace.adoc[opts=optional, leveloffset=+1]
-----
-
-For example, the `io.quarkus.vertx.http.runtime.FormAuthConfig` configuration group will be generated in a file named `quarkus-vertx-http-config-group-form-auth-config.adoc`.
-
-
-A few recommendations:
-
- * `opts=optional` is mandatory as we don't want the build to fail if only part of the configuration documentation has been generated.
- * The documentation is generated with a title level of 2 (i.e. `==`).
-   You usually need to adjust it.
-   It can be done with `leveloffset=+N`.
-
-It is not recommended to include the whole configuration documentation in the middle of your guide as it's heavy.
-If you have an `application.properties` extract with your configuration, just do as follows.
-
-First, include a tip just below your `application.properties` extract:
-
-[source, asciidoc]
-----
-[TIP]
-For more information about the extension configuration please refer to the <<configuration-reference,Configuration Reference>>.
----
-
-Then, at the end of your documentation, include the extensive documentation:
-
-[source, asciidoc]
-----
-[[configuration-reference]]
-== Configuration Reference
-
-\include::{generated-dir}/config/quarkus-your-extension.adoc[opts=optional, leveloffset=+1]
-----
-
-Finally, generate the documentation and check it out.
-
-[[ecosystem-ci]]
-== Continuous testing of your extension
-
-In order to make it easy for extension authors to test their extensions daily against the latest snapshot of Quarkus, Quarkus has introduced
-the notion of Ecosystem CI. The Ecosystem CI link:https://github.com/quarkusio/quarkus-ecosystem-ci/blob/main/README.adoc[README]
-has all the details on how to set up a GitHub Actions job to take advantage of this capability, while this link:https://www.youtube.com/watch?v=VpbRA1n0hHQ[video] provides an overview
-of what the process looks like.
-
-== Publish your extension in registry.quarkus.io
-
-Before publishing your extension to the xref:tooling.adoc[Quarkus tooling], make sure that the following requirements are met:
-
-* The `quarkus-extension.yaml` file (in the extension's `runtime/` module) has the minimum metadata set:
-** `name`
-** `description` (unless you have it already set in the ``runtime/pom.xml``'s `<description>` element, which is the recommended approach)
-
-* Your extension is published in Maven Central
-
-* Your extension repository is configured to use the <<ecosystem-ci,Ecosystem CI>>.
-
-Then you must create a pull request adding a `your-extension.yaml` file in the `extensions/` directory in the link:https://github.com/quarkusio/quarkus-extension-catalog[Quarkus Extension Catalog]. The YAML must have the following structure:
-
-```yaml
-group-id: <your-extension-group-id>
-artifact-id: <your-extension-artifact-id>
-```
-
-That's all. Once the pull request is merged, a scheduled job will check Maven Central for new versions and update the xref:extension-registry-user.adoc[Quarkus Extension Registry].
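For illustration, a filled-in catalog entry for a hypothetical extension could look like this (the coordinates below are made up; use your extension's actual Maven coordinates):

```yaml
# extensions/quarkus-acme-widgets.yaml — hypothetical example entry
group-id: org.acme
artifact-id: quarkus-acme-widgets
```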
- diff --git a/_versions/2.7/guides/writing-native-applications-tips.adoc b/_versions/2.7/guides/writing-native-applications-tips.adoc deleted file mode 100644 index b644ec4cb14..00000000000 --- a/_versions/2.7/guides/writing-native-applications-tips.adoc +++ /dev/null @@ -1,454 +0,0 @@ -//// -This guide is maintained in the main Quarkus repository -and pull requests should be submitted there: -https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc -//// -= Tips for writing native applications - -include::./attributes.adoc[] - -This guide contains various tips and tricks for getting around problems that might arise when attempting to run Java applications as native executables. - -Note that we differentiate two contexts where the solution applied might be different: - - * in the context of an application, you will rely on configuring the `native-image` configuration by tweaking your `pom.xml`; - * in the context of an extension, Quarkus offers a lot of infrastructure to simplify all of this. - -Please refer to the appropriate section depending on your context. - -== Supporting native in your application - -GraalVM imposes a number of constraints and making your application a native executable might require a few tweaks. - -=== Including resources - -By default, when building a native executable, GraalVM will not include any of the resources that are on the classpath into the native executable it creates. -Resources that are meant to be part of the native executable need to be configured explicitly. - -Quarkus automatically includes the resources present in `META-INF/resources` (the web resources) but, outside of this directory, you are on your own. - -[WARNING] -==== -Note that you need to be extremely careful here as anything in `META-INF/resources` will be exposed as static web resources. -So this directory is not a shortcut for "let's automatically include these resources in the native executable" and should only be used for static web resources. 
- -Other resources should be declared explicitly. -==== - -To include more resources in the native executable, the easiest way is to use the `quarkus.native.resources.includes` configuration property, -and its counterpart to exclude resources `quarkus.native.resources.excludes`. - -Both configuration properties support glob patterns. - -For instance, having the following properties in your `application.properties`: - -[source,properties] ----- -quarkus.native.resources.includes=foo/**,bar/**/*.txt -quarkus.native.resources.excludes=foo/private/** ----- - -will include: - -* all files in the `foo/` directory and its subdirectories except for files in `foo/private/` and its subdirectories, -* all text files in the `bar/` directory and its subdirectories. - -If globs are not sufficiently precise for your use case and you need to rely on regular expressions or if you prefer relying on the GraalVM infrastructure, -you can also create a `resources-config.json` (the most common location is within `src/main/resources`) JSON file defining which resources should be included: - -[source,json] ----- -{ - "resources": [ - { - "pattern": ".*\\.xml$" - }, - { - "pattern": ".*\\.json$" - } - ] -} ----- - -The patterns are valid Java regexps. -Here we include all the XML files and JSON files into the native executable. - -[NOTE] -==== -You can find more information about this topic in https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/Resources.md[the GraalVM documentation]. -==== - -The final order of business is to make the configuration file known to the `native-image` executable by adding the proper configuration to `application.properties`: - -[source,properties] ----- -quarkus.native.additional-build-args =-H:ResourceConfigurationFiles=resources-config.json ----- - -In the previous snippet we were able to simply use `resources-config.json` instead of specifying the entire path of the file simply because it was added to `src/main/resources`. 
-If the file had been added to another directory, the proper file path would have had to be specified manually.
-
-[TIP]
-====
-Multiple options may be separated by a comma. For example, one could use:
-
-[source,properties]
-----
-quarkus.native.additional-build-args =\
-    -H:ResourceConfigurationFiles=resources-config.json,\
-    -H:ReflectionConfigurationFiles=reflection-config.json
-----
-
-in order to ensure that various resources are included and additional reflection is registered.
-
-====
-If for some reason adding the aforementioned configuration to `application.properties` is not desirable, it is possible to configure the build tool to effectively perform the same operation.
-
-When using Maven, we could use the following configuration:
-
-[source,xml]
-----
-<profiles>
-    <profile>
-        <id>native</id>
-        <properties>
-            <quarkus.package.type>native</quarkus.package.type>
-            <quarkus.native.additional-build-args>-H:ResourceConfigurationFiles=resources-config.json</quarkus.native.additional-build-args>
-        </properties>
-    </profile>
-</profiles>
-----
-
-=== Registering for reflection
-
-When building a native executable, GraalVM operates with a closed world assumption.
-It analyzes the call tree and removes all the classes/methods/fields that are not used directly.
-
-The elements used via reflection are not part of the call tree, so they are eliminated as dead code (unless they are also called directly elsewhere).
-To include these elements in your native executable, you need to register them for reflection explicitly.
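To make the closed-world constraint concrete, here is a minimal, self-contained sketch (not taken from Quarkus itself) of the kind of reflective call the static analysis cannot follow, because the class name is only known at runtime:

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // The class name could come from configuration or user input, so the
        // native-image analysis cannot add StringBuilder to the call tree.
        Class<?> clazz = Class.forName("java.lang.StringBuilder");
        Object instance = clazz.getDeclaredConstructor().newInstance();
        Method append = clazz.getMethod("append", String.class);
        append.invoke(instance, "hello");
        System.out.println(instance); // prints "hello"
    }
}
```

On the JVM this just works; in a native executable, an unregistered class looked up this way fails at runtime (typically with a `ClassNotFoundException`), which is why explicit registration is needed.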
-
-This is a very common case as JSON libraries typically use reflection to serialize the objects to JSON:
-
-[source,java]
-----
-public class Person {
-    private String first;
-    private String last;
-
-    public String getFirst() {
-        return first;
-    }
-
-    public void setFirst(String first) {
-        this.first = first;
-    }
-
-    public String getLast() {
-        return last;
-    }
-
-    public void setLast(String last) {
-        this.last = last;
-    }
-}
-
-@Path("/person")
-@Produces(MediaType.APPLICATION_JSON)
-@Consumes(MediaType.APPLICATION_JSON)
-public class PersonResource {
-
-    private final Jsonb jsonb;
-
-    public PersonResource() {
-        jsonb = JsonbBuilder.create(new JsonbConfig());
-    }
-
-    @GET
-    public Response list() {
-        return Response.ok(jsonb.fromJson("{\"first\": \"foo\", \"last\": \"bar\"}", Person.class)).build();
-    }
-}
-----
-
-If we were to use the code above, we would get an exception like the following when using the native executable:
-
-[source]
-----
-Exception handling request to /person: org.jboss.resteasy.spi.UnhandledException: javax.json.bind.JsonbException: Can't create instance of a class: class org.acme.jsonb.Person, No default constructor found
-----
-
-or if you are using Jackson:
-
-[source]
-----
-com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class org.acme.jsonb.Person and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS)
-----
-
-An even nastier possible outcome could be for no exception to be thrown, but instead the JSON result would be completely empty.
-
-There are two different ways to fix this type of issue.
-
-[#registerForReflection]
-==== Using the @RegisterForReflection annotation
-
-The easiest way to register a class for reflection is to use the `@RegisterForReflection` annotation:
-
-[source,java]
-----
-@RegisterForReflection
-public class MyClass {
-}
-----
-
-If your class is in a third-party jar, you can register it by using an empty class that hosts the `@RegisterForReflection` annotation for it:
-
-[source,java]
-----
-@RegisterForReflection(targets={ MyClassRequiringReflection.class, MySecondClassRequiringReflection.class})
-public class MyReflectionConfiguration {
-}
-----
-
-Note that `MyClassRequiringReflection` and `MySecondClassRequiringReflection` will be registered for reflection but not `MyReflectionConfiguration`.
-
-This feature is handy when using third-party libraries that rely on object mapping features (such as Jackson or GSON):
-
-[source, java]
-----
-@RegisterForReflection(targets = {User.class, UserImpl.class})
-public class MyReflectionConfiguration {
-
-}
-----
-
-==== Using a configuration file
-
-You can use a configuration file to register classes for reflection.
-
-As an example, in order to register all methods of class `com.acme.MyClass` for reflection, we create `reflection-config.json` (the most common location is within `src/main/resources`):
-
-[source,json]
-----
-[
-    {
-        "name" : "com.acme.MyClass",
-        "allDeclaredConstructors" : true,
-        "allPublicConstructors" : true,
-        "allDeclaredMethods" : true,
-        "allPublicMethods" : true,
-        "allDeclaredFields" : true,
-        "allPublicFields" : true
-    }
-]
-----
-
-[NOTE]
-====
-For more details on the format of this file, please refer to https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/Reflection.md[the GraalVM documentation].
-==== - -The final order of business is to make the configuration file known to the `native-image` executable by adding the proper configuration to `application.properties`: - -[source,properties] ----- -quarkus.native.additional-build-args =-H:ReflectionConfigurationFiles=reflection-config.json ----- - -In the previous snippet we were able to simply use `reflection-config.json` instead of specifying the entire path of the file simply because it was added to `src/main/resources`. -If the file had been added to another directory, the proper file path would have had to be specified manually. - -[TIP] -==== -Multiple options may be separated by a comma. For example, one could use: - -[source,properties] ----- -quarkus.native.additional-build-args =\ - -H:ResourceConfigurationFiles=resources-config.json,\ - -H:ReflectionConfigurationFiles=reflection-config.json ----- - -in order to ensure that various resources are included and additional reflection is registered. - -==== -If for some reason adding the aforementioned configuration to `application.properties` is not desirable, it is possible to configure the build tool to effectively perform the same operation. - -When using Maven, we could use the following configuration: - -[source,xml] ----- - - - native - - native - -H:ReflectionConfigurationFiles=reflection-config.json - - - ----- - -=== Delaying class initialization - -By default, Quarkus initializes all classes at build time. - -There are cases where the initialization of certain classes is done in a static block needs to be postponed to runtime. 
-Typically, omitting such configuration would result in a runtime exception like the following:
-
-[source]
-----
-Error: No instances are allowed in the image heap for a class that is initialized or reinitialized at image runtime: sun.security.provider.NativePRNG
-Trace: object java.security.SecureRandom
-method com.amazonaws.services.s3.model.CryptoConfiguration.<init>(CryptoMode)
-Call path from entry point to com.amazonaws.services.s3.model.CryptoConfiguration.<init>(CryptoMode):
-----
-
-If you need to delay the initialization of a class, you can use the `--initialize-at-run-time=<package or class>` configuration knob.
-
-It should be added to the `native-image` configuration using the `quarkus.native.additional-build-args` configuration property as shown in the examples above.
-
-[NOTE]
-====
-You can find more information about all this in https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/ClassInitialization.md[the GraalVM documentation].
-====
-
-[NOTE]
-====
-When multiple classes or packages need to be specified via the `quarkus.native.additional-build-args` configuration property, the `,` symbol needs to be escaped.
-An example of this is the following:
-
-[source,properties]
-----
-quarkus.native.additional-build-args=--initialize-at-run-time=com.example.SomeClass\\,org.acme.SomeOtherClass
-----
-
-and in the case of using the Maven configuration instead of `application.properties`:
-
-[source,xml]
-----
-<quarkus.native.additional-build-args>--initialize-at-run-time=com.example.SomeClass\,org.acme.SomeOtherClass</quarkus.native.additional-build-args>
-----
-====
-
-=== Managing Proxy Classes
-
-While writing a native application, you will need to define proxy classes at image build time by specifying the list of interfaces that they implement.
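For context, this is roughly the kind of code that creates such a proxy at runtime (a hand-rolled illustration; in the error below the proxied interfaces come from the AWS SDK and Apache HttpClient instead):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // Stand-in interface; any set of interfaces can be proxied this way.
    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        // The handler receives every method call made on the proxy instance.
        InvocationHandler handler = (proxy, method, methodArgs) -> "Hello " + methodArgs[0];

        // On the JVM this proxy class is generated at runtime; for a native
        // executable, the interface list must be declared at image build time.
        Greeter greeter = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);

        System.out.println(greeter.greet("Quarkus")); // prints "Hello Quarkus"
    }
}
```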
-
-In such a situation, the error you might encounter is:
-
-[source]
-----
-com.oracle.svm.core.jdk.UnsupportedFeatureError: Proxy class defined by interfaces [interface org.apache.http.conn.HttpClientConnectionManager, interface org.apache.http.pool.ConnPoolControl, interface com.amazonaws.http.conn.Wrapped] not found. Generating proxy classes at runtime is not supported. Proxy classes need to be defined at image build time by specifying the list of interfaces that they implement. To define proxy classes use -H:DynamicProxyConfigurationFiles=<comma-separated-config-files> and -H:DynamicProxyConfigurationResources=<comma-separated-config-resources> options.
-----
-
-Solving this issue requires adding the `-H:DynamicProxyConfigurationResources=<comma-separated-config-resources>` option and providing a dynamic proxy configuration file.
-You can find all the information about the format of this file in https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/DynamicProxy.md#manual-configuration[the GraalVM documentation].
-
-[[native-in-extension]]
-== Supporting native in a Quarkus extension
-
-Supporting native in a Quarkus extension is even easier, as Quarkus provides a lot of tools to simplify all this.
-
-[WARNING]
-====
-Everything described here will only work in the context of Quarkus extensions; it won't work in an application.
-====
-
-=== Register reflection
-
-Quarkus makes registration of reflection in an extension a breeze by using `ReflectiveClassBuildItem`, thus eliminating the need for a JSON configuration file.
-
-To register a class for reflection, one would need to create a Quarkus processor class and add a build step that registers reflection:
-
-[source,java]
-----
-public class SaxParserProcessor {
-
-    @BuildStep
-    ReflectiveClassBuildItem reflection() {
-        // since we only need reflection to the constructor of the class, we can specify `false` for both the methods and the fields arguments.
-        return new ReflectiveClassBuildItem(false, false, "com.sun.org.apache.xerces.internal.parsers.SAXParser");
-    }
-
-}
-----
-
-[NOTE]
-====
-More information about reflection in GraalVM can be found https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/Reflection.md[here].
-====
-
-=== Including resources
-
-In the context of an extension, Quarkus eliminates the need for a JSON configuration file by allowing extension authors to specify a `NativeImageResourceBuildItem`:
-
-[source,java]
-----
-public class ResourcesProcessor {
-
-    @BuildStep
-    NativeImageResourceBuildItem nativeImageResourceBuildItem() {
-        return new NativeImageResourceBuildItem("META-INF/extra.properties");
-    }
-
-}
-----
-
-[NOTE]
-====
-For more information about GraalVM resource handling in native executables please refer to https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/Resources.md[the GraalVM documentation].
-====
-
-
-=== Delay class initialization
-
-Quarkus simplifies things by allowing extension authors to simply register a `RuntimeInitializedClassBuildItem`. A simple example of doing so could be:
-
-[source,java]
-----
-public class S3Processor {
-
-    @BuildStep
-    RuntimeInitializedClassBuildItem cryptoConfiguration() {
-        return new RuntimeInitializedClassBuildItem(CryptoConfiguration.class.getCanonicalName());
-    }
-
-}
-----
-
-Using such a construct means that a `--initialize-at-run-time` option will automatically be added to the `native-image` command line.
-
-[NOTE]
-====
-For more information about `--initialize-at-run-time`, please read https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/ClassInitialization.md[the GraalVM documentation].
-====
-
-=== Managing Proxy Classes
-
-Very similarly, Quarkus allows extension authors to register a `NativeImageProxyDefinitionBuildItem`.
An example of doing so is: - -[source,java] ----- -public class S3Processor { - - @BuildStep - NativeImageProxyDefinitionBuildItem httpProxies() { - return new NativeImageProxyDefinitionBuildItem("org.apache.http.conn.HttpClientConnectionManager", - "org.apache.http.pool.ConnPoolControl", "com.amazonaws.http.conn.Wrapped"); - } - -} ----- - -Using such a construct means that a `-H:DynamicProxyConfigurationResources` option will automatically be added to the `native-image` command line. - -[NOTE] -==== -For more information about Proxy Classes you can read https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/DynamicProxy.md[the GraalVM documentation]. -==== - -=== Logging with Native Image - -If you are using dependencies that require logging components such as Apache Commons Logging or Log4j and are experiencing a `ClassNotFoundException` when building the native executable, you can resolve this by excluding the logging library and adding the corresponding JBoss Logging adapter. - -For more details please refer to the xref:logging.adoc#logging-adapters[Logging guide].
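As a sketch of what the exclusion looks like in practice (the first dependency's coordinates are hypothetical; the JBoss Logging adapter artifact is real, but check the version catalog for your Quarkus release), assuming Apache Commons Logging is the library causing the `ClassNotFoundException`:

```xml
<!-- Hypothetical third-party dependency that pulls in commons-logging -->
<dependency>
    <groupId>org.example</groupId>
    <artifactId>some-library</artifactId>
    <exclusions>
        <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- JBoss Logging adapter that replaces the excluded logging library -->
<dependency>
    <groupId>org.jboss.logging</groupId>
    <artifactId>commons-logging-jboss-logging</artifactId>
</dependency>
```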