diff --git a/pom.xml b/pom.xml
index 38a742d830..0a9bd1723b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -334,10 +334,6 @@
 				<groupId>org.apache.maven.plugins</groupId>
 				<artifactId>maven-assembly-plugin</artifactId>
 			</plugin>
-			<plugin>
-				<groupId>org.asciidoctor</groupId>
-				<artifactId>asciidoctor-maven-plugin</artifactId>
-			</plugin>
 			<plugin>
 				<groupId>io.spring.maven.antora</groupId>
 				<artifactId>antora-maven-plugin</artifactId>
diff --git a/src/main/asciidoc/auditing.adoc b/src/main/asciidoc/auditing.adoc
deleted file mode 100644
index 152558d4a0..0000000000
--- a/src/main/asciidoc/auditing.adoc
+++ /dev/null
@@ -1,123 +0,0 @@
-[[auditing]]
-= Auditing
-
-[[auditing.basics]]
-== Basics
-Spring Data provides sophisticated support to transparently keep track of who created or changed an entity and when the change happened. To benefit from that functionality, you have to equip your entity classes with auditing metadata that can be defined either using annotations or by implementing an interface.
-Additionally, auditing has to be enabled either through annotation configuration or XML configuration to register the required infrastructure components.
-Please refer to the store-specific section for configuration samples.
-
-[NOTE]
-====
-Applications that only track creation and modification dates are not required to make their entities implement <<auditing.interfaces,`Auditable`>>.
-====
-
-[[auditing.annotations]]
-=== Annotation-based Auditing Metadata
-We provide `@CreatedBy` and `@LastModifiedBy` to capture the user who created or modified the entity as well as `@CreatedDate` and `@LastModifiedDate` to capture when the change happened.
-
-.An audited entity
-====
-[source,java]
-----
-class Customer {
-
- @CreatedBy
- private User user;
-
- @CreatedDate
- private Instant createdDate;
-
- // … further properties omitted
-}
-----
-====
-
-As you can see, the annotations can be applied selectively, depending on which information you want to capture.
-The annotations capturing when changes are made can be used on properties of JDK8 date and time types, `long`, `Long`, and legacy Java `Date` and `Calendar`.
-
-Auditing metadata does not necessarily need to live in the root level entity but can be added to an embedded one (depending on the actual store in use), as shown in the snippet below.
-
-.Audit metadata in embedded entity
-====
-[source,java]
-----
-class Customer {
-
- private AuditMetadata auditingMetadata;
-
- // … further properties omitted
-}
-
-class AuditMetadata {
-
- @CreatedBy
- private User user;
-
- @CreatedDate
- private Instant createdDate;
-
-}
-----
-====
-
-[[auditing.interfaces]]
-=== Interface-based Auditing Metadata
-In case you do not want to use annotations to define auditing metadata, you can let your domain class implement the `Auditable` interface. It exposes setter methods for all of the auditing properties.
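-
-The following sketch shows what such an entity could look like. It assumes the `Auditable<U, ID, T extends TemporalAccessor>` contract of recent Spring Data Commons versions (whose getters return `Optional`); the `Customer` and `User` types are only examples.
-
-.An entity implementing `Auditable`
-====
-[source,java]
-----
-// imports assumed: java.time.Instant, java.util.Optional,
-// org.springframework.data.domain.Auditable
-class Customer implements Auditable<User, Long, Instant> {
-
-  private Long id;
-  private User createdBy, lastModifiedBy;
-  private Instant createdDate, lastModifiedDate;
-
-  @Override
-  public Optional<User> getCreatedBy() { return Optional.ofNullable(createdBy); }
-
-  @Override
-  public void setCreatedBy(User createdBy) { this.createdBy = createdBy; }
-
-  @Override
-  public Optional<Instant> getCreatedDate() { return Optional.ofNullable(createdDate); }
-
-  @Override
-  public void setCreatedDate(Instant createdDate) { this.createdDate = createdDate; }
-
-  @Override
-  public Optional<User> getLastModifiedBy() { return Optional.ofNullable(lastModifiedBy); }
-
-  @Override
-  public void setLastModifiedBy(User lastModifiedBy) { this.lastModifiedBy = lastModifiedBy; }
-
-  @Override
-  public Optional<Instant> getLastModifiedDate() { return Optional.ofNullable(lastModifiedDate); }
-
-  @Override
-  public void setLastModifiedDate(Instant lastModifiedDate) { this.lastModifiedDate = lastModifiedDate; }
-
-  @Override
-  public Long getId() { return id; }
-
-  @Override
-  public boolean isNew() { return id == null; }
-
-  // … further properties omitted
-}
-----
-====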
-
-[[auditing.auditor-aware]]
-=== `AuditorAware`
-
-In case you use either `@CreatedBy` or `@LastModifiedBy`, the auditing infrastructure somehow needs to become aware of the current principal. To do so, we provide an `AuditorAware` SPI interface that you have to implement to tell the infrastructure who the current user or system interacting with the application is. The generic type `T` defines what type the properties annotated with `@CreatedBy` or `@LastModifiedBy` have to be.
-
-The following example shows an implementation of the interface that uses Spring Security's `Authentication` object:
-
-.Implementation of `AuditorAware` based on Spring Security
-====
-[source, java]
-----
-class SpringSecurityAuditorAware implements AuditorAware<User> {
-
- @Override
- public Optional<User> getCurrentAuditor() {
-
- return Optional.ofNullable(SecurityContextHolder.getContext())
- .map(SecurityContext::getAuthentication)
- .filter(Authentication::isAuthenticated)
- .map(Authentication::getPrincipal)
- .map(User.class::cast);
- }
-}
-----
-====
-
-The implementation accesses the `Authentication` object provided by Spring Security and looks up the custom `UserDetails` instance that you have created in your `UserDetailsService` implementation. We assume here that you are exposing the domain user through the `UserDetails` implementation but that, based on the `Authentication` found, you could also look it up from anywhere.
-
-[[auditing.reactive-auditor-aware]]
-=== `ReactiveAuditorAware`
-
-When using reactive infrastructure you might want to make use of contextual information to provide `@CreatedBy` or `@LastModifiedBy` information.
-We provide a `ReactiveAuditorAware` SPI interface that you have to implement to tell the infrastructure who the current user or system interacting with the application is. The generic type `T` defines what type the properties annotated with `@CreatedBy` or `@LastModifiedBy` have to be.
-
-The following example shows an implementation of the interface that uses reactive Spring Security's `Authentication` object:
-
-.Implementation of `ReactiveAuditorAware` based on Spring Security
-====
-[source, java]
-----
-class SpringSecurityAuditorAware implements ReactiveAuditorAware<User> {
-
- @Override
- public Mono<User> getCurrentAuditor() {
-
- return ReactiveSecurityContextHolder.getContext()
- .map(SecurityContext::getAuthentication)
- .filter(Authentication::isAuthenticated)
- .map(Authentication::getPrincipal)
- .map(User.class::cast);
- }
-}
-----
-====
-
-The implementation accesses the `Authentication` object provided by Spring Security and looks up the custom `UserDetails` instance that you have created in your `UserDetailsService` implementation. We assume here that you are exposing the domain user through the `UserDetails` implementation but that, based on the `Authentication` found, you could also look it up from anywhere.
diff --git a/src/main/asciidoc/custom-conversions.adoc b/src/main/asciidoc/custom-conversions.adoc
deleted file mode 100644
index 285e7c70f3..0000000000
--- a/src/main/asciidoc/custom-conversions.adoc
+++ /dev/null
@@ -1,41 +0,0 @@
-The following example of a Spring `Converter` implementation converts from a `String` to a custom `Email` value object:
-
-[source,java,subs="verbatim,attributes"]
-----
-@ReadingConverter
-public class EmailReadConverter implements Converter<String, Email> {
-
- public Email convert(String source) {
- return Email.valueOf(source);
- }
-}
-----
-
-If you write a `Converter` whose source and target type are native types, we cannot determine whether we should consider it as a reading or a writing converter.
-Registering the converter instance as both might lead to unwanted results.
-For example, a `Converter<String, Long>` is ambiguous, although it probably does not make sense to try to convert all `String` instances into `Long` instances when writing.
-To let you force the infrastructure to register a converter for only one way, we provide `@ReadingConverter` and `@WritingConverter` annotations to be used in the converter implementation.
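-
-For symmetry, a writing counterpart to the `EmailReadConverter` above might look like the following sketch (it assumes the `Email` value object can render its value as a `String`, here via `toString()`):
-
-[source,java]
-----
-@WritingConverter
-public class EmailWriteConverter implements Converter<Email, String> {
-
-  public String convert(Email source) {
-    return source.toString();
-  }
-}
-----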
-
-Converters are subject to explicit registration as instances are not picked up from a classpath or container scan to avoid unwanted registration with a conversion service and the side effects resulting from such a registration. Converters are registered with `CustomConversions` as the central facility that allows registration and querying for registered converters based on source- and target type.
-
-`CustomConversions` ships with a pre-defined set of converter registrations:
-
-* JSR-310 Converters for conversion between `java.time`, `java.util.Date` and `String` types.
-
-NOTE: Default converters for local temporal types (e.g. `LocalDateTime` to `java.util.Date`) rely on system-default timezone settings to convert between those types. You can override the default converter by registering your own converter.
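-
-As a sketch of such an override, the following converter pins the `LocalDateTime` to `java.util.Date` conversion to UTC instead of the system-default timezone. The class name is made up for the example, and whether it has to be registered as a reading or a writing converter depends on which type your store handles natively.
-
-[source,java]
-----
-// imports assumed: java.time.LocalDateTime, java.time.ZoneOffset, java.util.Date,
-// org.springframework.core.convert.converter.Converter
-public class UtcLocalDateTimeToDateConverter implements Converter<LocalDateTime, Date> {
-
-  @Override
-  public Date convert(LocalDateTime source) {
-    // interpret the value as UTC rather than relying on the JVM default zone
-    return Date.from(source.toInstant(ZoneOffset.UTC));
-  }
-}
-----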
-
-[[customconversions.converter-disambiguation]]
-== Converter Disambiguation
-
-Generally, we inspect the `Converter` implementations for the source and target types they convert from and to.
-Depending on whether one of those is a type the underlying data access API can handle natively, we register the converter instance as a reading or a writing converter.
-The following examples show a write and a read converter (note the difference is in the order of the qualifiers on `Converter`):
-
-[source,java]
-----
-// Write converter as only the target type is one that can be handled natively
-class MyConverter implements Converter<Person, String> { … }
-
-// Read converter as only the source type is one that can be handled natively
-class MyConverter implements Converter<String, Person> { … }
-----
diff --git a/src/main/asciidoc/dependencies.adoc b/src/main/asciidoc/dependencies.adoc
deleted file mode 100644
index 4eaa6d88bf..0000000000
--- a/src/main/asciidoc/dependencies.adoc
+++ /dev/null
@@ -1,61 +0,0 @@
-[[dependencies]]
-= Dependencies
-
-Due to the different inception dates of individual Spring Data modules, most of them carry different major and minor version numbers. The easiest way to find compatible ones is to rely on the Spring Data Release Train BOM that we ship with the compatible versions defined. In a Maven project, you would declare this dependency in the `<dependencyManagement>` section of your POM as follows:
-
-.Using the Spring Data release train BOM
-====
-[source, xml, subs="+attributes"]
-----
-<dependencyManagement>
-  <dependencies>
-    <dependency>
-      <groupId>org.springframework.data</groupId>
-      <artifactId>spring-data-bom</artifactId>
-      <version>{releasetrainVersion}</version>
-      <scope>import</scope>
-      <type>pom</type>
-    </dependency>
-  </dependencies>
-</dependencyManagement>
-----
-====
-
-[[dependencies.train-names]]
-[[dependencies.train-version]]
-The current release train version is `{releasetrainVersion}`. The train version uses https://calver.org/[calver] with the pattern `YYYY.MINOR.MICRO`.
-The version name follows `${calver}` for GA releases and service releases and the following pattern for all other versions: `${calver}-${modifier}`, where `modifier` can be one of the following:
-
-* `SNAPSHOT`: Current snapshots
-* `M1`, `M2`, and so on: Milestones
-* `RC1`, `RC2`, and so on: Release candidates
-
-You can find a working example of using the BOMs in our https://github.com/spring-projects/spring-data-examples/tree/main/bom[Spring Data examples repository]. With that in place, you can declare the Spring Data modules you would like to use without a version in the `<dependencies>` block, as follows:
-
-.Declaring a dependency to a Spring Data module
-====
-[source, xml]
-----
-<dependencies>
-  <dependency>
-    <groupId>org.springframework.data</groupId>
-    <artifactId>spring-data-jpa</artifactId>
-  </dependency>
-</dependencies>
-----
-====
-
-[[dependencies.spring-boot]]
-== Dependency Management with Spring Boot
-
-Spring Boot selects a recent version of the Spring Data modules for you. If you still want to upgrade to a newer version,
-set the `spring-data-bom.version` property to the <<dependencies.train-version,train version and iteration>>
-you would like to use.
-
-See Spring Boot's https://docs.spring.io/spring-boot/docs/current/reference/html/dependency-versions.html#appendix.dependency-versions.properties[documentation]
-(search for "Spring Data Bom") for more details.
-
-[[dependencies.spring-framework]]
-== Spring Framework
-
-The current version of the Spring Data modules requires Spring Framework {springVersion} or better. The modules might also work with an older bugfix version of that minor version. However, using the most recent version within that generation is highly recommended.
diff --git a/src/main/asciidoc/entity-callbacks.adoc b/src/main/asciidoc/entity-callbacks.adoc
deleted file mode 100644
index b9a31a9727..0000000000
--- a/src/main/asciidoc/entity-callbacks.adoc
+++ /dev/null
@@ -1,163 +0,0 @@
-[[entity-callbacks]]
-= Entity Callbacks
-
-The Spring Data infrastructure provides hooks for modifying an entity before and after certain methods are invoked.
-Those so-called `EntityCallback` instances provide a convenient way to check and potentially modify an entity in a callback-fashioned style. +
-An `EntityCallback` looks pretty much like a specialized `ApplicationListener`.
-Some Spring Data modules publish store-specific events (such as `BeforeSaveEvent`) that allow modifying the given entity. In some cases, such as when working with immutable types, these events can cause trouble.
-Also, event publishing relies on `ApplicationEventMulticaster`. If that is configured with an asynchronous `TaskExecutor`, it can lead to unpredictable outcomes, as event processing can be forked onto a thread.
-
-Entity callbacks provide integration points with both synchronous and reactive APIs to guarantee in-order execution at well-defined checkpoints within the processing chain, returning a potentially modified entity or a reactive wrapper type.
-
-Entity callbacks are typically separated by API type. This separation means that a synchronous API considers only synchronous entity callbacks and a reactive implementation considers only reactive entity callbacks.
-
-[NOTE]
-====
-The Entity Callback API has been introduced with Spring Data Commons 2.2. It is the recommended way of applying entity modifications.
-Existing store-specific `ApplicationEvents` are still published *before* invoking potentially registered `EntityCallback` instances.
-====
-
-[[entity-callbacks.implement]]
-== Implementing Entity Callbacks
-
-An `EntityCallback` is directly associated with its domain type through its generic type argument.
-Each Spring Data module typically ships with a set of predefined `EntityCallback` interfaces covering the entity lifecycle.
-
-.Anatomy of an `EntityCallback`
-====
-[source,java]
-----
-@FunctionalInterface
-public interface BeforeSaveCallback<T> extends EntityCallback<T> {
-
- /**
- * Entity callback method invoked before a domain object is saved.
- * Can return either the same or a modified instance.
- *
- * @return the domain object to be persisted.
- */
- T onBeforeSave(T entity <2>, String collection <3>); <1>
-}
-----
-<1> `BeforeSaveCallback` specific method to be called before an entity is saved. Returns a potentially modified instance.
-<2> The entity right before persisting.
-<3> A number of store specific arguments like the _collection_ the entity is persisted to.
-====
-
-.Anatomy of a reactive `EntityCallback`
-====
-[source,java]
-----
-@FunctionalInterface
-public interface ReactiveBeforeSaveCallback<T> extends EntityCallback<T> {
-
- /**
- * Entity callback method invoked on subscription, before a domain object is saved.
- * The returned Publisher can emit either the same or a modified instance.
- *
- * @return Publisher emitting the domain object to be persisted.
- */
- Publisher<T> onBeforeSave(T entity <2>, String collection <3>); <1>
-}
-----
-<1> `ReactiveBeforeSaveCallback` specific method to be called on subscription, before an entity is saved. Emits a potentially modified instance.
-<2> The entity right before persisting.
-<3> A number of store specific arguments like the _collection_ the entity is persisted to.
-====
-
-NOTE: Optional entity callback parameters are defined by the implementing Spring Data module and inferred from call site of `EntityCallback.callback()`.
-
-Implement the interface suiting your application needs, as shown in the example below:
-
-.Example `BeforeSaveCallback`
-====
-[source,java]
-----
-class DefaultingEntityCallback implements BeforeSaveCallback<Person>, Ordered { <2>
-
- @Override
- public Object onBeforeSave(Person entity, String collection) { <1>
-
- if ("user".equals(collection)) {
- return // ...
- }
-
- return // ...
- }
-
- @Override
- public int getOrder() {
- return 100; <2>
- }
-}
-----
-<1> Callback implementation according to your requirements.
-<2> Potentially order the entity callback if multiple ones for the same domain type exist. Ordering follows the `Ordered` contract, so callbacks with lower order values are invoked first.
-====
-
-[[entity-callbacks.register]]
-== Registering Entity Callbacks
-
-`EntityCallback` beans are picked up by the store-specific implementations in case they are registered in the `ApplicationContext`.
-Most template APIs already implement `ApplicationContextAware` and therefore have access to the `ApplicationContext`.
-
-The following example shows a collection of valid entity callback registrations:
-
-.Example `EntityCallback` Bean registration
-====
-[source,java]
-----
-@Order(1) <1>
-@Component
-class First implements BeforeSaveCallback<Person> {
-
- @Override
- public Person onBeforeSave(Person person) {
- return // ...
- }
-}
-
-@Component
-class DefaultingEntityCallback implements BeforeSaveCallback<Person>,
- Ordered { <2>
-
- @Override
- public Object onBeforeSave(Person entity, String collection) {
- // ...
- }
-
- @Override
- public int getOrder() {
- return 100; <2>
- }
-}
-
-@Configuration
-public class EntityCallbackConfiguration {
-
- @Bean
- BeforeSaveCallback<Person> unorderedLambdaReceiverCallback() { <3>
- return (BeforeSaveCallback<Person>) it -> // ...
- }
-}
-
-@Component
-class UserCallbacks implements BeforeConvertCallback<User>,
- BeforeSaveCallback<User> { <4>
-
- @Override
- public User onBeforeConvert(User user) {
- return // ...
- }
-
- @Override
- public User onBeforeSave(User user) {
- return // ...
- }
-}
-----
-<1> `BeforeSaveCallback` receiving its order from the `@Order` annotation.
-<2> `BeforeSaveCallback` receiving its order via the `Ordered` interface implementation.
-<3> `BeforeSaveCallback` using a lambda expression. Unordered by default and invoked last. Note that callbacks implemented by a lambda expression do not expose typing information hence invoking these with a non-assignable entity affects the callback throughput. Use a `class` or `enum` to enable type filtering for the callback bean.
-<4> Combine multiple entity callback interfaces in a single implementation class.
-====
diff --git a/src/main/asciidoc/images/epub-cover.png b/src/main/asciidoc/images/epub-cover.png
deleted file mode 100644
index 539b90504a..0000000000
Binary files a/src/main/asciidoc/images/epub-cover.png and /dev/null differ
diff --git a/src/main/asciidoc/images/epub-cover.svg b/src/main/asciidoc/images/epub-cover.svg
deleted file mode 100644
index 5f53f8a9c6..0000000000
--- a/src/main/asciidoc/images/epub-cover.svg
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
-
diff --git a/src/main/asciidoc/index.adoc b/src/main/asciidoc/index.adoc
deleted file mode 100644
index 14fd59748d..0000000000
--- a/src/main/asciidoc/index.adoc
+++ /dev/null
@@ -1,42 +0,0 @@
-= Spring Data Commons - Reference Documentation
-Oliver Gierke; Thomas Darimont; Christoph Strobl; Mark Pollack; Thomas Risberg; Mark Paluch; Jay Bryant
-:revnumber: {version}
-:revdate: {localdate}
-:feature-scroll: true
-ifdef::backend-epub3[:front-cover-image: image:epub-cover.png[Front Cover,1050,1600]]
-
-(C) 2008-2022 The original authors.
-
-NOTE: Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.
-
-include::preface.adoc[]
-
-[[reference-documentation]]
-= Reference Documentation
-
-:leveloffset: +1
-
-include::dependencies.adoc[]
-
-include::object-mapping.adoc[]
-
-include::repositories.adoc[]
-
-include::repository-projections.adoc[]
-
-include::query-by-example.adoc[]
-
-include::auditing.adoc[]
-
-:leveloffset: -1
-
-[[appendix]]
-= Appendices
-
-:numbered!:
-:leveloffset: +1
-include::repository-namespace-reference.adoc[]
-include::repository-populator-namespace-reference.adoc[]
-include::repository-query-keywords-reference.adoc[]
-include::repository-query-return-types-reference.adoc[]
-:leveloffset: -1
diff --git a/src/main/asciidoc/is-new-state-detection.adoc b/src/main/asciidoc/is-new-state-detection.adoc
deleted file mode 100644
index c800770f9e..0000000000
--- a/src/main/asciidoc/is-new-state-detection.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-[[is-new-state-detection]]
-= Entity State Detection Strategies
-
-The following table describes the strategies that Spring Data offers for detecting whether an entity is new:
-
-.Options for detecting whether an entity is new in Spring Data
-[options = "autowidth",cols="1,1"]
-|===
-|`@Id`-Property inspection (the default)
-|By default, Spring Data inspects the identifier property of the given entity.
-If the identifier property is `null` or `0` in case of primitive types, then the entity is assumed to be new.
-Otherwise, it is assumed to not be new.
-
-|`@Version`-Property inspection
-|If a property annotated with `@Version` is present and `null`, or `0` in case of a version property of primitive type, the entity is considered new.
-If the version property is present but has a different value, the entity is considered to not be new.
-If no version property is present, Spring Data falls back to inspection of the identifier property.
-
-|Implementing `Persistable`
-|If an entity implements `Persistable`, Spring Data delegates the new detection to the `isNew(…)` method of the entity.
-See the link:https://docs.spring.io/spring-data/data-commons/docs/current/api/index.html?org/springframework/data/domain/Persistable.html[Javadoc] for details.
-
-_Note: Properties of `Persistable` will get detected and persisted if you use `AccessType.PROPERTY`.
-To avoid that, use `@Transient`._
-
-|Providing a custom `EntityInformation` implementation
-|You can customize the `EntityInformation` abstraction used in the repository base implementation by creating a subclass of the module specific repository factory and overriding the `getEntityInformation(…)` method.
-You then have to register the custom implementation of module specific repository factory as a Spring bean.
-Note that this should rarely be necessary.
-|===
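-
-The following sketch illustrates the `Persistable` strategy; the entity name and the way the `isNew` flag is cleared are only examples:
-
-.Example implementation of `Persistable`
-====
-[source,java]
-----
-class Customer implements Persistable<Long> {
-
-  private Long id;
-
-  // not persisted, see the note on AccessType.PROPERTY above
-  private @Transient boolean isNew = true;
-
-  @Override
-  public Long getId() {
-    return id;
-  }
-
-  @Override
-  public boolean isNew() {
-    return isNew;
-  }
-
-  // to be called once the entity has been persisted, e.g. from a lifecycle callback
-  void markNotNew() {
-    this.isNew = false;
-  }
-}
-----
-====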
diff --git a/src/main/asciidoc/kotlin-coroutines.adoc b/src/main/asciidoc/kotlin-coroutines.adoc
deleted file mode 100644
index 556031e00a..0000000000
--- a/src/main/asciidoc/kotlin-coroutines.adoc
+++ /dev/null
@@ -1,92 +0,0 @@
-[[kotlin.coroutines]]
-= Coroutines
-
-Kotlin https://kotlinlang.org/docs/reference/coroutines-overview.html[Coroutines] are lightweight threads that allow writing non-blocking code imperatively.
-On the language side, `suspend` functions provide an abstraction for asynchronous operations, while on the library side https://github.com/Kotlin/kotlinx.coroutines[kotlinx.coroutines] provides functions like https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/async.html[`async { }`] and types like https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-flow/index.html[`Flow`].
-
-Spring Data modules provide support for Coroutines on the following scope:
-
-* https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/-deferred/index.html[Deferred] and https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-flow/index.html[Flow] return values support in Kotlin extensions
-
-[[kotlin.coroutines.dependencies]]
-== Dependencies
-
-Coroutines support is enabled when the `kotlinx-coroutines-core`, `kotlinx-coroutines-reactive`, and `kotlinx-coroutines-reactor` dependencies are on the classpath:
-
-.Dependencies to add in Maven pom.xml
-====
-[source,xml]
-----
-<dependency>
-	<groupId>org.jetbrains.kotlinx</groupId>
-	<artifactId>kotlinx-coroutines-core</artifactId>
-</dependency>
-
-<dependency>
-	<groupId>org.jetbrains.kotlinx</groupId>
-	<artifactId>kotlinx-coroutines-reactive</artifactId>
-</dependency>
-
-<dependency>
-	<groupId>org.jetbrains.kotlinx</groupId>
-	<artifactId>kotlinx-coroutines-reactor</artifactId>
-</dependency>
-----
-====
-
-NOTE: Versions `1.3.0` and above are supported.
-
-[[kotlin.coroutines.reactive]]
-== How Reactive translates to Coroutines?
-
-For return values, the translation from Reactive to Coroutines APIs is the following:
-
-* `fun handler(): Mono<Void>` becomes `suspend fun handler()`
-* `fun handler(): Mono<T>` becomes `suspend fun handler(): T` or `suspend fun handler(): T?` depending on if the `Mono` can be empty or not (with the advantage of being more statically typed)
-* `fun handler(): Flux<T>` becomes `fun handler(): Flow<T>`
-
-https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-flow/index.html[`Flow`] is the `Flux` equivalent in the Coroutines world, suitable for hot or cold streams, finite or infinite streams, with the following main differences:
-
-* `Flow` is push-based while `Flux` is push-pull hybrid
-* Backpressure is implemented via suspending functions
-* `Flow` has only a https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-flow/collect.html[single suspending `collect` method] and operators are implemented as https://kotlinlang.org/docs/reference/extensions.html[extensions]
-* https://github.com/Kotlin/kotlinx.coroutines/tree/master/kotlinx-coroutines-core/common/src/flow/operators[Operators are easy to implement] thanks to Coroutines
-* Extensions allow adding custom operators to `Flow`
-* Collect operations are suspending functions
-* https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/map.html[`map` operator] supports asynchronous operation (no need for `flatMap`) since it takes a suspending function parameter
-
-Read this blog post about https://spring.io/blog/2019/04/12/going-reactive-with-spring-coroutines-and-kotlin-flow[Going Reactive with Spring, Coroutines and Kotlin Flow] for more details, including how to run code concurrently with Coroutines.
-
-[[kotlin.coroutines.repositories]]
-== Repositories
-
-Here is an example of a Coroutines repository:
-
-====
-[source,kotlin]
-----
-interface CoroutineRepository : CoroutineCrudRepository<User, String> {
-
- suspend fun findOne(id: String): User
-
- fun findByFirstname(firstname: String): Flow<User>
-
- suspend fun findAllByFirstname(id: String): List<User>
-}
-----
-====
-
-Coroutines repositories are built on reactive repositories to expose the non-blocking nature of data access through Kotlin's Coroutines.
-Methods on a Coroutines repository can be backed either by a query method or a custom implementation.
-Invoking a custom implementation method propagates the Coroutines invocation to the actual implementation method if the custom method is `suspend`-able without requiring the implementation method to return a reactive type such as `Mono` or `Flux`.
-
-Note that depending on the method declaration the coroutine context may or may not be available.
-To retain access to the context, either declare your method using `suspend` or return a type that enables context propagation such as `Flow`.
-
-* `suspend fun findOne(id: String): User`: Retrieve the data once and synchronously by suspending.
-* `fun findByFirstname(firstname: String): Flow<User>`: Retrieve a stream of data.
-The `Flow` is created eagerly while data is fetched upon `Flow` interaction (`Flow.collect(…)`).
-* `fun getUser(): User`: Retrieve data once *blocking the thread* and without context propagation.
-This should be avoided.
-
-NOTE: Coroutines repositories are only discovered when the repository extends the `CoroutineCrudRepository` interface.
diff --git a/src/main/asciidoc/kotlin-extensions.adoc b/src/main/asciidoc/kotlin-extensions.adoc
deleted file mode 100644
index 8a6f8634fe..0000000000
--- a/src/main/asciidoc/kotlin-extensions.adoc
+++ /dev/null
@@ -1,13 +0,0 @@
-[[kotlin.extensions]]
-= Extensions
-
-Kotlin https://kotlinlang.org/docs/reference/extensions.html[extensions] provide the ability to extend existing classes with additional functionality. Spring Data Kotlin APIs use these extensions to add new Kotlin-specific conveniences to existing Spring APIs.
-
-[NOTE]
-====
-Keep in mind that Kotlin extensions need to be imported to be used.
-Similar to static imports, an IDE should automatically suggest the import in most cases.
-====
-
-For example, https://kotlinlang.org/docs/reference/inline-functions.html#reified-type-parameters[Kotlin reified type parameters] provide a workaround for JVM https://docs.oracle.com/javase/tutorial/java/generics/erasure.html[generics type erasure], and Spring Data provides some extensions to take advantage of this feature.
-This allows for a better Kotlin API.
diff --git a/src/main/asciidoc/kotlin.adoc b/src/main/asciidoc/kotlin.adoc
deleted file mode 100644
index 504fd49efe..0000000000
--- a/src/main/asciidoc/kotlin.adoc
+++ /dev/null
@@ -1,43 +0,0 @@
-[[kotlin]]
-= Kotlin Support
-
-https://kotlinlang.org[Kotlin] is a statically typed language that targets the JVM (and other platforms) which allows writing concise and elegant code while providing excellent https://kotlinlang.org/docs/reference/java-interop.html[interoperability] with existing libraries written in Java.
-
-Spring Data provides first-class support for Kotlin and lets developers write Kotlin applications almost as if Spring Data was a Kotlin native framework.
-
-The easiest way to build a Spring application with Kotlin is to leverage Spring Boot and its https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-kotlin.html[dedicated Kotlin support].
-This comprehensive https://spring.io/guides/tutorials/spring-boot-kotlin/[tutorial] will teach you how to build Spring Boot applications with Kotlin using https://start.spring.io/#!language=kotlin&type=gradle-project[start.spring.io].
-
-[[kotlin.requirements]]
-== Requirements
-
-Spring Data supports Kotlin 1.3 and requires `kotlin-stdlib` (or one of its variants, such as `kotlin-stdlib-jdk8`) and `kotlin-reflect` to be present on the classpath.
-Those are provided by default if you bootstrap a Kotlin project via https://start.spring.io/#!language=kotlin&type=gradle-project[start.spring.io].
-
-[[kotlin.null-safety]]
-== Null Safety
-
-One of Kotlin's key features is https://kotlinlang.org/docs/null-safety.html[null safety], which cleanly deals with `null` values at compile time.
-This makes applications safer through nullability declarations and the expression of "`value or no value`" semantics without paying the cost of wrappers, such as `Optional`.
-(Kotlin allows using functional constructs with nullable values. See this https://www.baeldung.com/kotlin/null-safety[comprehensive guide to Kotlin null safety].)
-
-Although Java does not let you express null safety in its type system, the Spring Data API is annotated with https://jcp.org/en/jsr/detail?id=305[JSR-305] tooling-friendly annotations declared in the `org.springframework.lang` package.
-By default, types from Java APIs used in Kotlin are recognized as https://kotlinlang.org/docs/reference/java-interop.html#null-safety-and-platform-types[platform types], for which null checks are relaxed.
-https://kotlinlang.org/docs/reference/java-interop.html#jsr-305-support[Kotlin support for JSR-305 annotations] and Spring nullability annotations provide null safety for the whole Spring Data API to Kotlin developers, with the advantage of dealing with `null` related issues at compile time.
-
-See <<repositories.nullability>> for how null safety applies to Spring Data repositories.
-
-[TIP]
-====
-You can configure JSR-305 checks by adding the `-Xjsr305` compiler flag with the following options: `-Xjsr305={strict|warn|ignore}`.
-
-For Kotlin versions 1.1+, the default behavior is the same as `-Xjsr305=warn`.
-The `strict` value is required to take Spring Data API null-safety into account in Kotlin types inferred from the Spring API, but it should be used with the knowledge that Spring API nullability declarations could evolve, even between minor releases, and that more checks may be added in the future.
-====
-
-NOTE: Nullability of generic type arguments, varargs, and array elements is not supported yet, but should be in an upcoming release.
-
-[[kotlin.mapping]]
-== Object Mapping
-
-See <<mapping.kotlin>> for details on how Kotlin objects are materialized.
diff --git a/src/main/asciidoc/object-mapping.adoc b/src/main/asciidoc/object-mapping.adoc
deleted file mode 100644
index 804212fac6..0000000000
--- a/src/main/asciidoc/object-mapping.adoc
+++ /dev/null
@@ -1,462 +0,0 @@
-[[mapping.fundamentals]]
-= Object Mapping Fundamentals
-
-This section covers the fundamentals of Spring Data object mapping, object creation, field and property access, mutability and immutability.
-Note that this section only applies to Spring Data modules that do not use the object mapping of the underlying data store (like JPA).
-Also be sure to consult the store-specific sections for store-specific object mapping, like indexes, customizing column or field names or the like.
-
-The core responsibility of Spring Data object mapping is to create instances of domain objects and map the store-native data structures onto those.
-This means we need two fundamental steps:
-
-1. Instance creation by using one of the constructors exposed.
-2. Instance population to materialize all exposed properties.
-
-[[mapping.object-creation]]
-== Object creation
-
-Spring Data automatically tries to detect a persistent entity's constructor to be used to materialize objects of that type.
-The resolution algorithm works as follows:
-
-1. If there is a single static factory method annotated with `@PersistenceCreator` then it is used.
-2. If there is a single constructor, it is used.
-3. If there are multiple constructors and exactly one is annotated with `@PersistenceCreator`, it is used.
-4. If the type is a Java `Record` the canonical constructor is used.
-5. If there's a no-argument constructor, it is used.
-Other constructors will be ignored.
-
-The value resolution assumes constructor/factory method argument names to match the property names of the entity, i.e. the resolution will be performed as if the property was to be populated, including all customizations in mapping (different datastore column or field name etc.).
-This also requires either parameter names information available in the class file or an `@ConstructorProperties` annotation being present on the constructor.
-
-The value resolution can be customized by using Spring Framework's `@Value` annotation with a store-specific SpEL expression.
-Please consult the section on store-specific mappings for further details.
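-
-As a sketch only (the available SpEL root object and expressions are store-specific; the `#root.…` style below mimics the MongoDB module and is an assumption, as are the property names):
-
-[source,java]
-----
-class Person {
-
-  private final String firstname;
-  private final String comment;
-
-  Person(@Value("#root.firstname") String firstname,          // taken from the raw store representation
-         @Value("#root.comment ?: 'none'") String comment) {  // SpEL Elvis operator supplies a default
-    this.firstname = firstname;
-    this.comment = comment;
-  }
-}
-----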
-
-[[mapping.object-creation.details]]
-.Object creation internals
-****
-
-To avoid the overhead of reflection, Spring Data object creation uses a factory class generated at runtime by default, which calls the domain class's constructor directly.
-That is, for this example type:
-
-[source,java]
-----
-class Person {
- Person(String firstname, String lastname) { … }
-}
-----
-
-we will create a factory class semantically equivalent to this one at runtime:
-
-[source, java]
-----
-class PersonObjectInstantiator implements ObjectInstantiator {
-
- Object newInstance(Object... args) {
- return new Person((String) args[0], (String) args[1]);
- }
-}
-----
-
-This gives us a roughly 10% performance boost over reflection.
-For the domain class to be eligible for such optimization, it needs to adhere to a set of constraints:
-
-- it must not be a private class
-- it must not be a non-static inner class
-- it must not be a CGLib proxy class
-- the constructor to be used by Spring Data must not be private
-
-If any of these criteria match, Spring Data will fall back to entity instantiation via reflection.
-****
-
-[[mapping.property-population]]
-== Property population
-
-Once an instance of the entity has been created, Spring Data populates all remaining persistent properties of that class.
-Unless already populated by the entity's constructor (i.e. consumed through its constructor argument list), the identifier property will be populated first to allow the resolution of cyclic object references.
-After that, all non-transient properties that have not already been populated by the constructor are set on the entity instance.
-For that we use the following algorithm:
-
-1. If the property is immutable but exposes a `with…` method (see below), we use the `with…` method to create a new entity instance with the new property value.
-2. If property access (i.e. access through getters and setters) is defined, we're invoking the setter method.
-3. If the property is mutable we set the field directly.
-4. If the property is immutable we're using the constructor to be used by persistence operations (see <<mapping.object-creation>>) to create a copy of the instance.
-5. By default, we set the field value directly.
-
-[[mapping.property-population.details]]
-.Property population internals
-****
-Similarly to our <<mapping.object-creation.details,optimizations in object construction>>, we also use Spring Data runtime-generated accessor classes to interact with the entity instance.
-
-[source,java]
-----
-class Person {
-
- private final Long id;
- private String firstname;
- private @AccessType(Type.PROPERTY) String lastname;
-
- Person() {
- this.id = null;
- }
-
- Person(Long id, String firstname, String lastname) {
- // Field assignments
- }
-
- Person withId(Long id) {
- return new Person(id, this.firstname, this.lastname);
- }
-
- void setLastname(String lastname) {
- this.lastname = lastname;
- }
-}
-----
-
-.A generated Property Accessor
-====
-[source, java]
-----
-class PersonPropertyAccessor implements PersistentPropertyAccessor {
-
- private static final MethodHandle firstname; <2>
-
- private Person person; <1>
-
- public void setProperty(PersistentProperty property, Object value) {
-
- String name = property.getName();
-
- if ("firstname".equals(name)) {
- firstname.invoke(person, (String) value); <2>
- } else if ("id".equals(name)) {
- this.person = person.withId((Long) value); <3>
- } else if ("lastname".equals(name)) {
- this.person.setLastname((String) value); <4>
- }
- }
-}
-----
-<1> PropertyAccessors hold a mutable instance of the underlying object. This is to enable mutations of otherwise immutable properties.
-<2> By default, Spring Data uses field-access to read and write property values. As per visibility rules of `private` fields, `MethodHandles` are used to interact with fields.
-<3> The class exposes a `withId(…)` method that's used to set the identifier, e.g. when an instance is inserted into the datastore and an identifier has been generated. Calling `withId(…)` creates a new `Person` object. All subsequent mutations will take place in the new instance leaving the previous untouched.
-<4> Using property-access allows direct method invocations without using `MethodHandles`.
-====
-
-This gives us a roughly 25% performance boost over reflection.
-For the domain class to be eligible for such optimization, it needs to adhere to a set of constraints:
-
-- Types must not reside in the default or under the `java` package.
-- Types and their constructors must be `public`.
-- Types that are inner classes must be `static`.
-- The used Java Runtime must allow for declaring classes in the originating `ClassLoader`. Java 9 and newer impose certain limitations.
-
-By default, Spring Data attempts to use generated property accessors and falls back to reflection-based ones if a limitation is detected.
-****
-
-Let's have a look at the following entity:
-
-.A sample entity
-====
-[source, java]
-----
-class Person {
-
- private final @Id Long id; <1>
- private final String firstname, lastname; <2>
- private final LocalDate birthday;
- private final int age; <3>
-
- private String comment; <4>
- private @AccessType(Type.PROPERTY) String remarks; <5>
-
- static Person of(String firstname, String lastname, LocalDate birthday) { <6>
-
- return new Person(null, firstname, lastname, birthday,
- Period.between(birthday, LocalDate.now()).getYears());
- }
-
- Person(Long id, String firstname, String lastname, LocalDate birthday, int age) { <6>
-
- this.id = id;
- this.firstname = firstname;
- this.lastname = lastname;
- this.birthday = birthday;
- this.age = age;
- }
-
- Person withId(Long id) { <1>
- return new Person(id, this.firstname, this.lastname, this.birthday, this.age);
- }
-
- void setRemarks(String remarks) { <5>
- this.remarks = remarks;
- }
-}
-----
-====
-<1> The identifier property is final but set to `null` in the constructor.
-The class exposes a `withId(…)` method that's used to set the identifier, e.g. when an instance is inserted into the datastore and an identifier has been generated.
-The original `Person` instance stays unchanged as a new one is created.
-The same pattern is usually applied for other properties that are store managed but might have to be changed for persistence operations.
-The wither method is optional as the persistence constructor (see 6) is effectively a copy constructor and setting the property will be translated into creating a fresh instance with the new identifier value applied.
-<2> The `firstname` and `lastname` properties are ordinary immutable properties potentially exposed through getters.
-<3> The `age` property is an immutable but derived one from the `birthday` property.
-With the design shown, the database value will trump the defaulting as Spring Data uses the only declared constructor.
-Even if the intent is that the calculation should be preferred, it's important that this constructor also takes `age` as parameter (to potentially ignore it) as otherwise the property population step will attempt to set the age field and fail due to it being immutable and no `with…` method being present.
-<4> The `comment` property is mutable and is populated by setting its field directly.
-<5> The `remarks` property is mutable and is populated by invoking the setter method.
-<6> The class exposes a factory method and a constructor for object creation.
-The core idea here is to use factory methods instead of additional constructors to avoid the need for constructor disambiguation through `@PersistenceCreator`.
-Instead, defaulting of properties is handled within the factory method.
-If you want Spring Data to use the factory method for object instantiation, annotate it with `@PersistenceCreator`.
-
-[[mapping.general-recommendations]]
-== General recommendations
-
-* _Try to stick to immutable objects_ -- Immutable objects are straightforward to create as materializing an object is then a matter of calling its constructor only.
-Also, this prevents your domain objects from being littered with setter methods that allow client code to manipulate the object's state.
-If you need those, prefer to make them package protected so that they can only be invoked by a limited number of co-located types.
-Constructor-only materialization is up to 30% faster than properties population.
-* _Provide an all-args constructor_ -- Even if you cannot or don't want to model your entities as immutable values, there's still value in providing a constructor that takes all properties of the entity as arguments, including the mutable ones, as this allows the object mapping to skip the property population for optimal performance.
-* _Use factory methods instead of overloaded constructors to avoid ``@PersistenceCreator``_ -- With an all-argument constructor needed for optimal performance, we usually want to expose more application use case specific constructors that omit things like auto-generated identifiers etc.
-It's an established pattern to rather use static factory methods to expose these variants of the all-args constructor.
-* _Make sure you adhere to the constraints that allow the generated instantiator and property accessor classes to be used_ --
-* _For identifiers to be generated, still use a final field in combination with an all-arguments persistence constructor (preferred) or a `with…` method_ --
-* _Use Lombok to avoid boilerplate code_ -- As persistence operations usually require a constructor taking all arguments, their declaration becomes a tedious repetition of boilerplate parameter-to-field assignments that can best be avoided by using Lombok's `@AllArgsConstructor`, as sketched below.
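-
-A minimal sketch of that last recommendation, assuming Lombok is on the classpath and reusing the `Person` example from above:
-
-[source,java]
-----
-// imports assumed: lombok.AccessLevel, lombok.AllArgsConstructor,
-// org.springframework.data.annotation.Id
-@AllArgsConstructor(access = AccessLevel.PACKAGE) // generates the all-args persistence constructor
-class Person {
-
-  private final @Id Long id;
-  private final String firstname, lastname;
-}
-----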
-
-[[mapping.general-recommendations.override.properties]]
-=== Overriding Properties
-
-Java allows a flexible design of domain classes where a subclass could define a property that is already declared with the same name in its superclass.
-Consider the following example:
-
-====
-[source,java]
-----
-public class SuperType {
-
- private CharSequence field;
-
- public SuperType(CharSequence field) {
- this.field = field;
- }
-
- public CharSequence getField() {
- return this.field;
- }
-
- public void setField(CharSequence field) {
- this.field = field;
- }
-}
-
-public class SubType extends SuperType {
-
- private String field;
-
- public SubType(String field) {
- super(field);
- this.field = field;
- }
-
- @Override
- public String getField() {
- return this.field;
- }
-
- public void setField(String field) {
- this.field = field;
-
- // optional
- super.setField(field);
- }
-}
-----
-====
-
-Both classes define a `field` using assignable types. `SubType` however shadows `SuperType.field`.
-Depending on the class design, using the constructor could be the only default approach to set `SuperType.field`.
-Alternatively, calling `super.setField(…)` in the setter could set the `field` in `SuperType`.
-All these mechanisms create conflicts to some degree because the properties share the same name yet might represent two distinct values.
-Spring Data skips super-type properties if types are not assignable.
-That is, the type of the overridden property must be assignable to its super-type property type to be registered as override, otherwise the super-type property is considered transient.
-We generally recommend using distinct property names.
-
-Spring Data modules generally support overridden properties holding different values.
-From a programming model perspective there are a few things to consider:
-
-1. Which property should be persisted (default to all declared properties)?
-You can exclude properties by annotating these with `@Transient`.
-2. How to represent properties in your data store?
-Using the same field/column name for different values typically leads to corrupt data so you should annotate at least one of the properties using an explicit field/column name.
-3. `@AccessType(PROPERTY)` cannot be used, as the super-property cannot generally be set without making further assumptions about the setter implementation.
-
-[[mapping.kotlin]]
-== Kotlin support
-
-Spring Data adapts specifics of Kotlin to allow object creation and mutation.
-
-[[mapping.kotlin.creation]]
-=== Kotlin object creation
-
-Instantiation of Kotlin classes is supported; all classes are immutable by default and require explicit property declarations to define mutable properties.
-
-Spring Data automatically tries to detect a persistent entity's constructor to be used to materialize objects of that type.
-The resolution algorithm works as follows:
-
-1. If there is a constructor that is annotated with `@PersistenceCreator`, it is used.
-2. If the type is a Kotlin `data` class, the primary constructor is used.
-3. If there is a single static factory method annotated with `@PersistenceCreator` then it is used.
-4. If there is a single constructor, it is used.
-5. If there are multiple constructors and exactly one is annotated with `@PersistenceCreator`, it is used.
-6. If the type is a Java `Record` the canonical constructor is used.
-7. If there's a no-argument constructor, it is used.
-Other constructors will be ignored.
-
-Consider the following `data` class `Person`:
-
-====
-[source,kotlin]
-----
-data class Person(val id: String, val name: String)
-----
-====
-
-The class above compiles to a typical class with an explicit constructor. We can customize this class by adding another constructor and annotate it with `@PersistenceCreator` to indicate a constructor preference:
-
-====
-[source,kotlin]
-----
-data class Person(var id: String, val name: String) {
-
- @PersistenceCreator
- constructor(id: String) : this(id, "unknown")
-}
-----
-====
-
-Kotlin supports parameter optionality by allowing default values to be used if a parameter is not provided.
-When Spring Data detects a constructor with parameter defaulting, then it leaves these parameters absent if the data store does not provide a value (or simply returns `null`) so Kotlin can apply parameter defaulting. Consider the following class that applies parameter defaulting for `name`:
-
-====
-[source,kotlin]
-----
-data class Person(var id: String, val name: String = "unknown")
-----
-====
-
-Every time the `name` parameter is either not part of the result or its value is `null`, then the `name` defaults to `unknown`.
-
-=== Property population of Kotlin data classes
-
-In Kotlin, all classes are immutable by default and require explicit property declarations to define mutable properties.
-Consider the following `data` class `Person`:
-
-====
-[source,kotlin]
-----
-data class Person(val id: String, val name: String)
-----
-====
-
-This class is effectively immutable.
-It allows creating new instances as Kotlin generates a `copy(…)` method that creates new object instances copying all property values from the existing object and applying property values provided as arguments to the method.
-
-[[mapping.kotlin.override.properties]]
-=== Kotlin Overriding Properties
-
-Kotlin allows declaring https://kotlinlang.org/docs/inheritance.html#overriding-properties[property overrides] to alter properties in subclasses.
-
-====
-[source,kotlin]
-----
-open class SuperType(open var field: Int)
-
-class SubType(override var field: Int = 1) :
- SuperType(field) {
-}
-----
-====
-
-Such an arrangement renders two properties with the name `field`.
-Kotlin generates property accessors (getters and setters) for each property in each class.
-Effectively, the code looks as follows:
-
-====
-[source,java]
-----
-public class SuperType {
-
- private int field;
-
- public SuperType(int field) {
- this.field = field;
- }
-
- public int getField() {
- return this.field;
- }
-
- public void setField(int field) {
- this.field = field;
- }
-}
-
-public final class SubType extends SuperType {
-
- private int field;
-
- public SubType(int field) {
- super(field);
- this.field = field;
- }
-
- public int getField() {
- return this.field;
- }
-
- public void setField(int field) {
- this.field = field;
- }
-}
-----
-====
-
-Getters and setters on `SubType` set only `SubType.field` and not `SuperType.field`.
-In such an arrangement, using the constructor is the only default approach to set `SuperType.field`.
-Adding a method to `SubType` to set `SuperType.field` via `this.SuperType.field = …` is possible but falls outside of supported conventions.
-Property overrides create conflicts to some degree because the properties share the same name yet might represent two distinct values.
-We generally recommend using distinct property names.
-
-Spring Data modules generally support overridden properties holding different values.
-From a programming model perspective there are a few things to consider:
-
-1. Which property should be persisted (default to all declared properties)?
-You can exclude properties by annotating these with `@Transient`.
-2. How to represent properties in your data store?
-Using the same field/column name for different values typically leads to corrupt data so you should annotate at least one of the properties using an explicit field/column name.
-3. `@AccessType(PROPERTY)` cannot be used, as the super-property cannot be set.
-
-[[mapping.kotlin.value.classes]]
-=== Kotlin Value Classes
-
-Kotlin Value Classes are designed for a more expressive domain model to make underlying concepts explicit.
-Spring Data can read and write types that define properties using Value Classes.
-
-Consider the following domain model:
-
-====
-[source,kotlin]
-----
-@JvmInline
-value class EmailAddress(val theAddress: String) <1>
-
-data class Contact(val id: String, val name: String, val emailAddress: EmailAddress) <2>
-----
-
-<1> A simple value class with a non-nullable value type.
-<2> Data class defining a property using the `EmailAddress` value class.
-====
-
-NOTE: Non-nullable properties using non-primitive value types are flattened in the compiled class to the value type.
-Nullable primitive value types or nullable value-in-value types are represented with their wrapper type and that affects how value types are represented in the database.
diff --git a/src/main/asciidoc/preface.adoc b/src/main/asciidoc/preface.adoc
deleted file mode 100644
index 486d5aa882..0000000000
--- a/src/main/asciidoc/preface.adoc
+++ /dev/null
@@ -1,12 +0,0 @@
-[[preface]]
-= Preface
-The Spring Data Commons project applies core Spring concepts to the development of solutions using many relational and non-relational data stores.
-
-[[project]]
-== Project Metadata
-
-* Version control: https://github.com/spring-projects/spring-data-commons
-* Bugtracker: https://github.com/spring-projects/spring-data-commons/issues
-* Release repository: https://repo1.maven.org/maven2/
-* Milestone repository: https://repo.spring.io/milestone/
-* Snapshot repository: https://repo.spring.io/snapshot/
diff --git a/src/main/asciidoc/query-by-example.adoc b/src/main/asciidoc/query-by-example.adoc
deleted file mode 100644
index 192055cc66..0000000000
--- a/src/main/asciidoc/query-by-example.adoc
+++ /dev/null
@@ -1,218 +0,0 @@
-[[query-by-example]]
-= Query by Example
-
-[[query-by-example.introduction]]
-== Introduction
-
-This chapter provides an introduction to Query by Example and explains how to use it.
-
-Query by Example (QBE) is a user-friendly querying technique with a simple interface.
-It allows dynamic query creation and does not require you to write queries that contain field names.
-In fact, Query by Example does not require you to write queries by using store-specific query languages at all.
-
-[[query-by-example.usage]]
-== Usage
-
-The Query by Example API consists of four parts:
-
-* Probe: The actual example of a domain object with populated fields.
-* `ExampleMatcher`: The `ExampleMatcher` carries details on how to match particular fields.
-It can be reused across multiple Examples.
-* `Example`: An `Example` consists of the probe and the `ExampleMatcher`.
-It is used to create the query.
-* `FetchableFluentQuery`: A `FetchableFluentQuery` offers a fluent API that allows further customization of a query derived from an `Example`.
- Using the fluent API lets you specify ordering, projection, and result processing for your query.
-
-Query by Example is well suited for several use cases:
-
-* Querying your data store with a set of static or dynamic constraints.
-* Frequent refactoring of the domain objects without worrying about breaking existing queries.
-* Working independently from the underlying data store API.
-
-Query by Example also has several limitations:
-
-* No support for nested or grouped property constraints, such as `firstname = ?0 or (firstname = ?1 and lastname = ?2)`.
-* Only supports starts/contains/ends/regex matching for strings and exact matching for other property types.
-
-Before getting started with Query by Example, you need to have a domain object.
-To get started, create a class for it, as shown in the following example:
-
-.Sample Person object
-====
-[source,java]
-----
-public class Person {
-
- @Id
- private String id;
- private String firstname;
- private String lastname;
- private Address address;
-
- // … getters and setters omitted
-}
-----
-====
-
-The preceding example shows a simple domain object.
-You can use it to create an `Example`.
-By default, fields having `null` values are ignored, and strings are matched by using the store specific defaults.
-
-NOTE: Inclusion of properties into Query by Example criteria is based on nullability.
-Properties using primitive types (`int`, `double`, …) are always included unless <<query-by-example.matchers,the `ExampleMatcher` ignores the property path>>.
-
-Examples can be built by either using the `of` factory method or by using <<query-by-example.matchers,`ExampleMatcher`>>. `Example` is immutable.
-The following listing shows a simple Example:
-
-.Simple Example
-====
-[source,java]
-----
-Person person = new Person(); <1>
-person.setFirstname("Dave"); <2>
-
-Example<Person> example = Example.of(person); <3>
-----
-
-<1> Create a new instance of the domain object.
-<2> Set the properties to query.
-<3> Create the `Example`.
-====
-
-You can run the example queries by using repositories.
-To do so, let your repository interface extend `QueryByExampleExecutor`.
-The following listing shows an excerpt from the `QueryByExampleExecutor` interface:
-
-.The `QueryByExampleExecutor`
-====
-[source,java]
-----
-public interface QueryByExampleExecutor<T> {
-
- <S extends T> S findOne(Example<S> example);
-
- <S extends T> Iterable<S> findAll(Example<S> example);
-
- // … more functionality omitted.
-}
-----
-====
-
-[[query-by-example.matchers]]
-== Example Matchers
-
-Examples are not limited to default settings.
-You can specify your own defaults for string matching, null handling, and property-specific settings by using the `ExampleMatcher`, as shown in the following example:
-
-.Example matcher with customized matching
-====
-[source,java]
-----
-Person person = new Person(); <1>
-person.setFirstname("Dave"); <2>
-
-ExampleMatcher matcher = ExampleMatcher.matching() <3>
- .withIgnorePaths("lastname") <4>
- .withIncludeNullValues() <5>
- .withStringMatcher(StringMatcher.ENDING); <6>
-
-Example<Person> example = Example.of(person, matcher); <7>
-
-----
-
-<1> Create a new instance of the domain object.
-<2> Set properties.
-<3> Create an `ExampleMatcher` to expect all values to match.
-It is usable at this stage even without further configuration.
-<4> Construct a new `ExampleMatcher` to ignore the `lastname` property path.
-<5> Construct a new `ExampleMatcher` to ignore the `lastname` property path and to include null values.
-<6> Construct a new `ExampleMatcher` to ignore the `lastname` property path, to include null values, and to perform suffix string matching.
-<7> Create a new `Example` based on the domain object and the configured `ExampleMatcher`.
-====
-
-By default, the `ExampleMatcher` expects all values set on the probe to match.
-If you want to get results matching any of the predicates defined implicitly, use `ExampleMatcher.matchingAny()`.
-
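-The following sketch contrasts the two modes; the `Person` probe is the illustrative type used above:
-
-[source,java]
-----
-Person probe = new Person();
-probe.setFirstname("Dave");
-probe.setLastname("Matthews");
-
-// all predicates must match: firstname = 'Dave' AND lastname = 'Matthews'
-Example<Person> all = Example.of(probe, ExampleMatcher.matching());
-
-// any predicate may match: firstname = 'Dave' OR lastname = 'Matthews'
-Example<Person> any = Example.of(probe, ExampleMatcher.matchingAny());
-----
-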
-You can specify behavior for individual properties (such as "firstname" and "lastname" or, for nested properties, "address.city").
-You can tune it with matching options and case sensitivity, as shown in the following example:
-
-.Configuring matcher options
-====
-[source,java]
-----
-ExampleMatcher matcher = ExampleMatcher.matching()
- .withMatcher("firstname", endsWith())
- .withMatcher("lastname", startsWith().ignoreCase());
-----
-====
-
-Another way to configure matcher options is to use lambdas (introduced in Java 8).
-This approach creates a callback that asks the implementor to modify the matcher.
-You need not return the matcher, because configuration options are held within the matcher instance.
-The following example shows a matcher that uses lambdas:
-
-.Configuring matcher options with lambdas
-====
-[source,java]
-----
-ExampleMatcher matcher = ExampleMatcher.matching()
- .withMatcher("firstname", match -> match.endsWith())
- .withMatcher("firstname", match -> match.startsWith());
-}
-----
-====
-
-Queries created by `Example` use a merged view of the configuration.
-Default matching settings can be set at the `ExampleMatcher` level, while individual settings can be applied to particular property paths.
-Settings that are set on `ExampleMatcher` are inherited by property path settings unless they are defined explicitly.
-Settings on a property path have higher precedence than default settings.
-The following table describes the scope of the various `ExampleMatcher` settings:
-
-[cols="1,2",options="header"]
-.Scope of `ExampleMatcher` settings
-|===
-| Setting
-| Scope
-
-| Null-handling
-| `ExampleMatcher`
-
-| String matching
-| `ExampleMatcher` and property path
-
-| Ignoring properties
-| Property path
-
-| Case sensitivity
-| `ExampleMatcher` and property path
-
-| Value transformation
-| Property path
-
-|===
-
-[[query-by-example.fluent]]
-== Fluent API
-
-`QueryByExampleExecutor` offers one more method, which we have not mentioned so far: `<S extends T, R> R findBy(Example<S> example, Function<FetchableFluentQuery<S>, R> queryFunction)`.
-As with other methods, it executes a query derived from an `Example`.
-However, with the second argument, you can control aspects of that execution that you cannot dynamically control otherwise.
-You do so by invoking the various methods of the `FetchableFluentQuery` in the second argument.
-`sortBy` lets you specify an ordering for your result.
-`as` lets you specify the type to which you want the result to be transformed.
-`project` limits the queried attributes.
-`first`, `firstValue`, `one`, `oneValue`, `all`, `page`, `stream`, `count`, and `exists` define what kind of result you get and how the query behaves when more than the expected number of results are available.
-
-
-.Use the fluent API to get the last of potentially many results, ordered by lastname.
-====
-[source,java]
-----
-Optional<Person> match = repository.findBy(example,
- q -> q
- .sortBy(Sort.by("lastname").descending())
- .first()
-);
-----
-====
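-
-As a further sketch, the same API can project and paginate the result; the `PersonName` interface projection below is an assumption made for illustration:
-
-[source,java]
-----
-interface PersonName {
-  String getFirstname();
-  String getLastname();
-}
-
-Page<PersonName> page = repository.findBy(example,
-    q -> q.as(PersonName.class)
-          .sortBy(Sort.by("lastname").ascending())
-          .page(PageRequest.of(0, 10))
-);
-----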
diff --git a/src/main/asciidoc/repositories-null-handling.adoc b/src/main/asciidoc/repositories-null-handling.adoc
deleted file mode 100644
index c56ff6dfbb..0000000000
--- a/src/main/asciidoc/repositories-null-handling.adoc
+++ /dev/null
@@ -1,97 +0,0 @@
-[[repositories.nullability]]
-=== Null Handling of Repository Methods
-
-As of Spring Data 2.0, repository CRUD methods that return an individual aggregate instance use Java 8's `Optional` to indicate the potential absence of a value.
-Besides that, Spring Data supports returning the following wrapper types on query methods:
-
-* `com.google.common.base.Optional`
-* `scala.Option`
-* `io.vavr.control.Option`
-
-Alternatively, query methods can choose not to use a wrapper type at all.
-The absence of a query result is then indicated by returning `null`.
-Repository methods returning collections, collection alternatives, wrappers, and streams are guaranteed never to return `null` but rather the corresponding empty representation.
-See "`<>`" for details.
-
-[[repositories.nullability.annotations]]
-==== Nullability Annotations
-
-You can express nullability constraints for repository methods by using {spring-framework-docs}/core.html#null-safety[Spring Framework's nullability annotations].
-They provide a tooling-friendly approach and opt-in `null` checks at runtime, as follows:
-
-* {spring-framework-javadoc}/org/springframework/lang/NonNullApi.html[`@NonNullApi`]: Used on the package level to declare that the default behavior for parameters and return values is, respectively, neither to accept nor to produce `null` values.
-* {spring-framework-javadoc}/org/springframework/lang/NonNull.html[`@NonNull`]: Used on a parameter or return value that must not be `null` (not needed on a parameter and return value where `@NonNullApi` applies).
-* {spring-framework-javadoc}/org/springframework/lang/Nullable.html[`@Nullable`]: Used on a parameter or return value that can be `null`.
-
-Spring annotations are meta-annotated with https://jcp.org/en/jsr/detail?id=305[JSR 305] annotations (a dormant but widely used JSR).
-JSR 305 meta-annotations let tooling vendors (such as https://www.jetbrains.com/help/idea/nullable-and-notnull-annotations.html[IDEA], https://help.eclipse.org/latest/index.jsp?topic=/org.eclipse.jdt.doc.user/tasks/task-using_external_null_annotations.htm[Eclipse], and link:https://kotlinlang.org/docs/reference/java-interop.html#null-safety-and-platform-types[Kotlin]) provide null-safety support in a generic way, without having to hard-code support for Spring annotations.
-To enable runtime checking of nullability constraints for query methods, you need to activate non-nullability on the package level by using Spring’s `@NonNullApi` in `package-info.java`, as shown in the following example:
-
-.Declaring Non-nullability in `package-info.java`
-====
-[source,java]
-----
-@org.springframework.lang.NonNullApi
-package com.acme;
-----
-====
-
-Once non-null defaulting is in place, repository query method invocations get validated at runtime for nullability constraints.
-If a query result violates the defined constraint, an exception is thrown.
-This happens when the method would return `null` but is declared as non-nullable (the default with the annotation defined on the package in which the repository resides).
-If you want to opt-in to nullable results again, selectively use `@Nullable` on individual methods.
-Using the result wrapper types mentioned at the start of this section continues to work as expected: an empty result is translated into the value that represents absence.
-
-The following example shows a number of the techniques just described:
-
-.Using different nullability constraints
-====
-[source,java]
-----
-package com.acme; <1>
-
-import org.springframework.lang.Nullable;
-
-interface UserRepository extends Repository<User, Long> {
-
- User getByEmailAddress(EmailAddress emailAddress); <2>
-
- @Nullable
-  User findByEmailAddress(@Nullable EmailAddress emailAddress); <3>
-
-  Optional<User> findOptionalByEmailAddress(EmailAddress emailAddress); <4>
-}
-----
-<1> The repository resides in a package (or sub-package) for which we have defined non-null behavior.
-<2> Throws an `EmptyResultDataAccessException` when the query does not produce a result.
-Throws an `IllegalArgumentException` when the `emailAddress` handed to the method is `null`.
-<3> Returns `null` when the query does not produce a result.
-Also accepts `null` as the value for `emailAddress`.
-<4> Returns `Optional.empty()` when the query does not produce a result.
-Throws an `IllegalArgumentException` when the `emailAddress` handed to the method is `null`.
-====
-
-[[repositories.nullability.kotlin]]
-==== Nullability in Kotlin-based Repositories
-
-Kotlin has the definition of https://kotlinlang.org/docs/reference/null-safety.html[nullability constraints] baked into the language.
-Kotlin code compiles to bytecode, which does not express nullability constraints through method signatures but rather through compiled-in metadata.
-Make sure to include the `kotlin-reflect` JAR in your project to enable introspection of Kotlin's nullability constraints.
-Spring Data repositories use the language mechanism to define those constraints to apply the same runtime checks, as follows:
-
-.Using nullability constraints on Kotlin repositories
-====
-[source,kotlin]
-----
-interface UserRepository : Repository<User, String> {
-
- fun findByUsername(username: String): User <1>
-
- fun findByFirstname(firstname: String?): User? <2>
-}
-----
-<1> The method defines both the parameter and the result as non-nullable (the Kotlin default).
-The Kotlin compiler rejects method invocations that pass `null` to the method.
-If the query yields an empty result, an `EmptyResultDataAccessException` is thrown.
-<2> This method accepts `null` for the `firstname` parameter and returns `null` if the query does not produce a result.
-====
diff --git a/src/main/asciidoc/repositories-paging-sorting.adoc b/src/main/asciidoc/repositories-paging-sorting.adoc
deleted file mode 100644
index 85e5172abe..0000000000
--- a/src/main/asciidoc/repositories-paging-sorting.adoc
+++ /dev/null
@@ -1,232 +0,0 @@
-[[repositories.special-parameters]]
-=== Paging, Iterating Large Results, Sorting & Limiting
-
-To handle parameters in your query, define method parameters as already seen in the preceding examples.
-Besides that, the infrastructure recognizes certain specific types like `Pageable`, `Sort` and `Limit`, to apply pagination, sorting and limiting to your queries dynamically.
-The following example demonstrates these features:
-
-ifdef::feature-scroll[]
-.Using `Pageable`, `Slice`, `ScrollPosition`, `Sort` and `Limit` in query methods
-====
-[source,java]
-----
-Page<User> findByLastname(String lastname, Pageable pageable);
-
-Slice<User> findByLastname(String lastname, Pageable pageable);
-
-Window<User> findTop10ByLastname(String lastname, ScrollPosition position, Sort sort);
-
-List<User> findByLastname(String lastname, Sort sort);
-
-List<User> findByLastname(String lastname, Sort sort, Limit limit);
-
-List<User> findByLastname(String lastname, Pageable pageable);
-----
-====
-endif::[]
-
-ifndef::feature-scroll[]
-.Using `Pageable`, `Slice`, `Sort` and `Limit` in query methods
-====
-[source,java]
-----
-Page<User> findByLastname(String lastname, Pageable pageable);
-
-Slice<User> findByLastname(String lastname, Pageable pageable);
-
-List<User> findByLastname(String lastname, Sort sort);
-
-List<User> findByLastname(String lastname, Sort sort, Limit limit);
-
-List<User> findByLastname(String lastname, Pageable pageable);
-----
-====
-endif::[]
-
-IMPORTANT: APIs taking `Sort`, `Pageable` and `Limit` expect non-`null` values to be handed into methods.
-If you do not want to apply any sorting or pagination, use `Sort.unsorted()`, `Pageable.unpaged()` and `Limit.unlimited()`.
-
-The first method lets you pass an `org.springframework.data.domain.Pageable` instance to the query method to dynamically add paging to your statically defined query.
-A `Page` knows about the total number of elements and pages available.
-It does so by the infrastructure triggering a count query to calculate the overall number.
-As this might be expensive (depending on the store used), you can instead return a `Slice`.
-A `Slice` knows only about whether a next `Slice` is available, which might be sufficient when walking through a larger result set.
-
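-The following is a rough sketch of walking a larger result with `Slice`; the `User` type and repository method are the illustrative ones used throughout this chapter:
-
-[source,java]
-----
-Slice<User> slice = repository.findByLastname("Matthews",
-    PageRequest.of(0, 20, Sort.by("lastname")));
-
-while (slice.hasContent()) {
-
-  slice.forEach(user -> { /* process each user */ });
-
-  if (!slice.hasNext()) {
-    break;
-  }
-  slice = repository.findByLastname("Matthews", slice.nextPageable());
-}
-----
-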
-Sorting options are handled through the `Pageable` instance, too.
-If you need only sorting, add an `org.springframework.data.domain.Sort` parameter to your method.
-As you can see, returning a `List` is also possible.
-In this case, the additional metadata required to build the actual `Page` instance is not created (which, in turn, means that the additional count query that would have been necessary is not issued).
-Rather, it restricts the query to look up only the given range of entities.
-
-NOTE: To find out how many pages you get for an entire query, you have to trigger an additional count query.
-By default, this query is derived from the query you actually trigger.
-
-[IMPORTANT]
-====
-Special parameters may only be used once within a query method. +
-Some special parameters described above are mutually exclusive.
-Please consider the following list of invalid parameter combinations.
-
-|===
-| Parameters | Example | Reason
-
-| `Pageable` and `Sort`
-| `findBy...(Pageable page, Sort sort)`
-| `Pageable` already defines `Sort`
-
-| `Pageable` and `Limit`
-| `findBy...(Pageable page, Limit limit)`
-| `Pageable` already defines a limit.
-
-|===
-
-The `Top` keyword used to limit results can be used along with `Pageable`: `Top` defines the total maximum number of results, whereas the `Pageable` parameter may reduce this number further.
-====
-
-[[repositories.scrolling.guidance]]
-==== Which Method is Appropriate?
-
-The value provided by the Spring Data abstractions is perhaps best shown by the possible query method return types outlined in the following table.
-The table shows which types you can return from a query method:
-
-.Consuming Large Query Results
-[cols="1,2,2,3"]
-|===
-| Method|Amount of Data Fetched|Query Structure|Constraints
-
-| `List<T>`
-| All results.
-| Single query.
-| Query results can exhaust all memory. Fetching all data can be time-intensive.
-
-| `Streamable<T>`
-| All results.
-| Single query.
-| Query results can exhaust all memory. Fetching all data can be time-intensive.
-
-| `Stream<T>`
-| Chunked (one-by-one or in batches) depending on `Stream` consumption.
-| Single query, typically using cursors.
-| Streams must be closed after usage to avoid resource leaks.
-
-| `Flux`
-| Chunked (one-by-one or in batches) depending on `Flux` consumption.
-| Single query, typically using cursors.
-| Store module must provide reactive infrastructure.
-
-| `Slice`
-| `Pageable.getPageSize() + 1` at `Pageable.getOffset()`
-| One to many queries fetching data starting at `Pageable.getOffset()` applying limiting.
-a| A `Slice` can only navigate to the next `Slice`.
-
-* `Slice` provides details whether there is more data to fetch.
-* Offset-based queries become inefficient when the offset is too large because the database still has to materialize the full result.
-
-ifdef::feature-scroll[]
-| Offset-based `Window`
-| `limit + 1` at `OffsetScrollPosition.getOffset()`
-| One to many queries fetching data starting at `OffsetScrollPosition.getOffset()` applying limiting.
-a| A `Window` can only navigate to the next `Window`.
-
-* `Window` provides details whether there is more data to fetch.
-* Offset-based queries become inefficient when the offset is too large because the database still has to materialize the full result.
-endif::[]
-
-| `Page`
-| `Pageable.getPageSize()` at `Pageable.getOffset()`
-| One to many queries starting at `Pageable.getOffset()` applying limiting. Additionally, `COUNT(…)` query to determine the total number of elements can be required.
-a| Oftentimes, costly `COUNT(…)` queries are required.
-
-* Offset-based queries become inefficient when the offset is too large because the database still has to materialize the full result.
-
-ifdef::feature-scroll[]
-| Keyset-based `Window`
-| `limit + 1` using a rewritten `WHERE` condition
-| One to many queries fetching data starting at `KeysetScrollPosition.getKeys()` applying limiting.
-a| A `Window` can only navigate to the next `Window`.
-
-* `Window` provides details whether there is more data to fetch.
-* Keyset-based queries require a proper index structure for efficient querying.
-* Most data stores do not work well when Keyset-based query results contain `null` values.
-* Results must expose all sorting keys, which requires projections to select potentially more properties than needed for the actual projection.
-endif::[]
-
-|===
-
-[[repositories.paging-and-sorting]]
-==== Paging and Sorting
-
-You can define simple sorting expressions by using property names.
-You can concatenate expressions to collect multiple criteria into one expression.
-
-.Defining sort expressions
-====
-[source,java]
-----
-Sort sort = Sort.by("firstname").ascending()
- .and(Sort.by("lastname").descending());
-----
-====
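-
-Such a `Sort` is typically handed to a query method that declares a `Sort` parameter, as in the following sketch (the repository and method are illustrative):
-
-[source,java]
-----
-interface PersonRepository extends Repository<Person, Long> {
-  List<Person> findByLastname(String lastname, Sort sort);
-}
-
-// reuse the sort defined above
-List<Person> persons = repository.findByLastname("Matthews", sort);
-----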
-
-For a more type-safe way to define sort expressions, start with the type for which to define the sort expression and use method references to define the properties on which to sort.
-
-.Defining sort expressions by using the type-safe API
-====
-[source,java]
-----
-TypedSort<Person> person = Sort.sort(Person.class);
-
-Sort sort = person.by(Person::getFirstname).ascending()
- .and(person.by(Person::getLastname).descending());
-----
-====
-
-NOTE: `TypedSort.by(…)` makes use of runtime proxies by (typically) using CGlib, which may interfere with native image compilation when using tools such as Graal VM Native.
-
-If your store implementation supports Querydsl, you can also use the generated metamodel types to define sort expressions:
-
-.Defining sort expressions by using the Querydsl API
-====
-[source,java]
-----
-QSort sort = QSort.by(QPerson.firstname.asc())
- .and(QSort.by(QPerson.lastname.desc()));
-----
-====
-
-ifdef::feature-scroll[]
-include::repositories-scrolling.adoc[]
-endif::[]
-
-[[repositories.limit-query-result]]
-=== Limiting Query Results
-
-You can limit the results of query methods by using the `first` or `top` keywords, which you can use interchangeably.
-You can append an optional numeric value to `top` or `first` to specify the maximum result size to be returned.
-If the number is left out, a result size of 1 is assumed.
-The following example shows how to limit the query size:
-
-.Limiting the result size of a query with `Top` and `First`
-====
-[source,java]
-----
-User findFirstByOrderByLastnameAsc();
-
-User findTopByOrderByAgeDesc();
-
-Page<User> queryFirst10ByLastname(String lastname, Pageable pageable);
-
-Slice<User> findTop3ByLastname(String lastname, Pageable pageable);
-
-List<User> findFirst10ByLastname(String lastname, Sort sort);
-
-List<User> findTop10ByLastname(String lastname, Pageable pageable);
-----
-====
-
-The limiting expressions also support the `Distinct` keyword for datastores that support distinct queries.
-Also, for queries that limit the result set to one instance, wrapping the result into an `Optional` is supported.
-
-If pagination or slicing is applied to a limiting query, the pagination (and the calculation of the number of available pages) is applied within the limited result.
-
-NOTE: Limiting the results in combination with dynamic sorting by using a `Sort` parameter lets you express query methods for the 'K' smallest as well as for the 'K' biggest elements.
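-
-For example, the following sketch (using an assumed `User` repository) retrieves the three youngest and the three oldest matching users through the same limited query method:
-
-[source,java]
-----
-interface UserRepository extends Repository<User, Long> {
-  List<User> findFirst3ByLastname(String lastname, Sort sort);
-}
-
-List<User> youngest = repository.findFirst3ByLastname("Matthews", Sort.by("age").ascending());
-List<User> oldest = repository.findFirst3ByLastname("Matthews", Sort.by("age").descending());
-----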
diff --git a/src/main/asciidoc/repositories-scrolling.adoc b/src/main/asciidoc/repositories-scrolling.adoc
deleted file mode 100644
index a76ad905c3..0000000000
--- a/src/main/asciidoc/repositories-scrolling.adoc
+++ /dev/null
@@ -1,106 +0,0 @@
-[[repositories.scrolling]]
-==== Scrolling
-
-Scrolling is a more fine-grained approach to iterating through large query results in chunks.
-Scrolling consists of a stable sort, a scroll type (Offset- or Keyset-based scrolling) and result limiting.
-You can define simple sorting expressions by using property names and define static result limiting using the <> through query derivation.
-You can concatenate expressions to collect multiple criteria into one expression.
-
-Scroll queries return a `Window` that allows obtaining the scroll position to resume and obtain the next `Window` until your application has consumed the entire query result.
-Similar to consuming a Java `Iterator` by obtaining the next batch of results, query result scrolling lets you access a `ScrollPosition` through `Window.positionAt(…)`.
-
-[source,java]
-----
-Window<User> users = repository.findFirst10ByLastnameOrderByFirstname("Doe", ScrollPosition.offset());
-do {
-
- for (User u : users) {
- // consume the user
- }
-
- // obtain the next Scroll
- users = repository.findFirst10ByLastnameOrderByFirstname("Doe", users.positionAt(users.size() - 1));
-} while (!users.isEmpty() && users.hasNext());
-----
-
-`WindowIterator` provides a utility to simplify scrolling across ``Window``s by removing the need to check for the presence of a next `Window` and applying the `ScrollPosition`.
-
-[source,java]
-----
-WindowIterator<User> users = WindowIterator.of(position -> repository.findFirst10ByLastnameOrderByFirstname("Doe", position))
- .startingAt(ScrollPosition.offset());
-
-while (users.hasNext()) {
- User u = users.next();
- // consume the user
-}
-----
-
-[[repositories.scrolling.offset]]
-===== Scrolling using Offset
-
-Offset scrolling uses, similar to pagination, an offset counter to skip a number of results and lets the data source return only results beginning at the given offset.
-This simple mechanism avoids large results being sent to the client application.
-However, most databases require materializing the full query result before your server can return the results.
-
-.Using Offset `ScrollPosition` with Repository Query Methods
-====
-[source,java]
-----
-interface UserRepository extends Repository<User, Long> {
-
-  Window<User> findFirst10ByLastnameOrderByFirstname(String lastname, OffsetScrollPosition position);
-}
-
-WindowIterator<User> users = WindowIterator.of(position -> repository.findFirst10ByLastnameOrderByFirstname("Doe", position))
- .startingAt(ScrollPosition.offset()); <1>
-----
-
-<1> Start from the initial offset at position `0`.
-====
-
-[[repositories.scrolling.keyset]]
-===== Scrolling using Keyset-Filtering
-
-Offset-based scrolling requires most databases to materialize the entire result before your server can return the results.
-So while the client only sees the portion of the requested results, your server needs to build the full result, which causes additional load.
-
-Keyset-Filtering approaches result subset retrieval by leveraging built-in capabilities of your database aiming to reduce the computation and I/O requirements for individual queries.
-This approach maintains a set of keys to resume scrolling by passing keys into the query, effectively amending your filter criteria.
-
-The core idea of Keyset-Filtering is to start retrieving results using a stable sorting order.
-Once you want to scroll to the next chunk, you obtain a `ScrollPosition` that is used to reconstruct the position within the sorted result.
-The `ScrollPosition` captures the keyset of the last entity within the current `Window`.
-To run the query, the reconstruction rewrites the criteria clause to include all sort fields and the primary key so that the database can leverage potential indexes to run the query.
-The database only needs to construct a much smaller result from the given keyset position, without fully materializing a large result and then skipping results until it reaches a particular offset.
-
-[WARNING]
-====
-Keyset-Filtering requires the keyset properties (those used for sorting) to be non-nullable.
-This limitation applies due to the store specific `null` value handling of comparison operators as well as the need to run queries against an indexed source.
-Keyset-Filtering on nullable properties will lead to unexpected results.
-====
-
-.Using `KeysetScrollPosition` with Repository Query Methods
-====
-[source,java]
-----
-interface UserRepository extends Repository<User, Long> {
-
-  Window<User> findFirst10ByLastnameOrderByFirstname(String lastname, KeysetScrollPosition position);
-}
-
-WindowIterator<User> users = WindowIterator.of(position -> repository.findFirst10ByLastnameOrderByFirstname("Doe", position))
- .startingAt(ScrollPosition.keyset()); <1>
-----
-<1> Start at the very beginning and do not apply additional filtering.
-====
-
-Keyset-Filtering works best when your database contains an index that matches the sort fields, hence a static sort works well.
-Scroll queries applying Keyset-Filtering require the properties used in the sort order to be returned by the query, and these must be mapped in the returned entity.
-
-You can use interface and DTO projections; however, make sure to include all properties that you have sorted by to avoid keyset extraction failures.
-
-When specifying your `Sort` order, it is sufficient to include the sort properties relevant to your query;
-you do not need to ensure unique query results yourself.
-The keyset query mechanism amends your sort order by including the primary key (or any remainder of composite primary keys) to ensure each query result is unique.
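-
-As a sketch, a dynamically sorted keyset query only needs the business-relevant sort properties; the repository below is an illustrative assumption following the method signatures shown earlier:
-
-[source,java]
-----
-interface UserRepository extends Repository<User, Long> {
-
-  Window<User> findTop10ByLastname(String lastname, ScrollPosition position, Sort sort);
-}
-
-Window<User> users = repository.findTop10ByLastname("Doe",
-    ScrollPosition.keyset(), Sort.by("lastname", "firstname"));
-----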
diff --git a/src/main/asciidoc/repositories.adoc b/src/main/asciidoc/repositories.adoc
deleted file mode 100644
index 3510c66155..0000000000
--- a/src/main/asciidoc/repositories.adoc
+++ /dev/null
@@ -1,1644 +0,0 @@
-:spring-framework-docs: {springDocsUrl}
-:spring-framework-javadoc: {springJavadocUrl}
-
-ifndef::store[]
-:store: Jpa
-endif::[]
-
-[[repositories]]
-= Working with Spring Data Repositories
-
-The goal of the Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement data access layers for various persistence stores.
-
-[IMPORTANT]
-====
-_Spring Data repository documentation and your module_
-
-This chapter explains the core concepts and interfaces of Spring Data repositories.
-The information in this chapter is pulled from the Spring Data Commons module.
-It uses the configuration and code samples for the Jakarta Persistence API (JPA) module.
-ifeval::[{include-xml-namespaces} != false]
-If you want to use XML configuration you should adapt the XML namespace declaration and the types to be extended to the equivalents of the particular module that you use. "`<>`" covers XML configuration, which is supported across all Spring Data modules that support the repository API.
-endif::[]
-"`<>`" covers the query method keywords supported by the repository abstraction in general.
-For detailed information on the specific features of your module, see the chapter on that module of this document.
-====
-
-[[repositories.core-concepts]]
-== Core concepts
-
-The central interface in the Spring Data repository abstraction is `Repository`.
-It takes the domain class to manage as well as the identifier type of the domain class as type arguments.
-This interface acts primarily as a marker interface to capture the types to work with and to help you to discover interfaces that extend this one.
-The https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/CrudRepository.html[`CrudRepository`] and https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/ListCrudRepository.html[`ListCrudRepository`] interfaces provide sophisticated CRUD functionality for the entity class that is being managed.
-
-[[repositories.repository]]
-.`CrudRepository` Interface
-====
-[source,java]
-----
-public interface CrudRepository<T, ID> extends Repository<T, ID> {
-
-  <S extends T> S save(S entity); <1>
-
-  Optional<T> findById(ID primaryKey); <2>
-
-  Iterable<T> findAll(); <3>
-
-  long count(); <4>
-
-  void delete(T entity); <5>
-
-  boolean existsById(ID primaryKey); <6>
-
- // … more functionality omitted.
-}
-----
-<1> Saves the given entity.
-<2> Returns the entity identified by the given ID.
-<3> Returns all entities.
-<4> Returns the number of entities.
-<5> Deletes the given entity.
-<6> Indicates whether an entity with the given ID exists.
-====
-
-The methods declared in this interface are commonly referred to as CRUD methods.
-`ListCrudRepository` offers equivalent methods, but they return `List` where the `CrudRepository` methods return an `Iterable`.
-
-NOTE: We also provide persistence technology-specific abstractions, such as `JpaRepository` or `MongoRepository`.
-Those interfaces extend `CrudRepository` and expose the capabilities of the underlying persistence technology in addition to the rather generic persistence technology-agnostic interfaces such as `CrudRepository`.
-
-In addition to `CrudRepository`, there is a https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/PagingAndSortingRepository.html[`PagingAndSortingRepository`] abstraction that adds methods to ease paginated access to entities:
-
-.`PagingAndSortingRepository` interface
-====
-[source,java]
-----
-public interface PagingAndSortingRepository<T, ID> {
-
-  Iterable<T> findAll(Sort sort);
-
-  Page<T> findAll(Pageable pageable);
-}
-----
-====
-
-To access the second page of `User` with a page size of 20, you could do something like the following:
-
-====
-[source,java]
-----
-PagingAndSortingRepository<User, Long> repository = // … get access to a bean
-Page<User> users = repository.findAll(PageRequest.of(1, 20));
-----
-====
-
-ifdef::feature-scroll[]
-In addition to pagination, scrolling provides a more fine-grained access to iterate through chunks of larger result sets.
-endif::[]
-
-In addition to query methods, query derivation for both count and delete queries is available.
-The following list shows the interface definition for a derived count query:
-
-.Derived Count Query
-====
-[source,java]
-----
-interface UserRepository extends CrudRepository<User, Long> {
-
- long countByLastname(String lastname);
-}
-----
-====
-
-The following listing shows the interface definition for a derived delete query:
-
-.Derived Delete Query
-====
-[source,java]
-----
-interface UserRepository extends CrudRepository<User, Long> {
-
-  long deleteByLastname(String lastname);
-
-  List<User> removeByLastname(String lastname);
-}
-----
-====
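-
-Invoking these derived methods might look as follows (the injected `userRepository` bean is an assumption for illustration):
-
-[source,java]
-----
-long matthewsCount = userRepository.countByLastname("Matthews");
-
-// removes the matching users and returns the removed instances
-List<User> removed = userRepository.removeByLastname("Matthews");
-----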
-
-[[repositories.query-methods]]
-== Query Methods
-
-Repositories with standard CRUD functionality usually issue queries against the underlying datastore.
-With Spring Data, declaring those queries becomes a four-step process:
-
-. Declare an interface extending Repository or one of its subinterfaces and type it to the domain class and ID type that it should handle, as shown in the following example:
-+
-====
-[source,java]
-----
-interface PersonRepository extends Repository<Person, Long> { … }
-----
-====
-
-. Declare query methods on the interface.
-+
-====
-[source,java]
-----
-interface PersonRepository extends Repository<Person, Long> {
-  List<Person> findByLastname(String lastname);
-}
-----
-====
-
-. Set up Spring to create proxy instances for those interfaces, either with <> or with <>.
-+
-====
-.Java
-[source,java,subs="attributes,specialchars",role="primary"]
-----
-import org.springframework.data.….repository.config.Enable{store}Repositories;
-
-@Enable{store}Repositories
-class Config { … }
-----
-
-ifeval::[{include-xml-namespaces} != false]
-.XML
-[source,xml,role="secondary"]
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<beans xmlns="http://www.springframework.org/schema/beans"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xmlns:jpa="http://www.springframework.org/schema/data/jpa"
-  xsi:schemaLocation="http://www.springframework.org/schema/beans
-    https://www.springframework.org/schema/beans/spring-beans.xsd
-    http://www.springframework.org/schema/data/jpa
-    https://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
-
-  <jpa:repositories base-package="com.acme.repositories"/>
-
-</beans>
-----
-endif::[]
-====
-+
-ifeval::[{include-xml-namespaces} != false]
-The JPA namespace is used in this example.
-If you use the repository abstraction for any other store, you need to change this to the appropriate namespace declaration of your store module.
-In other words, you should exchange `jpa` in favor of, for example, `mongodb`.
-endif::[]
-+
-Note that the JavaConfig variant does not configure a package explicitly, because the package of the annotated class is used by default.
-To customize the package to scan, use one of the `basePackage…` attributes of the data-store-specific `@Enable{store}Repositories` annotation.
-. Inject the repository instance and use it, as shown in the following example:
-+
-====
-[source,java]
-----
-class SomeClient {
-
- private final PersonRepository repository;
-
- SomeClient(PersonRepository repository) {
- this.repository = repository;
- }
-
- void doSomething() {
-    List<Person> persons = repository.findByLastname("Matthews");
- }
-}
-----
-====
-
-The sections that follow explain each step in detail:
-
-* <>
-* <>
-* <>
-* <>
-
-[[repositories.definition]]
-== Defining Repository Interfaces
-
-To define a repository interface, you first need to define a domain class-specific repository interface.
-The interface must extend `Repository` and be typed to the domain class and an ID type.
-If you want to expose CRUD methods for that domain type, you may extend `CrudRepository`, or one of its variants instead of `Repository`.
-
-[[repositories.definition-tuning]]
-=== Fine-tuning Repository Definition
-
-There are a few variants of how you can get started with your repository interface.
-
-The typical approach is to extend `CrudRepository`, which gives you methods for CRUD functionality.
-CRUD stands for Create, Read, Update, Delete.
-With version 3.0 we also introduced `ListCrudRepository`, which is very similar to `CrudRepository` but returns a `List` instead of an `Iterable` for those methods that return multiple entities, which you might find easier to use.
-
-If you are using a reactive store you might choose `ReactiveCrudRepository`, or `RxJava3CrudRepository` depending on which reactive framework you are using.
-
-If you are using Kotlin you might pick `CoroutineCrudRepository` which utilizes Kotlin's coroutines.
-
-Additionally, you can extend `PagingAndSortingRepository`, `ReactiveSortingRepository`, `RxJava3SortingRepository`, or `CoroutineSortingRepository` if you need methods that allow you to specify a `Sort` abstraction or, in the first case, a `Pageable` abstraction.
-Note that the various sorting repositories no longer extend their respective CRUD repository as they did in Spring Data versions before 3.0.
-Therefore, you need to extend both interfaces if you want functionality of both.
-
-If you do not want to extend Spring Data interfaces, you can also annotate your repository interface with `@RepositoryDefinition`.
-Extending one of the CRUD repository interfaces exposes a complete set of methods to manipulate your entities.
-If you prefer to be selective about the methods being exposed, copy the methods you want to expose from the CRUD repository into your domain repository.
-When doing so, you may change the return type of methods.
-Spring Data will honor the return type if possible.
-For example, for methods returning multiple entities you may choose `Iterable`, `List`, `Collection`, or a Vavr `List`.
-
-If many repositories in your application should have the same set of methods you can define your own base interface to inherit from.
-Such an interface must be annotated with `@NoRepositoryBean`.
-This prevents Spring Data from trying to create an instance of it directly and failing because it cannot determine the entity for that repository, since it still contains a generic type variable.
-
-The following example shows how to selectively expose CRUD methods (`findById` and `save`, in this case):
-
-.Selectively exposing CRUD methods
-====
-[source,java]
-----
-@NoRepositoryBean
-interface MyBaseRepository<T, ID> extends Repository<T, ID> {
-
-  Optional<T> findById(ID id);
-
-  <S extends T> S save(S entity);
-}
-
-interface UserRepository extends MyBaseRepository<User, Long> {
- User findByEmailAddress(EmailAddress emailAddress);
-}
-----
-====
-
-In the prior example, you defined a common base interface for all your domain repositories and exposed `findById(…)` as well as `save(…)`. These methods are routed into the base repository implementation of the store of your choice provided by Spring Data (for example, if you use JPA, the implementation is `SimpleJpaRepository`), because they match the method signatures in `CrudRepository`.
-So the `UserRepository` can now save users, find individual users by ID, and trigger a query to find `Users` by email address.
-
-NOTE: The intermediate repository interface is annotated with `@NoRepositoryBean`.
-Make sure you add that annotation to all repository interfaces for which Spring Data should not create instances at runtime.
-
-[[repositories.multiple-modules]]
-=== Using Repositories with Multiple Spring Data Modules
-
-Using a unique Spring Data module in your application makes things simple, because all repository interfaces in the defined scope are bound to the Spring Data module.
-Sometimes, applications require using more than one Spring Data module.
-In such cases, a repository definition must distinguish between persistence technologies.
-When it detects multiple repository factories on the class path, Spring Data enters strict repository configuration mode.
-Strict configuration uses details on the repository or the domain class to decide about Spring Data module binding for a repository definition:
-
-. If the repository definition <>, it is a valid candidate for the particular Spring Data module.
-. If the domain class is <>, it is a valid candidate for the particular Spring Data module.
-Spring Data modules accept either third-party annotations (such as JPA's `@Entity`) or provide their own annotations (such as `@Document` for Spring Data MongoDB and Spring Data Elasticsearch).
-
-The following example shows a repository that uses module-specific interfaces (JPA in this case):
-
-[[repositories.multiple-modules.types]]
-.Repository definitions using module-specific interfaces
-====
-[source,java]
-----
-interface MyRepository extends JpaRepository<User, Long> { }
-
-@NoRepositoryBean
-interface MyBaseRepository<T, ID> extends JpaRepository<T, ID> { … }
-
-interface UserRepository extends MyBaseRepository<User, Long> { … }
-----
-
-`MyRepository` and `UserRepository` extend `JpaRepository` in their type hierarchy.
-They are valid candidates for the Spring Data JPA module.
-====
-
-The following example shows a repository that uses generic interfaces:
-
-.Repository definitions using generic interfaces
-====
-[source,java]
-----
-interface AmbiguousRepository extends Repository<User, Long> { … }
-
-@NoRepositoryBean
-interface MyBaseRepository<T, ID> extends CrudRepository<T, ID> { … }
-
-interface AmbiguousUserRepository extends MyBaseRepository<User, Long> { … }
-----
-
-`AmbiguousRepository` and `AmbiguousUserRepository` extend only `Repository` and `CrudRepository` in their type hierarchy.
-While this is fine when using a unique Spring Data module, multiple modules cannot distinguish to which particular Spring Data module these repositories should be bound.
-====
-
-The following example shows a repository that uses domain classes with annotations:
-
-[[repositories.multiple-modules.annotations]]
-.Repository definitions using domain classes with annotations
-====
-[source,java]
-----
-interface PersonRepository extends Repository<Person, Long> { … }
-
-@Entity
-class Person { … }
-
-interface UserRepository extends Repository<User, Long> { … }
-
-@Document
-class User { … }
-----
-
-`PersonRepository` references `Person`, which is annotated with the JPA `@Entity` annotation, so this repository clearly belongs to Spring Data JPA. `UserRepository` references `User`, which is annotated with Spring Data MongoDB's `@Document` annotation.
-====
-
-The following bad example shows a repository that uses domain classes with mixed annotations:
-
-.Repository definitions using domain classes with mixed annotations
-====
-[source,java]
-----
-interface JpaPersonRepository extends Repository<Person, Long> { … }
-
-interface MongoDBPersonRepository extends Repository<Person, Long> { … }
-
-@Entity
-@Document
-class Person { … }
-----
-
-This example shows a domain class using both JPA and Spring Data MongoDB annotations.
-It defines two repositories, `JpaPersonRepository` and `MongoDBPersonRepository`.
-One is intended for JPA and the other for MongoDB usage.
-Spring Data is no longer able to tell the repositories apart, which leads to undefined behavior.
-====
-
-<> and <> are used for strict repository configuration to identify repository candidates for a particular Spring Data module.
-Using multiple persistence technology-specific annotations on the same domain type is possible and enables reuse of domain types across multiple persistence technologies.
-However, Spring Data can then no longer determine a unique module with which to bind the repository.
-
-The last way to distinguish repositories is by scoping repository base packages.
-Base packages define the starting points for scanning for repository interface definitions, which implies having repository definitions located in the appropriate packages.
-By default, annotation-driven configuration uses the package of the configuration class.
-The <> is mandatory.
-
-The following example shows annotation-driven configuration of base packages:
-
-.Annotation-driven configuration of base packages
-====
-[source,java]
-----
-@EnableJpaRepositories(basePackages = "com.acme.repositories.jpa")
-@EnableMongoRepositories(basePackages = "com.acme.repositories.mongo")
-class Configuration { … }
-----
-====
-
-[[repositories.query-methods.details]]
-== Defining Query Methods
-
-The repository proxy has two ways to derive a store-specific query from the method name:
-
-* By deriving the query from the method name directly.
-* By using a manually defined query.
-
-Available options depend on the actual store.
-However, there must be a strategy that decides what actual query is created.
-The next section describes the available options.
-
-[[repositories.query-methods.query-lookup-strategies]]
-=== Query Lookup Strategies
-
-The following strategies are available for the repository infrastructure to resolve the query.
-ifeval::[{include-xml-namespaces} != false]
-With XML configuration, you can configure the strategy at the namespace through the `query-lookup-strategy` attribute.
-endif::[]
-For Java configuration, you can use the `queryLookupStrategy` attribute of the `Enable{store}Repositories` annotation.
-Some strategies may not be supported for particular datastores.
-
-- `CREATE` attempts to construct a store-specific query from the query method name.
-The general approach is to remove a given set of well known prefixes from the method name and parse the rest of the method.
-You can read more about query construction in "`<>`".
-
-- `USE_DECLARED_QUERY` tries to find a declared query and throws an exception if it cannot find one.
-The query can be defined by an annotation somewhere or declared by other means.
-See the documentation of the specific store to find available options for that store.
-If the repository infrastructure does not find a declared query for the method at bootstrap time, it fails.
-
-- `CREATE_IF_NOT_FOUND` (the default) combines `CREATE` and `USE_DECLARED_QUERY`.
-It looks up a declared query first, and, if no declared query is found, it creates a custom method name-based query.
-This is the default lookup strategy and, thus, is used if you do not configure anything explicitly.
-It allows quick query definition by method names but also custom-tuning of these queries by introducing declared queries as needed.
-
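-For example, with Java configuration you can set the strategy through the `queryLookupStrategy` attribute mentioned above, as sketched below for the JPA variant (the package name is illustrative):
-
-[source,java]
-----
-@Configuration
-@EnableJpaRepositories(
-    basePackages = "com.acme.repositories",
-    queryLookupStrategy = QueryLookupStrategy.Key.USE_DECLARED_QUERY) // fail at bootstrap if a method has no declared query
-class ApplicationConfiguration { }
-----
-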
-[[repositories.query-methods.query-creation]]
-=== Query Creation
-
-The query builder mechanism built into the Spring Data repository infrastructure is useful for building constraining queries over entities of the repository.
-
-The following example shows how to create a number of queries:
-
-.Query creation from method names
-====
-[source,java]
-----
-interface PersonRepository extends Repository<Person, Long> {
-
-  List<Person> findByEmailAddressAndLastname(EmailAddress emailAddress, String lastname);
-
-  // Enables the distinct flag for the query
-  List<Person> findDistinctPeopleByLastnameOrFirstname(String lastname, String firstname);
-  List<Person> findPeopleDistinctByLastnameOrFirstname(String lastname, String firstname);
-
-  // Enabling ignoring case for an individual property
-  List<Person> findByLastnameIgnoreCase(String lastname);
-  // Enabling ignoring case for all suitable properties
-  List<Person> findByLastnameAndFirstnameAllIgnoreCase(String lastname, String firstname);
-
-  // Enabling static ORDER BY for a query
-  List<Person> findByLastnameOrderByFirstnameAsc(String lastname);
-  List<Person> findByLastnameOrderByFirstnameDesc(String lastname);
-}
-----
-====
-
-Parsing query method names is divided into subject and predicate.
-The first part (`find…By`, `exists…By`) defines the subject of the query, the second part forms the predicate.
-The introducing clause (subject) can contain further expressions.
-Any text between `find` (or other introducing keywords) and `By` is considered to be descriptive unless using one of the result-limiting keywords such as `Distinct` to set a distinct flag on the query to be created or <>.
-
-The appendix contains the <> and <>.
-However, the first `By` acts as a delimiter to indicate the start of the actual criteria predicate.
-At a very basic level, you can define conditions on entity properties and concatenate them with `And` and `Or`.
-
-The actual result of parsing the method depends on the persistence store for which you create the query.
-However, there are some general things to notice:
-
-- The expressions are usually property traversals combined with operators that can be concatenated.
-You can combine property expressions with `AND` and `OR`.
-You also get support for operators such as `Between`, `LessThan`, `GreaterThan`, and `Like` for the property expressions.
-The supported operators can vary by datastore, so consult the appropriate part of your reference documentation.
-
-- The method parser supports setting an `IgnoreCase` flag for individual properties (for example, `findByLastnameIgnoreCase(…)`) or for all properties of a type that supports ignoring case (usually `String` instances -- for example, `findByLastnameAndFirstnameAllIgnoreCase(…)`).
-Whether ignoring cases is supported may vary by store, so consult the relevant sections in the reference documentation for the store-specific query method.
-
-- You can apply static ordering by appending an `OrderBy` clause to the query method that references a property and by providing a sorting direction (`Asc` or `Desc`).
-To create a query method that supports dynamic sorting, see "`<>`".
-
-[[repositories.query-methods.query-property-expressions]]
-=== Property Expressions
-
-Property expressions can refer only to a direct property of the managed entity, as shown in the preceding example.
-At query creation time, you already make sure that the parsed property is a property of the managed domain class.
-However, you can also define constraints by traversing nested properties.
-Consider the following method signature:
-
-====
-[source,java]
-----
-List<Person> findByAddressZipCode(ZipCode zipCode);
-----
-====
-
-Assume a `Person` has an `Address` with a `ZipCode`.
-In that case, the method creates the `x.address.zipCode` property traversal.
-The resolution algorithm starts by interpreting the entire part (`AddressZipCode`) as the property and checks the domain class for a property with that name (uncapitalized).
-If the algorithm succeeds, it uses that property.
-If not, the algorithm splits up the source at the camel-case parts from the right side into a head and a tail and tries to find the corresponding property -- in our example, `AddressZip` and `Code`.
-If the algorithm finds a property with that head, it takes the tail and continues building the tree down from there, splitting the tail up in the way just described.
-If the first split does not match, the algorithm moves the split point to the left (`Address`, `ZipCode`) and continues.
-
-Although this should work for most cases, it is possible for the algorithm to select the wrong property.
-Suppose the `Person` class has an `addressZip` property as well.
-The algorithm would match in the first split round already, choose the wrong property, and fail (as the type of `addressZip` probably has no `code` property).
-
-To resolve this ambiguity you can use `_` inside your method name to manually define traversal points.
-So our method name would be as follows:
-
-====
-[source,java]
-----
-List<Person> findByAddress_ZipCode(ZipCode zipCode);
-----
-====
-
-Because we treat the underscore character as a reserved character, we strongly advise following standard Java naming conventions (that is, not using underscores in property names but using camel case instead).
-
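-A minimal sketch of the types behind this traversal (the field names are assumptions for illustration):
-
-[source,java]
-----
-class Person {
-  Address address;
-}
-
-class Address {
-  ZipCode zipCode;
-}
-
-interface PersonRepository extends Repository<Person, Long> {
-
-  // traverses person.address.zipCode; the underscore marks the split point explicitly
-  List<Person> findByAddress_ZipCode(ZipCode zipCode);
-}
-----
-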
-include::repositories-paging-sorting.adoc[]
-
-[[repositories.collections-and-iterables]]
-=== Repository Methods Returning Collections or Iterables
-
-Query methods that return multiple results can use standard Java `Iterable`, `List`, and `Set`.
-Beyond that, we support returning Spring Data's `Streamable`, a custom extension of `Iterable`, as well as collection types provided by https://www.vavr.io/[Vavr].
-Refer to the appendix explaining all possible <>.
-
-[[repositories.collections-and-iterables.streamable]]
-==== Using Streamable as Query Method Return Type
-
-You can use `Streamable` as an alternative to `Iterable` or any collection type.
-It provides convenience methods to access a non-parallel `Stream` (missing from `Iterable`) and the ability to directly `….filter(…)` and `….map(…)` over the elements and concatenate the `Streamable` to others:
-
-.Using Streamable to combine query method results
-====
-[source,java]
-----
-interface PersonRepository extends Repository<Person, Long> {
-  Streamable<Person> findByFirstnameContaining(String firstname);
-  Streamable<Person> findByLastnameContaining(String lastname);
-}
-
-Streamable<Person> result = repository.findByFirstnameContaining("av")
- .and(repository.findByLastnameContaining("ea"));
-----
-====
-
-[[repositories.collections-and-iterables.streamable-wrapper]]
-==== Returning Custom Streamable Wrapper Types
-
-Providing dedicated wrapper types for collections is a commonly used pattern to provide an API for a query result that returns multiple elements.
-Usually, these types are used by invoking a repository method returning a collection-like type and creating an instance of the wrapper type manually.
-You can avoid that additional step as Spring Data lets you use these wrapper types as query method return types if they meet the following criteria:
-
-. The type implements `Streamable`.
-. The type exposes either a constructor or a static factory method named `of(…)` or `valueOf(…)` that takes `Streamable` as an argument.
-
-The following listing shows an example:
-
-====
-[source,java]
-----
-class Product { <1>
- MonetaryAmount getPrice() { … }
-}
-
-@RequiredArgsConstructor(staticName = "of")
-class Products implements Streamable<Product> { <2>
-
-  private final Streamable<Product> streamable;
-
- public MonetaryAmount getTotal() { <3>
- return streamable.stream()
- .map(Priced::getPrice)
- .reduce(Money.of(0), MonetaryAmount::add);
- }
-
-
- @Override
-  public Iterator<Product> iterator() { <4>
- return streamable.iterator();
- }
-}
-
-interface ProductRepository extends Repository<Product, Long> {
- Products findAllByDescriptionContaining(String text); <5>
-}
-----
-<1> A `Product` entity that exposes API to access the product's price.
-<2> A wrapper type for a `Streamable` that can be constructed by using `Products.of(…)` (factory method created with the Lombok annotation).
- A standard constructor taking the `Streamable` will do as well.
-<3> The wrapper type exposes an additional API, calculating new values on the `Streamable`.
-<4> Implement the `Streamable` interface and delegate to the actual result.
-<5> That wrapper type `Products` can be used directly as a query method return type.
-You do not need to return `Streamable` and manually wrap it after the query in the repository client.
-====
-
-[[repositories.collections-and-iterables.vavr]]
-==== Support for Vavr Collections
-
-https://www.vavr.io/[Vavr] is a library that embraces functional programming concepts in Java.
-It ships with a custom set of collection types that you can use as query method return types, as the following table shows:
-
-[options=header]
-|====
-|Vavr collection type|Used Vavr implementation type|Valid Java source types
-|`io.vavr.collection.Seq`|`io.vavr.collection.List`|`java.util.Iterable`
-|`io.vavr.collection.Set`|`io.vavr.collection.LinkedHashSet`|`java.util.Iterable`
-|`io.vavr.collection.Map`|`io.vavr.collection.LinkedHashMap`|`java.util.Map`
-|====
-
-You can use the types in the first column (or subtypes thereof) as query method return types and get the types in the second column used as implementation type, depending on the Java type of the actual query result (third column).
-Alternatively, you can declare `Traversable` (the Vavr `Iterable` equivalent), and we then derive the implementation class from the actual return value.
-That is, a `java.util.List` is turned into a Vavr `List` or `Seq`, a `java.util.Set` becomes a Vavr `LinkedHashSet`-backed `Set`, and so on.
-
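-A sketch of query methods declaring Vavr return types (the repository and methods are illustrative):
-
-[source,java]
-----
-interface UserRepository extends Repository<User, Long> {
-
-  io.vavr.collection.Seq<User> findByLastname(String lastname);     // backed by a Vavr List
-
-  io.vavr.collection.Set<User> findByFirstname(String firstname);   // backed by a Vavr LinkedHashSet
-}
-----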
-
-[[repositories.query-streaming]]
-=== Streaming Query Results
-
-You can process the results of query methods incrementally by using a Java 8 `Stream` as the return type.
-Instead of wrapping the query results in a `Stream`, data store-specific methods are used to perform the streaming, as shown in the following example:
-
-.Stream the result of a query with Java 8 `Stream`
-====
-[source,java]
-----
-@Query("select u from User u")
-Stream<User> findAllByCustomQueryAndStream();
-
-Stream<User> readAllByFirstnameNotNull();
-
-@Query("select u from User u")
-Stream<User> streamAllPaged(Pageable pageable);
-----
-====
-
-NOTE: A `Stream` potentially wraps underlying data store-specific resources and must, therefore, be closed after usage.
-You can either manually close the `Stream` by using the `close()` method or by using a Java 7 `try-with-resources` block, as shown in the following example:
-
-.Working with a `Stream` result in a `try-with-resources` block
-====
-[source,java]
-----
-try (Stream<User> stream = repository.findAllByCustomQueryAndStream()) {
- stream.forEach(…);
-}
-----
-====
-
-NOTE: Not all Spring Data modules currently support `Stream` as a return type.
-
-include::repositories-null-handling.adoc[]
-
-[[repositories.query-async]]
-=== Asynchronous Query Results
-
-You can run repository queries asynchronously by using {spring-framework-docs}/integration.html#scheduling[Spring's asynchronous method running capability].
-This means the method returns immediately upon invocation while the actual query occurs in a task that has been submitted to a Spring `TaskExecutor`.
-Asynchronous queries differ from reactive queries and should not be mixed.
-See the store-specific documentation for more details on reactive support.
-The following example shows a number of asynchronous queries:
-
-====
-[source,java]
-----
-@Async
-Future<User> findByFirstname(String firstname); <1>
-
-@Async
-CompletableFuture<User> findOneByFirstname(String firstname); <2>
-----
-<1> Use `java.util.concurrent.Future` as the return type.
-<2> Use a Java 8 `java.util.concurrent.CompletableFuture` as the return type.
-====
-
-[[repositories.create-instances]]
-== Creating Repository Instances
-
-This section covers how to create instances and bean definitions for the defined repository interfaces.
-
-[[repositories.create-instances.java-config]]
-=== Java Configuration
-
-Use the store-specific `@Enable{store}Repositories` annotation on a Java configuration class to define a configuration for repository activation.
-For an introduction to Java-based configuration of the Spring container, see {spring-framework-docs}/core.html#beans-java[JavaConfig in the Spring reference documentation].
-
-A sample configuration to enable Spring Data repositories resembles the following:
-
-.Sample annotation-based repository configuration
-====
-[source,java]
-----
-@Configuration
-@EnableJpaRepositories("com.acme.repositories")
-class ApplicationConfiguration {
-
- @Bean
- EntityManagerFactory entityManagerFactory() {
- // …
- }
-}
-----
-====
-
-NOTE: The preceding example uses the JPA-specific annotation, which you would change according to the store module you actually use. The same applies to the definition of the `EntityManagerFactory` bean. See the sections covering the store-specific configuration.
-
-ifeval::[{include-xml-namespaces} != false]
-[[repositories.create-instances.spring]]
-[[repositories.create-instances.xml]]
-=== XML Configuration
-
-Each Spring Data module includes a `repositories` element that lets you define a base package that Spring scans for you, as shown in the following example:
-
-.Enabling Spring Data repositories via XML
-====
-[source,xml]
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<beans xmlns="http://www.springframework.org/schema/beans"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xmlns:jpa="http://www.springframework.org/schema/data/jpa"
-  xsi:schemaLocation="http://www.springframework.org/schema/beans
-    https://www.springframework.org/schema/beans/spring-beans.xsd
-    http://www.springframework.org/schema/data/jpa
-    https://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
-
-  <jpa:repositories base-package="com.acme.repositories"/>
-
-</beans>
-----
-====
-
-In the preceding example, Spring is instructed to scan `com.acme.repositories` and all its sub-packages for interfaces extending `Repository` or one of its sub-interfaces.
-For each interface found, the infrastructure registers the persistence technology-specific `FactoryBean` to create the appropriate proxies that handle invocations of the query methods.
-Each bean is registered under a bean name that is derived from the interface name, so an interface of `UserRepository` would be registered under `userRepository`.
-Bean names for nested repository interfaces are prefixed with their enclosing type name.
-The base package attribute allows wildcards so that you can define a pattern of scanned packages.
-endif::[]
-
-[[repositories.using-filters]]
-=== Using Filters
-
-By default, the infrastructure picks up every interface that extends the persistence technology-specific `Repository` sub-interface located under the configured base package and creates a bean instance for it.
-However, you might want more fine-grained control over which interfaces have bean instances created for them.
-To do so, use filter elements inside the repository declaration.
-The semantics are exactly equivalent to those of the elements in Spring's component filters.
-For details, see the {spring-framework-docs}/core.html#beans-scanning-filters[Spring reference documentation] for these elements.
-
-For example, to exclude certain interfaces from instantiation as repository beans, you could use the following configuration:
-
-.Using filters
-====
-.Java
-[source,java,subs="attributes,specialchars",role="primary"]
-----
-@Configuration
-@Enable{store}Repositories(basePackages = "com.acme.repositories",
- includeFilters = { @Filter(type = FilterType.REGEX, pattern = ".*SomeRepository") },
- excludeFilters = { @Filter(type = FilterType.REGEX, pattern = ".*SomeOtherRepository") })
-class ApplicationConfiguration {
-
- @Bean
- EntityManagerFactory entityManagerFactory() {
- // …
- }
-}
-----
-
-ifeval::[{include-xml-namespaces} != false]
-.XML
-[source,xml,role="secondary"]
-----
-<repositories base-package="com.acme.repositories">
-  <context:include-filter type="regex" expression=".*SomeRepository" />
-  <context:exclude-filter type="regex" expression=".*SomeOtherRepository" />
-</repositories>
-----
-endif::[]
-====
-
-The preceding example includes all interfaces ending with `SomeRepository` and excludes those ending with `SomeOtherRepository` from being instantiated.
-
-
-[[repositories.create-instances.standalone]]
-=== Standalone Usage
-
-You can also use the repository infrastructure outside of a Spring container -- for example, in CDI environments. You still need some Spring libraries in your classpath, but, generally, you can set up repositories programmatically as well. The Spring Data modules that provide repository support ship with a persistence technology-specific `RepositoryFactory` that you can use, as follows:
-
-.Standalone usage of the repository factory
-====
-[source,java]
-----
-RepositoryFactorySupport factory = … // Instantiate factory here
-UserRepository repository = factory.getRepository(UserRepository.class);
-----
-====
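-
-With the JPA module, for instance, the factory could be created as in the following sketch, which assumes an `EntityManager` is already available; other store modules ship analogous factory implementations:
-
-.Standalone usage with the JPA repository factory
-====
-[source,java]
-----
-EntityManager entityManager = … // Obtain the EntityManager from your persistence setup
-RepositoryFactorySupport factory = new JpaRepositoryFactory(entityManager);
-UserRepository repository = factory.getRepository(UserRepository.class);
-----
-====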
-
-[[repositories.custom-implementations]]
-== Custom Implementations for Spring Data Repositories
-
-Spring Data provides various options to create query methods with little coding.
-But when those options don't fit your needs, you can also provide your own custom implementation for repository methods.
-This section describes how to do that.
-
-[[repositories.single-repository-behavior]]
-=== Customizing Individual Repositories
-
-To enrich a repository with custom functionality, you must first define a fragment interface and an implementation for the custom functionality, as follows:
-
-.Interface for custom repository functionality
-====
-[source,java]
-----
-interface CustomizedUserRepository {
- void someCustomMethod(User user);
-}
-----
-====
-
-.Implementation of custom repository functionality
-====
-[source,java]
-----
-class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
-
- public void someCustomMethod(User user) {
- // Your custom implementation
- }
-}
-----
-====
-
-NOTE: The most important part of the class name that corresponds to the fragment interface is the `Impl` postfix.
-
-The implementation itself does not depend on Spring Data and can be a regular Spring bean.
-Consequently, you can use standard dependency injection behavior to inject references to other beans (such as a `JdbcTemplate`), take part in aspects, and so on.
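-
-The following sketch illustrates that point; the injected `JdbcTemplate` is purely hypothetical and not required by Spring Data:
-
-====
-[source,java]
-----
-class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
-
-  private final JdbcTemplate jdbcTemplate;
-
-  CustomizedUserRepositoryImpl(JdbcTemplate jdbcTemplate) {
-    this.jdbcTemplate = jdbcTemplate; // wired like in any other Spring bean
-  }
-
-  public void someCustomMethod(User user) {
-    // Use the injected JdbcTemplate in your custom implementation
-  }
-}
-----
-====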
-
-Then you can let your repository interface extend the fragment interface, as follows:
-
-.Changes to your repository interface
-====
-[source,java]
-----
-interface UserRepository extends CrudRepository<User, Long>, CustomizedUserRepository {
-
- // Declare query methods here
-}
-----
-====
-
-Extending the fragment interface with your repository interface combines the CRUD and custom functionality and makes it available to clients.
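-
-A hypothetical client can then use both kinds of methods through the same injected proxy, for example:
-
-====
-[source,java]
-----
-// findById(…) comes from CrudRepository, someCustomMethod(…) from the custom fragment
-userRepository.findById(42L).ifPresent(userRepository::someCustomMethod);
-----
-====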
-
-Spring Data repositories are implemented by using fragments that form a repository composition.
-Fragments are the base repository, functional aspects (such as Querydsl), and custom interfaces along with their implementations.
-Each time you add an interface to your repository interface, you enhance the composition by adding a fragment.
-The base repository and repository aspect implementations are provided by each Spring Data module.
-
-The following example shows custom interfaces and their implementations:
-
-.Fragments with their implementations
-====
-[source,java]
-----
-interface HumanRepository {
- void someHumanMethod(User user);
-}
-
-class HumanRepositoryImpl implements HumanRepository {
-
- public void someHumanMethod(User user) {
- // Your custom implementation
- }
-}
-
-interface ContactRepository {
-
- void someContactMethod(User user);
-
- User anotherContactMethod(User user);
-}
-
-class ContactRepositoryImpl implements ContactRepository {
-
- public void someContactMethod(User user) {
- // Your custom implementation
- }
-
- public User anotherContactMethod(User user) {
- // Your custom implementation
- }
-}
-----
-====
-
-The following example shows the interface for a custom repository that extends `CrudRepository`:
-
-.Changes to your repository interface
-====
-[source,java]
-----
-interface UserRepository extends CrudRepository<User, Long>, HumanRepository, ContactRepository {
-
- // Declare query methods here
-}
-----
-====
-
-Repositories may be composed of multiple custom implementations that are imported in the order of their declaration.
-Custom implementations have a higher priority than the base implementation and repository aspects.
-This ordering lets you override base repository and aspect methods and resolves ambiguity if two fragments contribute the same method signature.
-Repository fragments are not limited to use in a single repository interface.
-Multiple repositories may use a fragment interface, letting you reuse customizations across different repositories.
-
-The following example shows a repository fragment and its implementation:
-
-.Fragments overriding `save(…)`
-====
-[source,java]
-----
-interface CustomizedSave<T> {
-  <S extends T> S save(S entity);
-}
-
-class CustomizedSaveImpl<T> implements CustomizedSave<T> {
-
-  public <S extends T> S save(S entity) {
-    // Your custom implementation
-  }
-}
-----
-====
-
-The following example shows a repository that uses the preceding repository fragment:
-
-.Customized repository interfaces
-====
-[source,java]
-----
-interface UserRepository extends CrudRepository<User, Long>, CustomizedSave<User> {
-}
-
-interface PersonRepository extends CrudRepository<Person, Long>, CustomizedSave<Person> {
-}
-----
-====
-
-[[repositories.configuration]]
-==== Configuration
-
-The repository infrastructure tries to autodetect custom implementation fragments by scanning for classes below the package in which it found a repository.
-These classes need to follow the naming convention of appending a postfix defaulting to `Impl`.
-
-The following example shows a repository that uses the default postfix and a repository that sets a custom value for the postfix:
-
-.Configuration example
-====
-.Java
-[source,java,subs="attributes,specialchars",role="primary"]
-----
-@Enable{store}Repositories(repositoryImplementationPostfix = "MyPostfix")
-class Configuration { … }
-----
-
-ifeval::[{include-xml-namespaces} != false]
-.XML
-[source,xml,role="secondary"]
-----
-<repositories base-package="com.acme.repository" />
-
-<repositories base-package="com.acme.repository" repository-impl-postfix="MyPostfix" />
-----
-endif::[]
-====
-
-The first configuration in the preceding example tries to look up a class called `com.acme.repository.CustomizedUserRepositoryImpl` to act as a custom repository implementation.
-The second example tries to look up `com.acme.repository.CustomizedUserRepositoryMyPostfix`.
-
-[[repositories.single-repository-behaviour.ambiguity]]
-===== Resolution of Ambiguity
-
-If multiple implementations with matching class names are found in different packages, Spring Data uses the bean names to identify which one to use.
-
-Given the following two custom implementations for the `CustomizedUserRepository` shown earlier, the first implementation is used.
-Its bean name is `customizedUserRepositoryImpl`, which matches that of the fragment interface (`CustomizedUserRepository`) plus the postfix `Impl`.
-
-.Resolution of ambiguous implementations
-====
-[source,java]
-----
-package com.acme.impl.one;
-
-class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
-
- // Your custom implementation
-}
-----
-
-[source,java]
-----
-package com.acme.impl.two;
-
-@Component("specialCustomImpl")
-class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
-
- // Your custom implementation
-}
-----
-====
-
-If you annotate the `UserRepository` interface with `@Component("specialCustom")`, the bean name plus `Impl` then matches the one defined for the repository implementation in `com.acme.impl.two`, and it is used instead of the first one.
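-
-A minimal sketch of that variant might look as follows:
-
-====
-[source,java]
-----
-@Component("specialCustom")
-interface UserRepository extends CrudRepository<User, Long>, CustomizedUserRepository {
-  // The fragment implementation resolved for this repository is the bean named "specialCustomImpl"
-}
-----
-====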
-
-[[repositories.manual-wiring]]
-===== Manual Wiring
-
-If your custom implementation uses annotation-based configuration and autowiring only, the preceding approach works well, because the fragment implementation is treated as any other Spring bean.
-If your implementation fragment bean needs special wiring, you can declare the bean and name it according to the conventions described in the <<repositories.single-repository-behaviour.ambiguity,preceding section>>.
-The infrastructure then refers to the manually defined bean definition by name instead of creating one itself.
-The following example shows how to manually wire a custom implementation:
-
-.Manual wiring of custom implementations
-====
-
-.Java
-[source,java,role="primary"]
-----
-class MyClass {
- MyClass(@Qualifier("userRepositoryImpl") UserRepository userRepository) {
- …
- }
-}
-----
-
-ifeval::[{include-xml-namespaces} != false]
-.XML
-[source,xml,role="secondary"]
-----
-<repositories base-package="com.acme.repository" />
-
-<beans:bean id="userRepositoryImpl" class="…">
-  <!-- further configuration -->
-</beans:bean>
-----
-endif::[]
-
-====
-
-[[repositories.customize-base-repository]]
-=== Customize the Base Repository
-
-The approach described in the <<repositories.single-repository-behavior,preceding section>> requires customization of each repository interface when you want to customize the base repository behavior so that all repositories are affected.
-To instead change behavior for all repositories, you can create an implementation that extends the persistence technology-specific repository base class.
-This class then acts as a custom base class for the repository proxies, as shown in the following example:
-
-.Custom repository base class
-====
-[source,java]
-----
-class MyRepositoryImpl<T, ID>
-  extends SimpleJpaRepository<T, ID> {
-
-  private final EntityManager entityManager;
-
-  MyRepositoryImpl(JpaEntityInformation<T, ?> entityInformation,
-                   EntityManager entityManager) {
-    super(entityInformation, entityManager);
-
-    // Keep the EntityManager around to be used from the newly introduced methods.
-    this.entityManager = entityManager;
-  }
-
-  @Transactional
-  public <S extends T> S save(S entity) {
-    // implementation goes here
-  }
-}
-----
-====
-
-CAUTION: The class needs to have a constructor matching one of the super class constructors that the store-specific repository factory implementation uses.
-If the repository base class has multiple constructors, override the one taking an `EntityInformation` plus a store-specific infrastructure object (such as an `EntityManager` or a template class).
-
-The final step is to make the Spring Data infrastructure aware of the customized repository base class.
-In configuration, you can do so by using the `repositoryBaseClass` attribute, as shown in the following example:
-
-.Configuring a custom repository base class
-====
-.Java
-[source,java,subs="attributes,specialchars",role="primary"]
-----
-@Configuration
-@Enable{store}Repositories(repositoryBaseClass = MyRepositoryImpl.class)
-class ApplicationConfiguration { … }
-----
-
-ifeval::[{include-xml-namespaces} != false]
-.XML
-[source,xml,role="secondary"]
-----
-<repositories base-package="com.acme.repository" base-class="….MyRepositoryImpl" />
-----
-endif::[]
-====
-
-[[core.domain-events]]
-== Publishing Events from Aggregate Roots
-
-Entities managed by repositories are aggregate roots.
-In a Domain-Driven Design application, these aggregate roots usually publish domain events.
-Spring Data provides an annotation called `@DomainEvents` that you can use on a method of your aggregate root to make that publication as easy as possible, as shown in the following example:
-
-.Exposing domain events from an aggregate root
-====
-[source,java]
-----
-class AnAggregateRoot {
-
- @DomainEvents <1>
- Collection