diff --git a/src/reference/asciidoc/aggregator.adoc b/src/reference/asciidoc/aggregator.adoc index 5d1a247428e..fbe52349207 100644 --- a/src/reference/asciidoc/aggregator.adoc +++ b/src/reference/asciidoc/aggregator.adoc @@ -127,7 +127,7 @@ See <<./splitter.adoc#splitter,Splitter>> for more information. [[agg-message-collection]] IMPORTANT: The `SimpleMessageGroup.getMessages()` method returns an `unmodifiableCollection`. -Therefore, if your aggregating POJO method has a `Collection` parameter, the argument passed in is exactly that `Collection` instance and, when you use a `SimpleMessageStore` for the aggregator, that original `Collection` is cleared after releasing the group. +Therefore, if an aggregating POJO method has a `Collection` parameter, the argument passed in is exactly that `Collection` instance and, when you use a `SimpleMessageStore` for the aggregator, that original `Collection` is cleared after releasing the group. Consequently, the `Collection` variable in the POJO is cleared too, if it is passed out of the aggregator. If you wish to simply release that collection as-is for further processing, you must build a new `Collection` (for example, `new ArrayList(messages)`). Starting with version 4.3, the framework no longer copies the messages to a new collection, to avoid undesired extra object creation. @@ -195,7 +195,7 @@ public class MyReleaseStrategy { Based on the signatures in the preceding two examples, the POJO-based release strategy is passed a `Collection` of not-yet-released messages (if you need access to the whole `Message`) or a `Collection` of payload objects (if the type parameter is anything other than `Message`). This satisfies the majority of use cases. -However if, for some reason, you need to access the full `MessageGroup`, you should provide an implementation of the `ReleaseStrategy` interface. 
+However, if, for some reason, you need to access the full `MessageGroup`, you should provide an implementation of the `ReleaseStrategy` interface. [WARNING] ===== @@ -218,7 +218,7 @@ You can release partial sequences by using a `MessageGroupStoreReaper` together IMPORTANT: To facilitate discarding of late-arriving messages, the aggregator must maintain state about the group after it has been released. This can eventually cause out-of-memory conditions. To avoid such situations, you should consider configuring a `MessageGroupStoreReaper` to remove the group metadata. -The expiry parameters should be set to expire groups once a point has been reach after after which late messages are not expected to arrive. +The expiry parameters should be set to expire groups once a point has been reached, after which late messages are not expected to arrive. For information about configuring a reaper, see <>. Spring Integration provides an implementation for `ReleaseStrategy`: `SimpleSequenceSizeReleaseStrategy`. @@ -422,7 +422,7 @@ See <<./message-store.adoc#message-store,Message Store>> for more information. Optional. <8> Indicates that expired messages should be aggregated and sent to the 'output-channel' or 'replyChannel' once their containing `MessageGroup` is expired (see https://docs.spring.io/spring-integration/api/org/springframework/integration/store/MessageGroupStore.html#expireMessageGroups-long[`MessageGroupStore.expireMessageGroups(long)`]). One way of expiring a `MessageGroup` is by configuring a `MessageGroupStoreReaper`. -However you can alternatively expire `MessageGroup` by calling `MessageGroupStore.expireMessageGroups(timeout)`. +However, you can alternatively expire a `MessageGroup` by calling `MessageGroupStore.expireMessageGroups(timeout)`. You can accomplish that through a Control Bus operation or, if you have a reference to the `MessageGroupStore` instance, by invoking `expireMessageGroups(timeout)`. Otherwise, by itself, this attribute does nothing. 
It serves only as an indicator of whether to discard or send to the output or reply channel any messages that are still in the `MessageGroup` that is about to be expired. @@ -433,7 +433,7 @@ Defaults to `-1`, which results in blocking indefinitely. It is applied only if the output channel has some 'sending' limitations, such as a `QueueChannel` with a fixed 'capacity'. In this case, a `MessageDeliveryException` is thrown. For `AbstractSubscribableChannel` implementations, the `send-timeout` is ignored. -For `group-timeout(-expression)`, the `MessageDeliveryException` from the scheduled expire task leads this task to be rescheduled. +For `group-timeout(-expression)`, the `MessageDeliveryException` from the scheduled expiration task causes that task to be rescheduled. Optional. <10> A reference to a bean that implements the message correlation (grouping) algorithm. The bean can be an implementation of the `CorrelationStrategy` interface or a POJO. @@ -549,7 +549,7 @@ Such a periodic purge functionality is useful when a message store is needed to In most cases this happens after an application restart, when using a persistent message group store. The functionality is similar to the `MessageGroupStoreReaper` with a scheduled task, but provides a convenient way to deal with old groups within specific components, when using group timeout instead of a reaper. The `MessageGroupStore` must be provided exclusively for the current correlation endpoint. -Otherwise one aggregator may purge groups from another. +Otherwise, one aggregator may purge groups from another. With the aggregator, groups expired using this technique will either be discarded or released as a partial group, depending on the `expireGroupsUponCompletion` property. ===== @@ -920,11 +920,11 @@ In version 5.2, the `FluxAggregatorMessageHandler` component has been introduced It is based on the Project Reactor `Flux.groupBy()` and `Flux.window()` operators. 
The incoming messages are emitted into the `FluxSink` initiated by the `Flux.create()` in the constructor of this component. If the `outputChannel` is not provided or it is not an instance of `ReactiveStreamsSubscribableChannel`, the subscription to the main `Flux` is done from the `Lifecycle.start()` implementation. -Otherwise it is postponed to the subscription done by the `ReactiveStreamsSubscribableChannel` implementation. +Otherwise, it is postponed to the subscription done by the `ReactiveStreamsSubscribableChannel` implementation. The messages are grouped by the `Flux.groupBy()` using a `CorrelationStrategy` for the group key. By default, the `IntegrationMessageHeaderAccessor.CORRELATION_ID` header of the message is consulted. -By default every closed window is released as a `Flux` in payload of a message to produce. +By default, every closed window is released as a `Flux` in the payload of the produced message. This message contains all the headers from the first message in the window. This `Flux` in the output message payload must be subscribed and processed downstream. Such a logic can be customized (or superseded) by the `setCombineFunction(Function>, Mono>>)` configuration option of the `FluxAggregatorMessageHandler`. @@ -948,8 +948,8 @@ There are several options in the `FluxAggregatorMessageHandler` to select an app See its JavaDocs for more information. Has a precedence over all other window options. * `setWindowSize(int)` and `setWindowSizeFunction(Function, Integer>)` - is propagated to the `Flux.window(int)` or `windowTimeout(int, Duration)`. -By default a window size is calculated from the first message in group and its `IntegrationMessageHeaderAccessor.SEQUENCE_SIZE` header. -* `setWindowTimespan(Duration)` - is propagated to the `Flux.window(Duration)` or `windowTimeout(int, Duration)` depending in the window size configuration. 
+By default, a window size is calculated from the first message in the group and its `IntegrationMessageHeaderAccessor.SEQUENCE_SIZE` header. +* `setWindowTimespan(Duration)` - is propagated to the `Flux.window(Duration)` or `windowTimeout(int, Duration)` depending on the window size configuration. * `setWindowConfigurer(Function>, Flux>>>)` - a function to apply a transformation into the grouped fluxes for any custom window operation not covered by the exposed options. Since this component is a `MessageHandler` implementation it can simply be used as a `@Bean` definition together with a `@ServiceActivator` messaging annotation. diff --git a/src/reference/asciidoc/amqp.adoc b/src/reference/asciidoc/amqp.adoc index ec86b75c8ef..11be4131c64 100644 --- a/src/reference/asciidoc/amqp.adoc +++ b/src/reference/asciidoc/amqp.adoc @@ -599,7 +599,7 @@ Default none (nacks will not be generated). This requires a `RabbitTemplate` configured for confirms as well as a `confirm-correlation-expression`. The thread will block for up to `confirm-timeout` (or 5 seconds by default). If a timeout occurs, a `MessageTimeoutException` will be thrown. -If returns are enabled and a message is returned, or any other exception occurs while awaiting the confirm, a `MessageHandlingException` will be thrown, with an appropriate message. +If returns are enabled and a message is returned, or any other exception occurs while awaiting the confirmation, a `MessageHandlingException` will be thrown, with an appropriate message. <15> The channel to which returned messages are sent. When provided, the underlying AMQP template is configured to return undeliverable messages to the adapter. When there is no `ErrorMessageStrategy` configured, the message is constructed from the data received from AMQP, with the following additional headers: `amqp_returnReplyCode`, `amqp_returnReplyText`, `amqp_returnExchange`, `amqp_returnRoutingKey`. 
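The `FluxAggregatorMessageHandler` hunks above note that the component "can simply be used as a `@Bean` definition together with a `@ServiceActivator` messaging annotation". A minimal sketch of that usage follows; the channel names and the window size of 10 are illustrative assumptions, not taken from the original text:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.aggregator.FluxAggregatorMessageHandler;
import org.springframework.messaging.MessageChannel;

@Configuration
public class WindowConfig {

    // Hypothetical channel names. setWindowSize(10) closes each window after ten
    // messages instead of deriving the size from the SEQUENCE_SIZE header.
    @Bean
    @ServiceActivator(inputChannel = "windowInput")
    public FluxAggregatorMessageHandler fluxAggregator(MessageChannel windowOutput) {
        FluxAggregatorMessageHandler handler = new FluxAggregatorMessageHandler();
        handler.setWindowSize(10);
        handler.setOutputChannel(windowOutput);
        return handler;
    }
}
```

Each released window then arrives on `windowOutput` as a message whose payload is a `Flux` that must be subscribed downstream, as described above.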
@@ -969,7 +969,7 @@ See also <<./service-activator.adoc#async-service-activator,Asynchronous Service .RabbitTemplate ==== When you use confirmations and returns, we recommend that the `RabbitTemplate` wired into the `AsyncRabbitTemplate` be dedicated. -Otherwise, unexpected side-effects may be encountered. +Otherwise, unexpected side effects may be encountered. ==== [[alternative-confirms-returns]] @@ -1373,7 +1373,7 @@ public IntegrationFlow flow(RabbitTemplate template) { Suppose we send messages `A`, `B` and `C` to the gateway. While it is likely that messages `A`, `B`, `C` are sent in order, there is no guarantee. This is because the template "`borrows`" a channel from the cache for each send operation, and there is no guarantee that the same channel is used for each message. -One solution is to start a transaction before the splitter, but transactions are expensive in RabbitMQ and can reduce performance several hundred fold. +One solution is to start a transaction before the splitter, but transactions are expensive in RabbitMQ and can reduce performance several hundred-fold. To solve this problem in a more efficient manner, starting with version 5.1, Spring Integration provides the `BoundRabbitChannelAdvice` which is a `HandleMessageAdvice`. See <<./handler-advice.adoc#handle-message-advice,Handling Message Advice>>. @@ -1417,5 +1417,4 @@ In return, that message is retrieved by Spring Integration and printed to the co The following image illustrates the basic set of Spring Integration components used in this sample. 
-.The Spring Integration graph of the AMQP sample -image::images/spring-integration-amqp-sample-graph.png[] +.The Spring Integration graph of the AMQP sample +image::images/spring-integration-amqp-sample-graph.png[] diff --git a/src/reference/asciidoc/barrier.adoc b/src/reference/asciidoc/barrier.adoc index cfc236347f9..7f7ad58085d 100644 --- a/src/reference/asciidoc/barrier.adoc +++ b/src/reference/asciidoc/barrier.adoc @@ -51,7 +51,7 @@ public BarrierMessageHandler barrier(MessageChannel out, MessageChannel lateTrig @ServiceActivator (inputChannel="release") @Bean public MessageHandler releaser(MessageTriggerAction barrier) { - return barrier::trigger(message); + return barrier::trigger; } ---- [source, xml, role="secondary"] diff --git a/src/reference/asciidoc/bridge.adoc b/src/reference/asciidoc/bridge.adoc index e48e576aede..8951b4571be 100644 --- a/src/reference/asciidoc/bridge.adoc +++ b/src/reference/asciidoc/bridge.adoc @@ -6,7 +6,7 @@ For example, you may want to connect a `PollableChannel` to a `SubscribableChann Instead, the messaging bridge provides the polling configuration. By providing an intermediary poller between two channels, you can use a messaging bridge to throttle inbound messages. -The poller's trigger determines the rate at which messages arrive on the second channel, and the poller's `maxMessagesPerPoll` property enforces a limit on the throughput. +The poller's trigger determines the rate at which messages arrive at the second channel, and the poller's `maxMessagesPerPoll` property enforces a limit on the throughput. Another valid use for a messaging bridge is to connect two different systems. In such a scenario, Spring Integration's role is limited to making the connection between these systems and managing a poller, if necessary. 
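The barrier.adoc fix above (`return barrier::trigger;`) is worth a note: `barrier::trigger(message)` is not valid Java, because a method reference names the method without invoking it. The following self-contained sketch uses hypothetical stand-in types (not Spring's actual `MessageTriggerAction`/`MessageHandler`) to show that the reference is equivalent to a lambda that invokes `trigger()` per incoming message:

```java
import java.util.function.Consumer;

public class BarrierReferenceDemo {

    // Hypothetical stand-in for Spring's MessageTriggerAction (assumed shape).
    interface TriggerAction {
        void trigger(String message);
    }

    static class BarrierStub implements TriggerAction {
        String released;

        @Override
        public void trigger(String message) {
            this.released = message;
        }
    }

    public static String release(String message) {
        BarrierStub barrier = new BarrierStub();
        // "barrier::trigger(message)" would not compile; the reference names the method only.
        Consumer<String> releaser = barrier::trigger; // equivalent to m -> barrier.trigger(m)
        releaser.accept(message);                     // invocation happens here, per message
        return barrier.released;
    }

    public static void main(String[] args) {
        System.out.println(release("release-1")); // prints "release-1"
    }
}
```

In the corrected bean, the returned `MessageHandler` likewise invokes `trigger()` once for each message arriving on the `release` channel.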
diff --git a/src/reference/asciidoc/chain.adoc b/src/reference/asciidoc/chain.adoc index c0cd2247526..7382ad841b2 100644 --- a/src/reference/asciidoc/chain.adoc +++ b/src/reference/asciidoc/chain.adoc @@ -123,7 +123,7 @@ Its `componentName` is based on its position in the ``. In this case, it is 'somethingChain$child#1'. (The final element of the name is the order within the chain, beginning with '#0'). Note, this transformer is not registered as a bean within the application context, so it does not get a `beanName`. -However its `componentName` has a value that is useful for logging and other purposes. +However, its `componentName` has a value that is useful for logging and other purposes. The `id` attribute for `` elements lets them be eligible for <<./jmx.adoc#jmx-mbean-exporter,JMX export>>, and they are trackable in the <<./message-history.adoc#message-history,message history>>. You can access them from the `BeanFactory` by using the appropriate bean name, as discussed earlier. diff --git a/src/reference/asciidoc/channel.adoc b/src/reference/asciidoc/channel.adoc index c36332bb2ad..67abeb6378f 100644 --- a/src/reference/asciidoc/channel.adoc +++ b/src/reference/asciidoc/channel.adoc @@ -68,7 +68,7 @@ public interface SubscribableChannel extends MessageChannel { [[channel-implementations]] ==== Message Channel Implementations -Spring Integration provides several different message channel implementations. +Spring Integration provides different message channel implementations. The following sections briefly describe each one. [[channel-implementations-publishsubscribechannel]] @@ -146,7 +146,7 @@ In addition to being the simplest point-to-point channel option, one of its most For example, if a handler subscribes to a `DirectChannel`, then sending a `Message` to that channel triggers invocation of that handler's `handleMessage(Message)` method directly in the sender's thread, before the `send()` method invocation can return. 
The key motivation for providing a channel implementation with this behavior is to support transactions that must span across the channel while still benefiting from the abstraction and loose coupling that the channel provides. -If the send call is invoked within the scope of a transaction, the outcome of the handler's invocation (for example, updating a database record) plays a role in determining the ultimate result of that transaction (commit or rollback). +If the `send()` call is invoked within the scope of a transaction, the outcome of the handler's invocation (for example, updating a database record) plays a role in determining the ultimate result of that transaction (commit or rollback). NOTE: Since the `DirectChannel` is the simplest option and does not add any additional overhead that would be required for scheduling and managing the threads of a poller, it is the default channel type within Spring Integration. The general idea is to define the channels for an application, consider which of those need to provide buffering or to throttle input, and modify those to be queue-based `PollableChannels`. @@ -296,10 +296,10 @@ It is a simple interceptor that sends the `Message` to another channel without o It can be very useful for debugging and monitoring. An example is shown in <>. -Because it is rarely necessary to implement all of the interceptor methods, the interface provides no-op methods (methods returning `void` method have no code, the `Message`-returning methods return the `Message` as-is, and the `boolean` method returns `true`). +Because it is rarely necessary to implement all of the interceptor methods, the interface provides no-op methods (those returning `void` have no code, the `Message`-returning methods return the `Message` as-is, and the `boolean` method returns `true`). TIP: The order of invocation for the interceptor methods depends on the type of channel. 
-As described earlier, the queue-based channels are the only ones where the receive method is intercepted in the first place. +As described earlier, the queue-based channels are the only ones where the `receive()` method is intercepted in the first place. Additionally, the relationship between send and receive interception depends on the timing of the separate sender and receiver threads. For example, if a receiver is already blocked while waiting for a message, the order could be as follows: `preSend`, `preReceive`, `postReceive`, `postSend`. However, if a receiver polls after the sender has placed a message on the channel and has already returned, the order would be as follows: `preSend`, `postSend` (some-time-elapses), `preReceive`, `postReceive`. @@ -525,7 +525,7 @@ inChannel.send(new GenericMessage("5")); ---- ==== -Typically this would be a perfectly legal operation. +Typically, this would be a perfectly legal operation. However, since we use Datatype Channel, the result of such operation would generate an exception similar to the following: ==== @@ -655,7 +655,7 @@ The Spring Integration JDBC module also provides a schema Data Definition Langua These schemas are located in the org.springframework.integration.jdbc.store.channel package of that module (`spring-integration-jdbc`). IMPORTANT: One important feature is that, with any transactional persistent store (such as `JdbcChannelMessageStore`), as long as the poller has a transaction configured, a message removed from the store can be permanently removed only if the transaction completes successfully. -Otherwise the transaction rolls back, and the `Message` is not lost. +Otherwise, the transaction rolls back, and the `Message` is not lost. Many other implementations of the message store are available as the growing number of Spring projects related to "`NoSQL`" data stores come to provide underlying support for these stores. 
You can also provide your own implementation of the `MessageGroupStore` interface if you cannot find one that meets your particular needs. @@ -1051,11 +1051,11 @@ The last two break the thread boundary, making communication over such channels That is what is going to make your wire-tap flow synchronous or asynchronous. It is consistent with other components within the framework (such as message publisher) and adds a level of consistency and simplicity by sparing you from worrying in advance (other than writing thread-safe code) about whether a particular piece of code should be implemented as synchronous or asynchronous. The actual wiring of two pieces of code (say, component A and component B) over a message channel is what makes their collaboration synchronous or asynchronous. -You may even want to change from synchronous to asynchronous in the future, and message channel lets you to do it swiftly without ever touching the code. +You may even want to change from synchronous to asynchronous in the future, and the message channel lets you do it swiftly without ever touching the code. One final point regarding the wire tap is that, despite the rationale provided above for not being asynchronous by default, you should keep in mind that it is usually desirable to hand off the message as soon as possible. Therefore, it would be quite common to use an asynchronous channel option as the wire tap's outbound channel. -However we doe not enforce asynchronous behavior by default. +However, we do not enforce asynchronous behavior by default. There are a number of use cases that would break if we did, including that you might not want to break a transactional boundary. Perhaps you use the wire tap pattern for auditing purposes, and you do want the audit messages to be sent within the original transaction. As an example, you might connect the wire tap to a JMS outbound channel adapter. 
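The wire-tap paragraphs above recommend an asynchronous channel as the tap's outbound channel when a quick hand-off is desired. A hedged sketch of that wiring (bean and channel names are assumptions, not from the original text):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.channel.interceptor.WireTap;

@Configuration
public class WireTapConfig {

    // A queue-backed tap channel hands the tapped message off to a poller thread,
    // keeping the main flow unblocked (the asynchronous option discussed above).
    @Bean
    public QueueChannel tapChannel() {
        return new QueueChannel();
    }

    @Bean
    public DirectChannel mainChannel(QueueChannel tapChannel) {
        DirectChannel channel = new DirectChannel();
        channel.addInterceptor(new WireTap(tapChannel)); // sends each message to the tap as well
        return channel;
    }
}
```

For the transactional auditing case also described above, a synchronous tap channel (for example, a `DirectChannel`) keeps the tap within the original transaction instead.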
@@ -1137,8 +1137,8 @@ For example, you can use this technique to configure a test case to verify messa Two special channels are defined within the application context by default: `errorChannel` and `nullChannel`. The 'nullChannel' (an instance of `NullChannel`) acts like `/dev/null`, logging any message sent to it at the `DEBUG` level and returning immediately. -The special treatment is applied for an `org.reactivestreams.Publisher` payload of a sent message: it is subscribed to in this channel immediately, to initiate reactive stream processing, although the data is discarded. -An error thrown from a reactive stream processing (see `Subscriber.onError(Throwable)`) is logged under the warn level for possible investigation. +The special treatment is applied for an `org.reactivestreams.Publisher` payload of a transmitted message: it is subscribed to in this channel immediately, to initiate reactive stream processing, although the data is discarded. +An error thrown from a reactive stream processing (see `Subscriber.onError(Throwable)`) is logged under the `warn` level for possible investigation. If there is need to do anything with such an error, the `<<./handler-advice.adoc#reactive-advice,ReactiveRequestHandlerAdvice>>` with a `Mono.doOnError()` customization can be applied to the message handler producing `Mono` reply into this `nullChannel`. Any time you face channel resolution errors for a reply that you do not care about, you can set the affected component's `output-channel` attribute to 'nullChannel' (the name, 'nullChannel', is reserved within the application context). diff --git a/src/reference/asciidoc/claim-check.adoc b/src/reference/asciidoc/claim-check.adoc index 779e9f94935..3e247c66dce 100644 --- a/src/reference/asciidoc/claim-check.adoc +++ b/src/reference/asciidoc/claim-check.adoc @@ -162,7 +162,7 @@ Optional. Sometimes, a particular message must be claimed only once. As an analogy, consider process of handling airplane luggage. 
-You checking in your luggage on departure and claiming it on arrival. +You check in your luggage on departure and claim it on arrival. Once the luggage has been claimed, it can not be claimed again without first checking it back in. To accommodate such cases, we introduced a `remove-message` boolean attribute on the `claim-check-out` transformer. This attribute is set to `false` by default. diff --git a/src/reference/asciidoc/codec.adoc b/src/reference/asciidoc/codec.adoc index 4c407245c0a..104b4dc15b5 100644 --- a/src/reference/asciidoc/codec.adoc +++ b/src/reference/asciidoc/codec.adoc @@ -94,9 +94,9 @@ For an example, see the https://github.com/spring-projects/spring-integration/bl ====== Implementing KryoSerializable -If you have write access to the domain object source code, you can implement `KryoSerializable` as described https://github.com/EsotericSoftware/kryo#kryoserializable[here]. +If you have write access to the domain object's source code, you can implement `KryoSerializable` as described https://github.com/EsotericSoftware/kryo#kryoserializable[here]. In this case, the class provides the serialization methods itself and no further configuration is required. -However benchmarks have shown this is not quite as efficient as registering a custom serializer explicitly. +However, benchmarks have shown this is not quite as efficient as registering a custom serializer explicitly. The following example shows a custom Kryo serializer: ==== @@ -138,5 +138,5 @@ public class SomeClass { ---- ==== -If you have write access to the domain object, this may be a simpler way to specify a custom serializer. +If you have write access to the domain object, this may be a simpler way to specify a custom serializer. Note that this does not register the class with an ID, which may make the technique unhelpful for certain situations. 
diff --git a/src/reference/asciidoc/configuration.adoc b/src/reference/asciidoc/configuration.adoc index 8cabe625a6a..7785830b0bb 100644 --- a/src/reference/asciidoc/configuration.adoc +++ b/src/reference/asciidoc/configuration.adoc @@ -762,12 +762,12 @@ The input parameter is a message payload. If the parameter type is not compatible with a message payload, an attempt is made to convert it by using a conversion service provided by Spring 3.0. The return value is a newly constructed message that is sent to the next destination. -The followig example shows a single parameter that is a message (or one of its subclasses) with an arbitrary object or primitive return type: +The following example shows a single parameter that is a message (or one of its subclasses) with an arbitrary object or primitive return type: ==== [source,java] ---- -public int doSomething(Message msg); +public int doSomething(Message msg); ---- ==== @@ -822,7 +822,7 @@ public String doSomething(); ==== This message handler method is invoked based on the Message sent to the input channel to which this handler is connected. -However no `Message` data is mapped, thus making the `Message` act as event or trigger to invoke the handler. +However, no `Message` data is mapped, thus making the `Message` act as an event or trigger to invoke the handler. The output is mapped according to the rules <>. The following example shows no parameters and a void return: @@ -892,7 +892,7 @@ The payload is not being mapped to any argument. The following example uses multiple parameters: -Multiple parameters can create a lot of ambiguity with regards to determining the appropriate mappings. +Multiple parameters can create a lot of ambiguity with regard to determining the appropriate mappings. The general advice is to annotate your method parameters with `@Payload`, `@Header`, and `@Headers`. The examples in this section show ambiguous conditions that result in an exception being raised. 
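Following the advice above to annotate method parameters, here is a sketch of an unambiguous multi-parameter signature; the class, method, and header names are illustrative assumptions:

```java
import java.util.Map;

import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Headers;
import org.springframework.messaging.handler.annotation.Payload;

public class OrderHandler {

    // Each parameter is explicitly bound, so no mapping ambiguity remains:
    // the payload, one named header, and the full header map.
    public String process(@Payload String body,
                          @Header("orderId") String orderId,
                          @Headers Map<String, Object> headers) {
        return orderId + ":" + body;
    }
}
```

With these annotations present, the framework no longer has to guess which `Map` is the headers and which argument is the payload.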
@@ -929,9 +929,9 @@ public String foo(Map m, Map f) Although one might argue that one `Map` could be mapped to the message payload and the other one to the message headers, we cannot rely on the order. -TIP: Any method signature with more than one method argument that is not (Map, ) and with unannotated parameters results in an ambiguous condition and triggers an exception. +TIP: Any method signature with more than one method argument that is not (`Map`, ``) and with unannotated parameters results in an ambiguous condition and triggers an exception. -The next set of examples each show mutliple methods that result in ambiguity. +The next set of examples each show multiple methods that result in ambiguity. Message handlers with multiple methods are mapped based on the same rules that are described earlier (in the examples). However, some scenarios might still look confusing. diff --git a/src/reference/asciidoc/content-enrichment.adoc b/src/reference/asciidoc/content-enrichment.adoc index 999e39fd6e8..635fb16704c 100644 --- a/src/reference/asciidoc/content-enrichment.adoc +++ b/src/reference/asciidoc/content-enrichment.adoc @@ -327,8 +327,8 @@ Optional. <9> Maximum amount of time in milliseconds to wait when sending a message to the channel, if the channel might block. For example, a queue channel can block until space is available, if its maximum capacity has been reached. -Internally, the send timeout is set on the `MessagingTemplate` and ultimately applied when invoking the send operation on the `MessageChannel`. -By default, the send timeout is set to '-1', which can cause the send operation on the `MessageChannel`, depending on the implementation, to block indefinitely. +Internally, the `send()` timeout is set on the `MessagingTemplate` and ultimately applied when invoking the send operation on the `MessageChannel`. 
+By default, the `send()` timeout is set to '-1', which can cause the send operation on the `MessageChannel`, depending on the implementation, to block indefinitely. Optional. <10> Boolean value indicating whether any payload that implements `Cloneable` should be cloned prior to sending the message to the request channel for acquiring the enriching data. The cloned version would be used as the target payload for the ultimate reply. diff --git a/src/reference/asciidoc/delayer.adoc b/src/reference/asciidoc/delayer.adoc index 5ec81c22329..389e884ffaf 100644 --- a/src/reference/asciidoc/delayer.adoc +++ b/src/reference/asciidoc/delayer.adoc @@ -81,8 +81,7 @@ For any message that has a delay of `0` (or less), the message is sent immediate NOTE: The XML parser uses a message group ID of `.messageGroupId`. TIP: The delay handler supports expression evaluation results that represent an interval in milliseconds (any `Object` whose `toString()` method produces a value that can be parsed into a `Long`) as well as `java.util.Date` instances representing an absolute time. -In the first case, the milliseconds are counted from the current time (for example -a value of `5000` would delay the message for at least five seconds from the time it is received by the delayer). +In the first case, the milliseconds are counted from the current time (for example, a value of `5000` would delay the message for at least five seconds from the time it is received by the delayer). With a `Date` instance, the message is not released until the time represented by that `Date` object. A value that equates to a non-positive delay or a Date in the past results in no delay. Instead, it is sent directly to the output channel on the original sender's thread. 
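The delayer TIP above can be illustrated with a simplified, self-contained sketch of the interpretation rule — this is an assumption-laden paraphrase, not the framework's actual code: any `Object` whose `toString()` parses as a `Long` is a relative delay in milliseconds, a `java.util.Date` is an absolute release time, and a non-positive result means "send immediately":

```java
import java.util.Date;

public class DelayValueDemo {

    // Simplified sketch of the rule described in the delayer TIP (not framework code):
    // parse toString() as milliseconds, or treat a Date as an absolute release time;
    // clamp non-positive results to zero ("no delay").
    public static long delayFor(Object value, long now) {
        long releaseAt = (value instanceof Date)
                ? ((Date) value).getTime()
                : now + Long.parseLong(value.toString());
        return Math.max(0, releaseAt - now);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(delayFor("5000", now));            // 5000 -- relative interval
        System.out.println(delayFor(new Date(now - 1), now)); // 0 -- a Date in the past: no delay
    }
}
```

A `Date` in the past or a negative interval yields zero, matching the "sent directly to the output channel on the original sender's thread" behavior described above.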
@@ -112,7 +111,7 @@ The second results in something similar to the following: ---- ==== -Consequently, if there is a possibility of the header being omitted and you want to fall back to the default delay, it is generally more efficient (and recommended) to use the indexer syntax instead of dot property accessor syntax, because detecting the null is faster than catching an exception. +Consequently, if there is a possibility of the header being omitted and you want to fall back to the default delay, it is generally more efficient (and recommended) to use the indexer syntax rather than the dot property accessor syntax, because detecting the null is faster than catching an exception. ===== The delayer delegates to an instance of Spring's `TaskScheduler` abstraction. diff --git a/src/reference/asciidoc/dsl.adoc b/src/reference/asciidoc/dsl.adoc index 878e128e5d3..e852e23ecfb 100644 --- a/src/reference/asciidoc/dsl.adoc +++ b/src/reference/asciidoc/dsl.adoc @@ -75,7 +75,7 @@ The following list includes the common DSL method names and the associated EIP e Conceptually, integration processes are constructed by composing these endpoints into one or more message flows. Note that EIP does not formally define the term 'message flow', but it is useful to think of it as a unit of work that uses well known messaging patterns. The DSL provides an `IntegrationFlow` component to define a composition of channels and endpoints between them, but now `IntegrationFlow` plays only the configuration role to populate real beans in the application context and is not used at runtime. -However the bean for `IntegrationFlow` can be autowired as a `Lifecycle` to control `start()` and `stop()` for the whole flow which is delegated to all the Spring Integration components associated with this `IntegrationFlow`. 
+However, the bean for `IntegrationFlow` can be autowired as a `Lifecycle` to control `start()` and `stop()` for the whole flow which is delegated to all the Spring Integration components associated with this `IntegrationFlow`. The following example uses the `IntegrationFlows` factory to define an `IntegrationFlow` bean by using EIP-methods from `IntegrationFlowBuilder`: ==== @@ -145,7 +145,7 @@ Instead, use: ==== The Java DSL can register beans for the object defined in-line in the flow definition, as well as can reuse existing, injected beans. In case of the same bean name defined for in-line object and existing bean definition, a `BeanDefinitionOverrideException` is thrown indicating that such a configuration is wrong. -However when you deal with `prototype` beans, there is no way to detect from the integration flow processor an existing bean definition because every time we call a `prototype` bean from the `BeanFactory` we get a new instance. +However, when you deal with `prototype` beans, there is no way to detect from the integration flow processor an existing bean definition because every time we call a `prototype` bean from the `BeanFactory` we get a new instance. This way a provided instance is used in the `IntegrationFlow` as is without any bean registration and any possible check against existing `prototype` bean definition. However `BeanFactory.initializeBean()` is called for this object if it has an explicit `id` and bean definition for this name is in `prototype` scope. ==== @@ -172,7 +172,7 @@ public MessageChannel priorityChannel() { The same `MessageChannels` builder factory can be used in the `channel()` EIP method from `IntegrationFlowBuilder` to wire endpoints, similar to wiring an `input-channel`/`output-channel` pair in the XML configuration. By default, endpoints are wired with `DirectChannel` instances where the bean name is based on the following pattern: `[IntegrationFlow.beanName].channel#[channelNameIndex]`. 
This rule is also applied for unnamed channels produced by inline `MessageChannels` builder factory usage. -However all `MessageChannels` methods have a variant that is aware of the `channelId` that you can use to set the bean names for `MessageChannel` instances. +However, all `MessageChannels` methods have a variant that is aware of the `channelId` that you can use to set the bean names for `MessageChannel` instances. The `MessageChannel` references and `beanName` can be used as bean-method invocations. The following example shows the possible ways to use the `channel()` EIP method: @@ -329,7 +329,7 @@ public IntegrationFlow clientTcpFlow() { } ---- -That is they are not merged, only the `testAdvice()` bean is used in this case. +They are not merged; only the `testAdvice()` bean is used in this case. [[java-dsl-transformers]] === Transformers @@ -612,7 +612,7 @@ This operator has several overloads for different goals: - `gateway(IntegrationFlow flow)` to send a message to the input channel of the provided `IntegrationFlow`. All of these have a variant with the second `Consumer` argument to configure the target `GatewayMessageHandler` and respective `AbstractEndpoint`. -Also the `IntegrationFlow`-based methods allows calling existing `IntegrationFlow` bean or declare the flow as a sub-flow via an in-place lambda for an `IntegrationFlow` functional interface or have it extracted in a `private` method cleaner code style: +Also, the `IntegrationFlow`-based methods let you call an existing `IntegrationFlow` bean, declare the flow as a sub-flow via an in-place lambda for the `IntegrationFlow` functional interface, or extract it into a `private` method for a cleaner code style: ==== [source,java]
Starting with version 5.3, a `BroadcastCapableChannel`-based `publishSubscribeChannel()` implementation is provided to configure sub-flow subscribers on broker-backed message channels. -For example we now can configure several subscribers as sub-flows on the `Jms.publishSubscribeChannel()`: +For example, we can now configure several subscribers as sub-flows on the `Jms.publishSubscribeChannel()`: ==== [source,java] @@ -953,7 +953,7 @@ With an inline subflow, the input channel is not yet available. [[java-dsl-protocol-adapters]] === Using Protocol Adapters -All of the examples shown so far illustrate how the DSL supports a messaging architecture by using the Spring Integration programming model. +All the examples shown so far illustrate how the DSL supports a messaging architecture by using the Spring Integration programming model. However, we have yet to do any real integration. Doing so requires access to remote resources over HTTP, JMS, AMQP, TCP, JDBC, FTP, SMTP, and so on or access to the local file system. Spring Integration supports all of these and more. @@ -1184,7 +1184,7 @@ public void testTcpGateways() { This is useful when we have multiple configuration options and have to create several instances of similar flows. To do so, we can iterate our options and create and register `IntegrationFlow` instances within a loop. -Another variant is when our source of data is not Spring-based and we must create it on the fly. +Another variant is when our source of data is not Spring-based, so we must create it on the fly. Such a sample is a Reactive Streams event source, as the following example shows: ==== [source,java]
-By default a `GatewayProxyFactoryBean` gets a conventional bean name, such as `[FLOW_BEAN_NAME.gateway]`. +By default, a `GatewayProxyFactoryBean` gets a conventional bean name, such as `[FLOW_BEAN_NAME.gateway]`. You can change that ID by using the `@MessagingGateway.name()` attribute or the overloaded `IntegrationFlows.from(Class serviceInterface, Consumer endpointConfigurer)` factory method. -Also all the attributes from the `@MessagingGateway` annotation on the interface are applied to the target `GatewayProxyFactoryBean`. +Also, all the attributes from the `@MessagingGateway` annotation on the interface are applied to the target `GatewayProxyFactoryBean`. When annotation configuration is not applicable, the `Consumer` variant can be used for providing the appropriate options for the target proxy. This DSL method is available starting with version 5.2. @@ -1409,7 +1409,7 @@ IntegrationFlow compositionMainFlow(IntegrationFlow templateSourceFlow) { ---- ==== -On the other hand, the `IntegrationFlowDefinition` has added a `to(IntegrationFlow)` terminal operator to continue the current flow at the input channel of some other flow: +On the other hand, the `IntegrationFlowDefinition` has added a `to(IntegrationFlow)` terminal operator to continue the current flow at the input channel of some other flow: ==== [source,java] diff --git a/src/reference/asciidoc/endpoint-summary.adoc b/src/reference/asciidoc/endpoint-summary.adoc index b1b0544b125..a23ecc13a73 100644 --- a/src/reference/asciidoc/endpoint-summary.adoc +++ b/src/reference/asciidoc/endpoint-summary.adoc @@ -236,5 +236,5 @@ Each of these works without requiring any source-level dependencies on Spring In The equivalent of an outbound gateway in this context is using a service activator (see <<./service-activator.adoc#service-activator,Service Activator>>) to invoke a method that returns an `Object` of some kind.
Starting with version `5.2.2`, all the inbound gateways can be configured with an `errorOnTimeout` boolean flag to throw a `MessageTimeoutException` when the downstream flow doesn't return a reply during the reply timeout. -The timer is not started until the thread returns control to the gateway, so usually it is only useful when the downstream flow is asynchronous or it stops because of a `null` return from some handler, e.g. <<./filter.adoc#filter,filter>>. +The timer is not started until the thread returns control to the gateway, so usually it is only useful when the downstream flow is asynchronous, or it stops because of a `null` return from some handler, e.g. <<./filter.adoc#filter,filter>>. Such an exception can be handled on the `errorChannel` flow, e.g. producing a compensation reply for the requesting client. diff --git a/src/reference/asciidoc/endpoint.adoc b/src/reference/asciidoc/endpoint.adoc index afe98c3b554..9d6c6f63d30 100644 --- a/src/reference/asciidoc/endpoint.adoc +++ b/src/reference/asciidoc/endpoint.adoc @@ -167,7 +167,7 @@ consumer.setTaskExecutor(taskExecutor); ==== Furthermore, a `PollingConsumer` has a property called `adviceChain`. -This property lets you to specify a `List` of AOP advices for handling additional cross cutting concerns including transactions. +This property lets you specify a `List` of AOP advices for handling additional cross-cutting concerns, including transactions. These advices are applied around the `doPoll()` method. For more in-depth information, see the sections on AOP advice chains and transaction support under <>.
The following example shows such a poller and a transformer that uses it: - ==== [source, java, role="primary"] .Java DSL @@ -635,7 +634,7 @@ By default, this converter provides (in strict order): . https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/converter/GenericMessageConverter.html[`GenericMessageConverter`] See the Javadoc (linked in the preceding list) for more information about their purpose and appropriate `contentType` values for conversion. -The `ConfigurableCompositeMessageConverter` is used because it can be be supplied with any other `MessageConverter` implementations, including or excluding the previously mentioned default converters. +The `ConfigurableCompositeMessageConverter` is used because it can be supplied with any other `MessageConverter` implementations, including or excluding the previously mentioned default converters. It can also be registered as an appropriate bean in the application context, overriding the default converter, as the following example shows: ==== @@ -851,7 +850,7 @@ An example of this is a file inbound channel adapter that is polling a shared di To participate in a leader election and be notified when elected leader, when leadership is revoked, or on failure to acquire the resources to become leader, an application creates a component in the application context called a "`leader initiator`". Normally, a leader initiator is a `SmartLifecycle`, so it starts (optionally) when the context starts and then publishes notifications when leadership changes. -You can also receive failure notifications by setting the `publishFailedEvents` to `true` (starting with version 5.0), for cases when you want take a specific action if a failure occurs. +You can also receive failure notifications by setting the `publishFailedEvents` to `true` (starting with version 5.0), for cases when you want to take a specific action if a failure occurs. 
By convention, you should provide a `Candidate` that receives the callbacks. You can also revoke the leadership through a `Context` object provided by the framework. Your code can also listen for `o.s.i.leader.event.AbstractLeaderEvent` instances (the super class of `OnGrantedEvent` and `OnRevokedEvent`) and respond accordingly (for instance, by using a `SmartLifecycleRoleController`). @@ -890,6 +889,6 @@ public LockRegistryLeaderInitiator leaderInitiator(LockRegistry locks) { If the lock registry is implemented correctly, there is only ever at most one leader. If the lock registry also provides locks that throw exceptions (ideally, `InterruptedException`) when they expire or are broken, the duration of the leaderless periods can be as short as is allowed by the inherent latency in the lock implementation. -By default, the `busyWaitMillis` property adds some additional latency to prevent CPU starvation in the (more usual) case that the locks are imperfect and you only know they expired when you try to obtain one again. +By default, the `busyWaitMillis` property adds some additional latency to prevent CPU starvation in the (more usual) case that the locks are imperfect, and you only know they expired when you try to obtain one again. See <<./zookeeper.adoc#zk-leadership,Zookeeper Leadership Event Handling>> for more information about leadership election and events that use Zookeeper. diff --git a/src/reference/asciidoc/error-handling.adoc b/src/reference/asciidoc/error-handling.adoc index b29fe490f2c..9cffedebd17 100644 --- a/src/reference/asciidoc/error-handling.adoc +++ b/src/reference/asciidoc/error-handling.adoc @@ -8,7 +8,7 @@ Some things become more complicated in a loosely coupled environment, and one ex When sending a message to a channel, the component that ultimately handles that message may or may not be operating within the same thread as the sender. 
If using a simple default `DirectChannel` (when the `` element that has no `` child element and no 'task-executor' attribute), the message handling occurs in the same thread that sends the initial message. -In that case, if an `Exception` is thrown, it can be caught by the sender (or it may propagate past the sender if it is an uncaught `RuntimeException`). +In that case, if an `Exception` is thrown, it can be caught by the sender, or it may propagate past the sender if it is an uncaught `RuntimeException`. This is the same behavior as an exception-throwing operation in a normal Java call stack. A message flow that runs on a caller thread might be invoked through a messaging gateway (see <<./gateway.adoc#gateway,Messaging Gateways>>) or a `MessagingTemplate` (see <<./channel.adoc#channel-template,`MessagingTemplate`>>). diff --git a/src/reference/asciidoc/event.adoc b/src/reference/asciidoc/event.adoc index 4d5ec0a4599..8eef4305f30 100644 --- a/src/reference/asciidoc/event.adoc +++ b/src/reference/asciidoc/event.adoc @@ -7,8 +7,8 @@ For more information about Spring's support for events and listeners, see the ht You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -16,9 +16,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-event:{project-version}" ---- diff --git a/src/reference/asciidoc/feed.adoc b/src/reference/asciidoc/feed.adoc index d88d7adf9bf..a33ff2a6919 100644 --- a/src/reference/asciidoc/feed.adoc +++ b/src/reference/asciidoc/feed.adoc @@ -69,7 +69,7 @@ When an inbound feed adapter is started, it does the first poll and receives a ` That object contains multiple `SyndEntry` objects. 
Each entry is stored in the local entry queue and is released based on the value in the `max-messages-per-poll` attribute, such that each message contains a single entry. If, during retrieval of the entries from the entry queue, the queue has become empty, the adapter attempts to update the feed, thereby populating the queue with more entries (`SyndEntry` instances), if any are available. -Otherwise the next attempt to poll for a feed is determined by the trigger of the poller (every ten seconds in the preceding configuration). +Otherwise, the next attempt to poll for a feed is determined by the trigger of the poller (every ten seconds in the preceding configuration). === Duplicate Entries diff --git a/src/reference/asciidoc/file.adoc b/src/reference/asciidoc/file.adoc index 2b8389bcf64..80cc2a0b97e 100644 --- a/src/reference/asciidoc/file.adoc +++ b/src/reference/asciidoc/file.adoc @@ -669,8 +669,7 @@ To customize the suffix, you can set the `temporary-file-suffix` attribute on bo NOTE: When using the `APPEND` file `mode`, the `temporary-file-suffix` attribute is ignored, since the data is appended to the file directly. -Starting with ,version 4.2.5, the generated file name (as a result of `filename-generator` or `filename-generator-expression` -evaluation) can represent a child path together with the target file name. +Starting with version 4.2.5, the generated file name (as a result of `filename-generator` or `filename-generator-expression` evaluation) can represent a child path together with the target file name. It is used as a second constructor argument for `File(File parent, String child)` as before. However, in the past we did not create (`mkdirs()`) directories for the child path, assuming only the file name. This approach is useful for cases when we need to restore the file system tree to match the source directory -- for example, when unzipping the archive and saving all the files in the target directory in the original order.
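The child-path behavior described above can be sketched with plain `java.io` (this is not Spring Integration code; the directory and file names are made up for illustration):

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;

// Plain java.io sketch of how a generated file name containing a child path
// maps onto File(parent, child); "archive/sub/data.txt" is hypothetical.
public class ChildPathDemo {

    static boolean writeWithChildPath(File targetDir, String generatedName) {
        // The generated name is used as the second constructor argument:
        File file = new File(targetDir, generatedName);
        try {
            // Since version 4.2.5 the framework also creates the intermediate
            // directories of the child path, equivalent to:
            file.getParentFile().mkdirs();
            Files.writeString(file.toPath(), "payload");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return file.exists();
    }

    public static void main(String[] args) throws IOException {
        File targetDir = Files.createTempDirectory("out").toFile();
        System.out.println(writeWithChildPath(targetDir, "archive/sub/data.txt"));
    }
}
```

Without the `mkdirs()` call, the write fails whenever the generated name contains a directory that does not yet exist, which is exactly what changed in 4.2.5.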
@@ -697,7 +696,7 @@ This attribute accepts a SpEL expression that is evaluated for each message bein Thus, you have full access to a message's payload and its headers when you dynamically specify the output file directory. The SpEL expression must resolve to either a `String`, `java.io.File` or `org.springframework.core.io.Resource`. -(The later is evaluated into a `File` anyway.) +(The latter is evaluated into a `File` anyway.) Furthermore, the resulting `String` or `File` must point to a directory. If you do not specify the `directory-expression` attribute, then you must set the `directory` attribute. diff --git a/src/reference/asciidoc/filter.adoc b/src/reference/asciidoc/filter.adoc index a89e99129a0..eb2f9c44295 100644 --- a/src/reference/asciidoc/filter.adoc +++ b/src/reference/asciidoc/filter.adoc @@ -67,7 +67,7 @@ The following example shows how to configure a filter that uses the `method` att ==== If the selector or adapted POJO method returns `false`, a few settings control the handling of the rejected message. -By default (if configured as in the preceding example), rejected messages are silently dropped. +By default (if configured as in the preceding example), rejected messages are silently dropped. If rejection should instead result in an error condition, set the `throw-exception-on-rejection` attribute to `true`, as the following example shows: ==== @@ -91,7 +91,7 @@ If you want rejected messages to be routed to a specific channel, provide that r See also <<./handler-advice.adoc#advising-filters,Advising Filters>>. NOTE: Message filters are commonly used in conjunction with a publish-subscribe channel. -Many filter endpoints may be subscribed to the same channel, and they decide whether or not to pass the message to the next endpoint, which could be any of the supported types (such as a service activator).
+Many filter endpoints may be subscribed to the same channel, and they decide whether to pass the message to the next endpoint, which could be any of the supported types (such as a service activator). This provides a reactive alternative to the more proactive approach of using a message router with a single point-to-point input channel and multiple output channels. We recommend using a `ref` attribute if the custom filter implementation is referenced in other `` definitions. @@ -146,7 +146,7 @@ All of this is demonstrated in the following configuration example, where the ex - @@ -191,7 +191,7 @@ public class PetFilter { It must be specified if this class is to be used as a filter. -All of the configuration options provided by the XML element are also available for the `@Filter` annotation. +All the configuration options provided by the XML element are also available for the `@Filter` annotation. The filter can be either referenced explicitly from XML or, if the `@MessageEndpoint` annotation is defined on the class, detected automatically through classpath scanning. diff --git a/src/reference/asciidoc/ftp.adoc b/src/reference/asciidoc/ftp.adoc index 58f57ff28eb..724398fc458 100644 --- a/src/reference/asciidoc/ftp.adoc +++ b/src/reference/asciidoc/ftp.adoc @@ -9,8 +9,8 @@ FTPS stands for "`FTP over SSL`". You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -18,9 +18,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-ftp:{project-version}" ---- @@ -164,7 +163,7 @@ Currently, the Apache FTPSClient does not support this feature. See https://issues.apache.org/jira/browse/NET-408[NET-408]. 
The following solution, courtesy of https://stackoverflow.com/questions/32398754/how-to-connect-to-ftps-server-with-data-connection-using-same-tls-session[Stack Overflow], uses reflection on the `sun.security.ssl.SSLSessionContextImpl`, so it may not work on other JVMs. -The stack overflow answer was submitted in 2015, and the solution has been tested by the Spring Integration team recently on JDK 1.8.0_112. +The Stack Overflow answer was submitted in 2015, and the solution has been tested by the Spring Integration team on JDK 1.8.0_112. The following example shows how to create an FTPS session: @@ -699,7 +698,7 @@ public class FtpJavaApplication { ---- ==== -Notice that, in this example, the message handler downstream of the transformer has an advice that removes the remote file after processing. +Notice that, in this example, the message handler downstream of the transformer has an `advice` that removes the remote file after processing. [[ftp-rotating-server-advice]] === Inbound Channel Adapters: Polling Multiple Servers and Directories @@ -1199,7 +1198,7 @@ See also <>. [[ftp-put-command]] ==== Using the `put` Command -The `put` commad sends a file to the remote server. +The `put` command sends a file to the remote server. The payload of the message can be a `java.io.File`, a `byte[]`, or a `String`. A `remote-filename-generator` (or expression) is used to name the remote file. Other available attributes include `remote-directory`, `temporary-remote-directory`, and their `*-expression` equivalents: `use-temporary-file-name` and `auto-create-directory`. @@ -1402,8 +1401,7 @@ public class FtpJavaApplication { When you perform operations on multiple files (by using `mget` and `mput`), an exception can occur some time after one or more files have been transferred. In this case (starting with version 4.2), a `PartialSuccessException` is thrown.
-As well as the usual `MessagingException` properties (`failedMessage` and `cause`), this exception has two additional -properties: +As well as the usual `MessagingException` properties (`failedMessage` and `cause`), this exception has two additional properties: * `partialResults`: The successful transfer results. * `derivedInput`: The list of files generated from the request message (for example, local files to transfer for an `mput`). @@ -1426,10 +1424,8 @@ root/ ---- ==== -If the exception occurs on `file3.txt`, the `PartialSuccessException` thrown by the gateway has `derivedInput` -of `file1.txt`, `subdir`, and `zoo.txt` and `partialResults` of `file1.txt`. -Its `cause` is another `PartialSuccessException` with `derivedInput` of `file2.txt` and `file3.txt` and -`partialResults` of `file2.txt`. +If the exception occurs on `file3.txt`, the `PartialSuccessException` thrown by the gateway has `derivedInput` of `file1.txt`, `subdir`, and `zoo.txt` and `partialResults` of `file1.txt`. +Its `cause` is another `PartialSuccessException` with `derivedInput` of `file2.txt` and `file3.txt` and `partialResults` of `file2.txt`. [[ftp-session-caching]] === FTP Session Caching @@ -1465,8 +1461,7 @@ The following example shows how to do so: ---- ==== -The preceding example shows a `CachingSessionFactory` created with the `sessionCacheSize` set to `10` and the -`sessionWaitTimeout` set to one second (its value is in milliseconds). +The preceding example shows a `CachingSessionFactory` created with the `sessionCacheSize` set to `10` and the `sessionWaitTimeout` set to one second (its value is in milliseconds). Starting with Spring Integration 3.0, the `CachingConnectionFactory` provides a `resetCache()` method. When invoked, all idle sessions are immediately closed and in-use sessions are closed when they are returned to the cache. 
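The `sessionCacheSize` and `sessionWaitTimeout` semantics described above can be illustrated with a plain-Java analogy (this is not the actual `CachingSessionFactory` implementation; the pool and session names are made up):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Plain-Java analogy of a bounded session cache with a wait timeout.
public class SessionPoolAnalogy {

    static String borrowThree() {
        int sessionCacheSize = 2;         // at most two pooled sessions
        long sessionWaitTimeoutMs = 100;  // how long a caller waits for a free session
        BlockingQueue<String> pool = new ArrayBlockingQueue<>(sessionCacheSize);
        try {
            pool.put("session-1");
            pool.put("session-2");
            String first = pool.poll(sessionWaitTimeoutMs, TimeUnit.MILLISECONDS);
            String second = pool.poll(sessionWaitTimeoutMs, TimeUnit.MILLISECONDS);
            // The pool is now exhausted, so a third caller times out; the real
            // factory raises an exception here instead of returning null.
            String third = pool.poll(sessionWaitTimeoutMs, TimeUnit.MILLISECONDS);
            return first + "," + second + "," + (third == null ? "timeout" : third);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(borrowThree()); // session-1,session-2,timeout
    }
}
```

Returning a session to the cache corresponds to putting it back on the queue, which is also why `resetCache()` can close in-use sessions only once they come back.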
@@ -1480,17 +1475,17 @@ When true, the session will be tested by sending a NOOP command to ensure it is Starting with Spring Integration 3.0, a new abstraction is provided over the `FtpSession` object. The template provides methods to send, retrieve (as an `InputStream`), remove, and rename files. -In addition an `execute` method is provided allowing the caller to execute multiple operations on the session. +In addition, an `execute` method is provided, allowing the caller to execute multiple operations on the session. In all cases, the template takes care of reliably closing the session. For more information, see the https://docs.spring.io/spring-integration/api/org/springframework/integration/file/remote/RemoteFileTemplate.html[Javadoc for `RemoteFileTemplate`]. There is a subclass for FTP: `FtpRemoteFileTemplate`. -Version 4.1 added added additional methods, including `getClientInstance()`, which provides access to the underlying `FTPClient` and thus gives you access to low-level APIs. +Version 4.1 added additional methods, including `getClientInstance()`, which provides access to the underlying `FTPClient` and thus gives you access to low-level APIs. Not all FTP servers properly implement the `STAT ` command. Some return a positive result for a non-existent path. -The `NLST` command reliably returns the name when the path is a file and it exists. +The `NLST` command reliably returns the name when the path is a file that exists. However, this does not support checking that an empty directory exists since `NLST` always returns an empty list when the path is a directory. Since the template does not know whether the path represents a directory, it has to perform additional checks when the path does not appear to exist (when using `NLST`). This adds overhead, requiring several requests to the server.
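A minimal usage sketch may help here; it assumes `spring-integration-ftp` is on the classpath, a `SessionFactory<FTPFile>` bean is configured elsewhere, and the remote path is hypothetical:

```java
// Sketch only (not runnable standalone): the template wraps session handling,
// and exists() performs the NLST-based check discussed above.
@Bean
public FtpRemoteFileTemplate ftpTemplate(SessionFactory<FTPFile> sessionFactory) {
    return new FtpRemoteFileTemplate(sessionFactory);
}

// Elsewhere in application code:
// boolean present = ftpTemplate.exists("remote/dir/file.txt");
```

Because of the directory caveat above, prefer checking specific files rather than directories with such an existence test.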
@@ -1517,9 +1512,7 @@ See the https://docs.spring.io/spring-integration/api/org/springframework/integr [[ftp-session-callback]] === Using `MessageSessionCallback` -Starting with Spring Integration 4.2, you can use a `MessageSessionCallback` implementation with the -`` (`FtpOutboundGateway` in Java) to perform any operations on the `Session` with -the `requestMessage` context. +Starting with Spring Integration 4.2, you can use a `MessageSessionCallback` implementation with the `` (`FtpOutboundGateway` in Java) to perform any operations on the `Session` with the `requestMessage` context. It can be used for any non-standard or low-level FTP operations and allows access from an integration flow definition and functional interface (Lambda) implementation injection, as the following example shows: ==== @@ -1546,7 +1539,7 @@ When configuring with Java, different constructors are available in the https:// The `ApacheMinaFtplet`, added in version 5.2, listens for certain Apache Mina FTP server events and publishes them as `ApplicationEvent` s which can be received by any `ApplicationListener` bean, `@EventListener` bean method, or <<./event.adoc#appevent-inbound, Event Inbound Channel Adapter>>. -Currently supported events are: +The currently supported events are: * `SessionOpenedEvent` - a client session was opened * `DirectoryCreatedEvent` - a directory was created @@ -1555,7 +1548,7 @@ Currently supported events are: * `PathRemovedEvent` - a file or directory was removed * `SessionClosedEvent` - the client has disconnected -Each of these is a subclass of `ApacheMinaFtpEvent`; you can configure a single listener to receive all of the event types. +Each of these is a subclass of `ApacheMinaFtpEvent`; you can configure a single listener to receive all the event types. The `source` property of each event is a `FtpSession`, from which you can obtain information such as the client address; a convenient `getSession()` method is provided on the abstract event.
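A minimal sketch of a single listener receiving all the event types, assuming standard Spring `@EventListener` support is enabled (the log format is made up):

```java
// Sketch only (not runnable standalone): one bean method receives every
// ApacheMinaFtpEvent subclass, SessionOpenedEvent through SessionClosedEvent.
@Component
public class ApacheMinaFtpEventListener {

    @EventListener
    public void onEvent(ApacheMinaFtpEvent event) {
        // The event source is the client FtpSession, per the documentation above.
        System.out.println("FTP event " + event.getClass().getSimpleName()
                + " for session " + event.getSession());
    }
}
```

Narrowing the method parameter to a specific subclass (for example, `FileWrittenEvent`) restricts the listener to that event type.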
Events other than session open/close have another property `FtpRequest` which has properties such as the command and arguments. @@ -1605,5 +1598,5 @@ Since the `FtpInboundFileSynchronizingMessageSource` doesn't produce messages ag This metadata is retrieved by the `FtpInboundFileSynchronizingMessageSource` when local file is polled. When local file is deleted, it is recommended to remove its metadata entry. The `AbstractInboundFileSynchronizer` provides a `removeRemoteFileMetadata()` callback for this purpose. -In addition there is a `setMetadataStorePrefix()` to be used in the metadata keys. +In addition, there is a `setMetadataStorePrefix()` to be used in the metadata keys. It is recommended to have this prefix be different from the one used in the `MetadataStore`-based `FileListFilter` implementations, when the same `MetadataStore` instance is shared between these components, to avoid entry overriding because both filter and `AbstractInboundFileSynchronizer` use the same local file name for the metadata entry key. diff --git a/src/reference/asciidoc/functions-support.adoc b/src/reference/asciidoc/functions-support.adoc index d71457027c8..de0e0013c75 100644 --- a/src/reference/asciidoc/functions-support.adoc +++ b/src/reference/asciidoc/functions-support.adoc @@ -84,7 +84,7 @@ public Supplier pojoSupplier() { ==== With the Java DSL we just need to use a reference to the function bean in the endpoint definitions. 
-Meanwhile an implementation of the `Supplier` interface can be used as regular `MessageSource` definition: +Meanwhile, an implementation of the `Supplier` interface can be used as a regular `MessageSource` definition: ==== [source, java] @@ -114,7 +114,7 @@ This function support is useful when used together with the https://cloud.spring [[kotlin-functions-support]] ==== Kotlin Lambdas -The Framework also has been improved to support Kotlin lambdas for functions so now you can use a combination of the Kotlin language and Spring Integration flow definitions: +The Framework has also been improved to support Kotlin lambdas for functions, so now you can use a combination of the Kotlin language and Spring Integration flow definitions: ==== [source, java] diff --git a/src/reference/asciidoc/gateway.adoc b/src/reference/asciidoc/gateway.adoc index 03350f03e4e..2a2d26188fb 100644 --- a/src/reference/asciidoc/gateway.adoc +++ b/src/reference/asciidoc/gateway.adoc @@ -60,7 +60,7 @@ See <>. Typically, you need not specify the `default-reply-channel`, since a Gateway auto-creates a temporary, anonymous reply channel, where it listens for the reply. However, some cases may prompt you to define a `default-reply-channel` (or `reply-channel` with adapter gateways, such as HTTP, JMS, and others). -For some background, we briefly discuss some of the inner workings of the gateway. +For some background, we briefly discuss some inner workings of the gateway. A gateway creates a temporary point-to-point reply channel. It is anonymous and is added to the message headers with the name `replyChannel`. When providing an explicit `default-reply-channel` (`reply-channel` with remote adapter gateways), you can point to a publish-subscribe channel, which is so named because you can add more than one subscriber to it.
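The temporary reply-channel mechanics can be illustrated with a plain-Java analogy (this is not Spring code; a map and a queue stand in for the message headers and the anonymous channel):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.SynchronousQueue;

// Plain-Java analogy: the gateway's temporary, anonymous reply channel travels
// with the message as the "replyChannel" header, and the downstream handler
// sends the reply wherever that header points.
public class ReplyChannelAnalogy {

    static String requestReply(String payload) {
        BlockingQueue<String> anonymousReplyChannel = new SynchronousQueue<>();
        Map<String, Object> headers = Map.of("replyChannel", anonymousReplyChannel);
        Thread handler = new Thread(() -> {
            // Downstream handler: looks up the reply channel from the headers.
            @SuppressWarnings("unchecked")
            BlockingQueue<String> replyChannel =
                    (BlockingQueue<String>) headers.get("replyChannel");
            try {
                replyChannel.put("reply-for-" + payload);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        handler.start();
        try {
            // The gateway blocks on its anonymous channel until the reply arrives.
            return anonymousReplyChannel.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(requestReply("quote")); // reply-for-quote
    }
}
```

Pointing the header at a shared publish-subscribe channel instead of a per-request queue is what the explicit `default-reply-channel` configuration enables.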
@@ -131,7 +131,7 @@ If you prefer the XML approach to configuring gateway methods, you can add `meth ==== You can also use XML to provide individual headers for each method invocation. -This could be useful if the headers you want to set are static in nature and you do not want to embed them in the gateway's method signature by using `@Header` annotations. +This could be useful if the headers you want to set are static in nature, and you do not want to embed them in the gateway's method signature by using `@Header` annotations. For example, in the loan broker example, we want to influence how aggregation of the loan quotes is done, based on what type of request was initiated (single quote or all quotes). Determining the type of the request by evaluating which gateway method was invoked, although possible, would violate the separation of concerns paradigm (the method is a Java artifact). However, expressing your intention (meta information) in message headers is natural in a messaging architecture. @@ -433,7 +433,7 @@ public interface RequestReplyExchanger { Before version 5.0, this `exchange` method did not have a `throws` clause and, as a result, the exception was unwrapped. If you use this interface and want to restore the previous unwrap behavior, use a custom `service-interface` instead or access the `cause` of the `MessagingException` yourself. -However, you may want to log the error rather than propagating it or you may want to treat an exception as a valid reply (by mapping it to a message that conforms to some "error message" contract that the caller understands). +However, you may want to log the error rather than propagating it, or you may want to treat an exception as a valid reply (by mapping it to a message that conforms to some "error message" contract that the caller understands). To accomplish this, the gateway provides support for a message channel dedicated to the errors by including support for the `error-channel` attribute. 
In the following example, a 'transformer' creates a reply `Message` from the `Exception`: @@ -487,10 +487,10 @@ Finally, you might want to consider setting downstream flags, such as 'requires- NOTE: If the downstream flow returns an `ErrorMessage`, its `payload` (a `Throwable`) is treated as a regular downstream error. If there is an `error-channel` configured, it is sent to the error flow. -Otherwise the payload is thrown to the caller of the gateway. +Otherwise, the payload is thrown to the caller of the gateway. Similarly, if the error flow on the `error-channel` returns an `ErrorMessage`, its payload is thrown to the caller. The same applies to any message with a `Throwable` payload. -This can be useful in asynchronous situations when when you need to propagate an `Exception` directly to the caller. +This can be useful in asynchronous situations when you need to propagate an `Exception` directly to the caller. To do so, you can either return an `Exception` (as the `reply` from some service) or throw it. Generally, even with an asynchronous flow, the framework takes care of propagating an exception thrown by the downstream flow back to the gateway. The https://github.com/spring-projects/spring-integration-samples/tree/main/intermediate/tcp-client-server-multiplex[TCP Client-Server Multiplex] sample demonstrates both techniques to return the exception to the caller. @@ -539,7 +539,7 @@ When configuring with XML, the timeout attributes can be a long value or a SpEL As a pattern, the messaging gateway offers a nice way to hide messaging-specific code while still exposing the full capabilities of the messaging system. As <>, the `GatewayProxyFactoryBean` provides a convenient way to expose a proxy over a service-interface giving you POJO-based access to a messaging system (based on objects in your own domain, primitives/Strings, or other objects). 
However, when a gateway is exposed through simple POJO methods that return values, it implies that, for each request message (generated when the method is invoked), there must be a reply message (generated when the method has returned).
-Since messaging systems are naturally asynchronous, you may not always be able to guarantee the contract where "`for each request, there will always be be a reply`". Spring Integration 2.0 introduced support for an asynchronous gateway, which offers a convenient way to initiate flows when you may not know if a reply is expected or how long it takes for replies to arrive.
+Since messaging systems are naturally asynchronous, you may not always be able to guarantee the contract where "`for each request, there will always be a reply`". Spring Integration 2.0 introduced support for an asynchronous gateway, which offers a convenient way to initiate flows when you may not know if a reply is expected or how long it takes for replies to arrive.
To handle these types of scenarios, Spring Integration uses `java.util.concurrent.Future` instances to support an asynchronous gateway.

@@ -755,7 +755,7 @@ The following example shows how to create a gateway with Project Reactor:

[source,java]
----
@MessagingGateway
-public static interface TestGateway {
+public interface TestGateway {

@Gateway(requestChannel = "promiseChannel")
Mono<Integer> multiply(Integer value);

@@ -867,7 +867,7 @@ IMPORTANT: You should understand that, by default, `reply-timeout` is unbounded.
Consequently, if you do not explicitly set the `reply-timeout`, your gateway method invocation might hang indefinitely.
So, to make sure you analyze your flow and if there is even a remote possibility of one of these scenarios to occur, you should set the `reply-timeout` attribute to a "'safe'" value.
Even better, you can set the `requires-reply` attribute of the downstream component to 'true' to ensure a timely response, as produced by the throwing of an exception as soon as that downstream component returns null internally.
-However you should also realize that there are some scenarios (see <>) where `reply-timeout` does not help.
+However, you should also realize that there are some scenarios (see <>) where `reply-timeout` does not help.
That means it is also important to analyze your message flow and decide when to use a synchronous gateway rather than an asynchronous gateway.
As <>, the latter case is a matter of defining gateway methods that return `Future` instances.
Then you are guaranteed to receive that return value, and you have more granular control over the results of the invocation.
diff --git a/src/reference/asciidoc/gemfire.adoc b/src/reference/asciidoc/gemfire.adoc
index 0a3557f8697..9847188cdc0 100644
--- a/src/reference/asciidoc/gemfire.adoc
+++ b/src/reference/asciidoc/gemfire.adoc
@@ -1,13 +1,13 @@
[[gemfire]]
-== Pivotal GemFire and Apache Geode Support
+== VMware Tanzu GemFire and Apache Geode Support

-Spring Integration provides support for Pivotal GemFire and Apache Geode.
+Spring Integration provides support for VMware Tanzu GemFire and Apache Geode.
You need to include this dependency into your project:

====
+[source, xml, subs="normal", role="primary"]
.Maven
-[source, xml, subs="normal"]
----

org.springframework.integration

{project-version}

----

-
+[source, groovy, subs="normal", role="secondary"]
.Gradle
-[source, groovy, subs="normal"]
----
compile "org.springframework.integration:spring-integration-gemfire:{project-version}"
----
====

GemFire is a distributed data management platform that provides a key-value data grid along with advanced distributed system features, such as event processing, continuous querying, and remote function execution.
-This guide assumes some familiarity with the commercial https://pivotal.io/pivotal-gemfire[Pivotal GemFire] or Open Source https://geode.apache.org[Apache Geode].
+This guide assumes some familiarity with the commercial https://tanzu.vmware.com/gemfire[VMware Tanzu GemFire] or Open Source https://geode.apache.org[Apache Geode].

Spring Integration provides support for GemFire by implementing inbound adapters for entry and continuous query events, an outbound adapter to write entries to the cache, and message and metadata stores and `GemfireLockRegistry` implementations.
-Spring integration leverages the https://projects.spring.io/spring-data-gemfire[Spring Data for Pivotal GemFire] project, providing a thin wrapper over its components.
+Spring Integration leverages the https://projects.spring.io/spring-data-gemfire[Spring Data for VMware Tanzu GemFire] project, providing a thin wrapper over its components.

Starting with version 5.1, the Spring Integration GemFire module uses the https://github.com/spring-projects/spring-data-geode[Spring Data for Apache Geode] transitive dependency by default.
-To switch to the commercial Pivotal GemFire-based Spring Data for Pivotal GemFire, exclude `spring-data-geode` from dependencies and add `spring-data-gemfire`, as the following Maven snippet shows:
+To switch to the commercial VMware Tanzu GemFire-based Spring Data for VMware Tanzu GemFire, exclude `spring-data-geode` from dependencies and add `spring-data-gemfire`, as the following Maven snippet shows:

====
[source,xml]
@@ -114,7 +113,7 @@ The continuous query acts as an event source that fires whenever its result set

NOTE: GemFire queries are written in OQL and are scoped to the entire cache (not just one region).
Additionally, continuous queries require a remote (that is, running in a separate process or remote host) cache server.
See the https://gemfire82.docs.pivotal.io/docs-gemfire/gemfire_nativeclient/continuous-querying/continuous-querying.html[GemFire documentation] for more information on implementing continuous queries.

The following configuration creates a GemFire client cache (recall that a remote cache server is required for this implementation and its address is configured as a child element of the pool), a client region, and a `ContinuousQueryListenerContainer` that uses Spring Data:
diff --git a/src/reference/asciidoc/graph.adoc b/src/reference/asciidoc/graph.adoc
index dbeb66a7a78..58cfefbd997 100644
--- a/src/reference/asciidoc/graph.adoc
+++ b/src/reference/asciidoc/graph.adoc
@@ -101,8 +101,7 @@ The `name` can be customized on the `IntegrationGraphServer` bean or in the `spr
Other properties are provided by the framework and let you distinguish a similar model from other sources.

The `links` graph element represents connections between nodes from the `nodes` graph element and, therefore, between integration components in the source Spring Integration application.
-For example, from a `MessageChannel` to an `EventDrivenConsumer` with some `MessageHandler`
-or from an `AbstractReplyProducingMessageHandler` to a `MessageChannel`.
+For example, from a `MessageChannel` to an `EventDrivenConsumer` with some `MessageHandler` or from an `AbstractReplyProducingMessageHandler` to a `MessageChannel`.
For convenience and to let you determine a link's purpose, the model includes the `type` attribute.
The possible types are:

@@ -112,7 +111,7 @@ The possible types are:

* `discard`: From `DiscardingMessageHandler` (such as `MessageFilter`) to the `MessageChannel` through an `errorChannel` property.
* `route`: From `AbstractMappingMessageRouter` (such as `HeaderValueRouter`) to the `MessageChannel`.
Similar to `output` but determined at run-time.
May be a configured channel mapping or a dynamically resolved channel.
Routers typically retain only up to 100 dynamic routes for this purpose, but you can modify this value by setting the `dynamicChannelLimit` property.

The information from this element can be used by a visualization tool to render connections between nodes from the `nodes` graph element, where the `from` and `to` numbers represent the value from the `nodeId` property of the linked nodes.

@@ -150,7 +149,7 @@ The `input` and `output` attributes are for the `inputChannel` and `outputChanne
See the next section for more information.

Starting with version 5.1, the `IntegrationGraphServer` accepts a `Function<NamedComponent, Map<String, Object>> additionalPropertiesCallback` for population of additional properties on the `IntegrationNode` for a particular `NamedComponent`.
-For example you can expose the `SmartLifecycle` `autoStartup` and `running` properties into the target graph: +For example, you can expose the `SmartLifecycle` `autoStartup` and `running` properties into the target graph: ==== [source,java] diff --git a/src/reference/asciidoc/groovy.adoc b/src/reference/asciidoc/groovy.adoc index 049ee7ed5ae..62ddb1ce754 100644 --- a/src/reference/asciidoc/groovy.adoc +++ b/src/reference/asciidoc/groovy.adoc @@ -7,8 +7,8 @@ For more information about Groovy, see the Groovy documentation, which you can f You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -16,9 +16,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-groovy:{project-version}" ---- @@ -28,7 +27,7 @@ compile "org.springframework.integration:spring-integration-groovy:{project-vers ==== Groovy Configuration With Spring Integration 2.1, the configuration namespace for the Groovy support is an extension of Spring Integration's scripting support and shares the core configuration and behavior described in detail in the <<./scripting.adoc#scripting,Scripting Support>> section. -Even though Groovy scripts are well supported by generic scripting support, the Groovy support provides the `Groovy` configuration namespace, which is backed by the Spring Framework's `org.springframework.scripting.groovy.GroovyScriptFactory` and related components, offering extended capabilities for using Groovy. 
+Even though Groovy scripts are well-supported by generic scripting support, the Groovy support provides the `Groovy` configuration namespace, which is backed by the Spring Framework's `org.springframework.scripting.groovy.GroovyScriptFactory` and related components, offering extended capabilities for using Groovy. The following listing shows two sample configurations: .Filter @@ -124,8 +123,7 @@ With that, the `filter()` method is transformed and compiled to static Java code dynamic phases of invocation, such as `getProperty()` factories and `CallSite` proxies. Starting with version 4.3, you can configure the Spring Integration Groovy components with the `compile-static` `boolean` option, specifying that `ASTTransformationCustomizer` for `@CompileStatic` should be added to the internal `CompilerConfiguration`. -With that in place, you can omit the method declaration with `@CompileStatic` in our script code and still get compiled -plain Java code. +With that in place, you can omit the method declaration with `@CompileStatic` in our script code and still get compiled plain Java code. In this case, the preceding script can be short but still needs to be a little more verbose than interpreted script, as the following example shows: ==== diff --git a/src/reference/asciidoc/handler-advice.adoc b/src/reference/asciidoc/handler-advice.adoc index 1d528f2ca26..b24e2b377c5 100644 --- a/src/reference/asciidoc/handler-advice.adoc +++ b/src/reference/asciidoc/handler-advice.adoc @@ -415,7 +415,7 @@ NOTE: Starting with version 5.1.3, if channels are configured, but expressions a When an exception is thrown in the scope of the advice, by default, that exception is thrown to the caller after any `failureExpression` is evaluated. If you wish to suppress throwing the exception, set the `trapException` property to `true`. 
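As a hedged sketch of the `trapException` behavior described above, the `ExpressionEvaluatingRequestHandlerAdvice` can be declared as a plain bean (the channel names here are invented for illustration):

====
[source,xml]
----
<bean id="expressionAdvice"
      class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
    <property name="successChannelName" value="successChannel"/>
    <property name="failureChannelName" value="failureChannel"/>
    <!-- evaluate any failureExpression but do not re-throw the exception to the caller -->
    <property name="trapException" value="true"/>
</bean>
----
====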
-The following advice shows how to configure an advice with Java DSL:
+The following example shows how to configure an advice with the Java DSL:

====
[source, java]
@@ -556,7 +556,7 @@ public Message service(Message message) {

Starting with version 5.3, a `ReactiveRequestHandlerAdvice` can be used for request message handlers producing `Mono` replies.
A `BiFunction<Message<?>, Mono<?>, Publisher<?>>` has to be provided for this advice and it is called from the `Mono.transform()` operator on a reply produced by the intercepted `handleRequestMessage()` method implementation.
-Typically such a `Mono` customization is necessary when we would like to control network fluctuations via `timeout()`, `retry()` and similar support operators.
+Typically, such a `Mono` customization is necessary when we would like to control network fluctuations via `timeout()`, `retry()` and similar support operators.
For example, when we perform an HTTP request over the WebFlux client, we could use the following configuration to avoid waiting for a response for more than 5 seconds:

====
@@ -683,8 +683,6 @@ Note that, in that case, the entire downstream flow is within the transaction sc

In the case of a `MessageHandler` that does not return a response, the advice chain order is retained.

-Starting with version 5.3, the `HandleMessageAdviceAdapter` is present to let apply any existing `MethodInterceptor` for the `MessageHandler.handleMessage()` and, therefore, whole sub-flow.
-For example a `RetryOperationsInterceptor` could be applied for the whole sub-flow starting from some endpoint, which is not possible by default because consumer endpoint applies advices only for the `AbstractReplyProducingMessageHandler.RequestHandler.handleRequestMessage()`.
Starting with version 5.3, the `HandleMessageAdviceAdapter` is provided to apply any `MethodInterceptor` for the `MessageHandler.handleMessage()` method and, therefore, the whole sub-flow.
For example, a `RetryOperationsInterceptor` could be applied to the whole sub-flow starting from some endpoint; this is not possible, by default, because the consumer endpoint applies advices only to the `AbstractReplyProducingMessageHandler.RequestHandler.handleRequestMessage()`.
diff --git a/src/reference/asciidoc/http.adoc b/src/reference/asciidoc/http.adoc
index 0506e1fe316..4faa1878f59 100644
--- a/src/reference/asciidoc/http.adoc
+++ b/src/reference/asciidoc/http.adoc
@@ -720,7 +720,7 @@ By default, the URL string is encoded (see https://docs.spring.io/spring/docs/cu
In some scenarios with a non-standard URI (such as the RabbitMQ REST API), it is undesirable to perform the encoding.
The `` and `` provide an `encoding-mode` attribute.
To disable encoding the URL, set this attribute to `NONE` (by default, it is `TEMPLATE_AND_VALUES`).
-If you wish to partially encode some of the URL, use an `expression` within a ``, as the following example shows:
+If you wish to partially encode some part of the URL, use an `expression` within a ``, as the following example shows:

====
[source,xml]
@@ -861,7 +861,7 @@ For example, from the Java™ Platform, Standard Edition 6 API Specification on

[quote]
Some non-standard implementation of this method may ignore the specified timeout.

To see the connect timeout set, please call getConnectTimeout().

If you have specific needs, you should test your timeouts.
Consider using the `HttpComponentsClientHttpRequestFactory`, which, in turn, uses https://hc.apache.org/httpcomponents-client-ga/[Apache HttpComponents HttpClient] rather than relying on implementations provided by a JVM.
@@ -975,7 +975,7 @@ However, if you do need further customization, you can provide additional config
You can provide a comma-separated list of header names, and you can include simple patterns with the '*' character acting as a wildcard.
Providing such values overrides the default behavior.
Basically, it assumes you are in complete control at that point.
Basically, it assumes you are in complete control at that point. -However, if you do want to include all of the standard HTTP headers, you can use the shortcut patterns: `HTTP_REQUEST_HEADERS` and `HTTP_RESPONSE_HEADERS`. +However, if you do want to include all the standard HTTP headers, you can use the shortcut patterns: `HTTP_REQUEST_HEADERS` and `HTTP_RESPONSE_HEADERS`. The following listing shows two examples (the first of which uses a wildcard): ==== diff --git a/src/reference/asciidoc/ip.adoc b/src/reference/asciidoc/ip.adoc index ab6b0988328..8cdef9a750c 100644 --- a/src/reference/asciidoc/ip.adoc +++ b/src/reference/asciidoc/ip.adoc @@ -10,8 +10,8 @@ These are used when two-way communication is needed. You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -19,9 +19,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-ip:{project-version}" ---- @@ -239,7 +238,7 @@ public IntegrationFlow udpIn() { ==== Server Listening Events Starting with version 5.0.2, a `UdpServerListeningEvent` is emitted when an inbound adapter is started and has begun listening. -This is useful when the adapter is configured to listen on port 0, meaning that the operating system chooses the port. +This is useful when the adapter is configured to listen on port `0`, meaning that the operating system chooses the port. It can also be used instead of polling `isListening()`, if you need to wait before starting some other process that will connect to the socket. ==== Advanced Outbound Configuration @@ -675,10 +674,10 @@ The `HelloWorldInterceptor` used in the test case works as follows: The interceptor is first configured with a client connection factory. 
When the first message is sent over an intercepted connection, the interceptor sends 'Hello' over the connection and expects to receive 'world!'.
When that occurs, the negotiation is complete and the original message is sent.
Further messages that use the same connection are sent without any additional negotiation.

When configured with a server connection factory, the interceptor requires the first message to be 'Hello' and, if it is, returns 'world!'.
-Otherwise it throws an exception that causes the connection to be closed.
+Otherwise, it throws an exception that causes the connection to be closed.

All `TcpConnection` methods are intercepted.
Interceptor instances are created for each connection by an interceptor factory.
@@ -748,7 +747,7 @@ Outbound gateways also publish this event when a late reply is received (the sen
The event contains the connection ID as well as an exception in the `cause` property, which contains the failed message.

Starting with version 4.3, a `TcpConnectionServerListeningEvent` is emitted when a server connection factory is started.
-This is useful when the factory is configured to listen on port 0, meaning that the operating system chooses the port.
+This is useful when the factory is configured to listen on port `0`, meaning that the operating system chooses the port.
It can also be used instead of polling `isListening()`, if you need to wait before starting some other process that connects to the socket.

IMPORTANT: To avoid delaying the listening thread from accepting connections, the event is published on a separate thread.
@@ -867,8 +866,8 @@ This topology is supported by using `client-mode="true"` on the inbound gateway.
In this case, the connection factory must be of type `client` and must have `single-use` set to `false`.
Two additional attributes support this mechanism.
-`retry-interval` specifies (in milliseconds) how often the framework tries to reconnect after a connection failure.
-`scheduler` supplies a `TaskScheduler` to schedule the connection attempts and to test that the connection is still active.
+The `retry-interval` attribute specifies (in milliseconds) how often the framework tries to reconnect after a connection failure.
+The `scheduler` attribute supplies a `TaskScheduler` to schedule the connection attempts and to test that the connection is still active.

If the gateway is started, you may force the gateway to establish a connection by sending a `` command: `@adapter_id.retryConnection()` and examine the current state with `@adapter_id.isClientModeConnected()`.

@@ -904,12 +903,11 @@ The following example shows an outbound TCP gateway:

reply-channel="replyChannel"
connection-factory="cfClient"
request-timeout="10000"
- remote-timeout="10000"/>
+ remote-timeout="10000"/>
----
====

-`client-mode` is not currently available with the outbound gateway.
+The `client-mode` attribute is not currently available with the outbound gateway.

Starting with version 5.2, the outbound gateway can be configured with the property `closeStreamAfterSend`.
If the connection factory is configured for `single-use` (a new connection for each request/reply) the gateway will close the output stream; this signals EOF to the server.
@@ -1079,16 +1077,16 @@ You should consider using NIO when handling a large number of connections.

However, the use of NIO has some other ramifications.
A pool of threads (in the task executor) is shared across all the sockets.
Each incoming message is assembled and sent to the configured channel as a separate unit of work on a thread selected from that pool.
Two sequential messages arriving on the same socket might be processed by different threads.
This means that the order in which the messages are sent to the channel is indeterminate.
Strict ordering of the messages arriving on the socket is not maintained.
For some applications, this is not an issue.
For others, it is a problem.
If you require strict ordering, consider setting `using-nio` to `false` and using an asynchronous hand-off.

Alternatively, you can insert a resequencer downstream of the inbound endpoint to return the messages to their proper sequence.
If you set `apply-sequence` to `true` on the connection factory, messages arriving on a TCP connection have `sequenceNumber` and `correlationId` headers set.
The resequencer uses these headers to return the messages to their proper sequence.

IMPORTANT: Starting with version 5.1.4, priority is given to accepting new connections over reading from existing connections.
@@ -1279,8 +1277,8 @@ If the timeout is exceeded, the process is stopped and the socket is closed.

[[tcp-ssl-host-verification]]
==== Host Verification

-Starting with version 5.0.8, you can configure whether or not to enable host verification.
-Starting with version 5.1, it is enabled by default; the mechanism to disable it depends on whether or not you are using NIO.
+Starting with version 5.0.8, you can configure whether to enable host verification.
+Starting with version 5.1, it is enabled by default; the mechanism to disable it depends on whether you are using NIO.

Host verification is used to ensure the server you are connected to matches information in the certificate, even if the certificate is trusted.
@@ -1595,7 +1593,7 @@ Default: `false`.

| Y
| N
| `true`, `false`
-| When using NIO, whether or not the connection uses direct buffers.
+| When using NIO, whether the connection uses direct buffers.
Refer to the `java.nio.ByteBuffer` documentation for more information.
Must be `false` if `using-nio` is `false`.
@@ -1836,7 +1834,7 @@ For multicast udp adapters, the multicast address.

| `acknowledge`
| `true`, `false`
-| Whether or not a UDP adapter requires an acknowledgment from the destination.
+| Whether a UDP adapter requires an acknowledgment from the destination.
When enabled, it requires setting the following four attributes: `ack-host`, `ack-port`, `ack-timeout`, and `min-acks-for-success`.

| `ack-host`
diff --git a/src/reference/asciidoc/jdbc.adoc b/src/reference/asciidoc/jdbc.adoc
index a27a093eba3..0237c2e7dd8 100644
--- a/src/reference/asciidoc/jdbc.adoc
+++ b/src/reference/asciidoc/jdbc.adoc
@@ -7,8 +7,8 @@ Through those adapters, Spring Integration supports not only plain JDBC SQL quer

You need to include this dependency into your project:

====
+[source, xml, subs="normal", role="primary"]
.Maven
-[source, xml, subs="normal"]
----

org.springframework.integration

{project-version}

----

-
+[source, groovy, subs="normal", role="secondary"]
.Gradle
-[source, groovy, subs="normal"]
----
compile "org.springframework.integration:spring-integration-jdbc:{project-version}"
----
====
@@ -200,7 +199,7 @@ By default, the message payload and headers are available as input parameters to

----
====

In the preceding example, messages arriving on the channel labelled `input` have a payload of a map with a key of `something`, so the `[]` operator dereferences that value from the map.
The headers are also accessed as a map.

NOTE: The parameters in the preceding query are bean property expressions on the incoming message (not SpEL expressions).
@@ -224,8 +223,7 @@ The following example uses a `ExpressionEvaluatingSqlParameterSourceFactory` to

[source,xml]
----
>) is a better option.
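An `ExpressionEvaluatingSqlParameterSourceFactory` of the kind mentioned above is typically declared as a bean whose `parameterExpressions` map SpEL expressions onto query parameter names. A hedged sketch (the key and expression are invented for illustration):

====
[source,xml]
----
<bean id="parameterSourceFactory"
      class="org.springframework.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
    <property name="parameterExpressions">
        <map>
            <!-- hypothetical mapping: query parameter "something" is derived from the payload -->
            <entry key="something" value="payload['something']"/>
        </map>
    </property>
</bean>
----
====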
-NOTE: By default, all of the JMS adapters that require a reference to the `ConnectionFactory` automatically look for a bean named `jmsConnectionFactory`. +NOTE: By default, all JMS adapters that require a reference to the `ConnectionFactory` automatically look for a bean named `jmsConnectionFactory`. That is why you do not see a `connection-factory` attribute in many of the examples. However, if your JMS `ConnectionFactory` has a different bean name, you need to provide that attribute. @@ -240,7 +240,7 @@ The following example shows how to set an error channel on a message-driven chan When comparing the preceding example to the generic gateway configuration or the JMS 'inbound-gateway' that we discuss later, the key difference is that we are in a one-way flow, since this is a 'channel-adapter', not a gateway. Therefore, the flow downstream from the 'error-channel' should also be one-way. -For example, it could send to a logging handler or it could connect to a different JMS `` element. +For example, it could send to a logging handler, or it could connect to a different JMS `` element. When consuming from topics, set the `pub-sub-domain` attribute to true. Set `subscription-durable` to `true` for a durable subscription or `subscription-shared` for a shared subscription (which requires a JMS 2.0 broker and has been available since version 4.2). @@ -258,7 +258,7 @@ Previously, if a JMS `` or `` If the container is configured to use transactions, the message is rolled back and redelivered repeatedly. The conversion process occurs before and during message construction so that such errors are not sent to the 'error-channel'. Now such conversion exceptions result in an `ErrorMessage` being sent to the 'error-channel', with the exception as the `payload`. -If you wish the transaction to roll back and you have an 'error-channel' defined, the integration flow on the 'error-channel' must re-throw the exception (or another exception). 
+If you wish the transaction to roll back, and you have an 'error-channel' defined, the integration flow on the 'error-channel' must re-throw the exception (or another exception).
If the error flow does not throw an exception, the transaction is committed and the message is removed.
If no 'error-channel' is defined, the exception is thrown back to the container, as before.
@@ -504,7 +504,7 @@ A `TemporaryQueue` is created for each request and deleted when the request is c

. A `reply-destination*` property is provided and neither a `` nor a `correlation-key` is provided
+
-The `JMSCorrelationID` equal to the outgoing message IS is used as a message selector for the consumer:
+The `JMSCorrelationID` equal to the outgoing message ID is used as a message selector for the consumer:
+
`messageSelector = "JMSCorrelationID = '" + messageId + "'"`
+
@@ -601,7 +601,7 @@ Starting with version 4.3, you can now specify `async="true"` (or `setAsync(true

By default, when a request is sent to the gateway, the requesting thread is suspended until the reply is received.
The flow then continues on that thread.
-If `async` is `true`, the requesting thread is released immediately after the send completes, and the reply is returned (and the flow continues) on the listener container thread.
+If `async` is `true`, the requesting thread is released immediately after the `send()` completes, and the reply is returned (and the flow continues) on the listener container thread.
This can be useful when the gateway is invoked on a poller thread.
The thread is released and is available for other tasks within the framework.

@@ -648,7 +648,7 @@ The following listing shows all the available attributes for an `outbound-gatewa

----
-<1> Reference to a `javax.jms.ConnectionFactory`.
+<1> Reference to a `jakarta.jms.ConnectionFactory`.
The default `jmsConnectionFactory`.
<2> The name of a property that contains correlation data to correlate responses with replies.
If omitted, the gateway expects the responding system to return the value of the outbound `JMSMessageID` header in the `JMSCorrelationID` header. @@ -877,7 +877,7 @@ Use `subscription` to name the subscription. [[jms-selectors]] === Using JMS Message Selectors -With JMS message selectors, you can filter https://docs.oracle.com/javaee/6/api/javax/jms/Message.html[JMS Messages] based on JMS headers as well as JMS properties. +With JMS message selectors, you can filter https://javadoc.io/doc/jakarta.jms/jakarta.jms-api/latest/jakarta/jms/Message.html[JMS Messages] based on JMS headers as well as JMS properties. For example, if you want to listen to messages whose custom JMS header property, `myHeaderProperty`, equals `something`, you can specify the following expression: ==== diff --git a/src/reference/asciidoc/jmx.adoc b/src/reference/asciidoc/jmx.adoc index ffcdde9cd47..9ebe4131a5d 100644 --- a/src/reference/asciidoc/jmx.adoc +++ b/src/reference/asciidoc/jmx.adoc @@ -6,8 +6,8 @@ Spring Integration provides channel Adapters for receiving and publishing JMX No You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -15,9 +15,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-jmx:{project-version}" ---- @@ -129,7 +128,7 @@ Alternatively, you can provide a fallback `default-notification-type` attribute ==== Attribute-polling Channel Adapter The attribute-polling channel adapter is useful when you need to periodically check on some value that is available through an MBean as a managed attribute. -You can configured the poller in the same way as any other polling adapter in Spring Integration (or you can rely on the default poller). 
+You can configure the poller in the same way as any other polling adapter in Spring Integration (or you can rely on the default poller). The `object-name` and the `attribute-name` are required. An MBeanServer reference is also required. However, by default, it automatically checks for a bean named `mbeanServer`, same as the notification-listening channel adapter <>. @@ -155,7 +154,7 @@ By default, the MBeans are mapped to primitives and simple objects, such as `Map Doing so permits simple transformation to (for example) JSON. An MBeanServer reference is also required. However, by default, it automatically checks for a bean named `mbeanServer`, same as the notification-listening channel adapter <>. -The following example shows how to configure an tree-polling channel adapter with XML: +The following example shows how to configure a tree-polling channel adapter with XML: ==== [source,xml] @@ -168,7 +167,7 @@ The following example shows how to configure an tree-polling channel adapter wit ---- ==== -The preceding example includes all of the attributes on the selected MBeans. +The preceding example includes all the attributes on the selected MBeans. You can filter the attributes by providing an `MBeanObjectConverter` that has an appropriate filter configured. You can provide the converter as a reference to a bean definition by using the `converter` attribute, or you can use an inner `` definition. Spring Integration provides a `DefaultMBeanObjectConverter` that can take a `MBeanAttributeFilter` in its constructor argument. @@ -275,9 +274,7 @@ public class ContextConfiguration { ---- ==== -If you need to provide more options or have several `IntegrationMBeanExporter` beans (such as -for different MBean Servers or to avoid conflicts with the standard Spring `MBeanExporter` -- such as through -`@EnableMBeanExport`), you can configure an `IntegrationMBeanExporter` as a generic bean. 
+If you need to provide more options or have several `IntegrationMBeanExporter` beans (such as for different MBean Servers or to avoid conflicts with the standard Spring `MBeanExporter` -- such as through `@EnableMBeanExport`), you can configure an `IntegrationMBeanExporter` as a generic bean. [[jmx-mbean-features]] ===== MBean Object Names @@ -355,7 +352,7 @@ The exporter propagates the `default-domain` to that object to let it generate a If your custom naming strategy is a `MetadataNamingStrategy` (or a subclass of it), the exporter does not propagate the `default-domain`. You must configure it on your strategy bean. -Starting with version 5.1; any bean names (represented by the `name` key in the object name) will be quoted if they contain any characters that are not allowed in a Java identifier (or period `.`). +Starting with version 5.1, any bean names (represented by the `name` key in the object name) will be quoted if they contain any characters that are not allowed in a Java identifier (or period `.`). [[jmx-42-improvements]] ===== JMX Improvements diff --git a/src/reference/asciidoc/jpa.adoc b/src/reference/asciidoc/jpa.adoc index f47c2d75538..f47c40eafc9 100644 --- a/src/reference/asciidoc/jpa.adoc +++ b/src/reference/asciidoc/jpa.adoc @@ -6,8 +6,8 @@ Spring Integration's JPA (Java Persistence API) module provides components for p You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -15,9 +15,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-jpa:{project-version}" ---- @@ -60,12 +59,7 @@ The following sections describe each of these components in more detail. 
[[jpa-supported-persistence-providers]] === Supported Persistence Providers -The Spring Integration JPA support has been tested against the following persistence providers: - -* Hibernate -* EclipseLink - -When using a persistence provider, you should ensure that the provider is compatible with JPA 2.1. +The Spring Integration JPA support has been tested against the Hibernate persistence provider. [[jpa-java-implementation]] === Java Implementation @@ -157,7 +151,7 @@ If you use the `jpa-operations` attribute, you must not provide the JPA entity m `entity-class`:: The fully qualified name of the entity class. -The exact semantics of this attribute vary, depending on whether we are performing a persist or update operation or whether we are retrieving objects from the database. +The exact semantics of this attribute vary, depending on whether we are performing a `persist` or `update` operation or whether we are retrieving objects from the database. + When retrieving data, you can specify the `entity-class` attribute to indicate that you would like to retrieve objects of this type from the database. In that case, you must not define any of the query attributes (`jpa-query`, `native-query`, or `named-query`). @@ -352,14 +346,14 @@ You must ensure that the component operates as part of a transaction. Otherwise, you may encounter an exception, such as: `java.lang.IllegalArgumentException: Removing a detached instance ...`. Optional. <4> A boolean flag that indicates whether the records can be deleted in bulk or must be deleted one record at a time. -By default the value is `false` (that is, the records can be bulk-deleted). +By default, the value is `false` (that is, the records can be bulk-deleted). Optional. <5> The fully qualified name of the entity class to be queried from the database. The adapter automatically builds a JPA Query based on the entity class name. Optional. -<6> An instance of `javax.persistence.EntityManager` used to perform the JPA operations. 
+<6> An instance of `jakarta.persistence.EntityManager` used to perform the JPA operations. Optional. -<7> An instance of `javax.persistence.EntityManagerFactory` used to obtain an instance of `javax.persistence.EntityManager` that performs the JPA operations. +<7> An instance of `jakarta.persistence.EntityManagerFactory` used to obtain an instance of `jakarta.persistence.EntityManager` that performs the JPA operations. Optional. <8> A boolean flag indicating whether the select operation is expected to return a single result or a `List` of results. If this flag is set to `true`, the single entity selected is sent as the payload of the message. @@ -497,7 +491,7 @@ The default value is `MERGE`. These four attributes of the `outbound-channel-adapter` configure it to accept entities over the input channel and process them to `PERSIST`, `MERGE`, or `DELETE` the entities from the underlying data source. -NOTE: As of Spring Integration 3.0, payloads to `PERSIST` or `MERGE` can also be of type `https://docs.oracle.com/javase/7/docs/api/java/lang/Iterable.html[java.lang.Iterable]`. +NOTE: As of Spring Integration 3.0, payloads to `PERSIST` or `MERGE` can also be of type `https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Iterable.html[java.lang.Iterable]`. In that case, each object returned by the `Iterable` is treated as an entity and persisted or merged using the underlying `EntityManager`. Null values returned by the iterator are ignored. @@ -690,9 +684,9 @@ Optional. <3> The fully qualified name of the entity class for the JPA Operation. The `entity-class`, `query`, and `named-query` attributes are mutually exclusive. Optional. -<4> An instance of `javax.persistence.EntityManager` used to perform the JPA operations. +<4> An instance of `jakarta.persistence.EntityManager` used to perform the JPA operations. Optional. 
-<5> An instance of `javax.persistence.EntityManagerFactory` used to obtain an instance of `javax.persistence.EntityManager`, which performs the JPA operations. +<5> An instance of `jakarta.persistence.EntityManagerFactory` used to obtain an instance of `jakarta.persistence.EntityManager`, which performs the JPA operations. Optional. <6> An implementation of `org.springframework.integration.jpa.core.JpaOperations` used to perform the JPA operations. We recommend not providing an implementation of your own but using the default `org.springframework.integration.jpa.core.DefaultJpaOperations` implementation. @@ -1163,7 +1157,7 @@ public class JpaJavaApplication { [IMPORTANT] ==== -When you choose to delete entities upon retrieval and you have retrieved a collection of entities, by default, entities are deleted on a per-entity basis. +When you choose to delete entities upon retrieval, and you have retrieved a collection of entities, by default, entities are deleted on a per-entity basis. This may cause performance issues. Alternatively, you can set attribute `deleteInBatch` to `true`, which performs a batch delete. diff --git a/src/reference/asciidoc/kafka.adoc b/src/reference/asciidoc/kafka.adoc index a806ad89b5f..bd993dfae83 100644 --- a/src/reference/asciidoc/kafka.adoc +++ b/src/reference/asciidoc/kafka.adoc @@ -74,7 +74,7 @@ topic-expression="headers['topic'] != null ? headers['topic'] : 'myTopic'" The adapter requires a `KafkaTemplate`, which, in turn, requires a suitably configured `KafkaProducerFactory`. -If a `send-failure-channel` (`sendFailureChannel`) is provided and a send failure (sync or async) is received, an `ErrorMessage` is sent to the channel. +If a `send-failure-channel` (`sendFailureChannel`) is provided and a `send()` failure (sync or async) is received, an `ErrorMessage` is sent to the channel. The payload is a `KafkaSendFailureException` with `failedMessage`, `record` (the `ProducerRecord`) and `cause` properties. 
You can override the `DefaultErrorMessageStrategy` by setting the `error-message-strategy` property. @@ -215,7 +215,7 @@ The following example shows how to configure the Kafka outbound channel adapter The `KafkaMessageDrivenChannelAdapter` (``) uses a `spring-kafka` `KafkaMessageListenerContainer` or `ConcurrentListenerContainer`. -Also the `mode` attribute is available. +Also, the `mode` attribute is available. It can accept values of `record` or `batch` (default: `record`). For `record` mode, each message payload is converted from a single `ConsumerRecord`. For `batch` mode, the payload is a list of objects that are converted from all the `ConsumerRecord` instances returned by the consumer poll. @@ -439,14 +439,14 @@ public IntegrationFlow flow(ConsumerFactory cf) { === Outbound Gateway The outbound gateway is for request/reply operations. -It differs from most Spring Integration gateways in that the sending thread does not block in the gateway and the reply is processed on the reply listener container thread. +It differs from most Spring Integration gateways in that the sending thread does not block in the gateway, and the reply is processed on the reply listener container thread. If your code invokes the gateway behind a synchronous https://docs.spring.io/spring-integration/reference/html/messaging-endpoints-chapter.html#gateway[Messaging Gateway], the user thread blocks there until the reply is received (or a timeout occurs). IMPORTANT: The gateway does not accept requests until the reply container has been assigned its topics and partitions. It is suggested that you add a `ConsumerRebalanceListener` to the template's reply container properties and wait for the `onPartitionsAssigned` call before sending messages to the gateway. 
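A minimal sketch of the `ConsumerRebalanceListener` suggestion above, assuming `replyContainer` is the reply listener container configured for the gateway's `ReplyingKafkaTemplate` (the latch and timeout are illustrative, not part of the framework):

```java
// Sketch only: gate the first request until the reply container has its
// topics/partitions assigned.
CountDownLatch assigned = new CountDownLatch(1);

ContainerProperties containerProps = replyContainer.getContainerProperties();
containerProps.setConsumerRebalanceListener(new ConsumerRebalanceListener() {

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // no-op for this sketch
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        assigned.countDown(); // reply topic/partitions are now assigned
    }

});

// Elsewhere, before sending the first message to the gateway:
if (!assigned.await(30, TimeUnit.SECONDS)) {
    throw new IllegalStateException("Reply container was not assigned partitions in time");
}
```

The latch simply makes the "wait for `onPartitionsAssigned`" advice concrete; any equivalent synchronization works.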
The `KafkaProducerMessageHandler` `sendTimeoutExpression` default is `delivery.timeout.ms` Kafka producer property `+ 5000` so that the actual Kafka error after a timeout is propagated to the application, instead of a timeout generated by this framework. -This has been changed for consistency because you may get unexpected behavior (Spring may timeout the send, while it is actually, eventually, successful). +This has been changed for consistency because you may get unexpected behavior (Spring may timeout the `send()`, while it is actually, eventually, successful). IMPORTANT: That timeout is 120 seconds by default so you may wish to reduce it to get more timely failures. ==== Java Configuration @@ -803,9 +803,9 @@ public KafkaMessageDrivenChannelAdapter Spring Messaging `Message` objects cannot have `null` payloads. When you use the endpoints for Apache Kafka, `null` payloads (also known as tombstone records) are represented by a payload of type `KafkaNull`. -See See https://docs.spring.io/spring-kafka/docs/current/reference/html/[the Spring for Apache Kafka documentation] for more information. +See https://docs.spring.io/spring-kafka/docs/current/reference/html/[the Spring for Apache Kafka documentation] for more information. -The POJO methods for Spring Integration endpoints can use a true `null` value instead instead of `KafkaNull`. +The POJO methods for Spring Integration endpoints can use a true `null` value instead of `KafkaNull`. To do so, mark the parameter with `@Payload(required = false)`. The following example shows how to do so: @@ -868,7 +868,7 @@ When an integration flow starts with an interface, the proxy that is created has === Performance Considerations for read/process/write Scenarios Many applications consume from a topic, perform some processing and write to another topic. -In most, cases, if the write fails, the application would want to throw an exception so the incoming request can be retried and/or sent to a dead letter topic. 
+In most cases, if the `write` fails, the application would want to throw an exception so the incoming request can be retried and/or sent to a dead letter topic. This functionality is supported by the underlying message listener container, together with a suitably configured error handler. However, in order to support this, we need to block the listener thread until the success (or failure) of the write operation so that any exceptions can be thrown to the container. When consuming single records, this is achieved by setting the `sync` property on the outbound adapter. diff --git a/src/reference/asciidoc/kotlin-dsl.adoc b/src/reference/asciidoc/kotlin-dsl.adoc index 7908b5b0b26..976e568ae92 100644 --- a/src/reference/asciidoc/kotlin-dsl.adoc +++ b/src/reference/asciidoc/kotlin-dsl.adoc @@ -20,7 +20,7 @@ IntegrationFlow { flow -> In this case Kotlin understands that the lambda should be translated into `IntegrationFlow` anonymous instance and target Java DSL processor parses this construction properly into Java objects. -As an alternative to the construction above and for consistency with use-cases explained below, a Kotlin-specif DSL should be used for declaring integration flows in the *builder* pattern style: +As an alternative to the construction above and for consistency with use-cases explained below, a Kotlin-specific DSL should be used for declaring integration flows in the *builder* pattern style: ==== [source, kotlin] diff --git a/src/reference/asciidoc/mail.adoc b/src/reference/asciidoc/mail.adoc index 84d061d311a..24fdab51bda 100644 --- a/src/reference/asciidoc/mail.adoc +++ b/src/reference/asciidoc/mail.adoc @@ -23,7 +23,7 @@ compile "org.springframework.integration:spring-integration-mail:{project-versio ---- ==== -The `javax.mail:javax.mail-api` must be included via vendor-specific implementation. +The `jakarta.mail:jakarta.mail-api` must be included via a vendor-specific implementation. 
[[mail-outbound]] === Mail-sending Channel Adapter @@ -50,7 +50,7 @@ In that case, a `MailMessage` is created with that `String` as the text content. If you work with a message payload type whose `toString()` method returns appropriate mail text content, consider adding Spring Integration's `ObjectToStringTransformer` prior to the outbound mail adapter (see the example in <<./transformer.adoc#transformer-namespace,Configuring a Transformer with XML>> for more detail). You can also configure the outbound `MailMessage` with certain values from `MessageHeaders`. -If available, values are mapped to the outbound mail's properties, such as the recipients (To, Cc, and BCc), the from, the reply-to, and the subject. +If available, values are mapped to the outbound mail's properties, such as the recipients (To, Cc, and BCc), the `from`, the `reply-to`, and the `subject`. The header names are defined by the following constants: ==== @@ -74,7 +74,7 @@ For example, if `MailMessage.to` is set to 'thing1@things.com' and the `MailHead Spring Integration also provides support for inbound email with the `MailReceivingMessageSource`. It delegates to a configured instance of Spring Integration's own `MailReceiver` interface. There are two implementations: `Pop3MailReceiver` and `ImapMailReceiver`. -The easiest way to instantiate either of these is by passing the 'uri' for a mail store to the receiver's constructor, as the following example shows: +The easiest way to instantiate either of these is by passing the 'uri' for a mail store to the receiver's constructor, as the following example shows: ==== [source,java] @@ -110,10 +110,10 @@ With a simple `MimeMessage`, `getContent()` returns the mail body (`something` i Starting with version 2.2, the framework eagerly fetches IMAP messages and exposes them as an internal subclass of `MimeMessage`. This had the undesired side effect of changing the `getContent()` behavior. 
This inconsistency was further exacerbated by the <> enhancement introduced in version 4.3, because, when a header mapper was provided, the payload was rendered by the `IMAPMessage.getContent()` method. -This meant that the IMAP content differed, depending on whether or not a header mapper was provided. +This meant that the IMAP content differed, depending on whether a header mapper was provided. Starting with version 5.0, messages originating from an IMAP source render the content in accordance with `IMAPMessage.getContent()` behavior, regardless of whether a header mapper is provided. -If you do not use a header mapper and you wish to revert to the previous behavior of rendering only the body, set the `simpleContent` boolean property on the mail receiver to `true`. +If you do not use a header mapper, and you wish to revert to the previous behavior of rendering only the body, set the `simpleContent` boolean property on the mail receiver to `true`. This property now controls the rendering regardless of whether a header mapper is used. It now allows body-only rendering when a header mapper is provided. @@ -138,7 +138,7 @@ The `close()` on the `IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE` heade Starting with version 5.4, it is possible now to return a `MimeMessage` as is without any conversion or eager content loading. This functionality is enabled with this combination of options: no `headerMapper` provided, the `simpleContent` property is `false` and the `autoCloseFolder` property is `false`. The `MimeMessage` is present as the payload of the Spring message produced. -In this case, the only header populated is the above mentioned `IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE` for the folder which must be closed when processing of the `MimeMessage` is complete. 
+In this case, the only header populated is the above-mentioned `IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE` for the folder which must be closed when processing of the `MimeMessage` is complete. [[mail-mapping]] === Inbound Mail Message Mapping @@ -168,7 +168,7 @@ Email contents are usually rendered by a `DataHandler` within the `MimeMessage`. For a `text/*` email, the payload is a `String` and the `contentType` header is the same as `mail_contentType`. -For a messages with embedded `javax.mail.Part` instances, the `DataHandler` usually renders a `Part` object. +For messages with embedded `jakarta.mail.Part` instances, the `DataHandler` usually renders a `Part` object. These objects are not `Serializable` and are not suitable for serialization with alternative technologies such as `Kryo`. For this reason, by default, when mapping is enabled, such payloads are rendered as a raw `byte[]` containing the `Part` data. Examples of `Part` are `Message` and `Multipart`. @@ -176,7 +176,7 @@ The `contentType` header is `application/octet-stream` in this case. To change this behavior and receive a `Multipart` object payload, set `embeddedPartsAsBytes` to `false` on `MailReceiver`. For content types that are unknown to the `DataHandler`, the contents are rendered as a `byte[]` with a `contentType` header of `application/octet-stream`. -When you do not provide a header mapper, the message payload is the `MimeMessage` presented by `javax.mail`. 
The framework provides a `MailToStringTransformer` that you can use to convert the message by using a strategy to convert the mail contents to a `String`: ==== [source,java] @@ -216,7 +216,7 @@ If you wish to perform some other transformation on the message, consider subcla Starting with version 5.4, when no `headerMapper` is provided, `autoCloseFolder` is `false` and `simpleContent` is `false`, the `MimeMessage` is returned as-is in the payload of the Spring message produced. This way, the content of the `MimeMessage` is loaded on demand when referenced, later in the flow. -All of the mentioned above transformations are still valid. +All the above-mentioned transformations are still valid. [[mail-namespace]] === Mail Namespace Support @@ -259,7 +259,7 @@ Alternatively, you can provide the host, username, and password, as the followin ==== Starting with version 5.1.3, the `host`, `username` ane `mail-sender` can be omitted, if `java-mail-properties` is provided. -However the `host` and `username` has to be configured with appropriate Java mail properties, e.g. for SMTP: +However, the `host` and `username` have to be configured with appropriate Java mail properties, e.g. for SMTP: ==== [source] @@ -375,7 +375,7 @@ public interface SearchTermStrategy { ---- ==== -The following example relies `TestSearchTermStrategy` rather than the default `SearchTermStrategy`: +The following example relies on `TestSearchTermStrategy` rather than the default `SearchTermStrategy`: ==== [source,xml] @@ -406,10 +406,10 @@ If not specified, the previous behavior is retained (peek is `true`). When using an IMAP `idle` channel adapter, connections to the server may be lost (for example, through network failure) and, since the JavaMail documentation explicitly states that the actual IMAP API is experimental, it is important to understand the differences in the API and how to deal with them when configuring IMAP `idle` adapters. 
Currently, Spring Integration mail adapters were tested with JavaMail 1.4.1 and JavaMail 1.4.3. -Depending on which one is used, you must pay special attention to some of the JavaMail properties that need to be set with regard to auto-reconnect. +Depending on which one is used, you must pay special attention to some JavaMail properties that need to be set with regard to auto-reconnect. NOTE: The following behavior was observed with Gmail but should provide you with some tips on how to solve re-connect issue with other providers. -However feedback is always welcome. +However, feedback is always welcome. Again, the following notes are based on Gmail. With JavaMail 1.4.1, if you set the `mail.imaps.timeout` property to a relatively short period of time (approximately 5 min in our testing), `IMAPFolder.idle()` throws `FolderClosedException` after this timeout. @@ -419,7 +419,7 @@ However, if the connection was lost for a long period of time (over 10 min), `IM Consequently, the only way to make re-connecting work with JavaMail 1.4.1 is to set the `mail.imaps.timeout` property explicitly to some value, but it also means that such value should be relatively short (under 10 min) and the connection should be re-established relatively quickly. Again, it may be different with providers other than Gmail. With JavaMail 1.4.3 introduced significant improvements to the API, ensuring that there is always a condition that forces the `IMAPFolder.idle()` method to return `StoreClosedException` or `FolderClosedException` or to simply return, thus letting you proceed with auto-reconnecting. -Currently auto-reconnecting runs infinitely making attempts to reconnect every ten seconds. +Currently, auto-reconnecting runs infinitely making attempts to reconnect every ten seconds. IMPORTANT: In both configurations, `channel` and `should-delete-messages` are required attributes. You should understand why `should-delete-messages` is required. 
@@ -447,7 +447,7 @@ RFC 2177 recommends an interval no larger than 29 minutes. [IMPORTANT] ===== -You should understand that that these actions (marking messages read and deleting messages) are performed after the messages are received but before they are processed. +You should understand that these actions (marking messages read and deleting messages) are performed after the messages are received but before they are processed. This can cause messages to be lost. You may wish to consider using transaction synchronization instead. @@ -491,7 +491,7 @@ Spring Integration 2.0.4 introduced the `mail-filter-expression` attribute on `i This attribute lets you provide an expression that is a combination of SpEL and a regular expression. For example if you would like to read only emails that contain 'Spring Integration' in the subject line, you would configure the `mail-filter-expression` attribute like as follows: `mail-filter-expression="subject matches '(?i).*Spring Integration.*"`. -Since `javax.mail.internet.MimeMessage` is the root context of the SpEL evaluation context, you can filter on any value available through `MimeMessage`, including the actual body of the message. +Since `jakarta.mail.internet.MimeMessage` is the root context of the SpEL evaluation context, you can filter on any value available through `MimeMessage`, including the actual body of the message. This one is particularly important, since reading the body of the message typically results in such messages being marked as `SEEN` by default. However, since we now set the `PEEK` flag of every incoming message to 'true', only messages that were explicitly marked as `SEEN` are marked as read. @@ -529,7 +529,7 @@ You can enable transaction synchronization by adding a `` elemen Even if there is no 'real' transaction involved, you can still enable this feature by using a `PseudoTransactionManager` with the `` element. 
For more information, see <<./transactions.adoc#transaction-synchronization,Transaction Synchronization>>. -Because of the many different mail servers and specifically the limitations that some have, at this time we provide only a strategy for these transaction synchronizations. +Because of the different mail servers and specifically the limitations that some have, at this time we provide only a strategy for these transaction synchronizations. You can send the messages to some other Spring Integration components or invoke a custom bean to perform some action. For example, to move an IMAP message to a different folder after the transaction commits, you might use something similar to the following: @@ -593,7 +593,7 @@ IMPORTANT: For the message to be still available for manipulation after the tran [[mail-java-dsl-configuration]] === Configuring channel adapters with the Java DSL -To configure mail mail component in Java DSL, the framework provides a `o.s.i.mail.dsl.Mail` factory, which can be used like this: +To configure a mail component in the Java DSL, the framework provides an `o.s.i.mail.dsl.Mail` factory, which can be used like this: ==== [source, java] diff --git a/src/reference/asciidoc/message-history.adoc b/src/reference/asciidoc/message-history.adoc index 7b7c9cd8cfd..950ec63c552 100644 --- a/src/reference/asciidoc/message-history.adoc +++ b/src/reference/asciidoc/message-history.adoc @@ -27,7 +27,7 @@ To enable message history, you need only define the `message-history` element (o ---- ==== -Now every named component (component that has an 'id' defined) is tracked. +Now every named component (that has an 'id' defined) is tracked. The framework sets the 'history' header in your message. Its value a `List`. @@ -88,7 +88,7 @@ assertEquals("sampleChain", chainHistory.get("name")); ---- ==== -You might not want to track all of the components. +You might not want to track all the components. 
To limit the history to certain components based on their names, you can provide the `tracked-components` attribute and specify a comma-delimited list of component names and patterns that match the components you want to track. The following example shows how to do so: diff --git a/src/reference/asciidoc/message-publishing.adoc b/src/reference/asciidoc/message-publishing.adoc index b96bf73cc2b..3e4096d4ef6 100644 --- a/src/reference/asciidoc/message-publishing.adoc +++ b/src/reference/asciidoc/message-publishing.adoc @@ -271,7 +271,7 @@ This means that the entire message flow has to wait until the publisher's flow c However, developers often want the complete opposite: to use this message-publishing feature to initiate asynchronous flows. For example, you might host a service (HTTP, WS, and so on) which receives a remote request. You may want to send this request internally into a process that might take a while. -However you may also want to reply to the user right away. +However, you may also want to reply to the user right away. So, instead of sending inbound requests for processing to the output channel (the conventional way), you can use 'output-channel' or a 'replyChannel' header to send a simple acknowledgment-like reply back to the caller while using the message-publisher feature to initiate a complex flow. The service in the following example receives a complex payload (which needs to be sent further for processing), but it also needs to reply to the caller with a simple acknowledgment: diff --git a/src/reference/asciidoc/message-store.adoc b/src/reference/asciidoc/message-store.adoc index d4bcace9ef3..79faaa3b7f3 100644 --- a/src/reference/asciidoc/message-store.adoc +++ b/src/reference/asciidoc/message-store.adoc @@ -92,7 +92,7 @@ When used internally by the aggregator, this property was set to `false` to impr It is now `false` by default. 
Users accessing the group store outside of components such as aggregators now get a direct reference to the group being used by the aggregator instead of a copy. -Manipulation of the group outside of the aggregator may cause unpredictable results. +Manipulation of the group outside the aggregator may cause unpredictable results. For this reason, you should either not perform such manipulation or set the `copyOnGet` property to `true`. ===== @@ -103,7 +103,7 @@ For this reason, you should either not perform such manipulation or set the `cop Starting with version 4.3, some `MessageGroupStore` implementations can be injected with a custom `MessageGroupFactory` strategy to create and customize the `MessageGroup` instances used by the `MessageGroupStore`. This defaults to a `SimpleMessageGroupFactory`, which produces `SimpleMessageGroup` instances based on the `GroupType.HASH_SET` (`LinkedHashSet`) internal collection. Other possible options are `SYNCHRONISED_SET` and `BLOCKING_QUEUE`, where the last one can be used to reinstate the previous `SimpleMessageGroup` behavior. -Also the `PERSISTENT` option is available. +Also, the `PERSISTENT` option is available. See the next section for more information. Starting with version 5.0.1, the `LIST` option is also available for when the order and uniqueness of messages in the group does not matter. @@ -143,7 +143,7 @@ ms % Task name ---- ==== -However starting with version 5.5, all the persistent `MessageGroupStore` implementations provide a `streamMessagesForGroup(Object groupId)` contract based on the target database streaming API. +However, starting with version 5.5, all the persistent `MessageGroupStore` implementations provide a `streamMessagesForGroup(Object groupId)` contract based on the target database streaming API. This improves resources utilization when groups are very big in the store. 
Internally in the framework this new API is used in the <<./delayer.adoc#delayer,Delayer>> (for example) when it reschedules persisted messages on startup. A returned `Stream>` must be closed in the end of processing, e.g. via auto-close by the `try-with-resources`. diff --git a/src/reference/asciidoc/metrics.adoc b/src/reference/asciidoc/metrics.adoc index 19c07023ce7..a5dcfc63d8f 100644 --- a/src/reference/asciidoc/metrics.adoc +++ b/src/reference/asciidoc/metrics.adoc @@ -4,11 +4,6 @@ This section describes how to capture metrics for Spring Integration. In recent versions, we have relied more on Micrometer (see https://micrometer.io), and we plan to use Micrometer even more in future releases. -[[legacy-metrics]] -==== Legacy Metrics - -Legacy metrics were removed in Version 5.4; see Micrometer Integration below. - ==== Disabling Logging in High Volume Environments You can control debug logging in the main message flow. @@ -122,7 +117,6 @@ and ===== Disabling Meters -With the <> (which have now been removed), you could specify which integration components would collect metrics. By default, all meters are registered when first used. Now, with Micrometer, you can add `MeterFilter` s to the `MeterRegistry` to prevent some or all from being registered. You can filter out (deny) meters by any of the properties provided, `name`, `tag`, etc. diff --git a/src/reference/asciidoc/mongodb.adoc b/src/reference/asciidoc/mongodb.adoc index 9ba0de3f8af..34298cfae34 100644 --- a/src/reference/asciidoc/mongodb.adoc +++ b/src/reference/asciidoc/mongodb.adoc @@ -5,7 +5,6 @@ Version 2.1 introduced support for https://www.mongodb.org/[MongoDB]: a "`high-p You need to include this dependency into your project: - ==== [source, xml, subs="normal", role="primary"] .Maven @@ -74,7 +73,7 @@ To begin interacting with MongoDB, you first need to connect to it. 
Spring Integration builds on the support provided by another Spring project, https://projects.spring.io/spring-data-mongodb/[Spring Data MongoDB].
It provides factory classes called `MongoDatabaseFactory` and `ReactiveMongoDatabaseFactory`, which simplify integration with the MongoDB Client API.
-TIP: Spring Data provides provides the blocking MongoDB driver by default but you may opt-in for reactive usage by including the above dependency.
+TIP: Spring Data provides the blocking MongoDB driver by default, but you may opt in for reactive usage by including the above dependency.
==== Using `MongoDatabaseFactory`
@@ -348,7 +347,7 @@ If the result of an expression is null or void, no message is generated.
For more information about transaction synchronization, see <<./transactions.adoc#transaction-synchronization,Transaction Synchronization>>.
Starting with version 5.5, the `MongoDbMessageSource` can be configured with an `updateExpression`, which must evaluate to a `String` with the MongoDb `update` syntax or to an `org.springframework.data.mongodb.core.query.Update` instance.
-It can be used as an alternative to abov described post-processing procedure and it modifies those entities that were fetched from the collection, so they won't be pulled from the collection again on the next polling cycle (assuming the update changes some value used in the query).
+It can be used as an alternative to the post-processing procedure described above, and it modifies those entities that were fetched from the collection, so they won't be pulled from the collection again on the next polling cycle (assuming the update changes some value used in the query).
It is still recommended to use transactions to achieve execution isolation and data consistency when several instances of the `MongoDbMessageSource` for the same collection are used in the cluster.
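A minimal polling configuration with an `updateExpression` might look like the following sketch (the bean name, the `processed` flag, the collection name, and the query are assumptions for illustration, not framework defaults):

====
[source, java]
----
@Bean
public MessageSource<Object> mongoDbSource(MongoDatabaseFactory mongoDbFactory) {
    MongoDbMessageSource source =
            new MongoDbMessageSource(mongoDbFactory, new LiteralExpression("{'processed' : false}"));
    source.setCollectionNameExpression(new LiteralExpression("myCollection"));
    // Mark fetched documents so the next poll does not return them again.
    source.setUpdateExpression(new LiteralExpression("{ $set: {'processed' : true} }"));
    return source;
}
----
====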
[[mongodb-change-stream-channel-adapter]]
@@ -404,7 +403,7 @@ As the preceding configuration shows, you can configure a MongoDB outbound chann
* `mongodb-factory`: Reference to an instance of `o.s.data.mongodb.MongoDbFactory`.
* `mongo-template`: Reference to an instance of `o.s.data.mongodb.core.MongoTemplate`.
NOTE: you cannot have both mongo-template and mongodb-factory set.
-* Other attributes that are common across all other inbound adapters (such as 'channel').
+* Other attributes that are common across all inbound adapters (such as 'channel').
The preceding example is relatively simple and static, since it has a literal value for the `collection-name`.
Sometimes, you may need to change this value at runtime, based on some condition.
@@ -594,7 +593,7 @@ The real operation is going to be performed on-demand from the reactive stream c
The `ReactiveMongoDbMessageSource` is an `AbstractMessageSource` implementation based on the provided `ReactiveMongoDatabaseFactory` or `ReactiveMongoOperations` and MongoDb query (or expression), calls the `find()` or `findOne()` operation according to an `expectSingleResult` option with an expected `entityClass` type to convert a query result.
A query execution and result evaluation is performed on demand when the `Publisher` (`Flux` or `Mono` according to the `expectSingleResult` option) in the payload of the produced message is subscribed.
The framework can subscribe to such a payload automatically (essentially `flatMap`) when splitter and `FluxMessageChannel` are used downstream.
-Otherwise it is target application responsibility to subscribe into a polled publishers in downstream endpoints.
+Otherwise, it is the target application's responsibility to subscribe to the polled publishers in downstream endpoints.
With Java DSL such a channel adapter could be configured like:
diff --git a/src/reference/asciidoc/mqtt.adoc b/src/reference/asciidoc/mqtt.adoc index 77c5749b080..012e4f4e53e 100644 --- a/src/reference/asciidoc/mqtt.adoc +++ b/src/reference/asciidoc/mqtt.adoc @@ -6,8 +6,8 @@ Spring Integration provides inbound and outbound channel adapters to support the
You need to include this dependency into your project:
====
+[source, xml, subs="normal", role="primary"]
.Maven
-[source, xml, subs="normal"]
----
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-mqtt</artifactId>
    <version>{project-version}</version>
</dependency>
----
-
+[source, groovy, subs="normal", role="secondary"]
.Gradle
-[source, groovy, subs="normal"]
----
compile "org.springframework.integration:spring-integration-mqtt:{project-version}"
----
@@ -93,7 +92,7 @@ By default, the default `DefaultPahoMessageConverter` produces a message with a
* `mqtt_qos`: The quality of service
You can configure the `DefaultPahoMessageConverter` to return the raw `byte[]` in the payload by declaring it as a `<bean/>` and setting the `payloadAsBytes` property to `true`.
<6> The client factory.
-<7> The send timeout.
+<7> The `send()` timeout.
It applies only if the channel might block (such as a bounded `QueueChannel` that is currently full).
<8> The error channel.
Downstream exceptions are sent to this channel, if supplied, in an `ErrorMessage`.
@@ -306,7 +305,7 @@ The default is `false` (the send blocks until delivery is confirmed).
<12> When `async` and `async-events` are both `true`, an `MqttMessageSentEvent` is emitted (See <>).
It contains the message, the topic, the `messageId` generated by the client library, the `clientId`, and the `clientInstance` (incremented each time the client is connected).
When the delivery is confirmed by the client library, an `MqttMessageDeliveredEvent` is emitted.
-It contains the `messageId`, the `clientId`, and the `clientInstance`, enabling delivery to be correlated with the send.
+It contains the `messageId`, the `clientId`, and the `clientInstance`, enabling delivery to be correlated with the `send()`.
Any `ApplicationListener` or an event inbound channel adapter can receive these events.
Note that it is possible for the `MqttMessageDeliveredEvent` to be received before the `MqttMessageSentEvent`.
The default is `false`.
@@ -428,7 +427,7 @@ Starting with version 5.5.5, the `spring-integration-mqtt` module provides chann
The `org.eclipse.paho:org.eclipse.paho.mqttv5.client` is an `optional` dependency, so it has to be included explicitly in the target project.
Since the MQTT v5 protocol supports extra arbitrary properties in an MQTT message, the `MqttHeaderMapper` implementation has been introduced to map to/from headers on publish and receive operations.
-By default (via the `*` pattern) it maps all the received `PUBLISH` frame properties (including user properties).
+By default (via the `*` pattern), it maps all the received `PUBLISH` frame properties (including user properties).
On the outbound side it maps this subset of headers for `PUBLISH` frame: `contentType`, `mqtt_messageExpiryInterval`, `mqtt_responseTopic`, `mqtt_correlationData`.
The outbound channel adapter for the MQTT v5 protocol is present as an `Mqttv5PahoMessageHandler`.
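A sketch of an MQTT v5 outbound endpoint with a narrowed header mapper might look as follows (the broker URL, client id, topic, and the custom user property name are assumptions for illustration):

====
[source, java]
----
@Bean
@ServiceActivator(inputChannel = "mqttOutboundChannel")
public MessageHandler mqttv5Outbound() {
    Mqttv5PahoMessageHandler handler =
            new Mqttv5PahoMessageHandler("tcp://localhost:1883", "mqttv5Publisher");
    MqttHeaderMapper headerMapper = new MqttHeaderMapper();
    // Map only the content type and one custom user property onto the PUBLISH frame,
    // instead of the default '*' pattern.
    headerMapper.setOutboundHeaderNames("contentType", "myUserProperty");
    handler.setHeaderMapper(headerMapper);
    handler.setDefaultTopic("siTopic");
    return handler;
}
----
====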
-That API is based upon well defined strategy interfaces and non-invasive, delegating adapters. +That API is based upon well-defined strategy interfaces and non-invasive, delegating adapters. Spring Integration's design is inspired by the recognition of a strong affinity between common patterns within Spring and the well known patterns described in https://www.enterpriseintegrationpatterns.com/[_Enterprise Integration Patterns_], by Gregor Hohpe and Bobby Woolf (Addison Wesley, 2004). Developers who have read that book should be immediately comfortable with the Spring Integration concepts and terminology. @@ -128,7 +128,7 @@ This is similar to the role of a controller in the MVC paradigm. Just as a controller handles HTTP requests, the message endpoint handles messages. Just as controllers are mapped to URL patterns, message endpoints are mapped to message channels. The goal is the same in both cases: isolate application code from the infrastructure. -These concepts and all of the patterns that follow are discussed at length in the https://www.enterpriseintegrationpatterns.com/[_Enterprise Integration Patterns_] book. +These concepts and all the patterns that follow are discussed at length in the https://www.enterpriseintegrationpatterns.com/[_Enterprise Integration Patterns_] book. Here, we provide only a high-level description of the main endpoint types supported by Spring Integration and the roles associated with those types. The chapters that follow elaborate and provide sample code as well as configuration examples. @@ -339,7 +339,7 @@ There is one special case where a third bean is created: For architectural reaso This wrapper supports request handler advice handling and emits the normal 'produced no reply' debug log messages. Its bean name is the handler bean name plus `.wrapper` (when there is an `@EndpointId` -- otherwise, it is the normal generated handler name). 
-Similarly <<./polling-consumer.adoc#pollable-message-source, Pollable Message Sources>> create two beans, a `SourcePollingChannelAdapter` (SPCA) and a `MessageSource`. +Similarly, <<./polling-consumer.adoc#pollable-message-source, Pollable Message Sources>> create two beans, a `SourcePollingChannelAdapter` (SPCA) and a `MessageSource`. Consider the following XML configuration: @@ -539,7 +539,7 @@ You can add a property for `${spring.boot.version}` or use an explicit version. [[programming-tips]] === Programming Tips and Tricks -This section documents some of the ways to get the most from Spring Integration. +This section documents some ways to get the most from Spring Integration. ==== XML Schemas @@ -556,8 +556,8 @@ Each of these online schemas has a warning similar to the following: ==== This schema is for the 1.0 version of Spring Integration Core. We cannot update it to the current schema because that will break any applications using 1.0.3 or lower. -For subsequent versions, the unversioned schema is resolved from the classpath and obtained from the jar. -Please refer to github: +For subsequent versions, the "unversioned" schema is resolved from the classpath and obtained from the jar. +Please refer to GitHub: https://github.com/spring-projects/spring-integration/tree/main/spring-integration-core/src/main/resources/org/springframework/integration/config ==== diff --git a/src/reference/asciidoc/polling-consumer.adoc b/src/reference/asciidoc/polling-consumer.adoc index e11942bd88f..6f3d67685ca 100644 --- a/src/reference/asciidoc/polling-consumer.adoc +++ b/src/reference/asciidoc/polling-consumer.adoc @@ -151,7 +151,7 @@ What if we wish to take some action depending on the result of the `receive` par Version 5.3 introduced the `ReceiveMessageAdvice` interface. (The `AbstractMessageSourceAdvice` has been deprecated in favor of `default` methods in the `MessageSourceMutator`.) 
Any `Advice` objects in the `advice-chain` that implement this interface are applied only to the receive operation - `MessageSource.receive()` and `PollableChannel.receive(timeout)`. -Therefore they can be applied only for the `SourcePollingChannelAdapter` or `PollingConsumer`. +Therefore, they can be applied only for the `SourcePollingChannelAdapter` or `PollingConsumer`. Such classes implement the following methods: * `beforeReceive(Object source)` @@ -167,8 +167,8 @@ You can even return a different message .Thread safety [IMPORTANT] ==== -If an advice mutates the, you should not configure the poller with a `TaskExecutor`. -If an advice mutates the source, such mutations are not thread safe and could cause unexpected results, especially with high frequency pollers. +If an `Advice` mutates the source, you should not configure the poller with a `TaskExecutor`. +If an `Advice` mutates the source, such mutations are not thread safe and could cause unexpected results, especially with high frequency pollers. If you need to process poll results concurrently, consider using a downstream `ExecutorChannel` instead of adding an executor to the poller. 
====
diff --git a/src/reference/asciidoc/r2dbc.adoc b/src/reference/asciidoc/r2dbc.adoc index 64f32d1a1ed..c2e6fa7aa3a 100644 --- a/src/reference/asciidoc/r2dbc.adoc +++ b/src/reference/asciidoc/r2dbc.adoc @@ -6,8 +6,8 @@ Spring Integration provides channel adapters for receiving and sending messages
You need to include this dependency into your project:
====
+[source, xml, subs="normal", role="primary"]
.Maven
-[source, xml, subs="normal"]
----
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-r2dbc</artifactId>
    <version>{project-version}</version>
</dependency>
----
-
+[source, groovy, subs="normal", role="secondary"]
.Gradle
-[source, groovy, subs="normal"]
----
compile "org.springframework.integration:spring-integration-r2dbc:{project-version}"
----
@@ -29,7 +28,7 @@ compile "org.springframework.integration:spring-integration-r2dbc:{project-versi
The `R2dbcMessageSource` is a pollable `MessageSource` implementation based on the `R2dbcEntityOperations` and produces messages with a `Flux` or `Mono` as a payload for data fetched from the database according to an `expectSingleResult` option.
The query to `SELECT` can be statically provided or based on a SpEL expression which is evaluated on every `receive()` call.
The `R2dbcMessageSource.SelectCreator` is present as a root object for the evaluation context to allow the use of a `StatementMapper.SelectSpec` fluent API.
-By default this channel adapter maps records from the select into a `LinkedCaseInsensitiveMap` instances.
+By default, this channel adapter maps records from the select into `LinkedCaseInsensitiveMap` instances.
It can be customized by providing a `payloadType` option, which is used underneath by the `EntityRowMapper` based on the `this.r2dbcEntityOperations.getConverter()`.
The `updateSql` is optional and used to mark read records in the database for skipping from the subsequent polls.
The `UPDATE` operation can be supplied with a `BiFunction` to bind values into an `UPDATE` based on records in the `SELECT` result.
diff --git a/src/reference/asciidoc/reactive-streams.adoc b/src/reference/asciidoc/reactive-streams.adoc index d8de1accc5a..2825436bdbd 100644 --- a/src/reference/asciidoc/reactive-streams.adoc +++ b/src/reference/asciidoc/reactive-streams.adoc @@ -11,7 +11,7 @@ Spring Integration enables lightweight messaging within Spring-based application
Spring Integration’s primary goal is to provide a simple model for building enterprise integration solutions while maintaining the separation of concerns that is essential for producing maintainable, testable code.
This goal is achieved in the target application using first class citizens like `message`, `channel` and `endpoint`, which allow us to build an integration flow (pipeline), where (in most cases) one endpoint produces messages into a channel to be consumed by another endpoint.
This way we distinguish an integration interaction model from the target business logic.
-The crucial part here is a channel in between: the flow behavior depends from its implementation leaving endpoints untouched.
+The crucial part here is a channel in between: the flow behavior depends on its implementation, leaving endpoints untouched.
On the other hand, the Reactive Streams is a standard for asynchronous stream processing with non-blocking back pressure.
The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary – like passing elements on to another thread or thread-pool – while ensuring that the receiving side is not forced to buffer arbitrary amounts of data.
@@ -101,7 +101,7 @@ It is a combination of a provided `MessageSource` and event-driven production in
Internally, it wraps a `MessageSource` into the repeatedly resubscribed `Mono` producing a `Flux<Message<?>>` to be subscribed in the `subscribeToPublisher(Publisher<? extends Message<?>>)` mentioned above.
The subscription for this `Mono` is done using `Schedulers.boundedElastic()` to avoid possible blocking in the target `MessageSource`.
When the message source returns `null` (no data to pull), the `Mono` is turned into a `repeatWhenEmpty()` state with a `delay` for a subsequent re-subscription based on a `IntegrationReactiveUtils.DELAY_WHEN_EMPTY_KEY` `Duration` entry from the subscriber context.
-By default it is 1 second.
+By default, it is 1 second.
If the `MessageSource` produces messages with `IntegrationMessageHeaderAccessor.ACKNOWLEDGMENT_CALLBACK` information in the headers, it is acknowledged (if necessary) in the `doOnSuccess()` of the original `Mono` and rejected in the `doOnError()` if the downstream flow throws a `MessagingException` with the failed message to reject.
This `ReactiveMessageSourceProducer` could be used for any use-case when a polling channel adapter's features should be turned into a reactive, on-demand solution for any existing `MessageSource` implementation.
@@ -131,7 +131,7 @@ Starting with version 5.5.6, a `toReactivePublisher(boolean autoStartOnSubscribe
Typically, the subscription and consumption from the reactive publisher happens in the later runtime phase, not during reactive stream composition, or even `ApplicationContext` startup.
To avoid boilerplate code for lifecycle management of the `IntegrationFlow` at the `Publisher<Message<?>>` subscription point and for better end-user experience, this new operator with the `autoStartOnSubscribe` flag has been introduced.
It marks (if `true`) the `IntegrationFlow` and its components for `autoStartup = false`, so an `ApplicationContext` won't initiate production and consumption of messages in the flow automatically.
-Instead the `start()` for the `IntegrationFlow` is initiated from the internal `Flux.doOnSubscribe()`.
+Instead, the `start()` for the `IntegrationFlow` is initiated from the internal `Flux.doOnSubscribe()`.
Independently of the `autoStartOnSubscribe` value, the flow is stopped from a `Flux.doOnCancel()` and `Flux.doOnTerminate()` - it does not make sense to produce messages if there is nothing to consume them.
For the exact opposite use-case, when `IntegrationFlow` should call a reactive stream and continue after completion, a `fluxTransform()` operator is provided in the `IntegrationFlowDefinition`.
@@ -147,7 +147,7 @@ Starting with version 5.3, the `ReactiveMessageHandler` is supported natively in
This type of message handler is designed for reactive clients which return a reactive type for on-demand subscription for low-level operation execution and do not provide any reply data to continue a reactive stream composition.
When a `ReactiveMessageHandler` is used in the imperative integration flow, the `handleMessage()` result is subscribed to immediately after return, because there is no reactive streams composition in such a flow to honor back-pressure.
In this case, the framework wraps this `ReactiveMessageHandler` into a `ReactiveMessageHandlerAdapter` - a plain implementation of `MessageHandler`.
-However when a `ReactiveStreamsConsumer` is involved in the flow (e.g. when channel to consume is a `FluxMessageChannel`), such a `ReactiveMessageHandler` is composed to the whole reactive stream with a `flatMap()` Reactor operator to honor back-pressure during consumption.
+However, when a `ReactiveStreamsConsumer` is involved in the flow (e.g. when the channel to consume is a `FluxMessageChannel`), such a `ReactiveMessageHandler` is composed into the whole reactive stream with a `flatMap()` Reactor operator to honor back-pressure during consumption.
One of the out-of-the-box `ReactiveMessageHandler` implementations is a `ReactiveMongoDbStoringMessageHandler` for the Outbound Channel Adapter.
See <<./mongodb.adoc#mongodb-reactive-channel-adapters,MongoDB Reactive Channel Adapters>> for more information.
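The `toReactivePublisher(autoStartOnSubscribe)` lifecycle behavior described above could be sketched as follows (the polling source and the transform are arbitrary placeholders for illustration):

====
[source, java]
----
@Bean
public Publisher<Message<String>> reactiveFlow() {
    return IntegrationFlows
            .from(() -> new GenericMessage<>("test"),
                    e -> e.poller(p -> p.fixedDelay(100)))
            .<String, String>transform(String::toUpperCase)
            // With 'true', the flow is marked autoStartup = false and is started
            // only from doOnSubscribe(); it is stopped again on cancel/terminate.
            .toReactivePublisher(true);
}
----
====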
@@ -165,7 +165,10 @@ This is not always available by the nature (or with the current implementation)
This limitation can be handled using thread pools and queues or `FluxMessageChannel` (see above) before and after integration endpoints when there is no reactive implementation.
An example for a reactive **event-driven** inbound channel adapter:
-```java
+
+====
+[source, java]
+----
public class CustomReactiveMessageProducer extends MessageProducerSupport {
private final CustomReactiveSource customReactiveSource;
@@ -188,11 +191,14 @@ public class CustomReactiveMessageProducer extends MessageProducerSupport {
subscribeToPublisher(messageFlux);
}
}
-```
+----
+====
Usage would look like:
-```java
+====
+[source, java]
+----
public class MainFlow {
@Autowired
private CustomReactiveMessageProducer customReactiveMessageProducer;
@@ -204,10 +210,14 @@ public class MainFlow {
.get();
}
}
-```
+----
+====
+
Or in a declarative way:
-```java
+====
+[source, java]
+----
public class MainFlow {
@Bean
public IntegrationFlow buildFlow() {
@@ -216,14 +226,19 @@ public class MainFlow {
.get();
}
}
-```
+----
+====
+
Or even without a channel adapter, we can always use the Java DSL in the following way:
-```java
+
+====
+[source, java]
+----
public class MainFlow {
@Bean
public IntegrationFlow buildFlow() {
Flux<Message<?>> myFlux = this.customReactiveSource
-        .map(event - >
+        .map(event ->
MessageBuilder
.withPayload(event.getBody())
.setHeader(MyReactiveHeaders.SOURCE_NAME, event.getSourceName())
@@ -233,14 +248,18 @@ public class MainFlow {
.get();
}
}
-```
+----
+====
A reactive outbound channel adapter implementation is about the initiation (or continuation) of a reactive stream to interact with an external system according to the provided reactive API for the target protocol.
An inbound payload could be a reactive type per se or as an event of the whole integration flow which is a part of the reactive stream on top.
A returned reactive type can be subscribed immediately if we are in a one-way, fire-and-forget scenario, or it is propagated downstream (request-reply scenarios) for further integration flow or an explicit subscription in the target business logic, but still downstream preserving reactive streams semantics.
An example for a reactive outbound channel adapter:
-```java
+
+====
+[source, java]
+----
public class CustomReactiveMessageHandler extends AbstractReactiveMessageHandler {
private final CustomEntityOperations customEntityOperations;
@@ -280,11 +299,16 @@ public class CustomReactiveMessageHandler extends AbstractReactiveMessageHandler
UPDATE,
}
}
-```
+----
+====
+
+We will be able to use both of the channel adapters:
-We will be able to use both of the channel adatpers:
-```java
+====
+[source, java]
+----
public class MainFlow {
+
@Autowired
private CustomReactiveMessageProducer customReactiveMessageProducer;
@@ -299,11 +323,11 @@ public class MainFlow {
.get();
}
}
-```
-
+----
+====
-Currently Spring Integration provides channel adapter (or gateway) implementations for <<./webflux.adoc#webflux,WebFlux>>, <<./rsocket.adoc#rsocket,RSocket>>, <<./mongodb.adoc#mongodb,MongoDb>> and <<./r2dbc.adoc#r2dbc,R2DBC>>.
+Currently, Spring Integration provides channel adapter (or gateway) implementations for <<./webflux.adoc#webflux,WebFlux>>, <<./rsocket.adoc#rsocket,RSocket>>, <<./mongodb.adoc#mongodb,MongoDb>>, <<./r2dbc.adoc#r2dbc,R2DBC>>, <<./zeromq.adoc#zeromq,ZeroMQ>>.
The <<./redis.adoc#redis-stream-outbound,Redis Stream Channel Adapters>> are also reactive and use `ReactiveStreamOperations` from Spring Data.
-Also an https://github.com/spring-projects/spring-integration-extensions/tree/main/spring-integration-cassandra[Apache Cassandra Extension] provides a `MessageHandler` implementation for the Cassandra reactive driver.
+Also, an https://github.com/spring-projects/spring-integration-extensions/tree/main/spring-integration-cassandra[Apache Cassandra Extension] provides a `MessageHandler` implementation for the Cassandra reactive driver. More reactive channel adapters are coming, for example for Apache Kafka in <<./kafka.adoc#kafka,Kafka>> based on the `ReactiveKafkaProducerTemplate` and `ReactiveKafkaConsumerTemplate` from https://spring.io/projects/spring-kafka[Spring for Apache Kafka] etc. For many other non-reactive channel adapters thread pools are recommended to avoid blocking during reactive stream processing. diff --git a/src/reference/asciidoc/redis.adoc b/src/reference/asciidoc/redis.adoc index b0a008f99ba..c9fe26091f2 100644 --- a/src/reference/asciidoc/redis.adoc +++ b/src/reference/asciidoc/redis.adoc @@ -34,7 +34,7 @@ To download, install, and run Redis, see the https://redis.io/download[Redis doc To begin interacting with Redis, you first need to connect to it. Spring Integration uses support provided by another Spring project, https://github.com/SpringSource/spring-data-redis[Spring Data Redis], which provides typical Spring constructs: `ConnectionFactory` and `Template`. Those abstractions simplify integration with several Redis client Java APIs. -Currently Spring Data Redis supports https://github.com/xetorthio/jedis[Jedis] and https://lettuce.io/[Lettuce]. +Currently, Spring Data Redis supports https://github.com/xetorthio/jedis[Jedis] and https://lettuce.io/[Lettuce]. ==== Using `RedisConnectionFactory` @@ -117,7 +117,7 @@ As with JMS and AMQP, Spring Integration provides message channels and adapters ==== Redis Publish/Subscribe channel Similarly to JMS, there are cases where both the producer and consumer are intended to be part of the same application, running within the same process. -You can accomplished this by using a pair of inbound and outbound channel adapters. +You can accomplish this by using a pair of inbound and outbound channel adapters. 
However, as with Spring Integration's JMS support, there is a simpler way to address this use case. You can create a publish-subscribe channel, as the following example shows: @@ -263,7 +263,7 @@ By default, the underlying `MessagePublishingErrorHandler` uses the default `err <8> The `RedisSerializer` bean reference. It can be an empty string, which means 'no serializer'. In this case, the raw `byte[]` from the inbound Redis message is sent to the `channel` as the `Message` payload. -By default it is a `JdkSerializationRedisSerializer`. +By default, it is a `JdkSerializationRedisSerializer`. <9> The timeout in milliseconds for 'pop' operation to wait for a Redis message from the queue. The default is 1 second. <10> The time in milliseconds for which the listener task should sleep after exceptions on the 'pop' operation, before restarting the listener task. @@ -281,7 +281,7 @@ Since version 4.3. ==== IMPORTANT: The `task-executor` has to be configured with more than one thread for processing; otherwise there is a possible deadlock when the `RedisQueueMessageDrivenEndpoint` tries to restart the listener task after an error. -The `errorChannel` can be used to process those errors, to avoid restarts, but it preferable to not expose your application to the possible deadlock situation. +The `errorChannel` can be used to process those errors, to avoid restarts, but it is preferable to not expose your application to the possible deadlock situation. See Spring Framework https://docs.spring.io/spring/docs/current/spring-framework-reference/integration.html#scheduling-task-executor-types[Reference Manual] for possible `TaskExecutor` implementations. [[redis-queue-outbound-channel-adapter]] @@ -515,7 +515,7 @@ Sometimes, you may need to change the value of the key at runtime based on some To do so, use `key-expression` instead, where the provided expression can be any valid SpEL expression. 
Also, you may wish to perform some post-processing on the successfully processed data that was read from the Redis collection. -For example, you may want to move or remove the value after its been processed. +For example, you may want to move or remove the value after it has been processed. You can do so by using the transaction synchronization feature that was added with Spring Integration 2.2. The following example uses `key-expression` and transaction synchronization: @@ -647,7 +647,7 @@ Mutually exclusive with the `arguments-strategy` attribute. If you provide neither attribute, the `payload` is used as the command arguments. The argument expressions can evaluate to 'null' to support a variable number of arguments. <10> A `boolean` flag to specify whether the evaluated Redis command string is made available as the `#cmd` variable in the expression evaluation context in the `o.s.i.redis.outbound.ExpressionArgumentsStrategy` when `argument-expressions` is configured. -Otherwise this attribute is ignored. +Otherwise, this attribute is ignored. <11> Reference to an instance of `o.s.i.redis.outbound.ArgumentsStrategy`. It is mutually exclusive with `argument-expressions` attribute. If you provide neither attribute, the `payload` is used as the command arguments. @@ -755,7 +755,7 @@ It is mutually exclusive with 'redis-template' attribute. <8> The `RedisSerializer` bean reference. It can be an empty string, which means "`no serializer`". In this case, the raw `byte[]` from the inbound Redis message is sent to the `channel` as the `Message` payload. -It default to a `JdkSerializationRedisSerializer`. +It defaults to a `JdkSerializationRedisSerializer`. (Note that, in releases before version 4.3, it was a `StringRedisSerializer` by default. To restore that behavior, provide a reference to a `StringRedisSerializer`). <9> The timeout (in milliseconds) to wait until the receive message is fetched. 
@@ -766,7 +766,7 @@ If this attribute is set to `true`, the `serializer` cannot be an empty string, ====
IMPORTANT: The `task-executor` has to be configured with more than one thread for processing; otherwise there is a possible deadlock when the `RedisQueueMessageDrivenEndpoint` tries to restart the listener task after an error.
-The `errorChannel` can be used to process those errors, to avoid restarts, but it preferable to not expose your application to the possible deadlock situation.
+The `errorChannel` can be used to process those errors, to avoid restarts, but it is preferable to not expose your application to the possible deadlock situation.
See Spring Framework https://docs.spring.io/spring/docs/current/spring-framework-reference/integration.html#scheduling-task-executor-types[Reference Manual] for possible `TaskExecutor` implementations.
[[redis-stream-outbound]]
@@ -850,14 +850,14 @@ Reads message as `my-consumer` from group `my-group`.
<9> Define the offset to read message.
It defaults to `ReadOffset.latest()`.
<10> If 'true', the channel adapter extracts the payload value from the `Record`.
-Otherwise the whole `Record` is used as a payload.
+Otherwise, the whole `Record` is used as a payload.
It defaults to `true`.
====
If the `autoAck` is set to `false`, the `Record` in the Redis Stream is not acknowledged automatically by the Redis driver; instead, an `IntegrationMessageHeaderAccessor.ACKNOWLEDGMENT_CALLBACK` header is added into the message to produce, with a `SimpleAcknowledgment` instance as a value.
It is the target integration flow's responsibility to call its `acknowledge()` callback whenever the business logic is done for the message based on such a record.
Similar logic is required even when an exception happens during deserialization and `errorChannel` is configured.
-So, target error handler must decided to ack or nack such a failed message.
+So, the target error handler must decide whether to ack or nack such a failed message.
Alongside with `IntegrationMessageHeaderAccessor.ACKNOWLEDGMENT_CALLBACK`, the `ReactiveRedisStreamMessageProducer` also populates these headers into the message to produce: `RedisHeaders.STREAM_KEY`, `RedisHeaders.STREAM_MESSAGE_ID`, `RedisHeaders.CONSUMER_GROUP` and `RedisHeaders.CONSUMER`. Starting with version 5.5, you can configure `StreamReceiver.StreamReceiverOptionsBuilder` options explicitly on the `ReactiveRedisStreamMessageProducer`, including the newly introduced `onErrorResume` function, which is required if the Redis Stream consumer should continue polling when deserialization errors occur. diff --git a/src/reference/asciidoc/resource.adoc b/src/reference/asciidoc/resource.adoc index 1356d26cd43..8a5aed2b584 100644 --- a/src/reference/asciidoc/resource.adoc +++ b/src/reference/asciidoc/resource.adoc @@ -46,7 +46,7 @@ However, you can provide a reference to an instance of your own implementation o You may have a use case where you need to further filter the collection of resources resolved by the `ResourcePatternResolver`. For example, you may want to prevent resources that were already resolved from appearing in a collection of resolved resources ever again. -On the other hand, your resources might be updated rather often and you _do_ want them to be picked up again. +On the other hand, your resources might be updated rather often, and you _do_ want them to be picked up again. In other words, both defining an additional filter and disabling filtering altogether are valid use cases. 
You can provide your own implementation of the `org.springframework.integration.util.CollectionFilter` strategy interface, as the following example shows: diff --git a/src/reference/asciidoc/resources.adoc b/src/reference/asciidoc/resources.adoc index 9d2cef922c0..81c357c00ca 100644 --- a/src/reference/asciidoc/resources.adoc +++ b/src/reference/asciidoc/resources.adoc @@ -1,5 +1,5 @@ [[resources]] == Additional Resources -The definitive source of information about Spring Integration is the https://projects.spring.io/spring-integration/[Spring Integration Home] at https://spring.io[https://spring.io]. +The definitive source of information about Spring Integration is the https://spring.io/projects/spring-integration[Spring Integration Home] at https://spring.io[https://spring.io]. That site serves as a hub of information and is the best place to find up-to-date announcements about the project as well as links to articles, blogs, and new sample applications. diff --git a/src/reference/asciidoc/router.adoc b/src/reference/asciidoc/router.adoc index f36b042bbfe..74c734356aa 100644 --- a/src/reference/asciidoc/router.adoc +++ b/src/reference/asciidoc/router.adoc @@ -28,12 +28,12 @@ Spring Integration provides the following routers: Router implementations share many configuration parameters. However, certain differences exist between routers. -Furthermore, the availability of configuration parameters depends on whether routers are used inside or outside of a chain. +Furthermore, the availability of configuration parameters depends on whether routers are used inside or outside a chain. In order to provide a quick overview, all available attributes are listed in the two following tables . 
-The following table shows the configuration parameters available for a router outside of a chain: +The following table shows the configuration parameters available for a router outside a chain: -.Routers Outside of a Chain +.Routers Outside a Chain [cols="2,1,1,1,1,1,1", options="header"] |=== | Attribute @@ -174,9 +174,9 @@ a| image::images/tickmark.png[] |=== -The following table shows the configuration parameters available for a router inside of a chain: +The following table shows the configuration parameters available for a router inside a chain: -.Routers Inside of a Chain +.Routers Inside a Chain [cols="2,1,1,1,1,1,1", options="header"] |=== | Attribute @@ -336,7 +336,7 @@ If you do desire to drop messages silently, you can set `default-output-channel= This section describes the parameters common to all router parameters (the parameters with all their boxes ticked in the two tables shown earlier in this chapter). [[router-common-parameters-all]] -===== Inside and Outside of a Chain +===== Inside and Outside a Chain The following parameters are valid for all routers inside and outside of chains. @@ -354,7 +354,7 @@ NOTE: A message is sent only to the `default-output-channel` if `resolution-requ `resolution-required`:: This attribute specifies whether channel names must always be successfully resolved to channel instances that exist. If set to `true`, a `MessagingException` is raised when the channel cannot be resolved. -Setting this attribute to `false` causes any unresovable channels to be ignored. +Setting this attribute to `false` causes any unresolvable channels to be ignored. This optional attribute defaults to `true`. + NOTE: A Message is sent only to the `default-output-channel`, if specified, when `resolution-required` is `false` and the channel is not resolved. @@ -380,7 +380,7 @@ The `timeout` attribute specifies the maximum amount of time in milliseconds to By default, the send operation blocks indefinitely. 
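The `resolution-required` and `default-output-channel` semantics covered by the hunks above can be sketched in plain Java. This is a simplification under stated assumptions: channel names stand in for `MessageChannel` beans, and `IllegalStateException` stands in for the `MessagingException` raised by the real router.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class RouterResolutionSketch {

    // knownChannels stands in for the channel beans in the application context.
    public static List<String> resolve(List<String> channelNames, Set<String> knownChannels,
                                       boolean resolutionRequired, String defaultOutputChannel) {
        List<String> targets = new ArrayList<>();
        for (String name : channelNames) {
            if (knownChannels.contains(name)) {
                targets.add(name);
            }
            else if (resolutionRequired) {
                // Mirrors the exception raised when resolution-required=true
                throw new IllegalStateException("cannot resolve channel: " + name);
            }
            // resolution-required=false: the unresolvable channel is ignored
        }
        if (targets.isEmpty() && defaultOutputChannel != null) {
            // The message goes to default-output-channel only when nothing resolved
            targets.add(defaultOutputChannel);
        }
        return targets;
    }

    public static void main(String[] args) {
        Set<String> known = Set.of("kermit", "simpson");
        System.out.println(resolve(List.of("kermit", "gonzo"), known, false, "discards"));
    }
}
```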
[[router-common-parameters-top]]
-===== Top-Level (Outside of a Chain)
+===== Top-Level (Outside a Chain)

The following parameters are valid only across all top-level routers that are outside of chains.

@@ -702,7 +702,7 @@ However, in this case, it is all combined rather concisely into the router's con
----

In the preceding configuration, a SpEL expression identified by the `selector-expression` attribute is evaluated to determine whether this recipient should be included in the recipient list for a given input message.
-The evaluation result of the expression must be a boolean.
+The evaluation result of the expression must be a `boolean`.
If this attribute is not defined, the channel is always among the list of recipients.

[[recipient-list-router-management]]
@@ -814,7 +814,7 @@ The following example defines a router that points to a POJO in its `ref` attrib
====

We generally recommend using a `ref` attribute if the custom router implementation is referenced in other `<router>` definitions.
-However if the custom router implementation should be scoped to a single definition of the `<router>`, you can provide an inner bean definition, as the following example shows:
+However, if the custom router implementation should be scoped to a single definition of the `<router>`, you can provide an inner bean definition, as the following example shows:

====
[source,xml]
----
@@ -1026,7 +1026,7 @@ All of these type of routers exhibit some dynamic characteristics.

However, these routers all require static configuration.
Even in the case of expression-based routers, the expression itself is defined as part of the router configuration, which means that the same expression operating on the same value always results in the computation of the same channel.
-This is acceptable in most cases, since such routes are well defined and therefore predictable.
+This is acceptable in most cases, since such routes are well-defined and therefore predictable.
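The `selector-expression` behavior discussed in the hunks above can be sketched in plain Java, with a `Predicate` standing in for the SpEL expression. Class and channel names here are invented for the example; the real recipient-list router evaluates the expression against the message.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class RecipientListSketch {

    // Each recipient channel maps to a selector predicate; a null selector means
    // the channel is always among the list of recipients.
    public static List<String> route(Map<String, Object> message,
                                     Map<String, Predicate<Map<String, Object>>> recipients) {
        List<String> targets = new ArrayList<>();
        for (Map.Entry<String, Predicate<Map<String, Object>>> entry : recipients.entrySet()) {
            if (entry.getValue() == null || entry.getValue().test(message)) {
                targets.add(entry.getKey());
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        Map<String, Predicate<Map<String, Object>>> recipients = new HashMap<>();
        recipients.put("paidOrders", m -> Boolean.TRUE.equals(m.get("paid")));
        recipients.put("auditChannel", null); // no selector: always included
        Map<String, Object> message = new HashMap<>();
        message.put("paid", true);
        System.out.println(route(message, recipients));
    }
}
```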
But there are times when we need to change router configurations dynamically so that message flows may be routed to a different channel. For example, you might want to bring down some part of your system for maintenance and temporarily re-reroute messages to a different message flow. @@ -1083,7 +1083,7 @@ Now consider an example of a header value router: Now we can consider how the three steps work for a header value router: . Compute a channel identifier that is the value of the header identified by the `header-name` attribute. -. Resolve the channel identifier a to channel name, where the result of the previous step is used to select the appropriate value from the general mapping defined in the `mapping` element. +. Resolve the channel identifier to a channel name, where the result of the previous step is used to select the appropriate value from the general mapping defined in the `mapping` element. . Resolve the channel name to the actual instance of the `MessageChannel` as a reference to a bean within the application context (which is hopefully a `MessageChannel`) identified by the result of the previous step. The preceding two configurations of two different router types look almost identical. @@ -1113,7 +1113,7 @@ That basically involves a bean lookup for the provided name. Now all messages that contain the header-value pair as `testHeader=kermit` are going to be routed to a `MessageChannel` whose bean name (its `id`) is 'kermit'. But what if you want to route these messages to the 'simpson' channel? Obviously changing a static configuration works, but doing so also requires bringing your system down. -However, if you had access to the channel identifier map, you could introduce a new mapping where the header-value pair is now `kermit=simpson`, thus letting the second step treat 'kermit' as a channel identifier while resolving it to 'simpson' as the channel name. 
+However, if you had access to the channel identifier map, you could introduce a new mapping where the header-value pair is now `kermit=simpson`, thus letting the second step treat 'kermit' as a channel identifier while resolving it to 'simpson' as the channel name.

The same obviously applies for `PayloadTypeRouter`, where you can now remap or remove a particular payload type mapping.
In fact, it applies to every other router, including expression-based routers, since their computed values now have a chance to go through the second step to be resolved to the actual `channel name`.
@@ -1135,8 +1135,7 @@ One way to manage the router mappings is through the https://www.enterpriseinteg

NOTE: For more information about the control bus, see <<./control-bus.adoc#control-bus,Control Bus>>.

-Typically, you would send a control message asking to invoke a particular operation on a particular managed component (such as a
-router).
+Typically, you would send a control message asking to invoke a particular operation on a particular managed component (such as a router).
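The runtime remapping described above can be sketched in plain Java. The `setChannelMapping` and `removeChannelMapping` names mirror the managed operations the router actually exposes, but the class itself is a simplified stand-in that resolves names instead of `MessageChannel` beans.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DynamicRouterSketch {

    // Mutable channel-identifier -> channel-name map, changeable at runtime
    // (for example, via a control bus operation).
    private final Map<String, String> channelMappings = new ConcurrentHashMap<>();

    public void setChannelMapping(String key, String channelName) {
        this.channelMappings.put(key, channelName);
    }

    public void removeChannelMapping(String key) {
        this.channelMappings.remove(key);
    }

    public String resolveChannelName(String channelIdentifier) {
        // With no mapping, the identifier itself is treated as the channel name,
        // as a header value router does by default.
        return this.channelMappings.getOrDefault(channelIdentifier, channelIdentifier);
    }

    public static void main(String[] args) {
        DynamicRouterSketch router = new DynamicRouterSketch();
        System.out.println(router.resolveChannelName("kermit")); // no mapping yet
        router.setChannelMapping("kermit", "simpson");           // re-route at runtime
        System.out.println(router.resolveChannelName("kermit"));
    }
}
```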
The following managed operations (methods) are specific to changing the router resolution process: * `public void setChannelMapping(String key, String channelName)`: Lets you add a new or modify an existing mapping between `channel identifier` and `channel name` diff --git a/src/reference/asciidoc/rsocket.adoc b/src/reference/asciidoc/rsocket.adoc index 84c1d7d94c8..c2a5e0ab7a1 100644 --- a/src/reference/asciidoc/rsocket.adoc +++ b/src/reference/asciidoc/rsocket.adoc @@ -6,8 +6,8 @@ The RSocket Spring Integration module (`spring-integration-rsocket`) allows for You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -16,8 +16,8 @@ You need to include this dependency into your project: ---- +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-rsocket:{project-version}" ---- @@ -32,7 +32,7 @@ For this purpose, Spring Integration RSocket support provides the `ServerRSocket The `ServerRSocketConnector` exposes a listener on the host and port according to provided `io.rsocket.transport.ServerTransport` for accepting connections from clients. An internal `RSocketServer` instance can be customized with the `setServerConfigurer()`, as well as other options that can be configured, e.g. `RSocketStrategies` and `MimeType` for payload data and headers metadata. When a `setupRoute` is provided from the client requester (see `ClientRSocketConnector` below), a connected client is stored as a `RSocketRequester` under the key determined by the `clientRSocketKeyStrategy` `BiFunction, DataBuffer, Object>`. -By default a connect data is used for the key as a converted value to string with UTF-8 charset. +By default, a connection data is used for the key as a converted value to string with UTF-8 charset. 
Such an `RSocketRequester` registry can be used in the application logic to determine a particular client connection for interaction with it, or for publishing the same message to all connected clients. When a connection is established from the client, an `RSocketConnectedEvent` is emitted from the `ServerRSocketConnector`. This is similar to what is provided by the `@ConnectMapping` annotation in Spring Messaging module. @@ -76,7 +76,7 @@ See `ServerRSocketConnector` JavaDocs for more information. Starting with version 5.2.1, the `ServerRSocketMessageHandler` is extracted to a public, top-level class for possible connection with an existing RSocket server. When a `ServerRSocketConnector` is supplied with an external instance of `ServerRSocketMessageHandler`, it doesn't create an RSocket server internally and just delegates all the handling logic to the provided instance. -In addition the `ServerRSocketMessageHandler` can be configured with a `messageMappingCompatible` flag to handle also `@MessageMapping` for an RSocket controller, fully replacing the functionality provided by the standard `RSocketMessageHandler`. +In addition, the `ServerRSocketMessageHandler` can be configured with a `messageMappingCompatible` flag to handle also `@MessageMapping` for an RSocket controller, fully replacing the functionality provided by the standard `RSocketMessageHandler`. This can be useful in mixed configurations, when classic `@MessageMapping` methods are present in the same application along with RSocket channel adapters and an externally configured RSocket server is present in the application. The `ClientRSocketConnector` serves as a holder for `RSocketRequester` based on the `RSocket` connected via the provided `ClientTransport`. @@ -122,8 +122,8 @@ See the next section for more information. The `RSocketInboundGateway` is responsible for receiving RSocket requests and producing responses (if any). 
It requires an array of `path` mapping which could be as patterns similar to MVC request mapping or `@MessageMapping` semantics.
-In addition (since version 5.2.2), a set of interaction models (see `RSocketInteractionModel`) can be configured on the `RSocketInboundGateway` to restrict RSocket requests to this endpoint by the particular frame type.
-By default all the interaction models are supported.
+In addition (since version 5.2.2), a set of interaction models (see `RSocketInteractionModel`) can be configured on the `RSocketInboundGateway` to restrict RSocket requests to this endpoint by the particular frame type.
+By default, all the interaction models are supported.
Such a bean, according its `IntegrationRSocketEndpoint` implementation (extension of a `ReactiveMessageHandler`), is auto detected either by the `ServerRSocketConnector` or `ClientRSocketConnector` for a routing logic in the internal `IntegrationRSocketMessageHandler` for incoming requests.
An `AbstractRSocketConnector` can be provided to the `RSocketInboundGateway` for explicit endpoint registration.
This way, the auto-detection option is disabled on that `AbstractRSocketConnector`.
@@ -131,14 +131,14 @@ The `RSocketStrategies` can also be injected into the `RSocketInboundGateway` or
Decoders are used from those `RSocketStrategies` to decode a request payload according to the provided `requestElementType`.
If an `RSocketPayloadReturnValueHandler.RESPONSE_HEADER` header is not provided in incoming the `Message`, the `RSocketInboundGateway` treats a request as a `fireAndForget` RSocket interaction model.
In this case, an `RSocketInboundGateway` performs a plain `send` operation into the `outputChannel`.
-Otherwise a `MonoProcessor` value from the `RSocketPayloadReturnValueHandler.RESPONSE_HEADER` header is used for sending a reply to the RSocket.
+Otherwise, a `MonoProcessor` value from the `RSocketPayloadReturnValueHandler.RESPONSE_HEADER` header is used for sending a reply to the RSocket.
For this purpose, an `RSocketInboundGateway` performs a `sendAndReceiveMessageReactive` operation on the `outputChannel`.
The `payload` of the message to send downstream is always a `Flux` according to `MessagingRSocket` logic.
When in a `fireAndForget` RSocket interaction model, the message has a plain converted `payload`.
The reply `payload` could be a plain object or a `Publisher` - the `RSocketInboundGateway` converts both of them properly into an RSocket response according to the encoders provided in the `RSocketStrategies`.

Starting with version 5.3, a `decodeFluxAsUnit` option (default `false`) is added to the `RSocketInboundGateway`.
-By default incoming `Flux` is transformed the way that each its event is decoded separately.
+By default, the incoming `Flux` is transformed so that each of its events is decoded separately.
This is an exact behavior present currently with `@MessageMapping` semantics.
To restore a previous behavior or decode the whole `Flux` as single unit according application requirements, the `decodeFluxAsUnit` has to be set to `true`.
However the target decoding logic depends on the `Decoder` selected, e.g. a `StringDecoder` requires a new line separator (by default) to be present in the stream to indicate a byte buffer end.
@@ -156,7 +156,7 @@ See `ServerRSocketConnector` JavaDocs for more information.

The `route` to send request has to be configured explicitly (together with path variables) or via a SpEL expression which is evaluated against request message.
The RSocket interaction model can be provided via `RSocketInteractionModel` option or respective expression setting.
-By default a `requestResponse` is used for common gateway use-cases.
+By default, a `requestResponse` is used for common gateway use-cases.
When request message payload is a `Publisher`, a `publisherElementType` option can be provided to encode its elements according an `RSocketStrategies` supplied in the target `RSocketRequester`.
An expression for this option can evaluate to a `ParameterizedTypeReference`.
@@ -189,7 +189,7 @@ public Flux flattenRSocketResponse(Flux payload) {
Or subscribed explicitly in the target application logic.

The expected response type can also be configured (or evaluated via expression) to `void` treating this gateway as an outbound channel adapter.
-However the `outputChannel` still has to be configured (even if it just a `NullChannel`) to initiate a subscription to the returned `Mono`.
+However, the `outputChannel` still has to be configured (even if it is just a `NullChannel`) to initiate a subscription to the returned `Mono`.

See <> for samples how to configure an `RSocketOutboundGateway` endpoint a deal with payloads downstream.
diff --git a/src/reference/asciidoc/samples.adoc b/src/reference/asciidoc/samples.adoc
index eea34bf98f3..820df111fcc 100644
--- a/src/reference/asciidoc/samples.adoc
+++ b/src/reference/asciidoc/samples.adoc
@@ -51,7 +51,7 @@ We greatly appreciate any effort toward improving the samples, including the sha
[[samples-how-can-i-contribute]]
==== How Can I Contribute My Own Samples?

-Github is for social coding: if you want to submit your own code examples to the Spring Integration Samples project, we encourage contributions through https://help.github.com/en/articles/creating-a-pull-request/[pull requests] from https://help.github.com/en/articles/fork-a-repo[forks] of this repository.
+GitHub is for social coding: if you want to submit your own code examples to the Spring Integration Samples project, we encourage contributions through https://help.github.com/en/articles/creating-a-pull-request/[pull requests] from https://help.github.com/en/articles/fork-a-repo[forks] of this repository.
If you want to contribute code this way, please reference, if possible, a https://github.com/spring-projects/spring-integration-samples/issues[GutHub issue] that provides some details regarding your sample.
[IMPORTANT] @@ -93,7 +93,7 @@ However, they differ in that some samples concentrate on a technical use case, w Also, some samples are about showcasing various techniques that could be applied to address certain scenarios (both technical and business). The new categorization of samples lets us better organize them based on the problem each sample addresses while giving you a simpler way of finding the right sample for your needs. -Currently there are four categories. +Currently, there are four categories. Within the samples repository, each category has its own directory, which is named after the category name: Basic (`samples/basic`):: @@ -122,14 +122,14 @@ In other words, the emphasis of the samples in this category is business use cas For example, if you want to see how a loan broker or travel agent process could be implemented and automated with Spring Integration, this is the right place to find these types of samples. IMPORTANT: Spring Integration is a community-driven framework. -Therefore community participation is IMPORTANT. +Therefore, community participation is IMPORTANT. That includes samples. If you cannot find what you are looking for, let us know! [[samples-impl]] === Samples -Currently, Spring Integration comes with quite a few samples and you can only expect more. +Currently, Spring Integration comes with quite a few samples, and you can only expect more. To help you better navigate through them, each sample comes with its own `readme.txt` file which covers several details about the sample (for example, what EIP patterns it addresses, what problem it is trying to solve, how to run the sample, and other details). However, certain samples require a more detailed and sometimes graphical explanation. In this section, you can find details on samples that we believe require special attention. 
diff --git a/src/reference/asciidoc/scatter-gather.adoc b/src/reference/asciidoc/scatter-gather.adoc
index e791d230350..66567b0f83c 100644
--- a/src/reference/asciidoc/scatter-gather.adoc
+++ b/src/reference/asciidoc/scatter-gather.adoc
@@ -132,7 +132,7 @@ The startup order proceeds from lowest to highest, and the shutdown order is fro
By default, this value is `Integer.MAX_VALUE`, meaning that this container starts as late as possible and stops as soon as possible.
Optional.
<9> The timeout interval to wait when sending a reply `Message` to the `output-channel`.
-By default, the send blocks for one second.
+By default, the `send()` blocks for one second.
It applies only if the output channel has some 'sending' limitations -- for example, a `QueueChannel` with a fixed 'capacity' that is full.
In this case, a `MessageDeliveryException` is thrown.
The `send-timeout` is ignored for `AbstractSubscribableChannel` implementations.
@@ -211,5 +211,5 @@ NOTE: Before sending scattering results to the gatherer, `ScatterGatherHandler`
This way errors from the `AggregatingMessageHandler` are going to be propagated to the caller, even if an async hand off is applied in scatter recipient subflows.
For successful operation, a `gatherResultChannel`, `originalReplyChannel` and `originalErrorChannel` headers must be transferred back to replies from scatter recipient subflows.
In this case a reasonable, finite `gatherTimeout` must be configured for the `ScatterGatherHandler`.
-Otherwise it is going to be blocked waiting for a reply from the gatherer forever, by default.
+Otherwise, by default, it blocks forever while waiting for a reply from the gatherer.
diff --git a/src/reference/asciidoc/scripting.adoc b/src/reference/asciidoc/scripting.adoc
index 7264104f880..0429d410895 100644
--- a/src/reference/asciidoc/scripting.adoc
+++ b/src/reference/asciidoc/scripting.adoc
@@ -6,7 +6,7 @@ It lets you use scripts written in any supported language (including Ruby, JRuby
For more information about JSR223, see the https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/prog_guide/api.html[documentation].

NOTE: Starting with Java 11, the Nashorn JavaScript Engine has been deprecated with possible removal in Java 15.
-It is recommended to reconsider in favor of other scripting language from now on.
+It is recommended to reconsider it in favor of another scripting language from now on.

You need to include this dependency into your project:

@@ -27,7 +27,7 @@ compile "org.springframework.integration:spring-integration-scripting:{project-v
----
====

-In addition you need to add a script engine implementation, e.g. JRuby, Jython.
+In addition, you need to add a script engine implementation, e.g. JRuby, Jython.

Starting with version 5.2, Spring Integration provides a Kotlin Jsr223 support.
You need to add these dependencies into your project to make it working: diff --git a/src/reference/asciidoc/security.adoc b/src/reference/asciidoc/security.adoc index 6ed0ca4e267..9b7430c6d6e 100644 --- a/src/reference/asciidoc/security.adoc +++ b/src/reference/asciidoc/security.adoc @@ -11,8 +11,8 @@ Spring Integration, together with https://projects.spring.io/spring-security/[Sp You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -20,9 +20,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-security:{project-version}" ---- @@ -32,12 +31,12 @@ compile "org.springframework.integration:spring-integration-security:{project-ve === Securing channels Spring Integration provides the `ChannelSecurityInterceptor` interceptor, which extends `AbstractSecurityInterceptor` and intercepts send and receive calls on the channel. -Access decisions are then made with reference to a `ChannelSecurityMetadataSource`, which provides the metadata that describes the send and receive access policies for certain channels. +Access decisions are then made with reference to a `ChannelSecurityMetadataSource`, which provides the metadata that describes the `send()` and `receive()` access policies for certain channels. The interceptor requires that a valid `SecurityContext` has been established by authenticating with Spring Security. -See the https://docs.spring.io/spring-security/site/docs/current/reference/htmlsingle/[Spring Security Reference Guide] for details. +See the https://docs.spring.io/spring-security/reference/[Spring Security Reference Guide] for details. Spring Integration provides Namespace support to allow easy configuration of security constraints. 
-This support consists of the secured channels tag, which allows definition of one or more channel name patterns in conjunction with a definition of the security configuration for send and receive. +This support consists of the secured channels tag, which allows definition of one or more channel name patterns in conjunction with a definition of the security configuration for `send()` and `receive()`. The pattern is a `java.util.regexp.Pattern`. The following example shows how to configure a bean that includes security and how to set up policies with patterns: @@ -129,8 +128,7 @@ It is accessed by an AOP (Aspect-oriented Programming) interceptor on secured me This works well with the current thread. Often, though, processing logic can be performed on another thread, on several threads, or even on external systems. -Standard thread-bound behavior is easy to configure if our application is built on the Spring Integration components -and its message channels. +Standard thread-bound behavior is easy to configure if our application is built on the Spring Integration components and its message channels. In this case, the secured objects can be any service activator or transformer, secured with a `MethodSecurityInterceptor` in their `` (see <<./handler-advice.adoc#message-handler-advice-chain,Adding Behavior to Endpoints>>) or even `MessageChannel` (see <>, earlier). When using `DirectChannel` communication, the `SecurityContext` is automatically available, because the downstream flow runs on the current thread. diff --git a/src/reference/asciidoc/service-activator.adoc b/src/reference/asciidoc/service-activator.adoc index b4b02280121..975a586ec0c 100644 --- a/src/reference/asciidoc/service-activator.adoc +++ b/src/reference/asciidoc/service-activator.adoc @@ -55,7 +55,7 @@ If that value is available, it then checks its type. If it is a `MessageChannel`, the reply message is sent to that channel. 
If it is a `String`, the endpoint tries to resolve the channel name to a channel instance. If the channel cannot be resolved, a `DestinationResolutionException` is thrown. -It it can be resolved, the message is sent there. +If it can be resolved, the message is sent there. If the request message does not have a `replyChannel` header and the `reply` object is a `Message`, its `replyChannel` header is consulted for a target destination. This is the technique used for request-reply messaging in Spring Integration, and it is also an example of the return address pattern. @@ -174,7 +174,7 @@ In this case a new `Message` object is created and all the headers from a req This works the same way for most Spring Integration `MessageHandler` implementations, when interaction is based on a POJO method invocation. A complete `Message` object can also be returned from the method. -However keep in mind that, unlike <<./transformer.adoc#transformer, transformers>>, for a Service Activator this message will be modified by copying the headers from the request message if they are not already present in the returned message. +However, keep in mind that, unlike <<./transformer.adoc#transformer, transformers>>, for a Service Activator this message will be modified by copying the headers from the request message if they are not already present in the returned message. So, if your method parameter is a `Message` and you copy some, but not all, existing headers in your service method, they will reappear in the reply message. It is not a Service Activator responsibility to remove headers from a reply message and, pursuing the loosely-coupled principle, it is better to add a `HeaderFilter` in the integration flow. Alternatively, a Transformer can be used instead of a Service Activator but, in that case, when returning a full `Message` the method is completely responsible for the message, including copying request message headers (if needed). 
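The reply-destination resolution order described in the service activator hunk above can be sketched in plain Java. This is a hedged stand-in: the real endpoint resolves names to `MessageChannel` beans and throws `DestinationResolutionException`, while this sketch uses plain objects and `IllegalStateException`.

```java
import java.util.Map;

public class ReplyResolutionSketch {

    // Hypothetical registry of channel beans (name -> channel instance).
    private final Map<String, Object> channelBeans;

    public ReplyResolutionSketch(Map<String, Object> channelBeans) {
        this.channelBeans = channelBeans;
    }

    public Object resolveReplyChannel(Object replyChannelHeader) {
        if (replyChannelHeader == null) {
            throw new IllegalStateException("no replyChannel header available");
        }
        if (replyChannelHeader instanceof String) {
            // A String header is a channel name that must resolve to a channel bean.
            Object channel = this.channelBeans.get(replyChannelHeader);
            if (channel == null) {
                // Stands in for DestinationResolutionException
                throw new IllegalStateException("cannot resolve channel name: " + replyChannelHeader);
            }
            return channel;
        }
        // Otherwise the header already holds a channel instance; use it directly.
        return replyChannelHeader;
    }

    public static void main(String[] args) {
        Map<String, Object> beans = new java.util.HashMap<>();
        beans.put("replies", new Object());
        ReplyResolutionSketch resolver = new ReplyResolutionSketch(beans);
        System.out.println(resolver.resolveReplyChannel("replies") != null);
    }
}
```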
diff --git a/src/reference/asciidoc/sftp.adoc b/src/reference/asciidoc/sftp.adoc index eb74e3fdc69..2c8c69bfb60 100644 --- a/src/reference/asciidoc/sftp.adoc +++ b/src/reference/asciidoc/sftp.adoc @@ -13,8 +13,8 @@ It also provides convenient namespace configuration to define these client compo You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -22,9 +22,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-sftp:{project-version}" ---- @@ -162,18 +161,15 @@ If `false`, a pre-populated `knownHosts` file is required. `userInfo`::Set a custom `UserInfo` to be used during authentication. In particular, `promptYesNo()` is invoked when an unknown (or changed) host key is received. See also <>. -When you provide a `UserInfo`, the `password` and private key `passphrase` are obtained from it, and you cannot set discrete -`password` and `privateKeyPassphrase` properties. +When you provide a `UserInfo`, the `password` and private key `passphrase` are obtained from it, and you cannot set discrete `password` and `privateKeyPassphrase` properties. [[sftp-proxy-factory-bean]] === Proxy Factory Bean `Jsch` provides a mechanism to connect to the server over an HTTP or SOCKS proxy. -To use this feature, configure the `Proxy` and provide a reference to the `DefaultSftpSessionFactory`, as discussed -earlier. +To use this feature, configure the `Proxy` and provide a reference to the `DefaultSftpSessionFactory`, as discussed earlier. Three implementations are provided by `Jsch`: `HTTP`, `SOCKS4`, and `SOCKS5`. 
-Spring Integration 4.3 introduced a `FactoryBean`, easing configuration of these proxies by allowing property -injection, as the following example shows: +Spring Integration 4.3 introduced a `FactoryBean`, easing configuration of these proxies by allowing property injection, as the following example shows: ==== [source, xml] @@ -198,8 +194,7 @@ injection, as the following example shows: [[sftp-dsf]] === Delegating Session Factory -Version 4.2 introduced the `DelegatingSessionFactory`, which allows the selection of the actual session factory at -runtime. +Version 4.2 introduced the `DelegatingSessionFactory`, which allows the selection of the actual session factory at runtime. Prior to invoking the SFTP endpoint, you can call `setThreadKey()` on the factory to associate a key with the current thread. That key is then used to look up the actual session factory to be used. You can clear the key by calling `clearThreadKey()` after use. @@ -345,8 +340,7 @@ Starting with Spring Integration 3.0, you can specify the `preserve-timestamp` a When `true`, the local file's modified timestamp is set to the value retrieved from the server. Otherwise, it is set to the current time. -Starting with version 4.2, you can specify `remote-directory-expression` instead of `remote-directory`, which lets -you dynamically determine the directory on each poll -- for example, `remote-directory-expression="@myBean.determineRemoteDir()"`. +Starting with version 4.2, you can specify `remote-directory-expression` instead of `remote-directory`, which lets you dynamically determine the directory on each poll -- for example, `remote-directory-expression="@myBean.determineRemoteDir()"`. Sometimes, file filtering based on the simple pattern specified via `filename-pattern` attribute might not suffice. If this is the case, you can use the `filename-regex` attribute to specify a regular expression (for example, `filename-regex=".*\.test$"`). 
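The `DelegatingSessionFactory` idea covered above (selecting the actual factory by a thread-bound key via `setThreadKey()` and `clearThreadKey()`) can be sketched with a `ThreadLocal`. This is a generic stand-in, not the Spring Integration class; the type parameter replaces the real `SessionFactory` delegates.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DelegatingFactorySketch<T> {

    private final Map<Object, T> factories = new ConcurrentHashMap<>();
    private final T defaultFactory;
    // The key is thread-bound, so concurrent flows can select different delegates.
    private final ThreadLocal<Object> threadKey = new ThreadLocal<>();

    public DelegatingFactorySketch(Map<Object, T> factories, T defaultFactory) {
        this.factories.putAll(factories);
        this.defaultFactory = defaultFactory;
    }

    public void setThreadKey(Object key) {
        this.threadKey.set(key);
    }

    public void clearThreadKey() {
        this.threadKey.remove();
    }

    public T getFactory() {
        Object key = this.threadKey.get();
        T factory = (key != null) ? this.factories.get(key) : null;
        // Fall back to the default delegate when no key (or no match) is bound.
        return (factory != null) ? factory : this.defaultFactory;
    }

    public static void main(String[] args) {
        Map<Object, String> factories = new HashMap<>();
        factories.put("one", "factoryOne");
        DelegatingFactorySketch<String> delegating =
                new DelegatingFactorySketch<>(factories, "defaultFactory");
        delegating.setThreadKey("one");
        System.out.println(delegating.getFactory());
        delegating.clearThreadKey();
        System.out.println(delegating.getFactory());
    }
}
```

Clearing the key after use matters: because the key is a `ThreadLocal`, a pooled thread that skips `clearThreadKey()` would keep routing later, unrelated calls to the previously selected delegate.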
@@ -397,7 +391,7 @@ Once the file has been transferred to a local directory, a message with `java.io ==== More on File Filtering and Large Files Sometimes, a file that just appeared in the monitored (remote) directory is not complete. -Typically such a file is written with some temporary extension (such as `.writing` on a file named `something.txt.writing`) and then renamed after the writing process completes. +Typically, such a file is written with some temporary extension (such as `.writing` on a file named `something.txt.writing`) and then renamed after the writing process completes. In most cases, developers are interested only in files that are complete and would like to filter only those files. To handle these scenarios, you can use the filtering support provided by the `filename-pattern`, `filename-regex`, and `filter` attributes. If you need a custom filter implementation, you can include a reference in your adapter by setting the `filter` attribute. @@ -424,20 +418,17 @@ The following example shows how to do so: You should understand the architecture of the adapter. A file synchronizer fetches the files, and a `FileReadingMessageSource` emits a message for each synchronized file. As <>, two filters are involved. -The `filter` attribute (and patterns) refers to the remote (SFTP) file list, to avoid fetching files that have already -been fetched. +The `filter` attribute (and patterns) refers to the remote (SFTP) file list, to avoid fetching files that have already been fetched. The `FileReadingMessageSource` uses the `local-filter` to determine which files are to be sent as messages. The synchronizer lists the remote files and consults its filter. The files are then transferred. -If an IO error occurs during file transfer, any files that have already been added to the filter are removed so that they -are eligible to be re-fetched on the next poll.
+If an IO error occurs during file transfer, any files that have already been added to the filter are removed so that they are eligible to be re-fetched on the next poll. This applies only if the filter implements `ReversibleFileListFilter` (such as the `AcceptOnceFileListFilter`). If, after synchronizing the files, an error occurs on the downstream flow processing a file, no automatic rollback of the filter occurs, so the failed file is not reprocessed by default. -If you wish to reprocess such files after a failure, you can use a configuration similar to the following to facilitate -the removal of the failed file from the filter: +If you wish to reprocess such files after a failure, you can use a configuration similar to the following to facilitate the removal of the failed file from the filter: ==== [source, xml] @@ -852,7 +843,7 @@ As with many other components in Spring Integration, you can use the Spring Expr The expression evaluation context has the message as its root object, which lets you use expressions that can dynamically compute the file name or the existing directory path based on the data in the message (from either the 'payload' or the 'headers'). In the preceding example, we define the `remote-filename-generator-expression` attribute with an expression value that computes the file name based on its original name while also appending a suffix: '-mysuffix'. -Starting with version 4.1, you can specify the `mode` when you transferring the file. +Starting with version 4.1, you can specify the `mode` when you are transferring the file. By default, an existing file is overwritten. The modes are defined by the `FileExistsMode` enumeration, which includes the following values: @@ -1037,8 +1028,7 @@ The `file_remoteDirectory` header holds the remote directory, and the `file_remo The message payload resulting from a `get` operation is a `File` object representing the retrieved file. 
If you use the `-stream` option, the payload is an `InputStream` rather than a `File`. -For text files, a common use case is to combine this operation with a <<./file.adoc#file-splitter,file splitter>> or a -<<./transformer.adoc#stream-transformer,stream transformer>>. +For text files, a common use case is to combine this operation with a <<./file.adoc#file-splitter,file splitter>> or a <<./transformer.adoc#stream-transformer,stream transformer>>. When consuming remote files as streams, you are responsible for closing the `Session` after the stream is consumed. For convenience, the `Session` is provided in the `closeableResource` header, and `IntegrationMessageHeaderAccessor` offers a convenience method: @@ -1052,8 +1042,7 @@ if (closeable != null) { ---- ==== -Framework components, such as the <<./file.adoc#file-splitter,File Splitter>> and <<./transformer.adoc#stream-transformer,Stream Transformer>>, -automatically close the session after the data is transferred. +Framework components, such as the <<./file.adoc#file-splitter,File Splitter>> and <<./transformer.adoc#stream-transformer,Stream Transformer>>, automatically close the session after the data is transferred. The following example shows how to consume a file as a stream:
-Currently supported events are: +The currently supported events are: * `SessionOpenedEvent` - a client session was opened * `DirectoryCreatedEvent` - a directory was created @@ -1468,5 +1457,5 @@ Since the `SftpInboundFileSynchronizingMessageSource` doesn't produce messages a This metadata is retrieved by the `SftpInboundFileSynchronizingMessageSource` when the local file is polled. When a local file is deleted, it is recommended to remove its metadata entry. The `AbstractInboundFileSynchronizer` provides a `removeRemoteFileMetadata()` callback for this purpose. -In addition there is a `setMetadataStorePrefix()` to be used in the metadata keys. +In addition, there is a `setMetadataStorePrefix()` to be used in the metadata keys. It is recommended to have this prefix be different from the one used in the `MetadataStore`-based `FileListFilter` implementations, when the same `MetadataStore` instance is shared between these components, to avoid entry overriding because both filter and `AbstractInboundFileSynchronizer` use the same local file name for the metadata entry key. diff --git a/src/reference/asciidoc/spel.adoc b/src/reference/asciidoc/spel.adoc index 9fa37ef8ef5..45586c4435a 100644 --- a/src/reference/asciidoc/spel.adoc +++ b/src/reference/asciidoc/spel.adoc @@ -57,7 +57,7 @@ Note that custom functions are static methods. In the preceding example, the custom function is a static method called `calc` on a class called `MyFunctions` and takes a single parameter of type `MyThing`. Suppose you have a `Message` with a payload that has a type of `MyThing`. -Further suppose that you need to perform some action to create an object called `MyObject` from `MyThing` and then invoke a custom function called `calc` on that object. +Further, suppose that you need to perform some action to create an object called `MyObject` from `MyThing` and then invoke a custom function called `calc` on that object.
The standard property accessors do not know how to get a `MyObject` from a `MyThing`, so you could write and configure a custom property accessor to do so. As a result, your final expression might be `"#barcalc(payload.myObject)"`. @@ -154,7 +154,7 @@ The following listing shows some usage examples: `#jsonPath` also supports a third (optional) parameter: an array of https://github.com/json-path/JsonPath#filter-predicates[`com.jayway.jsonpath.Filter`], which can be provided by a reference to a bean or bean method (for example). + NOTE: Using this function requires the Jayway JsonPath library (`json-path.jar`) to be on the classpath. -Otherwise the `#jsonPath` SpEL function is not registered. +Otherwise, the `#jsonPath` SpEL function is not registered. + For more information regarding JSON, see 'JSON Transformers' in <<./transformer.adoc#transformer,Transformer>>. diff --git a/src/reference/asciidoc/splitter.adoc b/src/reference/asciidoc/splitter.adoc index 3fdcd0ec9f8..3059979ca04 100644 --- a/src/reference/asciidoc/splitter.adoc +++ b/src/reference/asciidoc/splitter.adoc @@ -101,11 +101,11 @@ Required. <5> The channel to which the splitter sends the results of splitting the incoming message. Optional (because incoming messages can specify a reply channel themselves). <6> The channel to which the request message is sent in case of empty splitting result. -Optional (the will stop as in case of `null` result). +Optional (the flow will stop, as in the case of a `null` result). ==== We recommend using a `ref` attribute if the custom splitter implementation can be referenced in other `<splitter>` definitions.
-However if the custom splitter handler implementation should be scoped to a single definition of the ``, you can configure an inner bean definition, as the following example follows: +However, if the custom splitter handler implementation should be scoped to a single definition of the `<splitter>`, you can configure an inner bean definition, as the following example shows: ==== [source,xml] ---- diff --git a/src/reference/asciidoc/stomp.adoc b/src/reference/asciidoc/stomp.adoc index b33201a6b3c..46141c10e94 100644 --- a/src/reference/asciidoc/stomp.adoc +++ b/src/reference/asciidoc/stomp.adoc @@ -9,8 +9,8 @@ For more information, see the https://docs.spring.io/spring/docs/current/spring- You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-stomp:{project-version}" ---- diff --git a/src/reference/asciidoc/stream.adoc b/src/reference/asciidoc/stream.adoc index 65d536d806c..231780d51c6 100644 --- a/src/reference/asciidoc/stream.adoc +++ b/src/reference/asciidoc/stream.adoc @@ -2,14 +2,14 @@ == Stream Support In many cases, application data is obtained from a stream. -It is not recommended to send a reference to a stream as a message payload to a consumer. +Sending a reference to a stream as a message payload to a consumer is not recommended. Instead, messages are created from data that is read from an input stream, and message payloads are written to an output stream one by one.
You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -17,9 +17,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-stream:{project-version}" ---- @@ -33,7 +32,7 @@ Both `ByteStreamReadingMessageSource` and `CharacterStreamReadingMessageSource` By configuring one of these within a channel-adapter element, the polling period can be configured and the message bus can automatically detect and schedule them. The byte stream version requires an `InputStream`, and the character stream version requires a `Reader` as the single constructor argument. The `ByteStreamReadingMessageSource` also accepts the 'bytesPerMessage' property to determine how many bytes it tries to read into each `Message`. -The default value is 1024. +The default value is `1024`. 
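The `bytesPerMessage` chunking described above can be sketched in plain Java. This is an illustrative analogue, not the `ByteStreamReadingMessageSource` implementation:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of the chunking behavior described above (not the
// framework class): read up to bytesPerMessage bytes per message payload.
public class StreamChunker {

    public static List<byte[]> chunk(InputStream in, int bytesPerMessage) {
        List<byte[]> payloads = new ArrayList<>();
        byte[] buffer = new byte[bytesPerMessage];
        try {
            int read;
            while ((read = in.read(buffer)) != -1) {
                // A short final read yields a smaller payload rather than padding.
                payloads.add(Arrays.copyOf(buffer, read));
            }
        }
        catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
        return payloads;
    }
}
```

With the default of 1024 bytes, a 2500-byte stream would yield three payloads of 1024, 1024, and 452 bytes.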
The following example creates an input stream that creates messages that each contain 2048 bytes: ==== diff --git a/src/reference/asciidoc/syslog.adoc b/src/reference/asciidoc/syslog.adoc index a40c2611c10..fdb55fad3aa 100644 --- a/src/reference/asciidoc/syslog.adoc +++ b/src/reference/asciidoc/syslog.adoc @@ -6,8 +6,8 @@ Spring Integration 2.2 introduced the syslog transformer: `SyslogToMapTransforme You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -15,9 +15,8 @@ You need to include this dependency into your project: {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-syslog:{project-version}" ---- diff --git a/src/reference/asciidoc/testing.adoc b/src/reference/asciidoc/testing.adoc index 026bb9ee24d..ff8ec73163c 100644 --- a/src/reference/asciidoc/testing.adoc +++ b/src/reference/asciidoc/testing.adoc @@ -179,7 +179,7 @@ The `org.springframework.integration.test.support` package contains various abst ==== JUnit Rules and Conditions The `LongRunningIntegrationTest` JUnit 4 test rule is present to indicate if test should be run if `RUN_LONG_INTEGRATION_TESTS` environment or system property is set to `true`. -Otherwise it is skipped. +Otherwise, it is skipped. For the same reason since version 5.1, a `@LongRunningTest` conditional annotation is provided for JUnit 5 tests. ==== Hamcrest and Mockito Matchers @@ -241,7 +241,7 @@ And also ot can be configured with headers to exclude from expectation as well a Typically, tests for Spring applications use the Spring Test Framework. Since Spring Integration is based on the Spring Framework foundation, everything we can do with the Spring Test Framework also applies when testing integration flows. 
The `org.springframework.integration.test.context` package provides some components for enhancing the test context for integration needs. -First of all, we configure our test class with a `@SpringIntegrationTest` annotation to enable the Spring Integration Test Framework, as the following example shows: +First, we configure our test class with a `@SpringIntegrationTest` annotation to enable the Spring Integration Test Framework, as the following example shows: ==== [source,java] @@ -282,7 +282,7 @@ public void testMockMessageSource() { ==== NOTE: The `mySourceEndpoint` refers here to the bean name of the `SourcePollingChannelAdapter` for which we replace the real `MessageSource` with our mock. -Similarly the `MockIntegrationContext.substituteMessageHandlerFor()` expects a bean name for the `IntegrationConsumer`, which wraps a `MessageHandler` as an endpoint. +Similarly, the `MockIntegrationContext.substituteMessageHandlerFor()` expects a bean name for the `IntegrationConsumer`, which wraps a `MessageHandler` as an endpoint. After the test is performed, you can restore the state of endpoint beans to the real configuration using `MockIntegrationContext.resetBeans()`: diff --git a/src/reference/asciidoc/transactions.adoc b/src/reference/asciidoc/transactions.adoc index 3bdcfcf2144..7fe1ca5d16a 100644 --- a/src/reference/asciidoc/transactions.adoc +++ b/src/reference/asciidoc/transactions.adoc @@ -50,7 +50,7 @@ After all, every Spring Integration component is a Spring Bean. With this goal in mind, we can again consider the two scenarios: message flows initiated by a user process and message flows initiated by a daemon. Message flows that are initiated by a user process and configured in a Spring application context are subject to the usual transactional configuration of such processes. -Therefore they need not be explicitly configured by Spring Integration to support transactions.
+Therefore, they need not be explicitly configured by Spring Integration to support transactions. The transaction could and should be initiated through Spring's standard transaction support. The Spring Integration message flow naturally honors the transactional semantics of the components, because it is itself configured by Spring. For example, a gateway or service activator method could be annotated with `@Transactional`, or a `TransactionInterceptor` could be defined in an XML configuration with a pointcut expression that points to specific methods that should be transactional. @@ -71,7 +71,7 @@ Spring Integration provides transactional support for pollers. Pollers are a special type of component because, within a poller task, we can call `receive()` against a resource that is itself transactional, thus including the `receive()` call in the boundaries of the transaction, which lets it be rolled back in case of a task failure. If we were to add the same support for channels, the added transactions would affect all downstream components starting with the `send()` call. That provides a rather wide scope for transaction demarcation without any strong reason, especially when Spring already provides several ways to address the transactional needs of any component downstream. -However the `receive()` method being included in a transaction boundary is the "`strong reason`" for pollers. +However, the `receive()` method being included in a transaction boundary is the "`strong reason`" for pollers. Any time you configure a Poller, you can provide transactional configuration by using the `transactional` child element and its attributes, as the following example shows: @@ -142,7 +142,7 @@ For example, you can use a Queue-backed Channel that delegates to a transactiona [[transaction-synchronization]] === Transaction Synchronization -In some environments, it help to synchronize operations with a transaction that encompasses the entire flow.
+In some environments, it helps to synchronize operations with a transaction that encompasses the entire flow. For example, consider a `` at the start of a flow that performs a number of database updates. If the transaction commits, we might want to move the file to a `success` directory, while we might want to move it to a `failure` directory if the transaction rolls back. diff --git a/src/reference/asciidoc/transformer.adoc b/src/reference/asciidoc/transformer.adoc index b2606f6dda1..cdee7739f88 100644 --- a/src/reference/asciidoc/transformer.adoc +++ b/src/reference/asciidoc/transformer.adoc @@ -255,7 +255,7 @@ Spring Integration provides namespace support for Map-to-Object, as the followin ---- ==== -Alterately, you could use a `ref` attribute and a prototype-scoped bean, as the following example shows: +Alternatively, you could use a `ref` attribute and a prototype-scoped bean, as the following example shows: [source,xml] ---- > for more informat Beginning with version 5.1, the `resultType` can be configured as `BYTES` to produce a message with the `byte[]` payload for convenience when working with downstream handlers which operate with this data type. Starting with version 5.2, the `JsonToObjectTransformer` can be configured with a `ResolvableType` to support generics during deserialization with the target JSON processor. -Also this component now consults request message headers first for the presence of the `JsonHeaders.RESOLVABLE_TYPE` or `JsonHeaders.TYPE_ID` and falls back to the configured type otherwise. +Also, this component now consults request message headers first for the presence of the `JsonHeaders.RESOLVABLE_TYPE` or `JsonHeaders.TYPE_ID` and falls back to the configured type otherwise. The `ObjectToJsonTransformer` now also populates a `JsonHeaders.RESOLVABLE_TYPE` header based on the request message payload for any possible downstream scenarios. 
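The header-first type resolution with a fallback to the configured type, as described above, can be sketched as follows. This is an illustrative simplification, not the transformer's code; the `json_typeId` header key and the plain `Class.forName()` lookup are stand-ins for the real `JsonHeaders` handling:

```java
import java.util.Map;

// Sketch of "consult a type header first, fall back to the configured type"
// resolution. Not the JsonToObjectTransformer's actual code; the header name
// below is a hypothetical stand-in for the real JsonHeaders constants.
public class TargetTypeResolver {

    private final Class<?> targetType;

    public TargetTypeResolver(Class<?> targetType) {
        this.targetType = targetType;
    }

    public Class<?> resolve(Map<String, Object> headers) {
        Object typeId = headers.get("json_typeId"); // hypothetical header key
        if (typeId instanceof String) {
            try {
                return Class.forName((String) typeId);
            }
            catch (ClassNotFoundException ex) {
                // Unresolvable type id: fall back to the configured target type.
            }
        }
        return targetType;
    }
}
```

The key point mirrored here is that an unresolvable or absent type hint does not fail the conversion; the configured `targetType` is used instead.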
Starting with version 5.2.6, the `JsonToObjectTransformer` can be supplied with a `valueTypeExpression` to resolve a `ResolvableType` for the payload to convert from JSON at runtime against the request message. -By default it consults `JsonHeaders` in the request message. +By default, it consults `JsonHeaders` in the request message. If this expression returns `null` or `ResolvableType` building throws a `ClassNotFoundException`, the transformer falls back to the provided `targetType`. This logic is present as an expression because `JsonHeaders` may not have real class values, but rather some type ids which have to be mapped to target classes according to some external registry. diff --git a/src/reference/asciidoc/web-sockets.adoc b/src/reference/asciidoc/web-sockets.adoc index ce7509e1f49..a324db73b38 100644 --- a/src/reference/asciidoc/web-sockets.adoc +++ b/src/reference/asciidoc/web-sockets.adoc @@ -50,7 +50,7 @@ public interface WebSocketGateway { [[web-socket-overview]] === Overview -Since the WebSocket protocol is streaming by definition and we can send and receive messages to and from a WebSocket at the same time, we can deal with an appropriate `WebSocketSession`, regardless of being on the client or server side. +Since the WebSocket protocol is streaming by definition, and we can send and receive messages to and from a WebSocket at the same time, we can deal with an appropriate `WebSocketSession`, regardless of being on the client or server side. To encapsulate the connection management and `WebSocketSession` registry, the `IntegrationWebSocketContainer` is provided with `ClientWebSocketContainer` and `ServerWebSocketContainer` implementations. Thanks to the https://www.jcp.org/en/jsr/detail?id=356[WebSocket API] and its implementation in the Spring Framework (with many extensions), the same classes are used on the server side as well as the client side (from a Java perspective, of course).
Consequently, most connection and `WebSocketSession` registry options are the same on both sides. @@ -95,7 +95,7 @@ You must supply it with an `IntegrationWebSocketContainer`, and the adapter regis NOTE: Only one `WebSocketListener` can be registered in the `IntegrationWebSocketContainer`. -For WebSocket subprotocols, the `WebSocketInboundChannelAdapter` can be configured with `SubProtocolHandlerRegistry` as the second constructor argument. +For WebSocket sub-protocols, the `WebSocketInboundChannelAdapter` can be configured with `SubProtocolHandlerRegistry` as the second constructor argument. The adapter delegates to the `SubProtocolHandlerRegistry` to determine the appropriate `SubProtocolHandler` for the accepted `WebSocketSession` and to convert a `WebSocketMessage` to a `Message` according to the sub-protocol implementation. NOTE: By default, the `WebSocketInboundChannelAdapter` relies only on the raw `PassThruSubProtocolHandler` implementation, which converts the `WebSocketMessage` to a `Message`. diff --git a/src/reference/asciidoc/webflux.adoc b/src/reference/asciidoc/webflux.adoc index b38451d95d8..0a83d561ece 100644 --- a/src/reference/asciidoc/webflux.adoc +++ b/src/reference/asciidoc/webflux.adoc @@ -300,7 +300,7 @@ The `setExpectedResponseType(Class)` or `setExpectedResponseTypeExpression(Ex If `replyPayloadToFlux` is set to `true`, the response body is converted to a `Flux` with the provided `expectedResponseType` for each element, and this `Flux` is sent as the payload downstream. Afterwards, you can use a <<./splitter.adoc#splitter,splitter>> to iterate over this `Flux` in a reactive manner. -In addition a `BodyExtractor` can be injected into the `WebFluxRequestExecutingMessageHandler` instead of the `expectedResponseType` and `replyPayloadToFlux` properties. +In addition, a `BodyExtractor` can be injected into the `WebFluxRequestExecutingMessageHandler` instead of the `expectedResponseType` and `replyPayloadToFlux` properties.
It can be used for low-level access to the `ClientHttpResponse` and more control over body and HTTP headers conversion. Spring Integration provides `ClientHttpResponseBodyExtractor` as an identity function to produce (downstream) the whole `ClientHttpResponse` and any other possible custom logic. diff --git a/src/reference/asciidoc/ws.adoc b/src/reference/asciidoc/ws.adoc index b5e18db8455..f5c4c5c3fea 100644 --- a/src/reference/asciidoc/ws.adoc +++ b/src/reference/asciidoc/ws.adoc @@ -13,8 +13,8 @@ This chapter describes Spring Integration's support for web services, including: You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration {project-version} ---- - +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-ws:{project-version}" ---- @@ -216,7 +215,7 @@ IntegrationFlow inboundMarshalled() { ---- ==== -Other properties can be set on the endpoint specs in a fluent manner (with the properties depending on whether or not an external `WebServiceTemplate` has been provided for outbound gateways). +Other properties can be set on the endpoint specs in a fluent manner (with the properties depending on whether an external `WebServiceTemplate` has been provided for outbound gateways). Examples: ==== diff --git a/src/reference/asciidoc/xml.adoc b/src/reference/asciidoc/xml.adoc index 8453083c2ea..ff1c75bb304 100644 --- a/src/reference/asciidoc/xml.adoc +++ b/src/reference/asciidoc/xml.adoc @@ -253,7 +253,7 @@ The following XPath expression (which use the `myorder` namespace prefix) also m ==== The namespace URI is the really important piece of information, not the prefix.
-Thehttps://github.com/jaxen-xpath/jaxen[Jaxen] summarizes the point very well: +The https://github.com/jaxen-xpath/jaxen[Jaxen] summarizes the point very well: [quote] In XPath 1.0, all unprefixed names are unqualified. @@ -274,10 +274,10 @@ This section will explain the workings of the following transformers and how to * link:#xml-marshalling-transformer[MarshallingTransformer] * link:#xml-xslt-payload-transformers[XsltPayloadTransformer] -All of the XML transformers extend either https://docs.spring.io/spring-integration/api/org/springframework/integration/transformer/AbstractTransformer.html[`AbstractTransformer`] or https://docs.spring.io/spring-integration/api/org/springframework/integration/transformer/AbstractPayloadTransformer.html[`AbstractPayloadTransformer`] and therefore implement https://docs.spring.io/spring-integration/api/org/springframework/integration/transformer/Transformer.html[`Transformer`]. +All the XML transformers extend either https://docs.spring.io/spring-integration/api/org/springframework/integration/transformer/AbstractTransformer.html[`AbstractTransformer`] or https://docs.spring.io/spring-integration/api/org/springframework/integration/transformer/AbstractPayloadTransformer.html[`AbstractPayloadTransformer`] and therefore implement https://docs.spring.io/spring-integration/api/org/springframework/integration/transformer/Transformer.html[`Transformer`]. When configuring XML transformers as beans in Spring Integration, you would normally configure the `Transformer` in conjunction with a https://docs.spring.io/spring-integration/api/org/springframework/integration/transformer/MessageTransformingHandler.html[`MessageTransformingHandler`]. This lets the transformer be used as an endpoint. -Finally, we discuss the namespace support , which allows for configuring the transformers as elements in XML. +Finally, we discuss the namespace support, which allows for configuring the transformers as elements in XML. 
[[xml-unmarshalling-transformer]] ===== UnmarshallingTransformer @@ -907,7 +907,7 @@ The following listing shows all the available configuration parameters: ---- <1> Specifies the default boolean value for whether to overwrite existing header values. -This takes effect only for child elements that do not provide their own 'overwrite' attribute. +It takes effect only for child elements that do not provide their own 'overwrite' attribute. If you do not set the 'default-overwrite' attribute, the specified header values do not overwrite any existing ones with the same header names. Optional. <2> ID for the underlying bean definition. @@ -946,7 +946,7 @@ You must set either this attribute or `xpath-expression`, but not both. === Using the XPath Filter This component defines an XPath-based message filter. -Internally, this components uses a `MessageFilter` that wraps an instance of `AbstractXPathMessageSelector`. +Internally, this component uses a `MessageFilter` that wraps an instance of `AbstractXPathMessageSelector`. NOTE: See <<./filter.adoc#filter,Filter>> for further details. @@ -1000,7 +1000,7 @@ If you do not set this attribute, the XPath evaluation must produce a boolean re Optional. <6> The channel to which messages that matched the filter criteria are dispatched. Optional. -<7> By default, this property is set to `false` and rejected messages (messages that did not match the filter criteria) are silently dropped. +<7> By default, this property is set to `false` and rejected messages (those that did not match the filter criteria) are silently dropped. However, if set to `true`, message rejection results in an error condition and an exception being propagated upstream to the caller. Optional. <8> Reference to an XPath expression instance to evaluate.
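The boolean XPath selection underlying such a filter can be illustrated with the JDK's own `javax.xml.xpath` API. This is a minimal analogue, not the framework's `AbstractXPathMessageSelector`:

```java
import java.io.StringReader;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Minimal sketch of an XPath-based selector (not the framework class):
// accept an XML payload only when the expression evaluates to true.
public class XPathSelector {

    private final XPathExpression expression;

    public XPathSelector(String xpath) {
        try {
            this.expression = XPathFactory.newInstance().newXPath().compile(xpath);
        }
        catch (XPathExpressionException ex) {
            throw new IllegalArgumentException("Invalid XPath: " + xpath, ex);
        }
    }

    public boolean accept(String xmlPayload) {
        try {
            // XPathConstants.BOOLEAN mirrors the "must produce a boolean result" rule.
            return (Boolean) this.expression.evaluate(
                    new InputSource(new StringReader(xmlPayload)), XPathConstants.BOOLEAN);
        }
        catch (XPathExpressionException ex) {
            throw new IllegalStateException(ex);
        }
    }
}
```

For example, `new XPathSelector("/order/@status = 'ready'")` accepts `<order status='ready'/>` and rejects `<order status='new'/>`.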
diff --git a/src/reference/asciidoc/xmpp.adoc b/src/reference/asciidoc/xmpp.adoc index a109a06b1e2..d22d5c49e60 100644 --- a/src/reference/asciidoc/xmpp.adoc +++ b/src/reference/asciidoc/xmpp.adoc @@ -18,8 +18,8 @@ Spring integration provides support for XMPP by providing XMPP adapters, which s You need to include this dependency into your project: ==== +[source, xml, subs="normal", role="primary"] .Maven -[source, xml, subs="normal"] ---- org.springframework.integration @@ -28,8 +28,8 @@ You need to include this dependency into your project: ---- +[source, groovy, subs="normal", role="secondary"] .Gradle -[source, groovy, subs="normal"] ---- compile "org.springframework.integration:spring-integration-xmpp:{project-version}" ---- @@ -52,7 +52,7 @@ xsi:schemaLocation="http://www.springframework.org/schema/integration/xmpp Before using inbound or outbound XMPP adapters to participate in the XMPP network, an actor must establish its XMPP connection. All XMPP adapters connected to a particular account can share this connection object. -Typically this requires (at a minimum) `user`, `password`, and `host`. +Typically, this requires (at a minimum) `user`, `password`, and `host`. To create a basic XMPP connection, you can use the convenience of the namespace, as the following example shows: ==== @@ -195,7 +195,7 @@ Starting with version 4.3, the packet extension support has been added to the `C Along with the regular `String` and `org.jivesoftware.smack.packet.Message` payload, now you can send a message with a payload of `org.jivesoftware.smack.packet.ExtensionElement` (which is populated to the `org.jivesoftware.smack.packet.Message.addExtension()`) instead of `setBody()`. For convenience, we added an `extension-provider` option for the `ChatMessageSendingMessageHandler`. It lets you inject `org.jivesoftware.smack.provider.ExtensionElementProvider`, which builds an `ExtensionElement` against the payload at runtime. 
-For this case, the payload must be a string in JSON or XML format, depending of the XEP protocol. +For this case, the payload must be a string in JSON or XML format, depending on the XEP protocol. [[xmpp-presence]] === XMPP Presence @@ -302,7 +302,7 @@ For more complex cases (such as registering a SASL mechanism), you may need to e One of those static initializers is `SASLAuthentication`, which lets you register supported SASL mechanisms. For that level of complexity, we recommend using Spring Java configuration for the XMPP connection configuration. That way, you can configure the entire component through Java code and execute all other necessary Java code, including static initializers, at the appropriate time. -The following exampe shows how to configure an XMPP connection with an SASL (Simple Authentication and Security Layer) in Java: +The following example shows how to configure an XMPP connection with an SASL (Simple Authentication and Security Layer) in Java: ==== [source,java] @@ -329,8 +329,7 @@ For more information on using Java for application context configuration, see th === XMPP Message Headers The Spring Integration XMPP Adapters automatically map standard XMPP properties. -By default, these properties are copied to and from Spring Integration `MessageHeaders` by using -https://docs.spring.io/spring-integration/api/org/springframework/integration/xmpp/support/DefaultXmppHeaderMapper.html[`DefaultXmppHeaderMapper`]. +By default, these properties are copied to and from Spring Integration `MessageHeaders` by using https://docs.spring.io/spring-integration/api/org/springframework/integration/xmpp/support/DefaultXmppHeaderMapper.html[`DefaultXmppHeaderMapper`]. Any user-defined headers are not copied to or from an XMPP Message, unless explicitly specified by the `requestHeaderNames` or `replyHeaderNames` properties of the `DefaultXmppHeaderMapper`. 
diff --git a/src/reference/asciidoc/zeromq.adoc b/src/reference/asciidoc/zeromq.adoc
index 0bab4d9a11d..134a905f63b 100644
--- a/src/reference/asciidoc/zeromq.adoc
+++ b/src/reference/asciidoc/zeromq.adoc
@@ -37,7 +37,7 @@ See `ZeroMqProxy.Type` for details.
 The `ZeroMqProxy` implements `SmartLifecycle` to create, bind and configure the sockets and to start `ZMQ.proxy()` in a dedicated thread from an `Executor` (if any).
 The binding for frontend and backend sockets is done over the `tcp://` protocol onto all of the available network interfaces with the provided ports.
-Otherwise they are bound to random ports which can be obtained later via the respective `getFrontendPort()` and `getBackendPort()` API methods.
+Otherwise, they are bound to random ports which can be obtained later via the respective `getFrontendPort()` and `getBackendPort()` API methods.
 
 The control socket is exposed as a `SocketType.PAIR` with an inter-thread transport on the `"inproc://" + beanName + ".control"` address; it can be obtained via `getControlAddress()`.
 It should be used with the same application from another `SocketType.PAIR` socket to send `ZMQ.PROXY_TERMINATE`, `ZMQ.PROXY_PAUSE` and/or `ZMQ.PROXY_RESUME` commands.
@@ -73,13 +73,13 @@ All the client nodes should connect to the host of this proxy via `tcp://` and u
 The `ZeroMqChannel` is a `SubscribableChannel` which uses a pair of ZeroMQ sockets to connect publishers and subscribers for messaging interaction.
 It can work in a PUB/SUB mode (defaults to PUSH/PULL); it can also be used as a local inter-thread channel (uses `PAIR` sockets) - the `connectUrl` is not provided in this case.
 In distributed mode it has to be connected to an externally managed ZeroMQ proxy, where it can exchange messages with other similar channels connected to the same proxy.
-The connect url option is a standard ZeroMQ connection string with the protocol and host and a pair of ports over colon for frontend and backend sockets of the ZeroMQ proxy.
+The connection url option is a standard ZeroMQ connection string with the protocol and host and a pair of ports over colon for frontend and backend sockets of the ZeroMQ proxy.
 For convenience, the channel could be supplied with the `ZeroMqProxy` instance instead of connection string, if it is configured in the same application as the proxy.
 
 Both sending and receiving sockets are managed in their own dedicated threads making this channel concurrency-friendly.
 This way we can publish and consume to/from a `ZeroMqChannel` from different threads without synchronization.
 
-By default the `ZeroMqChannel` uses an `EmbeddedJsonHeadersMessageMapper` to (de)serialize the `Message` (including headers) from/to `byte[]` using a Jackson JSON processor.
+By default, the `ZeroMqChannel` uses an `EmbeddedJsonHeadersMessageMapper` to (de)serialize the `Message` (including headers) from/to `byte[]` using a Jackson JSON processor.
 This logic can be configured via `setMessageMapper(BytesMessageMapper)`.
 
 Sending and receiving sockets can be customized for any options (read/write timeout, security etc.) via respective `setSendSocketConfigurer(Consumer<ZMQ.Socket>)` and `setSubscribeSocketConfigurer(Consumer<ZMQ.Socket>)` callbacks.
@@ -117,7 +117,7 @@ The actual port can be obtained via `getBoundPort()` after this component is sta
 The socket options (e.g. security or write timeout) can be configured via `setSocketConfigurer(Consumer<ZMQ.Socket> socketConfigurer)` callback.
 
 If the `receiveRaw` option is set to `true`, a `ZMsg`, consumed from the socket, is sent as is in the payload of the produced `Message`: it's up to the downstream flow to parse and convert the `ZMsg`.
-Otherwise an `InboundMessageMapper` is used to convert the consumed data into a `Message`.
+Otherwise, an `InboundMessageMapper` is used to convert the consumed data into a `Message`.
 If the received `ZMsg` is multi-frame, the first frame is treated as the `ZeroMqHeaders.TOPIC` header this ZeroMQ message was published to.
 With `SocketType.SUB`, the `ZeroMqMessageProducer` uses the provided `topics` option for subscriptions; defaults to subscribe to all.
@@ -150,8 +150,8 @@ The `ZeroMqMessageHandler` only supports connecting the ZeroMQ socket; binding i
 When the `SocketType.PUB` is used, the `topicExpression` is evaluated against a request message to inject a topic frame into a ZeroMQ message if it is not null.
 The subscriber side (`SocketType.SUB`) must receive the topic frame first before parsing the actual data.
 When the payload of the request message is a `ZMsg`, no conversion or topic extraction is performed: the `ZMsg` is sent into a socket as is and it is not destroyed for possible further reuse.
-Otherwise an `OutboundMessageMapper` is used to convert a request message (or just its payload) into a ZeroMQ frame to publish.
-By default a `ConvertingBytesMessageMapper` is used supplied with a `ConfigurableCompositeMessageConverter`.
+Otherwise, an `OutboundMessageMapper` is used to convert a request message (or just its payload) into a ZeroMQ frame to publish.
+By default, a `ConvertingBytesMessageMapper` is used supplied with a `ConfigurableCompositeMessageConverter`.
 The socket options (e.g. security or write timeout) can be configured via `setSocketConfigurer(Consumer<ZMQ.Socket> socketConfigurer)` callback.
 
 Here is a sample of `ZeroMqMessageHandler` configuration:
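The sample that the last context line announces falls outside this hunk. As a rough, hypothetical sketch only (not the commit's own example; the channel name, the `tcp://localhost:6060` address, and the topic are placeholders), such a handler configuration might look like:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.zeromq.outbound.ZeroMqMessageHandler;
import org.zeromq.SocketType;
import org.zeromq.ZContext;

@Configuration
public class ZeroMqPublisherConfiguration {

    @Bean
    ZContext zContext() {
        return new ZContext();
    }

    // Messages sent to 'zeroMqPublisherChannel' are published to the connected PUB socket.
    @Bean
    @ServiceActivator(inputChannel = "zeroMqPublisherChannel")
    ZeroMqMessageHandler zeroMqMessageHandler(ZContext context) {
        ZeroMqMessageHandler handler =
                new ZeroMqMessageHandler(context, "tcp://localhost:6060", SocketType.PUB);
        // The topic frame is sent before the payload frame, so SUB sockets can filter on it.
        handler.setTopicExpression(new LiteralExpression("someTopic"));
        return handler;
    }
}
```

A SUB-side `ZeroMqMessageProducer` would then subscribe to `someTopic` (or to all topics, the default) and see the topic frame mapped to the `ZeroMqHeaders.TOPIC` header, as described above.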