Proofread docs and fix issues
kumaranvpl committed Oct 13, 2023
1 parent 8acc97d commit ad38221
Showing 12 changed files with 61 additions and 61 deletions.
6 changes: 3 additions & 3 deletions docs/docs/en/getting-started/publishing/decorator.md
Original file line number Diff line number Diff line change
@@ -5,7 +5,7 @@ The second easiest way to publish messages is by using the Publisher Decorator.
It creates a structured DataPipeline unit with an input and output. The order of Subscriber and Publisher decorators doesn't matter, but `#!python @broker.publisher(...)` can be used only with functions already decorated by a `#!python @broker.subscriber(...)`.

!!! note
It uses the handler function's return type annotation to cast the function's return value before sending, so be accurate with it
It uses the handler function's return type annotation to cast the function's return value before sending, so be accurate with it.

{!> includes/getting_started/publishing/decorator/1.md !}

@@ -21,10 +21,10 @@ async def handle(msg) -> str:
return "Response"
```

This way you will send the copy of your return to the all output topics.
This way, you will send a copy of your return to all output topics.

!!! note
Also, if this subscriber consumes a message with **RPC** mode, it sends reply not only to **RPC** channel, but to all publishers as well.
    Also, if this subscriber consumes a message with **RPC** mode, it sends a reply not only to the **RPC** channel but also to all publishers.

## Details

4 changes: 2 additions & 2 deletions docs/docs/en/getting-started/publishing/direct.md
@@ -6,7 +6,7 @@ This method creates a reusable Publisher object that can be used directly to pub

{!> includes/getting_started/publishing/direct/1.md !}

It is something in the middle between [broker publish](./broker.md){.internal-link} and [object decorator](./object.md){.internal-link}. It has an **AsyncAPI** representation and *testability* features (like **object decorator**), but allows you to send different messages to different outputs (like **broker publish**).
It is something in the middle between [broker publish](./broker.md){.internal-link} and [object decorator](./object.md){.internal-link}. It has an **AsyncAPI** representation and *testability* features (like the **object decorator**), but allows you to send different messages to different outputs (like the **broker publish**).

```python
@broker.subscriber("in")
@@ -16,4 +16,4 @@ async def handle(msg) -> str:
```

!!! note
Using this way **FastStream** doesn't reuse incoming `correlation_id` to mark outgoing messages with it. You should set it manually, if it is required.
When using this method, **FastStream** doesn't reuse the incoming `correlation_id` to mark outgoing messages with it. You should set it manually if it is required.
12 changes: 6 additions & 6 deletions docs/docs/en/getting-started/publishing/index.md
@@ -5,33 +5,33 @@
It offers several use cases for publishing messages:

* Using `#!python broker.publish(...)`
* Using a `#!python @broker.publisher(...)` decorator
* Using the `#!python @broker.publisher(...)` decorator
* Using a publisher object decorator
* Using a publisher object directly

All of these variants have their own advantages and limitations, so you can choose you want based on your demands. Please, visit the following pages for details.
All of these variants have their own advantages and limitations, so you can choose what you want based on your requirements. Please visit the following pages for details.

## Serialization

**FastStream** allows you to publish any JSON-serializable messages (Python types, Pydantic models, etc.) or raw bytes.

It automatically sets up all required headers, especially the `correlation_id`, which is used to trace message processing pipelines across all services.

`content-type` is a meaningfull header for **FastStream** services. It helps framework to serialize messages faster, selecting the right serializer based on the header. This header is setted automatically by **FastStream** too, but you should set it up manually using another libraries to interact with **FastStream** application.
The `content-type` is a meaningful header for **FastStream** services. It helps the framework serialize messages faster by selecting the right serializer based on the header. **FastStream** sets this header automatically too, but you should set it manually when using other libraries to interact with **FastStream** applications.

Content-Type can be:

* `text/plain`
* `application/json`
* empty with bytes content

Btw, you can use `application/json` for all of your messages if they are not raw bytes. You can event don't use any header at all, but it makes serialization slower a bit.
By the way, you can use `application/json` for all of your messages if they are not raw bytes. You can even omit the header entirely, but this makes serialization slightly slower.

## Publishing

**FastStream** also can be used as just a Broker client to send messages in your another applications. It is a really easy and pretty close to *aiohttp* or *requests*.
**FastStream** can also be used as a Broker client to send messages in other applications. It is quite straightforward and similar to *aiohttp* or *requests*.

You just need to `#!python connect` your broker - and you are already able to send a message. Also, you can use *Broker* as an async context manager to connect and disconnect at scope exit.
You just need to `#!python connect` your broker, and you are ready to send a message. Additionally, you can use *Broker* as an async context manager to establish a connection and disconnect when leaving the scope.

To publish a message, simply set up the message content and a routing key:

8 changes: 4 additions & 4 deletions docs/docs/en/getting-started/publishing/object.md
@@ -2,10 +2,10 @@

The Publisher Object provides a full-featured way to publish messages. It has an [**AsyncAPI**](../asyncapi/custom.md){.internal-link} representation and includes [testability](./test.md){.internal-link} features. This method creates a reusable Publisher object.

Also, this object can be used as a decorator too. The order of Subscriber and Publisher decorators doesn't matter, but `#!python @publisher` can be used only with functions already decorated by a `#!python @broker.subscriber(...)`.
Additionally, this object can be used as a decorator. The order of Subscriber and Publisher decorators doesn't matter, but `#!python @publisher` can be used only with functions already decorated by a `#!python @broker.subscriber(...)`.

!!! note
It uses the handler function's return type annotation to cast the function's return value before sending, so be accurate with it
It uses the handler function's return type annotation to cast the function's return value before sending, so be accurate with it.

{!> includes/getting_started/publishing/object/1.md !}

@@ -21,10 +21,10 @@ async def handle(msg) -> str:
return "Response"
```

This way you will send the copy of your return to the all output topics.
This way, you will send a copy of your return to all output topics.

!!! note
Also, if this subscriber consumes a message with **RPC** mode, it sends reply not only to **RPC** channel, but to all publishers as well.
    Also, if this subscriber consumes a message with **RPC** mode, it sends a reply not only to the **RPC** channel but also to all publishers.

## Details

16 changes: 8 additions & 8 deletions docs/docs/en/getting-started/publishing/test.md
@@ -1,14 +1,14 @@
# Publisher Testing

If you are working with a Publisher object (either decorator or direct), you have a several testing features available:
If you are working with a Publisher object (either as a decorator or directly), you have several testing features available:

* In-memory TestClient
* Publishing locally with errors propogation
* Publishing locally with error propagation
* Checking the incoming message body

## Base application
## Base Application

Lets take a look at the simple application example with publisher as a decorator/manuall call:
Let's take a look at a simple application example with a publisher as a decorator or as a direct call:

=== "Decorator"
{!> includes/getting_started/publishing/testing/1.md !}
@@ -18,19 +18,19 @@ Lets take a look at the simple application example with publisher as a decorator

## Testing

To test it you just need to patch your broker by special *TestBroker*
To test it, you just need to patch your broker with a special *TestBroker*.

{!> includes/getting_started/publishing/testing/3.md !}

By default, it patches your broker to run **In-Memory**, so you can use it without any external broker. It should be extremely useful in your CI or local development environment.

Also, it allows you to check outgoing message body the same way with a [subscriber](../subscription/test.md#validates-input){.internal-link}
Also, it allows you to check the outgoing message body in the same way as with a [subscriber](../subscription/test.md#validates-input){.internal-link}.

```python
publisher.mock.assert_called_once_with("Hi!")
```

!!! note
Publisher mock contains not just a `publish` method input value. It setups a virtual consumer for an outgoing topic, consumes a message and store this consumed one.
The Publisher mock contains not just a `publish` method input value. It sets up a virtual consumer for an outgoing topic, consumes a message, and stores this consumed one.

Also, *TestBroker* can be used with the real external broker to make you tests end-to-end suitable. To find more information, please visit [subscriber testing page](../subscription/test.md#real-broker-testing){.internal-link}
Additionally, *TestBroker* can be used with a real external broker to make your tests end-to-end suitable. For more information, please visit the [subscriber testing page](../subscription/test.md#real-broker-testing){.internal-link}.
26 changes: 13 additions & 13 deletions docs/docs/en/kafka/Publisher/batch_publisher.md
@@ -1,18 +1,18 @@
# Publishing in Batches

## General overview
## General Overview

If you need to send your data in batches, the `#!python @broker.publisher(...)` decorator offers a convenient way to achieve this. To enable batch production, you need to perform two crucial steps:

1. When creating your publisher, set the batch argument to `True`. This configuration tells the publisher that you intend to send messages in batches.

2. In your producer function, return a tuple containing the messages you want to send as a batch. This action triggers the producer to gather the messages and transmit them as a batch to a **Kafka** broker.

Let's delve into a detailed example illustrating how to produce messages in batches to the output_data topic while consuming from the `#!python "input_data_1"` topic.
Let's delve into a detailed example illustrating how to produce messages in batches to the `#!python "output_data"` topic while consuming from the `#!python "input_data_1"` topic.

## Code example
## Code Example

First, lets take a look at the whole app creation and then dive deep into the steps for producing in batches, here is the application code:
First, let's take a look at the whole app creation and then dive deep into the steps for producing in batches. Here is the application code:

```python linenums="1"
{!> docs_src/kafka/publish_batch/app.py!}
@@ -28,30 +28,30 @@ Step 1: Creation of the Publisher

Step 2: Publishing an Actual Batch of Messages

You can publish a batch by directly calling the publisher with a batch of messages you want to publish, like shown here:
You can publish a batch by directly calling the publisher with a batch of messages you want to publish, as shown here:

```python linenums="1"
{!> docs_src/kafka/publish_batch/app.py [ln:32-34] !}
```

Or you can decorate your processing function and return a batch of messages like shown here:
Or you can decorate your processing function and return a batch of messages, as shown here:

```python linenums="1"
{!> docs_src/kafka/publish_batch/app.py [ln:22-26] !}
```

The application in the example imelements both of these ways, feel free to use whatever option fits your needs better.
The application in the example implements both of these ways, so feel free to use whichever option fits your needs better.

## Why publish in batches?
## Why Publish in Batches?

In this example, we've explored how to leverage the `#!python @broker.publisher(...)` decorator to efficiently publish messages in batches using **FastStream** and **Kafka**. By following the two key steps outlined in the previous sections, you can significantly enhance the performance and reliability of your **Kafka**-based applications.
In the above example, we've explored how to leverage the `#!python @broker.publisher(...)` decorator to efficiently publish messages in batches using **FastStream** and **Kafka**. By following the two key steps outlined in the previous sections, you can significantly enhance the performance and reliability of your **Kafka**-based applications.

Publishing messages in batches offers several advantages when working with **Kafka**:

1. Improved Throughput: Batch publishing allows you to send multiple messages in a single transmission, reducing the overhead associated with individual message delivery. This leads to improved throughput and lower latency in your **Kafka** applications.
1. **Improved Throughput**: Batch publishing allows you to send multiple messages in a single transmission, reducing the overhead associated with individual message delivery. This leads to improved throughput and lower latency in your **Kafka** applications.

2. Reduced Network and Broker Load: Sending messages in batches reduces the number of network calls and broker interactions. This optimization minimizes the load on the **Kafka** brokers and network resources, making your **Kafka** cluster more efficient.
2. **Reduced Network and Broker Load**: Sending messages in batches reduces the number of network calls and broker interactions. This optimization minimizes the load on the **Kafka** brokers and network resources, making your **Kafka** cluster more efficient.

3. Atomicity: Batches ensure that a group of related messages is processed together or not at all. This atomicity can be crucial in scenarios where message processing needs to maintain data consistency and integrity.
3. **Atomicity**: Batches ensure that a group of related messages is processed together or not at all. This atomicity can be crucial in scenarios where message processing needs to maintain data consistency and integrity.

4. Enhanced Scalability: With batch publishing, you can efficiently scale your **Kafka** applications to handle high message volumes. By sending messages in larger chunks, you can make the most of **Kafka**'s parallelism and partitioning capabilities.
4. **Enhanced Scalability**: With batch publishing, you can efficiently scale your **Kafka** applications to handle high message volumes. By sending messages in larger chunks, you can make the most of **Kafka**'s parallelism and partitioning capabilities.
4 changes: 2 additions & 2 deletions docs/docs/en/kafka/Subscriber/index.md
@@ -1,6 +1,6 @@
# Basic Subscriber

To start consuming from a **Kafka** topic, just decorate your consuming function with a `#!python @broker.subscriber(...)` decorator, passing a string as a topic key.
To start consuming from a **Kafka** topic, simply decorate your consuming function with a `#!python @broker.subscriber(...)` decorator, passing a string as a topic key.

In the following example, we will create a simple FastStream app that will consume `HelloWorld` messages from a `#!python "hello_world"` topic.

@@ -12,7 +12,7 @@ The full app code looks like this:

## Import FastStream and KafkaBroker

To use the `#!python @broker.subscriber(...)` decorator, first we need to import the base FastStream app KafkaBroker to create our broker.
To use the `#!python @broker.subscriber(...)` decorator, we first need to import **FastStream** and **KafkaBroker** to create our broker.

```python linenums="1"
{!> docs_src/kafka/consumes_basics/app.py [ln:3-4] !}
2 changes: 1 addition & 1 deletion docs/docs/en/kafka/message.md
@@ -4,7 +4,7 @@ As you may know, **FastStream** serializes a message body and provides you acces

## Message Access

You can easily access this information by referring to the message object in the [Context](../getting-started/context/existed.md)!
You can easily access this information by referring to the message object in the [Context](../getting-started/context/existed.md).

This object serves as a unified **FastStream** wrapper around the native broker library message (for example, `aiokafka.ConsumerRecord` in the case of *Kafka*). It contains most of the required information, including:

16 changes: 8 additions & 8 deletions docs/docs/en/nats/message.md
@@ -1,16 +1,16 @@
# Access to Message Information

As you know, **FastStream** serializes a message body and provides you access to it through function arguments. But sometimes you want access to message_id, headers, or other meta-information.
As you know, **FastStream** serializes a message body and provides you access to it through function arguments. But sometimes you need to access message_id, headers, or other meta-information.

## Message Access

You can get it in a simple way: just acces the message object in the [Context](../getting-started/context/existed.md){.internal-link}!
You can get it in a simple way: just access the message object in the [Context](../getting-started/context/existed.md){.internal-link}.

It contains the required information such as:

{!> includes/message/attrs.md !}

It is a **FastStream** wrapper around a native broker library message (`nats.aio.msg.Msg` in the *NATS*' case), you can access with `raw_message`.
It is a **FastStream** wrapper around a native broker library message (`nats.aio.msg.Msg` in the case of *NATS*) that you can access via `raw_message`.

```python hl_lines="1 6"
from faststream.nats.annotations import NatsMessage
@@ -23,7 +23,7 @@ async def base_handler(
print(msg.correlation_id)
```

Also, if you can't find the information you reqiure, you can get access directly to the wrapped `nats.aio.msg.Msg`, which contains complete message information.
Also, if you can't find the information you require, you can get access directly to the wrapped `nats.aio.msg.Msg`, which contains complete message information.

```python hl_lines="6"
from nats.aio.msg import Msg
Expand All @@ -39,7 +39,7 @@ async def base_handler(body: str, msg: NatsMessage):

But in most cases, you don't need all message fields; you need to access some of them. You can use [Context Fields access](../getting-started/context/fields.md){.internal-link} feature for this reason.

For example, you can get access to the `correlation_id` like this:
For example, you can access the `correlation_id` like this:

```python hl_lines="6"
from faststream import Context
@@ -73,9 +73,9 @@ But this code is too long to reuse everywhere. In this case, you can use a Pytho

## Subject Pattern Access

As you know, **NATS** allows you to use pattern like this `logs.*` to subscriber on subjects. Getting access to the real `*` value is an often usecase and **FastStream** provide to you it with the `Path` object (it is shortcut to `#!python Context("message.path.*")`).
As you know, **NATS** allows you to use a pattern like `logs.*` to subscribe to subjects. Getting access to the real `*` value is a common use case, and **FastStream** provides it to you with the `Path` object (which is a shortcut to `#!python Context("message.path.*")`).

To use it you just need to replace your `*` by `{variable-name}` and use `Path` as a regular `Context` object:
To use it, you just need to replace your `*` with `{variable-name}` and use `Path` as a regular `Context` object:

```python hl_lines="3 6"
from faststream import Path
@@ -86,4 +86,4 @@ async def base_handler(
level: str = Path(),
):
...
```
```