Merge pull request #125 from snacqs/docs-improvements
docs: minor improvements
manast authored Feb 4, 2020
2 parents 195e877 + a882a4a commit 36726bf
Showing 8 changed files with 19 additions and 29 deletions.
15 changes: 7 additions & 8 deletions docs/gitbook/README.md
@@ -13,7 +13,7 @@ $ yarn add bullmq
```

{% hint style="info" %}
-Bull is written in typescript, and although it can be used in vanilla javascript, all examples in this guide will be written in typescript.
+Bull is written in TypeScript, and although it can be used in vanilla JavaScript, all examples in this guide will be written in TypeScript.
{% endhint %}

Import into your project and add some jobs:
@@ -32,10 +32,10 @@ addJobs();
```
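
The add-jobs example itself is collapsed in this diff. As a rough sketch of what it boils down to (the queue name and payloads here are illustrative assumptions, not taken from the commit):

```typescript
import { Queue } from 'bullmq';

// Illustrative queue name; with no options the default local Redis connection is used.
const myQueue = new Queue('Paint');

async function addJobs() {
  await myQueue.add('wall', { color: 'pink' });
  await myQueue.add('ceiling', { color: 'white' });
}

addJobs();
```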

{% hint style="danger" %}
-You need to have a Redis service running in your local computer to run these examples successfully. You can read more about redis connections [here](guide/connections.md).
+You need to have a Redis service running in your local computer to run these examples successfully. You can read more about Redis connections [here](guide/connections.md).
{% endhint %}

-Jobs are added to the queue and can be processed at any time, with at least one Nodejs process running a worker:
+Jobs are added to the queue and can be processed at any time, with at least one Node.js process running a worker:

```typescript
import { Worker } from 'bullmq'
@@ -48,7 +48,7 @@ const worker = new Worker(queueName, async job => {
```
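
The worker snippet is likewise partially collapsed; a minimal, self-contained sketch (queue name and processing logic are assumed for illustration) looks roughly like this:

```typescript
import { Worker } from 'bullmq';

// The processor simply echoes the job payload; real work would go here.
const worker = new Worker('Paint', async job => {
  console.log(`Processing job ${job.id} with data`, job.data);
});
```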

{% hint style="info" %}
-You can have has many worker processes you want, BullMQ will distribute the jobs across your workers in a round robin fashion.
+You can have as many worker processes as you want; BullMQ will distribute the jobs across your workers in a round-robin fashion.
{% endhint %}

You can listen to completed \(or failed\) jobs by attaching listeners to the workers:
@@ -64,10 +64,10 @@ worker.on('failed', (job, err) => {
```
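
The collapsed listener code amounts to roughly the following, building on the `worker` instance created above (log messages are illustrative):

```typescript
worker.on('completed', job => {
  console.log(`${job.id} has completed!`);
});

worker.on('failed', (job, err) => {
  console.log(`${job.id} has failed with ${err.message}`);
});
```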

{% hint style="info" %}
-There are many other events available, check the [Guide ](guide/events.md)or the[ API reference](api-reference.md) for more information.
+There are many other events available, check the [Guide](guide/events.md) or the [API reference](api-reference.md) for more information.
{% endhint %}

-Sometimes you need to listen to all the workers events in a given place, for this you need to use a special class`QueueEvents`:
+Sometimes you need to listen to all the workers' events in a given place; for this you need to use a special class, `QueueEvents`:

```typescript
import { QueueEvents } from 'bullmq'
@@ -104,6 +104,5 @@ queueEvents.on('progress', ({ jobId, data }, timestamp) => {
```
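
The collapsed `QueueEvents` example reduces to something like this sketch (the handlers shown are illustrative):

```typescript
import { QueueEvents } from 'bullmq';

const queueEvents = new QueueEvents('Paint');

queueEvents.on('completed', ({ jobId }) => {
  console.log(`Job ${jobId} completed`);
});

queueEvents.on('failed', ({ jobId, failedReason }) => {
  console.error(`Job ${jobId} failed: ${failedReason}`);
});
```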

{% hint style="danger" %}
-For performance reasons the events emited by a `QueueEvents` instance do not contain the `Job` instance, only the `jobId`. Use the `Queue##getJob` method if you need the `Job` instance.
+For performance reasons the events emitted by a `QueueEvents` instance do not contain the `Job` instance, only the `jobId`. Use the `Queue#getJob` method if you need the `Job` instance.
{% endhint %}
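
A sketch of that lookup pattern, fetching the full `Job` from the `jobId` delivered by a `QueueEvents` listener (queue and event names are illustrative):

```typescript
import { Queue, QueueEvents } from 'bullmq';

const queue = new Queue('Paint');
const queueEvents = new QueueEvents('Paint');

queueEvents.on('completed', async ({ jobId }) => {
  // The event only carries the id; look the job up when its data is needed.
  const job = await queue.getJob(jobId);
  if (job) {
    console.log('Completed job data:', job.data);
  }
});
```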

3 changes: 1 addition & 2 deletions docs/gitbook/guide/architecture.md
@@ -8,9 +8,8 @@ description: >-

In order to use the full potential of Bull queues, it is important to understand the lifecycle of a job. From the moment a producer calls the `add` method on a queue instance, a job enters a lifecycle where it will be in different states, until its completion or failure \(although technically a failed job could be retried and get a new lifecycle\).

-![](../.gitbook/assets/image.png)
+![Lifecycle of a job](../.gitbook/assets/image.png)

When a job is added to a queue it can be in one of two states, it can either be in the “wait” status, which is, in fact, a waiting list, where all jobs must enter before they can be processed, or it can be in a “delayed” status: a delayed status implies that the job is waiting for some timeout or to be promoted for being processed, however, a delayed job will not be processed directly, instead it will be placed at the beginning of the waiting list and processed as soon as a worker is idle.

The next state for a job is the “active” state. The active state is represented by a set and contains the jobs that are currently being processed, i.e. they are running in the `process` function explained in the previous chapter. A job can be in the active state for an unlimited amount of time until the process is completed or an exception is thrown, so that the job ends in either the “completed” or the “failed” status.
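
As a small illustration of this lifecycle (queue name, job name and payload are assumptions), a delayed job can be observed moving through the states with `Job#getState` and `Queue#getJobCounts`:

```typescript
import { Queue } from 'bullmq';

const queue = new Queue('Paint');

async function inspectLifecycle() {
  // The job starts out "delayed", moves to the waiting list once its delay
  // expires, becomes "active" while a worker processes it, and finally ends
  // up "completed" or "failed".
  const job = await queue.add('wall', { color: 'pink' }, { delay: 5000 });
  console.log(await job.getState()); // e.g. 'delayed'

  // Per-state counts for the whole queue.
  console.log(
    await queue.getJobCounts('delayed', 'waiting', 'active', 'completed', 'failed'),
  );
}

inspectLifecycle();
```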

7 changes: 3 additions & 4 deletions docs/gitbook/guide/connections.md
@@ -2,7 +2,7 @@

In order to start working with a Queue, a connection to a Redis instance is necessary. BullMQ uses the node module [ioredis](https://github.com/luin/ioredis), and the options you pass to BullMQ are just passed to the constructor of ioredis. If you do not provide any options, it will default to port 6379 and localhost.

-Every class will consume at least one redis connection, but it is also possible to reuse connections in some situations. For example, the _Queue_ and _Worker_ classes can accept an existing ioredis instance, and by that reusing that connection, however _QueueScheduler_ and _QueueEvents_ cannot do that because they require blocking connections to Redis, which makes it impossible to reuse them.
+Every class will consume at least one Redis connection, but it is also possible to reuse connections in some situations. For example, the _Queue_ and _Worker_ classes can accept an existing ioredis instance, and by that reusing that connection, however _QueueScheduler_ and _QueueEvents_ cannot do that because they require blocking connections to Redis, which makes it impossible to reuse them.

Some examples:

@@ -27,12 +27,11 @@ import IORedis from 'ioredis';

const connection = new IORedis();

-// Reuse the redis instance
+// Reuse the ioredis instance
const myQueue = new Queue('myqueue', { connection });
const myWorker = new Worker('myworker', { connection });
```
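
The collapsed first example presumably passes plain connection options instead of an ioredis instance; a sketch of that approach (host, port and names are illustrative, and a trivial processor is included so the snippet is complete):

```typescript
import { Queue, Worker } from 'bullmq';

// Each class opens its own connection built from these options.
const connection = { host: 'localhost', port: 6379 };

const myQueue = new Queue('myqueue', { connection });
const myWorker = new Worker('myqueue', async job => job.data, { connection });
```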

-Note that in the second example, even though the redis instance is being reused, the worker will create a duplicated connection that it needs internally to make blocking connections.
+Note that in the second example, even though the ioredis instance is being reused, the worker will create a duplicated connection that it needs internally to make blocking connections.

If you can afford many connections, by all means just use them. Redis connections have quite low overhead, so you should not need to care about reusing connections unless your service provider is imposing you hard limitations.

7 changes: 2 additions & 5 deletions docs/gitbook/guide/introduction.md
@@ -2,9 +2,6 @@

BullMQ is based on 4 classes that together can be used to resolve many different problems. These classes are _**Queue**_, _**Worker**_, _**QueueScheduler**_ and _**QueueEvents**_.

-The first class you should know about is the Queue class. This class represents a queue and can be used for adding _**jobs**_ to the queue as well as some other basic manipulation such as pausing, cleaning or getting data from the queue.

-Jobs in BullMQ are basically a user created data structure that can be stored in the queue. Jobs are processed by _**workers**_. A Worker is the second class you should be aware about. Workers are instances capable of processing jobs. You can have many workers, either running in the same NodeJS process, or in separate processes as well as in different machines. They will all consume jobs from the queue and mark the jobs as completed or failed.


The first class you should know about is the Queue class. This class represents a queue and can be used for adding _**jobs**_ to the queue as well as some other basic manipulation such as pausing, cleaning or getting data from the queue.

+Jobs in BullMQ are basically a user created data structure that can be stored in the queue. Jobs are processed by _**workers**_. A Worker is the second class you should be aware about. Workers are instances capable of processing jobs. You can have many workers, either running in the same Node.js process, or in separate processes as well as in different machines. They will all consume jobs from the queue and mark the jobs as completed or failed.
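
A minimal sketch that puts the four classes side by side (the queue name is an assumption; note that `QueueScheduler` and `QueueEvents` each need their own blocking connection, as the connections guide explains):

```typescript
import { Queue, Worker, QueueScheduler, QueueEvents } from 'bullmq';

const queueName = 'Paint';

const queue = new Queue(queueName);              // add and manage jobs
const scheduler = new QueueScheduler(queueName); // promotes delayed jobs, recovers stalled ones
const events = new QueueEvents(queueName);       // global event stream for the queue
const worker = new Worker(queueName, async job => {
  // process the job here
});
```
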
3 changes: 1 addition & 2 deletions docs/gitbook/guide/jobs/README.md
@@ -2,5 +2,4 @@

Queues can hold different types of jobs which determine how and when they are processed. In this section we will describe them in detail.

-An important thing to consider is that you can mix the different job types in the same queue, so you can add FIFO jobs, and at any moment add a LIFO or a delayed job.

+An important thing to consider is that you can mix the different job types in the same queue, so you can add FIFO jobs, and at any moment add a LIFO or a delayed job.
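
A sketch of such mixing (names and values are illustrative):

```typescript
import { Queue } from 'bullmq';

const queue = new Queue('Paint');

async function addMixedJobs() {
  // Standard FIFO job.
  await queue.add('wall', { color: 'pink' });

  // LIFO job: placed at the head of the waiting list.
  await queue.add('ceiling', { color: 'white' }, { lifo: true });

  // Delayed job: processed at least five seconds from now.
  await queue.add('floor', { color: 'grey' }, { delay: 5000 });
}

addMixedJobs();
```
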
5 changes: 2 additions & 3 deletions docs/gitbook/guide/jobs/stalled.md
@@ -8,9 +8,9 @@ When a worker is not able to notify the queue that it is still working on a give
There is not a 'stalled' state, only a 'stalled' event emitted when a job is automatically moved from active to waiting state.
{% endhint %}

-In order to avoid stalled jobs, make sure that your worker does not keep NodeJS event loop too busy, the default max stalled check duration is 30 seconds, so as long as you do not perform CPU operations exceeding that value you should not get stalled jobs.
+In order to avoid stalled jobs, make sure that your worker does not keep the Node.js event loop too busy; the default max stalled check duration is 30 seconds, so as long as you do not perform CPU operations exceeding that value you should not get stalled jobs.

-Another way to reduce the chance for stalled jobs is using so called "sandboxed" processors. In this case, the workers will spawn new separate NodeJS processes, running separately from the main process.
+Another way to reduce the chance for stalled jobs is using so called "sandboxed" processors. In this case, the workers will spawn new separate Node.js processes, running separately from the main process.

{% code title="main.ts" %}
```typescript
@@ -28,4 +28,3 @@ export default = (job) => {
}
```
{% endcode %}
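
Since the example files are collapsed in this diff, here is a rough sketch of the sandboxed setup (file names and queue name are assumptions): the worker receives a path to a processor module instead of a function, and that module runs in a separate Node.js process.

{% code title="main.ts (illustrative)" %}
```typescript
import path from 'path';
import { Worker } from 'bullmq';

// Passing a file path instead of a function makes the processor run sandboxed.
const processorFile = path.join(__dirname, 'processor.js');
const worker = new Worker('Paint', processorFile);
```
{% endcode %}

{% code title="processor.ts (illustrative)" %}
```typescript
import { Job } from 'bullmq';

export default async (job: Job) => {
  // CPU-heavy work can run here without blocking the main process.
  console.log(`Sandboxed processing of job ${job.id}`);
};
```
{% endcode %}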

5 changes: 2 additions & 3 deletions docs/gitbook/guide/untitled.md
@@ -2,7 +2,7 @@

A Queue is nothing more than a list of jobs waiting to be processed. The jobs can be small, message like, so that the queue can be used as a message broker, or they can be larger long running jobs.

-Queues are controlled with the Queue class. As all classes in BullMQ this is a lightweight class with a handfull of methods that gives you control over the queue.
+Queues are controlled with the Queue class. As all classes in BullMQ this is a lightweight class with a handful of methods that gives you control over the queue.

The most important method is probably the _**add**_ method. This method allows you to add jobs to the queue in different fashions:

@@ -12,7 +12,7 @@ await queue.add('paint', { colour: 'red' });

```

-The code above will add a job named _paint_ to the queue, with payload `{ color: 'red' }`. This job will now be stored in redis in a list waiting for some worker to pick it up and process it. Workers may not be running when you add the job, however as soon as one worker is connected to the queue it will pick the job and process it.
+The code above will add a job named _paint_ to the queue, with payload `{ color: 'red' }`. This job will now be stored in Redis in a list waiting for some worker to pick it up and process it. Workers may not be running when you add the job, however as soon as one worker is connected to the queue it will pick the job and process it.

When adding a job you can also specify an options object. This options object can dramatically change the behaviour of the added jobs. For example you can add a job that is delayed:

@@ -21,4 +21,3 @@ await queue.add('paint', { colour: 'blue' }, { delay: 5000 });
```

The job will now wait **at least** 5 seconds before it is processed. There are many other options available such as priorities, backoff settings, lifo behaviour, remove-on-complete policies, etc. Please check the remainder of this guide for more information regarding these options.
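
A hedged sketch combining a few of those options (the values are illustrative, not recommendations):

```typescript
await queue.add(
  'paint',
  { colour: 'green' },
  {
    priority: 1,            // lower values are picked up first
    attempts: 3,            // retry up to three times on failure
    backoff: { type: 'exponential', delay: 1000 },
    removeOnComplete: true, // do not keep the job in Redis once it succeeds
  },
);
```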

3 changes: 1 addition & 2 deletions docs/gitbook/what-is-bullmq.md
@@ -4,7 +4,7 @@ description: General description of BullMQ and its features

# What is BullMQ

-BullMQ is a [NodeJS](https://nodejs.org) library that implements a fast and robust queue system based on [Redis](https://redis.io/).
+BullMQ is a [Node.js](https://nodejs.org) library that implements a fast and robust queue system based on [Redis](https://redis.io/).

The library is designed so that it will fulfil the following goals:

@@ -29,4 +29,3 @@ If you are new to Message Queues, you may wonder why they are needed after all.
* [x] **Concurrency setting per worker**
* [x] **Threaded \(sandboxed\) processing functions**
* [x] **Automatic recovery from process crashes**
