
Commit 4bb0d2c: Update README.md
robmarkcole committed Jul 5, 2020 (1 parent: 8ba059e)
Showing 1 changed file with 26 additions and 23 deletions (README.md).

# mqtt-camera-streamer
**TLDR:** Publish frames from a connected camera (e.g. a USB webcam, or alternatively an MJPEG/RTSP stream) to an MQTT topic. The camera stream can be viewed in a browser with [Streamlit](https://github.com/streamlit/streamlit) or Home Assistant. Configuration is via `config.yml`.

**Long introduction:** A typical task in IOT/science is that you have a camera connected to one computer and you want to view the camera feed on a different computer, and perhaps preprocess the images before saving them to disk. I have always found this to be way more work than expected. In particular, working with camera streams can get quite complicated and may lead you to experiment with tools like Gstreamer and ffmpeg that have a steep learning curve. In contrast, working with [MQTT](http://mqtt.org/) is very straightforward and is probably also familiar to anyone with an interest in IOT.

`mqtt-camera-streamer` uses MQTT to send frames from a camera connected to a computer over a network at a low frames-per-second (FPS) rate. Whilst MQTT is rarely used for this purpose (sending files), I have not encountered any issues doing so. A viewer is provided for viewing the camera stream on any computer on the network, and frames can be saved to disk for further processing. It is also possible to set up an image processing pipeline by linking MQTT topics together, using an `on_message(topic)` callback to do some processing and send the processed image downstream on another topic.

**Note** that this is not a high FPS solution: in practice I achieve around 1 FPS, which is sufficient for tasks such as preprocessing (cropping, rotating) images prior to viewing them. This code is written for simplicity and ease of use, not high performance.

## Installation on Linux/OSX/Windows
Use a venv to isolate your environment, and install the required dependencies:
```
$ (base) python3 -m venv venv
$ (base) source venv/bin/activate
$ (venv) pip3 install -r requirements.txt
```

#### Installation on Raspberry Pi
Do not use a venv; instead install OpenCV system-wide using:
```
$ sudo apt install python3-opencv
$ pip3 install -r requirements.txt
```
I have not tested Streamlit on the Raspberry Pi, but you can run the viewer on another machine (Windows, OSX) instead.

## Listing cameras
The `check-cameras.py` script assists in discovering which cameras are on your computer. If your laptop has a built-in webcam this will generally be listed as `VIDEO_SOURCE = 0`. If you plug in an external USB webcam this takes precedence over the built-in webcam, with the external camera becoming `VIDEO_SOURCE = 0` and the built-in webcam becoming `VIDEO_SOURCE = 1`.
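The probing logic behind such a script can be sketched as below. This is a guess at the approach, not the real `check-cameras.py`: the device opener is injected as a callable so the loop can be shown (and tested) without OpenCV installed; in practice you would pass something like `lambda i: cv2.VideoCapture(i).isOpened()`.

```python
from typing import Callable, List

def list_video_sources(open_source: Callable[[int], bool],
                       max_index: int = 5) -> List[int]:
    """Return the source indices for which a capture device opens successfully.

    `open_source(i)` should return True if video source `i` can be opened.
    """
    return [i for i in range(max_index) if open_source(i)]

# Demo with a fake opener: pretend only sources 0 and 1 exist.
print(list_video_sources(lambda i: i < 2))  # [0, 1]
```

Injecting the opener keeps the enumeration logic independent of any particular capture library.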

To check which cameras are detected run:
```
$ (venv) python3 scripts/check-cameras.py
```

You then configure the desired camera as e.g. `video_source: 0`. Alternatively you can configure the video source as an MJPEG or RTSP stream. For example, in `config.yml` you would set `video_source: "rtsp://admin:[email protected]:554/11"`.

## Camera usage
Use the `config.yml` file in the `config` directory to configure your system (MQTT broker IP etc.) and check that the config can be loaded by running:
```
$ (venv) python3 scripts/validate-config.py
```
**Note** that this script does not check the accuracy of any of the values in `config.yml`, just that the file path is correct and the file structure is OK.
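For orientation, a config might look like the fragment below. Only `video_source` is named in this README; the other keys are placeholders, so consult the shipped `config/config.yml` for the real structure:

```yaml
# Hypothetical structure, for illustration only -- check config/config.yml
mqtt:
  broker: 192.168.1.100   # placeholder broker IP
  port: 1883
camera:
  video_source: 0          # or an MJPEG/RTSP URL string
```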

By default `scripts/camera.py` will look for the config file at `./config/config.yml`, but an alternative path can be specified using the environment variable `MQTT_CAMERA_CONFIG`.

To publish camera frames over MQTT:
```
$ (venv) python3 scripts/camera.py
```

## Camera display
To view the camera stream with Streamlit:
```
$ (venv) streamlit run scripts/viewer.py
```

<p align="center">
<img src="https://github.com/robmarkcole/mqtt-camera-streamer/blob/master/docs/images/viewer_usage.png" width="500">
</p>

**Note:** if Streamlit becomes unresponsive, press `ctrl-z` to pause Streamlit, then run `kill -9 %%`. Also note that the viewer can be run on any machine on your network.

## Save frames
To save frames to disk:
```
$ (venv) python3 scripts/save-captures.py
```
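The saving step amounts to writing each received payload to a timestamped file. The sketch below is an illustration, not the project's actual `save-captures.py`: the `capture-` filename pattern is invented, and the payload is assumed to be raw JPEG bytes as received from the MQTT message.

```python
import pathlib
import tempfile
import time

def save_frame(payload: bytes, directory: pathlib.Path) -> pathlib.Path:
    """Write one received frame to a timestamped .jpg file and return its path."""
    path = directory / f"capture-{time.strftime('%Y%m%d-%H%M%S')}.jpg"
    path.write_bytes(payload)
    return path

# Demo with dummy bytes in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    saved = save_frame(b"\xff\xd8fake-jpeg\xff\xd9", pathlib.Path(tmp))
    print(saved.name)  # e.g. capture-20200705-120000.jpg
```

In a real subscriber this function would be called from the MQTT client's message callback.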

## Image processing pipeline
To process a camera stream (the example rotates the image):
```
$ (venv) python3 scripts/processing.py
```
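Under the hood, a processing step is just: receive a frame on one topic, transform it, and republish on another. The sketch below illustrates the idea but is not the project's actual `processing.py`: the topic names and the rotation helper are made up, and the MQTT client is abstracted behind a `publish` callable so the logic runs without a broker (with a real client such as paho-mqtt you would publish from inside its message callback).

```python
from typing import Callable, List

IN_TOPIC = "camera/raw"        # hypothetical upstream topic
OUT_TOPIC = "camera/rotated"   # hypothetical downstream topic

def rotate90(frame: List[List[int]]) -> List[List[int]]:
    """Rotate a frame (rows of pixel values) 90 degrees clockwise."""
    return [list(row) for row in zip(*frame[::-1])]

def on_message(topic: str, frame: List[List[int]],
               publish: Callable[[str, List[List[int]]], None]) -> None:
    """Process a frame arriving on IN_TOPIC and republish it downstream."""
    if topic == IN_TOPIC:
        publish(OUT_TOPIC, rotate90(frame))

# Demo without a broker: collect published messages in a list.
published = []
on_message(IN_TOPIC, [[1, 2], [3, 4]], lambda t, f: published.append((t, f)))
print(published)  # [('camera/rotated', [[3, 1], [4, 2]])]
```

Chaining stages is then a matter of making one stage's output topic the next stage's input topic.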

## Home Assistant
You can view the camera feed in [Home Assistant](https://www.home-assistant.io/) by configuring an [MQTT camera](https://www.home-assistant.io/components/camera.mqtt/) in your `configuration.yaml`.
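A minimal entry might look like the following; the topic must match whatever topic `camera.py` publishes to in your `config.yml`, and the name shown here is an assumption:

```yaml
camera:
  - platform: mqtt
    name: mqtt_camera            # assumed name, choose your own
    topic: homeassistant/camera  # must match the publish topic in config.yml
```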
