This project focuses on detecting man-made objects in water bodies using deep learning, with applications such as underwater trash collection and shipwreck detection. It leverages the YOLOv8 model for efficient and accurate object detection. The model was trained on a dataset of annotated underwater images and deployed on a Jetson Nano using Docker.
- Introduction
- Project Features
- Dataset
- Model Architecture
- Training the Model
- Setup and Installation
- Running the Models on Jetson Nano
- Results
- Acknowledgements
Underwater object detection is essential for various applications, including marine conservation, underwater exploration, and environmental cleanup. This project aims to detect and classify underwater objects using state-of-the-art deep learning techniques. The model is deployed on a Jetson Nano to allow for real-time inference in resource-constrained environments.
- Object Detection: Detects man-made objects underwater, such as trash or shipwrecks.
- Real-Time Inference: Runs on a Jetson Nano for efficient and portable deployments.
- Scalable Dataset: A dataset of 50,000 annotated images was used to train the model, providing robust performance across various underwater scenarios.
The dataset used for this project contains 50,000 images annotated with underwater objects. Images were annotated using Roboflow, starting with a manually annotated dataset of 100 images. The initial model was trained on this small dataset and used to annotate additional images, with manual corrections and improvements applied iteratively.
- Initial Data: Provided by the owner of a related research paper.
1. Real-world underwater dataset (RUOD). Authors: Risheng Liu, Xin Fan, Ming Zhu, Minjun Hou, Zhongxuan Luo
2. Underwater brackish dataset. Format: YOLO v8; number of images: 14,674
3. Underwater object detection dataset
4. Real-world Underwater Image Enhancement dataset (RUIE 2020).
- Annotations: Manually annotated with Roboflow and iteratively expanded using model-assisted labeling (see the sketch after this list).
- Custom Dataset: You can access the custom dataset used in this project on Roboflow here.
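As a rough illustration of the model-assisted labeling loop described above, the Ultralytics CLI can write predictions as YOLO-format label files that are then reviewed and corrected in Roboflow. The file and folder names below (`initial_100.yaml`, `unlabeled_images/`) are placeholders for illustration, not the exact ones used in this project.

```bash
# Train a first model on the small hand-labeled set (paths are placeholders)
yolo train data=initial_100.yaml model=yolov8n.pt epochs=100

# Pre-label a new batch of images with that model;
# save_txt=True writes YOLO-format .txt labels alongside the predictions
yolo predict model=runs/detect/train/weights/best.pt source=unlabeled_images/ save_txt=True conf=0.4
```

The generated labels are imported back into Roboflow together with the images, corrected manually, and the enlarged dataset is used for the next training round.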
The project uses the YOLOv8 model for object detection. YOLOv8 was chosen for its balance between accuracy and speed, making it suitable for deployment on resource-constrained devices like the Jetson Nano. Training was performed on a powerful lab system, with default hyperparameters used throughout the process.
- YOLOv5 (v6.1): Initially used for object detection but later replaced by YOLOv8 for better performance on the Jetson Nano.
YOLOv5 vs YOLOv8 performance comparison
*Figure source:* https://www.stereolabs.com/en-in/blog/performance-of-yolo-v5-v7-and-v8
YOLOv8 export format comparison
*Figure source:* https://docs.ultralytics.com/guides/nvidia-jetson
Training was performed on a lab system with the following steps:
- Prepare the dataset in the YOLO format.
- Train the model using the following command (a sketch of the dataset YAML is shown after these steps):
yolo train data=UWD2.yaml model=yolov8.yaml epochs=1000
- Evaluate the model on the validation set:
yolo val model=best.pt data=dataset.yaml
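For reference, a YOLO dataset YAML such as the UWD2.yaml used above generally follows the layout below; the paths and class names here are assumed placeholders, not the project's actual configuration.

```yaml
# Hypothetical sketch of a dataset YAML in Ultralytics format
path: datasets/UWD2      # dataset root directory
train: images/train      # training images, relative to path
val: images/val          # validation images, relative to path

names:                   # placeholder class names
  0: trash
  1: wreck
  2: pipe
```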
Model Training Details
- Training Time: 1,413.86 minutes for 300 epochs.
- Training Hardware:
- CPU: Intel 13th Gen i9-13900.
- GPU: NVIDIA RTX 4090 with 24 GB of dedicated VRAM.
- System RAM: 128 GB.
- Shared Graphics Memory: 64 GB (totaling 88 GB of graphics memory).
- Acceleration: Training was GPU-accelerated using the NVIDIA 4090.
Model Inference Details
- Inference Hardware: Jetson Nano 4GB.
- GPU: 128-core Maxwell GPU.
- CPU: Quad-core ARM A57 @ 1.43GHz.
- RAM: 4 GB.
- Acceleration: Inference on the Jetson Nano is GPU-accelerated.
- Inference Speed: 16.7-25 milliseconds per frame (approximately 40-60 FPS).
Training Results
- Docker: Used to run the YOLOv8 model on Jetson Nano.
- Ultralytics YOLOv8: Pre-trained models and inference framework.
- Jetson Nano: Deployment device.
For the initial setup of the Jetson Nano, please refer to the official Jetson Nano setup guide at the link below.
- Clone the repository:
git clone https://github.com/Vidhul-S/Underwater-Object-Recognition-on-EDGE
cd underwater-object-detection
- Pull the Docker image for Jetson Nano with JetPack 4:
t=ultralytics/ultralytics:latest-jetson-jetpack4
sudo docker pull $t
- Make sure your script doc.sh is executable:
chmod +x doc.sh
- Run the Docker container using the provided script:
bash doc.sh
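The docker run details are handled inside doc.sh; a minimal sketch of what such a launcher script might contain is shown below. The flags, mounts, and device paths are assumptions for illustration; refer to the doc.sh in this repository for the actual script.

```bash
#!/bin/bash
# Hypothetical launcher script (assumed contents; see doc.sh in this repo for the real one).
# Starts the Ultralytics JetPack 4 image with GPU access, X11 forwarding,
# the webcam, and a local models/ folder mounted into the container.
t=ultralytics/ultralytics:latest-jetson-jetpack4
sudo docker run -it --name UWD \
    --runtime nvidia --ipc=host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/video0 \
    -v "$(pwd)/models":/models \
    $t
```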
To check if the container UWD already exists, run:
docker ps -a
If the container exists and you know what is installed in it, start it and attach to its terminal:
docker start UWD
docker attach UWD
Skip this step if the container does not exist.
If the container does not exist, run the following command to start it:
bash doc.sh
If the container already exists, skip this step and proceed to connect to the container's terminal.
In the container's terminal, update the package list and install X11 apps:
apt update
apt install x11-apps -y
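As a quick sanity check, assuming the container was started with DISPLAY set and /tmp/.X11-unix mounted (as in the launcher sketch above), you can confirm that X11 forwarding works before launching the detector:

```bash
# Should open a small test window on the host display if X11 forwarding works
xeyes
```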
To run the YOLOv8n model in the container terminal, type:
yolo track model=yolov8n.pt source=0
To run a custom model, upload your model weights to the models folder. Then, in the container terminal, type:
yolo track model="/models/<Your_Model_Name>.pt" source=0
It is recommended to convert your model to TensorRT format (.engine extension) for better performance and reduced battery consumption. After conversion, your run command will look like:
yolo track model="/models/<Your_Model_Name>.engine" source=0
- For more Docker commands and support, please refer to the Docker Documentation.
- For guidance on running and exporting YOLOv8 models, check out the YOLOv8 Colab Tutorial.
Here are some results from our underwater object detection model:
Credits to DALLMYD's plane wreck exploration video.
- Thanks to the owner of the research paper who provided the initial dataset.
- The project was developed using Ultralytics' YOLO framework.
- Roboflow for the annotation tool.
- NVIDIA Jetson platform and resources.
- Finally, thanks to Preet Kanwal ma'am and Prasad B Honnavalli sir for providing the resources and motivation for this undertaking.