Today, more than 50% of news agencies have drone teams that cover live events. These events typically do not offer the luxury of retakes, so multiple drones are needed to capture multiple vantage points simultaneously. This demands considerable expertise and coordination, hence the need for autonomous flight formation of multiple drones that can assist human operators.
This project explores Reinforcement Learning (RL) approaches to real-time, online-inference drone flight in formation while tracking a target using camera and GPS sensors. Using a calibrated setup of a single target drone with multiple follower drones, the team compares the effectiveness of a single multi-input agent against multiple single-input agents. The team demonstrates the trained models in a pre-recorded simulation video and presents their findings and observations in the accompanying report.
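The distinction between the two designs can be sketched as follows. This is an illustrative sketch only; the class names, observation handling, and `policy` callables are hypothetical, not this repository's actual code:

```python
import numpy as np

# Illustrative sketch only: contrasts the two agent designs compared
# in the report. All names here are hypothetical.

class SingleMultiInputAgent:
    """One policy sees the stacked observations of every follower drone
    and outputs one joint action vector controlling all of them."""
    def __init__(self, policy):
        self.policy = policy  # callable: joint observation -> joint action

    def act(self, observations):
        joint_obs = np.concatenate(observations)  # inputs from all drones
        return self.policy(joint_obs)             # one joint action


class MultipleSingleInputAgents:
    """One policy per follower drone; each sees only its own observation
    and controls only its own drone."""
    def __init__(self, policies):
        self.policies = policies  # one callable per follower

    def act(self, observations):
        return [p(obs) for p, obs in zip(self.policies, observations)]
```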
Members: Kenneth Goh, Raymond Ng, Wong Yoke Keong
- OS: Ubuntu 18.04.3 LTS or Windows 10 + Windows Subsystem for Linux 1
- Python: 3.6.7 (Anaconda or Miniconda distribution preferred)
- Microsoft AirSim (v1.2.2 on Windows 10 or 1.2.0 on Ubuntu)
- GPU (a discrete GPU is preferred for running the environment, playing simulations, and training)
- For Ubuntu Setup:
- Docker
- nvidia-docker
- For cloud training:
- Google Cloud Platform
- Clone this project: `git clone https://github.com/raymondng76/IRS-Practice-Module-Dev.git`
- Change directory: `cd IRS-Practice-Module-Dev`
- Follow the further instructions below
- Download and unzip your preferred environment from the AirSim release 1.2.2 page
- Run the AirSim environment by double-clicking `run.bat`
- Please refer to `airsim_docker_local_install_readme.md` for installation details
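For a multi-drone setup, AirSim reads the vehicle layout from `~/Documents/AirSim/settings.json`. The sketch below writes a one-target, two-follower layout; the vehicle names and spawn offsets are assumptions to adapt, not this project's actual configuration:

```python
import json
import os

# Hypothetical example: a one-target, two-follower vehicle layout.
# Vehicle names and spawn offsets (metres, NED frame) are assumptions.
settings = {
    "SettingsVersion": 1.2,
    "SimMode": "Multirotor",
    "Vehicles": {
        "TargetDrone":    {"VehicleType": "SimpleFlight", "X": 0,  "Y": 0,  "Z": 0},
        "FollowerDrone1": {"VehicleType": "SimpleFlight", "X": -2, "Y": -2, "Z": 0},
        "FollowerDrone2": {"VehicleType": "SimpleFlight", "X": -2, "Y": 2,  "Z": 0},
    },
}

path = os.path.expanduser("~/Documents/AirSim/settings.json")
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, "w") as f:
    json.dump(settings, f, indent=2)
```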
- Create a new conda environment: `conda create -n airsim python=3.6.7`
- Switch to the environment: `conda activate airsim`
- Install dependencies using pip: `pip install -r requirements.txt`
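With the dependencies installed and the AirSim environment running, you can sanity-check the connection from Python before going further (a minimal probe using the standard AirSim Python client):

```python
import airsim

# Minimal connectivity probe: confirms the RPC link to the running
# AirSim environment before downloading weights or running models.
client = airsim.MultirotorClient()
client.confirmConnection()          # prints client/server version info
print(client.getMultirotorState())  # state of the default vehicle
```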
- Ensure the Python dependencies have been installed, then execute the commands below
- Assuming you are in the directory storing `IRS-Practice-Module-Dev` (if not, type `cd ..`):
- If coming from section B/C, execute: `gdown 'https://drive.google.com/uc?id=1ciGqwUpfNPQu_Ua7cowU8mDIXOG_9kkf'`
- Unzip the weights: `unzip Final_Weights_Models.zip`
- Note that the file is very large (548MB), so downloading over a mobile connection is not recommended
- Assuming you are in the directory storing `IRS-Practice-Module-Dev`:
- Copy the YOLOv3 model weights to the `IRS-Practice-Module-Dev` main directory: `cp -r Final_Weights_Models/Yolov3_drone_weights/ IRS-Practice-Module-Dev/weights`
- Copy the desired RL model weights from different iterations to the `IRS-Practice-Module-Dev/code` sub-directory, e.g. to copy the RDQN Single Model, 3rd iteration: `cp -r Final_Weights_Models/RDQN_Single_Model/3rd_Iteration/* IRS-Practice-Module-Dev/code` (a quick sanity check is sketched below)
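After copying, you can quickly verify that the weights landed where the run scripts expect them. This is a loose sketch; the two paths are simply the destinations of the `cp` commands above:

```python
import os

# Loose sanity check: confirm the destination folders from the cp
# commands above exist and are non-empty before launching a simulation.
for path in ["IRS-Practice-Module-Dev/weights", "IRS-Practice-Module-Dev/code"]:
    ok = os.path.isdir(path) and len(os.listdir(path)) > 0
    print("{}: {}".format(path, "OK" if ok else "MISSING OR EMPTY"))
```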
- Ensure the AirSim environment is running and you are in the `IRS-Practice-Module-Dev` directory
- To run the simulation for the selected model, execute `python <model>.py --play --load_model`
- To stop the simulation, press `Ctrl-C`
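The switches are ordinary command-line flags. As a rough sketch of how such an entry point is typically wired with `argparse` (illustrative only; the actual parsers in the model scripts may differ in detail):

```python
import argparse

# Illustrative sketch of the CLI the model scripts expose; the real
# parsers in rdqn.py and friends may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--play", action="store_true",
                    help="run inference in the simulator instead of training")
parser.add_argument("--load_model", action="store_true",
                    help="restore saved weights before playing or training")
parser.add_argument("--verbose", action="store_true",
                    help="print per-step diagnostics during training")
args = parser.parse_args()
print(args.play, args.load_model, args.verbose)
```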
- Ensure the AirSim environment is running
- To train a model from scratch, execute `python <model>.py --verbose`. Options include `rdqn.py`, `rdqn_triple_model.py`, and `rddpg_triple_model.py`
- To resume training, execute `python <model>.py --verbose --load_model`
- To stop the training, press `Ctrl-C`
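In both cases `--load_model` resumes from previously saved weights. A minimal sketch of that checkpointing pattern in Keras (the path, layer sizes, and optimizer here are hypothetical, not the repository's actual network):

```python
import os
from tensorflow import keras

# Hypothetical checkpointing pattern behind --load_model: restore saved
# weights if they exist, train, then save again for the next resume.
SAVE_PATH = "save_model/model.h5"  # illustrative path only

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(8,)),
    keras.layers.Dense(3, activation="linear"),  # e.g. one Q-value per action
])
model.compile(optimizer="adam", loss="mse")

if os.path.exists(SAVE_PATH):      # --load_model: resume from checkpoint
    model.load_weights(SAVE_PATH)

# ... training episodes against AirSim would run here ...

os.makedirs(os.path.dirname(SAVE_PATH), exist_ok=True)
model.save_weights(SAVE_PATH)      # checkpoint for the next run
```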
- Please refer to `gcp_training_readme.md` for details on setup and training on a Google Cloud Platform VM.
- Please note that playing the simulation on a VM is not recommended due to display limitations.
- Code is based on the efforts of Sung Hoon Hong: sunghoonhong/AirsimDRL: Autonomous UAV Navigation without Collision using Visual Information in Airsim
- Object Detection code is based on: experiencor/keras-yolo3: Training and Detecting Objects with YOLO3
- Neural Network framework used: tensorflow/tensorflow: An Open Source Machine Learning Framework for Everyone
- Drone Simulation Environment is from: microsoft/AirSim: Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research
- Additional Citations are in the report