Emergent behaviour and neural dynamics in artificial agents tracking odour plumes
Nature Machine Intelligence, January 2023
Authors: Satpreet H. Singh, Floris van Breugel, Rajesh P. N. Rao, Bingni Wen Brunton
Contact: [email protected] OR [email protected]
BibTeX:
@article{singh2023emergent,
title={Emergent behaviour and neural dynamics in artificial agents tracking odour plumes},
author={Singh, Satpreet H and van Breugel, Floris and Rao, Rajesh PN and Brunton, Bingni W},
journal={Nature Machine Intelligence},
volume={5},
number={1},
pages={58--70},
year={2023},
publisher={Nature Publishing Group UK London}
}
All animations, including failure cases and all 5 Vanilla RNN (VRNN) seeds:
- By seed:
- By plume configuration:
Code to reproduce the results in this manuscript can be found in the subfolder code/
Data (agent model/network files, model evaluation data) can be downloaded from Figshare: https://doi.org/10.6084/m9.figshare.16879539.v1 (approx. 9GB)
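If you prefer to script the download instead of using the DOI landing page, below is a minimal sketch using the public Figshare v2 API. The endpoint, JSON fields, and output directory are assumptions made for illustration, not part of this repo's code; downloading manually from the DOI page works just as well.

```python
# Sketch: download the Figshare archive programmatically.
# Assumes the public Figshare v2 API; not one of this repo's scripts.
import os
import requests

ARTICLE_ID = 16879539  # from https://doi.org/10.6084/m9.figshare.16879539.v1
OUT_DIR = "data"       # destination directory (illustrative choice)

os.makedirs(OUT_DIR, exist_ok=True)
meta = requests.get(f"https://api.figshare.com/v2/articles/{ARTICLE_ID}").json()

for f in meta["files"]:  # each entry should list a name, size, and download_url
    dest = os.path.join(OUT_DIR, f["name"])
    print(f"Downloading {f['name']} ({f['size'] / 1e9:.1f} GB) ...")
    with requests.get(f["download_url"], stream=True) as r, open(dest, "wb") as out:
        r.raise_for_status()
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            out.write(chunk)
```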
- Prerequisites: Summary of packages needed, data organization, simulation data generation and configuration
- Figure/Report generation: Instructions/Scripts to generate the images used in the manuscript from agent evaluation data
- Agent evaluation: Instructions/Scripts to (re)generate agent evaluation data, i.e. the "behavioral assays"; only required if not using the downloaded data (see the illustrative rollout sketch after this list)
- Agent Training: Instructions/Scripts to train agents from scratch
- Animations: Some commands for generating animations
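For orientation, here is a minimal sketch of what a behavioural-assay rollout loop looks like conceptually: roll a trained (recurrent) policy through the plume environment and log observations, actions, and hidden states for later analysis. The environment and policy interfaces below are hypothetical placeholders, not this repo's actual API; see code/ for the real evaluation scripts.

```python
# Illustrative sketch only: env/policy names are hypothetical placeholders,
# not this repo's actual API; see code/ for the real evaluation scripts.

def run_assay(env, policy, n_episodes=240):
    """Roll out a trained agent and log per-step data for later analysis."""
    episodes = []
    for _ in range(n_episodes):
        obs, done, log = env.reset(), False, []
        h = policy.initial_state()          # recurrent hidden state (RNN agents)
        while not done:
            action, h = policy.act(obs, h)  # forward pass through the trained network
            obs, reward, done, info = env.step(action)
            log.append({"obs": obs, "action": action, "reward": reward,
                        "hidden": h, "info": info})
        episodes.append(log)
    return episodes

# Usage sketch (names are hypothetical):
# episodes = run_assay(PlumeEnv(config), load_policy("seed0.pt"))
```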
System requirements:
- All development and testing were done on an Ubuntu Linux v20.04 workstation with an Intel Core i9-9940X CPU and a TITAN RTX GPU.
- To reproduce the manuscript figures, you only need to complete the Prerequisites and Figure/Report generation steps; this can be done on a relatively lightweight Linux/POSIX computer/notebook as long as you have the space to download the data and install all prerequisite software (~10GB together). Expected install/run time after the data (~9GB) has been downloaded and extracted is about 2-4 hours.
- (For full training) Each seed takes about 16 hours to train and evaluate, with MLP and RNN models using 1 and 4 CPU cores in parallel, respectively. RNN training was done with GPU acceleration; a quick sanity check is sketched below. Be sure to see additional notes in Agent Training.
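Before launching full RNN training, it can help to confirm that a GPU is visible. The snippet below assumes a PyTorch-based setup and is not one of the repo's own scripts:

```python
# Quick sanity check that a CUDA GPU is visible to PyTorch before RNN training.
# Assumes a PyTorch-based setup; not part of the repo's own scripts.
import torch

if torch.cuda.is_available():
    print(f"CUDA available: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU detected; RNN training will fall back to CPU and be much slower.")
```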
#tweeprint on this paper:
1/n Excited to share our new preprint where we study turbulent plume tracking using deep reinforcement learning (DRL) trained RNN *agents* and find many intriguing similarities with flying insects. w/ @FlorisBreugel @RajeshPNRao @bingbrunton #tweeprint @flypapers #Drosophila pic.twitter.com/PdVKxbP0hs
— Satpreet Singh (@tweetsatpreet) September 28, 2021
Invited Talk at Montreal AI-Neuroscience Conference 2021 (Nov 2021) on this work (direct YouTube link):
Check out @bingbrunton's upcoming talk at MAIN 2021 on our recently released preprint: https://t.co/7fkIuXiRkt https://t.co/eOkylFSltP
— Satpreet Singh (@tweetsatpreet) November 23, 2021
Also presented at:
Preprint: "Emergent behavior and neural dynamics in artificial agents tracking turbulent plumes"
- The statement in the paper, "we did not find any fixed points in our RNNs", assumes that the fixed-point (FP) analysis was performed at points on the "operating manifold", i.e. the approximate manifold defined by neural activity from many agent trajectories under diverse initial conditions. This seems to be a standard assumption in the FP-analysis literature; otherwise, fixed points can be found in any high-dimensional trained RNN, but they lie well outside its "operating manifold".
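As background for this point, the following is a generic sketch (in PyTorch, not the paper's actual code) of how such a fixed-point search is typically done: fix a constant input, initialize from hidden states recorded on the operating manifold, and minimize the "speed" ||F(x, h) - h||^2 by gradient descent.

```python
# Generic sketch of numerical fixed-point search for an RNN (not the exact
# procedure or code used in the paper): minimize ||F(x, h) - h||^2, starting
# from hidden states sampled on the "operating manifold", i.e. activity
# recorded from actual agent trajectories.
import torch

def find_fixed_points(rnn_cell, x_const, h_samples, n_steps=2000, lr=1e-2, tol=1e-6):
    """rnn_cell:  callable mapping (x, h) -> next h (e.g. a torch.nn.RNNCell).
    x_const:   constant input to freeze the dynamics at, shape (input_dim,).
    h_samples: (n_init, hidden_dim) hidden states taken from recorded trajectories."""
    h = h_samples.clone().detach().requires_grad_(True)
    x = x_const.expand(h.shape[0], -1)
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        q = ((rnn_cell(x, h) - h) ** 2).sum(dim=1)  # per-initialization "speed"
        q.sum().backward()
        opt.step()
    with torch.no_grad():
        q_final = ((rnn_cell(x, h) - h) ** 2).sum(dim=1)
    return h.detach()[q_final < tol]  # keep only states that converged to (near-)fixed points
```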