dynoNet: A neural network architecture for learning dynamical systems

This repository contains the Python code to reproduce the results of the paper dynoNet: A neural network architecture for learning dynamical systems by Marco Forgione and Dario Piga.

In this work, we introduce the linear dynamical operator as a differentiable layer compatible with back-propagation-based training. The operator is parametrized as a rational transfer function and can therefore represent an infinite impulse response (IIR) filtering operation, as opposed to the convolutional layer of 1D-CNNs, which is equivalent to finite impulse response (FIR) filtering.
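As a point of reference (not code from this repository), the short Python sketch below filters a signal through a second-order rational transfer function with scipy.signal.lfilter; the feedback on past outputs is what makes the impulse response infinite, whereas a plain convolution with the numerator coefficients has finite memory.

# Illustrative sketch (not part of this repository): IIR vs. FIR filtering.
# G(q) = B(q)/A(q) corresponds to the difference equation
#   y[k] = b0*u[k] + b1*u[k-1] - a1*y[k-1] - a2*y[k-2]
import numpy as np
from scipy.signal import lfilter

b = np.array([0.1, 0.05])           # numerator coefficients (b0, b1)
a = np.array([1.0, -1.5, 0.7])      # denominator coefficients (1, a1, a2)

u = np.random.randn(1000)           # input sequence
y_iir = lfilter(b, a, u)            # IIR: feedback on past outputs, infinite memory
y_fir = np.convolve(u, b)[:len(u)]  # FIR: memory limited to len(b) samples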

In the dynoNet architecture, linear dynamical operators are combined with static (i.e., memoryless) non-linearities, which can be elementary activation functions applied channel-wise, fully connected feed-forward neural networks, or other differentiable operators.

[Figure: dense dynoNet architecture]
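As an illustration of how the blocks are composed, the hedged sketch below builds a Wiener-Hammerstein-style dynoNet (G-block, static non-linearity, G-block). The class names and arguments (SisoLinearDynamicalOperator, SisoStaticNonLinearity, n_b, n_a, n_hidden) are assumed from the dynonet package; check dynonet/lti.py, dynonet/static.py and the example scripts for the exact signatures.

# Hedged sketch of a Wiener-Hammerstein-style dynoNet model;
# class names and arguments are assumed from the dynonet package.
import torch
from dynonet.lti import SisoLinearDynamicalOperator
from dynonet.static import SisoStaticNonLinearity

G1 = SisoLinearDynamicalOperator(n_b=3, n_a=3)   # linear dynamical operator (G-block)
F_nl = SisoStaticNonLinearity(n_hidden=10)       # static (memoryless) non-linearity
G2 = SisoLinearDynamicalOperator(n_b=3, n_a=3)   # second G-block

u = torch.randn(32, 500, 1)   # (batch, time, channels) input sequences
y = G2(F_nl(G1(u)))           # simulate the cascaded model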

A 15-min presentation about dynoNet is available here.

Folders:

  • dynonet: PyTorch implementation of the linear dynamical operator (aka G-block in the paper) used in dynoNet
  • examples: examples using dynoNet for system identification
  • util: definition of the metrics R-squared, RMSE, and fit index (see the sketch after this list)
  • doc: paper & slides
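For reference, the three metrics admit the following self-contained definitions (a sketch, not the actual code in the util folder):

# Self-contained sketch of the metrics; see the util folder for the versions used in the examples.
import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def fit_index(y_true, y_pred):
    # normalized fit in percent: 100% corresponds to a perfect simulation
    num = np.linalg.norm(y_true - y_pred)
    den = np.linalg.norm(y_true - np.mean(y_true))
    return 100.0 * (1.0 - num / den)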

Three examples discussed in the paper are provided in the examples folder.

For the WH2009 (Wiener-Hammerstein 2009 benchmark) example, the main scripts are:

  • WH2009_train.py: Training of the dynoNet model
  • WH2009_test.py: Evaluation of the dynoNet model on the test dataset and computation of metrics

Similar scripts are provided for the other examples.
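The training scripts follow the standard PyTorch pattern: simulate the dynoNet model over the training sequence, compare the simulated output with the measured one, and back-propagate through the whole chain of blocks. Below is a condensed, hedged sketch of that loop, reusing the G1, F_nl, G2 blocks from the sketch above; the actual scripts differ in data loading, scaling and hyper-parameters.

# Hedged sketch of the training pattern used in the *_train.py scripts.
import torch

params = list(G1.parameters()) + list(F_nl.parameters()) + list(G2.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

u_train = torch.randn(1, 10000, 1)   # placeholder input sequence (batch, time, channels)
y_train = torch.randn(1, 10000, 1)   # placeholder measured output

for epoch in range(2000):
    optimizer.zero_grad()
    y_sim = G2(F_nl(G1(u_train)))              # simulate the model end to end
    loss = torch.mean((y_sim - y_train) ** 2)  # MSE between simulated and measured output
    loss.backward()                            # gradients flow through the differentiable G-blocks
    optimizer.step()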

NOTE: the original datasets are not included in this project. They have to be downloaded manually from http://www.nonlinearbenchmark.org and copied into the data sub-folder of the corresponding example.

Software requirements:

Simulations were performed in a Python 3.7 conda environment with:

  • numpy
  • scipy
  • matplotlib
  • pandas
  • pytorch (version 1.4)

These dependencies may be installed through the commands:

conda install numpy scipy pandas matplotlib
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

Local installation:

From PyPI

Type in terminal:

pip install dynonet

This will install the latest stable version packaged on PyPI: https://pypi.org/project/dynonet/

From a local copy of this repository

Navigate to a local copy of this repository, where setup.py and setup.cfg are located. Then, type in terminal:

pip install -e .
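In either case, a quick import check can be used to verify the installation (a minimal sketch):

# Minimal post-install check: the import should succeed without errors.
import dynonet
print(dynonet.__name__)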

Citing

If you find this project useful, we encourage you to

  • Star this repository ⭐
  • Cite the paper:
@article{forgione2021dyno,
  title={\textit{dyno{N}et}: A neural network architecture for learning dynamical systems},
  author={Forgione, M. and Piga, D.},
  journal={International Journal of Adaptive Control and Signal Processing},
  volume={35},
  number={4},
  pages={612--626},
  year={2021},
  publisher={Wiley}
}
