
Latent dynamics & GP factorization #8

Merged 15 commits on Sep 16, 2024
README.md: 64 changes (46 additions, 18 deletions)

Numerically solving partial differential equations (PDEs) can be challenging and computationally expensive. This has led to the development of reduced-order models (ROMs) that are accurate but faster than full-order models (FOMs). Recently, machine learning advances have enabled the creation of non-linear projection methods, such as Latent Space Dynamics Identification (LaSDI). LaSDI maps full-order PDE solutions to a latent space using autoencoders and learns the system of ODEs governing the latent space dynamics. By interpolating and solving the ODE system in the reduced latent space, fast and accurate ROM predictions can be made by feeding the predicted latent space dynamics into the decoder. In this paper, we introduce GPLaSDI, a novel LaSDI-based framework that relies on Gaussian processes (GPs) for latent space ODE interpolation. Using GPs offers two significant advantages. First, it enables the quantification of uncertainty over the ROM predictions. Second, leveraging this prediction uncertainty allows for efficient adaptive training through a greedy selection of additional training data points. This approach does not require prior knowledge of the underlying PDEs. Consequently, GPLaSDI is inherently non-intrusive and can be applied to problems without a known PDE or its residual. We demonstrate the effectiveness of our approach on the Burgers equation, the Vlasov equation for plasma physics, and a rising thermal bubble problem. Our proposed method achieves between 200 and 100,000 times speed-up, with up to 7% relative error.
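As a rough illustration of the GP-based greedy sampling idea described above, here is a minimal NumPy sketch (not the package's actual API; all names and values are hypothetical): fit a GP to one latent-ODE coefficient over a 1-D parameter space, then pick the next training parameter where the posterior uncertainty is largest.

```python
import numpy as np

def rbf(X1, X2, ell=0.5, sigma=1.0):
    # Squared-exponential kernel between two sets of 1-D parameter points.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    # Standard GP regression posterior mean and pointwise standard deviation.
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    Kss = rbf(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy setup: one latent-ODE coefficient calibrated at 4 training parameters.
X_train = np.array([0.0, 0.3, 0.7, 1.0])
y_train = np.sin(2 * np.pi * X_train)   # stand-in for a calibrated coefficient
X_cand = np.linspace(0.0, 1.0, 101)     # candidate parameter grid

mean, std = gp_posterior(X_train, y_train, X_cand)
x_next = X_cand[np.argmax(std)]         # greedy pick: most uncertain parameter
```

In GPLaSDI the same idea is applied per ODE coefficient, and the FOM is solved at the selected parameter to enlarge the training set.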

<!-- ## Dependencies
The code requires:
* **Python 3.7.10**
make install
## For LLNL LC Lassen users
Please install [OpenCE-1.1.2](https://lc.llnl.gov/confluence/pages/viewpage.action?pageId=678892406) -->

## Dependencies and Installation

Users can install the repository as a python package:
```
pip install .
```
This Python package requires updated prerequisites:
```
"argparse>=1.4.0"
```


### For LLNL LC Lassen users

The work-in-progress Python package is compatible with [OpenCE-1.9.1](https://lc.llnl.gov/confluence/pages/viewpage.action?pageId=785286611).
```
pip install .
```

## Examples

<!-- Four examples are provided, including -->

* 1D Burgers Equation

The Burgers 1D example can be run via:
```
cd examples
lasdi burgers1d.yml
```
Post-processing & visualization for the Burgers 1D example are demonstrated in the Jupyter notebook `examples/burgers1d.ipynb`.

**TODO** Support offline physics wrapper and Burgers 2D equation

* ~~2D Burgers Equation~~
* ~~1D1V Vlasov Equation~~
* ~~Rising Heat Bubble (Convection-Diffusion Equation)~~

To run the Vlasov equation and rising bubble examples, [HyPar](http://hypar.github.io/) also needs to be installed. It can be downloaded and compiled by running:
```
git clone https://bitbucket.org/deboghosh/hypar.git
autoreconf -i
[CFLAGS="..."] [CXXFLAGS="..."] ./configure [options]
make
make install
```

## Code Description

Core routines and classes are implemented in the `src/lasdi` directory:

* `latent_dynamics/__init__.py`: general latent dynamics class that calibrates coefficients and predicts the latent trajectories.
  * `sindy.py`: strong SINDy class.
* `physics/__init__.py`: general physics class that computes full-order model trajectories based on parameters.
  * `burgers1d.py`: Burgers 1D physics equation solver (run online in the Python framework).
* `latent_space.py`: classes for autoencoders. Currently only a vanilla multi-layer perceptron is provided.
* `param.py`: parameter space class that handles train/test parameter points.
* `gp.py`: base routines for Gaussian-process calibration and sample generation.
* `gplasdi.py`: GP-based greedy sampler class.
* `workflow.py`: controls the overall workflow of the executable `lasdi`.
* `postprocess.py`: miscellaneous post-processing and plotting routines.
* `inputs.py`: input parser class.
* `fd.py`: library of high-order finite-difference stencils.
* `timing.py`: lightweight timer class.
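For readers unfamiliar with SINDy (the technique behind the strong SINDy class above), the following self-contained toy sketch shows the core idea: sequential thresholded least squares over a polynomial library, recovering a sparse ODE for a single latent variable. This illustrates the method only and is not the package's implementation.

```python
import numpy as np

def sindy_fit(Z, dZdt, threshold=0.1, n_iter=10):
    # Candidate library for one latent variable: columns [1, z, z^2].
    Theta = np.column_stack([np.ones_like(Z), Z, Z**2])
    Xi = np.linalg.lstsq(Theta, dZdt, rcond=None)[0]
    for _ in range(n_iter):
        # Sequential thresholded least squares: zero out small coefficients,
        # then refit the remaining terms.
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        big = ~small
        if big.any():
            Xi[big] = np.linalg.lstsq(Theta[:, big], dZdt, rcond=None)[0]
    return Xi

# Toy latent trajectory obeying dz/dt = -2 z, so Xi should recover [0, -2, 0].
t = np.linspace(0.0, 2.0, 400)
z = np.exp(-2.0 * t)
dz = -2.0 * z   # exact derivative; in practice finite differences are used
Xi = sindy_fit(z, dz)
```

In GPLaSDI the same regression runs on the autoencoder's latent trajectories, one system of ODE coefficients per training parameter.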

<!-- * Initial training and test data can be generated by running ```generate_data.py``` in each example directory.
* GPLaSDI models can be trained by running the file ```train_model1.py``` in each example directory.
* ```train_model1.py``` defines a **torch** autoencoder class and loads the training data and all the relevant training parameters into a ```model_parameter``` dictionary. A ```BayesianGLaSDI``` object is created and takes as input the autoencoder and ```model_parameter```. GPLaSDI is trained by running ```BayesianGLaSDI(autoencoder, model_parameters).train()```
* ```train_framework.py``` defines the ```BayesianGLaSDI``` class, which contains the main iteration loop.
Four examples are provided, including
* For the 1D1V Vlasov equation and rising bubble examples, additional files are run within the training loop:
* ```solver.py``` contains all the python functions to run **HyPar**. **HyPar** is a finite difference PDE solver written in C. ```init.c``` must be compiled before running GPLaSDI for the first time, using ```gcc init.c -o INIT```. ```INIT``` loads input parameter files written by ```solver.py/write_files``` and convert them into **HyPar**-readable format. Then, ```solver.py/run_hypar``` and ```solver.py/post_process_data``` run **HyPar** and convert FOM solutions into numpy arrays.
* In the rising bubble example, an additional C file, ```PostProcess.c``` needs to be compiled before running GPLaSDI for the first time, using ```gcc PostProcess.c -o PP``` -->

## Citation
[Bonneville, C., Choi, Y., Ghosh, D., & Belof, J. L. (2023). GPLaSDI: Gaussian Process-based Interpretable Latent Space Dynamics Identification through Deep Autoencoder. arXiv preprint.]()