
sequences

Using recurrent neural networks to predict a sequence of macroscopic observables of a granular material undergoing a given input sequence of strains. The models are trained on DEM simulations run with YADE, using different contact parameters.

How to use

Installation

Clone this repo and go to the created directory:

git clone git@github.com:GrainLearning/sequences.git
cd sequences

Create and activate a new environment with conda:

conda create --name sequences
conda activate sequences

Install with pip:

pip install -e .

For an M1 Mac, comment out tensorflow in setup.cfg and install it separately following this.

To test whether the installation worked, run the following (note that this probably already requires a wandb account; see the next section):

python train.py
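The wandb account is presumably needed because train.py logs its runs to Weights and Biases. As a rough sketch of the pattern such a script follows (the layer sizes, data shapes, and hyperparameter names below are illustrative, not the actual contents of train.py):

import numpy as np
import tensorflow as tf
import wandb

# Start a wandb run; during sweeps, hyperparameters are overridden via wandb.config.
wandb.init(project="sequences", config={"units": 64, "learning_rate": 1e-3})
config = wandb.config

# Placeholder data: batches of strain sequences mapped to macroscopic observables.
x = np.random.rand(32, 100, 3).astype("float32")
y = np.random.rand(32, 100, 4).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(config.units, return_sequences=True),
    tf.keras.layers.Dense(y.shape[-1]),
])
model.compile(optimizer=tf.keras.optimizers.Adam(config.learning_rate), loss="mse")

for epoch in range(5):
    history = model.fit(x, y, epochs=1, verbose=0)
    wandb.log({"loss": history.history["loss"][0]})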

Weights and Biases (wandb)

Create a free account on wandb (the easiest way is to link your GitHub account).

wandb can then be used to run hyperparameter sweeps. A sweep's configuration is specified in a YAML file such as example_sweep.yaml, which lists the hyperparameter values to try. Make sure to change the entity field to your own account name.
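As a hedged illustration of what such a configuration contains (the hyperparameter names below are hypothetical, not necessarily those in example_sweep.yaml), the same structure can also be registered programmatically with wandb's Python API:

import wandb

# Hypothetical sweep configuration; the keys in the YAML file map one-to-one to this dict.
sweep_config = {
    "method": "random",  # search strategy: grid, random, or bayes
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "units": {"values": [32, 64, 128]},
        "learning_rate": {"min": 1e-4, "max": 1e-2},
    },
}

# Equivalent to running `wandb sweep example_sweep.yaml`; returns the sweep ID.
sweep_id = wandb.sweep(sweep_config, entity="<entity>", project="<project>")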

A sweep is then created on the command line using

wandb sweep example_sweep.yaml

This won't do any training yet; it creates a sweep ID, shows a link where the results can be tracked, and outputs the command needed to run the sweep, which is of the form

wandb agent <entity>/<project>/<sweep_id>

Running this command will start a training run with hyperparameters chosen according to the config file, and will keep starting new runs.
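The same agent can also be started from Python with wandb.agent; train_fn below is a placeholder for the sweep's actual training entry point, and the optional count argument caps how many runs the agent starts:

import wandb

def train_fn():
    # Inside a sweep, wandb.init() picks up the chosen hyperparameters
    # via wandb.config; train and log metrics here, as in train.py.
    wandb.init()
    wandb.log({"loss": 0.0})  # illustrative only

# Equivalent to `wandb agent <entity>/<project>/<sweep_id>`.
wandb.agent("<entity>/<project>/<sweep_id>", function=train_fn, count=5)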

Models are saved both locally and on wandb.
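The exact save path and format are not documented here, but the usual wandb pattern (sketched below with a hypothetical model.h5 filename and a stand-in model) is to write the model to disk and then upload the file to the current run:

import tensorflow as tf
import wandb

run = wandb.init(project="<project>")

# Train the model (omitted here; a tiny stand-in model), then write it to disk.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build(input_shape=(None, 3))
model.save("model.h5")

# Upload the local file so it is attached to this run on wandb.
wandb.save("model.h5")

# Alternatively, log it as a versioned artifact.
artifact = wandb.Artifact("trained-model", type="model")
artifact.add_file("model.h5")
run.log_artifact(artifact)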

Usage on Snellius

First, clone the repo into your directory on Snellius:

git clone https://github.com/GrainLearning/sequences.git

Manually copy the data into sequences/data/sequences.hdf5, and create a directory job_output.
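The internal layout of sequences.hdf5 is not documented here; to check that the copy succeeded, you can list its groups and datasets with h5py without assuming anything about the structure:

import h5py

# Print every group and dataset in the file, with dataset shapes.
with h5py.File("data/sequences.hdf5", "r") as f:
    def show(name, obj):
        shape = getattr(obj, "shape", None)
        print(name, shape if shape is not None else "(group)")
    f.visititems(show)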

To run the example sweep, run the run_sweep.sh job script:

sbatch run_sweep.sh example_sweep.yaml

This stores the Slurm logs and the information about the wandb sweep (including a link to the sweep page) in the job_output directory, and creates a wandb folder containing all the wandb output.

It uses a quarter of a node on the fat partition (32 cores), so it runs 32 agents of the same sweep in parallel.

Predicting

In predict.py, a sweep ID is used to load the best model found in that sweep and make predictions on test data.
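A hedged sketch of that workflow using wandb's public API (the model filename and project name are assumptions; predict.py itself may differ):

import tensorflow as tf
import wandb

api = wandb.Api()

# Fetch the sweep and pick its best run according to the sweep metric.
sweep = api.sweep("<entity>/<project>/<sweep_id>")
best_run = sweep.best_run()

# Download the model file saved with that run and load it.
best_run.file("model.h5").download(replace=True)
model = tf.keras.models.load_model("model.h5")

# predictions = model.predict(test_inputs)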
