This repository handles training, reamping, and exporting the weights of a model. For playing trained models in real time in a standalone application or plugin, see the partner repo, NeuralAmpModelerPlugin.
There are three main ways to use the NAM trainer. Two simplified trainers are available: (1) in your browser via Google Colab, and (2) locally via a GUI. There is also a full-featured trainer for power users that can be run from the command line.
If you don't have a good computer for training ML models, you can use Google Colab to train in the cloud using the pre-made notebooks under bin/train.
For the very easiest experience, open easy_colab.ipynb on Google Colab and follow the steps!
After installing the Python package, a GUI can be accessed by running nam from the command line.
Alternatively, you can clone this repo to your computer and use it locally.
Installation uses Anaconda for package management.
For computers with a CUDA-capable GPU (recommended):
conda env create -f environment_gpu.yml
Otherwise, for a CPU-only install (will train much more slowly):
conda env create -f environment_cpu.yml
Note: if Anaconda takes a long time "Solving environment...", then you can speed up installation by using the experimental libmamba solver: add --experimental-solver=libmamba to the conda env create command above.
Then activate the environment you've created with
conda activate nam
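If you installed the GPU environment, a quick sanity check that PyTorch can see your GPU (a minimal sketch; run it inside the activated nam environment):

```python
# Quick check that the GPU install worked.
# Run inside the activated "nam" conda environment.
import torch

print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```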
After installing, you can open a GUI trainer by running
nam
from the terminal.
For users looking for more fine-grained control over the modeling process, NAM includes a training script that can be run from the terminal. To run it:
Download v1_1_1.wav and output.wav to a folder of your choice
Edit bin/train/inputs/data/single_pair.json to point to the relevant audio files:
"common": {
"x_path": "C:\\path\\to\\v1_1_1.wav",
"y_path": "C:\\path\\to\\output.wav",
"delay": 0
}
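The delay field accounts for any latency between the signal you sent to your gear and what you recorded back. If you want a rough estimate of that latency, a sketch like the following can help (illustrative only, not part of NAM's training script; it assumes numpy and scipy are available and uses the file names from the example above):

```python
# Rough estimate of the recording latency (in samples) between the
# input file sent to the gear and the reamped output file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_x, x = wavfile.read("v1_1_1.wav")
rate_y, y = wavfile.read("output.wav")
assert rate_x == rate_y, "Sample rates don't match"

# Work on a mono slice from the start of each file to keep this fast.
n = min(len(x), len(y), 10 * rate_x)
x = np.asarray(x, dtype=float)[:n]
y = np.asarray(y, dtype=float)[:n]
if x.ndim > 1:
    x = x[:, 0]
if y.ndim > 1:
    y = y[:, 0]

# Positive delay means the recorded output lags the input.
corr = correlate(y, x, mode="full")
delay = int(np.argmax(corr)) - (len(x) - 1)
print(f"Estimated delay: {delay} samples")
```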
Open up a terminal. Activate your nam environment and call the training with
python bin/train/main.py \
bin/train/inputs/data/single_pair.json \
bin/train/inputs/models/demonet.json \
bin/train/inputs/learning/demo.json \
bin/train/outputs/MyAmp
- data/single_pair.json contains the information about the data you're training on.
- models/demonet.json contains information about the model architecture being trained. The example used here is a "feather"-configured WaveNet.
- learning/demo.json contains information about the training run itself (e.g., the number of epochs).
The configuration above runs a short (demo) training. For a real training, you may prefer to run something like:
python bin/train/main.py \
bin/train/inputs/data/single_pair.json \
bin/train/inputs/models/wavenet.json \
bin/train/inputs/learning/default.json \
bin/train/outputs/MyAmp
As a side note, NAM uses PyTorch Lightning under the hood as a modeling framework, and you can control many of the PyTorch Lightning configuration options from bin/train/inputs/learning/default.json.
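As a rough sketch of what that means in practice (the "trainer" key name below is an assumption for illustration; check default.json in your copy of the repo for the actual structure), options in the learning config end up as arguments to pytorch_lightning.Trainer, so you can adjust things like the number of epochs or the accelerator:

```python
# Minimal sketch of how learning-config options reach PyTorch Lightning.
# The "trainer" key name is an assumption; check
# bin/train/inputs/learning/default.json for the real structure.
import json

import pytorch_lightning as pl

with open("bin/train/inputs/learning/default.json") as fp:
    learning_config = json.load(fp)

# Keys such as "max_epochs" or "accelerator" map onto pl.Trainer arguments.
trainer = pl.Trainer(**learning_config.get("trainer", {}))
print(trainer.max_epochs)
```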
Export a model (to use with the plugin)
Exporting the trained model to a .nam file for use with the plugin can be done with:
python bin/export.py \
path/to/config_model.json \
path/to/checkpoints/epoch=123_val_loss=0.000010.ckpt \
path/to/exported_models/MyAmp
Then, point the plugin at the exported model.nam file and you're good to go!
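The exported .nam file is a plain JSON document, so if you're curious you can peek inside it (a small sketch; the only assumption is that the export step above wrote model.nam into the output directory you specified):

```python
# Peek inside an exported .nam file (a JSON document).
# The path matches the export example above.
import json

with open("path/to/exported_models/MyAmp/model.nam") as fp:
    model = json.load(fp)

# Print the top-level fields without dumping any large arrays in full.
for key, value in model.items():
    if isinstance(value, list):
        print(f"{key}: list of {len(value)} values")
    else:
        print(f"{key}: {value}")
```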
NAM can train using any paired audio files, but the simplified trainers (Colab and GUI) can use some pre-made audio files for you to reamp through your gear.
You can use any of the following files:
- v3_0_0.wav (preferred)
- v2_0_0.wav
- v1_1_1.wav
- v1.wav
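Whichever pair of files you use, the input and your reamped output need to line up: same sample rate and (ideally) the same length. A quick sanity check before training (illustrative only, using Python's standard-library wave module; the file names are placeholders):

```python
# Sanity-check a reamp pair before training: same sample rate,
# same (or very close) length. File names are placeholders.
import wave

def info(path):
    with wave.open(path, "rb") as f:
        return f.getframerate(), f.getnframes(), f.getnchannels()

x_rate, x_frames, x_channels = info("v3_0_0.wav")
y_rate, y_frames, y_channels = info("output.wav")

print(f"input:  {x_rate} Hz, {x_frames} frames, {x_channels} ch")
print(f"output: {y_rate} Hz, {y_frames} frames, {y_channels} ch")

assert x_rate == y_rate, "Sample rates don't match"
if x_frames != y_frames:
    print(f"Warning: lengths differ by {abs(x_frames - y_frames)} frames")
```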
You can also run a trained model on an input signal from the command line. This is handy if you want to check out a model without needing to use the plugin:
python bin/run.py \
path/to/source.wav \
path/to/config_model.json \
path/to/checkpoints/epoch=123_val_loss=0.000010.ckpt \
path/to/output.wav
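If you also have the real recording of the same input, you can get a quick number for how close the model is by comparing the two files, e.g. with an error-to-signal ratio (a sketch, assuming both files are mono, time-aligned, and share a sample rate; real_output.wav is a placeholder name and this is not part of bin/run.py):

```python
# Compare the model's output to the real recording with a simple
# error-to-signal ratio (lower is better). Illustrative only.
import numpy as np
from scipy.io import wavfile

_, pred = wavfile.read("path/to/output.wav")          # produced by bin/run.py
_, target = wavfile.read("path/to/real_output.wav")   # your real recording (placeholder)

pred = np.asarray(pred, dtype=float)
target = np.asarray(target, dtype=float)
n = min(len(pred), len(target))
pred, target = pred[:n], target[:n]

esr = np.sum((target - pred) ** 2) / np.sum(target ** 2)
print(f"ESR: {esr:.6f}")
```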