AMGSRN++ builds upon and improves the APMGSRN architecture with custom CUDA kernels, compression-aware training, feature-grid compression, and support for time-varying data. The CUDA kernels are implemented in a separate repository: AMG_Encoder.
- ✅ Windows
- ✅ WSL2
- ✅ Linux
- ❌ MacOS (not supported)
- Install CUDA 12.4 from NVIDIA's website
- Set up the environment:

  ```bash
  conda create -n amgsrn python=3.11
  conda activate amgsrn
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
  pip install -e . --extra-index-url https://download.pytorch.org/whl/cu124
  ```
- (Optional) Install TinyCudaNN for faster training:

  ```bash
  pip install "tiny-cuda-nn @ git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch"
  ```
Note: On Windows, execute this in x64 Native Tools Command Prompt for VS in administrator mode.
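After installation, you can quickly confirm that PyTorch was built against the CUDA 12.4 wheels and can see a GPU. The helper below is not part of the repository; it is just an illustrative sanity check:

```python
def cuda_status() -> str:
    """Report whether PyTorch and its CUDA runtime are usable in this environment."""
    try:
        import torch  # installed by the pip command above
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "torch installed, but no CUDA device is visible"
    # torch.version.cuda reports the CUDA version the wheel was built against (e.g. "12.4")
    return f"torch {torch.__version__} with CUDA {torch.version.cuda}"

print(cuda_status())
```

If this reports that no CUDA device is visible, double-check the driver installation and that you installed the `cu124` wheels rather than the CPU-only ones.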
Jobs are configured with JSON documents in the AMGSRN/BatchRunSettings folder; see that folder for examples.
```bash
python AMGSRN/start_jobs.py --settings train.json
python AMGSRN/start_jobs.py --settings test.json
```
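For orientation, a settings document might look like the sketch below. Every key shown here is hypothetical; the actual schema is defined by the example files shipped in AMGSRN/BatchRunSettings, which should be treated as the source of truth.

```json
{
  "note": "hypothetical sketch -- consult AMGSRN/BatchRunSettings for the real schema",
  "jobs": [
    {
      "data": "example_volume.nc",
      "model": "AMGSRN",
      "epochs": 50
    }
  ]
}
```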
The renderer provides an interactive visualization interface with the following features:
- Real-time volume rendering
- Transfer function editing
- Multiple colormap support
- Adjustable batch size and sampling rate
- Performance statistics
- Image export capabilities
For detailed renderer usage, see AMGSRN/UI/README.md.
Contributions are welcome! Please feel free to submit pull requests.
This project is licensed under the MIT License - see the LICENSE file for details.
If you use this work in your research, please cite the paper once it is published!