Machine Learning plugins for GIMP.
Forked from the original GIMP-ML to improve the user experience in several ways:
- The PyTorch models are packaged in PyTorch Hub format and are only downloaded as needed. This allows new models to be added more seamlessly, without needing to re-download gigabytes of model weights.
- Models are run with Python 3, avoiding the effort of back-porting them to Python 2.
- Fully automatic installation, tested on all major operating systems and Linux distributions.
- Errors are now reported directly in the UI, rather than on the command line only.
- Correct handling of alpha channels.
- Automatic conversion between RGB/grayscale as needed by the models.
- Results are always added to the same image instead of creating a new one.
- And many other smaller improvements.
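As a rough illustration of the alpha-channel and RGB/grayscale points above, the conversions can be sketched as below. This is a simplified sketch, not the plugins' actual code: the function names are hypothetical, and the gray conversion uses the standard ITU-R BT.601 luma weights.

```python
def rgb_to_gray(pixel):
    """Convert an (R, G, B) tuple in [0, 255] to a single gray value
    using the ITU-R BT.601 luma weights (which sum to 1.0)."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def split_alpha(rgba_pixels):
    """Separate RGBA pixels into RGB pixels plus an alpha list, so a
    model that only accepts RGB input can run; the alpha channel is
    re-attached to the result afterwards."""
    rgb = [(r, g, b) for r, g, b, _a in rgba_pixels]
    alpha = [a for _r, _g, _b, a in rgba_pixels]
    return rgb, alpha

def merge_alpha(rgb_pixels, alpha):
    """Re-attach a previously split alpha channel to an RGB result."""
    return [(r, g, b, a) for (r, g, b), a in zip(rgb_pixels, alpha)]
```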
The plugins have been tested with GIMP 2.10 on the following systems:
- macOS Catalina 10.15.5
- Ubuntu 18.04 LTS
- Ubuntu 20.04 LTS (apt-get only, snap is not yet supported)
- Debian 10 (buster)
- Arch Linux
- Windows 10
- Install GIMP.
- Clone this repository:
git clone https://github.com/valgur/GIMP-ML-Hub.git
- On Linux and macOS, run:
  ./install.sh
- On Windows:
  - Install Miniconda.
  - Enable execution of PowerShell scripts.
  - Run:
    install.ps1
- You should now find the GIMP-ML plugins under Layers → GIMP-ML. Feel free to create an issue if they are missing for some reason.
MaskGAN (facial image manipulation):
- Source: https://github.com/switchablenorms/CelebAMask-HQ
- Torch Hub fork: https://github.com/valgur/CelebAMask-HQ
- License:
- CC BY-NC-SA 4.0
- Copyright (C) 2017 NVIDIA Corporation. All rights reserved.
- Restricted to non-commercial research and educational purposes
- C.-H. Lee, Z. Liu, L. Wu, and P. Luo, “MaskGAN: Towards Diverse and Interactive Facial Image Manipulation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Face parsing (BiSeNet):
- Source: https://github.com/zllrunning/face-parsing.PyTorch
- Torch Hub fork: https://github.com/valgur/face-parsing.PyTorch
- License: MIT
- Based on BiSeNet:
- https://github.com/CoinCheung/BiSeNet
- License: MIT
- C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, “BiSeNet: Bilateral Segmentation Network for Real-Time Semantic Segmentation,” in European Conference on Computer Vision (ECCV), 2018, pp. 334–349.
SRResNet (super-resolution):
- Source: https://github.com/twtygqyy/pytorch-SRResNet
- Torch Hub fork: https://github.com/valgur/pytorch-SRResNet
- License: MIT
- C. Ledig et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 105–114.
DeblurGAN-v2 (deblurring):
- Source: https://github.com/TAMU-VITA/DeblurGANv2
- Torch Hub fork: https://github.com/valgur/DeblurGANv2
- License: BSD 3-clause
- O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, “DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8877–8886.
MiDaS (monocular depth estimation):
- Source: https://github.com/intel-isl/MiDaS
- License: MIT, (c) 2019 Intel ISL (Intel Intelligent Systems Lab)
- R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V. Koltun, “Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer,” 2019.
Monodepth2 (monocular depth estimation):
- Source: https://github.com/nianticlabs/monodepth2
- Torch Hub fork: https://github.com/valgur/monodepth2
- License:
- See the license file for terms
- Copyright © Niantic, Inc. 2019. Patent Pending. All rights reserved.
- Non-commercial use only
- C. Godard, O. Mac Aodha, M. Firman, and G. Brostow, “Digging Into Self-Supervised Monocular Depth Estimation,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 3827–3837.
Neural colorization:
- Source: https://github.com/zeruniverse/neural-colorization
- Torch Hub fork: https://github.com/valgur/neural-colorization
- License:
- GNU GPL 3.0 for personal or research use
- Commercial use prohibited
- Model weights released under CC BY 4.0
- Based on fast-neural-style:
- https://github.com/jcjohnson/fast-neural-style
- License:
- Free for personal or research use
- For commercial use please contact the authors
- J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution,” in European Conference on Computer Vision (ECCV), 2016, pp. 694–711.
- Martin Valgur (valgur) – this version
- Kritik Soman (kritiksoman) – original GIMP-ML implementation
MIT
Please note that additional license terms apply to each individual model; see the references above for details. Many of the models are restricted to non-commercial or research use only.