Conda package of NNPOps-PyTorch #15

Closed · wants to merge 80 commits

Commits
b74a2b8
PyTorch wrapper for the forward pass on CPU
Sep 25, 2020
a8eb8e0
CMake file for the PyTorch wrapper
Sep 25, 2020
bae1ecb
Pytorch wrapper for the backward pass (not yet working)
Sep 28, 2020
21b88b8
Wrap CpuANISymmetryFunctions as a custom Pytorch class
Sep 28, 2020
e7cf48c
Pytorch wrapper of the backward pass
Sep 28, 2020
3fdc9f0
Simplify Pytorch wrapper
Sep 29, 2020
7f65603
Pytorch wrapper for the CUDA implementation
Sep 29, 2020
a9a1fbd
Fix a typo
Sep 29, 2020
45a6031
Simplify the Pytorch wrapper
Sep 30, 2020
46ef01e
Fix the memory leak in the PyTorch wrapper
Oct 1, 2020
62206eb
Pass the box vector to the PyTorch wrapper
Oct 1, 2020
0ea2863
Merge branch 'master' into pytorch
Oct 5, 2020
d94936a
Unify the names of PyTorch wrapper
Oct 6, 2020
524982a
Implement integration with TorchANI via the PyTorch wrapper
Oct 6, 2020
b2b2a9e
Simplify and add check to the TorchANI integration
Oct 6, 2020
4e6f226
Merge branch 'master' into pytorch
Oct 7, 2020
b3c6ca4
Rename the PyTorch wrapper component
Oct 7, 2020
fec2500
Add a test for TorchANISymmetryFunctions
Oct 7, 2020
060701b
Fix the serialization of TorchANISymmetryFunctions
Oct 7, 2020
5cd9ab9
Add a test for the serialization of TorchANISymmetryFunctions
Oct 7, 2020
f8b4584
Merge remote-tracking branch 'origin/master' into pytorch
Oct 15, 2020
d821ddf
Implement TorchANIBatchedNNs
Oct 15, 2020
6536eae
Add a benchmark script for TorchANIBatchedNNs
Oct 15, 2020
97dd1f6
Disable unnecessary derivatives in TorchANIBatchedNNs
Oct 16, 2020
551591b
Add more molecules for TorchANISymmetryFunctions tests
Oct 21, 2020
bdae88f
Update TorchANISymmetryFunctions tests to use all the molecules
Oct 21, 2020
ee82560
Improve CMake file for NNPOpsPyTorch
Oct 21, 2020
40bb2fa
Add installation instructions for NNPOpsPyTorch
Oct 21, 2020
df3b1ee
Fix the import of NNPOps in Python
Oct 21, 2020
525153f
Add a usage example for NNPOpsPyTorch
Oct 21, 2020
4d9fdc9
Fix the import in the example
Oct 21, 2020
ae73034
Add docstrings for TorchANISymmetryFunctions
Oct 22, 2020
c5cf004
Add more general text about the wrapper
Oct 22, 2020
cd86134
Fix typo
Oct 23, 2020
f912cc8
Add conda recipe for NNPOps-PyTorch
Oct 23, 2020
3a90c8e
Add version specifications for NNPOps-PyTorch
Oct 23, 2020
ae5b26f
Disable some failing molecules
Oct 23, 2020
4f04d8b
Use GCC <9 to build NNPOps
Oct 26, 2020
8717e7a
Add cuDNN to the build dependencies of NNPOps
Oct 26, 2020
f5c36f1
Add an explicit cudatoolkit dependency to NNPOps
Oct 26, 2020
9876414
Skip explicit Python dependency for NNPOps
Oct 26, 2020
531e26c
Add the about section to the conda recipe
Oct 26, 2020
ddd222c
Add a benchmark script for TorchANISymmetryFunctions
Oct 28, 2020
ed3cc15
Make PyTorch and NNPOps run on the same GPU
Oct 28, 2020
83807c5
Merge branch 'pytorch' into conda_1
Oct 28, 2020
d023e0b
Version 0.0.0a1
Oct 28, 2020
396414f
Pin PyTorch version
Oct 28, 2020
63ca4ee
Merge branch 'master' into batchedNN
Oct 29, 2020
4d6fa08
Merge branch 'master' into conda_1
Oct 29, 2020
336bfe6
Move the files to the pytorch directory
Oct 29, 2020
5fe5973
Add test for TorchANIBatchedNN
Oct 29, 2020
d7aa121
Add file headers
Oct 29, 2020
5eb324e
Make TorchANIBatchedNN accept atomic numbers for consistency
Oct 29, 2020
905cba0
Unify the name of TorchANIBatchedNN
Oct 29, 2020
4dfbcb3
Fix a molecule path
Oct 29, 2020
0095e62
Install BatchedNN.py and update imports
Oct 29, 2020
299b030
Simplify TorchANIBatchedNN.__init__
Oct 29, 2020
1f6e985
Update BenchmarkBatchedNN.py
Oct 29, 2020
9c69fcd
Merge branch 'batchedNN' into conda_1
Oct 29, 2020
2d4f3ca
Temporarily disable some BatchedNN tests
Oct 29, 2020
1a84fe7
Package and run all the test files
Oct 29, 2020
22bf353
Version 0.0.0a2
Oct 29, 2020
2dc94af
Implement BatchedLinear
Nov 3, 2020
7e0f281
Implement BatchedLinear
Nov 3, 2020
5d8f928
Merge branch 'batchedNN' into conda_1
Nov 3, 2020
52b4d75
Unify the names of BatchedNN
Nov 4, 2020
be29ad1
Clean up BatchedNN
Nov 4, 2020
e54a4d0
Merge branch 'batchedNN' into conda_1
Nov 4, 2020
c104734
Version 0.0.0a3
Nov 4, 2020
b3914b6
Use PyTorch from conda-forge
Feb 10, 2021
dc9d5d1
Pin cudatoolkit version
Feb 10, 2021
697f7bc
Silence the CUDA architecture warning
Feb 11, 2021
c5c9478
Version 0.0.0a4
Feb 11, 2021
6bdc48b
Merge branch 'master' into batchedNN
May 26, 2021
c7ebacd
Merge branch 'batchedNN' into conda_1
May 26, 2021
66cf7de
Update to PyTorch 1.8
May 26, 2021
c894dc6
Fix documentation
May 26, 2021
faccf46
Version 0.0.0a5
May 26, 2021
ff80469
Switch to CUDA 11.0
May 27, 2021
42c805c
Fix nvcc path
May 27, 2021
8 changes: 8 additions & 0 deletions conda/README.md
@@ -0,0 +1,8 @@
# Conda package

## Build

```bash
export PATH=/usr/local/cuda-10.2/bin:$PATH
conda build nnpops-pytorch --channel conda-forge --python 3.7
```
60 changes: 60 additions & 0 deletions conda/nnpops-pytorch/meta.yaml
@@ -0,0 +1,60 @@
{% set name = "nnpops-pytorch" %}
{% set version = "0.0.0a5" %}
{% set build = 0 %}

package:
  name: {{ name }}
  version: {{ version }}

source:
  path: ../..

build:
  number: {{ build }}
  rpaths:
    - lib/
    # Note: $PY_VER isn't expanded here
    - lib/python3.6/site-packages/torch/lib/ # [py==36]
    - lib/python3.7/site-packages/torch/lib/ # [py==37]
    - lib/python3.8/site-packages/torch/lib/ # [py==38]
    - lib/python3.9/site-packages/torch/lib/ # [py==39]
  script:
    - cmake $SRC_DIR/pytorch
        -DCMAKE_CUDA_COMPILER=/usr/local/cuda-11.0/bin/nvcc
        -DCMAKE_CUDA_HOST_COMPILER=$CXX
        -DTorch_DIR=$PREFIX/lib/python$PY_VER/site-packages/torch/share/cmake/Torch
        -DCMAKE_BUILD_TYPE=Release
        -DCMAKE_INSTALL_PREFIX=$PREFIX
    - make install VERBOSE=1

requirements:
  build:
    - cmake
    - gxx_linux-64
    - make
  host:
    - cudatoolkit 11.0.*
    - python
    - pytorch 1.8.0 cuda*
  run:
    - cudatoolkit 11.0.*
    - pytorch >=1.8 cuda*
    - torchani >=2.2

test:
  requires:
    - conda-build
    - mdtraj
    - pytest
  source_files:
    - pytorch/molecules
    - pytorch/Test*.py
  commands:
    - conda inspect linkages {{ name }}
    - cd pytorch && pytest Test*.py

about:
  home: https://github.com/peastman/NNPOps
  license: MIT
  license_file: LICENSE
  summary: Optimized operations for neural network potentials
50 changes: 50 additions & 0 deletions pytorch/BatchedNN.cpp
@@ -0,0 +1,50 @@
/**
* Copyright (c) 2020 Acellera
* Authors: Raimondas Galvelis
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/

#include <torch/script.h>

using Context = torch::autograd::AutogradContext;
using Tensor = torch::Tensor;
using tensor_list = torch::autograd::tensor_list;

class BatchedLinearFunction : public torch::autograd::Function<BatchedLinearFunction> {
public:
    static Tensor forward(Context* ctx, const Tensor& vectors, const Tensor& weights, const Tensor& biases) {
        ctx->save_for_backward({weights});
        return torch::matmul(weights, vectors) + biases;
    };
    static tensor_list backward(Context *ctx, const tensor_list& grads) {
        const Tensor grad_in = grads[0].squeeze(-1).unsqueeze(-2);
        const Tensor weights = ctx->get_saved_variables()[0];
        const Tensor grad_out = torch::matmul(grad_in, weights).squeeze(-2).unsqueeze(-1);
        return {grad_out, torch::Tensor(), torch::Tensor()};
    };
};

static Tensor BatchedLinear(const Tensor& vector, const Tensor& weights, const Tensor& biases) {
    return BatchedLinearFunction::apply(vector, weights, biases);
}

TORCH_LIBRARY(NNPOpsBatchedNN, m) {
    m.def("BatchedLinear", BatchedLinear);
}
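
`BatchedLinearFunction` applies every model's linear layer to every atom with a single broadcast `matmul`, and its backward pass only produces a gradient for `vectors`; the undefined tensors returned for `weights` and `biases` are harmless because the Python wrapper below freezes those parameters. A minimal sketch of the broadcasting, with hypothetical sizes (7 molecules, 3 atoms, 2 models, a layer mapping 5 to 4 features):

```python
import torch

weights = torch.randn(1, 3, 2, 4, 5)  # [1, num_atoms, num_models, out, in]
biases  = torch.randn(1, 3, 2, 4, 1)  # [1, num_atoms, num_models, out, 1]
vectors = torch.randn(7, 3, 1, 5, 1)  # [num_mols, num_atoms, 1, in, 1]

# Same computation as BatchedLinearFunction::forward: the batch dimensions
# broadcast to [num_mols, num_atoms, num_models] and each [out, in] weight
# matrix multiplies the corresponding [in, 1] column vector.
out = torch.matmul(weights, vectors) + biases
assert out.shape == (7, 3, 2, 4, 1)

# Spot-check one (molecule, atom, model) slice against a plain linear map.
ref = weights[0, 1, 0] @ vectors[4, 1, 0] + biases[0, 1, 0]
assert torch.allclose(out[4, 1, 0], ref)
```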
104 changes: 104 additions & 0 deletions pytorch/BatchedNN.py
@@ -0,0 +1,104 @@
#
# Copyright (c) 2020 Acellera
# Authors: Raimondas Galvelis
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#

import os
import torch
from torch import nn
from torch import Tensor
from torch.nn import functional as F
import torchani
from torchani.nn import ANIModel, Ensemble, SpeciesConverter, SpeciesEnergies
from typing import List, Optional, Tuple, Union

torch.ops.load_library(os.path.join(os.path.dirname(__file__), 'libNNPOpsPyTorch.so'))
batchedLinear = torch.ops.NNPOpsBatchedNN.BatchedLinear


class TorchANIBatchedNN(torch.nn.Module):

    def __init__(self, converter: SpeciesConverter, ensemble: Union[ANIModel, Ensemble], atomicNumbers: Tensor):

        super().__init__()

        # Convert atomic numbers to a list of species
        species_list = converter((atomicNumbers, torch.empty(0))).species[0].tolist()

        # Handle the case when the ensemble is just one model
        ensemble = [ensemble] if type(ensemble) == ANIModel else ensemble

        # Convert models to the list of linear layers
        models = [list(model.values()) for model in ensemble]

        # Extract the weights and biases of the linear layers
        for ilayer in [0, 2, 4, 6]:
            layers = [[model[species][ilayer] for species in species_list] for model in models]
            weights, biases = self.batchLinearLayers(layers)
            self.register_parameter(f'layer{ilayer}_weights', weights)
            self.register_parameter(f'layer{ilayer}_biases', biases)

        # Disable autograd for the parameters
        for parameter in self.parameters():
            parameter.requires_grad = False

    @staticmethod
    def batchLinearLayers(layers: List[List[nn.Linear]]) -> Tuple[nn.Parameter, nn.Parameter]:

        num_models = len(layers)
        num_atoms = len(layers[0])

        # Note: different elements have different size linear layers, so we just find maximum sizes
        # and pad with zeros.
        max_out = max(layer.out_features for layer in sum(layers, []))
        max_in = max(layer.in_features for layer in sum(layers, []))

        # Copy weights and biases
        weights = torch.zeros((1, num_atoms, num_models, max_out, max_in), dtype=torch.float32)
        biases  = torch.zeros((1, num_atoms, num_models, max_out, 1), dtype=torch.float32)
        for imodel, sublayers in enumerate(layers):
            for iatom, layer in enumerate(sublayers):
                num_out, num_in = layer.weight.shape
                weights[0, iatom, imodel, :num_out, :num_in] = layer.weight
                biases [0, iatom, imodel, :num_out, 0] = layer.bias

        return nn.Parameter(weights), nn.Parameter(biases)

    def forward(self, species_aev: Tuple[Tensor, Tensor]) -> SpeciesEnergies:

        species, aev = species_aev

        # Reshape: [num_mols, num_atoms, num_features] --> [num_mols, num_atoms, 1, num_features, 1]
        vectors = aev.unsqueeze(-2).unsqueeze(-1)

        vectors = batchedLinear(vectors, self.layer0_weights, self.layer0_biases) # Linear 0
        vectors = F.celu(vectors, alpha=0.1)                                      # CELU   1
        vectors = batchedLinear(vectors, self.layer2_weights, self.layer2_biases) # Linear 2
        vectors = F.celu(vectors, alpha=0.1)                                      # CELU   3
        vectors = batchedLinear(vectors, self.layer4_weights, self.layer4_biases) # Linear 4
        vectors = F.celu(vectors, alpha=0.1)                                      # CELU   5
        vectors = batchedLinear(vectors, self.layer6_weights, self.layer6_biases) # Linear 6

        # Sum:  [num_mols, num_atoms, num_models, 1, 1] --> [num_mols, num_models]
        # Mean: [num_mols, num_models] --> [num_mols]
        energies = torch.mean(torch.sum(vectors, (1, 3, 4)), 1)

        return SpeciesEnergies(species, energies)
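
Padding each species' layers to common `max_out`/`max_in` sizes is what allows a single stacked weight tensor, and it does not change the result: the padded weight rows, columns, and bias entries are all zero, so the extra output channels stay exactly zero (CELU maps 0 to 0) and drop out of the final sum. A small self-contained check of this invariant, with hypothetical sizes:

```python
import torch
from torch import nn

layer = nn.Linear(5, 3)  # a "real" layer: 5 inputs, 3 outputs
x = torch.randn(5)

# Pad the layer to hypothetical maximum sizes, as batchLinearLayers does.
max_out, max_in = 6, 8
weight = torch.zeros(max_out, max_in)
bias = torch.zeros(max_out)
weight[:3, :5] = layer.weight.detach()
bias[:3] = layer.bias.detach()

# Zero-pad the input vector to match.
x_pad = torch.zeros(max_in)
x_pad[:5] = x

y_pad = weight @ x_pad + bias
assert torch.allclose(y_pad[:3], layer(x))  # real outputs are unchanged
assert torch.all(y_pad[3:] == 0)            # padded outputs are exactly zero
```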
99 changes: 99 additions & 0 deletions pytorch/BenchmarkBatchedNN.py
@@ -0,0 +1,99 @@
#
# Copyright (c) 2020 Acellera
# Authors: Raimondas Galvelis
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#

import mdtraj
import time
import torch
import torchani

# from NNPOps.SymmetryFunctions import TorchANISymmetryFunctions
from NNPOps.BatchedNN import TorchANIBatchedNN

device = torch.device('cuda')

mol = mdtraj.load('molecules/2iuz_ligand.mol2')
species = torch.tensor([[atom.element.atomic_number for atom in mol.top.atoms]], device=device)
positions = torch.tensor(mol.xyz, dtype=torch.float32, requires_grad=True, device=device)

nnp = torchani.models.ANI2x(periodic_table_index=True, model_index=None).to(device)
print(nnp)

energy_ref = nnp((species, positions)).energies
energy_ref.backward()
grad_ref = positions.grad.clone()

N = 2000
start = time.time()
for _ in range(N):
    energy_ref = nnp((species, positions)).energies
delta = time.time() - start
print(f'ANI-2x (forward pass)')
print(f' Duration: {delta} s')
print(f' Speed: {delta/N*1000} ms/it')

N = 1000
start = time.time()
for _ in range(N):
    energy_ref = nnp((species, positions)).energies
    positions.grad.zero_()
    energy_ref.backward()
delta = time.time() - start
print(f'ANI-2x (forward & backward pass)')
print(f' Duration: {delta} s')
print(f' Speed: {delta/N*1000} ms/it')

# nnp.aev_computer = TorchANISymmetryFunctions(nnp.aev_computer).to(device)
nnp.neural_networks = TorchANIBatchedNN(nnp.species_converter, nnp.neural_networks, species).to(device)
print(nnp)

# nnp = torch.jit.script(nnp)
# nnp.save('nnp.pt')
# nnp = torch.jit.load('nnp.pt')

energy = nnp((species, positions)).energies
positions.grad.zero_()
energy.backward()
grad = positions.grad.clone()

N = 10000
start = time.time()
for _ in range(N):
    energy = nnp((species, positions)).energies
delta = time.time() - start
print(f'ANI-2x with BatchedNN (forward pass)')
print(f' Duration: {delta} s')
print(f' Speed: {delta/N*1000} ms/it')

N = 5000
start = time.time()
for _ in range(N):
    energy = nnp((species, positions)).energies
    positions.grad.zero_()
    energy.backward()
delta = time.time() - start
print(f'ANI-2x with BatchedNN (forward & backward pass)')
print(f' Duration: {delta} s')
print(f' Speed: {delta/N*1000} ms/it')

# print(float(energy_ref), float(energy), float(energy_ref - energy))
# print(float(torch.max(torch.abs((grad - grad_ref)/grad_ref))))
10 changes: 6 additions & 4 deletions pytorch/CMakeLists.txt
@@ -10,13 +10,15 @@ find_package(Torch REQUIRED)

 set(CMAKE_INSTALL_RPATH_USE_LINK_PATH true)

-add_library(${LIBRARY} SHARED SymmetryFunctions.cpp
-                              ../ani/CpuANISymmetryFunctions.cpp
-                              ../ani/CudaANISymmetryFunctions.cu)
+add_library(${LIBRARY} SHARED BatchedNN.cpp
+                              SymmetryFunctions.cpp
+                              ../ani/CpuANISymmetryFunctions.cpp
+                              ../ani/CudaANISymmetryFunctions.cu)
 target_compile_features(${LIBRARY} PRIVATE cxx_std_14)
 target_include_directories(${LIBRARY} PRIVATE ${PYTHON_INCLUDE_DIRS})
 target_include_directories(${LIBRARY} PRIVATE ../ani)
 target_link_libraries(${LIBRARY} ${TORCH_LIBRARIES} ${PYTHON_LIBRARIES})
 set_property(TARGET ${LIBRARY} PROPERTY CUDA_ARCHITECTURES OFF)

 install(TARGETS ${LIBRARY} DESTINATION ${Python_SITEARCH}/${NAME})
-install(FILES SymmetryFunctions.py DESTINATION ${Python_SITEARCH}/${NAME})
+install(FILES BatchedNN.py SymmetryFunctions.py DESTINATION ${Python_SITEARCH}/${NAME})
2 changes: 1 addition & 1 deletion pytorch/README.md
@@ -52,7 +52,7 @@ $ git clone https://github.com/openmm/NNPOps.git
- Create a *Conda* environment
```bash
$ cd NNPOps
-$ conda create -f pytorch/environment.yml
+$ conda env create -f pytorch/environment.yml
$ conda activate nnpops
```
