Code and examples for directly calling PyTorch ML models from Fortran.
It is desirable to be able to run machine learning (ML) models directly in Fortran. Such models are often trained in another language (say, Python) using a popular framework (say, PyTorch) and saved. We want to run inference on these models without having to call a Python executable. To achieve this we use the existing PyTorch C++ interface.
This project provides a library enabling a user to directly couple their PyTorch models to Fortran code. We provide installation instructions for the library as well as instructions and examples for performing coupling.
Project status: This project is currently in pre-release, with documentation and code being prepared for a first release. As such, breaking changes may be made. If you are interested in using this library, please get in touch.
Installing the library requires the following to be installed on the system:
To build and install the library:
- Navigate to the location in which you wish to install the source and run:
  ```bash
  git clone git@github.com:Cambridge-ICCS/fortran-pytorch-lib.git
  ```
  to clone via SSH, or
  ```bash
  git clone https://github.com/Cambridge-ICCS/fortran-pytorch-lib.git
  ```
  to clone via HTTPS.
- Navigate into the library source directory by running:
  ```bash
  cd fortran-pytorch-lib/fortran-pytorch-lib/
  ```
- Create a `build` directory and execute CMake from within it using the relevant flags:
  ```bash
  mkdir build
  cd build
  cmake .. -DCMAKE_BUILD_TYPE=Release
  ```
  It is likely that you will need to provide at least the `CMAKE_PREFIX_PATH` flag.
  The following CMake flags are available and can be passed as arguments through `-D<Option>=<Value>`:

  | Option | Value | Description |
  | --- | --- | --- |
  | `CMAKE_Fortran_COMPILER` | `ifort` / `gfortran` | Specify a Fortran compiler to build the library with. This should match the Fortran compiler you're using to build the code you are calling this library from. |
  | `CMAKE_C_COMPILER` | `icc` / `gcc` | Specify a C compiler to build the library with. |
  | `CMAKE_CXX_COMPILER` | `icc` / `gcc` | Specify a C++ compiler to build the library with. |
  | `CMAKE_PREFIX_PATH` | `</path/to/libTorch/>` | Location of the Torch installation¹ |
  | `CMAKE_INSTALL_PREFIX` | `</path/to/install/lib/at/>` | Location at which the library files should be installed. By default this is `/usr/local`. |
  | `CMAKE_BUILD_TYPE` | `Release` / `Debug` | Specifies the build type. The default is `Debug`; use `Release` for production code. |

  ¹ The path to the Torch installation needs to allow CMake to locate the relevant Torch CMake files. If Torch has been installed as libtorch, then this should be the absolute path to the unzipped libtorch distribution. If Torch has been installed as PyTorch in a Python venv (virtual environment), e.g. with `pip install torch`, then this should be `</path/to/venv/>lib/python<3.xx>/site-packages/torch/`.
- Make and install the code to the chosen location with:
  ```bash
  make
  make install
  ```
This will place the following directories at the install location:

- `include/` - contains header and mod files
- `lib64/` - contains CMake and `.so` files
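If PyTorch was installed through pip, the value needed for `CMAKE_PREFIX_PATH` can be queried from Python itself. A minimal sketch using `torch.utils.cmake_prefix_path`, which points at the CMake configuration files shipped inside the Torch installation:

```python
# Print the directory containing Torch's CMake configuration files.
# Pass this value to cmake via -DCMAKE_PREFIX_PATH=<printed path>
import torch

print(torch.utils.cmake_prefix_path)
```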
In order to use fortran-pytorch, users will typically need to follow these steps:
- Save a PyTorch model as TorchScript.
- Write Fortran using the fortran-pytorch-lib bindings to use the model from within Fortran.
- Build and compile the code, linking against fortran-pytorch-lib.
The trained PyTorch model needs to be exported to TorchScript. This can be done from within your code using the `torch.jit.script` or `torch.jit.trace` functionality from within Python. If you are not familiar with these, we provide a tool, `pt2ts.py`, as part of this distribution, which contains an easily adaptable script to save your PyTorch model as TorchScript.
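For example, a model can be saved to TorchScript in only a few lines of Python. The sketch below uses a stand-in network (`TinyNet` is an illustrative assumption; substitute your own trained model) and saves it via `torch.jit.script`:

```python
import torch

# A stand-in model for illustration only; replace with your trained network.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 5)

    def forward(self, x):
        return self.linear(x)

model = TinyNet()
model.eval()

# Compile the model to TorchScript...
scripted = torch.jit.script(model)
# ...or, alternatively, trace it with a representative input:
# scripted = torch.jit.trace(model, torch.ones(10, 10))

# Save the TorchScript model, ready for loading from Fortran
scripted.save("model.pt")
```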
To use the trained Torch model from within Fortran we need to import the `ftorch` module and use the binding routines to load the model, convert the data, and run inference.
A very simple example is given below. For more detailed documentation please consult the API documentation, source code, and examples.
This minimal snippet loads a saved Torch model, creates inputs consisting of two 10x10
matrices (one of ones, and one of zeros), and runs the model to infer output.
```fortran
! Import any C bindings as required for this code
use, intrinsic :: iso_c_binding, only: c_int, c_int64_t, c_null_char, c_loc
! Import library for interfacing with PyTorch
use ftorch

implicit none

! Generate an object to hold the Torch model
type(torch_module) :: model

! Set up types of input and output data and the interface with C
integer(c_int), parameter :: dims_input = 2
integer(c_int64_t) :: shape_input(dims_input)
integer(c_int), parameter :: n_inputs = 2
type(torch_tensor), dimension(n_inputs) :: model_input_arr
integer(c_int), parameter :: dims_output = 1
integer(c_int64_t) :: shape_output(dims_output)
type(torch_tensor) :: model_output

! Set up the model inputs as Fortran arrays
real, dimension(10,10), target :: input_1, input_2
real, dimension(5), target :: output

! Initialise the Torch model to be used
model = torch_module_load("/path/to/saved/model.pt"//c_null_char)

! Initialise the inputs as Fortran arrays
input_1 = 0.0
input_2 = 1.0

! Wrap Fortran data as no-copy Torch Tensors
! There may well be some reshaping required depending on the
! structure of the model which is not covered here (see examples)
! Note: torch_kFloat32 matches the default real kind declared above
shape_input = (/10, 10/)
shape_output = (/5/)
model_input_arr(1) = torch_tensor_from_blob(c_loc(input_1), dims_input, shape_input, torch_kFloat32, torch_kCPU)
model_input_arr(2) = torch_tensor_from_blob(c_loc(input_2), dims_input, shape_input, torch_kFloat32, torch_kCPU)
model_output = torch_tensor_from_blob(c_loc(output), dims_output, shape_output, torch_kFloat32, torch_kCPU)

! Run the model and infer the output
! Again, there may be some reshaping required depending on model design
call torch_module_forward(model, model_input_arr, n_inputs, model_output)

! Write out the result of running the model
write(*,*) output

! Clean up
call torch_module_delete(model)
call torch_tensor_delete(model_input_arr(1))
call torch_tensor_delete(model_input_arr(2))
call torch_tensor_delete(model_output)
```
The code now needs to be compiled and linked against our installed library. Here we describe how to do this for two build systems, CMake and Make.

If our project were using CMake, we would need the following in the `CMakeLists.txt` file to find the FTorch installation and link it to the executable:
```cmake
find_package(FTorch)
target_link_libraries( <executable> PRIVATE FTorch::ftorch )
message(STATUS "Building with Fortran PyTorch coupling")
```
and using the `-DFTorch_DIR=</path/to/install/location>` flag when running CMake.
To build with Make we need to include the library when compiling and link the executable against it.

To compile with Make we need to add the following compiler flag when compiling files that use ftorch:

```make
FCFLAGS += -I<path/to/install/location>/include/ftorch
```

When compiling the final executable, add the following link flag:

```make
LDFLAGS += -L<path/to/install/location>/lib64 -lftorch
```
You may also need to add the location of the `.so` files to your `LD_LIBRARY_PATH` unless you are installing in a default location:

```bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path/to/installation>/lib64
```
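Putting the compile and link flags above together, a minimal Makefile for a single-source program might look like the following sketch (the program name is a placeholder, and the install path should be substituted for your own):

```make
FC      = gfortran
FCFLAGS = -I<path/to/install/location>/include/ftorch
LDFLAGS = -L<path/to/install/location>/lib64 -lftorch

my_program: my_program.f90
	$(FC) $(FCFLAGS) -o $@ $< $(LDFLAGS)
```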
Examples of how to use this library are provided in the examples directory.
They demonstrate different functionalities and are provided with instructions to modify, build, and run as necessary.
Copyright © ICCS
Fortran-PyTorch-Lib is distributed under the MIT Licence.
Contributions and collaborations are welcome.
For bugs, feature requests, and clear suggestions for improvement please open an issue.
If you have built something upon Fortran-PyTorch-Lib that would be useful to others, or can address an open issue, please fork the repository and open a pull request.
Everyone participating in the Fortran-PyTorch-Lib project, and in particular in the issue tracker, pull requests, and social media activity, is expected to treat other people with respect and, more generally, to follow the guidelines articulated in the Python Community Code of Conduct.
Fortran-PyTorch-Lib is written and maintained by the ICCS.
Notable contributors to this project are:
See Contributors for a full list.
The following projects make use of this code or derivatives in some way:
Are we missing anyone? Let us know.