Add a web docs page explaining the API changes and how to update.
Commit 0b8e8c6 (1 parent ac4c14f): 1 changed file with 106 additions and 0 deletions.

title: Recent API Changes

[TOC]

## Why?

Recently we made a number of breaking changes to the FTorch API.

We realise that this is an inconvenience to those of you who are actively
using FTorch, and it is not something we did lightly.
These changes were necessary to improve functionality, and we have made them in one go
as we move towards a stable API and a first release in the very near future.
Once the first release is out and the API is standardised, changes like this
will be avoided. We hope that this is the last time we have such a shift.

The changes allow us to implement two new features:

#. Multiple output tensors
   Previously you could pass an array of several input tensors to a Torch model, but
   only receive a single output tensor back. Now you can use models that return several
   output tensors by passing an array of output tensors instead.
#. Preparation for autograd functionality
   We hope to make it easier to access the autograd features of PyTorch from within Fortran.
   To do this we needed to change how data is assigned from a Fortran array to a Torch tensor.
   This is now done via a subroutine call rather than a function.

## Changes and how to update your code

### torch_tensors are created using a subroutine call, not a function

Previously you would have created a torch tensor and assigned some Fortran data to it as follows:
```fortran
real, dimension(5), target :: fortran_data
type(torch_tensor) :: my_tensor
integer :: tensor_layout(1) = [1]
my_tensor = torch_tensor_from_array(fortran_data, tensor_layout, torch_kCPU)
```

Now a call is made to a subroutine with the tensor as the first argument:
```fortran
real, dimension(5), target :: fortran_data
type(torch_tensor) :: my_tensor
integer :: tensor_layout(1) = [1]
call torch_tensor_from_array(my_tensor, fortran_data, tensor_layout, torch_kCPU)
```
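
The same pattern applies to higher-rank data; the layout array simply gains one entry per
array dimension. Below is a minimal, self-contained sketch, assuming FTorch is made available
via a `use ftorch` statement and using the same call as above; the array shape is an
illustrative assumption, not part of the original example:
```fortran
! Minimal sketch (assumed setup): wrap a rank-2 Fortran array as a Torch tensor
! using the subroutine form shown above.
program tensor_2d_sketch
  use ftorch   ! assumed module name providing torch_tensor, torch_tensor_from_array, torch_kCPU
  implicit none

  real, dimension(3, 4), target :: fortran_data_2d
  type(torch_tensor) :: my_tensor_2d
  integer :: tensor_layout_2d(2) = [1, 2]  ! one layout entry per array dimension

  fortran_data_2d = 0.0

  ! Tensor comes first; the data, layout and device follow, as in the 1D case
  call torch_tensor_from_array(my_tensor_2d, fortran_data_2d, tensor_layout_2d, torch_kCPU)
end program tensor_2d_sketch
```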

### module becomes model and loading becomes a subroutine call, not a function

Previously a neural net was referred to as a '`module`' and loaded using appropriately
named functions and types:
```fortran
type(torch_module) :: model
model = torch_module_load('path_to_saved_net.pt')
call torch_module_forward(model, in_tensors, out_tensors)
```

Following user feedback, we now refer to a neural net and its associated types and calls
as a '`model`'.
The process of loading a net is also now a subroutine call, for consistency with the
tensor creation operations:
```fortran
type(torch_model) :: model
call torch_model_load(model, 'path_to_saved_net.pt')
call torch_model_forward(model, in_tensors, out_tensors)
```
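
The renaming extends to the other `torch_module_*` procedures as well. As a hedged example
(assuming the clean-up routine follows the same pattern; check the API reference for the
exact name in your version), code that freed the net explicitly would change as follows:
```fortran
! Assumed rename, following the module-to-model pattern described above:
! old:  call torch_module_delete(model)
call torch_model_delete(model)
```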

### n_inputs is no longer required

Previously, when you called the forward method on a net, you had to specify the number of tensors
in the array of inputs:
```fortran
call torch_model_forward(model, in_tensors, n_inputs, out_tensors)
```

Now all that is supplied to the forward call is the model, and the arrays of input and
output tensors. No need for `n_inputs` (or `n_outputs`)!
```fortran
call torch_model_forward(model, in_tensors, out_tensors)
```
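
If a model takes several inputs, the input array is simply sized to match; there is still no
count argument. A hedged sketch (the declarations are illustrative, only the forward call
comes from the text above):
```fortran
! Hedged sketch: two input tensors, one output tensor, no count arguments.
type(torch_tensor), dimension(2) :: in_tensors   ! e.g. a model with two inputs
type(torch_tensor), dimension(1) :: out_tensors  ! output arrays are covered in the next section
call torch_model_forward(model, in_tensors, out_tensors)
```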
### outputs now need to be an array of torch_tensors

Previously you passed an array of `torch_tensor` types as inputs, and a single `torch_tensor`
to the forward method:
```fortran
type(torch_tensor), dimension(n_inputs) :: input_tensor_array
type(torch_tensor) :: output_tensor
...
call torch_model_forward(model, input_tensor_array, n_inputs, output_tensor)
```

Now both the inputs and the outputs need to be an array of `torch_tensor` types:
```fortran
type(torch_tensor), dimension(n_inputs) :: input_tensor_array
type(torch_tensor), dimension(n_outputs) :: output_tensor_array
...
call torch_model_forward(model, input_tensor_array, output_tensor_array)
```
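
Putting the changes together, an updated end-to-end inference call might look like the sketch
below. This is illustrative rather than code from this commit: the `use ftorch` line, the array
sizes, and the saved-net path are assumptions; the calls themselves follow the updated
interfaces described above.
```fortran
! Illustrative sketch combining the updated API calls described above.
! Module name, array sizes and the net path are assumptions for demonstration.
program updated_inference_sketch
  use ftorch   ! assumed module name
  implicit none

  integer, parameter :: n_inputs = 1, n_outputs = 1
  real, dimension(5), target :: input_data
  real, dimension(5), target :: output_data
  integer :: tensor_layout(1) = [1]

  type(torch_model) :: model
  type(torch_tensor), dimension(n_inputs)  :: in_tensors
  type(torch_tensor), dimension(n_outputs) :: out_tensors

  input_data = 1.0

  ! Create tensors with the subroutine form (tensor as the first argument)
  call torch_tensor_from_array(in_tensors(1), input_data, tensor_layout, torch_kCPU)
  call torch_tensor_from_array(out_tensors(1), output_data, tensor_layout, torch_kCPU)

  ! Load the net as a torch_model and run it; no n_inputs or n_outputs arguments
  call torch_model_load(model, 'path_to_saved_net.pt')
  call torch_model_forward(model, in_tensors, out_tensors)

  ! The result is available in output_data via the wrapped output tensor
  print *, output_data
end program updated_inference_sketch
```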