
Add options for eval and gradient required #78 - rebase #103

Merged — 8 commits into main, Apr 8, 2024

Conversation

jatkinson1000
Member

This is an updated version of #78, rebased onto main after the GPU changes by @jwallwork23.

@ElliottKasoar's comment on the original PR:

Resolves #73

Adds flags to all(?) functions that operate on tensors (tensor creation, model loading, forward pass) to optionally disable autograd, which should improve inference performance.

Also adds a similar flag to set evaluation mode for the loaded model.

  • Evaluation mode

    • Contrary to my initial comments in Loaded TorchScript missing no_grad context #73, testing shows that evaluation mode does appear to be preserved, both between saving and loading TorchScript and when applied to the loaded model.
    • In most cases evaluation mode is therefore likely to already be set, but I think it's useful to have the option to change it, particularly if FTorch may be extended to facilitate training (Training functionality #22).
  • NoGradMode

    • Enabling or disabling gradients is more complicated, as it is defined via a context manager that only defines the behaviour within its own scope, so it seems necessary to enable/disable gradients before every code block that operates on tensors (similar to the Python equivalent, `with torch.no_grad():`).
  • InferenceMode

    • No changes are currently included, but it would be good to support InferenceMode too eventually, as it should provide further performance benefits over NoGradMode.
    • However, it has stricter requirements, and the mode was only added (as a beta) in PyTorch 1.9, so we would need to be careful if we want to support older versions.
  • Model freezing

    • No changes are currently included, and this is less directly applicable to the main FTorch library, although there are still interactions, e.g. freezing the model can allow InferenceMode to be enabled when loading it.
    • Freezing is currently the "default" when tracing in pt2ts.py, but not for scripting, despite potentially improving performance.
    • Freezing appears to (sometimes) introduce numerical errors when saving then reloading (differences ~10^-6), and also seems to cause issues when loading with Forpy.
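On the Python side, the two modes the new flags correspond to look roughly like this (a minimal sketch using a toy `torch.nn.Linear` as a stand-in for a model loaded via FTorch):

```python
import torch

# Hypothetical stand-in for a loaded TorchScript model;
# the real FTorch calls load a saved TorchScript file instead.
model = torch.nn.Linear(3, 2)

# Evaluation mode: fixes layers such as dropout and batch norm.
# This is the behaviour the new eval-mode flag toggles.
model.eval()

x = torch.ones(1, 3)

# Autograd is only disabled inside the context manager's scope,
# which is why the guard must wrap every block of tensor operations.
with torch.no_grad():
    y = model(x)

# No graph was recorded, so the output does not require gradients.
assert y.requires_grad is False
```

Outside the `with` block, autograd behaves normally again, which matches the observation above that the setting does not persist beyond its own scope.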

(For a more general explanation of autograd/evaluation mode, see autograd mechanics.)
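For comparison, the stricter InferenceMode mentioned above can be sketched as follows (again with a toy placeholder model; requires PyTorch >= 1.9):

```python
import torch

model = torch.nn.Linear(3, 2).eval()  # hypothetical placeholder model
x = torch.ones(1, 3)

# InferenceMode disables both gradient recording and tensor version
# counting, so it can be faster than no_grad, but tensors created
# inside it can never be used in autograd afterwards.
with torch.inference_mode():
    z = model(x)

assert z.requires_grad is False
```

That extra restriction on the resulting tensors is the "stricter requirements" trade-off noted above.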

Note: I've also removed the old, commented-out torch_from_blob function.
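The freezing behaviour discussed above (the default when tracing in pt2ts.py, but not when scripting) can be sketched like this; the model here is a hypothetical placeholder:

```python
import torch

model = torch.nn.Linear(3, 2).eval()  # must be in eval mode before freezing
x = torch.ones(1, 3)

# Scripting alone does not freeze; torch.jit.freeze inlines parameters
# and attributes into the graph as constants, which can improve
# performance and enables optimisations such as InferenceMode on load.
scripted = torch.jit.script(model)
frozen = torch.jit.freeze(scripted)

# Outputs should agree, though small (~1e-6) differences have been
# observed after saving and reloading frozen models.
assert torch.allclose(frozen(x), scripted(x), atol=1e-6)
```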

@jatkinson1000
Member Author

Hi @ElliottKasoar after the work @jwallwork23 did there were some conflicts with your PR in #78.

I have done my best to rebase your work, but please could you take a look and see if everything seems in order to you?

@TomMelt You originally reviewed and approved this PR, but a quick re-review would be appreciated. Since @jwallwork23 restructured the order of functions in the files and added additional arguments in the same places as @ElliottKasoar, some of the merge conflicts got a little hairy, so I may have missed the odd thing!

@jatkinson1000 jatkinson1000 changed the title Add opt options rebase Add options for eval and gradient required #78 - rebase Mar 28, 2024
@jatkinson1000
Member Author

jatkinson1000 commented Mar 28, 2024

Before we merge we need to:

  • Add some docs for these new options
  • Add an example showing the use of these new options?
    Or perhaps not, given that we are not training yet and just use the defaults?

Contributor

@jwallwork23 jwallwork23 left a comment


Nice, seems like useful functionality.

Just a couple of comments that we need to address either by following my recommendations or by updating the n_c_and_cpp example to use the new API.

Review threads: src/ctorch.cpp (outdated; resolved)
Contributor

@ElliottKasoar ElliottKasoar left a comment


Thanks! Looks great, other than a couple of places where I think some code may have been lost in the rebase (unless intentional?).

Review threads: src/ftorch.fypp (one resolved; one outdated, resolved)
Contributor

@jwallwork23 jwallwork23 left a comment


One minor point on formatting but otherwise looks good to me!

Review thread: src/ctorch.cpp (outdated; resolved)
Contributor

@ElliottKasoar ElliottKasoar left a comment


Looks good, thanks @jatkinson1000!

@jatkinson1000
Member Author

jatkinson1000 commented Apr 8, 2024

Added a note to the FAQ about eval and no_grad settings.
A detailed example will perhaps wait until these are used as part of #111 since for now they are the sensible defaults for running inference.

I will also move some of @ElliottKasoar's points in his original comment to separate issues for future consideration.

Squashing and merging shortly.

@jatkinson1000 jatkinson1000 merged commit 4b451d7 into main Apr 8, 2024
4 checks passed
@jatkinson1000 jatkinson1000 deleted the add-opt-options-rebase branch April 8, 2024 15:41
Merging this pull request closed: Loaded TorchScript missing no_grad context (#73)