
functorch 0.2.0

@zou3519 released this on 05 Jul 13:54

functorch 0.2.0 release notes

Inspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch, such as computing per-sample gradients, running ensembles of models on a single machine, and efficiently computing Jacobians and Hessians.
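As a rough illustration of how these transforms compose, here is a minimal sketch (not from the release notes; the toy loss_fn and tensor shapes are made up for illustration) that computes per-sample gradients by composing grad with vmap:

```python
import torch
from functorch import vmap, grad

# A toy per-example loss; `weights` is a hypothetical parameter tensor.
def loss_fn(weights, example, target):
    prediction = example @ weights
    return ((prediction - target) ** 2).mean()

weights = torch.randn(3)
examples = torch.randn(8, 3)   # batch of 8 examples
targets = torch.randn(8)

# grad(loss_fn) gives the gradient w.r.t. weights for a single example;
# vmap vectorizes it over the batch dimension of examples and targets.
per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0, 0))(weights, examples, targets)
print(per_sample_grads.shape)  # torch.Size([8, 3])
```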

We’re excited to announce functorch 0.2.0 with a number of improvements and new experimental features.

Caveats

functorch's Linux binaries are compatible with all PyTorch 1.12.0 binaries except the PyTorch 1.12.0 cu102 binary; functorch will raise an error if it is used with an incompatible PyTorch binary. This is due to a bug in PyTorch (pytorch/pytorch#80489); in previous versions of PyTorch, it was possible to build a single Linux binary for functorch that worked with all PyTorch Linux binaries. This will be fixed in the next PyTorch (and functorch) minor release.

Highlights

Significantly improved coverage

We significantly improved coverage for functorch.jvp (our forward-mode autodiff API) and other APIs that rely on it (functorch.{jacfwd, hessian}).
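As a quick sketch of what these APIs look like (the function f and shapes below are made up for illustration):

```python
import torch
from functorch import jvp, jacfwd, hessian

def f(x):
    return x.sin().sum()

x = torch.randn(5)
tangent = torch.randn(5)

# Forward-mode autodiff: the output of f and the Jacobian-vector product with `tangent`.
out, jvp_out = jvp(f, (x,), (tangent,))

# Forward-mode Jacobian, and the Hessian built on top of it.
jac = jacfwd(f)(x)    # shape: (5,)
hess = hessian(f)(x)  # shape: (5, 5)
```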

(Prototype) functorch.experimental.functionalize

Given a function f, functionalize(f) returns a new function with the mutations removed (with caveats). This is useful for constructing traces of PyTorch functions without in-place operations. For example, you can use make_fx(functionalize(f)) to construct a mutation-free trace of a PyTorch function. To learn more, please see the documentation.
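A minimal sketch of that pattern, assuming a toy function with in-place ops (the function f below is made up for illustration):

```python
import torch
from functorch import make_fx
from functorch.experimental import functionalize

def f(x):
    # In-place mutations that we want removed from the trace.
    y = x.clone()
    y.add_(1)
    y.mul_(2)
    return y

# make_fx(functionalize(f)) produces an FX graph in which the in-place
# ops are replaced by their out-of-place equivalents.
traced = make_fx(functionalize(f))(torch.randn(3))
print(traced.graph)
```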

Windows support

There are now official functorch pip wheels for Windows.
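Assuming a supported Python and a PyTorch 1.12.0 install, the Windows wheels should be installable with pip in the usual way:

```
pip install functorch
```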

Changelog

Note that this is not an exhaustive list of changes; for example, changes to pytorch/pytorch can fix bugs in functorch or improve our transform coverage. Here we include user-facing changes that were committed to pytorch/functorch.

  • Added functorch.experimental.functionalize (#236, #720, and more)
  • Added support for Windows (#696)
  • Fixed vmap support for torch.norm (#708)
  • Added disable_autograd_tracking to make_functional variants. This is useful if you’re not using torch.autograd (#701); see the sketch after this list
  • Fixed a bug in the neural tangent kernels tutorial (#788)
  • Improved vmap over indexing with Tensors (#777, #862)
  • Fixed vmap over torch.nn.functional.mse_loss (#860)
  • Added an error on unsupported combinations of torch.autograd.functional and functorch transforms (#849)
  • Improved docs on the limitations of functorch transforms (#879)
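
For the disable_autograd_tracking item above, here is a minimal sketch of how it can be used (the Linear module below is just an illustrative example):

```python
import torch
from functorch import make_functional

model = torch.nn.Linear(3, 3)

# With disable_autograd_tracking=True, the returned parameters have
# requires_grad=False, avoiding unnecessary autograd bookkeeping when
# gradients are computed only via functorch transforms (grad, vmap, ...).
func_model, params = make_functional(model, disable_autograd_tracking=True)
print(params[0].requires_grad)  # False
```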