
Support opt_einsum in xr.dot #7764

Closed
dcherian opened this issue Apr 18, 2023 · 7 comments · Fixed by #8373

Comments

@dcherian
Contributor

dcherian commented Apr 18, 2023

Is your feature request related to a problem?

Shall we support opt_einsum as an optional backend for xr.dot?

opt_einsum.contract is a drop-in replacement for np.einsum, so this monkey-patch works today:

xr.core.duck_array_ops.einsum = opt_einsum.contract
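A minimal sketch of why the monkey-patch works: opt_einsum.contract accepts the same subscripts-and-operands call signature as np.einsum, so either callable can sit behind the same hook. Here opt_einsum is treated as optional, falling back to numpy when it is not installed:

```python
import numpy as np

# Hedged sketch: opt_einsum.contract is call-compatible with np.einsum,
# so either function can be bound to the same name. Fall back to numpy
# when opt_einsum is not installed.
try:
    import opt_einsum

    einsum = opt_einsum.contract
except ImportError:
    einsum = np.einsum

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
result = einsum("ij,jk->ik", a, b)  # matrix product via either backend
```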

Describe the solution you'd like

Add a backend kwarg with options "numpy" and "opt_einsum", with the default being "numpy"

Describe alternatives you've considered

We could create a new package but it seems a bit silly.

Additional context

No response

@TomNicholas
Member

I support this (it seems just like what we do for bottleneck), but maybe don't use the word backend for the kwarg again 😅 In fact, since we're only talking about one function, could our kwarg literally point to that function? i.e.

def dot(..., einsum_func=np.einsum):
    ....
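The einsum_func idea can be sketched as a toy function (hypothetical, not xr.dot's real signature): any callable matching np.einsum's interface is accepted, which makes the swap to opt_einsum.contract a one-argument change.

```python
import numpy as np

# Hypothetical minimal sketch of the einsum_func suggestion: the caller
# may pass any function with np.einsum's (subscripts, *operands) interface.
def dot(subscripts, *operands, einsum_func=np.einsum):
    return einsum_func(subscripts, *operands)

a = np.ones((2, 3))
b = np.ones((3, 4))
out = dot("ij,jk->ik", a, b)  # each entry sums three ones
```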

@rabernat
Contributor

Is there ever a case where it would be preferable to use numpy if opt_einsum were installed? If not, I would propose that, like bottleneck, we just automatically use it if available.

@keewis
Collaborator

keewis commented Apr 27, 2023

We still want to be able to explicitly switch it off, as we did for bottleneck (mostly for debugging purposes), so the kwarg / option would be good to have either way.

@dcherian
Contributor Author

numpy.einsum has some version of opt_einsum implemented under the optimize kwarg. IIUC this is False by default because it adds overhead to small problems (comment):

The complete overhead for computing a path (parsing the input, finding the path, and organizing that data) with default options is about 150us. Looks like einsum takes a minimum of 5-10us to call as a reference. So the worst-case scenario would be that the optimization overhead makes einsum 30x slower. Personally I'd go for turning optimization off by default and then revisiting if someone tackles the parsing issue to reduce the overhead.
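The trade-off described above can be seen directly with numpy's optimize kwarg; np.einsum_path reports the contraction order that the up-front path search would buy:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((8, 8))
b = rng.random((8, 8))
c = rng.random((8, 8))

# optimize=False (numpy's default) contracts operands left to right with
# no path search; optimize=True pays the path-finding overhead up front.
plain = np.einsum("ij,jk,kl->il", a, b, c)
opt = np.einsum("ij,jk,kl->il", a, b, c, optimize=True)

# einsum_path exposes the chosen contraction order and a cost report
# without performing the contraction itself.
path, report = np.einsum_path("ij,jk,kl->il", a, b, c, optimize="optimal")
```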

@dcherian
Contributor Author

I think I agree with use_opt_einsum: bool

@shoyer
Member

shoyer commented Apr 27, 2023

Allowing for explicitly passing a function matching the einsum interface is certainly more flexible than a boolean or enum argument, so @TomNicholas's suggestion of einsum_func=np.einsum is the version I would suggest.

The overhead from optimizing contraction paths is probably very small relative to the overhead of Xarray in general, so I would support setting optimize=True by default in Xarray, and/or using opt-einsum automatically if it is installed. JAX always uses opt-einsum (opt-einsum is actually a hard dependency) and I have never heard any complaints.

@dcherian
Contributor Author

so I would support setting optimize=True by default in Xarray

Apparently not all backends support optimize and so we can't set it by default. :(
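The behaviour the thread converged on (prefer opt_einsum when installed, keep a boolean escape hatch, and avoid passing optimize= to numpy) can be sketched roughly as follows; the function and kwarg placement here are illustrative, not xarray's actual API (the merged change exposes use_opt_einsum as an option rather than a dot kwarg):

```python
import numpy as np

def dot(subscripts, *operands, use_opt_einsum=True):
    # Rough sketch of the merged behaviour (names hypothetical): prefer
    # opt_einsum.contract when installed and enabled; otherwise call
    # np.einsum *without* optimize=, since not every array backend that
    # implements einsum accepts that kwarg.
    if use_opt_einsum:
        try:
            import opt_einsum

            return opt_einsum.contract(subscripts, *operands)
        except ImportError:
            pass
    return np.einsum(subscripts, *operands)

a = np.arange(4).reshape(2, 2)
b = np.arange(4).reshape(2, 2)
```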

dcherian added a commit to dcherian/xarray that referenced this issue Oct 25, 2023
dcherian added a commit that referenced this issue Oct 28, 2023
* Use `opt_einsum` by default if installed.

Closes #7764
Closes #8017

* docstring update

* _

* _

Co-authored-by: Maximilian Roos <[email protected]>

* Update xarray/core/computation.py

Co-authored-by: Maximilian Roos <[email protected]>

* Fix docs?

* Add use_opt_einsum option.

* mypy ignore

* one more test ignore

* Disable navigation_with_keys

* remove intersphinx

* One more skip

---------

Co-authored-by: Maximilian Roos <[email protected]>