
Ensure ufunc/function dispatching is narrow #977

Merged: 1 commit merged into nv-legate:branch-23.07 from strict-dispatching on Jun 29, 2023

Conversation

@seberg (Contributor) commented Jun 27, 2023

To allow potential wrapping e.g. via projects like pint, cuNumeric should only accept types in the __array_function__ and __array_ufunc__ protocols that it clearly understands. Right now, these are superclasses (if there was subclassing in theory) and NumPy arrays. CuPy arrays would also fall into this category.

Ping @manopapad: since we talked about this briefly yesterday, I thought I would just look into it. Is it correct that cuNumeric currently only really knows about NumPy arrays (and objects that don't have __array_function__/__array_ufunc__), i.e. no CuPy?

I would lean towards it being OK to just fail if you don't implement this. Dask has a FutureWarning for its NumPy fallback path in __array_function__, but it looks a bit like that may be important for other fallbacks, and I think it's fine.
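
For context, here is a minimal sketch of what "narrow" dispatching in these protocols looks like, following the general pattern from NEP 18. This is not cuNumeric's implementation; `_Wrapper` and `_KNOWN_TYPES` are hypothetical names used only for illustration, and the unwrapping is deliberately simplistic (top-level arguments only, no `out=` handling).

```python
import numpy as np

# Hypothetical names, for illustration only; not part of cuNumeric.
_KNOWN_TYPES = (np.ndarray,)

class _Wrapper:
    """Toy array container holding a plain NumPy array."""

    def __init__(self, data):
        self._data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # Handle the call only if every participating type is one we
        # clearly understand; otherwise defer, so that an outer wrapper
        # (e.g. a pint Quantity) can still handle the call.
        if not all(issubclass(t, (_Wrapper,) + _KNOWN_TYPES) for t in types):
            return NotImplemented
        # Toy unwrapping: only top-level array arguments are unwrapped.
        unwrapped = tuple(a._data if isinstance(a, _Wrapper) else a for a in args)
        return _Wrapper(func(*unwrapped, **kwargs))

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        scalars = (bool, int, float, complex)
        if not all(isinstance(x, (_Wrapper,) + _KNOWN_TYPES + scalars) for x in inputs):
            return NotImplemented
        unwrapped = tuple(x._data if isinstance(x, _Wrapper) else x for x in inputs)
        return _Wrapper(getattr(ufunc, method)(*unwrapped, **kwargs))
```

When every implementation involved returns NotImplemented, NumPy raises a TypeError, which is the "just fail" behavior discussed above.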

@rapids-bot (bot) commented Jun 27, 2023

Pull requests from external contributors require approval from a nv-legate organization member with write permissions or greater before CI can begin.

@manopapad added the category:improvement label ("PR introduces an improvement and will be classified as such in release notes") on Jun 27, 2023
@manopapad requested a review from @bryevdv on June 28, 2023
@manopapad (Contributor) left a comment


LGTM, giving a chance to @bryevdv to comment if he wants

@manopapad (Contributor) commented

> Is it correct that cuNumeric currently only really knows about NumPy arrays (and objects that don't have __array_function__/__array_ufunc__), i.e. no CuPy?

Right, cuNumeric can attach directly to the buffers of NumPy arrays. Today that doesn't mean much, since we'll likely have to make a copy right away to move the data to the GPU, and/or partition the array for the needs of the next operation, but the ability is theoretically there to use the memory directly. And in any case, the updates will be automatically mirrored on the original NumPy array.

This also simplifies how we handle non-array values where an array would be expected. In that case we just call np.array/np.asarray on it, then attach to the result. This way we automatically handle things like scalars and lists.

IIRC cupy doesn't like it when np.array is called directly on its arrays, because that is a silent conversion with heavy performance implications (they would rather users do a more explicit conversion). Similar to NumPy, we could probably attach to cupy arrays directly (by taking advantage of the CUDA Array Interface), but we're not doing that today.
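
A rough sketch of that coercion path, only to illustrate the behavior described above; the helper name is made up and nothing here reflects cuNumeric's internals:

```python
import numpy as np

def _coerce_to_numpy(value):
    # Illustrative-only helper: turn an arbitrary input into a NumPy array
    # that could then be "attached" to. np.asarray covers scalars and lists
    # automatically.
    return np.asarray(value)

_coerce_to_numpy(3.0)        # 0-d float64 array
_coerce_to_numpy([1, 2, 3])  # shape-(3,) integer array
# _coerce_to_numpy(cupy.arange(3))  # recent CuPy raises TypeError here rather
#                                   # than silently copying device memory to the
#                                   # host; it expects .get()/cupy.asnumpy()
```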

@bryevdv (Contributor) left a comment


LGTM

Commit message:
To allow potential wrapping e.g. via projects like pint, cuNumeric should only accept types in the `__array_function__` and `__array_ufunc__` protocols that it clearly understands. Right now, these are superclasses (if there was subclassing in theory) and NumPy arrays. CuPy arrays would also fall into this category.
@manopapad merged commit a1d26ea into nv-legate:branch-23.07 on Jun 29, 2023
@seberg deleted the strict-dispatching branch on June 30, 2023