
Interchange between two dataframe types which use the same native storage representation #48

Open · rgommers opened this issue Jul 22, 2021 · 4 comments


@rgommers
Member

This was brought up by @jorisvandenbossche: if two libraries both use the same library for in-memory data storage (e.g. buffers/columns are backed by NumPy or Arrow arrays), can we avoid iterating through each buffer on each column by directly handing over that native representation?

This is a similar question to https://github.com/data-apis/dataframe-api/blob/main/protocol/dataframe_protocol_summary.md#what-is-wrong-with-to_numpy-and-to_arrow - but it's not the same; there is one important difference. The key point of that FAQ entry is that it's consumers, not producers, who should rely on NumPy/Arrow. Having a to_numpy() method somewhere is at odds with that. Here is an alternative:

  1. A Column instance may define __array__ or __arrow_array__ if and only if the column itself is backed by a single NumPy array or a single Arrow array.
  2. DataFrame and Buffer instances must not define __array__ or __arrow_array__.

(1) is motivated by wanting a simple shortcut like this:

    # inside a `from_dataframe` constructor on the consumer side
    columns = {}
    for name in df.column_names():
        col = df.get_column_by_name(name)
        # say my library natively uses Arrow:
        if hasattr(col, '__arrow_array__'):
            # apparently we're both using Arrow, take the shortcut
            columns[name] = col.__arrow_array__()
        else:
            ...  # continue parsing dtypes, null values, etc.

However, there are additional constraints. For __array__ this also implies (see the sketch after this list):

  • the column has either no missing values or uses NaN or a sentinel value for nulls (and this needs checking first in the code above - otherwise the consumer may still misinterpret the data)
  • this does not work for categorical or string dtypes - those are not representable by a single array
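
As a rough illustration of the extra consumer-side checking this implies before trusting __array__ (a sketch against the draft protocol API; the helper name and the exact enum values are assumptions, not part of the spec):

    import numpy as np

    def ndarray_shortcut(col):
        """Return an ndarray for `col` only when the shortcut is safe, else None."""
        if not hasattr(col, "__array__"):
            return None
        # describe_null kind: assuming 0 = non-nullable, 1 = NaN, 2 = sentinel,
        # 3/4 = bit/byte validity mask (mask-based nulls can't live inside one array)
        null_kind, _ = col.describe_null
        if null_kind not in (0, 1, 2):
            return None
        # dtype kind: assuming 21 = string, 23 = categorical - not a single array
        if col.dtype[0] in (21, 23):
            return None
        return np.asarray(col)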

For __arrow_array__ I cannot think of issues right away. Of course the producer should also be careful to ensure that there are no differences in behavior due to adding one of these methods. For example, if there's a dataframe with a nested dtype that is supported by Arrow but not by the protocol, calling __dataframe__() should raise because of the unsupported dtype.

The main pro of doing this is:

  • A potential performance gain in the dataframe conversion (TBD how significant)

The main con is:

  • Extra code complexity to get that performance gain, because now there are two code paths on the consumer side and both must be equivalent.

My impression is: this may be useful to do for __arrow_array__, but I don't think it's a good idea for __array__, because the gain is fairly limited and there are too many constraints or ways to get it wrong (e.g. describe_null must always be checked before using __array__). If __array__ is to be added, then maybe at the Buffer level, where it plays the same role as __dlpack__.
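
A rough sketch of that last idea, assuming a producer whose buffers are plain CPU allocations; the class is hypothetical, and only bufsize, ptr and __dlpack__ mirror the protocol's Buffer:

    import numpy as np

    class CPUBuffer:
        """Hypothetical producer-side Buffer backed by a contiguous NumPy block."""

        def __init__(self, arr: np.ndarray):
            self._arr = arr  # 1-D, contiguous, CPU memory

        @property
        def bufsize(self) -> int:
            # buffer size in bytes, as in the protocol's Buffer
            return self._arr.nbytes

        @property
        def ptr(self) -> int:
            # pointer to the start of the buffer, as in the protocol's Buffer
            return self._arr.__array_interface__["data"][0]

        def __dlpack__(self):
            # standard zero-copy handover of the raw block
            return self._arr.__dlpack__()

        def __array__(self, dtype=None):
            # the optional shortcut: a GPU-backed buffer would simply omit this
            return np.asarray(self._arr, dtype=dtype)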

@kkraus14
Collaborator

I haven't been in the discussions lately, but I'm drive-by commenting since I have some strong opinions about this and was one of the main people voicing concerns about the to_numpy and to_arrow stuff:

> The main pro of doing this is:
>
>   • A potential performance gain in the dataframe conversion (TBD how significant)

Is this performance gain to just eliminate the control flow code of constructing say Arrow's containers around memory that we'd be passing around zero copy anyway? If this was implemented in C/C++ (which I imagine most Python libraries would end up doing) then I'd argue this becomes negligible anyway.

> For __arrow_array__ I cannot think of issues right away.

Arrow Array objects are backed by Arrow Buffer objects, which are an abstract interface that can be backed by CPU, GPU, or future devices' memory. This wouldn't make any guarantees about where the memory is, only what the container is, and possibly give a standard API for working with the container (though most Arrow APIs will currently throw exceptions or segfault if you try to use them with GPU memory).

My 2c: we should keep the interchange protocol limited to a memory layout description and focus on ensuring we can make the memory interchange zero copy and then doing our best to ensure libraries can use it as efficiently as possible.

@rgommers
Member Author

Thanks for the input, @kkraus14.

> Is this performance gain to just eliminate the control flow code of constructing say Arrow's containers around memory that we'd be passing around zero copy anyway? If this was implemented in C/C++ (which I imagine most Python libraries would end up doing) then I'd argue this becomes negligible anyway.

Yes indeed, just about control flow. And I agree it'd be a very minor gain.

> Arrow Array objects are backed by Arrow Buffer objects, which are an abstract interface that can be backed by CPU, GPU, or future devices' memory. This wouldn't make any guarantees about where the memory is, only what the container is, and possibly give a standard API for working with the container (though most Arrow APIs will currently throw exceptions or segfault if you try to use them with GPU memory).

I'm actually not quite sure how to interpret this bit. Why would these guarantees be needed (if __arrow_array__ is used by both consumer and producer, it seems like this should "just work")?

> My 2c: we should keep the interchange protocol limited to a memory layout description and focus on ensuring we can make the memory interchange zero copy and then doing our best to ensure libraries can use it as efficiently as possible.

This does sound like the better option to me too - it's less complexity overall.

@kkraus14
Collaborator

> I'm actually not quite sure how to interpret this bit. Why would these guarantees be needed (if __arrow_array__ is used by both consumer and producer, it seems like this should "just work")?

Because you're not guaranteed that everything downstream of a consumer is just using high-level dataframe / PyArrow code. Someone could have an extension written in C/C++ that assumes buffers are in CPU memory, for example.

So then we still need to inspect the device flag to determine whether to copy the data to the CPU, and presumably call a PyArrow-specific API to get a new PyArrow array backed by CPU memory. It adds a bunch of complexity for basically zero gain.
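
For concreteness, a sketch of that extra inspection on the consumer side, using the protocol's Buffer.__dlpack_device__() (the helper name is hypothetical; 1 is DLPack's CPU device type):

    def arrow_shortcut_is_safe(col) -> bool:
        """Only take the __arrow_array__ shortcut when the data lives in CPU memory."""
        if not hasattr(col, "__arrow_array__"):
            return False
        data_buffer, _ = col.get_buffers()["data"]
        device_type, _ = data_buffer.__dlpack_device__()
        return device_type == 1  # kDLCPU; anything else means copy or fall back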

@jorisvandenbossche
Member

>> My 2c: we should keep the interchange protocol limited to a memory layout description and focus on ensuring we can make the memory interchange zero copy and then doing our best to ensure libraries can use it as efficiently as possible.
>
> This does sound like the better option to me too - it's less complexity overall.

I opened #279 as an alternative to this issue that achieves the same goal. That proposal is only about a memory layout, without being tied to a specific library (i.e. pyarrow in this case).

It wouldn't yet support GPU (since the Arrow PyCapsule interface doesn't support that yet), but GPU dataframe interchange objects can simply omit those methods for now to indicate that they don't support this.
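
A rough consumer-side sketch of the #279 approach, assuming a pyarrow version that can import objects exposing the Arrow PyCapsule interface (the function name here is made up):

    import pyarrow as pa

    def from_dataframe_via_capsule(obj):
        # memory-layout-level handoff; any producer exposing the capsule works,
        # it does not have to be backed by pyarrow itself
        if hasattr(obj, "__arrow_c_stream__"):
            return pa.table(obj)
        raise TypeError("object does not expose the Arrow C stream interface")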
