
Add DataFrame.insert_columns #231

Closed
wants to merge 6 commits

Conversation

MarcoGorelli
Contributor

Example from polars of where this makes a difference:

Inserting sequentially:

In [19]: df = pl.LazyFrame({"a": [1, 1, 2], 'b': [4,5,6]})

In [20]: df = df.with_columns(a_plus_one=pl.col('a')+1)

In [21]: df = df.with_columns(b_plus_one=pl.col('b')+1)

In [22]: print(df.explain())
 WITH_COLUMNS:
 [[(col("b")) + (1)].alias("b_plus_one")]
   WITH_COLUMNS:
   [[(col("a")) + (1)].alias("a_plus_one")]
    DF ["a", "b"]; PROJECT */2 COLUMNS; SELECTION: "None"

Inserting in parallel, as a_plus_one and b_plus_one are independent:

In [25]: df = pl.LazyFrame({"a": [1, 1, 2], 'b': [4,5,6]})

In [26]: df = df.with_columns(
    ...:     a_plus_one=pl.col('a')+1,
    ...:     b_plus_one=pl.col('b')+1,
    ...: )

In [27]: print(df.explain())
 WITH_COLUMNS:
 [[(col("a")) + (1)].alias("a_plus_one"), [(col("b")) + (1)].alias("b_plus_one")]
  DF ["a", "b"]; PROJECT */2 COLUMNS; SELECTION: "None"
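
The semantics being argued for can be modelled in plain Python. This is a toy sketch (my own illustration, not polars internals): a batched `with_columns` evaluates every expression against a snapshot of the frame taken *before* the call, so independent columns can be computed in any order, or in parallel.

```python
def with_columns(frame: dict, **exprs) -> dict:
    """Toy batched with_columns: all expressions see only the
    columns that existed before this call (the snapshot)."""
    snapshot = dict(frame)  # pre-call view shared by every expression
    new = dict(frame)
    for name, fn in exprs.items():
        new[name] = fn(snapshot)  # order of iteration doesn't matter
    return new

df = {"a": [1, 1, 2], "b": [4, 5, 6]}
df = with_columns(
    df,
    a_plus_one=lambda f: [x + 1 for x in f["a"]],
    b_plus_one=lambda f: [x + 1 for x in f["b"]],
)
# a_plus_one and b_plus_one are independent, so either order works
```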

@MarcoGorelli MarcoGorelli marked this pull request as draft August 27, 2023 12:12
@MarcoGorelli
Contributor Author

needs rebasing onto #239

@MarcoGorelli MarcoGorelli marked this pull request as ready for review August 28, 2023 09:27
Member

@rgommers rgommers left a comment

Inserting multiple columns at once seems clearly useful to me. The "columns must be independent" is a little concerning though - it seems like a fuzzy boundary and I'm not even sure how independent is defined. Whether two columns are truly independent may be very hard to establish within the context of a single function. What if I have a function like:

def a_func(df, col1, col2):
    return df.insert_columns(col1, col2)

and col1 and col2 are related through some earlier operation one or more levels up in the call stack?

It seems to me like this restriction should be dropped, and that instead the implementation is responsible for correctly serializing the operations if it detects that the columns are related somehow.

Review thread on spec/API_specification/dataframe_api/dataframe_object.py (outdated, resolved)
@MarcoGorelli
Contributor Author

and col1 and col2 are related through some earlier operation one or more levels up in the call stack?

If you really don't know, then you can just call DataFrame.insert_columns twice:

def a_func(df, col1, col2):
    return df.insert_columns(col1).insert_columns(col2)

Anyway, I've rephrased it to "insertion order is not guaranteed and may vary across implementations" — so inserting one column and then another which is a transformation of the one you just inserted is not supported.

If you do insert multiple columns at the same time, then it's up to you (the user) to know that the insertion can happen in an "embarrassingly parallel" manner.

@MarcoGorelli
Contributor Author

and that instead the implementation is responsible for correctly serializing the operations if it detects that the columns are related somehow.

How, with a try-except? I'd rather keep such magic fallbacks out, but thanks for the suggestion

Comment on lines +206 to +217
If inserting multiple columns, then the order in which they are inserted
is not guaranteed and may vary across implementations. For example, the
following

.. code-block:: python

   new_column_1 = df.get_column_by_name('a').rename('b')
   new_column_2 = (new_column_1 + 2).rename('c')
   df.insert_columns([new_column_1, new_column_2])

is not supported, as `new_column_2` is derived from `new_column_1`, which may
not be part of the dataframe if `new_column_2` is inserted first.
Collaborator

As of now our Column objects are immutable and have no concept of being derived from. Could you explain why this wouldn't be supported?

Contributor Author

Say you start with

a b
1 4
1 5
2 6

and you want to add two new columns:

  • c: which is b + 1
  • d: which is a + c

These two new columns can't be added in an arbitrary order: 'c' needs to be added first, and then 'd'.

If you try inserting 'd' before 'c', then you could get "KeyError: column not found, 'c' not part of dataframe"
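
A toy illustration of that failure mode (plain Python, not library code): evaluating both expressions against a snapshot of the original frame makes 'd' fail, because 'c' is not yet part of the snapshot; inserting sequentially works.

```python
df = {"a": [1, 1, 2], "b": [4, 5, 6]}

# Parallel ("snapshot") semantics: both expressions see only {a, b}.
snapshot = dict(df)
try:
    c = [x + 1 for x in snapshot["b"]]
    d = [x + y for x, y in zip(snapshot["a"], snapshot["c"])]  # KeyError: 'c'
except KeyError as exc:
    print(f"column not found: {exc}")

# Sequential insertion works: 'c' is materialised before 'd' reads it.
df["c"] = [x + 1 for x in df["b"]]
df["d"] = [x + y for x, y in zip(df["a"], df["c"])]
# df["d"] == [6, 7, 9]
```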

Contributor Author

e.g.

In [21]: df = pl.DataFrame({'a':[1,1,3], 'b':[4,5,6]})

In [22]: df.with_columns(c=pl.col('a')+1, d=pl.col('c')+pl.col('a'))
---------------------------------------------------------------------------
ColumnNotFoundError: c

Error originated just after this operation:
DF ["a", "b"]; PROJECT */2 COLUMNS; SELECTION: "None"

Collaborator

Okay, that makes sense as to why a lazy implementation can't use columns derived from other columns being inserted. Assuming that isn't allowed, would inserting the columns in the order they are passed be possible?

Contributor Author

Eager too; the above example was eager.

Should be possible, yes (we could always call .get_columns_by_name internally and reorder). I was just thinking about how to get the concept across without saying "independent".

Contributor

The need for a namespace.col seems natural for lazy implementations like Polars, but not so much for eager implementations as far as I can tell. So I do think this is a lazy vs eager thing.

I'm getting the sense that conflating lazy and eager into a single API is going to be the source of some tension (and likely a bad UX).

Collaborator

Also, as another point of reference, Ibis does support this:

import ibis
import polars as pl

con = ibis.polars.connect()

df = pl.DataFrame({'a':[1,1,3], 'b':[4,5,6]})
idf = con.create_table("df", df)

c = (idf['a'] + 1).name('c')
d = (c + idf['a']).name('d')

idf.select(['a', 'b', c, d])

Where it resolves it to:

Selection[r0]
  selections:
    a: r0.a
    b: r0.b
    c: r0.a + 1
    d: r0.a + 1 + r0.a

I'm guessing this is a design difference between Ibis and Polars-lazy: Ibis allows selecting columns, i.e. idf['a'], which returns an expression Column type of object bound to its owning table, whereas Polars expressions aren't bound to a table?

Contributor Author

Does Ibis insert the columns in parallel in select? If not, I think that's probably the underlying reason

Collaborator

Ibis generates SQL (or DataFrame code) and hands that generated code to the connected backend, so it's an implementation detail of the backend. But in general, Ibis binds its expressions to Tables, which allows writing code like I did above that isn't dependent on a column named "c". On the other hand, the Polars code using expressions you posted isn't bound to a table, so it is dependent on having a column named "c".
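
A minimal sketch of the bound-vs-unbound distinction (hypothetical classes, not actual Ibis or polars code): a *bound* column carries its defining computation, so an expression built from it never needs to look anything up by name; an *unbound* expression is just a recipe keyed on column names.

```python
class BoundColumn:
    """Ibis-style: the column carries its values/computation with it."""
    def __init__(self, values):
        self.values = values

    def __add__(self, other):
        other_vals = (
            other.values
            if isinstance(other, BoundColumn)
            else [other] * len(self.values)
        )
        return BoundColumn([x + y for x, y in zip(self.values, other_vals)])

table = {"a": BoundColumn([1, 1, 3]), "b": BoundColumn([4, 5, 6])}
c = table["a"] + 1   # inlines the computation, like Ibis's r0.a + 1
d = c + table["a"]   # fine: 'c' never has to exist in the table by name
# d.values == [3, 3, 7]

# Polars-style unbound expression: resolved by name at evaluation time,
# so it requires a column literally named "c" to exist in the frame.
def unbound_col(name):
    return lambda frame: frame[name]

def expr_d(frame):
    return [x + y for x, y in zip(unbound_col("c")(frame), unbound_col("a")(frame))]
# evaluating expr_d against {"a": ..., "b": ...} raises KeyError: 'c'
```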

I think this is again a discussion more about expressions and Columns and what we do in the standard regarding them than this specific function at this point. Should we move discussion back to #229?

Contributor Author

I'll work on updating the Expressions proposal this week and next, but just wanted to note that that will make the independence condition very easy to state:

all expressions passed to insert_columns must have root names already present in the dataframe

I'll define as part of the proposal exactly what I mean by "root names"
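
The condition could be sketched as a tree walk (my reading of the comment above, using hypothetical `Col`/`BinOp` expression classes, not the actual spec): collect the column names at the leaves of an expression, and accept the insertion only if every root name already exists in the dataframe.

```python
from dataclasses import dataclass

@dataclass
class Col:
    """Leaf expression: a reference to a column by name."""
    name: str

@dataclass
class BinOp:
    """Binary operation over two sub-expressions (or literals)."""
    left: object
    right: object

def root_names(expr) -> set:
    """Collect the column names at the leaves of an expression tree."""
    if isinstance(expr, Col):
        return {expr.name}
    if isinstance(expr, BinOp):
        return root_names(expr.left) | root_names(expr.right)
    return set()  # literals have no roots

df_columns = {"a", "b"}
ok = BinOp(Col("a"), 1)          # roots {'a'}: subset of df_columns, allowed
bad = BinOp(Col("c"), Col("a"))  # roots {'c', 'a'}: 'c' not in df, rejected
assert root_names(ok) <= df_columns
assert not (root_names(bad) <= df_columns)
```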

Review thread on spec/API_specification/dataframe_api/dataframe_object.py (outdated, resolved)
@MarcoGorelli
Contributor Author

I've articulated some goals I'd like to see for the project here: #244

Currently, inserting multiple independent columns is not a zero-cost abstraction for the Polars implementation; hence, this PR.

I wasn't expecting this to be controversial - let's discuss it on the next call

@MarcoGorelli
Contributor Author

MarcoGorelli commented Sep 13, 2023

The need for a namespace.col seems natural for lazy implementations like Polars, but not so much for eager implementations as far as I can tell. So I do think this is a lazy vs eager thing.

It's a readability thing too

Compare

    plant_statistics = plant_statistics.filter(
        (
            plant_statistics.get_column_by_name("sepal_width")
            > plant_statistics.get_column_by_name("sepal_height")
        )
        | (plant_statistics.get_column_by_name("species") == "setosa")
    )

with

    plant_statistics = plant_statistics.filter(
        (col("sepal_width") > col("sepal_height")) | (col("species") == "setosa")
    )

To be honest, the former looks to me like we've taken the worst parts of pandas and made them even uglier.

I'm getting the sense that conflating lazy and eager into a single API is going to be the source of some tension (and likely a bad UX).

Indeed - here's what I'd like to discuss tomorrow #249

@MarcoGorelli
Contributor Author

closing in favour of #269

4 participants