Add a test for ORC write with more than one stripe #11743

Open
Wants to merge 2 commits into branch-25.02
14 changes: 14 additions & 0 deletions integration_tests/src/main/python/orc_write_test.py
@@ -91,6 +91,20 @@ def test_write_round_trip(spark_tmp_path, orc_gens, orc_impl):
data_path,
conf={'spark.sql.orc.impl': orc_impl, 'spark.rapids.sql.format.orc.write.enabled': True})

@pytest.mark.parametrize('orc_gen', [pytest.param(boolean_gen, marks=pytest.mark.xfail(reason='https://github.com/NVIDIA/spark-rapids/issues/11736'))], ids=idfn)
Collaborator:

In my understanding, we also need to test other kinds of data, not just the ones that failed?
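
One way to broaden the coverage might look like the following sketch, which keeps the xfailed boolean case and adds a couple of passing types; int_gen and string_gen are assumed to be available from the suite's data_gen helpers, in place of the single-entry parametrize above:

@pytest.mark.parametrize('orc_gen', [
    int_gen,
    string_gen,
    # boolean stays xfailed until the linked issue is fixed
    pytest.param(boolean_gen, marks=pytest.mark.xfail(reason='https://github.com/NVIDIA/spark-rapids/issues/11736')),
], ids=idfn)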

Collaborator:

I would like to see more data types so that we are not concerned about what other errors we might be seeing. I would also like to see parquet tests with more than one row group. I am fine if that is a follow-on issue too.
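
A minimal sketch of what such a parquet follow-on could look like, assuming the same test helpers as this file and the roughly 20,000-row parquet split that cudf applies (noted in the comment further down); the 40000 row count, int_gen, and the write-enable conf name are illustrative assumptions:

@pytest.mark.parametrize('parquet_gen', [int_gen], ids=idfn)
@allow_non_gpu(*non_utc_allow)
def test_write_more_than_one_row_group_round_trip(spark_tmp_path, parquet_gen):
    gen_list = [('_c0', parquet_gen)]
    data_path = spark_tmp_path + '/PARQUET_DATA'
    assert_gpu_and_cpu_writes_are_equal_collect(
        # 40,000 rows in a single partition should exceed cudf's ~20,000-row
        # parquet split and so produce more than one row group
        lambda spark, path: gen_df(spark, gen_list, 40000, num_slices=1).write.parquet(path),
        lambda spark, path: spark.read.parquet(path),
        data_path,
        conf={'spark.rapids.sql.format.parquet.write.enabled': True})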

@pytest.mark.parametrize('orc_impl', ["native", "hive"])
@allow_non_gpu(*non_utc_allow)
def test_write_more_than_one_stripe_round_trip(spark_tmp_path, orc_gen, orc_impl):
gen_list = [('_c0', orc_gen)]
data_path = spark_tmp_path + '/ORC_DATA'
assert_gpu_and_cpu_writes_are_equal_collect(
# Generate a large enough dataframe to produce more than one stripe
# Preferably use only one partition to avoid splitting the data
lambda spark, path: gen_df(spark, gen_list, 12800, num_slices=1).write.orc(path),
Collaborator:

Question: Where does the 12800 number come from? Do we know the output will be greater than 64 MB (the default ORC stripe size) for all the datagens you tested?

Collaborator Author:

This number comes from my experiments.

Collaborator:

In general CUDF will split the data by rows and by size.

https://github.com/rapidsai/cudf/blob/f54c1a5ad34133605d3b5b447d9717ce7eb6dba0/cpp/include/cudf/io/orc.hpp#L585-L587

https://github.com/rapidsai/cudf/blob/f54c1a5ad34133605d3b5b447d9717ce7eb6dba0/cpp/include/cudf/io/orc.hpp#L41-L42

In parquet the row-count split is 20,000, but for ORC it is 1,000,000. I am not sure how 12,800 boolean values produce more than one stripe. I would really like to understand this better, because I would expect that to be nowhere close to the row group count we expect to cause multiple slices.
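
One way to settle this would be to count the stripes in the files the test actually writes. A sketch, assuming the pyorc package and its Reader.num_of_stripes attribute are available (not part of this PR):

import os
import pyorc

def count_stripes(orc_dir):
    # Sum stripe counts across all ORC part files in the output directory
    total = 0
    for name in os.listdir(orc_dir):
        if name.endswith('.orc'):
            with open(os.path.join(orc_dir, name), 'rb') as f:
                total += pyorc.Reader(f).num_of_stripes
    return total

# e.g. assert count_stripes(data_path + '/GPU') > 1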

lambda spark, path: spark.read.orc(path),
data_path,
conf={'spark.sql.orc.impl': orc_impl, 'spark.rapids.sql.format.orc.write.enabled': True})

@pytest.mark.parametrize('orc_gen', orc_write_odd_empty_strings_gens_sample, ids=idfn)
@pytest.mark.parametrize('orc_impl', ["native", "hive"])
def test_write_round_trip_corner(spark_tmp_path, orc_gen, orc_impl):