PERF: Huge regression in groupby + sum in dtype_backend == 'pyarrow' #53737
Labels: Needs Triage (Issue that has not been reviewed by a pandas team member), Performance (Memory or execution speed performance)
Pandas version checks

- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the latest version of pandas.
- [x] I have confirmed this issue exists on the main branch of pandas.
Reproducible Example
There's a huge regression (~70x) in groupby + sum when using pyarrow as the backend. The file used, wiki100.zip, is a small subset of the wiki clickstream dataset for March 2022; the zip contains a single parquet file. The regression is the same when working with a larger subset, where the pyarrow-backed run takes minutes compared to seconds for a numpy-backed Series.
P.S. Resetting the index didn't change anything.
P.S.2 The regression stays the same if I run
instead.
Installed Versions
pandas : 2.0.2
numpy : 1.24.3
pytz : 2023.3
dateutil : 2.8.2
setuptools : 67.3.3
pip : 23.0.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.14.0
pandas_datareader: None
bs4 : None
bottleneck : 1.3.7
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : 0.57.0
numexpr : 2.8.4
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 12.0.1
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
Prior Performance
No response