Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this issue exists on the latest version of pandas.
I have confirmed this issue exists on the main branch of pandas.
Reproducible Example
Below is a toy DataFrame example with 10M rows and 20 columns. The CSV write speed differs significantly depending on whether the MultiIndex is dropped first, even though the resulting CSV files are essentially the same.
The benchmark for PyArrow is also attached for reference. Notice that the CSV generated by PyArrow additionally double-quotes the column names and column values.
import pandas as pd
import pyarrow as pa
import pyarrow.csv as csv
import time
NUM_ROWS = 10000000
NUM_COLS = 20
# Example Multi-Index DataFrame
df = pd.DataFrame(
    {
        f"col_{col_idx}": range(col_idx * NUM_ROWS, (col_idx + 1) * NUM_ROWS)
        for col_idx in range(NUM_COLS)
    }
)
df = df.set_index(["col_0", "col_1"], drop=False)
# Timing Operation A: write with the MultiIndex still attached
start_time = time.time()
df.to_csv("file_A.csv", index=False)
end_time = time.time()
print(f"Operation A time: {end_time - start_time} seconds")
# Timing Operation B: drop the MultiIndex first, then write
start_time = time.time()
df_reset = df.reset_index(drop=True)
df_reset.to_csv("file_B.csv", index=False)
end_time = time.time()
print(f"Operation B time: {end_time - start_time} seconds")
# Timing Operation C: write via PyArrow for comparison
start_time = time.time()
table = pa.Table.from_pandas(df)
csv.write_csv(table, 'file_C.csv')
end_time = time.time()
print(f"Operation C time: {end_time - start_time} seconds")
Output is as below.
Operation A time: 1080.621277809143 seconds
Operation B time: 45.777050733566284 seconds
Operation C time: 15.710699558258057 seconds
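At a small scale, the claim that operations A and B produce essentially the same file can be checked directly. This is a sketch, not part of the original repro: the tiny frame and temp paths stand in for the 10M-row example above.

```python
import os
import tempfile

import pandas as pd

# Tiny stand-in for the 10M-row frame (column names mirror the repro above)
df = pd.DataFrame({f"col_{i}": range(i * 100, (i + 1) * 100) for i in range(3)})
df = df.set_index(["col_0", "col_1"], drop=False)

with tempfile.TemporaryDirectory() as tmp:
    path_a = os.path.join(tmp, "file_A.csv")  # MultiIndex still attached
    path_b = os.path.join(tmp, "file_B.csv")  # index dropped first
    df.to_csv(path_a, index=False)
    df.reset_index(drop=True).to_csv(path_b, index=False)
    csv_a = open(path_a).read()
    csv_b = open(path_b).read()

assert csv_a == csv_b  # both write paths yield byte-identical CSV
```

So the slowdown in operation A buys nothing: the index values it spends time on never reach the output when index=False.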
A significant amount of time on that line is spent getting the index values, only to be ignored because self.nlevels is 0 when index=False. In addition, there may be performance improvements possible when index=True too. Further investigation and PRs to fix this are welcome!
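Until the writer skips materializing index values when index=False, the reset_index trick from operation B can be wrapped in a small helper. The helper name below is hypothetical, not pandas API; it simply packages the workaround shown in the repro.

```python
import os
import tempfile

import pandas as pd


def to_csv_noindex(frame, path, **kwargs):
    # Hypothetical workaround helper: drop the (Multi)Index up front so the
    # CSV writer never builds index values that index=False would discard.
    frame.reset_index(drop=True).to_csv(path, index=False, **kwargs)


# Usage on a tiny frame with a non-trivial index:
demo = pd.DataFrame({"a": [1, 2], "b": [3, 4]}).set_index("a", drop=False)
with tempfile.TemporaryDirectory() as tmp:
    out_path = os.path.join(tmp, "demo.csv")
    to_csv_noindex(demo, out_path)
    contents = open(out_path).read()  # "a,b\n1,3\n2,4\n"
```

The output is identical to calling to_csv(index=False) directly; only the time spent assembling the unused index is saved.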
Installed Versions
INSTALLED VERSIONS
commit : d9cdd2e
python : 3.10.8.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.266-178.365.amzn2.x86_64
Version : #1 SMP Fri Jan 12 12:52:04 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 1.23.5
pytz : 2024.1
dateutil : 2.8.2
setuptools : 69.1.1
pip : 24.0
Cython : None
pytest : 8.0.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.22.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.6.1
gcsfs : None
matplotlib : 3.8.3
numba : 0.57.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 15.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : 2024.6.1
scipy : 1.12.0
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
Prior Performance
No response