To help improve the end-to-end efficiency of compaction and similar operations (e.g. hash-equality joins, dataset filters, hash partitioning, etc.), and based on current benchmarks showing Daft I/O to be more performant than PyArrow and S3FS for S3 Parquet file reads, we would like to add a Daft-native DataFrame reader for Iceberg. Here, "Daft-native" refers to the desired end state of an implementation that relies on no intermediate conversion through a Ray Dataset or any other intermediate format.