
[Feature Request][Spark] Support for categorical/enum data types in delta #3885

Open
3 of 7 tasks
tunayokumus opened this issue Nov 15, 2024 · 0 comments
Labels
enhancement New feature or request

Comments

@tunayokumus

Feature request

Which Delta project/connector is this regarding?

- [x] Spark
- [ ] Standalone
- [ ] Flink
- [ ] Kernel
- [ ] Other (fill in here)

Overview

Support for a categorical / enum / dictionary data type in Delta tables and Spark DataFrames. This appears to be generally possible at the Parquet level, which already supports dictionary encoding. Example data types in other ecosystems: Arrow (Dictionary), pandas (Categorical), DuckDB (ENUM).

Motivation

This would save space for large tables with low-cardinality columns, while keeping the schema simple: no need to create a separate dimension table and a PK-FK relationship just for a couple of value mappings.
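The space-saving claim is easy to demonstrate in pandas (numbers vary by data, so no exact figures are asserted here; this is just an in-memory illustration of the same encoding idea):

```python
import pandas as pd

# 300k rows drawn from only three distinct values: a typical
# low-cardinality column such as a status or country code.
s_obj = pd.Series(["red", "green", "blue"] * 100_000, dtype="object")
s_cat = s_obj.astype("category")

# Object dtype stores one Python string per row; categorical stores the
# three unique values once plus a small integer code per row.
print("object:     ", s_obj.memory_usage(deep=True), "bytes")
print("categorical:", s_cat.memory_usage(deep=True), "bytes")
```

The categorical representation is a fraction of the size, which is exactly the benefit a native Delta categorical type would carry through to Spark without a separate dimension table.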

Further details

Willingness to contribute

The Delta Lake Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature?

- [ ] Yes. I can contribute this feature independently.
- [ ] Yes. I would be willing to contribute this feature with guidance from the Delta Lake community.
- [x] No. I cannot contribute this feature at this time.
@tunayokumus tunayokumus added the enhancement New feature or request label Nov 15, 2024