handling overflow for the integer types #3520

Open · kmitchener opened this issue Sep 16, 2022 · 6 comments
Labels: bug (Something isn't working), enhancement (New feature or request)

Comments

@kmitchener (Contributor)


I'm opening this issue to get consensus on what the desired behavior should be when numeric types overflow in DataFusion. All tests below were run on master as of the time this issue was created.

Current situation for overflow:

| DataType | Test SQL | DataFusion (release) | Postgres |
|----------|----------|----------------------|----------|
| Int8   | `select 127::tinyint + 1::tinyint;` | wraps | - |
| Int16  | `select 32767::smallint + 1::smallint;` | wraps | ERROR: smallint out of range |
| Int32  | `select 2147483647::int + 1::int;` | wraps | ERROR: integer out of range |
| Int64  | `select 9223372036854775807::bigint + 1::bigint;` | wraps | ERROR: bigint out of range |
| UInt8  | `select 255::tinyint unsigned + 1::tinyint unsigned;` | wraps | - |
| UInt16 | `select 65535::smallint unsigned + 1::smallint unsigned;` | wraps | - |
| UInt32 | `select 4294967295::int unsigned + 1::int unsigned;` | wraps | - |
| UInt64 | `select power(2,64)::bigint unsigned;` | wraps | - |
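For context on the "wraps" column: Rust integer arithmetic wraps in release builds (overflow checks are compiled out unless `overflow-checks` is enabled), which is consistent with the results above. A minimal illustration using the standard library's explicit wrapping ops:

```rust
fn main() {
    // Explicit wrapping arithmetic, matching the "wraps" results above.
    assert_eq!(i8::MAX.wrapping_add(1), i8::MIN);   // 127 + 1   -> -128
    assert_eq!(i16::MAX.wrapping_add(1), i16::MIN); // 32767 + 1 -> -32768
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
    assert_eq!(u8::MAX.wrapping_add(1), 0);         // 255 + 1   -> 0
    assert_eq!(u64::MAX.wrapping_add(1), 0);
    println!("all additions wrapped");
}
```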

Current situation for attempting to cast an oversized number:

| DataType | Test SQL | DataFusion (release) | Postgres |
|----------|----------|----------------------|----------|
| Int8   | `select 128::tinyint;` | null | - |
| Int16  | `select 32768::smallint;` | null | ERROR: smallint out of range |
| Int32  | `select 2147483648::int;` | null | ERROR: integer out of range |
| Int64  | `select 9223372036854775808::bigint;` | null | ERROR: bigint out of range |
| UInt8  | `select 256::tinyint unsigned;` | null | - |
| UInt16 | `select 65536::smallint unsigned;` | null | - |
| UInt32 | `select 4294967296::int unsigned;` | null | - |
| UInt64 | `select 18446744073709551615::bigint unsigned;` | null, even for values less than 2^64; some weird behavior here | - |
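The null results above match a "safe" cast that swallows out-of-range values; the error-returning alternative corresponds to a fallible conversion. A sketch of both styles using plain Rust std conversions (not DataFusion's actual cast path, which goes through arrow-rs):

```rust
use std::convert::TryFrom;

fn main() {
    let v: i64 = 128; // out of range for Int8 / tinyint

    // Error-style cast: fails loudly, like Postgres.
    match i8::try_from(v) {
        Ok(n) => println!("fits: {n}"),
        Err(e) => eprintln!("tinyint out of range: {e}"),
    }

    // Null-style cast: out-of-range becomes None (SQL NULL), the current behavior.
    let as_null: Option<i8> = i8::try_from(v).ok();
    assert!(as_null.is_none());
}
```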

I think casting a "too big" number and overflowing during arithmetic should behave the same way.

My proposal would be to make 2 changes (sketched in code below):

  • Return an error on overflow, rather than wrapping around silently (principle of least surprise). I believe most databases throw errors in case of overflow, and users would be surprised if DataFusion silently returned "bad" data.
  • Make cast operations on out-of-range data (over- or under-sized) return an error.

My proposals are based on years of Oracle and Postgres use, though; I have no Spark experience. What other thoughts and opinions are out there? How does Spark behave in these cases?
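A minimal sketch of the first proposed change, using Rust's checked arithmetic; the function name and error type are illustrative, not DataFusion API (the real change would live in the arrow-rs compute kernels):

```rust
// Hypothetical helper: error-on-overflow addition for Int32 values.
fn add_or_error(a: i32, b: i32) -> Result<i32, String> {
    a.checked_add(b)
        .ok_or_else(|| format!("integer out of range: {a} + {b}"))
}

fn main() {
    assert_eq!(add_or_error(1, 2), Ok(3));
    // 2147483647 + 1 errors instead of wrapping.
    assert!(add_or_error(i32::MAX, 1).is_err());
}
```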


@kmitchener added the `enhancement` (New feature or request) label on Sep 16, 2022
@liukun4515 (Contributor)

@kmitchener @alamb @andygrove
I think this is a question of what the behavior on overflow should be.
We can do more investigation and then decide on the default behavior.
We could also make the behavior configurable via a config or option (see the sketch below).
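A purely hypothetical sketch of what such an option could look like; neither the enum nor the function exists in DataFusion:

```rust
// Hypothetical config knob: not an existing DataFusion option.
#[derive(Clone, Copy)]
enum OverflowBehavior {
    Wrap,  // today's behavior: silent two's-complement wrap
    Error, // proposed default: fail like Postgres
}

fn add_i32(a: i32, b: i32, mode: OverflowBehavior) -> Result<i32, String> {
    match mode {
        OverflowBehavior::Wrap => Ok(a.wrapping_add(b)),
        OverflowBehavior::Error => a
            .checked_add(b)
            .ok_or_else(|| "integer out of range".to_string()),
    }
}

fn main() {
    assert_eq!(add_i32(i32::MAX, 1, OverflowBehavior::Wrap), Ok(i32::MIN));
    assert!(add_i32(i32::MAX, 1, OverflowBehavior::Error).is_err());
}
```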

@alamb (Contributor)

alamb commented Sep 17, 2022

I don't have a huge preference -- when in doubt, I think we have tried to follow the Postgres semantics, for consistency.

In terms of checking for overflows, etc., I would also say we should try to avoid slowing things down too much, if possible.

@alamb (Contributor)

alamb commented Sep 17, 2022

> My proposal would be to make 2 changes:

I think those proposals are very reasonable

@liukun4515 (Contributor)

> My proposals are based on years of Oracle and Postgres use, though; I have no Spark experience. What other thoughts and opinions are out there? How does Spark behave in these cases?

For cast, if we convert a value to another type and it overflows, the default result is NULL.

For the mathematical operations, we should add an option for that.

I think either behavior is OK, but we should make them consistent.

@kmitchener

In Spark, if we don't set the special parameter (`spark.sql.ansi.enabled`), Spark will not throw an error and just returns the wrapped value.

You can try it.

Spark doc references:
https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html
https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html#arithmetic-operations

cc @alamb

@alamb (Contributor)

alamb commented Sep 20, 2022

An option to control the behavior also seems reasonable to me (though I suspect it would add some non-trivial complexity to the implementation, so perhaps we should only do this if/when a user has a compelling use case 🤔).

@findepi (Member)

findepi commented Aug 23, 2024

@kmitchener thanks for creating this issue!
btw can we perhaps consider labelling it as a bug too?

@alamb added the `bug` (Something isn't working) label on Aug 25, 2024