Feedback request for providing configurable UDF functions #10744
Comments
I think options 1 and 3 would be straightforward. You could even potentially implement

```rust
pub fn to_timestamp_safe(args: Vec<Expr>) -> Expr {
    ...
}
```

directly in your application (rather than in the core of DataFusion). Another crazy thought might be to implement a rewrite pass (e.g. …)
I think the key thing to figure out is "will safe-mode to_timestamp be part of the DataFusion core?" Maybe it is time to make a …
I think it is possible to extend the … For the expression API, we can either …
I prefer the third one. Also, there are many … We can have …
It is an interesting question; we can think about implementing functions based on other DBs in the first place. For example, we usually follow Postgres, DuckDB, and others. We can have …
Most of the functions have the same behavior across different DBs; we could also implement the differing functions in one crate.
@andygrove are there UDFs already in the Comet project that handle Spark-specific behaviour? If so, is that a separate project, or embedded in Comet currently? (I haven't looked at that codebase myself since the initial drop.)
The one issue with moving this functionality into a Spark module is that, for that to really be valid, the formats would have to be Spark-compatible, which they currently are not. I do not have the spare time in the near future to implement a parser to do that.
@Omega359 so far we have been implementing custom … I think we need to have the discussion of whether it makes sense to upstream these into the core DataFusion project, or whether we publish a …
We are porting Spark parsing logic as part of Comet. |
Thank you for chiming in. While I wouldn't mind Spark compatibility, it really isn't the focus of this request, as I've already converted all the Spark expressions and function usages to DataFusion-compatible ones. It's the general system behaviour that I would like to address: being able to switch from a database-focused perspective (fail fast) to a processing-engine one (nominally lenient: return null) for some or all of the UDFs. If the general consensus is to separate out this desired behaviour, then I would think a separate crate might be the best approach. However, from searching the issues here there seems to have been some talk in the past of how to mirror the behaviour of other databases, but that also includes SQL syntax, so it's not quite as simple as having a DB-specific crate full of UDFs and calling it a day.
I have read the context now and understand that this is about … Isn't this just a case of adding a new flag to the session context that UDFs can choose to use when deciding whether to return null or throw an error?
That would be nice ... except UDFs don't have a way to access the session context currently :( Options 2 and 3 provide that via different mechanisms.
I wonder if we could take a page from what @jayzhan211 is implementing in #10560 and go with a trait … So we could implement something like

```rust
let expr = to_timestamp(lit("2021-01-01"))
    // set the to_timestamp mode to "safe"
    .safe();
```

I realize that this would require changing the call sites, so maybe it isn't viable.
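The builder idea above can be sketched in plain Rust without any DataFusion types. This is a minimal, hypothetical illustration of the pattern only: `TimestampExpr`, its `input` field, and `is_safe` are invented names standing in for the real `Expr`-based API.

```rust
// Hypothetical sketch of the builder-style `.safe()` API (not
// DataFusion's actual types): the wrapper carries a flag that the
// function implementation would later consult.
#[derive(Debug, Clone)]
pub struct TimestampExpr {
    input: String, // stand-in for a DataFusion `Expr`
    safe: bool,    // return NULL instead of erroring on parse failure
}

pub fn to_timestamp(input: &str) -> TimestampExpr {
    TimestampExpr { input: input.to_string(), safe: false }
}

impl TimestampExpr {
    /// Builder method: switch this call to 'safe' (null-on-error) mode.
    pub fn safe(mut self) -> Self {
        self.safe = true;
        self
    }

    /// Read back the mode (illustrative accessor).
    pub fn is_safe(&self) -> bool {
        self.safe
    }
}
```

As the comment notes, the cost of this design is that every call site wanting safe mode must be changed to append `.safe()`.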
After thinking about this a fair bit, the builder approach like what @jayzhan211 did with aggregate functions seems to be the best way forward on this feature, imho. While I do like the idea of separate crate(s) for mirroring functionality from other systems, I think that is a much, much larger project and encompasses a lot more functionality than this specific feature entails. Putting this feature into core doesn't, I believe, limit DataFusion from later extracting this and other similar behaviour 'traits' into system-specific crates. I'll start work on this and see how it works out. If it does, I'll add safe support via a trait to the to_timestamp*, to_date, and to_unixtime functions. If there are other UDFs that could benefit from having a 'safe' mode (return null on error), please let me know and I'll see about adding safe mode to those as well. Thank you everyone for your feedback and guidance on this feedback request! 👍
We just merged the aggregate builder in #10560 -- I am quite happy with how it turned out, in case you want to take a friendly look |
After attempting to implement the builder approach, it became apparent to me that it would touch too many things and really won't work well without changing the signature of ScalarUDFImpl anyway. It works for the aggregate functions because the functions defined in the AggregateUDFImpl trait have arguments through which the additional information (distinct, sort, ordering, etc.) is provided to the UDF implementation. In the case of ScalarUDFImpl, that is not the case. After some more thought, I think the cleanest approach may be to add a get_config_options function to the SimplifyInfo trait and add a … er, scratch that. Onto the next idea :/
Is your feature request related to a problem or challenge?
During work on adding a 'safe' mode to the to_timestamp and to_date UDF functions, I've come across an issue that I would like feedback on before proceeding.
The feature
Currently, for timestamp and date parsing, if a source string cannot be parsed using any of the provided chrono formats, DataFusion will return an error. This is normal for a database-type solution; however, it is not ideal for a system that is parsing billions of dates in batches, some of which are human-entered. Systems such as Spark default to a null value for anything that cannot be parsed, and this feature enables a mode ('safe', to mirror the name and behaviour of CastOptions) that allows the to_timestamp* and to_date UDFs to behave the same way (return null on error).
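The two behaviors can be contrasted in a std-only Rust sketch (illustrative, not DataFusion code — integer parsing stands in for chrono-format timestamp parsing): strict mode propagates the error, while safe mode maps any failure to a null-like `None`, mirroring Spark's semantics.

```rust
use std::num::ParseIntError;

// Strict (fail-fast) mode: any unparseable input aborts the batch
// with an error, as DataFusion's to_timestamp currently does.
fn parse_strict(s: &str) -> Result<i64, ParseIntError> {
    s.trim().parse::<i64>()
}

// Safe mode: unparseable input becomes None (i.e. NULL), as Spark does.
fn parse_safe(s: &str) -> Option<i64> {
    s.trim().parse::<i64>().ok()
}
```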
The problem
Since UDF functions have no context provided, and there isn't a way I know of to statically get access to config, to add the above-mentioned functionality I resorted to using a new constructor function to allow the UDF to switch behaviour:
Using the alternative 'safe' mode for these functions is as simple as …
Unfortunately, this only affects SQL queries: any calls to the to_timestamp(args: Vec<Expr>) function will not use the new definition as registered in the function registry. This is because that function, and every other function like it, uses a static singleton instance that is initialized only via a ::new() call, and there is no way that I can see to replace that instance.
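The static-singleton problem can be sketched with std types alone (the names here are illustrative stand-ins, not DataFusion's actual internals): once the free function lazily builds its instance via `::new()`, nothing registered elsewhere can swap it out.

```rust
use std::sync::{Arc, OnceLock};

// Illustrative stand-in for a scalar UDF implementation struct.
#[derive(Debug)]
struct ToTimestampFunc {
    safe: bool,
}

impl ToTimestampFunc {
    fn new() -> Self {
        // The singleton is always built with the default (strict) mode.
        Self { safe: false }
    }
}

static INSTANCE: OnceLock<Arc<ToTimestampFunc>> = OnceLock::new();

// The free function every caller uses: it always returns the same
// `::new()`-built instance, and there is no hook to replace it with
// a 'safe'-mode instance from a function registry.
fn to_timestamp_udf() -> Arc<ToTimestampFunc> {
    Arc::clone(INSTANCE.get_or_init(|| Arc::new(ToTimestampFunc::new())))
}
```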
Describe the solution you'd like
I see a few possible solutions to this:
```rust
ctx.udf("to_timestamp").unwrap().call(args)
```

instead of the to_timestamp() function any time 'safe' mode is required. This is less than ideal, imho, as it can lead to confusion and unintuitive behavior.

Any opinions, suggestions, and critiques would be welcome.
Describe alternatives you've considered
No response
Additional context
No response