Spark 3.1 has changed the behaviour of the CSV reader. It now decides whether to stop parsing at the delimiter based on the value of `unescapedQuoteHandling`.
spark-rapids needs to ensure that reading CSV tables through the plugin honours the setting for `unescapedQuoteHandling`.
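For reference, a minimal sketch of how the option is supplied to Spark's CPU CSV reader, which the plugin would need to mirror. The input path and the chosen value here are illustrative only; the accepted values come from the underlying univocity parser (see SPARK-33566 linked below).

```scala
import org.apache.spark.sql.SparkSession

object UnescapedQuoteExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("unescapedQuoteHandling example")
      .master("local[*]")
      .getOrCreate()

    // Spark 3.1 forwards this option to the univocity parser to control how an
    // unescaped quote inside a quoted field is handled. Values include
    // STOP_AT_CLOSING_QUOTE, BACK_TO_DELIMITER, STOP_AT_DELIMITER, SKIP_VALUE
    // and RAISE_ERROR; STOP_AT_DELIMITER is the default.
    val df = spark.read
      .option("header", "true")
      .option("unescapedQuoteHandling", "STOP_AT_DELIMITER")
      .csv("/path/to/input.csv") // hypothetical input path

    df.show()
    spark.stop()
  }
}
```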
Just for information, our CSV parser does not match Spark's behaviour all that closely. We should test it, but we might just end up documenting an incompatibility.
This arises from an audit of apache/spark@433ae9064f.
More info in the JIRA: https://issues.apache.org/jira/browse/SPARK-33566