Clean Frames is a data cleansing library for the Apache Spark SQL module. It provides a type class for data cleansing.
The current stable version is 0.3.0, cross-built against Scala 2.11 and 2.12 and Apache Spark 2.1.0 through 2.4.3.
If you're using SBT, add the following line to your build file:
libraryDependencies += "io.funkyminds" %% "cleanframes" % "2.4.3_0.3.0"
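For a project built from scratch, a complete build.sbt could look like the sketch below; the provided scope for spark-sql and the exact version choices are assumptions to adapt to your environment:

// minimal build.sbt sketch; versions and the "provided" scope are assumptions
scalaVersion := "2.12.8"

libraryDependencies ++= Seq(
  // Spark itself is usually supplied by the cluster at runtime
  "org.apache.spark" %% "spark-sql"   % "2.4.3" % "provided",
  "io.funkyminds"    %% "cleanframes" % "2.4.3_0.3.0"
)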
If you use Maven instead, add the following dependency:
<dependency>
  <groupId>io.funkyminds</groupId>
  <artifactId>cleanframes_2.12</artifactId>
  <version>2.4.3_0.3.0</version>
</dependency>
Assuming a DataFrame is loaded from a CSV file with the following content:
1,true,1.0
lmfao,true,2.0
3,false,3.0
4,true,yolo data
5,true,5.0
and a domain model is defined as:
case class Example(col1: Option[Int], col2: Option[Boolean], col3: Option[Float])
the library cleans the data to:
Example(Some(1), Some(true), Some(1.0f)),
Example(None, Some(true), Some(2.0f)),
Example(Some(3), Some(false), Some(3.0f)),
Example(Some(4), Some(true), None),
Example(Some(5), Some(true), Some(5.0f))
with minimal code:
import cleanframes.instances.all._
import cleanframes.syntax._
frame
.clean[Example]
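A complete, minimal program might look like the sketch below; the local master, application name, file path, and the trailing .as[Example] conversion are assumptions made for illustration:

import org.apache.spark.sql.SparkSession

import cleanframes.instances.all._
import cleanframes.syntax._

// the domain model from above
case class Example(col1: Option[Int], col2: Option[Boolean], col3: Option[Float])

object CleanFramesDemo extends App {
  val spark = SparkSession.builder()
    .master("local[*]")          // assumption: local run for demonstration
    .appName("cleanframes-demo") // hypothetical application name
    .getOrCreate()

  import spark.implicits._

  // Load the CSV shown above; every column arrives as a string.
  val frame = spark.read
    .csv("example.csv")           // hypothetical path to the file above
    .toDF("col1", "col2", "col3") // name the columns to match the case class

  // Replace malformed cells with None instead of losing whole rows.
  val cleaned = frame
    .clean[Example]
    .as[Example]

  cleaned.show()
}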
We would like to live in a world where data quality is superb, but only unicorns are perfect.
By default, Apache Spark nulls out an entire row if it contains any invalid value.
Calling plain Spark on the same data:
frame
.as[Example]
would give a Dataset with the content:
Example(Some(1), Some(true), Some(1.0f)),
Example(None, None, None),
Example(Some(3), Some(false), Some(3.0f)),
Example(None, None, None),
Example(Some(5), Some(true), Some(5.0f))
As you can see, the data in the second and fourth rows is lost because of single malformed cells. Such behaviour might not be acceptable in some domains.
To keep the valid cells and discard only the invalid ones, the Spark SQL API could be called like this:
import org.apache.spark.sql.functions.{lit, lower, not, trim, when}
import org.apache.spark.sql.types.{FloatType, IntegerType}

val cleaned = frame
  .withColumn(
    "col1",
    // keep numeric values only; malformed cells become null
    when(
      not(frame.col("col1").isNaN),
      frame.col("col1")
    ).cast(IntegerType)
  )
  .withColumn(
    "col2",
    // normalize the text and treat anything other than "true" as false
    when(
      trim(lower(frame.col("col2"))) === "true",
      lit(true)
    ).otherwise(false)
  )
  .withColumn(
    "col3",
    // keep numeric values only; malformed cells become null
    when(
      not(frame.col("col3").isNaN),
      frame.col("col3")
    ).cast(FloatType)
  )
cleanframes is a small library that generates boilerplate like the above for you when you call:
frame
.clean[CaseClass]
It resolves type-related transformations at compile time, using implicit resolution, in a type-safe way.
The library ships with a set of common basic transformations and can be extended with custom ones, as sketched below.
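As an illustration only: a custom transformation is provided as an implicit instance of the library's Cleaner type class. The Cleaner.materialize constructor and its (frame, name, alias) parameters below are assumptions inferred from the 0.3.0 built-in instances, so check the Wiki tab for the authoritative API. This hypothetical instance treats "yes" (in any casing) as true:

import org.apache.spark.sql.functions.{lit, lower, trim, when}

import cleanframes.Cleaner

// assumption: materialize receives the frame, a column name and an output alias
implicit lazy val yesNoBooleanCleaner: Cleaner[Boolean] =
  Cleaner.materialize { (frame, name, alias) =>
    List(
      when(
        trim(lower(frame.col(name.get))) === "yes",
        lit(true)
      ).otherwise(false) as alias.get
    )
  }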
There is no runtime performance penalty: all the code is generated by the compiler (currently via shapeless).
For further instructions, refer to:
- the project's Wiki tab
- the cleanframes-examples project
- the bundle of unit tests, which show how to use the library
Why is the minimal Spark version 2.1.0 when Datasets were introduced in 1.6.0?
Spark 2.0.x has a problem with value class support and throws a runtime exception during its code generation. Spark 1.6.x has a problem with the testing library.
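For context, "value classes" here means wrapper types like the hypothetical model below; it is this kind of field that triggers the runtime exception during code generation on Spark 2.0.x:

// hypothetical model: Price is a value class wrapping a Double
case class Price(amount: Double) extends AnyVal

case class Product(name: Option[String], price: Option[Price])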
- Dawid Rutowicz [email protected] @dawrutowicz