From 5ebacfae74899ee7f630a2a00526ca63911f4e50 Mon Sep 17 00:00:00 2001
From: Nick Pentreath
Date: Thu, 5 Jun 2014 10:01:02 +0200
Subject: [PATCH] Update docs for PySpark input formats

---
 docs/programming-guide.md | 48 +++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 22 deletions(-)

diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index 3e333876bf85f..8466b7233e980 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -381,32 +381,32 @@ Some notes on reading files with Spark:
 
 Apart from reading files as a collection of lines, `SparkContext.wholeTextFiles` lets you read a
 directory containing multiple small text files, and returns each of them as (filename, content)
 pairs. This is in contrast with `textFile`, which would return one record per line in each file.
 
-## SequenceFile and Hadoop InputFormats
+### SequenceFile and Hadoop InputFormats
 
 In addition to reading text files, PySpark supports reading
 [SequenceFile](http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/SequenceFileInputFormat.html)
 and any arbitrary [InputFormat](http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/InputFormat.html).
 
-### Writable Support
+#### Writable Support
 
 PySpark SequenceFile support loads an RDD within Java, and pickles the resulting Java objects using
 [Pyrolite](https://github.com/irmen/Pyrolite/). The following Writables are automatically converted:
 
 <table class="table">
-<tr><th>Writable Type</th><th>Scala Type</th><th>Python Type</th></tr>
-<tr><td>Text</td><td>String</td><td>unicode str</td></tr>
-<tr><td>IntWritable</td><td>Int</td><td>int</td></tr>
-<tr><td>FloatWritable</td><td>Float</td><td>float</td></tr>
-<tr><td>DoubleWritable</td><td>Double</td><td>float</td></tr>
-<tr><td>BooleanWritable</td><td>Boolean</td><td>bool</td></tr>
-<tr><td>BytesWritable</td><td>Array[Byte]</td><td>bytearray</td></tr>
-<tr><td>NullWritable</td><td>null</td><td>None</td></tr>
-<tr><td>ArrayWritable</td><td>Array[T]</td><td>list of primitives, or tuple of objects</td></tr>
-<tr><td>MapWritable</td><td>java.util.Map[K, V]</td><td>dict</td></tr>
-<tr><td>Custom Class</td><td>Custom Class conforming to Java Bean conventions</td><td>dict of public properties (via JavaBean getters and setters) + __class__ for the class type</td></tr>
+<tr><th>Writable Type</th><th>Python Type</th></tr>
+<tr><td>Text</td><td>unicode str</td></tr>
+<tr><td>IntWritable</td><td>int</td></tr>
+<tr><td>FloatWritable</td><td>float</td></tr>
+<tr><td>DoubleWritable</td><td>float</td></tr>
+<tr><td>BooleanWritable</td><td>bool</td></tr>
+<tr><td>BytesWritable</td><td>bytearray</td></tr>
+<tr><td>NullWritable</td><td>None</td></tr>
+<tr><td>ArrayWritable</td><td>list of primitives, or tuple of objects</td></tr>
+<tr><td>MapWritable</td><td>dict</td></tr>
+<tr><td>Custom Class conforming to Java Bean conventions</td><td>dict of public properties (via JavaBean getters and setters) + __class__ for the class type</td></tr>
 </table>
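Reading a SequenceFile whose keys and values appear in the table above hands back native Python
objects directly. A minimal sketch, assuming a SequenceFile of (IntWritable, Text) records at a
hypothetical path:

{% highlight python %}
# Assumes a SequenceFile of (IntWritable, Text) records at this hypothetical path.
rdd = sc.sequenceFile("hdfs://some/path/to/sequencefile")

# Keys arrive as Python ints and values as unicode strings, per the table above.
first_key, first_value = rdd.first()

# Ordinary RDD operations then work on the converted Python objects.
value_lengths = rdd.mapValues(lambda v: len(v)).collect()
{% endhighlight %}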
 
-### Loading SequenceFiles
+#### Loading SequenceFiles
 
 Similarly to text files, SequenceFiles can be loaded by specifying the path. The key and value
 classes can be specified, but for standard Writables it should work without requiring this.
 
@@ -420,10 +420,9 @@ classes can be specified, but for standard Writables it should work without requ
     (3.0, u'cc'),
     (2.0, u'bb'),
     (1.0, u'aa')]
->>> help(sc.sequenceFile) # Show sequencefile documentation
 {% endhighlight %}
 
-### Loading Arbitrary Hadoop InputFormats
+#### Loading Arbitrary Hadoop InputFormats
 
 PySpark can also read any Hadoop InputFormat, for both 'new' and 'old' Hadoop APIs. If required,
 a Hadoop configuration can be passed in as a Python dict. Here is an example using the
@@ -439,7 +438,6 @@ $ SPARK_CLASSPATH=/path/to/elasticsearch-hadoop.jar ./bin/pyspark
     {u'field1': True,
      u'field2': u'Some Text',
      u'field3': 12345})
->>> help(sc.newAPIHadoopRDD) # Show help for new API Hadoop RDD
 {% endhighlight %}
 
 Note that, if the InputFormat simply depends on a Hadoop configuration and/or input path, and
@@ -447,15 +445,21 @@ the key and value classes can easily be converted according to the above table,
 then this approach should work well for such cases.
 
 If you have custom serialized binary data (like pulling data from Cassandra / HBase) or custom
-classes that don't conform to the JavaBean requirements, then you will probably have to first
+classes that don't conform to the JavaBean requirements, then you will first need to
 transform that data on the Scala/Java side to something which can be handled by Pyrolite's pickler.
+A [Converter](api/scala/index.html#org.apache.spark.api.python.Converter) trait is provided
+for this. Simply extend this trait and implement your transformation code in the `convert`
+method. Then ensure this class is packaged into your Spark job jar and included on the PySpark
+classpath.
 
-Future support for custom 'converter' functions for keys/values that allows this to be written in Java/Scala,
-and called from Python, as well as support for writing data out as SequenceFileOutputFormat
-and other OutputFormats, is forthcoming.
+See the [Python examples]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/python) and
+the [Converter examples]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/pythonconverters)
+for examples using HBase and Cassandra.
 
+Future support for writing data out as SequenceFileOutputFormat and other OutputFormats
+is forthcoming.
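As a complement to the elasticsearch-hadoop example referenced in the patch, the general pattern
for reading an arbitrary new-API Hadoop InputFormat from PySpark looks roughly as follows. The
InputFormat class, key/value classes, and configuration key below are hypothetical placeholders
rather than a real connector:

{% highlight python %}
# Hypothetical fully qualified class names -- substitute those of the
# InputFormat you actually want to read (its jar must be on the classpath,
# e.g. via SPARK_CLASSPATH when launching pyspark).
input_format = "com.example.mapreduce.MyInputFormat"
key_class = "org.apache.hadoop.io.IntWritable"
value_class = "org.apache.hadoop.io.Text"

# Any Hadoop configuration the InputFormat needs, passed as a Python dict.
conf = {"my.input.format.option": "some-value"}

rdd = sc.newAPIHadoopRDD(input_format, key_class, value_class, conf=conf)

# Keys and values are converted per the Writable table above, so the result
# can be used like any other PySpark RDD.
print(rdd.take(5))
{% endhighlight %}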