Merge pull request #458 from tdas/docs-update
Updated Java API docs for streaming, along with very minor changes in the code examples.

Docs updated for:
Scala: StreamingContext, DStream, PairDStreamFunctions
Java: JavaStreamingContext, JavaDStream, JavaPairDStream

Example updated:
JavaQueueStream: do not use a deprecated method
ActorWordCount: Use the public interface the right way.
pwendell committed Jan 19, 2014
2 parents fe8a354 + 11e6534 commit 256a355
Showing 9 changed files with 79 additions and 76 deletions.
@@ -58,10 +58,9 @@ public static void main(String[] args) throws Exception {
}

for (int i = 0; i < 30; i++) {
rddQueue.add(ssc.sc().parallelize(list));
rddQueue.add(ssc.sparkContext().parallelize(list));
}


// Create the QueueInputDStream and use it do some processing
JavaDStream<Integer> inputStream = ssc.queueStream(rddQueue);
JavaPairDStream<Integer, Integer> mappedStream = inputStream.map(
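The hunk above swaps the deprecated `ssc.sc()` accessor for the public `ssc.sparkContext()` in JavaQueueStream. For illustration, here is a minimal Scala sketch of the same queue-stream pattern using the public `sparkContext` accessor; the local master, app name, batch interval, and the numbers fed into the queue are placeholders, not values from the example.

```scala
import scala.collection.mutable.Queue

import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._  // pair DStream operations

object QueueStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("QueueStreamSketch")
    val ssc = new StreamingContext(conf, Seconds(1))

    // Use the public sparkContext accessor (the Java example now calls
    // ssc.sparkContext() instead of the deprecated ssc.sc()).
    val rddQueue = new Queue[RDD[Int]]()
    for (_ <- 1 to 30) {
      rddQueue += ssc.sparkContext.parallelize(1 to 1000)
    }

    // Create the QueueInputDStream and do some processing on it.
    val inputStream = ssc.queueStream(rddQueue)
    val mappedStream = inputStream.map(x => (x % 10, 1))
    mappedStream.reduceByKey(_ + _).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```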
@@ -88,7 +88,7 @@ extends Actor with Receiver {
override def preStart = remotePublisher ! SubscribeReceiver(context.self)

def receive = {
case msg => context.parent ! pushBlock(msg.asInstanceOf[T])
case msg => pushBlock(msg.asInstanceOf[T])
}

override def postStop() = remotePublisher ! UnsubscribeReceiver(context.self)
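The change above has the receiver actor call `pushBlock` directly, through the public `Receiver` interface, instead of messaging `context.parent`. A sketch of a receiver in that shape follows; the `SubscribeReceiver`/`UnsubscribeReceiver` messages, the publisher URL, and the import path of the `Receiver` trait are assumptions based on the example's surrounding code, not a definitive implementation.

```scala
import scala.reflect.ClassTag

import akka.actor.{Actor, ActorRef}
import org.apache.spark.streaming.receivers.Receiver  // import path assumed

// Assumed helper messages; in the actual example they are defined next to the
// feeder actor that publishes the data.
case class SubscribeReceiver(receiver: ActorRef)
case class UnsubscribeReceiver(receiver: ActorRef)

// A receiver actor in the shape of the updated example: subscribe to a remote
// publisher on start, push every received message into Spark via the public
// pushBlock helper (rather than messaging context.parent), and unsubscribe on
// stop. `urlOfPublisher` is a placeholder Akka URL.
class SampleActorReceiver[T: ClassTag](urlOfPublisher: String)
  extends Actor with Receiver {

  private lazy val remotePublisher = context.actorSelection(urlOfPublisher)

  override def preStart(): Unit = remotePublisher ! SubscribeReceiver(context.self)

  def receive = {
    case msg => pushBlock(msg.asInstanceOf[T])
  }

  override def postStop(): Unit = remotePublisher ! UnsubscribeReceiver(context.self)
}
```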
@@ -42,9 +42,15 @@ import org.apache.spark.streaming.scheduler._
import org.apache.hadoop.conf.Configuration

/**
* A StreamingContext is the main entry point for Spark Streaming functionality. Besides the basic
* information (such as, cluster URL and job name) to internally create a SparkContext, it provides
* methods used to create DStream from various input sources.
* Main entry point for Spark Streaming functionality. It provides methods used to create
* [[org.apache.spark.streaming.dstream.DStream]]s from various input sources. It can be created
* either by providing a Spark master URL and an appName, or from an org.apache.spark.SparkConf
* configuration (see core Spark documentation), or from an existing org.apache.spark.SparkContext.
* The associated SparkContext can be accessed using `context.sparkContext`. After
* creating and transforming DStreams, the streaming computation can be started and stopped
* using `context.start()` and `context.stop()`, respectively.
* `context.awaitTermination()` allows the current thread to wait for the termination
* of the context by `stop()` or by an exception.
*/
class StreamingContext private[streaming] (
sc_ : SparkContext,
@@ -63,7 +69,7 @@ class StreamingContext private[streaming] (

/**
* Create a StreamingContext by providing the configuration necessary for a new SparkContext.
* @param conf a [[org.apache.spark.SparkConf]] object specifying Spark parameters
* @param conf a org.apache.spark.SparkConf object specifying Spark parameters
* @param batchDuration the time interval at which streaming data will be divided into batches
*/
def this(conf: SparkConf, batchDuration: Duration) = {
@@ -88,7 +94,7 @@ class StreamingContext private[streaming] (
}

/**
* Re-create a StreamingContext from a checkpoint file.
* Recreate a StreamingContext from a checkpoint file.
* @param path Path to the directory that was specified as the checkpoint directory
* @param hadoopConf Optional, configuration object if necessary for reading from
* HDFS compatible filesystems
@@ -151,6 +157,7 @@ class StreamingContext private[streaming] (
private[streaming] val scheduler = new JobScheduler(this)

private[streaming] val waiter = new ContextWaiter

/**
* Return the associated Spark context
*/
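The new Scaladoc above describes the full lifecycle: create the context (here from a SparkConf), define DStreams, then start the computation and wait for termination. A short Scala sketch of that lifecycle, with master URL, app name, host, and port as placeholders:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingContextLifecycle {
  def main(args: Array[String]): Unit = {
    // Create the context from a SparkConf, one of the options the Scaladoc lists.
    val conf = new SparkConf().setMaster("local[2]").setAppName("LifecycleSketch")
    val ssc = new StreamingContext(conf, Seconds(1))

    // The associated SparkContext is available through the context.
    val sc = ssc.sparkContext

    // Define the streaming computation before starting the context.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split(" ")).count().print()

    ssc.start()             // start receiving and processing data
    ssc.awaitTermination()  // block until stop() is called or an error occurs
  }
}
```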
@@ -27,22 +27,12 @@ import scala.reflect.ClassTag
import org.apache.spark.streaming.dstream.DStream

/**
* A Discretized Stream (DStream), the basic abstraction in Spark Streaming, is a continuous
* sequence of RDDs (of the same type) representing a continuous stream of data (see [[org.apache.spark.rdd.RDD]]
* for more details on RDDs). DStreams can either be created from live data (such as, data from
* HDFS, Kafka or Flume) or it can be generated by transformation existing DStreams using operations
* such as `map`, `window` and `reduceByKeyAndWindow`. While a Spark Streaming program is running, each
* DStream periodically generates a RDD, either from live data or by transforming the RDD generated
* by a parent DStream.
*
* This class contains the basic operations available on all DStreams, such as `map`, `filter` and
* `window`. In addition, [[org.apache.spark.streaming.api.java.JavaPairDStream]] contains operations available
* only on DStreams of key-value pairs, such as `groupByKeyAndWindow` and `join`.
*
* DStreams internally is characterized by a few basic properties:
* - A list of other DStreams that the DStream depends on
* - A time interval at which the DStream generates an RDD
* - A function that is used to generate an RDD after each time interval
* A Java-friendly interface to [[org.apache.spark.streaming.dstream.DStream]], the basic
* abstraction in Spark Streaming that represents a continuous stream of data.
* DStreams can either be created from live data (such as data from TCP sockets, Kafka, Flume,
* etc.) or they can be generated by transforming existing DStreams using operations such as
* `map` and `window`. For operations applicable to key-value pair DStreams, see
* [[org.apache.spark.streaming.api.java.JavaPairDStream]].
*/
class JavaDStream[T](val dstream: DStream[T])(implicit val classTag: ClassTag[T])
extends JavaDStreamLike[T, JavaDStream[T], JavaRDD[T]] {
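The updated JavaDStream doc points to `map` and `window` as the basic transformations. A quick sketch using the Scala DStream API that JavaDStream wraps; the host, port, and window/slide durations are placeholders.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Element-wise and windowed transformations on a DStream.
val conf = new SparkConf().setMaster("local[2]").setAppName("DStreamOpsSketch")
val ssc = new StreamingContext(conf, Seconds(2))

val lines = ssc.socketTextStream("localhost", 9999)
val lengths = lines.map(_.length)                        // element-wise transformation
val windowed = lengths.window(Seconds(30), Seconds(10))  // last 30s, sliding every 10s
windowed.print()
```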
@@ -37,6 +37,10 @@ import org.apache.spark.rdd.RDD
import org.apache.spark.rdd.PairRDDFunctions
import org.apache.spark.streaming.dstream.DStream

/**
* A Java-friendly interface to a DStream of key-value pairs, which provides extra methods
* like `reduceByKey` and `join`.
*/
class JavaPairDStream[K, V](val dstream: DStream[(K, V)])(
implicit val kManifest: ClassTag[K],
implicit val vManifest: ClassTag[V])
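The new JavaPairDStream doc calls out `reduceByKey` and `join` as the extra key-value operations. A Scala sketch of both, on the pair-DStream API that JavaPairDStream wraps; the two socket sources and ports are placeholders.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._  // pair DStream operations

val conf = new SparkConf().setMaster("local[2]").setAppName("PairDStreamSketch")
val ssc = new StreamingContext(conf, Seconds(1))

// Two key-value streams keyed by the raw line.
val clicks = ssc.socketTextStream("localhost", 9999).map(line => (line, 1))
val views  = ssc.socketTextStream("localhost", 9998).map(line => (line, 1))

val clickCounts = clicks.reduceByKey(_ + _)  // per-batch counts per key
val joined      = clickCounts.join(views)    // (key, (count, 1)) for keys in both streams
joined.print()
```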
@@ -22,7 +22,6 @@ import scala.collection.JavaConversions._
import scala.reflect.ClassTag

import java.io.InputStream
import java.lang.{Integer => JInt}
import java.util.{List => JList, Map => JMap}

import akka.actor.{Props, SupervisorStrategy}
@@ -39,19 +38,20 @@ import org.apache.hadoop.conf.Configuration
import org.apache.spark.streaming.dstream.DStream

/**
* A StreamingContext is the main entry point for Spark Streaming functionality. Besides the basic
* information (such as, cluster URL and job name) to internally create a SparkContext, it provides
* methods used to create DStream from various input sources.
* A Java-friendly version of [[org.apache.spark.streaming.StreamingContext]] which is the main
* entry point for Spark Streaming functionality. It provides methods to create
* [[org.apache.spark.streaming.api.java.JavaDStream]] and
* [[org.apache.spark.streaming.api.java.JavaPairDStream]] from input sources. The internal
* org.apache.spark.api.java.JavaSparkContext (see core Spark documentation) can be accessed
* using `context.sparkContext`. After creating and transforming DStreams, the streaming
* computation can be started and stopped using `context.start()` and `context.stop()`,
* respectively. `context.awaitTermination()` allows the current thread to wait for the
* termination of a context by `stop()` or by an exception.
*/
class JavaStreamingContext(val ssc: StreamingContext) {

// TODOs:
// - Test to/from Hadoop functions
// - Support creating and registering InputStreams


/**
* Creates a StreamingContext.
* Create a StreamingContext.
* @param master Name of the Spark Master
* @param appName Name to be used when registering with the scheduler
* @param batchDuration The time interval at which streaming data will be divided into batches
@@ -60,7 +60,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
this(new StreamingContext(master, appName, batchDuration, null, Nil, Map()))

/**
* Creates a StreamingContext.
* Create a StreamingContext.
* @param master Name of the Spark Master
* @param appName Name to be used when registering with the scheduler
* @param batchDuration The time interval at which streaming data will be divided into batches
@@ -77,7 +77,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
this(new StreamingContext(master, appName, batchDuration, sparkHome, Seq(jarFile), Map()))

/**
* Creates a StreamingContext.
* Create a StreamingContext.
* @param master Name of the Spark Master
* @param appName Name to be used when registering with the scheduler
* @param batchDuration The time interval at which streaming data will be divided into batches
@@ -94,7 +94,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
this(new StreamingContext(master, appName, batchDuration, sparkHome, jars, Map()))

/**
* Creates a StreamingContext.
* Create a StreamingContext.
* @param master Name of the Spark Master
* @param appName Name to be used when registering with the scheduler
* @param batchDuration The time interval at which streaming data will be divided into batches
@@ -113,43 +113,42 @@ class JavaStreamingContext(val ssc: StreamingContext) {
this(new StreamingContext(master, appName, batchDuration, sparkHome, jars, environment))

/**
* Creates a StreamingContext using an existing SparkContext.
* Create a JavaStreamingContext using an existing JavaSparkContext.
* @param sparkContext The underlying JavaSparkContext to use
* @param batchDuration The time interval at which streaming data will be divided into batches
*/
def this(sparkContext: JavaSparkContext, batchDuration: Duration) =
this(new StreamingContext(sparkContext.sc, batchDuration))

/**
* Creates a StreamingContext using an existing SparkContext.
* Create a JavaStreamingContext using a SparkConf configuration.
* @param conf A Spark application configuration
* @param batchDuration The time interval at which streaming data will be divided into batches
*/
def this(conf: SparkConf, batchDuration: Duration) =
this(new StreamingContext(conf, batchDuration))

/**
* Re-creates a StreamingContext from a checkpoint file.
* Recreate a JavaStreamingContext from a checkpoint file.
* @param path Path to the directory that was specified as the checkpoint directory
*/
def this(path: String) = this(new StreamingContext(path, new Configuration))

/**
* Re-creates a StreamingContext from a checkpoint file.
* Re-creates a JavaStreamingContext from a checkpoint file.
* @param path Path to the directory that was specified as the checkpoint directory
*
*/
def this(path: String, hadoopConf: Configuration) = this(new StreamingContext(path, hadoopConf))


@deprecated("use sparkContext", "0.9.0")
val sc: JavaSparkContext = sparkContext

/** The underlying SparkContext */
val sparkContext = new JavaSparkContext(ssc.sc)

/**
* Create a input stream from network source hostname:port. Data is received using
* Create an input stream from network source hostname:port. Data is received using
* a TCP socket and the received bytes are interpreted as UTF8 encoded \n delimited
* lines.
* @param hostname Hostname to connect to for receiving data
@@ -162,7 +161,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Create a input stream from network source hostname:port. Data is received using
* Create an input stream from network source hostname:port. Data is received using
* a TCP socket and the received bytes are interpreted as UTF8 encoded \n delimited
* lines. Storage level of the data will be the default StorageLevel.MEMORY_AND_DISK_SER_2.
* @param hostname Hostname to connect to for receiving data
@@ -173,7 +172,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Create a input stream from network source hostname:port. Data is received using
* Create an input stream from network source hostname:port. Data is received using
* a TCP socket and the received bytes are interpreted as objects using the given
* converter.
* @param hostname Hostname to connect to for receiving data
@@ -195,7 +194,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Create a input stream that monitors a Hadoop-compatible filesystem
* Create an input stream that monitors a Hadoop-compatible filesystem
* for new files and reads them as text files (using key as LongWritable, value
* as Text and input format as TextInputFormat). Files must be written to the
* monitored directory by "moving" them from another location within the same
@@ -207,7 +206,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Create a input stream from network source hostname:port, where data is received
* Create an input stream from network source hostname:port, where data is received
* as serialized blocks (serialized using the Spark's serializer) that can be directly
* pushed into the block manager without deserializing them. This is the most efficient
* way to receive data.
@@ -226,7 +225,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Create a input stream from network source hostname:port, where data is received
* Create an input stream from network source hostname:port, where data is received
* as serialized blocks (serialized using the Spark's serializer) that can be directly
* pushed into the block manager without deserializing them. This is the most efficient
* way to receive data.
@@ -241,7 +240,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Create a input stream that monitors a Hadoop-compatible filesystem
* Create an input stream that monitors a Hadoop-compatible filesystem
* for new files and reads them using the given key-value types and input format.
* Files must be written to the monitored directory by "moving" them from another
* location within the same file system. File names starting with . are ignored.
@@ -324,7 +323,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Creates a input stream from an queue of RDDs. In each batch,
* Creates an input stream from a queue of RDDs. In each batch,
* it will process either one or all of the RDDs returned by the queue.
*
* NOTE: changes to the queue after the stream is created will not be recognized.
@@ -340,7 +339,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Creates a input stream from an queue of RDDs. In each batch,
* Creates an input stream from a queue of RDDs. In each batch,
* it will process either one or all of the RDDs returned by the queue.
*
* NOTE: changes to the queue after the stream is created will not be recognized.
@@ -357,7 +356,7 @@ class JavaStreamingContext(val ssc: StreamingContext) {
}

/**
* Creates a input stream from an queue of RDDs. In each batch,
* Creates an input stream from a queue of RDDs. In each batch,
* it will process either one or all of the RDDs returned by the queue.
*
* NOTE: changes to the queue after the stream is created will not be recognized.
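Among the input sources documented above is the file-monitoring stream that reads new text files from a Hadoop-compatible directory. A Scala sketch of that source, via the API that JavaStreamingContext delegates to; the directory path and filter are placeholders, and new files must be moved into the directory atomically, as the docs note.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setMaster("local[2]").setAppName("FileStreamSketch")
val ssc = new StreamingContext(conf, Seconds(5))

// Monitors the directory and reads newly moved-in files as text.
val logs = ssc.textFileStream("/tmp/streaming-input")
logs.filter(_.contains("ERROR")).print()

ssc.start()
ssc.awaitTermination()
```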
@@ -37,8 +37,9 @@ import org.apache.spark.streaming.Duration
* A Discretized Stream (DStream), the basic abstraction in Spark Streaming, is a continuous
* sequence of RDDs (of the same type) representing a continuous stream of data (see
* org.apache.spark.rdd.RDD in the Spark core documentation for more details on RDDs).
* DStreams can either be created from live data (such as, data from Kafka, Flume, sockets, HDFS)
* or it can be generated by transforming existing DStreams using operations such as `map`,
* DStreams can either be created from live data (such as data from TCP sockets, Kafka, Flume,
* etc.) using a [[org.apache.spark.streaming.StreamingContext]], or they can be generated by
* transforming existing DStreams using operations such as `map`,
* `window` and `reduceByKeyAndWindow`. While a Spark Streaming program is running, each DStream
* periodically generates a RDD, either from live data or by transforming the RDD generated by a
* parent DStream.
@@ -540,7 +541,6 @@ abstract class DStream[T: ClassTag] (
* on each RDD of 'this' DStream.
*/
def transform[U: ClassTag](transformFunc: (RDD[T], Time) => RDD[U]): DStream[U] = {
//new TransformedDStream(this, context.sparkContext.clean(transformFunc))
val cleanedF = context.sparkContext.clean(transformFunc)
val realTransformFunc = (rdds: Seq[RDD[_]], time: Time) => {
assert(rdds.length == 1)
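The `transform` hunk above is the time-aware variant: the user function receives each batch's RDD together with its batch `Time` and returns a new RDD. A small Scala sketch; the host, port, and tagging logic are placeholders.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Seconds, StreamingContext, Time}

val conf = new SparkConf().setMaster("local[2]").setAppName("TransformSketch")
val ssc = new StreamingContext(conf, Seconds(1))

val lines = ssc.socketTextStream("localhost", 9999)
val tagged = lines.transform { (rdd: RDD[String], time: Time) =>
  rdd.map(line => (time.milliseconds, line))  // tag every record with its batch time
}
tagged.print()
```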
@@ -18,20 +18,17 @@
package org.apache.spark.streaming.dstream

import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.dstream._

import org.apache.spark.{Partitioner, HashPartitioner}
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.{ClassTags, RDD, PairRDDFunctions}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.rdd.RDD

import scala.collection.mutable.ArrayBuffer
import scala.reflect.{ClassTag, classTag}
import scala.reflect.ClassTag

import org.apache.hadoop.mapred.{JobConf, OutputFormat}
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.mapreduce.{OutputFormat => NewOutputFormat}
import org.apache.hadoop.mapred.OutputFormat
import org.apache.hadoop.security.UserGroupInformation
import org.apache.hadoop.conf.Configuration
import org.apache.spark.streaming.{Time, Duration}

@@ -108,7 +105,7 @@ extends Serializable {
/**
* Combine elements of each key in DStream's RDDs using custom functions. This is similar to the
* combineByKey for RDDs. Please refer to combineByKey in
* [[org.apache.spark.rdd.PairRDDFunctions]] for more information.
* org.apache.spark.rdd.PairRDDFunctions in the Spark core documentation for more information.
*/
def combineByKey[C: ClassTag](
createCombiner: V => C,
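The `combineByKey` doc above mirrors the RDD operation: per-key combiners plus an explicit partitioner. A Scala sketch computing a per-key average per batch; the input format, host, and port are placeholders.

```scala
import org.apache.spark.{HashPartitioner, SparkConf}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._  // pair DStream operations

val conf = new SparkConf().setMaster("local[2]").setAppName("CombineByKeySketch")
val ssc = new StreamingContext(conf, Seconds(1))

// Parse "sensorId,value" lines into (sensorId, value) pairs.
val readings = ssc.socketTextStream("localhost", 9999)
  .map(_.split(","))
  .map(parts => (parts(0), parts(1).toDouble))

// Combine each key's values into a (sum, count) pair, then derive the average.
val sumAndCount = readings.combineByKey[(Double, Long)](
  (v: Double) => (v, 1L),                                          // createCombiner
  (acc: (Double, Long), v: Double) => (acc._1 + v, acc._2 + 1L),   // mergeValue
  (a: (Double, Long), b: (Double, Long)) => (a._1 + b._1, a._2 + b._2),  // mergeCombiners
  new HashPartitioner(2))

sumAndCount.mapValues { case (sum, count) => sum / count }.print()
```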
