[MAPR-26289][SPARK-2.1] Streaming general improvements (apache#93)
* Added include-kafka-09 profile to Assembly
* Set default poll timeout to 120s
rsotn-mapr authored Mar 1, 2017
1 parent 519f6f6 commit 611e920
Showing 2 changed files with 12 additions and 1 deletion.
10 changes: 10 additions & 0 deletions assembly/pom.xml
@@ -168,6 +168,16 @@
</dependency>
</dependencies>
</profile>
<profile>
<id>include-kafka-09</id>
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-9_${scala.binary.version}</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
</profile>
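The new `include-kafka-09` profile is opt-in, so a default build skips it. A hedged sketch of activating it at build time (assuming a standard Maven build of this Spark fork; the module selection and skip-tests flags are illustrative, not part of this commit):

```shell
# Build the assembly module with the Kafka 0.9 streaming connector bundled in.
# -P activates the Maven profile added above; -pl/-am limit the build to the
# assembly module and the modules it depends on.
mvn -pl assembly -am -Pinclude-kafka-09 -DskipTests package
```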
<profile>
<id>spark-ganglia-lgpl</id>
<dependencies>
@@ -65,7 +65,8 @@ private[spark] class KafkaRDD[K, V](
" must be set to false for executor kafka params, else offsets may commit before processing")

// TODO is it necessary to have separate configs for initial poll time vs ongoing poll time?
-  private val pollTimeout = conf.getLong("spark.streaming.kafka.consumer.poll.ms", 512)
+  private val pollTimeout = conf.getLong("spark.streaming.kafka.consumer.poll.ms",
+    conf.getTimeAsMs("spark.network.timeout", "120s"))
private val cacheInitialCapacity =
conf.getInt("spark.streaming.kafka.consumer.cache.initialCapacity", 16)
private val cacheMaxCapacity =
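With this change, the Kafka consumer poll timeout defaults to the value of `spark.network.timeout` (itself defaulting to 120s) rather than the old hard-coded 512 ms, and it can still be overridden per application. A hedged submit-time example (the application class and jar names are placeholders, not from this commit):

```shell
# Override the poll timeout explicitly (the value is in milliseconds);
# when spark.streaming.kafka.consumer.poll.ms is unset, the code above
# falls back to spark.network.timeout instead of 512 ms.
spark-submit \
  --class com.example.StreamingApp \
  --conf spark.streaming.kafka.consumer.poll.ms=30000 \
  --conf spark.network.timeout=120s \
  streaming-app.jar
```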
