[Docs] change to dataset for java code in structured-streaming-kafka-integration document

In the latest structured-streaming-kafka-integration document, the Java code examples for the Kafka integration use `DataFrame<Row>`; shouldn't they be changed to `Dataset<Row>`?
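The change is correct for Spark's Java API: since Spark 2.0, `DataFrame` exists only as a Scala type alias for `Dataset[Row]`, so Java code must declare `Dataset<Row>`. Below is a minimal sketch of the corrected streaming read, mirroring the documented example; the class name, app name, and broker/topic values are illustrative placeholders.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class KafkaReadSketch {                      // hypothetical class name
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
      .appName("KafkaReadSketch")                   // illustrative app name
      .getOrCreate();

    // Subscribe to 1 topic as a streaming source; the result is typed as
    // Dataset<Row>, since the Java API has no DataFrame class.
    Dataset<Row> df = spark
      .readStream()
      .format("kafka")
      .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
      .option("subscribe", "topic1")
      .load();

    // Kafka delivers key and value as binary columns; cast them to strings.
    Dataset<Row> kv = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
  }
}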
brandonJY authored Jan 18, 2018
1 parent 1c76a91 commit 10afff2
Showing 1 changed file with 6 additions and 6 deletions.
docs/structured-streaming-kafka-integration.md (12 changes: 6 additions & 6 deletions)
@@ -61,7 +61,7 @@ df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
 {% highlight java %}
 
 // Subscribe to 1 topic
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .readStream()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -70,7 +70,7 @@ DataFrame<Row> df = spark
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
 
 // Subscribe to multiple topics
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .readStream()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -79,7 +79,7 @@ DataFrame<Row> df = spark
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
 
 // Subscribe to a pattern
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .readStream()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -171,7 +171,7 @@ df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
 {% highlight java %}
 
 // Subscribe to 1 topic defaults to the earliest and latest offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .read()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -180,7 +180,7 @@ DataFrame<Row> df = spark
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
 
 // Subscribe to multiple topics, specifying explicit Kafka offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .read()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -191,7 +191,7 @@ DataFrame<Row> df = spark
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
 
 // Subscribe to a pattern, at the earliest and latest offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .read()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
