[DOCS][MINOR] Fixed a few typos in the Structured Streaming documentation

Fixed a few typos.

There is one more I'm not sure of:

```
        Append mode uses watermark to drop old aggregation state. But the output of a
        windowed aggregation is delayed the late threshold specified in `withWatermark()` as by
        the modes semantics, rows can be added to the Result Table only once after they are
```

Not sure how to change `is delayed the late threshold`.

Author: Seigneurin, Alexis (CONT) <[email protected]>

Closes #17443 from aseigneurin/typos.
Seigneurin, Alexis (CONT) authored and srowen committed Mar 30, 2017
1 parent e9d268f commit 669a11b
Showing 1 changed file with 9 additions and 9 deletions.
18 changes: 9 additions & 9 deletions docs/structured-streaming-programming-guide.md
@@ -717,11 +717,11 @@ However, to run this query for days, it's necessary for the system to bound the
intermediate in-memory state it accumulates. This means the system needs to know when an old
aggregate can be dropped from the in-memory state because the application is not going to receive
late data for that aggregate any more. To enable this, in Spark 2.1, we have introduced
-**watermarking**, which let's the engine automatically track the current event time in the data and
+**watermarking**, which lets the engine automatically track the current event time in the data
and attempt to clean up old state accordingly. You can define the watermark of a query by
-specifying the event time column and the threshold on how late the data is expected be in terms of
+specifying the event time column and the threshold on how late the data is expected to be in terms of
event time. For a specific window starting at time `T`, the engine will maintain state and allow late
-data to be update the state until `(max event time seen by the engine - late threshold > T)`.
+data to update the state until `(max event time seen by the engine - late threshold > T)`.
In other words, late data within the threshold will be aggregated,
but data later than the threshold will be dropped. Let's understand this with an example. We can
easily define watermarking on the previous example using `withWatermark()` as shown below.
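
(For readers skimming this diff: the guide's own code example at this point is collapsed. Below is a minimal Scala sketch of the `withWatermark()` pattern; the socket source, the one-word-per-line simplification, and the option values are illustrative assumptions, not taken from this commit.)

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, window}

val spark = SparkSession.builder.appName("WatermarkSketch").getOrCreate()

// Socket source with an ingestion timestamp; each line is treated as a single
// word, purely to keep the sketch short.
val words = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .option("includeTimestamp", true)
  .load()
  .toDF("word", "timestamp")

// Watermark on the event-time column: state for windows older than
// (max event time seen by the engine - 10 minutes) can be dropped.
val windowedCounts = words
  .withWatermark("timestamp", "10 minutes")
  .groupBy(
    window(col("timestamp"), "10 minutes", "5 minutes"),
    col("word"))
  .count()
```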
@@ -792,7 +792,7 @@ This watermark lets the engine maintain intermediate state for additional 10 min
data to be counted. For example, the data `(12:09, cat)` is out of order and late, and it falls in
windows `12:05 - 12:15` and `12:10 - 12:20`. Since, it is still ahead of the watermark `12:04` in
the trigger, the engine still maintains the intermediate counts as state and correctly updates the
-counts of the related windows. However, when the watermark is updated to 12:11, the intermediate
+counts of the related windows. However, when the watermark is updated to `12:11`, the intermediate
state for window `(12:00 - 12:10)` is cleared, and all subsequent data (e.g. `(12:04, donkey)`)
is considered "too late" and therefore ignored. Note that after every trigger,
the updated counts (i.e. purple rows) are written to sink as the trigger output, as dictated by
@@ -825,7 +825,7 @@ section for detailed explanation of the semantics of each output mode.
same column as the timestamp column used in the aggregate. For example,
`df.withWatermark("time", "1 min").groupBy("time2").count()` is invalid
in Append output mode, as watermark is defined on a different column
-as the aggregation column.
+from the aggregation column.

- `withWatermark` must be called before the aggregation for the watermark details to be used.
For example, `df.groupBy("time").count().withWatermark("time", "1 min")` is invalid in Append
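
(The hunk above concerns the watermark-before-aggregation rule. A small Scala sketch of a valid versus an invalid ordering, using `df` in the same spirit as the guide's own snippets: an assumed streaming DataFrame with an event-time column `time`.)

```scala
import org.apache.spark.sql.functions.{col, window}

// df: a streaming DataFrame with a TimestampType column "time" (assumed).
// Valid: the watermark is declared on the same column the aggregation uses,
// and before the aggregation, so the engine can bound its state in Append mode.
val valid = df
  .withWatermark("time", "1 min")
  .groupBy(window(col("time"), "5 minutes"))
  .count()

// Invalid in Append mode: the watermark is declared after the aggregation,
// so it cannot be used to age out the aggregation state.
// val invalid = df.groupBy("time").count().withWatermark("time", "1 min")
```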
@@ -909,7 +909,7 @@ track of all the data received in the stream. This is therefore fundamentally ha
efficiently.

## Starting Streaming Queries
-Once you have defined the final result DataFrame/Dataset, all that is left is for you start the streaming computation. To do that, you have to use the `DataStreamWriter`
+Once you have defined the final result DataFrame/Dataset, all that is left is for you to start the streaming computation. To do that, you have to use the `DataStreamWriter`
([Scala](api/scala/index.html#org.apache.spark.sql.streaming.DataStreamWriter)/[Java](api/java/org/apache/spark/sql/streaming/DataStreamWriter.html)/[Python](api/python/pyspark.sql.html#pyspark.sql.streaming.DataStreamWriter) docs)
returned through `Dataset.writeStream()`. You will have to specify one or more of the following in this interface.

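(A minimal Scala sketch of that interface, continuing from the windowed-counts example sketched earlier; the console sink, output mode, and query name are illustrative choices rather than part of this commit.)

```scala
// windowedCounts: the streaming aggregation defined earlier (assumed in scope).
val query = windowedCounts.writeStream
  .outputMode("update")            // "append", "complete", or "update"
  .format("console")               // console sink is convenient for experiments
  .option("truncate", "false")
  .queryName("windowed-word-counts")
  .start()

query.awaitTermination()
```
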
@@ -1396,15 +1396,15 @@ You can directly get the current status and metrics of an active query using
`lastProgress()` returns a `StreamingQueryProgress` object
in [Scala](api/scala/index.html#org.apache.spark.sql.streaming.StreamingQueryProgress)
and [Java](api/java/org/apache/spark/sql/streaming/StreamingQueryProgress.html)
-and an dictionary with the same fields in Python. It has all the information about
+and a dictionary with the same fields in Python. It has all the information about
the progress made in the last trigger of the stream - what data was processed,
what were the processing rates, latencies, etc. There is also
`streamingQuery.recentProgress` which returns an array of last few progresses.

-In addition, `streamingQuery.status()` returns `StreamingQueryStatus` object
+In addition, `streamingQuery.status()` returns a `StreamingQueryStatus` object
in [Scala](api/scala/index.html#org.apache.spark.sql.streaming.StreamingQueryStatus)
and [Java](api/java/org/apache/spark/sql/streaming/StreamingQueryStatus.html)
-and an dictionary with the same fields in Python. It gives information about
+and a dictionary with the same fields in Python. It gives information about
what the query is immediately doing - is a trigger active, is data being processed, etc.

Here are a few examples.
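
(As one such example, a minimal Scala sketch of polling these APIs, assuming `query` is an active `StreamingQuery` like the one started above.)

```scala
// Most recent progress report; null until the first trigger has completed.
val progress = query.lastProgress
if (progress != null) {
  println(s"batch=${progress.batchId} inputRows=${progress.numInputRows} " +
    s"rowsPerSecond=${progress.processedRowsPerSecond}")
}

// The last few progress reports retained by the query, as JSON.
query.recentProgress.foreach(p => println(p.json))

// Current status: whether a trigger is active and whether data is available.
println(query.status.prettyJson)
```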