reverting a couple of changes as per Sean Owen's request
Seigneurin, Alexis (CONT) authored and committed Mar 27, 2017
1 parent 96a3ccc commit a2faf88
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions docs/structured-streaming-programming-guide.md
@@ -713,7 +713,7 @@ old windows correctly, as illustrated below.

![Handling Late Data](img/structured-streaming-late-data.png)

-However, to run this query for days, it is necessary for the system to bound the amount of
+However, to run this query for days, it's necessary for the system to bound the amount of
intermediate in-memory state it accumulates. This means the system needs to know when an old
aggregate can be dropped from the in-memory state because the application is not going to receive
late data for that aggregate any more. To enable this, in Spark 2.1, we have introduced
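
The surrounding paragraph refers to the watermarking support added in Spark 2.1 (`Dataset.withWatermark`). Below is a minimal Scala sketch of how it bounds intermediate state; the socket source, column names, and thresholds are illustrative assumptions, not part of this diff:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window

val spark = SparkSession.builder.appName("WatermarkSketch").getOrCreate()
import spark.implicits._

// Illustrative streaming input with an event-time column named `timestamp`.
val words = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()
  .selectExpr("CAST(value AS STRING) AS word", "current_timestamp() AS timestamp")

// The watermark lets Spark drop aggregation state for windows that are more
// than 10 minutes older than the latest event time seen, bounding memory use.
val windowedCounts = words
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window($"timestamp", "10 minutes", "5 minutes"), $"word")
  .count()
```
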
@@ -930,7 +930,7 @@ There are a few types of output modes.
new rows added to the Result Table since the last trigger will be
outputted to the sink. This is supported for only those queries where
rows added to the Result Table is never going to change. Hence, this mode
-guarantees that each row will be outputted only once (assuming
+guarantees that each row will be output only once (assuming
fault-tolerant sink). For example, queries with only `select`,
`where`, `map`, `flatMap`, `filter`, `join`, etc. will support Append mode.
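
As a sketch of the Append mode described above (reusing the `words` stream from the previous sketch; the sink format and paths are illustrative assumptions):

```scala
// Stateless operators such as `filter` never change rows already in the
// Result Table, so the query stays Append-compatible and each row is written
// to the sink exactly once (assuming a fault-tolerant sink).
val query = words
  .filter($"word" =!= "")
  .writeStream
  .outputMode("append")
  .format("parquet")
  .option("path", "/tmp/words-out")                 // illustrative output path
  .option("checkpointLocation", "/tmp/words-ckpt")  // illustrative checkpoint path
  .start()

query.awaitTermination()
```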

