[DOC] Adjust coverage for partitionBy()
This is the related thread: http://search-hadoop.com/m/q3RTtO3ReeJ1iF02&subj=Re+partitioning+json+data+in+spark

Michael suggested fixing the doc.

Please review.

Author: tedyu <[email protected]>

Closes #10499 from ted-yu/master.

(cherry picked from commit 40d0396)
Signed-off-by: Michael Armbrust <[email protected]>
tedyu authored and marmbrus committed Jan 4, 2016
1 parent 7f37c1e commit 1005ee3
Showing 1 changed file with 1 addition and 1 deletion.
@@ -119,7 +119,7 @@ final class DataFrameWriter private[sql](df: DataFrame) {
   * Partitions the output by the given columns on the file system. If specified, the output is
   * laid out on the file system similar to Hive's partitioning scheme.
   *
-  * This is only applicable for Parquet at the moment.
+  * This was initially applicable for Parquet but in 1.5+ covers JSON, text, ORC and avro as well.
   *
   * @since 1.4.0
   */
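To illustrate the behavior the updated doc comment describes, here is a minimal sketch of `partitionBy` used with the JSON source rather than Parquet. It assumes Spark 1.5+, an existing `SQLContext` named `sqlContext`, and hypothetical input/output paths and column names:

```scala
// Sketch only: assumes Spark 1.5+, a running SQLContext `sqlContext`,
// and hypothetical paths and columns.
import org.apache.spark.sql.SaveMode

val df = sqlContext.read.json("/data/events.json")

// Lays the output out Hive-style, e.g. .../year=2016/month=01/part-*,
// with the partition columns encoded in the directory names.
df.write
  .partitionBy("year", "month")
  .mode(SaveMode.Overwrite)
  .json("/data/events_partitioned")
```

The same `partitionBy` call works with the other file-based sources mentioned in the fix (e.g. `.text(...)` or `.orc(...)`), since partitioning is handled by the data source write path rather than by Parquet specifically.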
