[SPARK-19324][SPARKR] Spark JVM stdout output is getting dropped in SparkR

## What changes were proposed in this pull request?

This mostly affects running a job from the driver in client mode when results are expected through stdout (which should be somewhat rare, but possible).

Before:
```
> a <- as.DataFrame(cars)
> b <- group_by(a, "dist")
> c <- count(b)
> sparkR.callJMethod(c$countjc, "explain", TRUE)
NULL
```

After:
```
> a <- as.DataFrame(cars)
> b <- group_by(a, "dist")
> c <- count(b)
> sparkR.callJMethod(c$countjc, "explain", TRUE)
count#11L
NULL
```

Now, `column.explain()` doesn't seem very useful (we can get more extensive output with `DataFrame.explain()`), but there are other, more complex examples with calls to `println` on the Scala/JVM side whose output is getting dropped.

## How was this patch tested?

Manual.

Author: Felix Cheung <[email protected]>

Closes #16670 from felixcheung/rjvmstdout.

(cherry picked from commit a7ab6f9)
Signed-off-by: Shivaram Venkataraman <[email protected]>