
[Fix #204] Update out-dated comments #381

Closed
wants to merge 1 commit

Conversation

andrewor14
Contributor

This PR is self-explanatory.

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14008/

@pwendell
Contributor

Thanks - merged.

@asfgit closed this in c2d160f Apr 12, 2014
asfgit pushed a commit that referenced this pull request Apr 12, 2014
This PR is self-explanatory.

Author: Andrew Or <[email protected]>

Closes #381 from andrewor14/master and squashes the following commits:

3e8dde2 [Andrew Or] Fix comments for #204
(cherry picked from commit c2d160f)

Signed-off-by: Patrick Wendell <[email protected]>
pwendell added a commit to pwendell/spark that referenced this pull request May 12, 2014
Better error handling in Spark Streaming and more API cleanup

Earlier, errors in jobs generated by Spark Streaming (or in the generation of those jobs) could not be caught from the main driver thread (i.e. the thread that called StreamingContext.start()), as they were thrown in different threads. With this change, after `ssc.start()`, one can call `ssc.awaitTermination()`, which blocks until the ssc is stopped or an exception is thrown. This makes it easier to debug.

This change also adds `ssc.stop(<stop-spark-context>)`, with which you can stop the StreamingContext without stopping the SparkContext.

Also fixes the bug that came up with PRs apache#393 and apache#381. The MetadataCleaner default value has been changed from 3500 to -1 for a normal SparkContext and to 3600 when creating a StreamingContext. Also updated StreamingListenerBus with changes similar to SparkListenerBus in apache#392.

Also changed many protected[streaming] members to private[streaming].
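
As a rough sketch of the driver-side pattern described in the commit message above (not the actual patch; the app name, batch interval, and placeholder DStream setup are assumptions for illustration):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object AwaitTerminationSketch {
  def main(args: Array[String]): Unit = {
    // App name and batch interval are illustrative values, not from the patch.
    val conf = new SparkConf().setAppName("await-termination-sketch")
    val ssc = new StreamingContext(conf, Seconds(1))

    // ... define input DStreams and output operations here ...

    ssc.start()
    try {
      // Blocks until the context is stopped or a streaming job fails,
      // so errors now surface in the driver thread that called start().
      ssc.awaitTermination()
    } finally {
      // Stop the StreamingContext but leave the underlying SparkContext running.
      ssc.stop(stopSparkContext = false)
    }
  }
}
```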
pdeyhim pushed a commit to pdeyhim/spark-1 that referenced this pull request Jun 25, 2014
This PR is self-explanatory.

Author: Andrew Or <[email protected]>

Closes apache#381 from andrewor14/master and squashes the following commits:

3e8dde2 [Andrew Or] Fix comments for apache#204
mccheah added a commit to mccheah/spark that referenced this pull request Nov 28, 2018
…ons (apache#381)

* Implement a Docker image generator gradle plugin for Spark applications.

* Fix circle

* Fix more circle

* No need to setup docker for compilation only

* Fix gradle

* Add license headers

* Address comments.

* Remove extra task

* Remove 2.11

* Remove extra script

* Remove some imports

* Add more tests

* Don't bundle resources in tgz, just include individual files in resources.

* Fix license placement.

* Use InputFile and not Input

* Fix build

* Add back K8s integration tests

* Revert changes to SparkBuild

* Remove hive version 2.0.2 from test suite

* Use shared Spark session for unsafe row suite

* Revert "Use shared Spark session for unsafe row suite"

This reverts commit 1ae4f61.

* Address comments.

- Move Gradle project to the root directory
- Use Gradle 4.9
- More properties
- Use baseline

* Fix build script

* Fix build again

* Fix licenses and ignore build dir licenses

* Remove extraneous scripts
bzhaoopenstack pushed a commit to bzhaoopenstack/spark that referenced this pull request Sep 11, 2019
holdenk pushed a commit to holdenk/spark that referenced this pull request Sep 12, 2019
Initialize UrlStreamHandlerFactory per jvm and fix typo.
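
For context on the "per jvm" wording, here is a minimal sketch of the usual guard around java.net.URL.setURLStreamHandlerFactory (the object name and flag are hypothetical, and Hadoop's FsUrlStreamHandlerFactory is assumed rather than taken from that commit):

```scala
import java.net.URL
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory

object UrlHandlerInit {
  // URL.setURLStreamHandlerFactory may be called at most once per JVM;
  // a second call throws java.lang.Error, so the call is guarded by a flag.
  @volatile private var initialized = false

  def ensureInitialized(): Unit = synchronized {
    if (!initialized) {
      URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory())
      initialized = true
    }
  }
}
```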