
HOTFIX for broken CI, by SPARK-1336 #321

Closed
wants to merge 4 commits from ScrapCodes/hf-SPARK-1336

Conversation

ScrapCodes
Member

Learnt that `set -o pipefail` is very useful.
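
A minimal sketch of why that matters here (the run_build function below is a made-up stand-in, not the actual dev/run-tests script): when build output is piped through grep to filter noise, the pipeline's exit status is grep's unless pipefail is set, so a broken build can still look green to Jenkins.

    #!/usr/bin/env bash
    # Hypothetical illustration: a "build" that fails, piped through a log filter.
    run_build() { echo "[error] compilation failed"; return 1; }

    run_build | grep -v "Resolving"
    echo "exit status without pipefail: $?"   # 0 -- grep succeeded, the failure is hidden

    set -o pipefail                           # fail the pipeline if ANY stage fails
    run_build | grep -v "Resolving"
    echo "exit status with pipefail: $?"      # 1 -- the build failure is now visible to CI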

@AmplabJenkins

Merged build triggered.

@pwendell
Contributor

pwendell commented Apr 4, 2014

Soo... how do we test this... :)

@AmplabJenkins

Merged build triggered.

@ScrapCodes
Member Author

Still on it, one small bug.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished.

@AmplabJenkins

Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13746/

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished.

@AmplabJenkins

Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13748/

…is there to make sure build fails if provided SparkBuild has compile errors.
@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13749/

@ScrapCodes
Member Author

Should be good to go. Tested it locally by making it fail in various ways.
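
One way to exercise a change like this locally (a hypothetical sequence, not necessarily how it was actually tested; paths assume the Spark tree of that era): break the build on purpose and check that dev/run-tests now exits non-zero.

    # Hypothetical local check: introduce a deliberate compile error, run the
    # script, and confirm it reports failure instead of silently passing.
    echo 'this is not valid scala' >> project/SparkBuild.scala
    ./dev/run-tests
    echo "run-tests exit status: $?"          # expected: non-zero
    git checkout -- project/SparkBuild.scala  # undo the deliberate breakage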

@pwendell
Contributor

pwendell commented Apr 4, 2014

@marmbrus does this look okay to you?

@marmbrus
Contributor

marmbrus commented Apr 4, 2014

LGTM

@@ -49,7 +48,7 @@ dev/scalastyle
 echo "========================================================================="
 echo "Running Spark unit tests"
 echo "========================================================================="
-sbt/sbt assembly test
+echo -e "q\n" | sbt/sbt assembly test | grep -v -e "info.*Resolving" -e "warn.*Merging" -e "info.*Including"
Contributor

what does echo -e "q\n" do?

Member Author

If the build file has compilation errors, sbt blocks on an interactive prompt (press 'q' to quit, etc.). It might not be necessary at this point, since a broken build would already have been caught at the scalastyle step. Do you think we should remove it? I just thought it is harmless.

Contributor

It's fine to have it here, but then I would add a comment explaining this inline.
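
Something along these lines would do (illustrative wording only; the actual comment landed in commit 9d22bc2):

    # If the build file itself has compile errors, sbt drops into an interactive
    # prompt (press 'q' to quit) instead of exiting; feeding it "q" up front makes
    # the run fail fast rather than hang the CI job.
    echo -e "q\n" | sbt/sbt assembly test | grep -v -e "info.*Resolving" -e "warn.*Merging" -e "info.*Including"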

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13794/

@pwendell
Contributor

pwendell commented Apr 5, 2014

Thanks, merged.

@asfgit asfgit closed this in 7c18428 Apr 5, 2014
andrewor14 pushed a commit to andrewor14/spark that referenced this pull request Apr 7, 2014
…pache#321.

Inform DAG scheduler about all started/finished tasks.

Previously, the DAG scheduler was not always informed
when tasks started and finished. The simplest example here
is for speculated tasks: the DAGScheduler was only told about
the first attempt of a task, meaning that SparkListeners were
also not told about multiple task attempts, so users can't see
what's going on with speculation in the UI.  The DAGScheduler
also wasn't always told about finished tasks, so in the UI, some
tasks will never be shown as finished (this occurs, for example,
if a task set gets killed).

The other problem is that the fairness accounting was wrong
-- the number of running tasks in a pool was decreased when a
task set was considered done, even if all of its tasks hadn't
yet finished.

Author: Kay Ousterhout <[email protected]>

== Merge branch commits ==

commit c8d547d0f7a17f5a193bef05f5872b9f475675c5
Author: Kay Ousterhout <[email protected]>
Date:   Wed Jan 15 16:47:33 2014 -0800

    Addressed Reynold's review comments.

    Always use a TaskEndReason (remove the option), and explicitly
    signal when we don't know the reason. Also, always tell
    DAGScheduler (and associated listeners) about started tasks, even
    when they're speculated.

commit 3fee1e2e3c06b975ff7f95d595448f38cce97a04
Author: Kay Ousterhout <[email protected]>
Date:   Wed Jan 8 22:58:13 2014 -0800

    Fixed broken test and improved logging

commit ff12fcaa2567c5d02b75a1d5db35687225bcd46f
Author: Kay Ousterhout <[email protected]>
Date:   Sun Dec 29 21:08:20 2013 -0800

    Inform DAG scheduler about all finished tasks.

    Previously, the DAG scheduler was not always informed
    when tasks finished. For example, when a task set was
    aborted, the DAG scheduler was never told when the tasks
    in that task set finished. The DAG scheduler was also
    never told about the completion of speculated tasks.
    This led to confusion with SparkListeners because information
    about the completion of those tasks was never passed on to
    the listeners (so in the UI, for example, some tasks will never
    be shown as finished).

    The other problem is that the fairness accounting was wrong
    -- the number of running tasks in a pool was decreased when a
    task set was considered done, even if all of its tasks hadn't
    yet finished.
pdeyhim pushed a commit to pdeyhim/spark-1 that referenced this pull request Jun 25, 2014
Learnt about `set -o pipefail` is very useful.

Author: Prashant Sharma <[email protected]>
Author: Prashant Sharma <[email protected]>

Closes apache#321 from ScrapCodes/hf-SPARK-1336 and squashes the following commits:

9d22bc2 [Prashant Sharma] added comment why echo -e q exists.
f865951 [Prashant Sharma] made error to match with word boundry so errors does not match. This is there to make sure build fails if provided SparkBuild has compile errors.
7fffdf2 [Prashant Sharma] Removed a stray line.
97379d8 [Prashant Sharma] HOTFIX for broken CI, by SPARK-1336
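
For the f865951 commit above, "word boundary" refers to anchoring the grep pattern so that it matches only the exact word. A generic illustration (GNU grep syntax, not the exact filter from dev/run-tests):

    # Without a word boundary, a pattern like "error" also matches "errors";
    # \b restricts the match to the whole word.
    printf 'error found\nerrors: 0\n' | grep "error"       # matches both lines
    printf 'error found\nerrors: 0\n' | grep "\berror\b"   # matches only "error found"
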
gzm55 pushed a commit to MediaV/spark that referenced this pull request Jul 17, 2014
…pache#321.

(cherry picked from commit 18ad59e)

Signed-off-by: Patrick Wendell <[email protected]>
andrewor14 pushed a commit to andrewor14/spark that referenced this pull request Jan 8, 2015
…pache#321.

(cherry picked from commit 18ad59e)

Signed-off-by: Patrick Wendell <[email protected]>
@ScrapCodes ScrapCodes deleted the hf-SPARK-1336 branch June 3, 2015 05:52
mccheah added a commit to lins05/spark that referenced this pull request Jun 2, 2017
* Allow custom executor labels and annotations

* Address comments.

* Fix scalastyle.
erikerlandson pushed a commit to erikerlandson/spark that referenced this pull request Jul 28, 2017
* Allow custom executor labels and annotations

* Address comments.

* Fix scalastyle.
gatesn pushed a commit to gatesn/spark that referenced this pull request Mar 14, 2018
bzhaoopenstack pushed a commit to bzhaoopenstack/spark that referenced this pull request Sep 11, 2019
* manila-provisioner job: set manila share type
* manila-provisioner job: added a sleep in between deleting pvc+pod and sc+secret
bzhaoopenstack pushed a commit to bzhaoopenstack/spark that referenced this pull request Sep 11, 2019
* Add job terraform-provider-openstack-acceptance-test-rocky
* manila-provisioner job: set manila share type (apache#321)
* manila-provisioner job: set manila share type
* manila-provisioner job: added a sleep in between deleting pvc+pod and sc+secret
* Add job for ansible against OpenStack Rocky release
* Add periodic job for ansible against OpenStack Rocky release
* Add job for packer against OpenStack Rocky release
* Add periodic job for packer against OpenStack Rocky release
* Add periodic job for docker-machine against OpenStack Rocky release

Related-Bug: theopenlab/openlab#82
RolatZhang pushed a commit to RolatZhang/spark that referenced this pull request Mar 18, 2022
* stop task result getter correctly

* r36