
[SPARK-5751] [SQL] Sets SPARK_HOME as SPARK_PID_DIR when running Thrift server test suites #4758

Closed · wants to merge 2 commits from liancheng:thriftserver-pid-dir into master

Conversation

liancheng (Contributor)

This is a follow-up of #4720. By default, spark-daemon.sh writes PID files under /tmp, which makes it impossible to start multiple server instances simultaneously. This PR sets SPARK_PID_DIR to the Spark home directory to work around this problem.

Many thanks to @chenghao-intel for pointing out this issue!
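For illustration, here is a minimal Scala sketch of why the default location collides. The PID file naming below paraphrases spark-daemon.sh (the script's exact variable names may differ); the point is that the path depends only on the PID directory, user, command, and instance number, so two suites started concurrently by the same Jenkins user compute the same path:

```scala
// Paraphrased sketch of spark-daemon.sh's PID file naming, for illustration only.
// With the default SPARK_PID_DIR of /tmp, two concurrent builds run by the same
// user resolve to the same file and clobber each other's PID.
def pidFile(pidDir: String, user: String, command: String, instance: Int): String =
  s"$pidDir/spark-$user-$command-$instance.pid"

// Both concurrent builds get: /tmp/spark-jenkins-HiveThriftServer2-1.pid
println(pidFile("/tmp", "jenkins", "HiveThriftServer2", 1))
```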


SparkQA commented Feb 25, 2015

Test build #27935 has started for PR 4758 at commit 1b3d1e3.

  • This patch merges cleanly.

"SPARK_TESTING" -> "0",
// Points SPARK_PID_DIR to SPARK_HOME, otherwise only 1 Thrift server instance can be started
// at a time, which is not Jenkins friendly.
"SPARK_PID_DIR" -> "../../")
Contributor

I am wondering whether a better path would be Utils.createTempDir(...), just in case we want to run the test suite simultaneously on the local machine as well.

Contributor Author

Good point.

Contributor

Actually, can we just use sys.props("java.io.tmpdir") instead of creating a new directory?

Contributor

We need a different directory for each run; otherwise the PID file will be overwritten when multiple Jenkins unit tests run in parallel.

Contributor

Oh, you're right. Looks like we currently set it to /tmp already, which is probably what java.io.tmpdir points to in most cases anyway.
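Per the commit log below, the follow-up commit (252fa0f, "Uses temporary directory as Thrift server PID directory") moves in exactly this direction. A minimal sketch of that approach, assuming Spark's internal org.apache.spark.util.Utils.createTempDir() helper (which creates a fresh directory per call):

```scala
import org.apache.spark.util.Utils

// Give each test run its own PID directory so that parallel Jenkins builds
// (and concurrent local runs) never share or overwrite a PID file.
val env = Map(
  "SPARK_TESTING" -> "0",
  // A fresh temporary directory per run instead of a fixed relative path.
  "SPARK_PID_DIR" -> Utils.createTempDir().getCanonicalPath)
```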

SparkQA commented Feb 25, 2015

Test build #27935 has finished for PR 4758 at commit 1b3d1e3.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@AmplabJenkins
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/27935/

@andrewor14 (Contributor)

I said I was going to merge this, but instead I left one more comment inline.

@liancheng (Contributor, Author)

Updated; you can merge it after Jenkins nods :)

SparkQA commented Feb 27, 2015

Test build #28077 has started for PR 4758 at commit 252fa0f.

  • This patch merges cleanly.

SparkQA commented Feb 27, 2015

Test build #28077 has finished for PR 4758 at commit 252fa0f.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@AmplabJenkins
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28077/

asfgit pushed a commit that referenced this pull request Feb 28, 2015
[SPARK-5751] [SQL] Sets SPARK_HOME as SPARK_PID_DIR when running Thrift server test suites

This is a follow-up of #4720. By default, `spark-daemon.sh` writes PID files under `/tmp`, which makes it impossible to start multiple server instances simultaneously. This PR sets `SPARK_PID_DIR` to the Spark home directory to work around this problem.

Many thanks to chenghao-intel for pointing out this issue!


Author: Cheng Lian <[email protected]>

Closes #4758 from liancheng/thriftserver-pid-dir and squashes the following commits:

252fa0f [Cheng Lian] Uses temporary directory as Thrift server PID directory
1b3d1e3 [Cheng Lian] Sets SPARK_HOME as SPARK_PID_DIR when running Thrift server test suites

(cherry picked from commit 8c468a6)
Signed-off-by: Cheng Lian <[email protected]>
asfgit closed this in 8c468a6 on Feb 28, 2015
@liancheng (Contributor, Author)

@andrewor14 I just merged it to master and branch-1.3.

liancheng deleted the thriftserver-pid-dir branch on February 28, 2015 at 00:43.
@andrewor14 (Contributor)

Cool, thanks.

kazuyukitanimura pushed a commit to kazuyukitanimura/spark that referenced this pull request Aug 10, 2022
Iceberg 0.13.0.3 - ADT 1.1.7

2022-05-20

PRs Merged

* Internal: Parquet bloom filter support (apache#594 (https://github.pie.apple.com/IPR/apache-incubator-iceberg/pull/594))
* Internal: AWS Kms Client (apache#630 (https://github.pie.apple.com/IPR/apache-incubator-iceberg/pull/630))
* Internal: Core: Add client-side check of encryption properties (apache#626 (https://github.pie.apple.com/IPR/apache-incubator-iceberg/pull/626))
* Core: Align snapshot summary property names for delete files (apache#4766 (apache/iceberg#4766))
* Core: Add eq and pos delete file counts to snapshot summary (apache#4677 (apache/iceberg#4677))
* Spark 3.2: Clean static vars in SparkTableUtil (apache#4765 (apache/iceberg#4765))
* Spark 3.2: Avoid reflection to load metadata tables in SparkTableUtil (apache#4758 (apache/iceberg#4758))
* Core: Fix query failure when using projection on top of partitions metadata table (apache#4720) (apache#619 (https://github.pie.apple.com/IPR/apache-incubator-iceberg/pull/619))

Key Notes

Bloom filter support and client-side encryption features can be used in this release. Both features are only enabled with explicit flags and will not affect existing tables or jobs.