[SPARK-6490][Docs] Add docs for rpc configurations #5607
Conversation
LGTM
Test build #30659 timed out for PR 5607 at commit
retest this please
Test build #30667 has finished for PR 5607 at commit
<td><code>spark.rpc.retry.wait</code></td>
<td>3s</td>
<td>
How long for an RPC ask operation to wait before starting the next retry.
nit: "before retrying"
Duration for an ...
LGTM.
<tr>
<td><code>spark.rpc.numRetries</code></td>
<td>3</td>
How many times for an RPC ask operation to retry before giving up.
Number of times to retry before an RPC task gives up.
(We should also indicate whether 1 = retry once, or 1 = run it once in total)
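To illustrate the ambiguity the reviewer is raising, here is a minimal, hypothetical sketch (not Spark's actual implementation) of how <code>spark.rpc.numRetries</code> and <code>spark.rpc.retry.wait</code> could interact. In this sketch, <code>numRetries = 3</code> means three attempts in total, i.e. one initial try plus two retries:

```scala
import scala.concurrent.duration._

// Hypothetical sketch of a retry loop driven by spark.rpc.numRetries and
// spark.rpc.retry.wait. Here numRetries counts TOTAL attempts: 3 means
// one initial try plus two retries. The names below are illustrative.
object RetrySketch {
  def askWithRetry[T](numRetries: Int, retryWait: FiniteDuration)(op: () => T): T = {
    var attempts = 0
    var lastError: Throwable = null
    while (attempts < numRetries) {
      attempts += 1
      try {
        return op() // success: return the RPC result immediately
      } catch {
        case e: Throwable =>
          lastError = e
          // wait retryWait before the next attempt, unless we are out of tries
          if (attempts < numRetries) Thread.sleep(retryWait.toMillis)
      }
    }
    throw new RuntimeException(s"Giving up after $attempts attempts", lastError)
  }
}
```

Whichever semantics the real code uses, the docs should state it explicitly, since "3 retries" and "3 attempts" differ by one network round trip.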
Can you also change the default timeout? Thanks.
Updated docs and the timeout.
@@ -48,11 +48,13 @@ object RpcUtils {

   /** Returns the default Spark timeout to use for RPC ask operations. */
   def askTimeout(conf: SparkConf): FiniteDuration = {
-    conf.getTimeAsSeconds("spark.rpc.askTimeout", "30s") seconds
+    conf.getTimeAsSeconds("spark.rpc.askTimeout",
+      conf.get("spark.network.timeout", "30s")) seconds
can we change it to the same timeout we set for spark.network.timeout elsewhere? I think we use a number higher than 30s.
Increased the default timeout to 120s
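The diff above layers the lookup: <code>spark.rpc.askTimeout</code> falls back to <code>spark.network.timeout</code>, which in turn falls back to a hard-coded default (raised to 120s in this PR). A minimal sketch of that fallback chain, using a plain <code>Map</code> in place of <code>SparkConf</code> and a deliberately crude <code>"120s" -&gt; 120</code> parse (the real <code>getTimeAsSeconds</code> handles many unit suffixes):

```scala
// Sketch of the layered config lookup: the specific key wins, then the
// general key, then the hard-coded default. A plain Map stands in for
// SparkConf; only a bare "Ns" suffix is parsed here, for illustration.
object ConfSketch {
  def askTimeoutSeconds(conf: Map[String, String]): Long = {
    val fallback = conf.getOrElse("spark.network.timeout", "120s")
    val raw = conf.getOrElse("spark.rpc.askTimeout", fallback)
    raw.stripSuffix("s").toLong
  }
}
```

This pattern lets users raise all network-related timeouts with one setting while still allowing a targeted override for RPC asks.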
LGTM
Test build #30713 has finished for PR 5607 at commit
Thanks. I've merged this.
Test build #30714 has finished for PR 5607 at commit
Added docs for rpc configurations and also fixed two places that should have been fixed in apache#5595.

Author: zsxwing <[email protected]>

Closes apache#5607 from zsxwing/SPARK-6490-docs and squashes the following commits:

25a6736 [zsxwing] Increase the default timeout to 120s
6e37c30 [zsxwing] Update docs
5577540 [zsxwing] Use spark.network.timeout as the default timeout if it presents
4f07174 [zsxwing] Fix unit tests
1c2cf26 [zsxwing] Add docs for rpc configurations