V0.2 dev #341
Conversation
Signed-off-by: duyanghao <[email protected]>
….driver.pod.name` Signed-off-by: duyanghao <[email protected]>
I'm just marking this as unresolved until we have finished discussion on #335.
@@ -261,7 +281,8 @@ private[spark] object Client {
       .getOrElse(Array.empty[String])
     val appName = sparkConf.getOption("spark.app.name")
       .getOrElse("spark")
-    val kubernetesAppId = s"$appName-$launchTime".toLowerCase.replaceAll("\\.", "-")
+    val kubernetesAppId = sparkConf.getOption("spark.kubernetes.driver.pod.name")
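A minimal sketch of the fallback this diff suggests: prefer a user-supplied pod name, otherwise derive an id from the app name and launch time (the helper function and its signature are hypothetical; only the option name and the derivation expression come from the diff):

```scala
// Hypothetical helper mirroring the diff: use the configured pod name if
// present, otherwise build a sanitized id from app name and launch time.
def kubernetesAppId(podName: Option[String], appName: String, launchTime: Long): String =
  podName.getOrElse(s"$appName-$launchTime".toLowerCase.replaceAll("\\.", "-"))
```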
I think this is out of date with branch-2.1-kubernetes: since #331 has merged, the semantics of these variables have changed.
@@ -97,6 +102,15 @@ private[spark] class Client(
       .withValue(classPath)
       .build()
   }
+  val driverCpuQuantity = new QuantityBuilder(false)
This is already included in #340
@duyanghao is this PR still active? I'm not sure what's left, since some of the pieces have been merged in through separate PRs. If there are still changes that need to be made, please open a new PR. I figured that, between fixing merge conflicts and changing the destination branch from 2.1 to 2.2, it's easier to start a new PR than to continue working on this one. Thanks again for helping make this project better!
What changes were proposed in this pull request?
Set the driver and executor label `spark-app-id` to the value of `--conf spark.kubernetes.driver.pod.name`. The main reason for this is that we can find all pods of a Spark application by the specified label (`spark-app-id`), whose value comes from the parameter `spark.kubernetes.driver.pod.name`. This makes it easier to integrate with a web server (if there is any): suppose user A submits with `--conf spark.kubernetes.driver.pod.name=xxx`; the web server can then fetch all pod information from Kubernetes using the label `spark-app-id=xxx`. Refs here.
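The lookup described above can be sketched without a real Kubernetes client. Here pods are modeled as plain case classes; only the label key `spark-app-id` comes from the proposal, and everything else (the `Pod` type, the `podsForApp` helper) is hypothetical:

```scala
// Hypothetical model: a pod with a name and a map of labels.
case class Pod(name: String, labels: Map[String, String])

// Return every pod whose spark-app-id label matches the given app id,
// mimicking the query a web server would issue against Kubernetes.
def podsForApp(pods: Seq[Pod], appId: String): Seq[Pod] =
  pods.filter(_.labels.get("spark-app-id").contains(appId))
```

With a real cluster, the same selection would be expressed as a Kubernetes label selector rather than an in-memory filter.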
How was this patch tested?
Manual tests were successful. An example is as follows:
I submit with the parameters:
then the driver pod name and labels will be:
executor #1 pod name and labels will be:
executor #2 pod name and labels will be: