Account for spark.executor.pyspark.memory in Yunikorn gang scheduling #2178
```go
@@ -223,6 +223,49 @@ func TestSchedule(t *testing.T) {
				},
			},
		},
		{
			name: "spark.executor.pyspark.memory",
			app: &v1beta2.SparkApplication{
				Spec: v1beta2.SparkApplicationSpec{
					Type: v1beta2.SparkApplicationTypePython,
					Driver: v1beta2.DriverSpec{
						SparkPodSpec: v1beta2.SparkPodSpec{
							Cores:  util.Int32Ptr(1),
							Memory: util.StringPtr("512m"),
						},
					},
					Executor: v1beta2.ExecutorSpec{
						Instances: util.Int32Ptr(2),
						SparkPodSpec: v1beta2.SparkPodSpec{
							Cores:  util.Int32Ptr(1),
							Memory: util.StringPtr("512m"),
						},
					},
					SparkConf: map[string]string{
						"spark.executor.pyspark.memory": "500m",
					},
				},
			},
			expected: []taskGroup{
				{
					Name:      "spark-driver",
					MinMember: 1,
					MinResource: map[string]string{
						"cpu":    "1",
						"memory": "896Mi", // 512Mi + 384Mi min overhead
					},
				},
				{
					Name:      "spark-executor",
					MinMember: 2,
					MinResource: map[string]string{
						"cpu": "1",
						// 512Mi + 384Mi min overhead + 500Mi spark.executor.pyspark.memory
						"memory": "1396Mi",
					},
				},
			},
		},
	}

	scheduler := &Scheduler{}
```

Review comment on `"memory": "1396Mi"`:

> I wonder if it would be more robust to move these to an end-to-end test suite and assert against the actual container request value? Thoughts?

Reply:

> Maybe we can add some e2e tests for the YuniKorn scheduler in the future.
Review comment:

> I think I'm also missing offheap bytes in this calculation: https://github.com/apache/spark/blob/7de71a2ec78d985c2a045f13c1275101b126cec4/core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala#L525-L532

Reply:

> Yes, the off-heap memory confs `spark.memory.offHeap.enabled` and `spark.memory.offHeap.size` should also be considered. Maybe we can fix this in another PR.