Resync with apache-spark-on-k8s upstream #248
Conversation
This reverts commit 50c690d.
This reverts commit 34d7af2.
* Use a secret to mount small files in driver and executors. Allows bypassing the resource staging server in a few scenarios.
* Fix scalastyle
* Address comments and add tests.
* Lightly brush up formatting.
* Make the working directory empty so that added files don't clobber existing binaries.
* Address comments.
* Drop testing file size to N+1 of the limit

(cherry picked from commit 455317d)
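A hedged sketch of the idea behind "use a secret to mount small files": Kubernetes Secrets carry their payload as base64-encoded entries in a `data` map, so a handful of small files can be packed into one Secret and mounted into the driver and executor pods, skipping the resource staging server. The object and method names below (`SmallFilesSecret`, `packFilesAsSecretData`) are illustrative stand-ins, not the actual Spark-on-K8s API.

```scala
import java.util.Base64
import java.nio.charset.StandardCharsets

object SmallFilesSecret {
  // Encode a map of file name -> raw bytes the way a Kubernetes Secret's
  // `data` field expects: base64 strings keyed by entry name.
  def packFilesAsSecretData(files: Map[String, Array[Byte]]): Map[String, String] =
    files.map { case (name, bytes) =>
      name -> Base64.getEncoder.encodeToString(bytes)
    }

  def main(args: Array[String]): Unit = {
    val data = packFilesAsSecretData(
      Map("app.conf" -> "spark.executor.instances=2".getBytes(StandardCharsets.UTF_8)))
    // Each entry is now ready to be placed in a Secret and mounted as a file.
    println(data("app.conf"))
  }
}
```

This keeps the file payload inside the pod spec's normal Secret machinery, which is why it only works "in a few scenarios": Secrets have a size cap, so only small files qualify.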
…tion (cherry picked from commit 58cebd1)
* Use a list of environment variables for JVM options. * Fix merge conflicts. (cherry picked from commit f7b5820)
Looks ok once the build passes.

mountSmallFilesBootstrap.map { bootstrap =>
  bootstrap.mountSmallFilesSecret(
    withMaybeShuffleConfigPod, withMaybeShuffleConfigExecutorContainer)
}.getOrElse(withMaybeShuffleConfigPod, withMaybeShuffleConfigExecutorContainer)
This change doesn't compile in CircleCI:

[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @ spark-kubernetes_2.11 ---
[INFO] Using zinc server for incremental compilation
[info] Compiling 9 Scala sources to /home/ubuntu/spark/resource-managers/kubernetes/core/target/scala-2.11/classes...
[warn] /home/ubuntu/spark/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/kubernetes/KubernetesClusterSchedulerBackend.scala:566: No automatic adaptation here: use
[warn] signature: Option.getOrElse[B >: A](default: => B): B
[warn] given arguments: withMaybeShuffleConfigPod, withMaybeShuffleConfigExecutorContainer
[warn] after adaptation: Option.getOrElse((withMaybeShuffleConfigPod, withMaybeShuffleConfigExecutorContainer): (io.fabric8.kubernetes.api.model.Pod, io.fabric8.kubernetes.api.model.Container))
[warn] }.getOrElse(withMaybeShuffleConfigPod, withMaybeShuffleConfigExecutorContainer)
[warn] ^
[error] /home/ubuntu/spark/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/kubernetes/KubernetesClusterSchedulerBackend.scala:566: too many arguments for method ge
[error] }.getOrElse(withMaybeShuffleConfigPod, withMaybeShuffleConfigExecutorContainer)
[error] ^
[error] /home/ubuntu/spark/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/kubernetes/KubernetesClusterSchedulerBackend.scala:571: type mismatch;
[error] found   : Any
[error] required: io.fabric8.kubernetes.api.model.Pod
[error] withMaybeSmallFilesMountedPod,
[error] ^
[error] /home/ubuntu/spark/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/kubernetes/KubernetesClusterSchedulerBackend.scala:573: type mismatch;
[error] found   : Any
[error] required: io.fabric8.kubernetes.api.model.Container
[error] withMaybeSmallFilesMountedContainer))
[error] ^
[error] /home/ubuntu/spark/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/kubernetes/KubernetesClusterSchedulerBackend.scala:591: type mismatch;
[error] found   : Any
[error] required: io.fabric8.kubernetes.api.model.Pod
[error] executorPodWithInitContainer, nodeToLocalTaskCount)
[error] ^
[warn] one warning found
[error] four errors found
[error] Compile failed at Aug 23, 2017 7:53:45 PM [4.000s]
Are we using a different Scala compiler from upstream? Maybe a different Scala version?
The only thing I can think of is SBT vs. Maven? Though SBT also uses the incremental zinc compiler.
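For context on the error itself, a minimal sketch of the auto-tupling pitfall behind the log above: `Option.getOrElse` takes a single argument, so passing two relies on deprecated argument adaptation and the result is inferred as `Any`, which then triggers the downstream type mismatches. Wrapping the pair in explicit parentheses keeps the tuple type. The `Pod`/`Container` case classes here are stand-ins, not the fabric8 model classes.

```scala
object GetOrElseTupling {
  // Stand-ins for io.fabric8.kubernetes.api.model.{Pod, Container}.
  case class Pod(name: String)
  case class Container(name: String)

  def resolve(bootstrap: Option[(Pod, Container) => (Pod, Container)],
              pod: Pod,
              container: Container): (Pod, Container) =
    bootstrap.map { b =>
      b(pod, container)
    }.getOrElse((pod, container)) // explicit tuple: one argument, stays typed

  def main(args: Array[String]): Unit = {
    // With no bootstrap configured, the original pod and container pass through.
    val (p, c) = resolve(None, Pod("driver"), Container("executor"))
    println(s"${p.name} ${c.name}")
  }
}
```

Writing `.getOrElse(pod, container)` instead compiles only via the adaptation the warning describes (or not at all, as here), so the explicit tuple is the fix regardless of which build tool is used.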
(cherry picked from commit a7f7176)
* Part 1: making test code cluster-agnostic
* Final checked
* Move all test code into KubernetesTestComponents
* Addressed comments
* Fixed doc
* Restructure the test backends (#248)
  * Restructured the test backends
  * Address comments
  * var -> val
  * Comments
  * removed deadcode

(cherry picked from commit 6b489c2)
We merged things into palantir/spark in a slightly different order than upstream -- revert to get to a clean slate, then cherry-pick the PRs in upstream order.
Notable inclusion: the PR that allows small files to bypass the resource staging server (RSS).