Generate CRD specs, bump to v1beta2 #578

Merged (9 commits) on Sep 13, 2019
README.md (1 addition, 1 deletion)

@@ -14,7 +14,7 @@

The Kubernetes Operator for Apache Spark is under active development, but backward compatibility of the APIs is guaranteed for beta releases.

**If you are currently using the `v1alpha1` version of the APIs in your manifests, please update them to use the `v1beta1` version by changing `apiVersion: "sparkoperator.k8s.io/v1alpha1"` to `apiVersion: "sparkoperator.k8s.io/v1beta1"`. You will also need to delete the `v1alpha1` version of the CustomResourceDefinitions named `sparkapplications.sparkoperator.k8s.io` and `scheduledsparkapplications.sparkoperator.k8s.io`, and replace them with the `v1beta1` version either by installing the latest version of the operator or by running `kubectl create -f manifest/spark-operator-crds.yaml`.**
**If you are currently using the `v1alpha1` or `v1beta1` version of the APIs in your manifests, please update them to use the `v1beta2` version by changing `apiVersion: "sparkoperator.k8s.io/<version>"` to `apiVersion: "sparkoperator.k8s.io/v1beta2"`. You will also need to delete the previous version of the CustomResourceDefinitions named `sparkapplications.sparkoperator.k8s.io` and `scheduledsparkapplications.sparkoperator.k8s.io`, and replace them with the `v1beta2` version either by installing the latest version of the operator or by running `kubectl create -f manifest/crds`.**
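
As a concrete illustration of the migration described above, here is a minimal sketch of the steps, assuming the generated `v1beta2` CRD manifests live under `manifest/crds/` as referenced in this PR:

```bash
# Delete the old CRDs; note this also removes any existing SparkApplication
# and ScheduledSparkApplication objects, so re-create them afterwards from
# manifests updated to the new API version.
kubectl delete crd sparkapplications.sparkoperator.k8s.io
kubectl delete crd scheduledsparkapplications.sparkoperator.k8s.io

# Install the v1beta2 CRDs generated by controller-gen.
kubectl create -f manifest/crds/

# In your application manifests, change the API version, e.g.:
#   apiVersion: "sparkoperator.k8s.io/v1beta2"
```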

Customization of Spark pods, e.g., mounting arbitrary volumes and setting pod affinity, is currently experimental and implemented using a Kubernetes
[Mutating Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), which became beta in Kubernetes 1.9.
docs/developer-guide.md (7 additions, 0 deletions)

@@ -51,6 +51,7 @@ Before building the operator the first time, run the following commands to get t
$ go get -u k8s.io/code-generator/cmd/client-gen
$ go get -u k8s.io/code-generator/cmd/deepcopy-gen
$ go get -u k8s.io/code-generator/cmd/defaulter-gen
$ go get -u sigs.k8s.io/controller-tools/cmd/controller-gen
```

To update the auto-generated code, run the following command. (This step is only required if the CRD types have been changed):
@@ -59,6 +60,12 @@
$ go generate
```

To update the auto-generated CRD definitions, run the following command:

```bash
$ controller-gen crd:trivialVersions=true,maxDescLen=0 paths="./pkg/apis/sparkoperator.k8s.io/v1beta2" output:crd:artifacts:config=./manifest/crds/
```

You can verify the current auto-generated code is up to date with:

examples/spark-pi-configmap.yaml (3 additions, 3 deletions)

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: "sparkoperator.k8s.io/v1beta1"
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
@@ -33,8 +33,8 @@ spec:
configMap:
name: dummy-cm
driver:
cores: 0.1
coreLimit: "200m"
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
examples/spark-pi-prometheus.yaml (3 additions, 3 deletions)

@@ -14,7 +14,7 @@
# limitations under the License.
#

apiVersion: "sparkoperator.k8s.io/v1beta1"
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
@@ -32,8 +32,8 @@ spec:
restartPolicy:
type: Never
driver:
cores: 0.1
coreLimit: "200m"
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
examples/spark-pi-schedule.yaml (3 additions, 3 deletions)

@@ -14,7 +14,7 @@
# limitations under the License.
#

apiVersion: "sparkoperator.k8s.io/v1beta1"
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: ScheduledSparkApplication
metadata:
name: spark-pi-scheduled
@@ -32,8 +32,8 @@ spec:
restartPolicy:
type: Never
driver:
cores: 0.1
coreLimit: "200m"
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
examples/spark-pi.yaml (3 additions, 3 deletions)

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: "sparkoperator.k8s.io/v1beta1"
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
@@ -34,8 +34,8 @@ spec:
path: "/tmp"
type: Directory
driver:
cores: 0.1
coreLimit: "200m"
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
examples/spark-py-pi.yaml (3 additions, 3 deletions)

@@ -16,7 +16,7 @@
# Support for Python is experimental, and requires building SNAPSHOT image of Apache Spark,
# with `imagePullPolicy` set to Always

apiVersion: "sparkoperator.k8s.io/v1beta1"
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: pyspark-pi
@@ -36,8 +36,8 @@ spec:
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
driver:
cores: 0.1
coreLimit: "200m"
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
hack/update-codegen.sh (1 addition, 1 deletion)

@@ -27,7 +27,7 @@ CODEGEN_PKG=${CODEGEN_PKG:-$(cd ${SCRIPT_ROOT}; ls -d -1 ./vendor/k8s.io/code-ge
# instead of the $GOPATH directly. For normal projects this can be dropped.
${CODEGEN_PKG}/generate-groups.sh "all" \
github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/client github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis \
sparkoperator.k8s.io:v1alpha1,v1beta1 \
sparkoperator.k8s.io:v1alpha1,v1beta1,v1beta2 \
--go-header-file "$(dirname ${BASH_SOURCE})/custom-boilerplate.go.txt" \
--output-base "$(dirname ${BASH_SOURCE})/../../../.."

main.go (0 additions, 8 deletions)

@@ -48,7 +48,6 @@ import (
operatorConfig "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/config"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/controller/scheduledsparkapplication"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/controller/sparkapplication"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/crd"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/util"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/webhook"
)
@@ -155,13 +154,6 @@ func main() {
batchSchedulerMgr = batchscheduler.NewSchedulerManager(config)
}

if *installCRDs {
err = crd.CreateOrUpdateCRDs(apiExtensionsClient)
if err != nil {
glog.Fatal(err)
}
}

crInformerFactory := buildCustomResourceInformerFactory(crClient)
podInformerFactory := buildPodInformerFactory(kubeClient)

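With the `installCRDs` code path removed above, the operator no longer creates or updates the CustomResourceDefinitions at startup. A minimal sketch of what a deployment would do instead, consistent with the README change in this PR, is to apply the generated definitions before running the operator:

```bash
# The operator no longer installs CRDs itself; create the generated
# v1beta2 definitions before deploying (or upgrading) the operator.
kubectl create -f manifest/crds/
```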