
Change make manifests to not rely on $PATH #674

Closed
sfzylad opened this issue Mar 20, 2019 · 6 comments · Fixed by #721
Labels: kind/feature, priority/important-soon
Milestone: Next

Comments

sfzylad (Contributor) commented Mar 20, 2019

/kind bug

What steps did you take and what happened:
I used make manifests (wrapped in an aws-vault exec command) to generate fresh manifests, then:

$ ./bazel-bin/cmd/clusterctl/darwin_amd64_pure_stripped/clusterctl create cluster --bootstrap-type kind  -v 3 --provider aws -m cmd/clusterctl/examples/aws/out/machines.yaml -c cmd/clusterctl/examples/aws/out/cluster.yaml -p cmd/clusterctl/examples/aws/out/provider-components.yaml -a cmd/clusterctl/examples/aws/out/addons.yaml
I0320 19:37:52.838943   90914 createbootstrapcluster.go:27] Creating bootstrap cluster
I0320 19:37:52.838967   90914 kind.go:57] Running: kind [create cluster --name=clusterapi]
I0320 19:38:33.845120   90914 kind.go:60] Ran: kind [create cluster --name=clusterapi] Output: Creating cluster 'kind-clusterapi' ...
 • Ensuring node image (kindest/node:v1.13.2) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.13.2) 🖼
 • [control-plane] Creating node container 📦  ...
 ✓ [control-plane] Creating node container 📦
 • [control-plane] Fixing mounts 🗻  ...
 ✓ [control-plane] Fixing mounts 🗻
 • [control-plane] Starting systemd 🖥  ...
 ✓ [control-plane] Starting systemd 🖥
 • [control-plane] Waiting for docker to be ready 🐋  ...
 ✓ [control-plane] Waiting for docker to be ready 🐋
 • [control-plane] Pre-loading images 🐋  ...
 ✓ [control-plane] Pre-loading images 🐋
 • [control-plane] Creating the kubeadm config file ⛵  ...
 ✓ [control-plane] Creating the kubeadm config file ⛵
 • [control-plane] Starting Kubernetes (this may take a minute) ☸  ...
 ✓ [control-plane] Starting Kubernetes (this may take a minute) ☸
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="clusterapi")"
kubectl cluster-info
I0320 19:38:33.845200   90914 kind.go:57] Running: kind [get kubeconfig-path --name=clusterapi]
I0320 19:38:33.899207   90914 kind.go:60] Ran: kind [get kubeconfig-path --name=clusterapi] Output: /Users/zylad/.kube/kind-config-clusterapi
I0320 19:38:33.916335   90914 clusterdeployer.go:78] Applying Cluster API stack to bootstrap cluster
I0320 19:38:33.916359   90914 applyclusterapicomponents.go:26] Applying Cluster API Provider Components
I0320 19:38:33.916592   90914 clusterclient.go:763] Waiting for kubectl apply...
W0320 19:38:34.982351   90914 clusterclient.go:779] Waiting for kubectl apply... unknown error couldn't kubectl apply, output: namespace/aws-provider-system created
namespace/cluster-api-system created
customresourcedefinition.apiextensions.k8s.io/awsclusterproviderspecs.awsprovider.k8s.io created
customresourcedefinition.apiextensions.k8s.io/awsclusterproviderstatuses.awsprovider.k8s.io created
customresourcedefinition.apiextensions.k8s.io/awsmachineproviderspecs.awsprovider.k8s.io created
customresourcedefinition.apiextensions.k8s.io/awsmachineproviderstatuses.awsprovider.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusters.cluster.k8s.io created
customresourcedefinition.apiextensions.k8s.io/machineclasses.cluster.k8s.io created
customresourcedefinition.apiextensions.k8s.io/machinedeployments.cluster.k8s.io created
customresourcedefinition.apiextensions.k8s.io/machines.cluster.k8s.io created
customresourcedefinition.apiextensions.k8s.io/machinesets.cluster.k8s.io created
clusterrole.rbac.authorization.k8s.io/aws-provider-manager-role created
clusterrole.rbac.authorization.k8s.io/cluster-api-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/aws-provider-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/cluster-api-manager-rolebinding created
service/aws-provider-controller-manager-service created
service/cluster-api-controller-manager-service created
statefulset.apps/aws-provider-controller-manager created
error: error validating "STDIN": error validating data: ValidationError(Secret): unknown field "spec" in io.k8s.api.core.v1.Secret; if you choose to ignore these errors, turn validation off with --validate=false
: exit status 1
I0320 19:38:34.982714   90914 createbootstrapcluster.go:36] Cleaning up bootstrap cluster.
I0320 19:38:34.982737   90914 kind.go:57] Running: kind [delete cluster --name=clusterapi]
I0320 19:38:35.976413   90914 kind.go:60] Ran: kind [delete cluster --name=clusterapi] Output: $KUBECONFIG is still set to use /Users/zylad/.kube/kind-config-clusterapi even though that file has been deleted, remember to unset it
F0320 19:38:35.976834   90914 create_cluster.go:61] unable to apply cluster api stack to bootstrap cluster: unable to apply cluster api controllers: couldn't kubectl apply, output: namespace/aws-provider-system created
namespace/cluster-api-system created
customresourcedefinition.apiextensions.k8s.io/awsclusterproviderspecs.awsprovider.k8s.io created
customresourcedefinition.apiextensions.k8s.io/awsclusterproviderstatuses.awsprovider.k8s.io created
customresourcedefinition.apiextensions.k8s.io/awsmachineproviderspecs.awsprovider.k8s.io created
customresourcedefinition.apiextensions.k8s.io/awsmachineproviderstatuses.awsprovider.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusters.cluster.k8s.io created
customresourcedefinition.apiextensions.k8s.io/machineclasses.cluster.k8s.io created
customresourcedefinition.apiextensions.k8s.io/machinedeployments.cluster.k8s.io created
customresourcedefinition.apiextensions.k8s.io/machines.cluster.k8s.io created
customresourcedefinition.apiextensions.k8s.io/machinesets.cluster.k8s.io created
clusterrole.rbac.authorization.k8s.io/aws-provider-manager-role created
clusterrole.rbac.authorization.k8s.io/cluster-api-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/aws-provider-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/cluster-api-manager-rolebinding created
service/aws-provider-controller-manager-service created
service/cluster-api-controller-manager-service created
statefulset.apps/aws-provider-controller-manager created
error: error validating "STDIN": error validating data: ValidationError(Secret): unknown field "spec" in io.k8s.api.core.v1.Secret; if you choose to ignore these errors, turn validation off with --validate=false
: exit status 1

What did you expect to happen:
A new AWS cluster should be created.

Anything else you would like to add:

When I ran kubectl apply -f cmd/clusterctl/examples/aws/out/provider-components.yaml --validate=false directly, it applied without errors.
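
For anyone hitting the same failure, two quick checks (paths match the commands above; the separator count is only a rough heuristic):

# Count document separators in the generated manifest; far fewer separators
# than objects points at a broken concatenation.
$ grep -c '^---' cmd/clusterctl/examples/aws/out/provider-components.yaml
# Check which clusterawsadm is first on $PATH, i.e. what make manifests ran.
$ which clusterawsadm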

Environment:

  • Cluster-api-provider-aws version:
    commit: 32a5c5147684829fcd53a9787ad1474eff937469
  • Kubernetes version: (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
  • OS (e.g. from /etc/os-release):
    Darwin Dominiks-MacBook-Pro.local 18.2.0 Darwin Kernel Version 18.2.0: Thu Dec 20 20:46:53 PST 2018; root:xnu-4903.241.1~1/RELEASE_X86_64 x86_64
k8s-ci-robot added the kind/bug label on Mar 20, 2019
randomvariable (Member) commented Mar 20, 2019

Probably an out-of-date clusterawsadm in the path, causing a --- separator to be missing between one object and the secret during concatenation.
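
Roughly what that failure mode looks like (object names are illustrative; duplicate keys are technically invalid YAML, but parsers that tolerate them typically keep the last value, so the merged document parses as a Secret that still carries the StatefulSet's spec field, which is exactly the validation error above):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: aws-provider-controller-manager
spec: {}          # StatefulSet fields elided
apiVersion: v1    # <-- no "---" before this line, so this is still the same document
kind: Secret
metadata:
  name: aws-credentials
data:
  credentials: <base64>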

The make manifests target should probably be changed to run via Bazel in a developer environment, so that the binaries matching the current workspace are used.
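
A minimal sketch of that change, assuming a //cmd/clusterawsadm Bazel target analogous to the clusterctl one in the logs above, and a generate script that honours a CLUSTERAWSADM override (both are assumptions about this repo's layout):

manifests:
	# Build clusterawsadm from the current workspace rather than
	# trusting whatever is first on the developer's $PATH.
	bazel build //cmd/clusterawsadm
	# The bazel-bin path is platform-dependent; this mirrors the
	# darwin_amd64 layout visible in the logs above.
	CLUSTERAWSADM=$(CURDIR)/bazel-bin/cmd/clusterawsadm/darwin_amd64_pure_stripped/clusterawsadm \
		./cmd/clusterctl/examples/aws/generate-yaml.sh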

sfzylad (Contributor, Author) commented Mar 20, 2019

Correct. clusterawsadm in my $PATH was from before prehistory. Closing this issue.
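
For the record, the quick check that would have caught it (assuming your build of clusterawsadm exposes a version subcommand):

$ which clusterawsadm    # was pointing at a stale global install, not the workspace build
$ clusterawsadm version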

sfzylad closed this as completed on Mar 20, 2019
randomvariable changed the title from "Error validating provider components" to "Change make manifests to not rely on $PATH" on Mar 20, 2019
randomvariable (Member) commented Mar 20, 2019
Reopening; I think there's still a devx improvement to be had here.

Possibly ties into #299

/milestone next
/kind feature

k8s-ci-robot (Contributor) commented Mar 20, 2019
@randomvariable: The provided milestone is not valid for this repository. Milestones in this repository: [Next, v1alpha1]

Use /milestone clear to clear the milestone.

In response to this:

Reopening; I think there's still a devx improvement to be had here.

Possibly ties into #299

/milestone next
/kind feature

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the kind/feature label on Mar 20, 2019
randomvariable (Member) commented Mar 20, 2019
/milestone Next

/remove-kind bug

k8s-ci-robot added this to the Next milestone on Mar 20, 2019
k8s-ci-robot removed the kind/bug label on Mar 20, 2019
randomvariable (Member) commented Mar 20, 2019
/priority important-soon

k8s-ci-robot added the priority/important-soon label on Mar 20, 2019