
Missing spec field causes Controller CrashLoopBackOff #1079

Closed
freegroup opened this issue Aug 5, 2019 · 10 comments · Fixed by #1068 or #1082

@freegroup

What happened:
FederatedSecrets / FederatedConfigMaps take up to 30 minutes to become visible in the
joined clusters. After several minutes some of them appear, and the next
ConfigMaps may (or may not) follow minutes later.
You can't rely on the Secrets or ConfigMaps being synchronized at all; some are never synced.

What you expected to happen:
Newly created Federated#### resources should become visible within minutes, not hours.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version)
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • KubeFed version
kubefedctl version: version.Info{Version:"v0.1.0-rc5-dirty", GitCommit:"99be0218bf5ac7d560ec0d7c2cfbfcbb86ba2d61", GitTreeState:"dirty", BuildDate:"2019-08-01T15:41:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}

  • Scope of installation (namespaced or cluster)
    cluster

  • Others

/kind bug

k8s-ci-robot added the kind/bug label on Aug 5, 2019
@freegroup
Author

freegroup commented Aug 5, 2019

Note: a synchronized secret/configmap has a status block like this:

status:
  clusters:
  - name: canary
  - name: cluster01
  - name: cluster02
  conditions:
  - lastProbeTime: 2019-08-05T15:27:07Z
    lastTransitionTime: 2019-08-05T15:05:31Z
    status: "True"
    type: Propagation

A secret that is not in sync has no status field and no events at all.
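
For reference, a minimal way to check this from the command line (a sketch only; it assumes the FederatedSecret named artifactory in the default namespace that appears later in this thread):

# Print the Propagation condition of a FederatedSecret; an empty result means
# the resource was never propagated and has no status at all.
kubectl get federatedsecret artifactory -n default \
  -o jsonpath='{.status.conditions[?(@.type=="Propagation")].status}'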

@freegroup
Author

freegroup commented Aug 5, 2019

I found this stack trace in the kubefed controller log:

I0805 15:44:45.396799       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0805 15:44:45.400094       1 federated_informer.go:205] Cluster kube-federation-system/cluster02 is ready
I0805 15:44:45.436013       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
E0805 15:44:45.595028       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:45.680786       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:45.792457       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:46.693225       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:46.773813       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:46.880391       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:47.889417       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:47.889426       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:47.978721       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:48.985860       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:48.987017       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:49.068801       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:50.117646       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:50.121035       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:50.155733       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:51.208579       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:51.213022       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:51.245135       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:52.295846       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:52.298331       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:52.389389       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:53.332684       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:53.389340       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:53.589363       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:54.494755       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:54.589629       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:54.789352       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:55.595440       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:55.693861       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:55.889499       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:56.792861       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:56.989470       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:57.389939       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:44:57.799064       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
E0805 15:40:40.600477       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:277
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:108
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/version/manager.go:147
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:125
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:373
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:283
/go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:156
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1561171]

goroutine 3336 [running]:
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1794f60, 0x2d1e8e0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
sigs.k8s.io/kubefed/pkg/controller/sync.GetOverrideHash(0xc00048c3e8, 0x0, 0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:278 +0xa1
sigs.k8s.io/kubefed/pkg/controller/sync.(*federatedResource).OverrideVersion(0xc000d1e5b0, 0x0, 0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:108 +0x32
sigs.k8s.io/kubefed/pkg/controller/sync/version.(*VersionManager).Update(0xc000295300, 0x1cb0fa0, 0xc000d1e5b0, 0x2d59f18, 0x0, 0x0, 0xc001696ae0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/version/manager.go:147 +0x93
sigs.k8s.io/kubefed/pkg/controller/sync.(*federatedResource).UpdateVersions(0xc000d1e5b0, 0x2d59f18, 0x0, 0x0, 0xc001696ae0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:125 +0x6f
sigs.k8s.io/kubefed/pkg/controller/sync.(*KubeFedSyncController).syncToClusters(0xc0001e2e10, 0x1cee020, 0xc000d1e5b0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:373 +0x7a7
sigs.k8s.io/kubefed/pkg/controller/sync.(*KubeFedSyncController).reconcile(0xc0001e2e10, 0xc00024d110, 0x8, 0xc0007d1c80, 0x12, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:283 +0x724
sigs.k8s.io/kubefed/pkg/controller/util.(*asyncWorker).worker(0xc0009a2b90)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:156 +0xae
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0007724e0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007724e0, 0x3b9aca00, 0x0, 0x1, 0xc0016424e0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0007724e0, 0x3b9aca00, 0xc0016424e0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/kubefed/pkg/controller/util.(*asyncWorker).Run
        /go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:119 +0xe5

freegroup changed the title from "FederatedSecrets / FederatedConfigmaps takes up to 30 minutes to be visible in joined clusters." to "Controller CrashLoopBackOff and FederatedSecrets / FederatedConfigmaps takes up to 30 minutes to be visible in joined clusters." on Aug 5, 2019
@freegroup
Author

freegroup commented Aug 5, 2019

I changed the controller image to the "canary" tag. Same problem.
( https://quay.io/repository/kubernetes-multicluster/kubefed?tab=tags )
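
For reference, one way to do that image switch (a sketch only; the container name controller-manager is an assumption, so check the Deployment spec first):

# Point the kubefed controller-manager Deployment at the canary tag
# (container name is assumed here; verify it in the Deployment before running).
kubectl -n kube-federation-system set image deployment/kubefed-controller-manager \
  controller-manager=quay.io/kubernetes-multicluster/kubefed:canary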

E0805 15:59:31.889638       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:59:32.289507       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:59:32.694443       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:59:33.199124       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:59:33.389431       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:59:33.803807       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:59:34.289361       1 reflector.go:251] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: Failed to watch <nil>: the server does not allow this method on the requested resource
E0805 15:59:34.389791       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:277
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:108
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/version/manager.go:147
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:125
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:373
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:283
/go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:156
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1561251]

goroutine 3126 [running]:
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1794f60, 0x2d1e8e0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
sigs.k8s.io/kubefed/pkg/controller/sync.GetOverrideHash(0xc000335140, 0x0, 0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:278 +0xa1
sigs.k8s.io/kubefed/pkg/controller/sync.(*federatedResource).OverrideVersion(0xc0002cbc70, 0x0, 0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:108 +0x32
sigs.k8s.io/kubefed/pkg/controller/sync/version.(*VersionManager).Update(0xc000284780, 0x1cb0fe0, 0xc0002cbc70, 0x2d59f18, 0x0, 0x0, 0xc0027faed0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/version/manager.go:147 +0x93
sigs.k8s.io/kubefed/pkg/controller/sync.(*federatedResource).UpdateVersions(0xc0002cbc70, 0x2d59f18, 0x0, 0x0, 0xc0027faed0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:125 +0x6f
sigs.k8s.io/kubefed/pkg/controller/sync.(*KubeFedSyncController).syncToClusters(0xc000202870, 0x1cee060, 0xc0002cbc70, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:373 +0x7a7
sigs.k8s.io/kubefed/pkg/controller/sync.(*KubeFedSyncController).reconcile(0xc000202870, 0xc0003caf29, 0x5, 0xc0003caf10, 0xf, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:283 +0x724
sigs.k8s.io/kubefed/pkg/controller/util.(*asyncWorker).worker(0xc001110be0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:156 +0xae
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0008933e0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008933e0, 0x3b9aca00, 0x0, 0x1, 0xc000a43380)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0008933e0, 0x3b9aca00, 0xc000a43380)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/kubefed/pkg/controller/util.(*asyncWorker).Run
        /go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:119 +0xe5

marun changed the title from "Controller CrashLoopBackOff and FederatedSecrets / FederatedConfigmaps takes up to 30 minutes to be visible in joined clusters." to "Kubernetes 1.15 causes Controller CrashLoopBackOff" on Aug 5, 2019
@marun
Contributor

marun commented Aug 5, 2019

@freegroup The problem you are experiencing is almost certainly related to Kubernetes 1.15. A PR introducing its use does not pass CI. If you want to use KubeFed at this time, it will be necessary for the cluster hosting KubeFed to be Kubernetes 1.14.x (1.15 member clusters should work fine).

@freegroup
Author

kubectl version

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-26T00:05:06Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.5", GitCommit:"0e9fcb426b100a2aea5ed5c25b3d8cfbb01a8acf", GitTreeState:"clean", BuildDate:"2019-08-05T09:13:08Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

and the pods are in CrashLoopBackOff again:

E0806 10:06:33.933851       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:277
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:108
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/version/manager.go:147
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:125
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:373
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:283
/go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:156
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1561171]

goroutine 4278 [running]:
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1794f60, 0x2d1e8e0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
sigs.k8s.io/kubefed/pkg/controller/sync.GetOverrideHash(0xc001236418, 0x0, 0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:278 +0xa1
sigs.k8s.io/kubefed/pkg/controller/sync.(*federatedResource).OverrideVersion(0xc000580410, 0x0, 0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:108 +0x32
sigs.k8s.io/kubefed/pkg/controller/sync/version.(*VersionManager).Update(0xc00028b800, 0x1cb0fa0, 0xc000580410, 0x2d59f18, 0x0, 0x0, 0xc0025c60c0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/version/manager.go:147 +0x93
sigs.k8s.io/kubefed/pkg/controller/sync.(*federatedResource).UpdateVersions(0xc000580410, 0x2d59f18, 0x0, 0x0, 0xc0025c60c0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:125 +0x6f
sigs.k8s.io/kubefed/pkg/controller/sync.(*KubeFedSyncController).syncToClusters(0xc0019dd7a0, 0x1cee020, 0xc000580410, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:373 +0x7a7
sigs.k8s.io/kubefed/pkg/controller/sync.(*KubeFedSyncController).reconcile(0xc0019dd7a0, 0xc000fe5cc9, 0x5, 0xc000fe5cb0, 0xf, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:283 +0x724
sigs.k8s.io/kubefed/pkg/controller/util.(*asyncWorker).worker(0xc001323180)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:156 +0xae
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000269bb0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000269bb0, 0x3b9aca00, 0x0, 0x1, 0xc001c5ecc0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000269bb0, 0x3b9aca00, 0xc001c5ecc0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/kubefed/pkg/controller/util.(*asyncWorker).Run
        /go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:119 +0xe5

One of the joined clusters is still on 1.15, but the cluster hosting the federation control plane is on 1.14.

@freegroup
Author

freegroup commented Aug 6, 2019

Unjoining the 1.15 member cluster helps the secrets and configmaps to sync, but the controller pods are still crashing after a few minutes.
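
For reference, the unjoin step looks roughly like this (a sketch with placeholder names; adjust cluster and context names to your setup):

# Remove the 1.15 member cluster from the KubeFed control plane.
kubefedctl unjoin <member-cluster> \
  --cluster-context <member-context> \
  --host-cluster-context <host-context>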

SAP-Laptop:lis-mixer-rest d023280$ (shoot--lis--canary) k logs kubefed-controller-manager-86d84db5f5-55zsb  -n kube-federation-system  -f
KubeFed controller-manager version: version.Info{Version:"v0.1.0-rc5", GitCommit:"99be0218bf5ac7d560ec0d7c2cfbfcbb86ba2d61", GitTreeState:"clean", BuildDate:"2019-08-01T16:50:07Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
W0806 10:19:00.735869       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0806 10:19:00.986292       1 controller-manager.go:224] Setting Options with KubeFedConfig "kube-federation-system/kubefed"
I0806 10:19:00.986320       1 controller-manager.go:315] Using valid KubeFedConfig "kube-federation-system/kubefed"
I0806 10:19:00.986338       1 controller-manager.go:139] KubeFed will target all namespaces
I0806 10:19:01.139209       1 leaderelection.go:205] attempting to acquire leader lease  kube-federation-system/kubefed-controller-manager...
I0806 10:19:22.119157       1 leaderelection.go:214] successfully acquired lease kube-federation-system/kubefed-controller-manager
I0806 10:19:22.119570       1 leaderelection.go:75] promoted as leader
I0806 10:19:22.459921       1 controller.go:91] Starting cluster controller
I0806 10:19:22.737442       1 controller.go:70] Starting scheduling manager
I0806 10:19:23.033504       1 controller.go:179] Starting schedulingpreference controller for ReplicaSchedulingPreference
I0806 10:19:24.338685       1 controller.go:81] Starting replicaschedulingpreferences controller
I0806 10:19:24.338714       1 controller.go:197] Starting plugin FederatedReplicaSet for ReplicaSchedulingPreference
I0806 10:19:24.433574       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:24.436403       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:25.153256       1 controller.go:197] Starting plugin FederatedDeployment for ReplicaSchedulingPreference
I0806 10:19:25.233516       1 controller.go:88] Starting ServiceDNS controller
I0806 10:19:25.233573       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:25.236325       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:25.239376       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:25.239862       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:25.437800       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:25.438399       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:26.162874       1 controller.go:113] Starting "service" DNSEndpoint controller
I0806 10:19:26.433163       1 controller.go:124] "service" DNSEndpoint controller synced and ready
I0806 10:19:26.536758       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:26.539655       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:27.053943       1 controller.go:79] Starting IngressDNS controller
I0806 10:19:27.133716       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:27.236135       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:27.674256       1 controller.go:113] Starting "ingress" DNSEndpoint controller
I0806 10:19:27.774464       1 controller.go:124] "ingress" DNSEndpoint controller synced and ready
I0806 10:19:28.033258       1 controller.go:70] Starting FederatedTypeConfig controller
I0806 10:19:28.838226       1 controller.go:101] Starting sync controller for "FederatedNamespace"
I0806 10:19:28.838260       1 controller.go:330] Started sync controller for "FederatedNamespace"
I0806 10:19:29.039784       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:29.042842       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:29.349223       1 controller.go:101] Starting sync controller for "FederatedDeployment"
I0806 10:19:29.349257       1 controller.go:330] Started sync controller for "FederatedDeployment"
I0806 10:19:29.538211       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:29.633566       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:30.255112       1 controller.go:101] Starting sync controller for "FederatedService"
I0806 10:19:30.255145       1 controller.go:330] Started sync controller for "FederatedService"
I0806 10:19:30.260821       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:30.263336       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:30.734197       1 controller.go:101] Starting sync controller for "FederatedCronJob"
I0806 10:19:30.734237       1 controller.go:330] Started sync controller for "FederatedCronJob"
I0806 10:19:30.739411       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:30.833613       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:31.673296       1 controller.go:101] Starting sync controller for "FederatedClusterRole"
I0806 10:19:31.673325       1 controller.go:330] Started sync controller for "FederatedClusterRole"
I0806 10:19:31.738083       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:31.740550       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:32.155638       1 controller.go:101] Starting sync controller for "FederatedIngress"
I0806 10:19:32.155673       1 controller.go:330] Started sync controller for "FederatedIngress"
I0806 10:19:32.339095       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:32.341728       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:33.243873       1 controller.go:101] Starting sync controller for "FederatedConfigMap"
I0806 10:19:33.243900       1 controller.go:330] Started sync controller for "FederatedConfigMap"
I0806 10:19:33.333513       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:33.336258       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:33.833506       1 controller.go:101] Starting sync controller for "FederatedJob"
I0806 10:19:33.833540       1 controller.go:330] Started sync controller for "FederatedJob"
I0806 10:19:33.839785       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:33.842324       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:34.853020       1 controller.go:101] Starting sync controller for "FederatedReplicaSet"
I0806 10:19:34.853056       1 controller.go:330] Started sync controller for "FederatedReplicaSet"
I0806 10:19:35.037807       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:35.133523       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:35.462888       1 controller.go:101] Starting sync controller for "FederatedSecret"
I0806 10:19:35.462922       1 controller.go:330] Started sync controller for "FederatedSecret"
I0806 10:19:35.494598       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
I0806 10:19:35.639871       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:36.080345       1 controller.go:101] Starting sync controller for "FederatedServiceAccount"
I0806 10:19:36.080376       1 controller.go:330] Started sync controller for "FederatedServiceAccount"
I0806 10:19:36.135959       1 federated_informer.go:205] Cluster kube-federation-system/canary is ready
I0806 10:19:36.347005       1 federated_informer.go:205] Cluster kube-federation-system/cluster01 is ready
E0806 10:19:42.133657       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:277
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:108
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/version/manager.go:147
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:125
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:373
/go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:283
/go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:156
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1561171]

goroutine 4111 [running]:
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1794f60, 0x2d1e8e0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
sigs.k8s.io/kubefed/pkg/controller/sync.GetOverrideHash(0xc00156c768, 0x0, 0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:278 +0xa1
sigs.k8s.io/kubefed/pkg/controller/sync.(*federatedResource).OverrideVersion(0xc00088e8f0, 0x0, 0x0, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:108 +0x32
sigs.k8s.io/kubefed/pkg/controller/sync/version.(*VersionManager).Update(0xc000ca0900, 0x1cb0fa0, 0xc00088e8f0, 0x2d59f18, 0x0, 0x0, 0xc001c9ee40, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/version/manager.go:147 +0x93
sigs.k8s.io/kubefed/pkg/controller/sync.(*federatedResource).UpdateVersions(0xc00088e8f0, 0x2d59f18, 0x0, 0x0, 0xc001c9ee40, 0x0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/resource.go:125 +0x6f
sigs.k8s.io/kubefed/pkg/controller/sync.(*KubeFedSyncController).syncToClusters(0xc000fefb00, 0x1cee020, 0xc00088e8f0, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:373 +0x7a7
sigs.k8s.io/kubefed/pkg/controller/sync.(*KubeFedSyncController).reconcile(0xc000fefb00, 0xc0007b22f0, 0x9, 0xc000f9a7e0, 0x13, 0x0)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/sync/controller.go:283 +0x724
sigs.k8s.io/kubefed/pkg/controller/util.(*asyncWorker).worker(0xc001006960)
        /go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:156 +0xae
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0014a8c60)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014a8c60, 0x3b9aca00, 0x0, 0x6977694974564701, 0xc0014bdbc0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0014a8c60, 0x3b9aca00, 0xc0014bdbc0)
        /go/src/sigs.k8s.io/kubefed/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/kubefed/pkg/controller/util.(*asyncWorker).Run
        /go/src/sigs.k8s.io/kubefed/pkg/controller/util/worker.go:119 +0xe5


NAME                                          READY   STATUS    RESTARTS   AGE
kubefed-admission-webhook-86545b9c7f-p9x6p    1/1     Running   0          4m14s
kubefed-controller-manager-86d84db5f5-55zsb   1/1     Running   3          4m14s
kubefed-controller-manager-86d84db5f5-kc9vz   1/1     Running   2          4m14s

@marun
Contributor

marun commented Aug 6, 2019

@freegroup Can you provide more detail about how you have deployed your host cluster (minikube/kind/gke, released chart version, helm version)? I'm not able to reproduce the behavior you report when I deploy kubefed with the v0.1.0-rc5 chart to a 1.14 cluster and member clusters are also 1.14.

@freegroup
Author

I'm using https://gardener.cloud/ to provision the clusters.
Gardener has RBAC enabled by default; the default service account has no access rights to the API server.

I federate 5 namespaces like this:

apiVersion: v1
kind: Namespace
metadata:
  name: cld
  labels:
    component_id: ns.cld

---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: cld
  namespace: cld
spec:
  placement:
    clusterSelector: {}

After this I apply a lot of secrets like the one below:

apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
type: kubernetes.io/dockerconfigjson
metadata:
  name: artifactory
  namespace: default
spec:
  placement:
    clusterSelector: {}
  template:
    data:
      .dockerconfigjson: eyJhd#########T0ifX19

---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedConfigMap
metadata:
  name: cld-configmap
  namespace: cld
spec:
  placement:
    clusterSelector: {}
  template:
    data:
      service.ini: |
        [common]
        # [seconds]. Files in the Bucket are deleted after this time period
        #
        maxage=600

        # place where the CLD files are located in the Bucket
        #
        folder=cld

        # Minimum of files to keep. Required for audit log and debugging
        #
        keep=5

        # Which kind of storage backend is currently in use
        #
        storage_backend=storage.google.Backend
        #storage_backend=storage.s3.Backend


        # which kind of data source we have. E.g. test data for local developent, PROD,...
        #
        input_source=input_source.testdata_small_gen.FileSource
        #input_source=input_source.testdata_small.FileSource
        #input_source=input_source.testdata_s3.FileSource

        # the region of the bucket
        #
        region=eu-central-1

        # the bucket to use
        #
        bucket=lis-persistence

        # dashboard API endpoint to report ERROR/SUCCESS event
        #
        [dashboard]
        enabled=true
        url=http://api.dashboard.svc.cluster.local/event

greetings

@marun
Contributor

marun commented Aug 6, 2019

Also, feel free to reach out on the #sig-multicluster channel on kubernetes.slack.com - an interactive chat might be more productive for debugging.

@freegroup
Author

freegroup commented Aug 6, 2019

Thanks @marun for the interactive support.

marun found the problem: some FederatedConfigMaps and FederatedSecrets were missing the spec field. I think the field should be required for these federated types; then the API server could reject the manifest and the developer would get proper feedback.
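
For anyone who hits this before a fix lands, a sketch of how to find the offending resources (assumes jq is installed):

# List FederatedConfigMaps and FederatedSecrets that have no spec at all,
# i.e. the shape of manifest that crashes the sync controller.
for kind in federatedconfigmaps federatedsecrets; do
  kubectl get "$kind" --all-namespaces -o json \
    | jq -r '.items[] | select(has("spec") | not) | "\(.kind) \(.metadata.namespace)/\(.metadata.name)"'
done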

marun changed the title from "Kubernetes 1.15 causes Controller CrashLoopBackOff" to "Missing spec field causes Controller CrashLoopBackOff" on Aug 6, 2019
marun added this to the v0.1.0 milestone on Aug 6, 2019