Missing spec field causes Controller CrashLoopBackOff #1079
Note: a synchronized Secret/ConfigMap has a status.
A Secret that is not in sync has no status or event attribute.
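For illustration, here is a rough sketch of the status block a propagated FederatedSecret carries; resources that never sync are missing it entirely. Cluster names are placeholders, and the exact field names may differ slightly between KubeFed versions.

```yaml
# Hedged sketch of a propagated FederatedSecret's status.
# Cluster names are placeholders; the secret name/namespace reuse the example from this issue.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
metadata:
  name: artifactory
  namespace: default
spec:
  placement:
    clusterSelector: {}
  template:
    data:
      .dockerconfigjson: <redacted>
status:
  clusters:                  # member clusters the resource was pushed to
  - name: member-cluster-1   # placeholder
  - name: member-cluster-2   # placeholder
  conditions:
  - type: Propagation        # "True" once the controller has synced the resource
    status: "True"
```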
Found a stack trace in the federated controller.
I changed the controller image to use "canary". Same problem.
@freegroup The problem you are experiencing is almost certainly related to Kubernetes 1.15; a PR introducing its use does not yet pass CI. If you want to use KubeFed at this time, the cluster hosting KubeFed will need to be on Kubernetes 1.14.x (1.15 member clusters should work fine).
kubectl version

and the pods are again in CrashLoopBackOff. One of the joined clusters is still on 1.15, but the cluster with the federated control plane is on 1.14.
@freegroup Can you provide more detail about how you have deployed your host cluster (minikube/kind/gke, released chart version, helm version)? I'm not able to reproduce the behavior you report when I deploy kubefed with the v0.1.0-rc5 chart to a 1.14 cluster and member clusters are also 1.14.
I'm using https://gardener.cloud/ to provision the cluster. I federate 5 namespaces like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cld
  labels:
    component_id: ns.cld
---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: cld
  namespace: cld
spec:
  placement:
    clusterSelector: {}
```

After this I apply a lot of secrets like the one below:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
type: kubernetes.io/dockerconfigjson
metadata:
  name: artifactory
  namespace: default
spec:
  placement:
    clusterSelector: {}
  template:
    data:
      .dockerconfigjson: eyJhd#########T0ifX19
---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedConfigMap
metadata:
  name: cld-configmap
  namespace: cld
spec:
  placement:
    clusterSelector: {}
  template:
    data:
      service.ini: |
        [common]
        # [seconds]. Files in the Bucket are deleted after this time period
        #
        maxage=600
        # place where the CLD files are located in the Bucket
        #
        folder=cld
        # Minimum of files to keep. Required for audit log and debugging
        #
        keep=5
        # Which kind of storage backend is currently in use
        #
        storage_backend=storage.google.Backend
        #storage_backend=storage.s3.Backend
        # which kind of data source we have. E.g. test data for local development, PROD, ...
        #
        input_source=input_source.testdata_small_gen.FileSource
        #input_source=input_source.testdata_small.FileSource
        #input_source=input_source.testdata_s3.FileSource
        # the region of the bucket
        #
        region=eu-central-1
        # the bucket to use
        #
        bucket=lis-persistence
        # dashboard API endpoint to report ERROR/SUCCESS event
        #
        [dashboard]
        enabled=true
        url=http://api.dashboard.svc.cluster.local/event
```

greetings
Also, feel free to reach out on the #sig-multicluster channel on kubernetes.slack.com - an interactive chat might be more productive for debugging.
Thanks @marun for the interactive support. marun found the problem: some FederatedConfigMaps and FederatedSecrets didn't have the `spec` field.
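For reference, a minimal sketch of the shape that triggered the crash loop (the resource name is a placeholder): the first manifest has no `spec` at all, the second shows the same resource with the `spec` block present.

```yaml
# Placeholder example: a FederatedSecret with the spec block missing,
# the kind of manifest that made the controller crash-loop.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
metadata:
  name: broken-example       # placeholder name
  namespace: default
# no spec: -- neither placement nor template is defined
---
# The same resource with the spec block the controller expects.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
metadata:
  name: broken-example       # placeholder name
  namespace: default
spec:
  placement:
    clusterSelector: {}
  template:
    data:
      .dockerconfigjson: <base64-encoded docker config>   # placeholder value
```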
What happened:
FederatedSecrets / FederatedConfigMaps take up to 30 minutes to become visible in the joined clusters. After several minutes some of them are visible; after a few more minutes the next ConfigMaps may or may not appear.
You can't rely on the Secrets or ConfigMaps being synchronized at all; some are never synced.
What you expected to happen:
Newly created Federated resources show up within minutes, not hours.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- kubectl version
- Scope of installation (namespaced or cluster): cluster
- Others
/kind bug