The install of lustre-fs-operator is done with `kubectl apply -k`, and kubectl release 1.27 reports an error on it. The cause is a null `creationTimestamp` in our CRDs, which was placed there by the older controller-tools release we've been using.
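For context, the CRD header that the older controller-tools emits looks roughly like this (a sketch, not the exact manifest from the repo; the CRD name is illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Older controller-tools serializes the zero-value timestamp as "null".
  # kubectl 1.27 tries to parse it as an RFC 3339 timestamp and fails.
  creationTimestamp: null
  name: lustrefilesystems.example.com  # illustrative name
```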
The output, from a `./kind.sh create`, looks like this:
```
Have a nice day! 👋
Retrieving Context...
Retrieving System Config...
Target System: kind
Applying NNF node labels and taints to rabbit nodes: kind-worker2, kind-worker3...
Installing cert-manager...
waiting for it to be ready...
Installing mpi-operator...
Installing lustre-csi-driver...
Overlay for lustre-csi-driver found: overlays/kind
Installing lustre-fs-operator...
# Warning: 'bases' is deprecated. Please use 'resources' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
# Warning: 'vars' is deprecated. Please use 'replacements' instead. [EXPERIMENTAL] Run 'kustomize edit fix' to update your Kustomization automatically.
# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
namespace/nnf-lustre-fs-system created
serviceaccount/lustre-fs-controller-manager created
role.rbac.authorization.k8s.io/lustre-fs-leader-election-role created
clusterrole.rbac.authorization.k8s.io/lustre-fs-manager-role created
clusterrole.rbac.authorization.k8s.io/lustre-fs-metrics-reader created
clusterrole.rbac.authorization.k8s.io/lustre-fs-proxy-role created
rolebinding.rbac.authorization.k8s.io/lustre-fs-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/lustre-fs-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/lustre-fs-proxy-rolebinding created
configmap/lustre-fs-manager-config created
service/lustre-fs-controller-manager-metrics-service created
service/lustre-fs-webhook-service created
deployment.apps/lustre-fs-controller-manager created
certificate.cert-manager.io/lustre-fs-serving-cert created
issuer.cert-manager.io/lustre-fs-selfsigned-issuer created
validatingwebhookconfiguration.admissionregistration.k8s.io/lustre-fs-validating-webhook-configuration created
error: unable to decode "https://github.com/NearNodeFlash/lustre-fs-operator.git/config/default/?ref=v0.0.3": parsing time "null" as "2006-01-02T15:04:05Z07:00": cannot parse "null" as "2006"
Exit Error: exit status 1 (1)
nnf-deploy: error: exit status 1
```
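A minimal sketch of the offending manifest and one possible local workaround (the file path and CRD name below are illustrative, not from the repo; regenerating the CRDs with a newer controller-tools, which omits the field, is the proper fix):

```shell
# Recreate a CRD header the way older controller-tools emits it:
cat > /tmp/example-crd.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: null
  name: lustrefilesystems.example.com
EOF

# kubectl 1.27 tries to parse "null" as an RFC 3339 timestamp and fails
# with the "cannot parse \"null\" as \"2006\"" decode error seen above.
# Deleting the line before applying avoids the error:
sed -i '/creationTimestamp: null/d' /tmp/example-crd.yaml
```

This only patches a local copy of the manifests; the kustomize remote-URL install path would still need the upstream CRDs regenerated.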