
The pod is always pending in the nested cluster #156

Open
jinsongo opened this issue Jul 2, 2021 · 10 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@jinsongo
Contributor

jinsongo commented Jul 2, 2021

What steps did you take and what happened:

Follow the guide:
https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/docs/README.md

  1. Deploy a nested cluster in kind
  2. After the nested cluster is ready, deploy the memcached service (rough command sketch below)
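
Roughly what I ran (a sketch only; the exact manifests, cluster name, and secret name come from the guide above, and the placeholders below are not the literal names):

# create the management cluster
kind create cluster
# apply the CAPN components and the nested cluster manifests from the guide (placeholder file name)
kubectl apply -f <capn-and-nested-cluster-manifests>.yaml
# once the nested control plane pods are Ready, fetch the nested cluster kubeconfig;
# CAPI-style providers store it in a <cluster-name>-kubeconfig secret under the "value" key
kubectl get secret <cluster-name>-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ./kubeconfig/kubeconfig.sample
# deploy memcached into the nested cluster
kubectl --kubeconfig ./kubeconfig/kubeconfig.sample apply -f memcached.yaml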

The memcached pod stays Pending, but the same deployment works on a regular Kubernetes cluster:

# kubectl --kubeconfig ./kubeconfig/kubeconfig.sample get pod
NAME          READY   STATUS    RESTARTS   AGE
memcached-0   0/1     Pending   0          6h43m
# kubectl --kubeconfig ./kubeconfig/kubeconfig.sample describe pod memcached-0
Name:           memcached-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=memcached
                controller-revision-hash=memcached-6b8cf9888
                statefulset.kubernetes.io/pod-name=memcached-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/memcached
Containers:
  memcached-ct:
    Image:      memcached:1.5-alpine
    Port:       11211/TCP
    Host Port:  0/TCP
    Args:
      memcached
      -m
      256
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zrvr9 (ro)
Volumes:
  default-token-zrvr9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zrvr9
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

What did you expect to happen:
The memcached pods should be Running, like this:

# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
memcached-0                    1/1     Running   0          12m
memcached-1                    1/1     Running   0          12m

Anything else you would like to add:
The deployment YAML for memcached:

apiVersion: v1
kind: Service
metadata:
  name: memcached
spec:
  ports:
  - port: 11211
  selector:
    app: memcached
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: memcached
spec:
  selector:
    matchLabels:
      app: memcached
  serviceName: "memcached"
  replicas: 2
  template:
    metadata:
      labels:
        app: memcached
    spec:
      restartPolicy: Always
      hostname: memcached
      containers:
      - name: memcached-ct
        image: memcached:1.5-alpine
        ports:
        - containerPort: 11211
        args: ["memcached", "-m", "256"]

Environment:

  • cluster-api-provider-nested version: v0.10
  • Minikube/KIND version: kind v0.11.1
  • Kubernetes version: (use kubectl version): v1.21.1
  • OS (e.g. from /etc/os-release): Red Hat Enterprise Linux 8

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jul 2, 2021
@jinsongo
Contributor Author

jinsongo commented Jul 2, 2021

@christopherhein
Contributor

Hey @wangjsty, this is expected right now. We haven't finished the integration updates needed to support CAPN + VC out of the box; the doc updates should mostly be called out in #141, which is blocked on updating VC to support the way we're releasing images and manifests with CAPN/Prow.

@Fei-Guo

Fei-Guo commented Jul 2, 2021

Yes. At this moment, unless your purpose is to try CAPN specifically, you can try the following VC demo, which uses the old ClusterVersion CR:
https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md
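
From memory, the demo boils down to roughly the following (a sketch only; the exact manifest paths and the kubectl-vc plugin usage are in the demo doc and may differ):

# install the VirtualCluster CRDs and controllers (manifest path per the demo doc)
kubectl apply -f virtualcluster/config/setup/all_in_one.yaml
# create a ClusterVersion describing the tenant control plane components (sample file name approximate)
kubectl apply -f virtualcluster/config/sampleswithspec/clusterversion_v1_nodeport.yaml
# create the VirtualCluster and export its kubeconfig (requires the kubectl-vc plugin; flags per the demo doc)
kubectl vc create -f virtualcluster/config/sampleswithspec/virtualcluster_1_nodeport.yaml -o vc-1.kubeconfig
# workloads should then schedule inside the virtual cluster
kubectl --kubeconfig vc-1.kubeconfig apply -f memcached.yaml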

@jinsongo
Contributor Author

jinsongo commented Jul 4, 2021

@christopherhein @Fei-Guo Thank you, I will try the VC demo first until the CAPN + VC integration is done.

@jinsongo
Contributor Author

jinsongo commented Jul 5, 2021

@christopherhein @Fei-Guo Could you please answer a couple of questions to help my understanding? Thanks in advance.

  1. Currently, a nested cluster created by CAPN without VC can't be used to deploy workloads (Deployment/StatefulSet) until the CAPN + VC integration is done, is that right?
  2. It looks like it's okay to deploy workloads on VC without CAPN, so what is the user scenario for CAPN? Or could it just be used as a common/unified Cluster API provider for VC? Thanks.

@Fei-Guo

Fei-Guo commented Jul 5, 2021

  1. Yes.
  2. CAPN is the replacement for the original ClusterVersion CR/controller in VC. It follows the CAPI standard (sketched below) and provides much better manageability for tenant control plane Pods.
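
As an illustration of what following the CAPI standard means in practice, here is an abbreviated sketch of the objects a CAPN tenant control plane is declared with (the names are made up, the API versions are approximate for this point in time, and the provider's cluster template contains the full set of fields):

apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: tenant-1                     # hypothetical name
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
    kind: NestedControlPlane
    name: tenant-1-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: NestedCluster
    name: tenant-1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: NestedCluster
metadata:
  name: tenant-1
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: NestedControlPlane
metadata:
  name: tenant-1-control-plane
# the NestedControlPlane in turn references NestedEtcd/NestedAPIServer/NestedControllerManager
# objects for the individual tenant control plane components (omitted here)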

@christopherhein
Contributor

To add a little more color to #1: you could technically use this to run a set of data plane nodes with VMs or anything else and just use pod-based control planes, but you'd likely need to do some customizing as well; i.e., as of today that isn't supported out of the box.

@jinsongo
Contributor Author

jinsongo commented Jul 6, 2021

@Fei-Guo @christopherhein Thank you for your explanation!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 4, 2021
@christopherhein
Contributor

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 4, 2021