Start rendering assets using cluster-kube-apiserver operator renderer #322
Conversation
# shellcheck disable=SC2154
podman run \
  --volume "$PWD:/assets:z" \
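The fragment quoted above mounts the current directory into the container. A minimal sketch of how such a render step could be wrapped is below; `render_assets`, `RENDER_IMAGE`, and the `render` arguments are assumptions for illustration, not the PR's actual values, and the leading `echo` keeps it a dry run so no container is started.

```shell
# Hypothetical wrapper around a renderer invocation like the one quoted
# above. RENDER_IMAGE and the render flags are assumptions; the real
# image and arguments come from the installer templates.
# "echo" keeps this a dry run so no container is actually started.
render_assets() {
  asset_dir="$1"
  # --volume ...:z relabels the host directory for SELinux so the
  # container can read and write the mounted assets.
  echo podman run \
    --volume "${asset_dir}:/assets:z" \
    "${RENDER_IMAGE:-example.invalid/renderer}" \
    render --asset-output-dir /assets
}

render_assets "$PWD"
```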
This assumes that $PWD has the generated/tls secrets; is that assumption correct?
The working directory is set here, so the generated TLS will be in ${PWD}/tls. So there is no generated directory, but I think you're handling this correctly.
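Since the generated TLS assets land directly under ${PWD}/tls rather than under a generated/ subdirectory, a guard like the following could verify the layout before rendering. This is a sketch only; `require_tls_dir` is a hypothetical helper name, not code from the PR.

```shell
# Sketch only: check that TLS assets sit directly under the working
# directory, i.e. ${PWD}/tls, rather than under generated/tls.
# require_tls_dir is a made-up helper name for illustration.
require_tls_dir() {
  base="${1:-$PWD}"
  if [ -d "${base}/tls" ]; then
    echo "found TLS assets in ${base}/tls"
  else
    echo "missing TLS asset dir: ${base}/tls" >&2
    return 1
  fi
}
```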
Yeah, ${PWD}/tls sounds like what I want, thanks!
Force-pushed from db98924 to 74084ed.
Force-pushed from 74084ed to 6bce6ba.
@smarterclayton @deads2k I updated this to fit into the new Go templates and also added the controller manager render. Once we are confident that our operators provide the same experience as the kube-core control plane, we can switch over just by copying the manifests we need. This also demonstrates that our operator rendering functionality works (we probably should think about the image versions in …)
@abhinavdahiya this is needed to set up the resources used by our operators. Can you have a look? /assign @abhinavdahiya
This PR is more useful …
Otherwise this is an unused code path.
We create additional/different resources. Doesn't this start producing those? We want to enable new things and see them work before removing the old ones.
I meant to say these new files are rendered on disk but not actually used.
Where do we put them to have them created?
installer/pkg/asset/ignition/content/bootkube.go, lines 32 to 47 at 7f8f397
kube-core-operator renders its assets in 3 dirs:

$ ls -l /opt/tectonic/kco-bootstrap/
bootstrap-configs
bootstrap-manifests
manifests

https://github.com/kubernetes-incubator/bootkube/blob/master/pkg/bootkube/bootstrap.go#L28
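Given the three output directories listed above, a copy step along these lines could mirror them into a bootkube-style asset tree. This is a hedged sketch: `copy_bootstrap_assets` is a made-up name, and the destination layout is an assumption, not what bootkube actually requires.

```shell
# Sketch only: mirror the three kube-core-operator output directories
# listed above into a destination asset tree. copy_bootstrap_assets is
# a hypothetical helper; the destination layout is an assumption.
copy_bootstrap_assets() {
  src="$1"
  dst="$2"
  for d in bootstrap-configs bootstrap-manifests manifests; do
    mkdir -p "${dst}/${d}"
    # copy directory contents only when the source dir exists
    if [ -d "${src}/${d}" ]; then
      cp -R "${src}/${d}/." "${dst}/${d}/"
    fi
  done
}
```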
@deads2k @mfojtik any progress on this? You can now get the image for your operator using https://github.com/openshift/installer/blob/master/pkg/asset/ignition/content/bootkube.go#L32
Force-pushed from 6bce6ba to 96d533e.
@abhinavdahiya updated, I think this can merge (even if it is a no-op for now) and we can figure out which manifests to copy where as a second step. /cc @smarterclayton
Force-pushed from 96d533e to fc0e120.
/approve
    --config-output-file=/assets/kube-controller-manager-bootstrap/config

# TODO: copy the bootstrap manifests to replace kube-core-operator
cp --recursive kube-apiserver-bootstrap/manifests/openshift-kube-controller-manager-ns.yaml manifests/00_openshift-kube-controller-manager-ns.yaml
Suggested change:
cp --recursive kube-controller-manager-bootstrap/manifests/openshift-kube-controller-manager-ns.yaml manifests/00_openshift-kube-controller-manager-ns.yaml
@sjenning fixed, I hate bash... @abhinavdahiya can you re-tag please, hopefully last time...
/lgtm
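The fix above swaps the source directory from kube-apiserver-bootstrap to kube-controller-manager-bootstrap; as a side note, cp --recursive is unnecessary for a single file. A sketch of the copy step, where `install_manifest` is a hypothetical helper name rather than the PR's code:

```shell
# Sketch, not the PR's code: copy a single rendered manifest into
# manifests/ with a numeric prefix so it sorts ahead of other manifests.
# install_manifest is a made-up helper name for illustration.
install_manifest() {
  src="$1"
  prefix="$2"
  mkdir -p manifests
  # plain cp is enough here; --recursive is only needed for directories
  cp "$src" "manifests/${prefix}_$(basename "$src")"
}
```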
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: abhinavdahiya, mfojtik. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing …
/retest
/retest Please review the full test history for this PR and help us cut down flakes.
With multiple folks (including the bot ;) banging away on …
e2e:
We've seen that before, e.g. here. It's a wait flake. /retest
smoke:
I suspect this is this PR's fault?
/retest
failure was:
/retest
failed with:
/retest
/hold Waiting on #415 to unstick CI.
#415 is in. /hold cancel
@mfojtik: The following test failed, say …
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
This PR will start generating assets using the new cluster-kube-apiserver-operator image.
Along with the manifests and bootstrap static pods that should be used in future by bootkube start, it will provide secrets and configmaps we can feed to the openshift-api-operator.
/cc @deads2k
/cc @juanvallejo
/cc @sttts