ADD tls-e support for edpm_nova #588

Merged

Conversation

@SeanMooney (Contributor) commented Mar 4, 2024

Nova already connects to libvirt via a unix socket, so it does not
need the libvirt TLS certs for local communication. Nova does
not need to connect to remote libvirt either, so we do not need
to provide access to the libvirt client cert.
The 02-nova-host-specific.conf.j2 template has been updated to
enable libvirt native TLS for live migration when TLS is enabled.

Molecule test coverage is added for the above changes.

Closes: OSPRH-5053
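
For context, the template change described above amounts to a conditional along these lines in 02-nova-host-specific.conf.j2 (a sketch only; the edpm_nova_tls_certs_enabled guard is taken from the rest of the diff, and the exact placement in the template may differ):

{% if edpm_nova_tls_certs_enabled %}
[libvirt]
live_migration_with_native_tls = true
live_migration_scheme = tls
{% endif %}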

@openshift-ci openshift-ci bot requested review from frenzyfriday and fultonj March 4, 2024 20:07
@openshift-ci openshift-ci bot added the approved label Mar 4, 2024
@SeanMooney SeanMooney requested review from gibizer and vakwetu March 4, 2024 20:08
{% endif %}
"/etc/localtime:/etc/localtime:ro",
"/lib/modules:/lib/modules:ro",
"/dev:/dev",
"/run/openvswitch:/run/openvswitch",
"/run/openvswitch:/run/openvswitch:shared",
Contributor Author

FYI, the only reason I added shared is for parity with libvirt, and in case the nova_compute container started before OVS on the host for some reason.

In that case, shared propagation should allow nova to see the ovsdb socket when it gets created on the host.

@SeanMooney
Contributor Author

I have not tested this outside of molecule, so I will try to create an env with TLS enabled.

It would be ideal if podified-multinode-edpm-deployment-crc or a similar tempest Zuul job could be used to validate this instead.

I currently do not have a CRC deployment, so it will take me a day or two to test this end to end.
If anyone can test it quickly in the meantime, please update this with your findings.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/b15fe6633333470a8350cc0ef48b3ad2

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 03m 55s
podified-multinode-edpm-deployment-crc FAILURE in 1h 25m 47s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 46m 01s
✔️ edpm-ansible-molecule-edpm_bootstrap SUCCESS in 5m 48s
✔️ edpm-ansible-molecule-edpm_podman SUCCESS in 5m 50s
✔️ edpm-ansible-molecule-edpm_module_load SUCCESS in 4m 39s
✔️ edpm-ansible-molecule-edpm_kernel SUCCESS in 11m 48s
✔️ edpm-ansible-molecule-edpm_libvirt SUCCESS in 9m 49s
✔️ edpm-ansible-molecule-edpm_nova SUCCESS in 10m 33s
✔️ edpm-ansible-molecule-edpm_frr SUCCESS in 6m 49s
✔️ edpm-ansible-molecule-edpm_iscsid SUCCESS in 4m 43s
✔️ edpm-ansible-molecule-edpm_ovn_bgp_agent SUCCESS in 8m 02s
✔️ edpm-ansible-molecule-edpm_ovs SUCCESS in 5m 52s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/ce739d5f15e746ec998ef07941aed9ea

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 52m 34s
podified-multinode-edpm-deployment-crc FAILURE in 1h 24m 08s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 34m 17s
✔️ edpm-ansible-molecule-edpm_bootstrap SUCCESS in 6m 29s
✔️ edpm-ansible-molecule-edpm_podman SUCCESS in 5m 46s
✔️ edpm-ansible-molecule-edpm_module_load SUCCESS in 5m 00s
✔️ edpm-ansible-molecule-edpm_kernel SUCCESS in 12m 48s
✔️ edpm-ansible-molecule-edpm_libvirt SUCCESS in 9m 32s
✔️ edpm-ansible-molecule-edpm_nova SUCCESS in 10m 11s
✔️ edpm-ansible-molecule-edpm_frr SUCCESS in 7m 15s
✔️ edpm-ansible-molecule-edpm_iscsid SUCCESS in 4m 35s
✔️ edpm-ansible-molecule-edpm_ovn_bgp_agent SUCCESS in 6m 51s
✔️ edpm-ansible-molecule-edpm_ovs SUCCESS in 5m 10s

[libvirt]
live_migration_with_native_tls = true
live_migration_scheme = tls
{% endif %}
Contributor

OK, we agreed that we don't have information in nova-operator on whether TLS is enabled in the dataplane or not, so we cannot provide this information via the cell config. If in the future we make every nova-compute config a j2 template, then we can move this template logic to the cell config. However, we might not want to, as that would couple the nova-operator with the edpm-ansible vars.

@gibizer
Contributor

gibizer commented Mar 5, 2024

recheck
tempest failed. It seems that nova-compute deployed successfully based on the ansible logs, but no nova-compute log is visible.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/650ff5dae0da430dbf9d2b291661feb2

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 00m 29s
podified-multinode-edpm-deployment-crc FAILURE in 1h 25m 02s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 41m 03s
✔️ edpm-ansible-molecule-edpm_bootstrap SUCCESS in 6m 22s
✔️ edpm-ansible-molecule-edpm_podman SUCCESS in 6m 12s
✔️ edpm-ansible-molecule-edpm_module_load SUCCESS in 4m 54s
✔️ edpm-ansible-molecule-edpm_kernel SUCCESS in 12m 46s
✔️ edpm-ansible-molecule-edpm_libvirt SUCCESS in 9m 52s
✔️ edpm-ansible-molecule-edpm_nova SUCCESS in 9m 52s
✔️ edpm-ansible-molecule-edpm_frr SUCCESS in 6m 37s
✔️ edpm-ansible-molecule-edpm_iscsid SUCCESS in 4m 38s
✔️ edpm-ansible-molecule-edpm_ovn_bgp_agent SUCCESS in 7m 34s
✔️ edpm-ansible-molecule-edpm_ovs SUCCESS in 4m 51s


Merge Failed.

This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Warning:
Error merging github.com/openstack-k8s-operators/edpm-ansible for 588,4148445392e5eb1b62db39db20da2fb537c89729

@SeanMooney
Contributor Author

tempest failed. It seems that nova-compute deployed successfully based on the ansible logs, but no nova-compute log is visible.

The logs are now in /var/log/messages. I plan to submit a patch to the ci-framework or must-gather to make them easier to read. The current error is that when os-vif connects to OVS via the UNIX socket, ovsdbapp raises an error stating it can't retrieve the schema.

I'm not seeing any denials in the service logs or in the ovsdb log.
I'm currently trying to use ,z on the bind mount like the neutron containers do, in case it's SELinux related, but it's really odd.

Looks like I need a rebase too, so I'll do that now and then try to reproduce this locally.
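
For illustration, the SELinux experiment mentioned above would be roughly the following tweak to the volumes list shown earlier (a hypothetical sketch, not necessarily what landed; ,z asks podman to relabel the mounted content for SELinux):

"/run/openvswitch:/run/openvswitch:shared,z",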

@SeanMooney
Contributor Author

This was the error

Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.317 2 ERROR ovsdbapp.backend.ovs_idl.idlutils [None req-79ae71e2-835e-4c4a-9bad-800951cff8be efbad3f6e4b14cfda6834d8f64599d1f f21742f9e5b84ad1a4c24d0024a0d988 - - default default] Unable to open stream to unix:/var/run/openvswitch/db.sock
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif [None req-79ae71e2-835e-4c4a-9bad-800951cff8be efbad3f6e4b14cfda6834d8f64599d1f f21742f9e5b84ad1a4c24d0024a0d988 - - default default] Failed to plug vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:d5:78,bridge_name=
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif Traceback (most recent call last):
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/os_vif/__init__.py", line 77, in plug
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     plugin.plug(vif, instance_info)
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/vif_plug_ovs/ovs.py", line 347, in plug
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     self._plug_vif_generic(vif, instance_info)
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/vif_plug_ovs/ovs.py", line 280, in _plug_vif_generic
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     self.ovsdb.ensure_ovs_bridge(vif.network.bridge,
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/vif_plug_ovs/ovsdb/ovsdb_lib.py", line 75, in ensure_ovs_bridge
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     return self.ovsdb.add_br(bridge, may_exist=True,
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/vif_plug_ovs/ovsdb/ovsdb_lib.py", line 41, in ovsdb
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     self._ovsdb = ovsdb_api.get_instance(self)
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/vif_plug_ovs/ovsdb/api.py", line 28, in get_instance
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     return iface.api_factory(context)
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/vif_plug_ovs/ovsdb/impl_idl.py", line 40, in api_factory
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     idl=idl_factory(config),
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/vif_plug_ovs/ovsdb/impl_idl.py", line 32, in idl_factory
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     helper = idlutils.get_schema_helper(conn, schema_name)
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 215, in get_schema_helper
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     return create_schema_helper(fetch_schema_json(connection, schema_name))
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif   File "/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 204, in fetch_schema_json
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif     raise Exception("Could not retrieve schema from %s" % connection)
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.318 2 ERROR os_vif Exception: Could not retrieve schema from unix:/var/run/openvswitch/db.sock
Mar  5 04:17:54 np0004517961 nova_compute[117833]: 2024-03-05 09:17:54.320 2 ERROR nova.virt.libvirt.driver [None req-79ae71e2-835e-4c4a-9bad-800951cff8be efbad3f6e4b14cfda6834d8f64599d1f f21742f9e5b84ad1a4c24d0024a0d988 - - default default] [instance: dd0b7b57-7334-4ffb-b6f0-960a81d2baa7] Failed to start li

I have been using lnav -q https://logserver.rdoproject.org/88/588/23fe056c41b9d2bba62646efa8f781f7f847a7f8/github-check/podified-multinode-edpm-deployment-crc/78957db/controller/ci-framework-data/logs/192.168.122.100/log/messages

to view them.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/cf5a6d0e4d124f7f9e48af0be36a4699

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 03m 01s
podified-multinode-edpm-deployment-crc FAILURE in 1h 28m 54s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 44m 15s
✔️ edpm-ansible-molecule-edpm_bootstrap SUCCESS in 5m 30s
✔️ edpm-ansible-molecule-edpm_podman SUCCESS in 5m 33s
✔️ edpm-ansible-molecule-edpm_module_load SUCCESS in 4m 58s
✔️ edpm-ansible-molecule-edpm_kernel SUCCESS in 13m 17s
✔️ edpm-ansible-molecule-edpm_libvirt SUCCESS in 10m 05s
✔️ edpm-ansible-molecule-edpm_nova SUCCESS in 10m 17s
✔️ edpm-ansible-molecule-edpm_frr SUCCESS in 7m 06s
✔️ edpm-ansible-molecule-edpm_iscsid SUCCESS in 4m 47s
✔️ edpm-ansible-molecule-edpm_ovn_bgp_agent SUCCESS in 8m 07s
✔️ edpm-ansible-molecule-edpm_ovs SUCCESS in 5m 28s

@olliewalsh
Contributor

The OVN TLS support is only for the central DBs, so it's not relevant here. IIRC OVN manages TLS on its local OVS using a built-in CA.

@olliewalsh
Contributor

(so not sure how nova-compute could previously have been connecting to it over TLS)

@@ -10,10 +10,13 @@
},
"volumes": [
"/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro",
{% if edpm_nova_tls_certs_enabled %}
"{{ edpm_nova_tls_ca_src_dir }}/tls-ca-bundle.pem:/etc/pki/CA/cacert.pem:ro",
Contributor

isn't that the path to the libvirt CA? The CA bundle should be mounted to /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem (at least that's where it's mounted on the control plane pods)

Contributor Author

This is the path @vakwetu previously used, so I just used the same, but I can mount it to
/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem instead.

Contributor

ack, fixed that in #591, it should have been the libvirt CA, not the bundle (as it's mTLS)

Contributor

... not sure why libvirt uses such a generic path for its CA cert

Contributor Author

Yeah, I asked them and they mentioned it was probably a poor naming choice. They have libvirt in the name or path for all the rest except that root CA cert.

# will not need the libvirt or OVS client certs or CA. Nova will communicate with other services
# via their API endpoints and will connect to RabbitMQ. To support this we will need to trust
# the general root CA cert.
edpm_nova_tls_ca_src_dir: /var/lib/openstack/cacerts/nova
Contributor

on my deployment it's /var/lib/openstack/cacerts/nova-custom (OpenStackDataPlaneService name). @vakwetu FYI

Contributor Author

Hmm, that was not meant to change based on the dataplane service name.
I don't think we have access to that in Ansible.

Contributor

the dataplane-operator doesn't know anything about whether the service name is nova or nova-custom or whatever. It just knows that there is a dataplane service named X and so it puts its ca bundle under /var/lib/openstack/cacerts/X or certs and keys (and now ca.certs) under /var/lib/openstack/certs/X.

We can fix this in two ways. 1) We can add something to the service spec -- something like "mountAlias" that tells the dataplane-operator to use a different path instead. 2) we can add an ansible param that specifies the service name
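
A minimal sketch of what option 2 could look like on the Ansible side, assuming the operator passes the service name as an edpm_service_name variable (the variable name later referenced for dataplane-operator#745); the real default wiring in the role may differ:

# hypothetical: derive the CA bundle source path from the service name
# passed by the dataplane-operator instead of hard-coding "nova"
edpm_service_name: nova
edpm_nova_tls_ca_src_dir: "/var/lib/openstack/cacerts/{{ edpm_service_name }}"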

@SeanMooney (Contributor Author), Mar 8, 2024

I thought about both. 2 would only work if differently named services shared the same TLS CA,

i.e. if two libvirt services had the same CA.

I don't think you can do that automatically.

So instead of a mount alias, I think we need to extend the dataplane service with a type field,
then use that to generate the certs and define the mount point.

So type=ovs, type=libvirt or type=nova.

type=nova with name=nova-custom would result in the TLS certs being placed in .../nova, ignoring the service name "nova-custom",
which would map to /var/lib/openstack/cacerts/nova.

Do we think that is feasible?

Contributor

So, edpm-nova handles this discrepancy between nova and nova-custom by discovering the config files (the "Discover configmaps in {{ edpm_nova_config_src }}" task) and copying them to the right place on the compute node.

install-certs has no such logic. We just copy the right certs, keys, cacerts for the node from /var/lib/openstack/cacerts in the ansibleEE pod to /var/lib/openstack/cacerts on the compute node. Same for /var/lib/openstack/certs.

So, we can solve this either by mounting the certs in the ansibleEE container to the right place (/var/lib/openstack/cacerts/nova) to begin with -- by using the mountAlias thing I just mentioned.

Or we can add an ansible param that provides the service-name and then have the edpm-nova role pick things up from the right path.

Which do you guys prefer?
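
For reference, the discovery approach mentioned above follows roughly this pattern (a sketch using ansible.builtin.find; the actual task and variable names in the edpm_nova role may differ):

# hypothetical sketch: find whatever config the operator mounted for this
# service, regardless of the OpenStackDataPlaneService name, then copy it on
- name: Discover configmaps in {{ edpm_nova_config_src }}
  ansible.builtin.find:
    paths: "{{ edpm_nova_config_src }}"
    file_type: directory
  register: nova_config_dirs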

Contributor Author

Yes, so I briefly ran the type idea past gibi and his reaction was that it would allow us to add validating/defaulting webhooks for the standard dataplane services, to check, for example, that if you define a dataplane service of type=nova then the secret always has a migration SSH key defined.

It would also allow us to enforce that for any node set we only have one dataplane service of any given type.

The value of type for the standard services would basically map to the last component of the playbook:

https://github.com/openstack-k8s-operators/nova-operator/blob/main/api/v1beta1/nova_webhook.go#L74-L139

So type=nova is related to playbook: osp.edpm.nova.

We don't have to enforce that, but if we are wondering what the type would be for the existing services, that's what I would use.

Contributor

I don't think option 1 is correct. If we have 2 libvirt services (not sure what the use-case would be but hypothetically....) then they have independent config. If they happen to be using the same CA that's fine, but they should get individual keys/certs/secrets

Contributor Author

The service name can solve this problem for this specific use case, so I updated the patch to support that.

I think having a type on the dataplane service, so we can use it for validation or defaulting, is still worth considering, but that can be done separately.

@olliewalsh are you going to submit a PR to the dataplane-operator for that?

Contributor

Included in the OVN dataplane PR openstack-k8s-operators/dataplane-operator#745

@SeanMooney
Contributor Author

I just pushed this before I finish for the week. I have it running locally, but the package installation time is very long for some reason, so I'm just going to leave it running and let CI run on it over the weekend.

Nova already connects to libvirt via a unix socket, so it does not
need the libvirt TLS certs for local communication. Nova does
not need to connect to remote libvirt either, so we do not need
to provide access to the libvirt client cert.
The 02-nova-host-specific.conf.j2 template has been updated to
enable libvirt native TLS for live migration when TLS is enabled.

Molecule test coverage is added for the above changes.

Closes: OSPRH-5053
@SeanMooney
Contributor Author

check-rdo
I added a Depends-On to openstack-k8s-operators/dataplane-operator#754 to enable TLS by default and openstack-k8s-operators/dataplane-operator#745 to pass the edpm_service_name.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/b1f048ca87f2454ba61556f8877128b3

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 22m 47s
podified-multinode-edpm-deployment-crc FAILURE in 55m 24s
cifmw-crc-podified-edpm-baremetal FAILURE in 2h 00m 41s
✔️ edpm-ansible-molecule-edpm_bootstrap SUCCESS in 5m 14s
✔️ edpm-ansible-molecule-edpm_podman SUCCESS in 6m 01s
✔️ edpm-ansible-molecule-edpm_module_load SUCCESS in 5m 00s
✔️ edpm-ansible-molecule-edpm_kernel SUCCESS in 11m 52s
✔️ edpm-ansible-molecule-edpm_libvirt SUCCESS in 9m 39s
✔️ edpm-ansible-molecule-edpm_nova SUCCESS in 9m 45s
✔️ edpm-ansible-molecule-edpm_frr SUCCESS in 7m 12s
✔️ edpm-ansible-molecule-edpm_iscsid SUCCESS in 5m 01s
✔️ edpm-ansible-molecule-edpm_ovn_bgp_agent SUCCESS in 7m 14s
✔️ edpm-ansible-molecule-edpm_ovs SUCCESS in 5m 07s

@SeanMooney
Contributor Author

check-rdo


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/c71c25960c9a4f93b4bee01ab22b2bde

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 23m 46s
podified-multinode-edpm-deployment-crc FAILURE in 1h 42m 30s
cifmw-crc-podified-edpm-baremetal FAILURE in 2h 00m 59s
✔️ edpm-ansible-molecule-edpm_bootstrap SUCCESS in 6m 08s
✔️ edpm-ansible-molecule-edpm_podman SUCCESS in 4m 46s
✔️ edpm-ansible-molecule-edpm_module_load SUCCESS in 5m 13s
✔️ edpm-ansible-molecule-edpm_kernel SUCCESS in 11m 55s
✔️ edpm-ansible-molecule-edpm_libvirt SUCCESS in 9m 43s
✔️ edpm-ansible-molecule-edpm_nova SUCCESS in 10m 11s
✔️ edpm-ansible-molecule-edpm_frr SUCCESS in 7m 08s
✔️ edpm-ansible-molecule-edpm_iscsid SUCCESS in 4m 51s
✔️ edpm-ansible-molecule-edpm_ovn_bgp_agent SUCCESS in 7m 50s
✔️ edpm-ansible-molecule-edpm_ovs SUCCESS in 5m 17s

@SeanMooney
Contributor Author

check-rdo
This should now pass, I believe, as podified-multinode-edpm-deployment-crc now passes in openstack-k8s-operators/dataplane-operator#754.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/d679e2684ecb4a6ba0470c71edd3b324

openstack-k8s-operators-content-provider FAILURE in 10m 19s
⚠️ podified-multinode-edpm-deployment-crc SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider
⚠️ cifmw-crc-podified-edpm-baremetal SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider
✔️ edpm-ansible-molecule-edpm_bootstrap SUCCESS in 7m 05s
✔️ edpm-ansible-molecule-edpm_podman SUCCESS in 6m 30s
✔️ edpm-ansible-molecule-edpm_module_load SUCCESS in 5m 20s
✔️ edpm-ansible-molecule-edpm_kernel SUCCESS in 12m 47s
✔️ edpm-ansible-molecule-edpm_libvirt SUCCESS in 10m 19s
✔️ edpm-ansible-molecule-edpm_nova SUCCESS in 10m 32s
✔️ edpm-ansible-molecule-edpm_frr SUCCESS in 7m 27s
✔️ edpm-ansible-molecule-edpm_iscsid SUCCESS in 5m 15s
✔️ edpm-ansible-molecule-edpm_ovn_bgp_agent SUCCESS in 8m 02s
✔️ edpm-ansible-molecule-edpm_ovs SUCCESS in 5m 42s

@olliewalsh (Contributor) left a comment

/lgtm

Contributor

openshift-ci bot commented Mar 14, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: olliewalsh, SeanMooney

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [SeanMooney,olliewalsh]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@SeanMooney
Contributor Author

check-rdo

@openshift-merge-bot openshift-merge-bot bot merged commit fcbb4f2 into openstack-k8s-operators:main Mar 14, 2024
32 checks passed