
preview DNS changes in reconfigurator-cli #5338

Merged Mar 28, 2024 (18 commits)

Conversation

@davepacheco (Collaborator) commented Mar 27, 2024

The goal of this PR is to be able to preview the DNS changes that Reconfigurator will make to a real production system. This PR adds:

In omdb:

  • omdb db reconfigurator-save now saves details about internal and external DNS versions, plus silo names and external DNS zone names

In reconfigurator-cli:

  • blueprint-diff BLUEPRINT1 BLUEPRINT2 now shows the internal and external DNS changes between two blueprints
  • blueprint-diff-dns internal|external VERSION BLUEPRINT: view the internal or external DNS changes between a specific internal/external DNS version and a blueprint

Implementation changes as part of this:

  • DnsDiff and most of the Reconfigurator DNS execution code now works in terms of DNS zones (DnsConfigZone) rather than complete configs (DnsConfigParams). The difference is that a whole config contains any number of zones plus a generation number. (Note that we were already assuming there was exactly one DNS zone each for internal and external DNS (in DnsDiff::new) -- this doesn't change that.)

    The whole config we were generating before was not suitable for diff'ing because it always bumped the generation number even if the zone contents hadn't changed (at that point it didn't know whether they had). This didn't matter before because we'd later check whether anything had changed and throw away the config altogether if nothing had. Now that we're diff'ing these values, the automatic generation bump showed up as a spurious delta.

    I think this approach of operating on zones is much cleaner: these helpers now only return what they're really able to reliably compute (the zone contents), and the generation number becomes a concern of the code that's deciding what to do with it (which is also the code that knows what it should be).

I tested this against dogfood by:

  • copying the new omdb to dogfood and running omdb db reconfigurator-save to save the relevant state
  • copying the state back to my dev system
  • running reconfigurator-cli:

(NOTE: the format here has changed slightly in subsequent commits.)

First, load the dogfood state:

〉load dogfood-reconfigurator.out 04829cbc-096a-45f7-b883-60e057baa365
using collection 04829cbc-096a-45f7-b883-60e057baa365 as source of sled inventory data
sled 0c7011f7-a4bf-4daf-90cc-1c2410103300 loaded
sled 2707b587-9c7f-4fb0-a7af-37c3b7a9a0fa loaded
sled 5f6720b8-8a31-45f8-8c94-8e699218f28b loaded
sled 71def415-55ad-46b4-ba88-3ca55d7fb287 loaded
sled 7b862eb6-7f50-4c2f-b9a6-0d12ac913d3c loaded
sled 87c2c4fc-b0c7-4fef-a305-78f0ed265bbc loaded
sled a2adea92-b56e-44fc-8a0d-7d63b5fd3b93 loaded
sled b886b58a-1e3f-4be1-b9f2-0c2e66c6bc88 loaded
sled db183874-65b5-4263-a1c1-ddb2737ae0e9 loaded
sled dd83e75a-1edf-4aa1-89a0-cd8b2091a7cd loaded
sled f15774c1-b8e5-434f-a493-ec43f96cba06 loaded
collection 3def9c90-d8e9-41e4-94f6-b48ae80d0d47 loaded
collection d3faf644-c7f1-47a6-9629-499f50808c7a loaded
collection 8c94fdd3-08c8-4cab-b5f2-ea81368abc56 loaded
collection 04829cbc-096a-45f7-b883-60e057baa365 loaded
blueprint 95c3f06b-4dbf-4614-ae7c-507c1193bde9 loaded
configured external DNS zone name: rack2.eng.oxide.computer
configured silo names: default-silo, oxide-local2, oxide, oxide-local, test, recovery, silo12, silo2, silo11, now-with-quotas, silo1
internal DNS generations: 1
external DNS generations: 25
loaded data from "dogfood-reconfigurator.out"

〉

We see that dogfood has internal DNS generation 1 and external generation 25. It also has one blueprint. Here's what would change about internal DNS if we executed this blueprint (elided a bunch of irrelevant stuff with ...):

〉blueprint-diff-dns internal 1 95c3f06b-4dbf-4614-ae7c-507c1193bde9
~ DNS zone: "control-plane.oxide.internal": 
    name: 0022703b-dcfc-44d4-897a-b42f6f53b433.host          (records: 1)
        AAAA fd00:1122:3344:106::c
...
    name: _internal-ntp._tcp                                 (records: 9)
        SRV  port   123 209b6213-588b-43b6-a89b-19ee5c84ffba.host.control-plane.oxide.internal
        SRV  port   123 3ccea933-89f2-4ce5-8367-efb0afeffe97.host.control-plane.oxide.internal
        SRV  port   123 71ab91b7-48d4-4d31-b47e-59f29f419116.host.control-plane.oxide.internal
        SRV  port   123 7529be1c-ca8b-441a-89aa-37166cc450df.host.control-plane.oxide.internal
        SRV  port   123 76b79b96-eaa2-4341-9aba-e77cfc92e0a9.host.control-plane.oxide.internal
        SRV  port   123 7a85d50e-b524-41c1-a052-118027eb77db.host.control-plane.oxide.internal
        SRV  port   123 82500cc9-f33d-4d59-9e6e-d70ea6133077.host.control-plane.oxide.internal
        SRV  port   123 83257100-5590-484a-b72a-a079389d8da6.host.control-plane.oxide.internal
        SRV  port   123 d34c7184-5d4e-4cb5-8f91-df74a343ffbc.host.control-plane.oxide.internal
  + name: _mgd._tcp                                          (records: 2)
  +     SRV  port  4676 dendrite-71def415-55ad-46b4-ba88-3ca55d7fb287.host.control-plane.oxide.internal
  +     SRV  port  4676 dendrite-87c2c4fc-b0c7-4fef-a305-78f0ed265bbc.host.control-plane.oxide.internal
    name: _mgs._tcp                                          (records: 2)
        SRV  port 12225 dendrite-71def415-55ad-46b4-ba88-3ca55d7fb287.host.control-plane.oxide.internal
        SRV  port 12225 dendrite-87c2c4fc-b0c7-4fef-a305-78f0ed265bbc.host.control-plane.oxide.internal
...

Reconfigurator wants to add records for mgd. I think this is because we added mgd DNS entries to Omicron some time after Dogfood was set up, so it never had them, but current systems do. I looked around and found no references to _mgd or ServiceName::Mgd so I think this change is at worst harmless.

What about external DNS?

〉blueprint-diff-dns external 25 95c3f06b-4dbf-4614-ae7c-507c1193bde9
  DNS zone: "rack2.eng.oxide.computer" (unchanged)
    name: now-with-quotas.sys                                (records: 3)
        A    172.20.26.3
        A    172.20.26.5
        A    172.20.26.4
    name: oxide-local.sys                                    (records: 3)
        A    172.20.26.3
        A    172.20.26.5
        A    172.20.26.4
...

No changes. Great!

Let's create a new blueprint and see how DNS changes between the two of them:

〉blueprint-plan 95c3f06b-4dbf-4614-ae7c-507c1193bde9 04829cbc-096a-45f7-b883-60e057baa365
Mar 27 22:05:33.758 INFO sufficient Nexus zones exist in plan, current_count: 3, desired_count: 3
generated blueprint 99f5bf65-c987-4f6f-9e17-a0445b17bed2 based on parent blueprint 95c3f06b-4dbf-4614-ae7c-507c1193bde9

〉blueprint-list
ID                                   
95c3f06b-4dbf-4614-ae7c-507c1193bde9 
99f5bf65-c987-4f6f-9e17-a0445b17bed2 

〉blueprint-diff 95c3f06b-4dbf-4614-ae7c-507c1193bde9 99f5bf65-c987-4f6f-9e17-a0445b17bed2
diff blueprint 95c3f06b-4dbf-4614-ae7c-507c1193bde9 blueprint 99f5bf65-c987-4f6f-9e17-a0445b17bed2
--- blueprint 95c3f06b-4dbf-4614-ae7c-507c1193bde9
+++ blueprint 99f5bf65-c987-4f6f-9e17-a0445b17bed2
  sled 0c7011f7-a4bf-4daf-90cc-1c2410103300
      zone config generation 2
          167cf6a2-ec51-4de2-bc6c-7785bbc0e436 in service crucible [underlay IP fd00:1122:3344:104::c] (unchanged)
          20b100d0-84c3-4119-aa9b-0c632b0b6a3a in service nexus [underlay IP fd00:1122:3344:104::3] (unchanged)
          23e1cf01-70ab-422f-997b-6216158965c3 in service crucible [underlay IP fd00:1122:3344:104::8] (unchanged)
          50209816-89fb-48ed-9595-16899d114844 in service crucible [underlay IP fd00:1122:3344:104::6] (unchanged)
...
internal DNS:
  DNS zone: "control-plane.oxide.internal" (unchanged)
    name: 0022703b-dcfc-44d4-897a-b42f6f53b433.host          (records: 1)
        AAAA fd00:1122:3344:106::c
    name: 01f93020-7e7d-4185-93fb-6ca234056c82.host          (records: 1)
        AAAA fd00:1122:3344:103::5
...
    name: fcdda266-fc6a-4518-89db-aec007a4b682.host          (records: 1)
        AAAA fd00:1122:3344:104::b
    name: fffddf56-10ca-4b62-9be3-5b3764a5f682.host          (records: 1)
        AAAA fd00:1122:3344:106::d

external DNS:
  DNS zone: "rack2.eng.oxide.computer" (unchanged)
    name: now-with-quotas.sys                                (records: 3)
        A    172.20.26.4
        A    172.20.26.3
        A    172.20.26.5
    name: oxide-local.sys                                    (records: 3)
        A    172.20.26.4
        A    172.20.26.3
        A    172.20.26.5
...

Great -- no changes. To make this more interesting, I also added a knob for tuning the target number of Nexus nodes. Let's bump that up and generate a new plan and see what changes:

〉show
configured external DNS zone name: rack2.eng.oxide.computer
configured silo names: default-silo, oxide-local2, oxide, oxide-local, test, recovery, silo12, silo2, silo11, now-with-quotas, silo1
internal DNS generations: 1
external DNS generations: 25
target number of Nexus instances: default

〉set num-nexus 4
None -> 4

〉blueprint-plan 99f5bf65-c987-4f6f-9e17-a0445b17bed2 04829cbc-096a-45f7-b883-60e057baa365
Mar 27 22:07:42.226 INFO will add 1 Nexus zone(s) to sled, sled_id: f15774c1-b8e5-434f-a493-ec43f96cba06
generated blueprint 64705664-047d-4e9e-8b2c-3ac7a8fb233f based on parent blueprint 99f5bf65-c987-4f6f-9e17-a0445b17bed2

〉blueprint-list
ID                                   
95c3f06b-4dbf-4614-ae7c-507c1193bde9 
99f5bf65-c987-4f6f-9e17-a0445b17bed2 
64705664-047d-4e9e-8b2c-3ac7a8fb233f 

〉blueprint-diff 99f5bf65-c987-4f6f-9e17-a0445b17bed2 64705664-047d-4e9e-8b2c-3ac7a8fb233f
diff blueprint 99f5bf65-c987-4f6f-9e17-a0445b17bed2 blueprint 64705664-047d-4e9e-8b2c-3ac7a8fb233f
--- blueprint 99f5bf65-c987-4f6f-9e17-a0445b17bed2
+++ blueprint 64705664-047d-4e9e-8b2c-3ac7a8fb233f
...
  sled f15774c1-b8e5-434f-a493-ec43f96cba06
-     zone config generation 2
+     zone config generation 3
          23dca27d-c79b-4930-a817-392e8aeaa4c1 in service crucible [underlay IP fd00:1122:3344:105::e] (unchanged)
          375296e5-0a23-466c-b605-4204080f8103 in service crucible_pantry [underlay IP fd00:1122:3344:105::4] (unchanged)
          3d420dff-c616-4c7d-bab1-0f9c2b5396bf in service crucible [underlay IP fd00:1122:3344:105::a] (unchanged)
          4c3ef132-ec83-4b1b-9574-7c7d3035f9e9 in service cockroach_db [underlay IP fd00:1122:3344:105::3] (unchanged)
          76b79b96-eaa2-4341-9aba-e77cfc92e0a9 in service internal_ntp [underlay IP fd00:1122:3344:105::f] (unchanged)
          912346a2-d7e6-427e-b373-e8dcbe4fcea9 in service crucible [underlay IP fd00:1122:3344:105::5] (unchanged)
          92d3e4e9-0768-4772-83c1-23cce52190e9 in service crucible [underlay IP fd00:1122:3344:105::6] (unchanged)
          9470ea7d-1920-4b4b-8fca-e7659a1ef733 in service crucible [underlay IP fd00:1122:3344:105::c] (unchanged)
          9c5d88c9-8ff1-4f23-9438-7b81322eaf68 in service crucible [underlay IP fd00:1122:3344:105::b] (unchanged)
          b3e9fee2-24d2-44e7-8539-a6918e85cf2b in service crucible [underlay IP fd00:1122:3344:105::d] (unchanged)
          ce8563f3-4a93-45ff-b727-cbfbee6aa413 in service crucible [underlay IP fd00:1122:3344:105::9] (unchanged)
          f9940969-b0e8-4e8c-86c7-4bc49cd15a5f in service crucible [underlay IP fd00:1122:3344:105::7] (unchanged)
          f9c1deca-1898-429e-8c93-254c7aa7bae6 in service crucible [underlay IP fd00:1122:3344:105::8] (unchanged)
+         1a02afda-0113-43f4-a4d7-c62b224207b6 in service nexus [underlay IP fd00:1122:3344:105::21] (added)

internal DNS:
~ DNS zone: "control-plane.oxide.internal": 
...
    name: 1876cdcf-b2e7-4b79-ad2e-67df716e1860.host          (records: 1)
        AAAA fd00:1122:3344:10a::8
  + name: 1a02afda-0113-43f4-a4d7-c62b224207b6.host          (records: 1)
  +     AAAA fd00:1122:3344:105::21
    name: 1a77bd1d-4fd4-4d6c-a105-17f942d94ba6.host          (records: 1)
        AAAA fd00:1122:3344:107::c
...
    name: _nameservice._tcp                                  (records: 3)
        SRV  port  5353 3a1ea15f-06a4-4afd-959a-c3a00b2bdd80.host.control-plane.oxide.internal
        SRV  port  5353 46ccd8fe-4e3c-4307-97ae-1f7ac505082a.host.control-plane.oxide.internal
        SRV  port  5353 51c9ad09-7814-4643-8ad4-689ccbe53fbd.host.control-plane.oxide.internal
  ~ name: _nexus._tcp                                        (records: 3 -> 4)
  -     SRV  port 12221 20b100d0-84c3-4119-aa9b-0c632b0b6a3a.host.control-plane.oxide.internal
  -     SRV  port 12221 2898657e-4141-4c05-851b-147bffc6bbbd.host.control-plane.oxide.internal
  -     SRV  port 12221 65a11c18-7f59-41ac-b9e7-680627f996e7.host.control-plane.oxide.internal
  +     SRV  port 12221 1a02afda-0113-43f4-a4d7-c62b224207b6.host.control-plane.oxide.internal
  +     SRV  port 12221 20b100d0-84c3-4119-aa9b-0c632b0b6a3a.host.control-plane.oxide.internal
  +     SRV  port 12221 2898657e-4141-4c05-851b-147bffc6bbbd.host.control-plane.oxide.internal
  +     SRV  port 12221 65a11c18-7f59-41ac-b9e7-680627f996e7.host.control-plane.oxide.internal
    name: _oximeter._tcp                                     (records: 1)
        SRV  port 12223 da510a57-3af1-4d2b-b2ed-2e8849f27d8b.host.control-plane.oxide.internal
...

external DNS:
~ DNS zone: "rack2.eng.oxide.computer": 
  ~ name: now-with-quotas.sys                                (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: oxide-local.sys                                    (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: oxide-local2.sys                                   (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: oxide.sys                                          (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: recovery.sys                                       (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: silo1.sys                                          (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: silo11.sys                                         (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: silo12.sys                                         (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: silo2.sys                                          (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2
  ~ name: test.sys                                           (records: 3 -> 4)
  -     A    172.20.26.4
  -     A    172.20.26.3
  -     A    172.20.26.5
  +     A    172.20.26.4
  +     A    172.20.26.3
  +     A    172.20.26.5
  +     A    192.0.2.2

Almost all of that looks right:

  • We have one new zone on one sled (and no other zones on any other sleds changed)
  • We have an AAAA record for that zone
  • We have a new SRV record for Nexus pointing to that AAAA record
  • The external DNS names for every Silo changed: they all have one new record, which is the new external IP

The only thing that's a little weird is the choice of external IP. That's because the services IP pool range is not configurable in reconfigurator-cli, so it uses a different range than dogfood's and picks 192.0.2.2 instead of an address from dogfood's actual external IP range.

Remaining steps at this point are:

- have reconfigurator-cli compute and store DNS generations with each
  blueprint that gets generated.  I could do this now but it will
  probably conflict a little with my other DNS PR.
- add DNS diff'ing to blueprint diff'ing.  This is probably blocked on
  the above since otherwise I don't have any sample data to test with.
- update all of this to support external DNS too.  This is definitely
  blocked on my other DNS PR.
@davepacheco davepacheco requested a review from jgallagher March 27, 2024 22:13
@jgallagher (Contributor) left a comment

LGTM - only tiny nitpicks and a couple questions

.map(|dns_zone| dns_zone.zone_name)
.collect();
let state = UnstableReconfiguratorState {
policy: policy,
@jgallagher:

Suggested change
policy: policy,
policy,

DnsConfigParams {
generation: u64::from(self.generation),
generation: u64::from(Generation::new()),
@jgallagher:

Asking this without more context from elsewhere in the PR, which might make this question moot: should this function take a generation number as input instead of always returning params with generation=1?

@davepacheco (author):

Yeah, fair question. The history as I recall it is:

  • This struct was originally only used to build the initial DNS configuration so the generation was always 1. I believe it's used in this way by both RSS and the automated test runner setup stuff.
  • A few weeks ago I wanted to use this struct in nexus-reconfigurator-execution to build a new DNS configuration, too. There, I needed to produce a different generation, so I added a function to specify the generation and it would use that.
  • Now for the reasons mentioned in the description I'm changing the caller in nexus-reconfigurator-execution so that it only gets the zone out of this object. So I'm removing the (now unused) method to change the generation and hardcoding it back to 1, since the only callers that get a DnsConfigParams here want generation 1. We could always add the other thing back but at this point it does not appear needed.

I think of it as: this thing used to build a DnsConfigParams, but now really generates a DnsConfigZone, with a convenience method for assembling that into a DnsConfigParams with generation 1 for the callers that want that. I thought about renaming it to build_full_config_for_initial_generation() or something to better distinguish it from build_zone() but I didn't bother -- wasn't sure the extra mouthful was really clearer. But since you had this question, I went ahead and did this. (I don't think it makes sense to accept the generation here given that all existing callers provide generation 1. It's not at all hard to build your own DnsConfigParams if you want a different one and the whole point of this convenience function is to commonize code among the 5 or so callers so if they all have to decide which generation to put here it defeats part of the point.)

@jgallagher:

Perfect, thanks. I do think the longer name is worth the clarity.

);
let records = &external_dns_config.zones[0].records;
assert_eq!(external_dns_zone.zone_name, String::from("oxide.test"),);
@jgallagher:

Suggested change
assert_eq!(external_dns_zone.zone_name, String::from("oxide.test"),);
assert_eq!(external_dns_zone.zone_name, String::from("oxide.test"));


fn iter_names(&self) -> impl Iterator<Item = NameDiff<'_>> {
let all_names: BTreeSet<_> =
self.left.keys().chain(self.right.keys()).collect();
@jgallagher:

Feel free to ignore this as premature optimization, but - should we cache all_names? If a caller calls multiple methods that all internally call iter_names(), we'll recreate this set every time.

.sled_agents
.iter()
.map(|(sled_id, sled_agent_info)| {
let sled = nexus_reconfigurator_execution::Sled::new(
@jgallagher:

I was going to ask if this should consider the sled policy (e.g., to skip expunged sleds), but I don't think that's a property of collections, right? Any sled_agent present in a collection was not expunged at the time, by definition, right?

@davepacheco (author):

This is a really good question. There's definitely something a little fishy about this. tl;dr: I think it doesn't affect what we're currently using this tool for, and fixing it is somewhat hard, so I think it's beyond the scope of this PR.

Background: we're constructing this list of sleds solely to be able to figure out what internal DNS records reconfigurator would create for a given blueprint. Reconfigurator only needs this list of sleds to figure out which sleds are scrimlets so that it can generate the switch zone services' DNS records. Eventually it should generate sled agent DNS records too but we don't seem to do that today.

This code is being used in two contexts:

  • when diff'ing two blueprints, to figure out how DNS would change between them
  • when diff'ing a blueprint against an actual DNS config, to see how executing that blueprint would change DNS

But as you're observing, the actual DNS contents for a blueprint depends on the policy, not just the blueprint. It's maybe misleading that reconfigurator-cli has an operation to diff blueprints without saying anything about the policy. It should probably let you keep track of multiple policies and make you specify which policy you want. You would need this if you wanted to use these tools to see how DNS would change if you just expunged a sled.

I do think we should think about this longer term. For now I think the summary is:

  • there is one state of the world that reconfigurator-cli knows about
  • it's populated either with "sled-add" or "load" commands
  • it is implicitly used wherever we need this information

This isn't the first place we implicitly use the policy like this. We use it when generating an inventory and when generating a blueprint from the inventory, too. But I think this is the first place where the user is referencing two points in time and might reasonably want two different policies.

@davepacheco davepacheco enabled auto-merge (squash) March 28, 2024 18:44
@davepacheco davepacheco merged commit 4ca89ca into main Mar 28, 2024
21 checks passed
@davepacheco davepacheco deleted the dap/dns-preview-rebased branch March 28, 2024 19:44