ADR-038 State Streaming Plugin System Updates #11175

Conversation

@egaxhaj (Contributor) commented Feb 11, 2022

For #10096

This PR introduces updates to ADR-038 for the plugin-based streaming services. These updates reflect the implementation approach taken in provenance-io#49.


Author Checklist

All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow-up issues.

I have...

  • included the correct type prefix in the PR title
  • added ! to the type prefix if API or client breaking change
  • targeted the correct branch (see PR Targeting)
  • provided a link to the relevant issue or specification
  • followed the guidelines for building modules
  • included the necessary unit and integration tests
  • added a changelog entry to CHANGELOG.md
  • included comments for documenting Go code
  • updated the relevant documentation or specification
  • reviewed "Files changed" and left comments if necessary
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed ! in the type prefix if API or client breaking change
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic
  • reviewed API design and naming
  • reviewed documentation is accurate
  • reviewed tests and test coverage
  • manually tested (if applicable)

@github-actions github-actions bot added the T: ADR An issue or PR relating to an architectural decision record label Feb 11, 2022
@amaury1093 (Contributor)

@egaxhaj-figure Can you rename the PR title?

@egaxhaj egaxhaj changed the title remove trailing whitespace ADR-038 State Streaming Plugin System Updates Feb 14, 2022
@i-norden (Contributor) left a comment

This new HaltAppOnDeliveryError approach greatly simplifies things vs the previous channel-based communication approach, but it does require that the listening methods (e.g. ListenBeginBlock) be synchronous with the app. Obviously this is not an issue (and indeed is the entire point) when HaltAppOnDeliveryError is true. But doesn't this mean that when HaltAppOnDeliveryError is false we still block and wait for an error to return (even though we'll ignore it)? So we would need a different implementation of the listening service to have asynchronous listening when HaltAppOnDeliveryError is false (an implementation where a nil err is immediately returned).

Whereas with the channel-based approach, whether or not we wait on the external service is managed by the SDK, and the same external service implementation can be used in either case.
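
For illustration, a minimal sketch of what such an asynchronous listener could look like: ListenBeginBlock hands the data to a background worker and immediately returns a nil error, so the app never blocks on delivery. All names here (asyncListener, workCh, beginBlockMsg) are hypothetical; only the method signatures follow the ones used elsewhere in this thread.

import (
	abci "github.com/tendermint/tendermint/abci/types"

	"github.com/cosmos/cosmos-sdk/types"
)

type beginBlockMsg struct {
	req abci.RequestBeginBlock
	res abci.ResponseBeginBlock
}

// asyncListener never blocks the app: it enqueues the event and returns nil,
// so any delivery errors surface only in the background worker's own logs.
type asyncListener struct {
	workCh chan beginBlockMsg // buffered; drained by a background goroutine
}

func (l *asyncListener) ListenBeginBlock(ctx types.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error {
	l.workCh <- beginBlockMsg{req, res} // picked up by the background worker
	return nil                          // never reports delivery errors to the app
}

func (l *asyncListener) HaltAppOnDeliveryError() bool { return false }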

@i-norden (Contributor) commented Feb 14, 2022

Needing a second implementation was a dramatization; we just need to add an additional configuration option for the external service that tells it whether or not to immediately return a nil error. Or rather, just use the existing halt_app_on_delivery_error to additionally configure the external service in this capacity.

@egaxhaj (Contributor, Author) commented Feb 17, 2022

> Needing a second implementation was a dramatization; we just need to add an additional configuration option for the external service that tells it whether or not to immediately return a nil error. Or rather, just use the existing halt_app_on_delivery_error to additionally configure the external service in this capacity.

We also need to handle listeners concurrently. Something like below?

// EndBlock implements the ABCI interface.
func (app *BaseApp) EndBlock(req abci.RequestEndBlock) (res abci.ResponseEndBlock) {

    ...

    // call the hooks with the EndBlock messages
    wg := new(sync.WaitGroup)
    var mtx sync.Mutex
    halt := false
    for _, streamingListener := range app.abciListeners {
        // increment the wait group counter
        wg.Add(1)
        streamingListener := streamingListener // https://go.dev/doc/faq#closures_and_goroutines
        go func() {
            // decrement the counter when the goroutine completes
            defer wg.Done()
            if err := streamingListener.ListenEndBlock(app.deliverState.ctx, req, res); err != nil {
                app.logger.Error("EndBlock listening hook failed", "height", req.Height, "err", err)
                if streamingListener.HaltAppOnDeliveryError() {
                    // guard the shared flag: several listeners may fail concurrently
                    mtx.Lock()
                    halt = true
                    mtx.Unlock()
                }
            }
        }()
    }

    // wait for all the listener calls to finish
    wg.Wait()

    if halt {
        app.halt()
    }

    ...

@i-norden (Contributor) commented Mar 1, 2022

@egaxhaj-figure oh yeah, good point. If HaltAppOnDeliveryError == true, do we want to wait for all the listeners to return before halting, or should we break immediately (or at least prevent spinning up any of the remaining ListenEndBlock goroutines)? I'm also wondering, when we have multiple listeners like this, whether we want anything else in the SDK for managing their synchronization in the event one/some fail but the others successfully stream the data. If we halt and then restart the node and replay the block wherein the streaming error occurred, we would send duplicate data to the listeners that hadn't failed. I believe this came up in our previous discussion, and the conclusion was that this can and should be sorted out by the external service (e.g. Postgres will have unique constraints that prevent duplication, file streaming will simply overwrite the old file, etc.), but perhaps we should make some mention of this in the docs.

Overall, if we add some language around using halt_app_on_delivery_error or some other param to configure the external service(s) to immediately return a nil error when we wish to operate asynchronously, then I think this PR is good-to-go.
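
As an aside, the Postgres-side deduplication mentioned above can be as simple as an idempotent insert. A hedged sketch using database/sql (the blocks table and its unique height column are hypothetical):

import (
	"database/sql"

	_ "github.com/lib/pq" // Postgres driver, registered for database/sql
)

// storeBlock is idempotent: a UNIQUE constraint on height turns replayed
// blocks (after a halt and restart) into no-ops instead of duplicate rows.
func storeBlock(db *sql.DB, height int64, payload []byte) error {
	_, err := db.Exec(
		`INSERT INTO blocks (height, payload) VALUES ($1, $2)
		 ON CONFLICT (height) DO NOTHING`,
		height, payload,
	)
	return err
}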

@egaxhaj (Contributor, Author) commented Mar 2, 2022

> @egaxhaj-figure oh yeah, good point. If HaltAppOnDeliveryError == true, do we want to wait for all the listeners to return before halting, or should we break immediately (or at least prevent spinning up any of the remaining ListenEndBlock goroutines)? I'm also wondering, when we have multiple listeners like this, whether we want anything else in the SDK for managing their synchronization in the event one/some fail but the others successfully stream the data. If we halt and then restart the node and replay the block wherein the streaming error occurred, we would send duplicate data to the listeners that hadn't failed. I believe this came up in our previous discussion, and the conclusion was that this can and should be sorted out by the external service (e.g. Postgres will have unique constraints that prevent duplication, file streaming will simply overwrite the old file, etc.), but perhaps we should make some mention of this in the docs.

I can't think of a reason to wait for other goroutines to finish when HaltAppOnDeliveryError == true.

I agree, we do need to mention in the docs that client applications need to account for duplicate data when blocks are replayed on a node restart.

> Overall, if we add some language around using halt_app_on_delivery_error or some other param to configure the external service(s) to immediately return a nil error when we wish to operate asynchronously, then I think this PR is good-to-go.

The only concern I have about immediately returning a nil error is that we're muting any errors that may occur. Implementors of ListenBeginBlock, ListenEndBlock and DeliverTx may forget to log errors. Also, the function signatures suggest that they need to be implemented in a synchronous manner, or am I wrong in thinking that?

Would the following cover our two cases?

  1. execute asynchronously and wait for listeners that have HaltAppOnDeliveryError == true; halt the app when one fails
  2. execute asynchronously and don't wait for listeners when HaltAppOnDeliveryError == false; don't halt the app when one fails
wg := new(sync.WaitGroup)
for _, streamingListener := range app.abciListeners {
	streamingListener := streamingListener // https://go.dev/doc/faq#closures_and_goroutines
	if streamingListener.HaltAppOnDeliveryError() {
		// increment the wait group counter
		wg.Add(1)
		go func() {
			// decrement the counter when the goroutine completes
			defer wg.Done()
			if err := streamingListener.ListenEndBlock(app.deliverState.ctx, req, res); err != nil {
				app.logger.Error("EndBlock listening hook failed", "height", req.Height, "err", err)
				app.halt()
			}
		}()
	} else {
		// fire-and-forget: don't wait for the listener, just log any error
		go func() {
			if err := streamingListener.ListenEndBlock(app.deliverState.ctx, req, res); err != nil {
				app.logger.Error("EndBlock listening hook failed", "height", req.Height, "err", err)
			}
		}()
	}
}

// wait for the synchronized listener calls to finish
wg.Wait()

@i-norden (Contributor) commented Mar 11, 2022

> The only concern I have about immediately returning a nil error is that we're muting any errors that may occur. Implementors of ListenBeginBlock, ListenEndBlock and DeliverTx may forget to log errors. Also, the function signatures suggest that they need to be implemented in a synchronous manner, or am I wrong in thinking that?

I see what you mean, although in my experience it is not unusual for an async function to return an err for errors that might occur during a synchronous initialization stage before spinning up the concurrent background process. For this reason, I think the original listener interface is better at prescribing things if we want support for both sync and async operation with the same ListenX methods. The ListenSuccess() <-chan bool method signals that async behavior can be expected. The err returned by the specific ListenX methods can be used for returning "initialization" errors; e.g. an async file-writing service could return an error here if it can't even open the file, but it wouldn't wait here to see if an error occurred on every write to the file once it is opened.
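
Roughly, a sketch of the channel-based shape described here (an illustration of this summary, not the verbatim ADR text):

import (
	abci "github.com/tendermint/tendermint/abci/types"

	"github.com/cosmos/cosmos-sdk/types"
)

type Listener interface {
	// ListenBeginBlock returns synchronous "initialization" errors only,
	// e.g. failing to open the output file; asynchronous delivery errors
	// are not reported here.
	ListenBeginBlock(ctx types.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error
	// ListenSuccess signals whether asynchronous delivery ultimately
	// succeeded; the SDK can select on it, optionally with a global wait
	// threshold, to decide whether to halt.
	ListenSuccess() <-chan bool
}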

> Would the following cover our two cases?
>
>   1. execute asynchronously and wait for listeners that have HaltAppOnDeliveryError == true; halt the app when one fails
>   2. execute asynchronously and don't wait for listeners when HaltAppOnDeliveryError == false; don't halt the app when one fails

LGTM! But this approach of executing asynchronously and then syncing back up (or not) with a WaitGroup is starting to look a lot like the original approach of using the ListenSuccess() channel to sync back up. It's approaching similar complexity, and it actually adds more LOC while removing the ability to define a global wait threshold. I forgot to address that last point in my previous comments, but I think being able to configure a global wait threshold from the perspective of the SDK is a pretty useful feature: it avoids blocking indefinitely if a synchronous listener hangs.

@egaxhaj (Contributor, Author) commented Mar 14, 2022

> > The only concern I have about immediately returning a nil error is that we're muting any errors that may occur. Implementors of ListenBeginBlock, ListenEndBlock and DeliverTx may forget to log errors. Also, the function signatures suggest that they need to be implemented in a synchronous manner, or am I wrong in thinking that?

> I see what you mean, although in my experience it is not unusual for an async function to return an err for errors that might occur during a synchronous initialization stage before spinning up the concurrent background process. For this reason, I think the original listener interface is better at prescribing things if we want support for both sync and async operation with the same ListenX methods. The ListenSuccess() <-chan bool method signals that async behavior can be expected. The err returned by the specific ListenX methods can be used for returning "initialization" errors; e.g. an async file-writing service could return an error here if it can't even open the file, but it wouldn't wait here to see if an error occurred on every write to the file once it is opened.

> > Would the following cover our two cases?
> >
> >   1. execute asynchronously and wait for listeners that have HaltAppOnDeliveryError == true; halt the app when one fails
> >   2. execute asynchronously and don't wait for listeners when HaltAppOnDeliveryError == false; don't halt the app when one fails

> LGTM! But this approach of executing asynchronously and then syncing back up (or not) with a WaitGroup is starting to look a lot like the original approach of using the ListenSuccess() channel to sync back up. It's approaching similar complexity, and it actually adds more LOC while removing the ability to define a global wait threshold. I forgot to address that last point in my previous comments, but I think being able to configure a global wait threshold from the perspective of the SDK is a pretty useful feature: it avoids blocking indefinitely if a synchronous listener hangs.

How about the following?

// BeginBlock implements the ABCI application interface.
func (app *BaseApp) BeginBlock(req abci.RequestBeginBlock) (res abci.ResponseBeginBlock) {
	...

	// call the hooks with the BeginBlock messages
	wg := new(sync.WaitGroup)
	for _, streamingListener := range app.abciListeners {
		streamingListener := streamingListener // https://go.dev/doc/faq#closures_and_goroutines
		if streamingListener.HaltAppOnDeliveryError() {
			// increment the wait group counter
			wg.Add(1)
			go app.listenBeginBlock(req, res, streamingListener, wg)
		} else {
			// fire-and-forget: don't wait for the listener, just log any error
			go func() {
				if err := streamingListener.ListenBeginBlock(app.deliverState.ctx, req, res); err != nil {
					app.logger.Error("BeginBlock listening hook failed", "height", req.Header.Height, "err", err)
				}
			}()
		}
	}
	// wait for the synchronized listener calls to finish
	wg.Wait()

	return res
}

// listenBeginBlock asynchronously processes BeginBlock state change events.
// The listener must complete its work before the global threshold is reached.
// Otherwise, all work will be abandoned and resources released.
func (app *BaseApp) listenBeginBlock(
	req abci.RequestBeginBlock,
	res abci.ResponseBeginBlock,
	streamingListener ABCIListener,
	wg *sync.WaitGroup,
) {
	defer wg.Done()

	// Set a timer so goroutines don't block indefinitely
	ctx, cancel := context.WithTimeout(context.Background(), app.globalWaitLimit*time.Second)
	defer cancel()

	// update the app context
	timeoutCtx := app.deliverState.ctx.WithContext(ctx)

	// buffered so the goroutine below can always send its result and exit,
	// even if the timeout fires first (avoids leaking the goroutine and a
	// data race on the error value)
	errCh := make(chan error, 1)

	go func() {
		errCh <- streamingListener.ListenBeginBlock(timeoutCtx, req, res)
	}()

	var listenErr error
	select {
	case listenErr = <-errCh:
	case <-ctx.Done():
		listenErr = ctx.Err()
	}

	if listenErr != nil {
		app.logger.Error("BeginBlock listening hook failed", "height", req.Header.Height, "err", listenErr)
		app.halt()
	}
}

@egaxhaj (Contributor, Author) commented Mar 29, 2022

@i-norden I'm updating the PR and removing the suggested WaitGroup timeout. Adding the timeout caused the non-determinism tests to fail with timeouts under 120s (I'm using a MacBook Pro with a 2.6 GHz 6-Core Intel Core i7 and 16 GB of RAM).

Adding a timeout in the commit cycle will cause more problems than the one we're trying to solve. For example:

  • a timeout may be triggered in the middle of a long-running upgrade
  • we are now arbitrarily putting a constraint on the API that was never there, inadvertently enforcing a commit-cycle processing window

I understand your concern about a goroutine possibly blocking indefinitely, but this is an extreme edge case that we should not cover for, given the two reasons above. Client APIs for databases like Postgres and streaming platforms like Kafka have built-in timeouts for when they fail to communicate with the server side. If the edge case does occur, then a node will simply fail to make progress. Node operators have monitoring tools at their disposal that alert them to scenarios like this.
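
For a concrete example of those built-in client-side timeouts: with confluent-kafka-go (the client used by the Kafka plugin), the producer's delivery attempts can be bounded in configuration. A sketch with illustrative values:

import "github.com/confluentinc/confluent-kafka-go/kafka"

// newProducer bounds delivery with message.timeout.ms, so an unreachable
// broker surfaces as a delivery error instead of blocking indefinitely.
func newProducer() (*kafka.Producer, error) {
	return kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers":  "localhost:9092",
		"message.timeout.ms": 5000, // give up on a message after 5s
	})
}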

Also, the community has been waiting for this feature for quite some time now and we are at a good point where we can get this out and wait for feedback.

@peterbourgon

> I understand your concern about a goroutine possibly blocking indefinitely, but this is an extreme edge case that we should not cover for, given the two reasons above. Client APIs for databases like Postgres and streaming platforms like Kafka have built-in timeouts for when they fail to communicate with the server side. If the edge case does occur, then a node will simply fail to make progress.

Does this mean that if a client subscribes to some events, and then doesn't actually receive any events from the connection, the node will halt?

@egaxhaj (Contributor, Author) commented Mar 30, 2022

> > I understand your concern about a goroutine possibly blocking indefinitely, but this is an extreme edge case that we should not cover for, given the two reasons above. Client APIs for databases like Postgres and streaming platforms like Kafka have built-in timeouts for when they fail to communicate with the server side. If the edge case does occur, then a node will simply fail to make progress.

> Does this mean that if a client subscribes to some events, and then doesn't actually receive any events from the connection, the node will halt?

There are two modes that subscribers can operate in:

  • fire-and-forget - the node will continue to make progress regardless of whether subscribers process events successfully.
  • synchronized - the node will halt when subscribers fail to process events. Synchronized mode guarantees that no events are missed: events sent to subscribers have all been captured and processed successfully.

Both modes are controlled by the halt_app_on_delivery_error = true|false config property.

@peterbourgon

@egaxhaj-figure

> There are two modes that subscribers can operate in . . .

And that configuration parameter is defined by the node, not the subscriber, correct? If so, 👍 (I will note again that events don't go through consensus and therefore are not verifiable or necessarily accurate, and cannot be treated as a source of truth by consumers.)

@egaxhaj (Contributor, Author) commented Mar 31, 2022

> @egaxhaj-figure
>
> > There are two modes that subscribers can operate in . . .
>
> And that configuration parameter is defined by the node, not the subscriber, correct?...

Yes. Here's an example of what that might look like.

# app.toml

. . .

###############################################################################
###                      Plugin system configuration                        ###
###############################################################################

[plugins]

# turn the plugin system, as a whole, on or off
on = true

# List of plugin names to enable from the plugin/plugins/*
enabled = ["kafka"]

# The directory to load non-preloaded plugins from; defaults to $GOPATH/src/github.com/cosmos/cosmos-sdk/plugin/plugins
dir = ""


###############################################################################
###                      File plugin configuration                          ###
###############################################################################

# the specific parameters for the file streaming service plugin
[plugins.streaming.file]

# List of store keys to expose to this streaming service.
# Leaving this blank will include all store keys.
keys = []

# Path to the write directory
write_dir = ""

# Optional prefix to prepend to the generated file names
prefix = ""

# Whether or not to halt the application when the plugin fails to deliver message(s).
halt_app_on_delivery_error = false

###############################################################################
###                       Trace Plugin configuration                        ###
###############################################################################

# The specific parameters for the trace streaming service plugin
[plugins.streaming.trace]

# List of store keys we want to expose for this streaming service.
keys = []

# In addition to block event info, print the data to stdout as well. 
print_data_to_stdout = false

# Whether or not to halt the application when the plugin fails to deliver message(s).
halt_app_on_delivery_error = true

###############################################################################
###                       Kafka Plugin configuration                        ###
###############################################################################

# The specific parameters for the kafka streaming service plugin
[plugins.streaming.kafka]

# List of store keys we want to expose for this streaming service.
keys = []

# Optional prefix for topic names where data will be stored.
topic_prefix = "block"

# Flush and wait for outstanding messages and requests to complete delivery. (milliseconds)
flush_timeout_ms = 5000

# Whether or not to halt the application when the plugin fails to deliver message(s).
halt_app_on_delivery_error = true

# Producer configuration properties.
# The plugin uses confluent-kafka-go which is a lightweight wrapper around librdkafka.
# For a full list of producer configuration properties
# see https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
[plugins.streaming.kafka.producer]

# Initial list of brokers as a comma-separated list of broker host or host:port[, host:port[,...]]
bootstrap_servers = "localhost:9092"

# Client identifier
client_id = "my-app-id"

# This field indicates the number of acknowledgements the leader
# broker must receive from ISR brokers before responding to the request
acks = "all"

# When set to true, the producer will ensure that messages
# are successfully produced exactly once and in the original produce order.
# The following configuration properties are adjusted automatically (if not modified by the user)
# when idempotence is enabled: max.in.flight.requests.per.connection=5 (must be less than or equal to 5),
# retries=INT32_MAX (must be greater than 0), acks=all, queuing.strategy=fifo.
# Producer instantiation will fail if user-supplied configuration is incompatible.
enable_idempotence = true
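
For context, here is a hedged sketch of how a plugin might read one of these settings through the SDK's AppOptions; the key path mirrors the example file above, and the exact wiring may differ in the implementation PR:

import (
	"github.com/spf13/cast"

	servertypes "github.com/cosmos/cosmos-sdk/server/types"
)

// haltAppOnDeliveryError reads the plugin's halt flag from app.toml via the
// server-side options interface.
func haltAppOnDeliveryError(appOpts servertypes.AppOptions) bool {
	return cast.ToBool(appOpts.Get("plugins.streaming.kafka.halt_app_on_delivery_error"))
}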

@peterbourgon commented Apr 2, 2022

> # When set to true, the producer will ensure that messages
> # are successfully produced exactly once and in the original produce order.
> # The following configuration properties are adjusted automatically (if not modified by the user)
> # when idempotence is enabled: max.in.flight.requests.per.connection=5 (must be less than or equal to 5),
> # retries=INT32_MAX (must be greater than 0), acks=all, queuing.strategy=fifo.
> # Producer instantiation will fail if user-supplied configuration is incompatible.
> enable_idempotence = true

Can you point me to the code that ensures these invariants?

Also, what informed the specific values for each of those configuration settings?

@egaxhaj (Contributor, Author) commented Apr 3, 2022

> > # When set to true, the producer will ensure that messages
> > # are successfully produced exactly once and in the original produce order.
> > # The following configuration properties are adjusted automatically (if not modified by the user)
> > # when idempotence is enabled: max.in.flight.requests.per.connection=5 (must be less than or equal to 5),
> > # retries=INT32_MAX (must be greater than 0), acks=all, queuing.strategy=fifo.
> > # Producer instantiation will fail if user-supplied configuration is incompatible.
> > enable_idempotence = true
>
> Can you point me to the code that ensures these invariants?
>
> Also, what informed the specific values for each of those configuration settings?

You can read more about idempotence in the Kafka docs. The Kafka plugin implementation uses the confluent-kafka-go client, which is a wrapper around librdkafka. Check out librdkafka's docs on the Idempotent Producer.

If you have questions beyond the docs, you can reach out to members of the Kafka core team on the Slack #clients channel.

@peterbourgon

Oh, Kafka is a requirement for this feature? That's surprising; Kafka is an enormous operational burden. But, okay!

@iramiller (Contributor)

> Oh, Kafka is a requirement for this feature? That's surprising; Kafka is an enormous operational burden. But, okay!

The enable_idempotence setting is within the Kafka plugin configuration section, so I don't see how the Kafka dependency is surprising.

@peterbourgon

> > Oh, Kafka is a requirement for this feature? That's surprising; Kafka is an enormous operational burden. But, okay!
>
> The enable_idempotence setting is within the Kafka plugin configuration section, so I don't see how the Kafka dependency is surprising.

You're right, I missed that in the quoted config. Mea culpa. I am curious what "delivery error" means specifically for each of the various plugins.

@egaxhaj (Contributor, Author) commented Apr 4, 2022

> > > Oh, Kafka is a requirement for this feature? That's surprising; Kafka is an enormous operational burden. But, okay!
> >
> > The enable_idempotence setting is within the Kafka plugin configuration section, so I don't see how the Kafka dependency is surprising.
>
> You're right, I missed that in the quoted config. Mea culpa. I am curious what "delivery error" means specifically for each of the various plugins.

halt_app_on_delivery_error has been thoroughly covered in previous comments; check there to understand its use. You can take a look at ABCIListener as well.
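
For reference, the ABCIListener shape implied by this thread looks roughly like the following sketch; see the linked implementation PR for the authoritative definition:

import (
	abci "github.com/tendermint/tendermint/abci/types"

	"github.com/cosmos/cosmos-sdk/types"
)

type ABCIListener interface {
	// ListenBeginBlock updates the streaming service with the latest BeginBlock messages.
	ListenBeginBlock(ctx types.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error
	// ListenEndBlock updates the streaming service with the latest EndBlock messages.
	ListenEndBlock(ctx types.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) error
	// ListenDeliverTx updates the streaming service with the latest DeliverTx messages.
	ListenDeliverTx(ctx types.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) error
	// HaltAppOnDeliveryError determines whether the app should halt when
	// this listener fails to deliver state changes.
	HaltAppOnDeliveryError() bool
}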

@egaxhaj egaxhaj mentioned this pull request Apr 19, 2022
@egaxhaj (Contributor, Author) commented Apr 19, 2022

@i-norden @marbar3778 @robert-zaremba Could you please review the latest ADR changes and approve? Check out this PR for its implementation.

I'd love to see this make it into the 0.46 release. We have teams that will benefit hugely from state listening.

In addition, please note that my time to support this feature going forward will be very limited, as resources within our team are being moved to support other efforts.

@i-norden (Contributor) left a comment

LGTM! The primary remaining point of debate is whether or not to include the Kafka plugin code here or in another repo, as the core SDK team is concerned about providing ongoing support for that pkg. I think that discussion can be brought over to your implementation PR.

@i-norden i-norden mentioned this pull request Apr 23, 2022
@iramiller (Contributor) commented Apr 23, 2022

> whether or not to include the kafka plugin code here or in another repo

This is something we would be happy to host separately and maintain on the Provenance side.
Having a solid way to include optional components in the build, such as this, feels like a very important part of demonstrating the overall design of the plug-in architecture.

@github-actions (bot) commented Jun 8, 2022

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale label Jun 8, 2022
@tac0turtle (Member)

does this only need to be re-reviewed?

@egaxhaj (Contributor, Author) commented Jun 8, 2022

> does this only need to be re-reviewed?

Yes.

@github-actions github-actions bot removed the stale label Jun 9, 2022
@tac0turtle (Member) commented Jun 9, 2022

Reading at a high level through the code and comments, I'm not sure how Kafka comes into this story. Kafka here is a user-specific thing, not necessarily an SDK thing. We decided not to take a dependency on Kafka; does this need to be integrated into this ADR?

Got caught up now and talked with Ian. He will review, and we should be ready to merge this soon.

I think the scope of ADR-038 needs to be evaluated; it seems like the spec is still in flux and it's hard to grasp what is going on, at least for me. @alexanderbez do you have thoughts on this?

@tac0turtle (Member)

> Having a solid way to include optional components in the build, such as this, feels like a very important part of demonstrating the overall design of the plug-in architecture.

Extending app.toml is already possible, so there should be a way to get the app.toml example shown above.

@i-norden (Contributor) left a comment

LGTM!

A Member left a review comment on this diff hunk:

for _, listener := range app.abciListeners {
listener.ListenDeliverTx(app.deliverState.ctx, req, res)
wg := new(sync.WaitGroup)
for _, streamingListener := range app.abciListeners {

are there any concerns for performance with this approach?

@tac0turtle tac0turtle mentioned this pull request Jul 19, 2022
@tac0turtle (Member)

closing in favour of #12629 to get this merged

@tac0turtle tac0turtle closed this Jul 19, 2022
@tac0turtle tac0turtle self-assigned this Jul 21, 2022
@egaxhaj egaxhaj deleted the egaxhaj-figure/adr-038-plugin-proposal-update branch July 29, 2022 20:55