
Add consent decision-making process documentation #887

Merged: 14 commits, Apr 4, 2023

Conversation

@sarayourfriend (Collaborator) commented Mar 10, 2023

Decision-making process

For the purposes of this discussion, we will follow Openverse's decision-making model. This process follows formalised steps with specific expectations of participants. Before contributing, please read the document added in this PR as well as Openverse's Code of Conduct.

The consent decision-making document linked above also includes instructions for opting out of a decision discussion you do not wish to or cannot participate in.

Current round

The discussion is currently in the Decision round.

Note

I modified the above text to remove the link to "this document" that doesn't exist yet (because we haven't merged this yet). For now it just refers to the proposal text in the PR.

Fixes

Fixes #874 by @sarayourfriend

Description

This PR introduces the public-facing document describing the Openverse consent decision-making process.

Testing Instructions

Read the new document. Ensure that the revisions agreed upon in previous discussions about this process are adequately captured. Likewise, ensure that the document sufficiently covers the basic requirements of the process with an eye for accessibility.

If reviewers have any suggestions for ways to improve the accessibility of the document for a wider audience, please let me know.

Checklist

  • My pull request has a descriptive title (not a vague title like
    Update index.md).
  • My pull request targets the default branch of the repository (main) or
    a parent feature branch.
  • My commit messages follow best practices.
  • My code follows the established code style of the repository.
  • [N/A] I added or updated tests for the changes I made (if applicable).
  • I added or updated documentation (if applicable).
  • [N/A] I tried running the project locally and verified that there are no visible
    errors.

Developer Certificate of Origin

Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

@sarayourfriend added labels on Mar 10, 2023: 🟨 priority: medium (Not blocking but should be addressed soon); 📄 aspect: text (Concerns the textual material in the repository); 🧰 goal: internal improvement (Improvement that benefits maintainers, not users); 🧱 stack: mgmt (Related to repo management and automations)
@github-actions bot commented Mar 10, 2023

Full-stack documentation: https://docs.openverse.org/_preview/887

Please note that GitHub Pages takes a little time to deploy newly pushed code. If the links above don't work or you see old versions, wait 5 minutes and try again.

You can check the GitHub pages deployment action list to see the current status of the deployments.

This dashboard also serves as a record of decisions made on the team since the
dashboard's inception. Historical decisions that happened before the dashboard
started being used are not documented there.

@obulat (Contributor) commented Mar 10, 2023

I've added a couple of views to track everyone's load:
[Screenshot 2023-03-10 at 3:47:54 PM]

I feel that it would be useful, and hope they do not make the board too crowded. What do you think, @sarayourfriend ?
I wish there was a way to group the proposals by reviewer, but because it's a field that can have several values, it's not possible. We can only group by the proposal author (the last tab).
Edit: Oh, you can actually filter by a reviewer name. I added one more view: "Are they free?" (naming suggestions welcome!): you can change the username in the filter and check if the person is involved in too many discussions:
[Screenshot 2023-03-10 at 3:56:32 PM]

@obulat (Contributor)

Oh, and I added my open proposal :)

@obulat (Contributor) left a comment

Thank you for preparing this proposal and writing down all the steps, @sarayourfriend! I will approve the PR as soon as it is undrafted.

@sarayourfriend force-pushed the add/consent-decision-making-doc branch from 2f013a0 to 1a1da7b on March 27, 2023 at 04:23
@sarayourfriend (Collaborator, Author)

I've made the changes Staci and I discussed to the round names and descriptions. I've also made small editorial changes where I thought wording could be further clarified—however, I'll admit that I got tired of reading the document again: I'm sure there is ample opportunity for further simplification and clarification without loss of meaning.

I've added the diagram that Krystle requested be added and made changes to the process/round call-out text as requested by Olga.

This discussion is now in the Decision round. If no paramount objections are raised, this proposal will be marked as accepted on 29 March.

@zackkrida (Member) left a comment

I have some objections to the decision and approval rounds:

  • Assuming we are typically using pull requests in GitHub for these decision making proposals, our repository requires two approving reviews in order to merge a PR. This process, instead of requiring two approvals, requires contributors to explicitly raise paramount objections. This process is incompatible with the constraints of the user interface and the workflow our team is accustomed to.
  • Relatedly, it seems that proposals are essentially auto-approved, barring any objections. How do we know if someone truly has no objections, or if they were busy/AFK/missed the deadline for a proposal? Given this proposal already decreases our existing ten-business-day open feedback period to fewer days by default, this is a significant concern.
  • "Raising paramount objections", rather than simply not approving the PR, puts the reviewers of a proposal in a position to feel like they're vetoing someone else's work at the end of a lot of hard work. To me it feels a bit mean, and quite difficult, to determine if my objections are "paramount" or not, and explicitly declaring so feels presumptuous. I worry this would discourage participants from sharing. I see this being harmful to authors (who instead of being asked questions are being presented with facts about why their proposal is harmful) and to reviewers (who have to make careful judgements about their own objections and may be hesitant to call them "paramount").

@sarayourfriend (Collaborator, Author)

Thanks for the feedback, @zackkrida.

The first and second objections seem related and like they could be solved pretty easily. Rather than closing the decision round and auto-approving after two days, just leave the PR open (indefinitely?) until two approvals are left or objections are raised.

I don't personally feel that two days for a decision, assuming participants were involved in the clarification round (which should be the case), is a big ask. We can adjust both rounds to help people feel more at ease. And, to be honest, we already have difficulty with accountability in 10 business days, so reducing that down, even with more structure, is probably not fair. But again: the decision round is either "There is a problem I don't see a resolution for in the proposal" or, "there isn't a problem with this proposal I can see". Ideally questions about how things would work have been raised earlier during the clarification round.

I do also feel that it's worth pointing out that I'd assume people would check if someone was AFK or unable to respond for another reason, particularly as we're assuming people are sharing (when possible) if they will be delayed in responding to something. We agreed to that expectation as a team months ago and we do follow it, if inconsistently.

We can change the language to remove the auto-approving/auto-round ending, but we also do very truly need to improve accountability in discussion and PR review. I don't know how to do that and it's a discussion for elsewhere. It's also worth reiterating that front-loading conversations in the clarification round will make the decision round much easier for folks to work through, which is why it stayed shorter. But, again, we can expand it if it would address the concern about the entire process being too short and closing before people are able to respond.

I do think having specific expectations around response times is fine (we already do) and that increasing accountability and consistency with that is an important thing for us to explore. To build on what you said in the third point: it feels pretty awful to work hard on something, only to have it ignored for an extended period of time without any indication for why that is happening or when it will be looked at. I would rather have something critically engaged with early and consistently (and understand expectations about when that would happen) rather than it be outright ignored or systematically deprioritised without explanation.

The third objection is curious to me, and I wish it had been shared earlier, primarily because it deals with the very premise of formalised decision-making. But I'm glad you're raising it, as I think there is a good opportunity to draw out precisely the ways in which this process improves on the structure and process of our existing decision-making, but does not actually change its fundamental nature. The fact is that there needs to be some way of deciding whether objections are worth blocking something over. Many of us already do this in PR reviews: we explicitly label feedback as nitpicks or blockers. Why is that different for a decision-making process about non-code changes? I'll note that this is something that any formal decision-making process would have, other than an autocratic one. Both consensus and democratic models still require people to speak up when they don't think something will work or will cause harm, either by voting no (or abstaining, which isn't always neutral) or by refusing to add to the consensus until something is addressed. Our unstructured decision model that we have been following also includes this, as I've noted above, in the form of request changes.

That leads me to believe that perhaps the issue is with the language. If I'm missing a different point you're trying to get at here, please let me know. Below I've made suggestions for different language we could use that more closely mirrors what we already use.

If the issue is indeed the language of "paramount", which I'll admit could sound severe, does changing it to "blocking objection" help? Whether we call something a paramount objection or otherwise, the fact is that if someone requests changes on a PR they are objecting to it going in. If a PR has requested changes it cannot be merged, even with two other approvals (unless the PR author dismisses the blocking review). I cannot think of a word other than "objection", however, not even coy ones that beat around the bush (options which would sacrifice clarity for a benefit that is, to me, unknown).

Alternatively, is the issue with requesting formal objections or approvals? I share my thoughts on that in the next paragraph, but if that isn't what you're getting at either, please skip ahead to the one after it.

If the issue is, alternatively, with a formal request for people to share objections or approvals, I will need to ask for further clarification on what specifically the issue is with that. I get that it sometimes does not feel good to tell someone you think there is an issue with something they've worked hard on: but hard work alone doesn't make something worth implementing. Careful review and collaboration on the solution does. I've certainly worked hard on many things that ended up being non-viable solutions, and I was grateful when people gave feedback to that effect.

And, as I said above: we already do this, we just haven't formalised the language for it. Because of that, I don't see how it is worse than the process we use today. When we name and make explicit the need to address feedback, and ask participants to think deeply about whether their objections are worth blocking over, we are just asking people to be more intentional (explicit, clear) about something they already do. On top of that, we're giving people language to reach for so that they don't have to figure out "what's the best way to say this without making it seem like I'm disparaging this person's hard work?" That's still a question worth asking, but by reducing the emotional load of prevarication over terms (by having standard terms and labels for feedback) we make the work more accessible and reduce the overall psychological burden of giving feedback. It also makes feedback easier to review, because the person receiving it has less need to "read between the lines". If we use our common and agreed-upon labels for things, then it is finally clearer whether something is a show stopper or just something to keep a close eye on after implementing, so we can quickly iterate if a failure appears.

If neither of those is the issue you're getting at, then I need further clarification to be able to understand and recommend adjustments.

To summarise my general response, though: I think we can fix the approval issue by leaving the decision round open until the participants leave two approvals. I still think we require better accountability and clarity when things aren't going to be addressed within the time frame, but that is a separate issue. With respect to the last point, but speaking generally: I don't think this formalised process differs from what we currently do, other than making expectations and language explicit. Avoiding explicit labels on feedback feels conflict-avoidant rather than kind, and makes processing and understanding feedback, especially on something you've worked hard on, much more difficult.

I will also add that I think the team has largely agreed that we require some kind of way to improve our decision-making process. I do not wish to impose anything on people, but I am also keenly aware that I first raised this with the team 3 months ago and so far no one has objected to the premise that we need some process. I would be fine admitting that this process isn't good for our team, but then I would also request that someone else volunteer to quickly prepare an alternative proposal for how to address the issues we've encountered in decision-making.

@sarayourfriend (Collaborator, Author) commented Mar 30, 2023

I was thinking this over a bit more and realised that there are two more things that are worth keeping in mind and which may need small revisions to make (more) explicit in the process documentation:

  1. The process makes explicit the understanding that people raising paramount objections and the proposal author should work together to revise the proposal to address those objections. That's not an expectation we make explicit now. However, I think it makes it easier to cultivate a sense of collaboration rather than feeling like you are denigrating someone's hard work (or that someone is doing that to you). This is because rather than just saying "Nope, this won't work because of [problem]," and then walking away from a proposal, the understanding is that the person raising the paramount objection is going to work together with the author to address it. That's measurably different because it reinforces the understanding that the success of the proposal rests on all participants' shoulders, not just the author's.
  2. In light of that, we should revise any sample objection text not to be "We shouldn't do this" and rather to be "We can implement this but must fix [issue] first. Here are some ideas I had for how to address this: …." and then folks collaborate and iterate on the proposal until the paramount objections are addressed. That means most proposals are going to be accepted and have a formal, collaborative, and accountable process for doing so. Outside the rare situations where either a proposal's purpose isn't accepted or where a proposal is completely unable to be implemented (it is illegal, costs too much, fundamentally misses the requirements), I don't think we will often see paramount objections in the form of "We cannot do this proposal under any circumstances." Saying "I think there are ways we need to change this proposal to make it acceptable" is pretty innocuous, non-judgemental, and still respects the work someone has put into it.

The feedback Staci shared, for example, really clearly falls under this. She shared concerns and pointed out things that were contradictory or didn't make sense, but also shared clear alternatives that would help rectify the issue. I think that's a good standard for us to try to emulate with objections: pointing out things that aren't working while also sharing ideas for how to change them (or, at the very least, making explicit that you're willing to collaborate on potential changes).

It does indeed feel pretty bad for someone to approach your hard work and leave critical feedback requesting changes without any suggestion for how to address it. Thinking in terms of PRs (because, again, this isn't that different from how we operate now in that area), I would be surprised and sad if someone went into a PR and left a change request without explaining the change they thought would fix the issue.

So, that being said and building on my previous comment: there's a lot of room for language to be dicey here, but so long as the structure of the process is something people feel would work, then I hope we can find ways of explaining it that don't make people feel bad. I hope it is obvious that making people feel bad or putting people in uncomfortable situations isn't the point of the proposal. But it is also imperative to realise that we already do all the things described in this proposal, we just do it with zero formal process and practically zero expectations or guidelines on how to explicitly label types of feedback. This current lack of expectations causes an undeniable psychological burden for reviewers and authors alike, especially when discussing big projects or process proposals.

@zackkrida (Member)

Thanks, @sarayourfriend, for the very thoughtful replies. I've read them along with re-reading the proposal, and the discussion around the original proposal you shared with Openverse maintainers earlier in the year. Based on your feedback I definitely think my paramount objections could be revised. On reflection my main concerns were around the technicalities of fitting this historically sync process into an async one in GitHub, along with some language choices.

Rather than closing the decision round and auto-approving after two days, just leave the PR open (indefinitely?) until two approvals are left or objections are raised.

Since this process is meant to expedite conversations and reduce harm to authors and reviewers, it makes sense to me to keep some X-day timeline but require the two approving reviews. The explicit day window would prevent:

  • A proposal getting two approving reviews and being merged on day 1 of the decision round, before someone has had time to share objections.
  • Proposal authors having to stay prepared for feedback coming in over an extended period of time. The longer this window is open, the more first-time participants may jump into a conversation and derail the process.

In general this gets into a quirk of having a process like this: in an async setting, we don't have a fixed set of participants engaged in the process end-to-end as we would in a sync setting. It's possible for folks to drop in and out at various stages. Part of the safety of the proposal, IMO, comes from going through these shared steps together as a group.

A possible suggestion: What if we keep our existing system of assigning two reviewers up-front, and the final approval has to come from them? With exceptions for AFK, of course, and reviewers can still decline up front.

In general, I do really like trading our current, flexible-but-drawn-out communication style for a shorter-but-stricter system.

revise any sample objection text not to be "We shouldn't do this" and rather to be "We can implement this but must fix [issue] first. Here are some ideas I had for how to address this: …." and then folks collaborate and iterate on the proposal until the paramount objections are addressed.

👍 This sounds excellent.

it feels pretty awful to work hard on something, only to have it ignored for an extended period of time without any indication for why that is happening or when it will be looked at. I would rather have something critically engaged with early and consistently (and understand expectations about when that would happen) rather than it be outright ignored or systematically deprioritised without explanation.

I agree with this very strongly, both as someone who occasionally takes a while to get to proposals and as someone who writes things and wonders "did anyone read this?" I think this communication process enforces a cultural shift in the team to address this, and it's one of the things I'm most excited about.

Our unstructured decision model that we have been following also includes this [sharing blocking feedback], as I've noted above, in the form of request changes.

And to be honest, I think we need to use "request changes" more as a team. We have a tendency to simply leave a comment with requested changes, rather than formally request changes, and I think the latter would be clearer for PR authors.

If the issue is indeed the language of "paramount", which I'll admit could sound severe, does changing it to "blocking objection" help?

I honestly do think it helps quite a bit. I'd be open to "blocking objections" or even just "blockers" for parity with how we talk about code. I think "paramount objections" does feel quite loaded with meaning for me.

@stacimc (Collaborator) left a comment

I’ve left some comments inline to show what I mean, but I think there are a couple places where intended revisions might have been missed? I still see references to Reactions, the original feedback label definitions, and references to objections being excluded in some of the places that I pointed out earlier as contradictions.

Regarding the length of the decision round, and auto-approving

But again: the decision round is either "There is a problem I don't see a resolution for in the proposal" or, "there isn't a problem with this proposal I can see".

Where I struggle with this a little is that because we don’t allow (non-trivial) revisions to the document until the revision round, reviewers are actually also reviewing potentially large changes to a proposal during this stage. Hopefully most questions and feedback can be addressed earlier, but depending on what those changes are that might not be the case.

We can change the language to remove the auto-approving/auto-round ending, but we also do very truly need to improve accountability in discussion and PR review.

+1 to removing the auto-approve, and I agree that accountability is important. The discussion board is a great tool for doing so that I’m excited about. But, to my point earlier about needing to review all the revisions, I think a two-day window is just always going to be tight for some people, and we should probably expect things to be imperfect for a while. Ideally reviewers should be proactive about explicitly requesting more time when they need it, but occasionally things are just going to be missed (particularly since the decision round only starts when the revision round ends, which is a date that can’t be known in advance… so a missed ping can make this window really tight).

FWIW as an author I wouldn’t mind pinging people on Slack every now and then while we figure out this process. I have missed some updates on this discussion, for example, because my main workflow is to check the PR list for ‘awaiting review by me’. Maybe down the line if we continue to see low participation we can automate PR pings on a schedule or something. I’m confident we’ll get it right.

Regarding ‘paramount objections’ language

For what it’s worth, now this has been pointed out I realize that I also found myself really resistant to calling any of my own feedback “paramount”. I think you’re right @sarayourfriend in that it’s a language thing; it just feels much more severe than I would generally phrase something. I would hazard that it might also feel more severe to read as an author for some people, although I completely understand everything you say about the utility of making approval clear.

I’ve given a lot of feedback about the way the terminology felt confusing to me in this context, and I don’t want to drag the discussion out too much. But perhaps we could just continue using the word “blocker” / “blocking feedback” as we already do, as it has wide usage outside of the team and is a less charged word.

In light of that, we should revise any sample objection text not to be "We shouldn't do this" and rather to be "We can implement this but must fix [issue] first. Here are some ideas I had for how to address this: …."

This is great, I think feedback should be phrased this way when it makes sense. I worry about getting really specific with things like this because we get weird edge cases. Sometimes something really shouldn’t be implemented, or sometimes someone may want to be helpful but legitimately have no idea how to address a problem they’ve identified. This is the kind of thing that I think would be most useful as a suggestion in a Feedback Etiquette or Code of Conduct guide, like “When leaving critical feedback, try to make concrete suggestions of what you would change”.

But it is also imperative to realise that we already do all the things described in this proposal, we just do it with zero formal process and practically zero expectations or guidelines on how to explicitly label types of feedback.

I’ve been thinking this too. At the heart of this, in its current state the proposed process is pretty simple. I think all the terminology is adding a lot of complexity in explaining the process. Just adding deadlines, explicit approvers, and some kind of process for explicitly approving or objecting (whether that's called "paramount objections" or "blockers" or what have you) brings us so much value and can be explained very quickly. Paradoxically, it's easy to create confusion and edge cases when we try to create rules for every scenario.

…ballooning in size and complexity.

As with every other aspect of our ever evolving processes, this process is
merely a guideline that we expect contributors to adhere to. It cannot cover…

@stacimc (Collaborator):
Do you still intend to make this change?

…described in the proposal.
- "Question": A request for clarification, explanation, or additional
information about an aspect of the proposal.
- "Objection": An explained uncertainty about the appropriateness, efficacy, or…

@stacimc (Collaborator):
Do you still plan on changing these terms? I thought the intention was to remove these. If we change the language to "blocking" I think we could remove all of these labels and just define "blocking".

…something important being discussed.
- Any round with a suggested length of time is subject to
[shortcutting as described below](#shortcutting-a-round) or to extension at
the discretion of the participants.

@stacimc (Collaborator):
Do you still intend to remove these?

While the original process has a separate reactions round, in Openverse's
process, reactions can be shared at any time. Be careful to ensure that they're
not objections (which should, with few exceptions, be reserved for the objection
round). Examples of reactions are: …

@stacimc (Collaborator):
Do you still think this section is useful, or can it be removed? At the least I think the section I mentioned above is still contradictory and references the old "objection round".

@sarayourfriend (Collaborator, Author)

@zackkrida @stacimc In light of this discussion, I'm going to re-write the document from the ground up. I think it can be simplified along these lines:

  • Ignoring the Sociocratic model, other than a brief mention to credit inspiration, including ditching basically all of its language and using words we already use
  • Couching every round description in our existing practices to help draw the connection that this is a refinement rather than an entirely new process
  • Making everything less specific to avoid anyone feeling like the process is too rigid

The last one is the most difficult for me. I really find scripts useful, especially in giving feedback, and I think specificity is the biggest benefit here. However, it also adds to the length of the document, and good examples can be gathered and archived over time in a separate document. I think this will allow more room for organic growth within the process.

My hope is that I will address all the issues folks have shared and produce a much more concise document. I'm going to go ahead and split the additional practices into a separate document as well so that the process itself is easier to approach.

@sarayourfriend
Collaborator Author

I've uploaded a re-written version of the process following the approach described in my previous comment. I hope that this fundamental change in approach lays a good foundation for addressing the concerns brought up and ties the process more closely to our existing practices.

I'm having some trouble with the Sphinx sidebar though. For some reason it doesn't show the decision-making entries if you select another entry under the references section. I'll keep poking around to see if I can figure it out but if anyone has hints (maybe @dhruvkb especially, who knows the doc site well) I would appreciate it 🙏
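For anyone chasing a similar Sphinx sidebar issue: sidebar sections are usually driven by `toctree` directives, and a page only renders under a section when that section's `toctree` includes it. As a hedged sketch (the file names below are illustrative assumptions, not the repository's actual layout), the references index would need an entry along these lines:

```rst
.. toctree::
   :maxdepth: 2

   reference/decision_making/index
   reference/other_entry
```

If a page is missing from the relevant `toctree`, some themes will drop it from the sidebar whenever a sibling entry is selected, which matches the symptom described above.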


@stacimc stacimc left a comment


This is really wonderful, @sarayourfriend 🧡 This distills the spirit and motivations of the process so clearly. The formatting additions are above and beyond and make it even cleaner. Your fortitude in re-working this document, and your commitment throughout to preserving the underlying principles are so appreciated. This is tremendous work :)

I'm having some trouble with the Sphinx sidebar though. For some reason it doesn't show the decision-making entries if you select another entry under the references section

I viewed the documentation through the preview link and I wasn't able to see any issue with the sidebar, if I understand the problem correctly 🤔 Can anyone else reproduce?

Screenshot of the decision-making entries visible with another section selected:
[Screenshot: Screen Shot 2023-03-31 at 4 43 36 PM]

@zackkrida zackkrida self-requested a review April 3, 2023 14:09
Member

@zackkrida zackkrida left a comment


@sarayourfriend I am really grateful for this work. I also reviewed the docs preview and found the flow of the restructured documents so easy to follow. The adjusted suggested timelines look good, and the language feels much more accessible.

Collaborator

@AetherUnbound AetherUnbound left a comment


This is fantastic, thank you to all who were involved for your ardent commitment to making sure the final document is approachable, understood, and meets the needs of the team. I am excited for the clarity and structure that this document will bring around discussions, and I appreciate the effort that's been put into making sure we can still be flexible about each step as needed 💖

I also love that it will be available on our docs site, that's very rad ✨

@zackkrida zackkrida merged commit 777d087 into main Apr 4, 2023
@zackkrida zackkrida deleted the add/consent-decision-making-doc branch April 4, 2023 15:17
obulat added a commit that referenced this pull request Apr 5, 2023
* Fix issues in the workflow simplifications of #1054 (#1058)

* Retry `up` recipe in case port is occupied (#990)

* Fix typo in docs building on `main` (#1067)

* Restore Django Admin views (#1065)

* Update other references of media count to 700 million (#1098)

* Dispatch workflows instead of regular reuse to show deployment runs (#1034)

* Use label.yml to determine required labels (#1063)

Co-authored-by: Dhruv Bhanushali <[email protected]>

* Add `GITHUB_TOKEN` to GitHub CLI step (#1103)

* Pass actor for staging deploys with the `-f` flag (#1104)

* Bump ipython from 8.11.0 to 8.12.0 in /api (#1113)

Bumps [ipython](https://github.com/ipython/ipython) from 8.11.0 to 8.12.0.
- [Release notes](https://github.com/ipython/ipython/releases)
- [Commits](ipython/ipython@8.11.0...8.12.0)

---
updated-dependencies:
- dependency-name: ipython
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Absorb `actionlint` into pre-commit (#1028)

Co-authored-by: Dhruv Bhanushali <[email protected]>
Co-authored-by: sarayourfriend <[email protected]>

* Bump orjson from 3.8.8 to 3.8.9 in /api (#1114)

Bumps [orjson](https://github.com/ijl/orjson) from 3.8.8 to 3.8.9.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](ijl/orjson@3.8.8...3.8.9)

---
updated-dependencies:
- dependency-name: orjson
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Add Sentry to the ingestion server (#1106)

* Add a wait to filter button test to fix CI (#1124)

* Bump boto3 from 1.26.100 to 1.26.104 in /ingestion_server (#1110)

* Bump sentry-sdk from 1.17.0 to 1.18.0 in /api (#1112)

* Bump pillow from 9.4.0 to 9.5.0 in /api (#1115)

* Update redis Docker tag to v4.0.14 (#1109)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* 🔄 synced file(s) with WordPress/openverse-infrastructure (#1127)

Co-authored-by: openverse-bot <null>

* Update other references of media count to 700 million (#1100)

* Fix prod deployment workflow dispatch call (#1117)

* Add a Slack notification job to the CI + CD workflow (#1066)

* Fix types in VFilters and VContentReport (#1030)

* Move the svgs for radiomark and check to components

* Add files to tsconfig and fix types

* Mock report service in the unit test

* Type svg?inline as vue Component

* Better License code type checking

* Update frontend/src/components/VFilters/VFilterChecklist.vue

* Revert unnecessary changes

* Update frontend/src/components/VFilters/VFilterChecklist.vue

Co-authored-by: Zack Krida <[email protected]>

* Rename `ownValue` to `value_`

---------

Co-authored-by: Zack Krida <[email protected]>

* Convert VPill and VItemGroup stories to mdx (#1092)

* Convert VPill story to MDX
* Convert VItemGroup story to mdx
* Fixing argTypes issue and fixing the headers

* Update ci to use github.token (#1123)

* Add `SLACK_WEBHOOK_TYPE` env var to reporting job (#1131)

* Add consent decision-making process documentation (#887)

* Prepare VButton for updates (#1002)

* Rename button sizes and apply some styles only to 'old' buttons

* Rename the snapshot tests to v-button-old

* Fix VTab focus style

* Small fixes (large-old, border, group/button)

* Revert VTab focus changes

Moved to a different PR

* Revert "Revert VTab focus changes"

This reverts commit ec9312d.

* Use only focus-visible for consistency

* Bump boto3 from 1.26.99 to 1.26.105 in /api (#1133)

Bumps [boto3](https://github.com/boto/boto3) from 1.26.99 to 1.26.105.
- [Release notes](https://github.com/boto/boto3/releases)
- [Changelog](https://github.com/boto/boto3/blob/develop/CHANGELOG.rst)
- [Commits](boto/boto3@1.26.99...1.26.105)

---
updated-dependencies:
- dependency-name: boto3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Add more docs for Plausible and auto-initialise custom event names (#1122)

* Add more docs for Plausible and auto-initialise custom event names

* Update existing docs

* Add caveat that it is not necessary to run Plausible if not working on custom events

* Fix ToC

* Add new buttons variants and sizes (#1003)

* Add new VButton sizes and variants

* Add new Storybook tests

* Add border to transparent- buttons

* Update bordered and transparent buttons

* Update stories

* Update snapshots

* Remove pressed variants

* Add missing snapshots

* Fix transparent buttons

* Update paddings

In accordance with #860 (comment)

* Update snapshots

* Update frontend/src/components/VButton.vue

Co-authored-by: Zack Krida <[email protected]>

---------

Co-authored-by: Zack Krida <[email protected]>

* Pass `GITHUB_TOKEN` to deploy docs (#1134)

* Add context manager and join()

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Dhruv Bhanushali <[email protected]>
Co-authored-by: Krystle Salazar <[email protected]>
Co-authored-by: Madison Swain-Bowden <[email protected]>
Co-authored-by: sarayourfriend <[email protected]>
Co-authored-by: Tomvth <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Adarsh Rawat <[email protected]>
Co-authored-by: Dhruv Bhanushali <[email protected]>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Openverse (Bot) <[email protected]>
Co-authored-by: Zack Krida <[email protected]>
Co-authored-by: Sepehr Rezaei <[email protected]>
Co-authored-by: Sumit Kashyap <[email protected]>
@sarayourfriend sarayourfriend mentioned this pull request Apr 12, 2023
Labels
- 📄 aspect: text (Concerns the textual material in the repository)
- 🧰 goal: internal improvement (Improvement that benefits maintainers, not users)
- 🟨 priority: medium (Not blocking but should be addressed soon)
- 🧱 stack: mgmt (Related to repo management and automations)

Projects
Status: Accepted

Successfully merging this pull request may close these issues:
- Consent decision-making

8 participants