
Automatically file complaints for Failure to Participate in Peer Review #8

Open
jrwashburn opened this issue Jul 23, 2024 · 41 comments

@jrwashburn
Contributor

Add menu item for Sponsor to automatically file complaints

Need a way to track which complaints have been filed and what period it is for - new sheet perhaps.

Send a complaint email to the Subject and to the current CRT via bcc. (Verify how this was done with Fradique.)

Also check the ability to create the vote table via the Coda API to automate everything -- the only gap would be the response from the Subject. Discuss this more with Fradique re: operations.

@jrwashburn
Contributor Author

Also have this automatically file complaints for Inadequate Contribution.

@mathematicw
Collaborator

Sent you a Google Sheets access request.

@jrwashburn
Contributor Author

Originally, the requirement was to send complaints to the CRT if an ambassador failed to participate. In the last team call, the governance team proposed to change the bylaws in favor of automatic termination to avoid clogging the CRT agenda. Let's assume that will pass, and design this accordingly.

Currently, we assume we just have the current month's submissions and evaluations; however, for this purpose, we will need at least 6 months of history. I suggest we add a new link to a historical sheet (similar to https://docs.google.com/spreadsheets/d/1cjhrqgc84HdS59eQJPsiNIPKbusHtp2j7dN55u-mKdc/edit?gid=1515736355#gid=1515736355 ) so that we can check the last 6 months of submissions. Unfortunately, this example link will not work because it does not include email addresses, which is what we need to key off of. We will need to discuss with Fradique to get the underlying data structure for the actual submissions and evaluations from the Google Forms that include the email address.

The format of that data will dictate how this will work, but the requirement is to look at the most recent 6 months of submissions, and accumulate 1 point for each month that an ambassador did not submit work for review. Then, check the last 6 months of reviews, and if in any month the ambassador did not score at least one of the assigned submissions, then accumulate an additional point for each month with 0 evaluations provided. If the total score is > 2 for any ambassador, they will be terminated from the program, and we could just send a termination notice to the Ambassador and copy the Sponsor. We could also send a warning email for any ambassador with a score of 1 - to let them know that if they have another failure to participate they may be automatically terminated from the program.

If we want to be precise with the warning, we would need to track the month of their first violation, and then let them know that another failure within 6 months of that first violation may lead to termination - to accurately explain the sliding window.
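The sliding-window rule above could be sketched roughly like this (a sketch only; the data shape and all names are hypothetical, not the actual sheet layout):

```javascript
// Sketch of the 6-month penalty rule: 1 point per month without a
// submission, 1 point per month with assigned reviews but 0 evaluations.
function computePenalty(history) {
  // history: the last 6 months for one ambassador, each entry
  // { submitted: bool, assigned: bool, evaluated: bool }
  let points = 0;
  for (const month of history) {
    if (!month.submitted) points += 1;                   // no submission that month
    if (month.assigned && !month.evaluated) points += 1; // assigned but 0 evaluations
  }
  if (points > 2) return 'terminate'; // total score > 2: termination notice
  if (points >= 1) return 'warn';     // at least one violation: warning email
  return 'ok';
}
```

Whether a score of exactly 2 also warrants a warning is left open here; the thread only specifies a warning at 1.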

@mathematicw
Collaborator

I'm a bit concerned about a strange glitch that I think might exist somewhere in the current mechanism. I'll try to find it. Do you think it's possible that submitting a contrib report might not be counted if the ambassador filled out the form without being logged into a Google account?

@jrwashburn
Contributor Author

The Google Forms data writes to a Google Sheet automatically, so I do not think that would/should happen. If the Ambassador does not enter their email address, they may not be counted, but they don't have to be logged in to a Google account; email is a form field that they enter.

@mathematicw
Collaborator

If each ambassador has to evaluate three other ambassadors, then it means that each ambassador receives three evaluations from peer reviewers. And so the numbers in Fradique's table should be the arithmetic mean of these three evaluations?

@mathematicw
Collaborator

FullyAutomatic.drawio.pdf
Please check the diagram to confirm the algorithm is correct.

@mathematicw
Collaborator

Including PP Assignment
pp_separating
Please review the diagrams of the two slightly different algorithms.

@mathematicw
Collaborator

When comparing the answers received from the Submission Form (recorded in the "Responses" sheet) with the list of ambassadors from the Registry sheet, do we key on their Discord nicknames or their emails (the email column?)? Discord nicknames seem simpler, but email addresses should be more exact.

@jrwashburn
Contributor Author

Originally it was based on discord handle but we had many issues with typos, changes, etc. we switched to email and have not had problems since. I recommend sticking with the email implementation.
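The email-keyed matching could be as simple as this sketch (field names are assumptions, and the normalization is illustrative, not the actual implementation):

```javascript
// Hypothetical helper: look up a form response in the Registry by email,
// normalizing case and whitespace. The typo problems that plagued Discord
// handles mostly disappear with emails, but normalization is still cheap.
function findAmbassadorByEmail(registry, email) {
  const key = String(email).trim().toLowerCase();
  return registry.find(row => String(row.email).trim().toLowerCase() === key) || null;
}
```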

@mathematicw
Collaborator

mathematicw commented Sep 13, 2024

So we put the email addresses into the Review Log sheet after all, and create the entire matrix from the email addresses, right?

@jrwashburn
Contributor Author

@mathematicw I prefer the second flow, but move the two big "If it's been 7 days since..." checks into series instead of parallel, so that everything can happen in a single Processing Responses run. I would also rename it to Processing Evaluations so that it's more intuitive when it should be run.

@mathematicw
Collaborator

I have a doubt.
Let me recall one function:
Handling Form Responses:
The evaluations are extracted from the Form Responses sheet.
We need to:
Identify who is evaluating whom based on the 'Discord handle of the ambassador you are evaluating' column in the form.
Place evaluations in the correct columns based on the sequence they come in (1st, 2nd, or 3rd evaluation).
If an evaluator doesn't respond, their email is placed in the grades column instead of the score.
The question is:
If this spreadsheet is going to be publicly available, isn't it better not to reveal the emails of evaluators who failed to evaluate, and to put their Discord handles in the month sheets instead?

@mathematicw
Collaborator

However, I long ago implemented everything exactly as discussed: if an ambassador-evaluator has not evaluated the ambassador-submitter assigned to them, their email address is displayed in the monthly list instead of an evaluation. And, by the way, the submitter-evaluator matching matrix was also built long ago from Discord handles :) But this logic is not difficult to change.

@jrwashburn
Contributor Author

We should not reveal ambassador emails. I'm not sure why the email is being placed there - is that so you can track that they did not submit a response? You could do that from the evaluation responses sheet instead. If you use Discord, we need to think about handling the case where the Discord handle does not match the Registry.

We should not have a problem today: we use email to track who sent a submission, and email to track who responded with a score. The link from submission to score is on Discord ID, but that should be the same in both since we provide it to them in the email. If you cannot match a response to a submission on Discord handle, there should be an error - I think we should alert the Sponsor to the discrepancy for manual review before continuing.
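The manual-review safeguard could look like this sketch (field names are assumptions; the point is only that unmatched handles are collected rather than guessed at):

```javascript
// Sketch: any evaluation response whose Discord handle matches no known
// submission is set aside for the Sponsor to review instead of being
// processed automatically.
function findDiscordMismatches(submissions, responses) {
  const known = new Set(submissions.map(s => s.discordHandle.toLowerCase()));
  return responses
    .filter(r => !known.has(r.discordHandle.toLowerCase()))
    .map(r => r.discordHandle); // handles needing manual review
}
```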

@jrwashburn
Contributor Author

It would be nice to add one more feature -- in case an ambassador is not selected to review any submissions, we should email them confirming that they were not selected and are not expected to submit evaluations that period. (see #9)

@mathematicw
Collaborator

mathematicw commented Sep 26, 2024

Not sure why the email is being placed there? Is that so you can track that they did not submit a response?

I didn't come up with that.

If you use discord, need to think about handling if discord does not match the registry.

If an ambassador-evaluator specified the submitter's Discord handle with a typo, that's a problem.
One option is to create a function to brute-force possible typos, but that seems difficult. Alternatively, we can ask the ambassador to fill out the form again (edit the form). In general, editing should be allowed.

In general this isn't a problem. We have the Registry, which maps every ambassador's email address to their Discord handle, so we can operate with either discords or emails, depending on preference, as needed.

@mathematicw
Collaborator

You could do that from the evaluation responses sheet instead?

There are many options, and we can do it in various ways.

@mathematicw
Collaborator

Though if we identify non-responders directly from the form, we won't be able to separate those who avoided evaluation (even though they were assigned submitters) from those who simply didn't have submitters because there were too few.

@jrwashburn
Contributor Author

We know who was assigned, just need to keep track of it.

@mathematicw
Collaborator

We can't directly show evaluators' email addresses from the Evaluation Form (so as not to reveal them), nor can we take Discord handles, as there are none in the Evaluation Form. We could write a string like "didn't evaluate", for example.
But for clarity I suggest writing the evaluators' Discord handles (converted from their emails using the Registry sheet) when they evaded evaluation. The month sheets are designed for human monitoring anyway.
It's not even a problem if a handle is stale or has a typo.

@mathematicw
Collaborator

mathematicw commented Sep 27, 2024

It would be nice to add one more feature -- in case an ambassador is not selected to review any submissions, we should email them confirming that they were not selected and are not expected to submit evaluations that period. (see #9)

Ambassadors who did not get a submitter to evaluate, although listed on the Registry, do not go into the submitter-evaluator matrix as evaluators. In such cases, they should be notified of their exemption from evaluation: they don't respond to the Evaluation Form, but will not be penalized.

Penalty points for non-evaluation will be issued only to those who are listed in the matrix as evaluators but whose response is not received within 7 days from the date the last email was sent.

@mathematicw
Collaborator

I can create a pull request to show you the code and what stage it is at so far.

@mathematicw
Collaborator

mathematicw commented Oct 15, 2024

The basic features are done so far, except for the final processing of scores, penalty points, CRT, and upcoming peer-review notifications (which are a piece of cake compared to the aforementioned). However, they need to be tested thoroughly. Additionally, Google has a limit on the number of emails sent per day, which is reached surprisingly quickly, even though it seems like it should be 500.

@mathematicw
Collaborator

mathematicw commented Oct 19, 2024

Generally, penalty points for not participating in the Submission or Evaluation processes can (and should) be written directly in the Overall Score sheet's "Penalty Points" column, in the row corresponding to that evaluator.

  1. Penalty points for not participating in Submissions can be calculated by comparing the list of ambassadors on the form responses sheet (within the 7-day time frame) with the month sheet (which has the same list as the Review Log or review matrix). Each missed Submission Form imposes one penalty point, added to the existing total in the "Penalty Points" column of the Overall Score sheet.

We also want to count "didn't submit" and "late submission" events (equally) from past months, to get a wider view and to be able to detect ambassadors who have 3 or more penalty points within a 6-month period.

  2. Penalty points for not participating in Evaluations could be calculated by counting the events where at least one of the three grade fields on the month sheet is empty; for each such case the ambassador-evaluator is given 1/3 of a penalty point (according to my proposal), added to the existing total in the "Penalty Points" column of the Overall Score sheet.

It is also possible to handle past periods, although we could limit the scope of penalty point accrual to the most recent periods, starting from an agreed date.

That raises an interesting question: if you run an algorithm that finds all past offenders, it may turn out that even someone doing well in the last 6 months still has an earlier 6-month period where the number of penalty points was greater than 3. What happens in such cases?


Implementation note:
The total penalty points for each ambassador will be displayed in the Overall Score sheet's Penalty Points column. But to monitor whether the 2.97* penalty-point threshold is reached within any contiguous 6-month period in that row, a new dedicated column is needed.

*2.97, because it is the closest attainable value to the threshold when points accumulate only in 0.33 PP increments (0.33 × 9) for missed single evaluation requests (assuming the proposal above).

P.s. sorry for the multiple edits, the section is not as obvious as it seemed.
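The dedicated column's logic amounts to a sliding-window maximum; a minimal sketch (function and input names are hypothetical):

```javascript
// Sketch: monthlyPP holds one penalty total per month, oldest first
// (1 per missed submission, 0.33 per missed evaluation under the
// proposal above). Returns the maximum total over any contiguous
// 6-month window, for comparison against the ~2.97 threshold.
function maxSixMonthPP(monthlyPP) {
  const w = Math.min(6, monthlyPP.length); // tolerate short histories
  let max = 0;
  for (let i = 0; i + w <= monthlyPP.length; i++) {
    const sum = monthlyPP.slice(i, i + w).reduce((a, b) => a + b, 0);
    if (sum > max) max = sum;
  }
  return max;
}
```

Note this answers the "past offenders" question mechanically: the window slides over all history it is given, so an old bad 6-month stretch is detected even if the most recent months are clean.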

@jrwashburn jrwashburn self-assigned this Oct 19, 2024
@mathematicw
Collaborator

Here are some problems that can take quite a bit of time if I work on them alone


What we have:

We want to make the Overall score sheet a full-fledged dashboard (I want to make it this way). You already know the columns it has, but here's what else needs to be done:

  • In the Average Score column, we need to display the arithmetic average of all the scores.

    • The AVERAGEA function is used here, which works only with numeric format! The cells that currently contain text markers like "didn't submit" or "late submission" (regardless of which marker it is) are all treated as 0 by this function, rather than being ignored, which lowers the average score. It also prevents the script from being improved as it introduces conflicting formats.
  • In the Penalty Points column, we need to show the total number of penalty points.

    • Here, the script searches for markers that indicate cases of missing report submissions or failure to participate in evaluations. These markers can be in two forms: color-based or text-based.
  • In the Max 6-Month PP column, we need to show the maximum number of penalty points within any six-month period.

    • The logic here is similar to the Penalty Points column, but it works over different six-month time periods.

The Problem:

In the month columns, the following events can occur in the cells:

  • "didn't submit"
  • "didn't evaluate"
  • "hasn't been evaluated"
  • "didn't submit and didn't evaluate"
  • "didn't evaluate and hasn't been evaluated"

However, we cannot combine these string values with numeric values (the Final Score) in the same cells, as the Final Score is transferred from the month-sheet and needs to be in a numeric format.

Possible Solutions:

  1. Modify the Average Score formula to parse the cells in the month columns and extract the numbers, ignoring the text.

    • This would make the sheet both informative and visually clear (especially with color coding), but it would require more computations and increase the risk of bugs.
  2. Use a color-based logic where the events are represented solely by color codes.

    • These colors would be recognized by other functions, and penalty points (PP) would be assigned based on this color scheme. The analysis over longer periods would also be based on these color codes.
      The examples in the attached files do not cover all the possible options I have listed, but give a rough idea.
      concatenated markers
      Color logic- looking
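Solution 1 (parsing mixed cells) could be sketched like this; the function names are hypothetical, and the idea is simply that text markers are excluded from the average rather than counted as 0 the way AVERAGEA treats them:

```javascript
// Sketch: pull the numeric Final Score out of a month-column cell that may
// instead (or additionally) hold a text marker such as "didn't submit".
function parseCellScore(cell) {
  if (typeof cell === 'number') return cell;
  const m = String(cell).match(/-?\d+(\.\d+)?/); // first number in the cell, if any
  return m ? parseFloat(m[0]) : null;            // null = no score, skip in average
}

function averageScore(cells) {
  const scores = cells.map(parseCellScore).filter(s => s !== null);
  return scores.length ? scores.reduce((a, b) => a + b, 0) / scores.length : null;
}
```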

@mathematicw
Collaborator

colored logic (to text)
Here the cells are empty (test ambassadors), but they "say" what happened through their color.
(Just an idea, and in some cases it has advantages over concatenated strings.)

@mathematicw
Collaborator

I only need clarity on two things to pick a direction and complete the script.

  1. Do you agree that if an ambassador did not submit a report, then according to the rules they should be given a penalty point rather than a zero score? A zero score lowers their average, so such an ambassador would be penalized twice. Or does that not matter?
  2. Will "didn't submit" events from previous months be taken into account by the function that detects whether there are 3 or more PPs within any contiguous 6-month period?

@jrwashburn
Contributor Author

I believe these questions were handled in discord. If these are still open please let me know.

@mathematicw
Collaborator

mathematicw commented Nov 4, 2024 via email

@jrwashburn
Contributor Author

@mathematicw issues from initial review:

Module1.gs - request submissions module:
ReferenceError: SUBMISSION_WINDOW_MINUTES is not defined
at unknown function
at requestSubmissionsModule(Module1:26:10)

@jrwashburn
Contributor Author

ReferenceError: SUBMISSION_WINDOW_REMINDER_MINUTES is not defined
setupSubmissionReminderTrigger @ Module1.gs:63
requestSubmissionsModule @ Module1.gs:53

@mathematicw
Collaborator

@mathematicw issues from initial review:

Module1.gs - request submissions module: ReferenceError: SUBMISSION_WINDOW_MINUTES is not defined at unknown function at requestSubmissionsModule(Module1:26:10)

You probably missed the SharedUtilities.gs file?
The project should look like this:
Screenshot_2024-11-09_18-02-22

@mathematicw
Collaborator

I will send a new PR today (4-5 hours after the time of writing this comment) with all the fixes I have made after my testing.

@mathematicw
Collaborator

bugs fixed:

  • removed the evaluationStartTime property; only evaluationWindowStart is now used throughout the script.
  • made the EvaluationFormURL appear as a clickable link in the email.
  • the 'month' field in the Submission and Evaluation Forms is now updated to match the current reporting month.
  • when a new month column was created, it inherited the color formatting of the previous month column. Fixed. (This affected color-code recognition in Module3.)

@jrwashburn
Contributor Author

@mathematicw I added all the files to the source repository here. I think it's best to use git for the source code and then just clasp push to update your Sheets project. We should add config for AMBASSADOR_REGISTRY_SPREADSHEET_ID and AMBASSADORS_SCORES_SPREADSHEET_ID so multiple testers can use independent sheets without stepping on each other.
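One way to sketch that per-tester config (in Apps Script the overrides map would typically come from PropertiesService.getScriptProperties().getProperties(); here only the merge logic is shown, and the default values are placeholders):

```javascript
// Sketch: project-wide defaults overridden by values each tester stores
// in their own Script Properties, so shared code never hard-codes IDs.
const DEFAULT_CONFIG = {
  AMBASSADOR_REGISTRY_SPREADSHEET_ID: '', // each tester sets their own
  AMBASSADORS_SCORES_SPREADSHEET_ID: ''
};

function resolveConfig(overrides) {
  // overrides win over defaults; missing keys fall back to DEFAULT_CONFIG
  return Object.assign({}, DEFAULT_CONFIG, overrides || {});
}
```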

@mathematicw
Collaborator

OK, give me a few hours - I'll manage it.

@mathematicw
Collaborator

Btw, changing or overriding the set of variables, especially when there are multiple projects in Google Apps Script, can cause various glitches in the Google platform. To address this, I added a 'Refresh Script State' option to the script's menu.

@mathematicw
Collaborator

@jrwashburn I'm going to push the commit now. Should I add your instruction to SharedUtilities?

@jrwashburn
Contributor Author

@mathematicw let's coordinate the merge order on Discord, where it's easier to chat?

@mathematicw
Collaborator

I'd prefer to just make a new PR, since there were some unfinished things to build on. It's now a working script that just needs to be fine-tuned.
