The discussion I want to have is about this part:
Impact & Effort
Impact: High | Effort: High
How do you calibrate it?
Should we add a link to the source justifying the Impact and Effort ratings? (This is the current weak point of the French RGESN and GreenIT's 115 best practices for eco-sustainable design.)
Notes: Unlike the GRI ratings, which were calibrated against scientific measurements (the GRI standards plus a GreenIT report, calculated using Jupyter notebooks to produce a specific value that could be classified as high, medium, or low), the impact and effort ratings were mostly derived from the evidence that existed at the time of publication and the committee's consensus on the high, medium, or low labels. A more scientific, metrics-based measurement could be adopted in the future, but because research in some areas is hard to quantify, citing the existing research would be a practical interim solution until a unified way to calculate every variable exists.
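As a rough illustration of the metrics-based approach described above, a normalized measurement can be bucketed into high/medium/low against fixed thresholds. The function name and threshold values here are hypothetical placeholders, not taken from the GRI standards or the GreenIT notebooks:

```python
def rate_impact(measured_value, low_threshold=0.33, high_threshold=0.66):
    """Bucket a normalized measurement (0.0-1.0) into a rating label.

    The thresholds are illustrative only; a real calibration would
    derive them from measured data, as the GreenIT Jupyter notebooks
    did for the GRI ratings.
    """
    if measured_value >= high_threshold:
        return "high"
    if measured_value >= low_threshold:
        return "medium"
    return "low"

print(rate_impact(0.8))  # prints "high"
```

The open question in this issue is precisely how to choose such thresholds (and the underlying metric) for each guideline in a defensible way.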
If anyone has further ideas, or questions on how this could be achieved, feel free to contribute to this thread!
Every link referenced in the footer of the specification has now been cross-referenced within the impact ratings. The same will happen in due course with the test suite, once its results are available, so that hard data can be assessed against each impact score (and its implementation techniques can be reused as well). The justification for this approach is that the footer links contain only specifications, government bodies, authoritative sources (guidelines, etc.), and published research and studies; as such, they can be considered high quality.
We can reassess the referencing system in the future. References as a whole were used holistically to score the ratings, alongside individual expertise, so they shouldn't be treated as gospel, just as the best available sources to back up the claims. This is the closest to a balanced approach we have been able to reach at this time, so I'll close this issue as complete.
The update can be seen in the living draft and the cross-referencing will appear in the next public specification release.
Following a question on Slack, the calibration notes above were re-shared for additional context.
Credit: @youenchene @tantek