add "and removing" to 9c re: illegal content #3
Conversation
"Under the inappropriate/illegal project requirements – you may wish to consider including mechanisms for detecting, moderating and removing content." Recommendation from eSafety Australia.
@@ -17,7 +17,7 @@ Indicator | Requirement |
 **9. Do No Harm** | All projects must demonstrate that they have taken steps to ensure that the project anticipates, prevents and does no harm. |
 **9a) Privacy & Freedom of Expression** | All projects must have strategies in place to anticipate, respond to and minimize adverse impacts on privacy and freedom of expression where governments are believed to be using the project’s product or services for illegitimate or political purposes. |
 **9b) Data Privacy & Security** | Projects that collect data must identify the types of data collected and stored and demonstrate that the project ensures the privacy and security of this data and has taken steps to prevent adverse impacts resulting from its collection, storage and distribution. |
-**9c) Inappropriate & Illegal Content** | Projects that collect, store or distribute content must have policies identifying inappropriate and illegal content such as child sexual abuse materials and mechanisms for detecting and moderating inappropriate/illegal content. |
+**9c) Inappropriate & Illegal Content** | Projects that collect, store or distribute content must have policies identifying inappropriate and illegal content such as child sexual abuse materials and mechanisms for detecting, moderating and removing inappropriate/illegal content. |
Perhaps add a statement about what types of content are appropriate and inappropriate, or, less acutely, a statement about content needing to be "generally accepted as not harmful to children."
With regard to illegality, I think the question of "under what laws" is sticky. Specifically, does this include laws pertaining to copyright? What if a rogue site in an oppressive nation is able to copy content from a nation with a freer press and redistribute it? Doing so would be illegal under the laws of both countries, yet it would also be a public good.
Very good point. Yet I would point out that the emphasis of this indicator is on must have policies (...) and mechanisms, not on whether certain content is illegal in some jurisdictions or on defining the appropriateness of content.
If those policies and mechanisms are embedded in the public good, then it satisfies the standard, and it is left to the implementer to decide what is inappropriate/illegal and to make use of those policies and mechanisms. As discussed in #1, it then becomes an issue of "policing" the implementation of digital public goods, which falls outside the scope of the standard.
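To make that division of responsibility concrete, here is a minimal sketch (purely illustrative, not part of the standard or this PR; all names are hypothetical) of a project shipping the detect/moderate/remove mechanism while leaving the policy decision to the implementer:

```python
# Illustrative sketch only: the public good ships the *mechanism* for
# detecting, moderating and removing content, while the *policy* deciding
# what counts as inappropriate/illegal is supplied by the implementer.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# An implementer-defined policy: returns True if the content violates it.
Policy = Callable[[str], bool]

@dataclass
class ContentStore:
    policies: List[Policy]
    items: Dict[str, str] = field(default_factory=dict)
    flagged: List[str] = field(default_factory=list)

    def submit(self, item_id: str, body: str) -> None:
        """Detect: store the content, flagging anything a policy matches."""
        self.items[item_id] = body
        if any(policy(body) for policy in self.policies):
            # Moderate: queue the item for review rather than deciding here.
            self.flagged.append(item_id)

    def remove(self, item_id: str) -> None:
        """Remove: delete content a reviewer has confirmed as violating."""
        self.items.pop(item_id, None)

# The implementer plugs in their own jurisdiction-specific policy.
store = ContentStore(policies=[lambda text: "banned-term" in text])
store.submit("post-1", "an ordinary post")      # stored, not flagged
store.submit("post-2", "contains banned-term")  # stored and flagged
store.remove("post-2")                          # removed after review
```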
@sgoggins Thanks for your comment. We've waited a reasonable time, but since your comment is not directly actionable and it concerns a different scope than the changes proposed here, I am merging this PR. If you feel your comment is not being addressed properly, I invite you to open a different PR suggesting changes to the actual text. Thanks again.
"Under the inappropriate/illegal project requirements – you may wish to consider including mechanisms for detecting, moderating and removing content." Recommendation from eSafety Australia.