The proposal for moderation does not include many details on how PDS administrators will receive reports, or on what actions they will be able to take to protect themselves and their users from bad actors when they receive reports about objectionable content.
As someone who plans on hosting a PDS that may have hundreds or even thousands of users, I would expect I should be able to take actions similar to the following:
Establish rules and terms of service for people on my PDS
Receive reports about bad actors or toxic content seen by users hosted on my PDS
Be able to block bad actors from interacting with people on my PDS
Be able to block content from being seen by people on my PDS
Be able to remove users from my PDS and warn other hosts of said user
Be able to alert said bad actor as to what action I took and why.
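To make the list above concrete, here is a minimal sketch of what a per-PDS moderation layer covering those actions could look like. None of these type or method names come from the AT Protocol spec; they are hypothetical and only illustrate the shape of the capabilities being requested.

```typescript
// Hypothetical in-memory model of the admin actions listed above.
// All names here are illustrative, not part of any real PDS API.

type Report = { reporter: string; subject: string; reason: string };

class PdsModeration {
  private reports: Report[] = [];
  private blockedActors = new Set<string>();  // remote DIDs barred from interacting
  private blockedContent = new Set<string>(); // record URIs hidden from local users
  private removedUsers = new Set<string>();   // local accounts removed from this PDS

  // Receive a report filed by a user hosted on this PDS.
  fileReport(report: Report): void {
    this.reports.push(report);
  }

  // Bar a remote actor from interacting with people on this PDS.
  blockActor(did: string): void {
    this.blockedActors.add(did);
  }

  // Hide a piece of content from everyone on this PDS.
  blockContent(uri: string): void {
    this.blockedContent.add(uri);
  }

  // Remove a local user, returning the notice that would be sent to them
  // (and, in a fuller design, broadcast as a warning to other hosts).
  removeUser(did: string, reason: string): string {
    this.removedUsers.add(did);
    return `Your account ${did} was removed from this PDS: ${reason}`;
  }

  // Whether content should be shown to users on this PDS.
  isVisible(uri: string, authorDid: string): boolean {
    return !this.blockedContent.has(uri) && !this.blockedActors.has(authorDid);
  }
}

const mod = new PdsModeration();
mod.fileReport({
  reporter: "did:example:alice",
  subject: "did:example:troll",
  reason: "harassment",
});
mod.blockActor("did:example:troll");
console.log(
  mod.isVisible("at://did:example:troll/app.bsky.feed.post/1", "did:example:troll"),
); // false
```

The point of the sketch is that every action in the list reduces to state the PDS already controls, so the open question is less about feasibility than about whether the protocol intends the PDS to hold this role.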
The labeling service is a good idea, but it does not prevent toxic content from propagating across the network. For example, it's not enough to put a label on racist content and allow people to hide it; we should be doing everything we can, at every layer of the network, to stop people from being able to post exceptionally harmful content at all.
If giving the PDS this type of control is not part of the general vision for the network, then I think we need a clearer and more detailed map of what each node in the network will be responsible for with respect to moderation.