Hello Reddit Community,
Today, we're rolling out updates to Rule 3 and Rule 4 of our Content Policy to clarify the scope of these rules and give everyone a better sense of the types of content and behaviors that are not allowed on Reddit. This is part of our ongoing work to be transparent with you about how we’re evolving our sitewide rules to keep Reddit safe and healthy.
First, we're updating the language of our Rule 3 policy prohibiting non-consensual intimate media to more specifically address AI-generated sexual media. While faked depictions of non-consensual intimate media are already prohibited, this update makes it clear that sexually explicit AI-generated content violates our rules if it depicts a real, identifiable person.
This update also clarifies that AI-generated sexual media depicting fictional people, and artistic depictions such as cartoons or anime (whether AI-generated or not), do not fall under this rule. Keep in mind, however, that this type of media may still violate subreddit-specific rules or other policies (such as our policy against copyright infringement), which our Safety teams already enforce across the platform.
Sidenote: Reddit also leverages StopNCII.org, a free online tool that helps platforms detect and remove non-consensual intimate media while protecting victims' privacy. You can read more about how StopNCII.org works here. If you've been affected by this issue, you can access the tool here.
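For those curious about the mechanics: tools like StopNCII.org work by fingerprinting images on the person's own device, so only a hash, never the image itself, is shared with participating platforms. Below is a minimal sketch of that idea, not Reddit's actual implementation; the function names are made up for illustration, and a cryptographic hash stands in for the perceptual hashes (e.g., PDQ) that real systems use.

```python
# Minimal sketch of hash-based matching, under simplified assumptions:
# SHA-256 stands in for a perceptual hash, and matching happens against
# an in-memory set rather than a shared industry hash list.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint locally; only this string is ever shared."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hashes submitted via the tool (hypothetical in-memory store).
reported_hashes: set[str] = set()

def submit_report(image_bytes: bytes) -> None:
    # The image itself never leaves the reporter's device.
    reported_hashes.add(fingerprint(image_bytes))

def check_upload(upload_bytes: bytes) -> bool:
    """Return True if an uploaded image matches a reported one."""
    return fingerprint(upload_bytes) in reported_hashes

if __name__ == "__main__":
    original = b"...raw image bytes..."
    submit_report(original)
    print(check_upload(original))        # True  -> block/remove
    print(check_upload(b"other image"))  # False -> allow
```

Note that an exact hash like SHA-256 stops matching as soon as an image is resized or re-encoded, which is why production systems use perceptual hashes that still match visually similar copies.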
Now to Rule 4. While the vast majority of Reddit users are adults, it is critical that our community continues to prioritize the safety, security, and privacy of minors regardless of their engagement with our platform. Given the importance of minor safety, we are expanding the scope of this Rule to also prohibit non-sexual forms of abuse of minors (e.g., neglect, physical or emotional abuse, and content such as videos of physical school fights). This represents a new violative content category.
Additionally, we already interpret Rule 4 to prohibit inappropriate and predatory behaviors involving minors (e.g., grooming) and actively enforce against this content. In line with this, we’re adding language to Rule 4 to make this even clearer.
You'll also note that we're parting ways with some outdated terminology (e.g., "child pornography") and adding specific examples of violative content and behavior to shed light on our interpretation of the rule.
As always, to help keep everyone safe, we encourage you to flag potentially violative content by using the in-line report feature on the app or website, or by visiting this page.
That's all for now, and I'll be around for a bit to answer any questions on this announcement!
Does this mean Reddit will finally stop ignoring reports of CSAM posts (including allegedly self-posted images by minors) and communities, as you have been for the past few years? Can we get a firm commitment that things will actually change here? This was one of the main issues that led our communities to support the blackout and go dark for nearly a month.
- It took us a full twenty months to get CSAM images taken down from one of our communities; why are image posts removed by mods still visible to users, anyway?
- Other mods have told us they're having the same problems. Were those images ever taken down?
- Last year, Reddit appeared to suggest that they expect us to report individual posts in active CSAM trading rings, rather than shutting those rings down. I still do not see any viable means of reporting these; will you start accepting these reports in r/ModSupport modmail? Or can we get a proper way to report users and subreddits that post CSAM and NCIM content?
- Here are some more threads by us and other mods complaining about the same issues. In particular, a substantial number of reports are ignored the first time, and many are repeatedly ignored even after multiple requests for review.
Why can't we get you guys to take down non-AI-generated images that are posted to our communities, and which remain visible to users even after we remove them? See my other comment; the images reported by the other mod are still up right now.
We keep losing members of our team who do not wish to mod on a site that shows such absolute indifference toward the sexual abuse of minors. Can we get an official response addressing this, and what can be done to stop it from happening?
Can you explain why this was not taken seriously before? What led to these reports being ignored, repeatedly, nineteen times in total, for over a year and a half?
What, specifically, is Reddit doing to ensure that future reports are not going to be repeatedly ignored in the same way? What changes are you making to ensure our reports will reach a human being at Reddit, and not simply be kicked back to AEO dozens of times with no action taken?
Can you also explain why image content removed by mods is still visible to users? This would be a far lesser issue if we could actually remove the content while waiting for admin intervention. Why is this not being done today?
Edit: I'm also being told by another mod that not all of the content they reported in that r/ModSupport modmail screenshot has been taken down. Who should we or they reach out to in order to make this happen? Here are the relevant message links:
Can you please get this content taken down now? I have also asked the other mod to create a new ModSupport thread; I do not want to go through this again myself either, if someone does this in one of our subs.
Here's the new modmail they've sent:
https://reddit.com/message/messages/1wnsd45
We've lost a lot of folks on our team too, who no longer wish to support Reddit specifically because of the issues this mod and I have raised. A clear and official response addressing what you're doing to ensure these reports are handled in a timely manner in the future is the least you can do to reassure us that we will not have to go through this again, and to stem the flight of talented mods from our teams over moral objections to the way Reddit is being run.