AI-Generated Strangulation Videos Flood Social Media: OpenAI's Sora Under Fire (2025)

Imagine scrolling through your social media feed and stumbling on terrifying, AI-generated videos of young women and girls in life-threatening situations. Despite strict usage policies from companies like OpenAI, their tools are still being misused to create and spread violent content, raising hard questions about who should set and enforce AI's boundaries. These are not harmless animations; they are fueling real debates about ethics, safety, and the potential for harm in the digital age.

OpenAI's Sora 2, a cutting-edge video generation model, has come under scrutiny after videos depicting women and girls being strangled spread across platforms like TikTok and X (formerly Twitter). The clips, created by users with the tool, plainly violate OpenAI's own rules against producing violent or harmful media. It is a stark example of how generative video systems, designed to create striking visuals, can be turned to disturbing ends when safeguards fail.

For instance, one prolific account on X has uploaded dozens of these videos since mid-October. Each typically runs about 10 seconds and centers on a 'teenage girl' in distress: she is shown crying and fighting back until her eyes flutter shut and she collapses. The titles are sensational and alarmingly specific, such as 'A Teenage Girl Cheerleader Was Strangled As She Was Distressed,' 'Prep School Girls Were Strangled By The Murderer!', and 'Man Strangled a High School Cheerleader with a Purse Strap Which Is Crazy.' These are not isolated uploads; they point to a broader pattern of AI tools being exploited to generate graphic content that could desensitize viewers or inspire real-world imitation.

AI here is like a powerful paintbrush: it can create beautiful art, but in the wrong hands it can produce nightmares. OpenAI has policies meant to prevent such abuse, but enforcement lags, often relying on after-the-fact reporting. That raises the question of whether AI companies should be held liable for how their technology is used, or whether policing falls to platforms and users. The point is contested: some argue for stricter built-in controls that block harmful prompts at generation time, while others worry that over-regulation could stifle innovation.


