Netflix CEO Ted Sarandos recently spoke about how artificial intelligence can help creators tell stories “better, faster, and in new ways.” The promise of AI as a creative accelerator sounds inspiring on paper, and platforms are already leaning into that vision. Yet while industry leaders highlight the potential, a harsher reality is unfolding on social media feeds.
OpenAI’s latest video creation model, Sora 2, is being used to circulate a wave of harmful content across Instagram, TikTok, and YouTube. The model’s hyper-realistic output has made it effortless for people to produce videos that target and mock overweight individuals as well as communities of color. The rise of this content reflects a broader cultural problem. AI is becoming a tool not only for innovation but also for amplifying cruelty.
In recent weeks, several clips created with Sora 2 have gone viral for all the wrong reasons. One shows an overweight woman bungee jumping as the bridge supposedly collapses beneath her. Another features a Black woman falling through the floor of a fictional KFC, combining racism and fat-shaming into a single piece of “comedy.” Other videos depict delivery drivers crashing through porches or characters ballooning in size after eating.

Many viewers believe these clips are real, which makes the situation even more troubling. Hyper-realistic AI visuals blur the line between digital fabrications and actual events. When the content is built around stereotypes, the effect is not just harmful but socially corrosive. A single viral moment turns into a template for countless copycat videos chasing views and engagement.
The growing trend highlights a deeper ethical challenge. AI tools like Sora 2 have dramatically lowered the barrier to producing high-quality video. Skilled editing once required time, tools, and expertise; now anyone can generate near-cinematic visuals in seconds, even if their intent is rooted in harassment or hate. And while OpenAI advertises strong guardrails, recent examples show clear gaps in enforcement.
This dynamic is not unique to Sora. Similar concerns have surfaced around other generative models as well. Researchers and journalists have repeatedly warned that AI can amplify misinformation, prejudice, and digitally generated cruelty. Publications like The Verge and Wired have covered how quickly hateful AI content spreads compared to organic posts, particularly on platforms optimized for engagement.
The implications go far beyond trolling. Young users in particular struggle to distinguish AI fabrications from reality. Repeated exposure to dehumanizing portrayals shapes assumptions, biases, and humor patterns. When these videos go viral, platform algorithms reward and replicate harmful stereotypes at scale.

Despite the visibility of these clips, OpenAI has not made a public statement addressing this new wave of fatphobic and racist videos generated through Sora 2. The situation has prompted discussions across policy circles about how responsibility should be shared between developers, platforms, and users. Regulators in several countries are already examining whether current AI safeguards are enough as these models become more capable and accessible.
As Sora 2 and similar tools evolve, the central question comes into sharper focus: innovation is racing ahead while cultural and ethical guardrails lag behind. With AI creativity now available at the tap of a screen, society is being pushed to confront how to prevent new forms of digital “creativity” from eroding dignity, empathy, and basic respect.