Zuckerberg's AI Has Reportedly Been Horrifically Inappropriate With Children
Over the weekend, the Wall Street Journal reported that Meta staffers had raised concerns over underage users being exposed to sexually explicit discussions by the company's AI-powered bots.
The Facebook owner's chatbots had reportedly indulged in explicit "romantic role-play," including the sharing of selfies and live voice conversations — features that were readily available to underage users.
The news shows that Meta CEO Mark Zuckerberg's obsession with making bots as engaging as possible is coming at the expense of effective ethical guardrails, a fast and loose approach that could easily expose minors to inappropriate content.
Even chatbots modeled after Princess Anna from the animated blockbuster "Frozen" reportedly had romantic encounters with users, a trend that seemingly had people at Disney horrified.
"We did not, and would never, authorize Meta to feature our characters in inappropriate scenarios and are very disturbed that this content may have been accessible to its users — particularly minors — which is why we demanded that Meta immediately cease this harmful misuse of our intellectual property," a spokesperson told the WSJ.
AI versions of celebrities are also misbehaving, with the newspaper reporting that a bot based on wrestling personality John Cena engaged in a "graphic sexual scenario" with a user who identified as a 14-year-old girl.
Meta claims it has addressed the issue by "taking additional measures" to make it as hard as possible for bad actors to manipulate "our products into extreme use cases."
The WSJ's findings are only the tip of the iceberg. 404 Media reported on Tuesday that Meta's AI Studio app was routinely allowing users to create bots that claimed they were licensed therapists, further highlighting the dangers of the tech. The bots were even caught generating entirely made-up license numbers for a number of US states, as well as hallucinating the names of therapy businesses.
Some user-created bots also claimed to be veterinarians, suggesting that users may already be taking health advice for their pets from factually challenged AI chatbots.
Despite the substantial ethical implications, Meta is doubling down. Today, the company released a ChatGPT-like app — just hours after 404 published its story — that shows your friends how you're making use of Meta's AI features.
Zuckerberg, for his part, has personally overseen the loosening of restrictions, according to the WSJ, chastising managers for not rolling out chatbot features fast enough. He even pushed to have the chatbots mine personal profile data to keep conversations going.
Is this Meta tripping over its own feet as it races to keep up with the likes of OpenAI and Google? The company has historically struggled in a rapidly changing AI landscape. Earlier this month, Meta was accused of posting misleading benchmark results for its latest Llama 4 model, highlighting an increasingly heated head-to-head.
Zuckerberg's well-documented "move fast and break things" approach could have some serious ethical drawbacks, highlighting some persistent challenges when it comes to content moderation in the age of AI.
Whether AI characters themed after "Frozen" roleplaying romantic encounters with underage users will be enough to convince Meta to take a step back and reevaluate remains unclear.
"That effort would really require pausing and taking a step back," University of Michigan researcher Lauren Girouard-Hallam told the WSJ. "Tell me what mega company is going to do that work."
More on AI chatbots: Professors Staffed a Fake Company Entirely With AI Agents, and You'll Never Guess What Happened