Mattel’s going to make AI-powered toys, kids’ rights advocates are worried
Toy company Mattel has announced a deal with OpenAI to create AI-powered toys, but digital rights advocates have urged caution.

In a press release last week, the owner of the Barbie brand announced a “strategic collaboration” with OpenAI, the company behind ChatGPT. “By using OpenAI’s technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety,” the release said.
Details on what might emerge were scarce, but Mattel said that it only integrates new technologies into its products in “a safe, thoughtful, and responsible way”.
Advocacy groups were quick to denounce the move. Robert Weissman, co-president of public rights advocacy group Public Citizen, commented:
“Mattel should announce immediately that it will not incorporate AI technology into children’s toys. Children do not have the cognitive capacity to distinguish fully between reality and play.
Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children. It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm.”
The kids aren’t alright
Weissman isn’t alone in worrying about the effect of AI on young, developing minds. Researchers from universities including Harvard and Carnegie Mellon have warned about negative social effects, along with children’s tendency to attribute human-like properties to AI.
One such child, 14-year-old Sewell Seltzer III, took his own life after repeatedly talking to chatbots from Character.AI, which allows users to create their own AI characters.
In a lawsuit against the company, his mother Megan Garcia described how he began losing sleep and growing more depressed after using the service, to the point where he fell asleep in class. A therapist diagnosed him with anxiety and disruptive mood disorder. It emerged that he had become obsessed with an AI representing an adult character from Game of Thrones that purported to be in a real romantic relationship with him.
Past mistakes
We’re not suggesting Mattel would condone anything like that. The company cites “more than 80 years of earned trust from parents and families”, but that statement glosses over some previous missteps.
One of them was Hello Barbie, a Wi-Fi-connected doll that Mattel launched in 2015 and encouraged kids to talk to. The doll asked children personal questions about themselves and their families, sending that audio to a third-party company that used AI to generate a response. Non-profit group Fairplay, which advocates for protecting children from inappropriate technology and brand marketing, launched a campaign protesting the child surveillance. Security researchers later found vulnerabilities that would have allowed intruders to eavesdrop on that audio. Mattel pulled the toy from shelves in 2017.
Fairplay executive director Josh Golin slammed the OpenAI partnership announcement.
“Apparently, Mattel learned nothing from the failure of its creepy surveillance doll Hello Barbie a decade ago and is now escalating its threats to children’s privacy, safety and well-being.
Children’s creativity thrives when their toys and play are powered by their own imagination, not AI. And given how often AI ‘hallucinates’ or gives harmful advice, there is no reason to believe Mattel and OpenAI’s ‘guardrails’ will actually keep kids safe.”
Mattel lost parents’ trust in another incident back in November 2024, when a packaging mistake sent owners of its ‘Wicked’ dolls to an adult movie website (Wicked Pictures) instead of a promotional landing page for the Wicked movie.
Incidents like these show that even with the best intentions in the world, companies can make mistakes.
Ultimately, it’s up to parents to decide whether to expose their children to AI-powered toys. It may be inevitable that AI will reach every corner of our lives, but is it ready and polished enough to be used on our children?