Nonprofit Urges Shoppers to Avoid AI-Enabled Toys
A growing number of child safety advocates are sounding the alarm about AI toys, warning that products equipped with chatbots and interactive technologies may expose children to privacy risks, inappropriate content, and unhealthy emotional attachment. Fairplay, a nonprofit focused on children’s safety, issued an advisory on Thursday titled “AI Toys are NOT safe for kids,” urging gift givers to avoid buying these products during the holiday shopping season.
The group argues that AI toys mimic human behavior in ways that can blur boundaries for young children, who may not understand the difference between a machine and a trusted friend. The advisory was endorsed by more than 150 experts and organizations, including author and MIT professor Sherry Turkle, pediatrician Jenny Radesky, the Social Media Victims Law Center, and the International Play Association USA.
“It’s ridiculous to expect young children to avoid potential harm here,” said Rachel Franz, a Fairplay program director. She warned that AI toys can collect sensitive data, undermine healthy development, and displace the direct human interactions children need.
Reports Cite Data Collection and Inappropriate Responses
Fairplay’s warning follows similar concerns raised by the Public Interest Research Group in its annual “Trouble in Toyland” report. PIRG found that some AI toys encourage conversations about sexually explicit topics, lack meaningful parental controls, and gather extensive data on their young users. Examples include the collection of children’s voices, names, birth dates, likes and dislikes, and even details about friends or family.
“Because they’re connected to the internet, anything is available,” said PIRG report co-author Teresa Murray. “Who knows what those toys might start talking to your children about?”
Industry Response Highlights Safety Measures
Toy companies and AI developers have defended their products, emphasizing safety features and strict privacy policies. OpenAI recently suspended the developer of the AI teddy bear Kumma after PIRG researchers reported inappropriate guidance and conversations. The company said it enforces strict rules prohibiting any use of its technology that could endanger or sexualize minors.
OpenAI’s tools are embedded in a range of consumer products, including Loona, an AI robot pet, and the company has a partnership with Mattel to develop future AI experiences aimed at older users and families rather than children under 13.
Manufacturers behind toys such as Miko, Loona Petbot, and Gabbo say they prioritize child safety. Miko’s makers note that facial recognition is optional and processed locally on the device, while Gabbo’s parent company points to safety guardrails designed to shield kids and to companion apps that give parents oversight.
Calls for Stronger Oversight and Informed Shopping
The Toy Association, which represents major manufacturers, said toys sold by reputable companies must comply with over 100 federal safety standards, including online privacy protections for minors. The group urged families to shop from trusted brands and retailers and highlighted the importance of reviewing available safety controls before buying AI-connected products.
Even with industry assurances, advocacy groups maintain that AI-enabled toys pose unique risks that traditional safeguards cannot fully address. Their message for the holiday season is clear: parents should exercise caution and weigh whether a connected toy is worth the potential trade-offs.
