A damning new report from Common Sense Media warns that xAI’s chatbot Grok poses significant risks to children and teenagers, citing inadequate age verification, weak safety guardrails, and frequent generation of inappropriate content. The findings arrive as regulators worldwide intensify scrutiny of AI systems’ impact on young users, forcing the industry to confront fundamental questions about balancing innovation with protection.
Systemic Safety Failures Exposed
Common Sense Media’s assessment, conducted between November and January, found Grok’s “Kids Mode” essentially non-functional. The nonprofit found that users aren’t required to verify their age, allowing minors to misrepresent it, and that the system fails to use context clues to identify teenagers. Even with Kids Mode enabled, Grok produced harmful content, including gender and race biases, sexually violent language, and detailed explanations of dangerous ideas.
“We assess a lot of AI chatbots at Common Sense Media, and they all have risks, but Grok is among the worst we’ve seen,” said Robbie Torney, head of AI and digital assessments at the nonprofit. He noted that while safety gaps are common across chatbots, Grok’s failures intersect in particularly troubling ways: “Kids Mode doesn’t work, explicit material is pervasive, and everything can be instantly shared to millions of users on X.”
Global Regulatory Response Intensifies
The report’s release coincides with escalating international regulatory action. The European Commission has launched an investigation into X over concerns that Grok was used to create sexualized deepfake images of real people. If found in violation of the EU’s Digital Services Act, X could face fines up to 6% of its global annual turnover. This follows similar announcements from UK watchdog Ofcom and temporary bans in Indonesia and Malaysia.
California Senator Steve Padilla, one of the lawmakers behind the state’s AI chatbot regulations, told TechCrunch: “Grok exposes kids to and furnishes them with sexual content, in violation of California law. This is precisely why I introduced Senate Bill 243… No one is above the law, not even Big Tech.”
Industry-Wide Safety Measures Emerge
Other major AI companies are implementing more robust safety measures in response to growing concerns about teen protection. Meta has announced a global pause on teen access to AI characters across all its apps while developing a specially tailored version with enhanced parental controls. The company faces legal challenges, including an upcoming trial in New Mexico where it’s accused of failing to protect children from sexual exploitation.
OpenAI has rolled out new teen safety rules for ChatGPT, including parental controls and an age prediction model that estimates whether an account likely belongs to a user under 18. Character.AI, meanwhile, restricted open-ended chatbot conversations for users under 18 in October, removing those functions for minors entirely.
Broader Implications for AI Development
The Grok controversy highlights deeper challenges in AI development. Anthropic CEO Dario Amodei recently warned in a nearly 20,000-word essay about catastrophic risks from powerful AI systems, arguing current safeguards are inadequate and humanity lacks maturity to handle such power. “Humanity is about to be handed almost unimaginable power and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it,” Amodei wrote.
Meanwhile, concerns about AI systems incorporating biased or inaccurate information are growing. ChatGPT has been found pulling answers from Elon Musk’s Grokipedia, an AI-generated encyclopedia criticized for conservative bias and inaccuracies. GPT-5.2 cited Grokipedia nine times in response to various queries, though it avoided citing the encyclopedia on topics where its inaccuracies are widely known.
Business Model Questions Emerge
The Grok situation raises difficult questions about AI business models. After an outcry over Grok being used to generate illegal child sexual abuse material, xAI restricted the chatbot’s image generation and editing to paying X subscribers only. Torney criticized this approach: “When a company responds to the enablement of illegal child sexual abuse material by putting the feature behind a paywall rather than removing it, that’s not an oversight. That’s a business model that puts profits ahead of kids’ safety.”
This contrasts with approaches from companies like Sparkli, founded by former Google employees, which is building an AI-powered interactive learning app for children aged 5-12 with built-in safety measures for sensitive topics. The startup raised $5 million in pre-seed funding and is piloting in schools, demonstrating alternative approaches to AI development for young users.
The Path Forward
As regulatory pressure mounts, AI companies face critical decisions about how far to prioritize safety over growth. The Common Sense Media report found that Grok’s AI companions enable erotic roleplay and romantic relationships, and because the chatbot appears ineffective at identifying teenagers, children can easily fall into these scenarios. The platform also gamifies interactions through “streaks” that unlock companion clothing and relationship upgrades.
With investigations underway across multiple continents and legislative action accelerating, the AI industry must develop more robust age verification systems, implement meaningful parental controls, and prioritize safety over engagement metrics. The coming months will reveal whether companies can self-regulate effectively or whether governments will impose stricter requirements that could reshape the entire AI landscape.