EU Probes X's Grok AI Over Sexual Deepfakes as Tech Giants Face Mounting Regulatory Scrutiny

Summary: The European Commission has launched an investigation into X's Grok AI over sexual deepfake concerns, which could result in fines of up to 6% of X's global turnover. The regulatory action coincides with broader AI challenges: ChatGPT surfacing answers from the bias-prone Grokipedia, industrial humanoid robots operating at only 30-50% of human efficiency, escalating cybersecurity threats predicted for 2026, and growing public backlash against AI infrastructure projects. The article examines how rapid AI advancement is creating complex regulatory, ethical, and practical challenges across multiple sectors.

The European Commission has launched a formal investigation into Elon Musk’s X platform over concerns that its AI tool Grok was used to create sexualized images of real people. This regulatory action follows similar scrutiny from the UK’s Ofcom and comes just weeks after X received a €120 million fine for deceptive verification practices. If found in violation of the Digital Services Act (DSA), X could face penalties of up to 6% of its global annual turnover.

Regulatory Pressure Intensifies

EU Executive Vice-President Henna Virkkunen called the sexual deepfakes a “violent, unacceptable form of degradation,” emphasizing that the investigation will determine whether X treated European citizens’ rights as “collateral damage of its service.” The probe extends beyond deepfakes to include X’s recommender algorithms, which have been under investigation since December 2023.

X’s Safety account previously stated the platform had stopped Grok from digitally altering pictures to remove clothing in “jurisdictions where such content is illegal.” Campaigners argue, however, that the capability should never have existed. The timing is particularly notable given Grok’s recent claim of generating over 5.5 billion images in just 30 days.

Broader AI Challenges Beyond Content Moderation

While X faces regulatory heat, other AI developments reveal deeper systemic challenges. A TechCrunch report reveals that ChatGPT is now pulling answers from Musk’s Grokipedia, an AI-generated encyclopedia criticized for conservative bias and factual inaccuracies. GPT-5.2 cited Grokipedia nine times in response to various queries, though it avoided using the source on topics where its inaccuracies are widely known.

An OpenAI spokesperson stated the company aims to draw from “a broad range of publicly available sources and viewpoints,” but this incident highlights how AI systems can inadvertently amplify biased or inaccurate information. Grokipedia has faced criticism for claiming pornography contributed to the AIDS crisis and offering ideological justifications for slavery.

Industrial Realities: Robots Still Lag Human Efficiency

Meanwhile, in the industrial sector, UBTech, a leading Chinese humanoid robot maker, has disclosed that its Walker S2 robots are only 30-50% as efficient as human workers at specific tasks such as stacking boxes and quality control. Despite this limitation, manufacturers are racing to order the robots to avoid falling behind competitors.

“You can imagine…if Tesla has the advantage of deploying their own human robots into the manufacturing line, that means maybe BYD, they are staying behind,” said Michael Tam, Chief Brand Officer at UBTech. The company aims to boost robot performance to 80% of human efficiency by 2027 and targets producing 10,000 robots this year.
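The gap between today's performance and the 2027 target is worth making concrete. A quick back-of-envelope calculation (the figures are from the article; the framing as a required multiplier is mine):

```python
# How much Walker S2 throughput would need to improve to reach
# UBTech's stated target of 80% of human efficiency by 2027,
# starting from the reported 30-50% range.
current_range = (0.30, 0.50)  # reported efficiency vs. a human worker
target = 0.80                 # UBTech's 2027 goal

for current in current_range:
    factor = target / current
    print(f"From {current:.0%} of human efficiency: "
          f"a {factor:.1f}x throughput gain is needed")
```

In other words, hitting the target means roughly doubling (or nearly tripling, at the low end) per-robot output in about two years, which underlines how ambitious the 2027 goal is.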

Cybersecurity Threats Loom Large

Beyond content and industrial applications, cybersecurity experts warn that 2026 could be a tipping point for AI-enabled threats. ZDNET’s analysis predicts unprecedented damage from AI-powered cyberattacks, with global ransomware damage expected to increase 30% from $57 billion in 2025 to $74 billion in 2026.
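The cited growth figure checks out against the dollar amounts. A quick sanity check (numbers taken directly from the article; the rounding is mine):

```python
# Verify the projected ~30% rise in global ransomware damage
# implied by the $57B (2025) -> $74B (2026) figures.
damage_2025 = 57  # USD billions, per the cited analysis
damage_2026 = 74  # USD billions, projected

growth = (damage_2026 - damage_2025) / damage_2025
print(f"Projected increase: {growth:.1%}")  # ~29.8%, i.e. the ~30% cited
```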

“2025 marked the first large-scale AI-orchestrated cyber espionage campaign, where Anthropic’s Claude was used to infiltrate global targets,” said Floris Dankaart of NCC’s Managed Extended Detection and Response Group. “This trend will continue in 2026, and AI’s use as a sword will be followed by an increase in AI’s use as a shield.”

Infrastructure Backlash and Industry Response

The AI boom faces another challenge: growing public opposition to data center projects. Major operators like Digital Realty, QTS, and NTT Data are planning coordinated lobbying campaigns to counter backlash over rising energy costs, water consumption, and air pollution. Over two dozen projects were blocked or delayed in January 2025 alone.

“Nimbyism is coming to our space real fast,” said Andrew Power, CEO of Digital Realty. “There’s a tremendous amount of misperception that is slowing development.” Industry leaders argue data centers are being unfairly blamed for energy price increases resulting from grid under-investment.

Balancing Innovation with Responsibility

The EU’s investigation into X represents more than another regulatory action: it is part of a broader pattern in which AI capabilities are outpacing governance frameworks. From deepfake generation to biased information propagation and cybersecurity vulnerabilities, the technology presents complex challenges that require nuanced solutions.

As Regina Doherty, a member of the European Parliament, noted: “The European Union has clear rules to protect people online. Those rules must mean something in practice, especially when powerful technologies are deployed at scale. No company operating in the EU is above the law.”

The coming months will reveal whether voluntary industry measures can address these concerns or whether more stringent regulation becomes inevitable. What’s clear is that as AI capabilities expand, so too must the frameworks governing their responsible deployment.
