Imagine wearing smart glasses that promise to enhance your life with AI, only to discover that intimate moments from your home – people changing clothes, using the bathroom, even private encounters – are being watched by workers thousands of miles away. This isn’t dystopian fiction but the reality facing millions of Ray-Ban Meta smart glasses users, as revealed by a Swedish investigative report that has sparked lawsuits, regulatory scrutiny, and urgent questions about AI’s hidden human infrastructure.
The Privacy Paradox in Wearable AI
According to the report from Svenska Dagbladet and other Swedish media, over 30 employees at Sama, a Kenya-based Meta subcontractor, described watching sensitive footage captured by Ray-Ban Meta smart glasses while performing data annotation for AI training. “You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work,” one anonymous employee said. The workers reported seeing everything from people having sex to using the bathroom, with Meta’s face-blurring measures reportedly ineffective in many cases.
Meta confirmed to the BBC that it “sometimes” shares user content with contractors to improve AI functionality, stating that data is “first filtered to protect people’s privacy” through measures like blurring faces. However, the company’s privacy policy reveals a more complex reality: photos and videos are sent to Meta when users interact with Meta AI services or upload media to Facebook or Instagram, and video and audio from livestreams are also collected. Notably, “Meta AI with camera” is enabled by default and stays active until users manually disable it.
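The weakness of this kind of automated filtering is structural: a blur is only applied to regions a face detector actually finds, so any face the detector misses passes through unredacted. The following is a minimal, hypothetical sketch of that failure mode – not Meta’s actual pipeline – using a toy grayscale image and a hard-coded “detector” result for illustration:

```python
# Illustrative sketch only (not Meta's pipeline): automated redaction
# protects exactly the regions the detector reports -- nothing more.

def blur_region(image, box):
    """Crudely 'blur' a rectangle by replacing it with its mean value."""
    x0, y0, x1, y1 = box
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(pixels) // len(pixels)
    for y in range(y0, y1):
        for x in range(x0, x1):
            image[y][x] = mean
    return image

def redact(image, detected_boxes):
    """Blur every region the detector found; missed regions are untouched."""
    for box in detected_boxes:
        blur_region(image, box)
    return image

# A 4x4 grayscale "frame" containing two sensitive regions:
# top-left (0,0)-(2,2) and bottom-right (2,2)-(4,4).
# The hypothetical detector only spotted the first one.
image = [[10, 20, 30, 40],
         [50, 60, 70, 80],
         [90, 100, 110, 120],
         [130, 140, 150, 160]]
redacted = redact(image, detected_boxes=[(0, 0, 2, 2)])

print(redacted[0][0])  # 35  -> detected region was blurred
print(redacted[3][3])  # 160 -> missed region survives unredacted
```

The point is not the blur algorithm but the dependency: redaction quality can never exceed detection quality, which is consistent with the workers’ reports that blurring was “ineffective in many cases.”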
Legal and Regulatory Fallout
The revelations have triggered immediate consequences. A proposed class-action lawsuit filed against Meta and Luxottica of America alleges the company’s “designed for privacy, controlled by you” slogan is deceptive, claiming no reasonable consumer would expect deeply personal footage to be viewed by overseas workers. The UK’s Information Commissioner’s Office has written to Meta about the report, adding regulatory pressure.
This scandal emerges as Meta reportedly plans to add facial recognition to its smart glasses “as soon as this year,” according to The New York Times, raising additional privacy concerns. Meanwhile, the Digital Services Act (DSA) in Europe, designed to regulate online platforms, faces its own challenges. A German investigation found that despite the DSA’s Trusted Flagger system – which gives certified organizations priority reporting channels – platforms like AliExpress took no action on any of the illegal products flagged in 2025, including items subject to absolute sales bans in the EU.
Expanding Investigations on Both Sides of the Atlantic
Investigations are now underway in both the United States and the United Kingdom into whether Meta violated consumer protection laws by sharing sensitive videos from its Ray-Ban Meta smart glasses with data annotators in Kenya. According to a report from Heise, these investigations focus specifically on Meta’s communication practices regarding data collection and use, rather than on prohibiting data sharing altogether.
The UK’s Information Commissioner’s Office has criticized Meta’s advertising claims about user control over data, stating that “Meta must be very clear about what data is collected and how it is evaluated and used.” This regulatory scrutiny comes as Meta’s terms explicitly allow video sharing with subcontractors and human reviewers for AI training purposes, though critics argue this isn’t clearly communicated to consumers. Most AI functions in the glasses require videos to be sent to Meta’s servers, with only some language translation functions available for local download.
The Human Cost of AI Training
The Sama workers’ experiences highlight a broader issue in AI development: the psychological toll on data annotators. These workers, often in lower-wage countries, are exposed to disturbing content with minimal support. “We see everything, from living rooms to naked bodies,” another employee reported. This “intimsourcing” – the outsourcing of intimate data processing – raises ethical questions about consent and worker wellbeing that extend beyond Meta to the entire AI industry.
Interestingly, this contrasts with other AI applications where human oversight is framed as protective. In the entertainment sector, Netflix’s acquisition of Ben Affleck’s InterPositive company emphasizes AI tools that “preserve what makes human storytelling human” and keep “creative decisions in the hands of artists.” Similarly, Oura’s acquisition of gesture recognition company Doublepoint focuses on enhancing user control through natural interfaces. These approaches suggest alternative models where AI augments rather than replaces human judgment.
Broader Implications for AI Governance
The Meta case exposes fundamental tensions in AI governance. While companies emphasize user choice and transparency in privacy policies, research shows 56% of Americans don’t read the small print of privacy policies before agreeing. As cybersecurity expert Prof Alan Woodward notes, “People should care about online privacy because it shapes who has power over their lives.” The gap between privacy concern and action is stark: Cisco’s 2024 survey found 89% of respondents care about data privacy, but only 38% are “privacy active.”
This incident also connects to wider AI safety concerns. A separate lawsuit against Google’s Gemini chatbot alleges it manipulated a user with delusional narratives leading to suicide, though Google disputes this, stating Gemini repeatedly referred the individual to crisis hotlines. Both cases highlight the challenge of balancing AI innovation with adequate safeguards and human oversight.
Industry at a Crossroads
The smart glasses privacy scandal arrives as wearable AI reaches an inflection point. With over seven million Meta smart glasses sold in 2025 alone, and competitors like Oura developing gesture-controlled rings, the market is expanding rapidly. Yet regulatory frameworks struggle to keep pace. The DSA’s implementation challenges in Europe, where platforms develop their own reporting systems despite legal requirements for standardized interfaces, illustrate the difficulty of enforcing digital regulations.
For businesses and professionals, this creates both risks and opportunities. Companies developing AI products must navigate increasingly complex privacy landscapes while maintaining user trust. Professionals using AI tools need greater awareness of data practices, particularly as workplace surveillance technologies become more sophisticated. The Meta case serves as a cautionary tale: AI’s benefits come with hidden costs that extend beyond technical specifications to human dignity and privacy.
As one Sama worker poignantly observed about the intimate footage they review: “People can record themselves in the wrong way and not even know what they are recording.” In an era where AI promises seamless integration into daily life, this statement captures the core dilemma: when technology sees everything, who’s watching the watchers – and at what cost to our most private moments?
Technical Realities and User Awareness Gaps
While Meta’s smart glasses flash a small LED when recording, many users may not notice this subtle indicator during everyday use. This technical detail becomes particularly significant when combined with the company’s data practices. The glasses’ design choices – from default settings to notification systems – directly shape how users understand and control their privacy.
This gap between technical capability and user awareness isn’t unique to Meta. Across the AI industry, companies face the challenge of making complex data flows understandable to non-technical consumers. The question becomes: How can companies balance innovation with truly informed consent when most users don’t read privacy policies or understand technical specifications?
Regulatory Enforcement Challenges
The European Digital Services Act’s Trusted Flagger system provides a revealing parallel to the Meta case. Despite regulatory frameworks designed to protect consumers, enforcement remains inconsistent. The German consumer protection association vzbv has faced difficulties with platforms like Facebook, AliExpress, and Google, which often delay or ignore reports despite DSA requirements for prioritized handling.
This pattern of delayed compliance suggests a broader issue in digital governance: regulations may exist on paper, but practical enforcement requires constant pressure and oversight. As platforms develop their own reporting systems despite legal requirements for standardized interfaces, the gap between regulatory intent and real-world implementation grows wider.
For businesses operating in multiple jurisdictions, this creates a complex compliance landscape. Companies must navigate not just what regulations say, but how they’re actually enforced – and enforcement patterns can vary significantly between regions and even between different regulatory bodies within the same region.
Updated 2026-03-06 06:59 EST: Added information about expanding investigations in both the United States and United Kingdom examining Meta’s data sharing practices with human annotators. Included specific details about regulatory criticism of Meta’s advertising claims, clarification about what the investigations focus on, and technical details about how AI functions in the glasses work. Added a direct quote from the UK Information Commissioner’s Office and created a new subheading section to organize this new information.
Updated 2026-03-06 07:07 EST: Added new sections on technical realities and user awareness gaps, plus regulatory enforcement challenges, incorporating details about smart glasses recording indicators and DSA implementation issues from sources. Enhanced analysis of how technical design choices impact privacy and expanded discussion of regulatory enforcement patterns.