In a development that could reshape how businesses approach AI adoption, a proposed class action lawsuit against Perplexity, Google, and Meta alleges that Perplexity's "Incognito Mode" is a "sham" that fails to keep user conversations from being shared with the two advertising giants. The lawsuit, filed by an anonymous user identified as John Doe, claims that "enormous volumes of sensitive information," including financial data, health queries, and personal identifiers, were shared without user knowledge or consent between December 2022 and February 2026.
The Core Allegations
According to the complaint, Perplexity allegedly embedded ad trackers, including the Meta (Facebook) Pixel, Google Ads, and Google DoubleClick, that transmitted complete chat transcripts to third parties. Even users who paid for premium services and activated "Incognito Mode" reportedly had their conversations shared alongside their email addresses and other identifying information. The lawsuit alleges this conduct violates both state and federal privacy laws, with potential statutory damages exceeding $5,000 per violation.
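To make the alleged mechanism concrete, the sketch below shows, under stated assumptions, how a third-party tracking snippet embedded in a chat interface could forward conversation text and an email address to an outside endpoint. The function names, payload fields, and URL are hypothetical illustrations, not code from Perplexity, Meta, or Google, and the complaint does not describe the integration at this level of detail.

```typescript
// Hypothetical illustration only: how a third-party tracking snippet wired into a
// web chat UI could transmit conversation text and user identifiers off-site.
// The endpoint, payload shape, and function names are assumptions for this sketch.

interface ChatEvent {
  userEmail: string;   // identifying information of the kind the suit says was shared
  transcript: string;  // the message or full conversation text
  page: string;        // the page the user was on
}

function sendToThirdPartyTracker(event: ChatEvent): void {
  // navigator.sendBeacon is a standard browser API commonly used by analytics
  // scripts because it survives page unloads; the URL here is a placeholder.
  const payload = JSON.stringify({
    em: event.userEmail,
    custom_data: { content: event.transcript, url: event.page },
  });
  navigator.sendBeacon("https://tracker.example.com/collect", payload);
}

// If a tracker is hooked into the chat flow like this, every message leaves the
// first-party context regardless of any "incognito" toggle shown in the UI.
function onUserMessage(email: string, message: string): void {
  sendToThirdPartyTracker({
    userEmail: email,
    transcript: message,
    page: window.location.href,
  });
}
```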
Broader Industry Context
This lawsuit emerges against a backdrop of increasing privacy concerns across the AI industry. Just weeks before this case, Anthropic’s Claude Code CLI suffered a significant source code leak when nearly 512,000 lines of TypeScript code were accidentally exposed through an npm packaging error. While Anthropic stated this was “not a security breach” and involved “no sensitive customer data,” the incident highlighted how even established AI companies can struggle with data protection.
Meanwhile, privacy-focused alternatives are gaining traction. DuckDuckGo’s Duck.ai chatbot saw visits surge 300% to 11.1 million in February 2025, according to ZDNET analysis, with users specifically citing privacy concerns as their motivation. Unlike proprietary chatbots, Duck.ai anonymizes queries and prevents third parties from accessing chat data through agreements with model providers like Anthropic and OpenAI.
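For readers curious how that anonymization works in practice, the following is a minimal sketch of the proxy pattern Duck.ai describes: the service terminates the user's connection, discards identifying details, and forwards only the query text to the model provider. The endpoint URL, request shape, and server framing are assumptions for illustration, not DuckDuckGo's actual implementation.

```typescript
// Minimal sketch of an anonymizing proxy (assumed architecture, not Duck.ai's code):
// the proxy accepts the user's request and forwards only the query text upstream.

import http from "node:http";

const PROVIDER_URL = "https://api.provider.example/v1/chat"; // placeholder endpoint

const server = http.createServer(async (req, res) => {
  let body = "";
  for await (const chunk of req) body += chunk; // read the incoming JSON body

  // Forward only the query text; the user's IP address, cookies, and headers
  // stop at the proxy and never reach the model provider.
  const upstream = await fetch(PROVIDER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: JSON.parse(body).prompt }), // assumed field name
  });

  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(await upstream.text()); // nothing about the requester is logged or stored
});

server.listen(8080);
```

The design choice that matters is that nothing identifying leaves the proxy; the provider agreements mentioned above then govern what the model provider may retain on its side.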
Business Implications
For enterprise users, the Perplexity allegations raise critical questions about AI adoption strategies. Many businesses have encouraged employees to use AI tools for sensitive tasks like legal research, financial analysis, and healthcare inquiries. The lawsuit specifically mentions users relying on Perplexity for tax management, investment decisions, and medical research – all areas where data confidentiality is paramount.
“No reasonable person would have expected that Perplexity would share complete transcripts of their conversations,” the complaint states, suggesting that disclosure of these practices would fundamentally change how users interact with AI systems. This comes as companies increasingly integrate AI into their workflows, often without fully understanding the data privacy implications.
Technical and Regulatory Challenges
The lawsuit alleges that Perplexity's privacy policy is difficult to find: users must search for it rather than finding it linked from the homepage. Even when located, the policy reportedly does not name specific trackers, though it warns that attempts to block them could affect services. Google's response to the allegations shifts responsibility onto the businesses deploying its tools: "Businesses manage the data they collect and are responsible for informing users about it."
This case follows a pattern of increasing regulatory scrutiny. The proposed class covers certain Perplexity users nationwide, with a separate subclass for California users pursuing additional claims under that state’s stringent privacy laws. The timing coincides with broader discussions about AI regulation, including the WTO’s recent warning about how energy demands and data center growth could impact AI industry development.
Looking Forward
As AI becomes more integrated into business operations, the Perplexity lawsuit serves as a critical case study in balancing innovation with privacy protection. The outcome could influence how AI companies design their privacy features and disclosure practices, potentially leading to more transparent data handling standards across the industry.
For now, businesses using AI tools should carefully review their data protection measures and consider whether their current AI solutions adequately protect sensitive information. As one privacy expert observed of the growing concern: "It's an issue that's been around for a while, but I definitely feel like a lot of folks are taking a look at it with fresh eyes and urgency."
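As one concrete, and hypothetical, example of such a review for teams that embed AI chat into their own web applications: a Content-Security-Policy connect-src directive limits which hosts the browser may send data to, so an embedded tracker cannot silently post conversation content to an ad network. The Express middleware and allowed host below are illustrative placeholders, not a recommendation tied to any product named in the suit.

```typescript
// Sketch of one defensive review step (assumed setup): restrict outbound browser
// requests from a self-hosted AI chat page to known, approved hosts via CSP.

import express from "express";

const app = express();

app.use((_req, res, next) => {
  // Allow network requests only to the app's own origin and its model API;
  // requests to any other host (e.g., analytics or ad endpoints) are blocked.
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; connect-src 'self' https://api.your-model-vendor.example"
  );
  next();
});

app.listen(3000);
```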

