GUIDES

ZDNet Investigation Identifies Five Privacy Risks in AI Chatbot Conversations Most Users Overlook

megaone_admin · Mar 29, 2026 · 2 min read
Engine Score 8/10 — Important

This story provides highly actionable advice on chatbot privacy, relevant to a vast user base of individuals and companies. Its practical guidance on how to mitigate past oversharing makes it particularly valuable for current AI users.


A ZDNet investigation published on March 28, 2026 details five categories of privacy risk that users face when sharing personal information with AI chatbots, along with practical steps to reduce exposure. The analysis comes as chatbot adoption continues to accelerate while federal privacy regulation remains absent.

The first risk involves memorization and surveillance potential. AI models may memorize personal information that could be extracted verbatim or near-verbatim, a core complaint in The New York Times lawsuit against OpenAI. The investigation references Anthropic’s recent clash with the Department of Defense, where Anthropic objected to its product being used for mass domestic surveillance, which the article frames as a tacit admission that these models can be used for that purpose.

The second risk concerns default privacy settings. Platform privacy controls are often buried and difficult to navigate. While Claude offers incognito chat and ChatGPT offers Temporary Chats, these are per-session toggles rather than fixed defaults. Users may also lose track of whether they are on a personal or work account, potentially sharing deeply personal information through an employer-managed AI where there is no expectation of employee privacy.

Third, emotional context in chatbot conversations reveals far more than search queries. The investigation draws a comparison between a single search query for a suicide prevention hotline and a thousand-line transcript of someone’s innermost thoughts shared with an AI, arguing the latter creates an unprecedented data exposure surface.

Fourth, human workers may review conversations. Some platforms use humans for reinforcement learning from human feedback, and flagging a chatbot response can trigger human review. The boundary between AI-only processing and human access is not always clear to users.

Fifth, policy is lagging behind the technology. There is no federal regulation in the United States governing how AI companies store sensitive data. The California Consumer Privacy Act provides some requirements, but protections vary state by state, leaving most users without meaningful legal recourse.

The investigation recommends that users review and tighten platform privacy settings, use incognito or temporary chat modes for sensitive conversations, avoid sharing financial or medical information with chatbots, and periodically delete conversation histories where platforms allow it.



MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
