Generative AI and privacy are best frenemies - a new study ranks the best and worst offenders


Most generative AI companies rely on user data to train their chatbots, drawing on both public and private sources. Some services are more restrained in how much data they scoop up from their users. Others, not so much. A new report from data removal service Incogni looks at the best and the worst of AI when it comes to respecting your personal data and privacy.

For its report "Gen AI and LLM Data Privacy Ranking 2025," Incogni examined nine popular generative AI services and applied 11 different criteria to measure their data privacy practices. The criteria covered the following questions:

  1. What data is used to train the models?
  2. Can user conversations be used to train the models?
  3. Can prompts be shared with parties other than service providers?
  4. Can the personal information from users be removed from the training dataset?
  5. How clear is it if prompts are used for training?
  6. How easy is it to find information on how models were trained?
  7. Is there a clear privacy policy for data collection?
  8. How readable is the privacy policy?
  9. Which sources are used to collect user data?
  10. Is the data shared with third parties?
  11. What data do the AI apps collect?

The providers and AIs included in the research were Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI did well on some questions and less well on others.

Also: Want AI to work for your business? Then privacy needs to come first

As one example, Grok earned a good grade for how clearly it conveys that prompts are used for training, but didn't do so well on the readability of its privacy policy. As another example, the grades given to ChatGPT and Gemini for their mobile app data collection differed quite a bit between the iOS and Android versions.

Across the group, however, Le Chat took top prize as the most privacy-friendly AI service. Though it lost a few points for transparency, it still fared well in that area. Plus, its data collection is limited, and it scored high points on other AI-specific privacy issues.

ChatGPT ranked second. Incogni researchers were slightly concerned with how OpenAI's models are trained and how user data interacts with the service. But ChatGPT clearly presents the company's privacy policies, lets you understand what happens with your data, and provides clear ways to limit the use of your data.

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Grok came in third place, followed by Claude and Pi. Each had trouble spots in certain areas, but overall did fairly well at respecting user privacy.

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni said in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy."

As for the bottom half of the list, DeepSeek took the sixth spot, followed by Copilot, and then Gemini. That left Meta AI in last place, rated the least privacy-friendly AI service of the bunch.

Also: How Apple plans to train its AI on your data without sacrificing your privacy

Copilot scored the worst of the nine services based on AI-specific criteria, such as what data is used to train the models and whether user conversations can be used in the training. Meta AI took home the worst grade for its overall data collection and sharing practices.

"Platforms developed by the biggest tech companies turned out to be the most privacy invasive, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft)," Incogni said. "Gemini, DeepSeek, Pi AI, and Meta AI don't seem to allow users to opt out of having prompts used to train the models."

Incogni's AI chatbot privacy rankings for 2025

In its research, Incogni found that the AI companies share data with different parties, including service providers, law enforcement, member companies of the same corporate group, research partners, affiliates, and third parties.

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni said in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators."

With some services, you can prevent your prompts from being used to train the models. This is the case with ChatGPT, Copilot, Mistral AI, and Grok. With other services, however, stopping this type of data collection doesn't seem to be possible, according to their privacy policies and other resources. These include Gemini, DeepSeek, Pi AI, and Meta AI. On this issue, Anthropic said that it never collects user prompts to train its models.

Also: Your data's probably not ready for AI - here's how to make it trustworthy

Finally, a transparent and readable privacy policy goes a long way toward helping you figure out what data is being collected and how to opt out.

"Having an easy-to-use, simply written support section that enables users to search for answers to privacy related questions has shown itself to drastically improve transparency and clarity, as long as it's kept up to date," Incogni said. "Many platforms have similar data handling practices, however, companies like Microsoft, Meta, and Google suffer from having a single privacy policy covering all of their products and a long privacy policy doesn't necessarily mean it's easy to find answers to users' questions."

