AI Rankings for Data Privacy in the Year 2025
In a recent report by Incogni, nine leading AI platforms were evaluated on their data privacy practices. The study applied 11 criteria to determine which systems offer the most privacy-friendly experience.
The report found that Neuroflash (ChatFlash), a German company, stands out for its transparency. As a GDPR-compliant platform, Neuroflash ensures that data sent to OpenAI is not used to train language models and remains confidential.
OpenAI, Mistral, Anthropic, and xAI were found to be relatively transparent about their practices, making it easy to determine how prompts are used for training. However, the report notes that OpenAI, Meta, and Anthropic provided the most detailed explanations about their training data sources.
On the other hand, Meta AI, Microsoft's Copilot, Google's Gemini, and DeepSeek were found to be the most aggressive in data collection and the least transparent about their practices. On the privacy-friendly end of the ranking, ChatGPT (OpenAI) placed second, followed by Grok (xAI).
The report also highlighted that most platforms collect data during account setup or user interaction, but some also gather data from additional sources. For example, ChatGPT, Gemini, and DeepSeek collect data from security partners; Gemini and Meta AI collect data from marketing partners; Copilot collects data from financial institutions; and Claude (Anthropic) uses commercial datasets.
The risk of unauthorized data collection and sharing has surged with the increasing use of generative AI and large language models (LLMs). Incogni found that all privacy policies require at least a college-level reading ability. Meta, Microsoft, and Google provide long and complex privacy documents that cover multiple products, while OpenAI and xAI offer helpful support articles.
Privacy risks vary widely between AI platforms. The best performers offer clear privacy policies, opt-out controls, and minimal data collection, while the worst offenders lack transparency and share user data broadly without clear justification.
In terms of mobile app data collection and sharing, Le Chat had the lowest privacy risk, followed by Pi AI and ChatGPT. Meta AI was the most aggressive, collecting data such as usernames, emails, and phone numbers, and sharing much of it with third parties. Among the others: Gemini and Meta AI collect exact user locations; Pi AI, Gemini, and DeepSeek collect phone numbers; Grok shares photos and app interaction data; and Claude shares app usage data and email addresses.
However, no platform offers a way for users to remove their personal data from existing training sets. Most platforms share prompts with a defined set of third parties, including service providers, legal authorities, and affiliated companies.
Incogni concludes that AI platforms must make privacy documentation easier to read, provide modular privacy policies for each product, and avoid relying on broad umbrella policies. Companies should also maintain up-to-date support resources that clearly answer data handling questions in plain language.
The investigation revealed that platforms are grouped into three levels of transparency. OpenAI, Mistral, Anthropic, and xAI provide accessible documentation, while Microsoft and Meta make this information somewhat difficult to find. Gemini, DeepSeek, and Inflection offer limited or fragmented disclosures.
Some platforms, such as ChatGPT, Copilot, Le Chat, and Grok, allow users to opt out of training. Meta and Microsoft require users to search through unrelated documentation for information about prompt usage.
In conclusion, while the data privacy landscape of AI platforms is complex, the report from Incogni serves as a valuable resource for consumers seeking to make informed choices about the AI services they use.