AI Model Training with User Data - Ruling by the Higher Regional Court of Cologne in the Meta Case

In a landmark decision, the Higher Regional Court of Cologne has ruled in favour of Meta, allowing the tech giant to continue using user data for AI training. The court's decision, published on May 23, 2025, provides valuable insights for companies considering similar practices.

The NRW Consumer Rights Organisation had raised concerns about Meta's AI training practices and sought a temporary injunction. However, the court dismissed the application, finding that Meta had examined alternatives and reasonably concluded that none was equally suitable.

The court's decision highlights several key considerations for companies:

1. **Legal Compliance with Regulations**: The court ruled that Meta's approach to using partially de-identified data for AI training does not violate the Digital Markets Act (DMA) or the General Data Protection Regulation (GDPR). Companies must ensure their data practices comply with these regulations.

2. **User Consent and Notification**: Companies must inform users how their data is used and give them the option to object to its use in AI training. In Meta's case, users were adequately notified and given the opportunity to object.

3. **Legitimate Interest**: The court accepted that Meta's use of public user data serves a legitimate interest under Article 6(1)(f) GDPR. Companies must demonstrate that this interest cannot be achieved by less intrusive means.

4. **Protection of Sensitive Data**: The court acknowledged Meta's technical safeguards, such as filtering out personally identifiable information. Companies should implement similar safeguards to protect sensitive data and ensure compliance (see the sketch after this list).

5. **Balancing Test**: A careful, case-by-case balancing test is necessary when relying on legitimate interests to justify AI training. Companies should undertake a thorough assessment of their data practices.
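To make point 4 concrete, here is a minimal Python sketch of such a safeguard: a pre-processing step that honours user objections and substitutes placeholders for common PII patterns before any post enters a training corpus. The record shape, function names, and regex patterns are illustrative assumptions, not Meta's actual pipeline; a production system would rely on far more robust detection.

```python
import re
from dataclasses import dataclass

# Hypothetical record shape; the field names are illustrative, not Meta's schema.
@dataclass
class Post:
    author_id: str
    text: str
    is_public: bool

# Naive regex patterns for common PII (email addresses, phone numbers).
# A production system would use far more robust, NER-based detection.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s()/-]{7,}\d"), "[PHONE]"),
]

def redact_pii(text: str) -> str:
    """Replace recognisable PII with neutral placeholders."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def build_training_corpus(posts: list[Post], opted_out: set[str]) -> list[str]:
    """Collect training text: public posts only, objections honoured,
    PII redacted, author identifiers dropped."""
    corpus = []
    for post in posts:
        if not post.is_public:            # use only publicly shared content
            continue
        if post.author_id in opted_out:   # honour the user's objection
            continue
        corpus.append(redact_pii(post.text))
    return corpus

if __name__ == "__main__":
    posts = [
        Post("u1", "Great event! Reach me at jane.doe@example.com", True),
        Post("u2", "Draft visible to friends only", False),
        Post("u3", "Call me on +49 170 1234567", True),
    ]
    # u3 has objected, so only u1's post (with the email redacted) remains.
    print(build_training_corpus(posts, opted_out={"u3"}))
```

The structure mirrors the court's reasoning: the opt-out check implements the right to object (point 2), while redaction reduces the intrusiveness of the processing (points 3 and 4).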

The court's decision strengthens the legal position for companies training AI models with user data. However, it also emphasises that users' interests must be weighed against the companies' legitimate interest.

Meta has been using user data from Facebook and Instagram to train its AI models since May 27, 2025. Data posted from June 26, 2024 onwards may be used for AI training, provided users have been explicitly informed.

It is crucial for companies planning to use customer or user data for AI training to seek legal advice at an early stage. Anyone who processes personal data, such as posts on Facebook, requires a legal basis, and training an AI also constitutes such data processing. The court has established that AI training with user data can be a legitimate interest of companies, but companies must formulate their legitimate interest "sufficiently clearly and precisely" and demonstrate its necessity.

Meta's case serves as a reminder for companies to approach AI training with caution, ensuring compliance with data protection regulations and maintaining transparency with users. Although the court upheld Meta's legal assessment, every company must still weigh users' interests against its own legitimate interest for its specific data practices.

The NRW Consumer Rights Organisation's unsuccessful attempt to halt Meta's use of user data for AI training further emphasises how important it is for technology companies to seek advice on compliance with regulations such as the GDPR and DMA before implementing such practices. In data protection assessments, companies must demonstrate that the legitimate use of user data cannot be achieved by less intrusive means and that necessary safeguards, such as filtering out personally identifiable information, are in place to protect user interests.
