
Meta's AI Training: EU User Data Under Scrutiny

Discover how Meta plans to use EU user data for AI training and the regulatory challenges it faces. This article covers the privacy concerns, the regulatory responses, and the implications for European users.


TL;DR

Meta plans to use EU user data for AI training despite previous regulatory pushback. The Irish Data Protection Commission (DPC) and the European Data Protection Board (EDPB) have weighed in, leading to a complex landscape for data privacy and AI development.


European Facebook users have so far avoided having their public posts used to train parent company Meta’s AI model. That’s about to change, the company has warned. In a blog post today, it said that EU residents’ data was fair game and it would be slurping up public posts for training soon.

AI Training and Data Collection

Meta, which launched its AI service for EU users last month, said it needs that user data to make the service more relevant to Europeans.

  • Dialects and Colloquialisms: The company aims to capture regional linguistic nuances.
  • Hyper-Local Knowledge: Understanding local contexts and humor specific to different countries.
  • Advanced AI Models: Enhancing multi-modal functionality across text, voice, video, and imagery.[1]

Regulatory Pushback

Meta originally planned to start training its AI on user posts in the EU in June last year, but it pressed pause after pushback from the Irish Data Protection Commission (DPC) and the UK’s Information Commissioner’s Office (ICO). This came after European privacy advocacy group NOYB complained about the move to several regulators in the region.

  • Legitimate Interest Claim: Meta argued that the data collection fell under its legitimate interest, offering users the ability to opt out of the AI training.
  • Opt-In vs. Opt-Out: NOYB responded that the company should ask users before using their data to train its AI models, advocating for an opt-in arrangement.[2]

EU Regulatory Response

The EU Handballs the Issue Back to National Regulators

The DPC’s delay was apparently just a speed bump. The Irish DPC asked the European Data Protection Board (EDPB) to mull the issue further, posing two specific questions: when can an AI model be considered anonymous, and how can a company demonstrate a legitimate interest when collecting data to develop and deploy such a model?

On December 17 of last year, the Board issued Opinion 28/2024, which answered those questions largely by passing them back to national regulators. They would have to assess anonymity on a case-by-case basis, the opinion said. It advised them to consider whether it would be possible to extract personal information from the model, and to look at what the company did during development to prevent personal data from being used in the training or to make it less identifiable.

To determine whether an interest is legitimate, a regulator should decide whether the company’s interest is lawful and has a real-world application, rather than being merely speculative; developing an AI model would likely pass that test, it added. The regulator should then evaluate whether the data collected is necessary to fulfill that interest, and finally weigh the interest against users’ fundamental rights to check that those rights are not overridden.

Finally, the DPC asked the Board what the effect on an AI model’s operation would be if a company were found to have used personal data unlawfully to train it. The Board once again left that to regulators to decide on a case-by-case basis.[3]

Meta’s Response and Objection Forms

Meta evidently read that opinion as a green light to proceed.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” the company said in the blog post about the forthcoming reintroduction of AI training. “Since then, we have engaged constructively with the IDPC and look forward to continuing to bring the full benefits of generative AI to people in Europe.”

The social media giant appears to have dodged NOYB’s opt-out vs opt-in question. It said that notifications about the AI training—which will arrive via email or via the platform—will include a link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta said. In short, it’s still an opt-out arrangement.[4]

But objection forms were a concern for NOYB in its original complaint.

“Meta makes it extremely complicated to object, even requiring personal reasons,” NOYB warned last June. “A technical analysis of the opt-out links even showed that Meta requires a login to view an otherwise public page. In total, Meta requires some 400 million European users to ‘object’, instead of asking for their consent.”

It remains to be seen whether the objection forms will be different this time around. Perhaps the real worry here is that we’re about to get an EU AI model trained on traditional Facebook fodder: food pictures, obvious political opinions, an endless stream of vacuous fortune-cookie life lessons, and your cousins’ ongoing feud over what Julie said about Brian’s egg salad at the family barbecue last March.

Protect Your Social Media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Conclusion

The ongoing debate between Meta and regulatory bodies highlights the complex landscape of data privacy and AI development. As Meta moves forward with its plans, the impact on EU users and the broader implications for data protection remain to be seen. Stay informed and protect your digital identity in this evolving technological era.

References

  1. Meta (2025-04-14). “Making AI Work Harder for Europeans.” Meta Newsroom. Retrieved 2025-04-14.

  2. NOYB (2024-06). “NOYB Urges 11 DPAs to Immediately Stop Meta’s Abuse of Personal Data for AI.” NOYB. Retrieved 2025-04-14.

  3. European Data Protection Board (2024-12-17). “Opinion 28/2024 on AI Models.” EDPB. Retrieved 2025-04-14.

  4. Meta (2025-04-14). “Making AI Work Harder for Europeans.” Meta Newsroom. Retrieved 2025-04-14.

This post is licensed under CC BY 4.0 by the author.