Powered by MOMENTUMMEDIA
Attempts to prevent Meta from using EU user data to train its AI have failed after a German court declined to block the activity, despite a consumer watchdog ordering Meta to halt it.
Last month, Meta announced Meta AI, a new standalone rival to ChatGPT that differentiates itself from other AI chatbots such as ChatGPT and DeepSeek by drawing on the years of information Meta has already gathered about users through their social media accounts.
“We’re using our decades of work personalising people’s experiences on our platforms to make Meta AI more personal. You can tell Meta AI to remember certain things about you (like that you love to travel and learn new languages) and it can also pick up important details based on context,” Meta said.
“Your Meta AI assistant also delivers more relevant answers to your questions by drawing on information you’ve already chosen to share on Meta products, like your profile, and content you like or engage with.”
Following this, the Verbraucherzentrale North Rhine-Westphalia (NRW), a German consumer protection organisation, ordered Meta to halt the training altogether, sending the company a cease-and-desist letter and threatening legal action if Meta did not comply.
However, the Cologne court declined to grant the injunction that would have prevented Meta from using the data.
This is despite privacy regulators from Belgium, France, and the Netherlands having already raised concerns about the new AI and warned users to restrict access to their data, by objecting through Meta's website, before the company begins training on 27 May under its new privacy policy.
While Meta is set to continue the training, it did make some changes, including improved transparency notices and clearer and easier opt-out forms.
Kok-Leong Ong, RMIT professor of business analytics, warned that Meta's decision to train AI on social media user data poses potentially major security risks.
“Meta already has a huge amount of information about its users. Its new AI app could pose security and privacy issues. Users will need to navigate potentially confusing settings and user agreements,” said Ong.
“They will need to choose between safeguarding their data versus the experience they get from using the AI agent. Conversely, imposing tight security and privacy settings on Meta may impact the effectiveness of its AI agent.”
Ong also warns that AI powered by social media could expand the spread of misinformation and harmful content.
“We have already seen Mark Zuckerberg apologise to families whose children were harmed by using social media.
“AI agents working in a social context could heighten a user’s exposure to misinformation and inappropriate content. This could lead to mental health issues and fewer in-person social interactions,” said Ong.
That being said, a press release by the Irish Data Protection Commission (DPC), the lead supervisory authority for Meta, remained positive about the training after the company made changes, including those named above.
“Having reviewed Meta’s proposals and following feedback from the other EU/EEA supervisory authorities, the DPC made a number of recommendations to Meta regarding the potential impact for the data protection rights of individuals,” the DPC said.
“Meta has been responsive to the DPC’s requests during this process and as a result, Meta has implemented a number of significant measures and improvements.”