- Meta has decided to withhold its upcoming multimodal AI model, and its future versions, from the EU.
- The decision may cost Meta its competitive edge, but it could also spur a shift in user privacy policies across the industry.
OUR TAKE
Meta’s decision to withhold the launch of its AI models in the EU due to regulatory uncertainties may impact its competitive edge and create opportunities for competitors. However, it could also drive a revolution in user privacy protection policies within the industry.
— Yasmine Luo, BTW reporter
What happened?
Meta has decided not to launch its upcoming multimodal AI model and its future versions in the EU due to regulatory uncertainties. In a statement to Axios, Meta said, “We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment.”
These models are designed to process text, images, and audio, powering AI capabilities in Meta platforms and the company’s Ray-Ban smart glasses.
However, Meta plans to introduce these models in the UK and will release the upcoming text-only Llama 3 model in the EU. This decision is driven by the challenges of training AI models with European customer data while complying with the General Data Protection Regulation (GDPR).
Previously, Meta was forced to stop using publicly available user data in the EU to train its AI models due to privacy concerns. Meta believes that not using European data will affect the AI models’ understanding of local languages and cultures.
Also read: New Ray-Ban Meta glasses smash sales records, Milleri says
Also read: What is metaverse? Future of digital world
Why it’s important
Meta’s decision not to launch these products in EU countries had been foreshadowed for some time. Initially, Meta aimed to enter the European market by using user data to train its AI systems for better accuracy. However, the Irish Data Protection Commission (DPC) raised serious privacy concerns, forcing Meta to halt its expansion. The DPC highlighted issues such as:
- Lack of explicit consent: Users did not explicitly agree to the collection of their personal information.
- Excessive data collection: GDPR stipulates that only necessary data should be collected, but the DPC believes Meta’s data collection is too extensive and insufficiently regulated.
- Transparency issues: Users were not informed about the specific use of their data.
As a result, Meta had to stop its expansion, re-strategise, and reallocate resources, negatively impacting its operations. The DPC’s decision may lead to stricter regulations in the tech industry, affecting data privacy and security laws. Other tech companies might take steps to improve their data protection policies.
Collecting user data from diverse cultural backgrounds is crucial for developing virtual worlds, so Meta may lose its competitive edge and see slower growth as a result. However, this setback gives aspiring tech companies an opportunity to lead the way in the European market by avoiding Meta’s mistakes.