- Meta won’t immediately join the EU’s AI Pact, opting instead to focus on complying with the upcoming AI Act.
- The Act requires detailed data summaries from AI firms, with most rules applying from August 2, 2026.
OUR TAKE
Meta’s decision to bypass the EU’s voluntary AI Pact underscores the delicate balance between fostering innovation and adhering to regulatory standards. While prioritising compliance with the AI Act may streamline future operations, it also casts Meta as cautious amid evolving AI governance, with potential consequences for its reputation and its relationship with EU regulators.
–Vicky Wu, BTW reporter
What happened
Meta Platforms, the parent company of Facebook, has opted not to immediately join the European Union’s voluntary AI Pact, a move that contrasts with the decisions of tech giants like Microsoft and Google.
The pact is intended as an interim measure ahead of the full implementation of the EU’s AI Act, most of whose rules will apply from August 2026. A spokesperson for Meta said the company is currently prioritising its compliance work under the incoming Act and may consider joining the pact at a later date. The decision sets Meta apart from its peers, particularly because its Llama model includes open-source elements that could complicate compliance with the forthcoming rules. French open-source AI startup Mistral will likewise abstain from signing the pledge.

Opting out carries no legal repercussions, but companies like Meta may still encounter reputational challenges and increased scrutiny from EU regulators.
Also read: Vietnam’s leader Lam set to meet Google and Meta in the US
Also read: Meta oversight board urges balanced approach to controversial phrase
Why it’s important
Meta’s decision not to join the EU’s AI Pact highlights the complex relationship between tech giants and emerging regulatory frameworks aimed at ensuring the responsible development and use of artificial intelligence. The EU’s AI Act represents a pioneering effort to establish standards for AI governance without stifling technological advancement.
By choosing not to participate in the voluntary pact, Meta signals a preference for focusing directly on compliance with the AI Act itself. That choice could shape how the company adapts its AI technologies, particularly the open-source Llama model, to meet stringent European requirements. The stance also reflects broader industry concerns about the regulatory landscape in Europe, where companies must navigate increasingly strict rules governing data privacy, content moderation, and AI ethics. As other big tech players commit to the pact, the divide widens between those who embrace early self-regulation and those who prefer a wait-and-see approach.