Trends
Meta to 10X computing power for Llama 4

Headline
Context
OUR TAKE: The rising costs of training large language models pose challenges for Meta and OpenAI. While investments in flexible infrastructure are crucial, short-term revenue from generative AI may be limited. Meta's success with its AI chatbot in India demonstrates the potential of AI in emerging markets, emphasising the need for strategic foresight as the competitive landscape evolves.
– Lilith Chen, BTW reporter

In a recent earnings call, Mark Zuckerberg revealed that training Llama 4 is expected to require substantially more compute than Llama 3, whose largest initial release featured 70B parameters. The announcement follows the launch of Llama 3.1 405B, Meta's largest model to date at 405B parameters. To meet the demands of future AI projects, Zuckerberg emphasised the importance of building the necessary infrastructure capacity ahead of time, pointing to the lengthy lead times involved in establishing new projects as the reason for this proactive approach.
Evidence
Pending intelligence enrichment.
Analysis
Meta's CFO, Susan Li, shared further insight into the company's strategy, noting that several data center projects are under consideration to expand the company's AI training capacity, and that these investments are expected to increase capital expenditure in 2025. In Q2 2024, Meta's capital expenditure rose nearly 33% to $8.5B, driven largely by investment in servers and data centers, underscoring the company's commitment to advancing its AI initiatives.

The escalating cost of training large language models is a significant concern across the AI sector as companies strive to enhance their capabilities. For context, reports indicate that OpenAI spends around $3B annually on model training and an additional $4B on renting servers from Microsoft.
Key Points
- Meta, the parent company of Facebook, has announced its intention to significantly ramp up computing power for training its next-generation large language model, Llama 4.
- Zuckerberg says Llama 4 will require roughly ten times the compute used to train Llama 3, a strategic move aimed at keeping Meta competitive in the rapidly evolving field of AI.
Actions
Pending intelligence enrichment.
