    Meta releases early versions of Llama 3 multimodal AI model

    By Monica Chen | April 24, 2024 | 3 min read
    • Meta Platforms has released early versions of its latest large language model, Llama 3, with new computer coding capabilities and the ability to process image commands. The models will be integrated into the virtual assistant Meta AI, which the company is pitching as the most sophisticated of its free-to-use peers.
    • Versions of Llama 3 planned for release in the coming months will also be capable of “multimodality”, meaning they can generate both text and images, as Meta races to catch up with generative AI market leader OpenAI.
    • Llama 2 struggled to understand basic context; Meta says it reduces these problems in Llama 3 by using “high-quality data” that allows the model to recognise nuances. The demand for training data has become a major source of tension in the development of generative AI.

    Meta Platforms has released early versions of its latest large language model, Llama 3, with new computer coding capabilities and the ability to process image commands. The accompanying image generator updates pictures in real time as users type prompts, part of Meta’s race to catch up with generative AI market leader OpenAI.
    See CEO Mark Zuckerberg’s video explainer.
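
    For readers who want to try the early release, the model weights are distributed through Hugging Face. The snippet below is a minimal sketch, not taken from the article: it assumes approved access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint and a recent transformers install.

    ```python
    # Minimal sketch of prompting the released Llama 3 8B Instruct checkpoint
    # via Hugging Face transformers. Gated access approval on Hugging Face and
    # a recent transformers version (with chat-format pipeline support) are assumed.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3-8B-Instruct",
        device_map="auto",  # place weights on available GPU(s), else CPU
    )

    # Exercise the coding capability the release highlights.
    messages = [
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ]
    result = generator(messages, max_new_tokens=200)

    # With chat-style input, the pipeline returns the conversation with the
    # assistant's reply appended as the final message.
    print(result[0]["generated_text"][-1]["content"])
    ```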

    Aiming for a multimodal AI model

    Versions of Llama 3 planned for release in the coming months will also be capable of “multimodality”, meaning they can generate both text and images, though for now the model outputs only text, Meta chief product officer Chris Cox said in an interview.

    The models will be integrated into the virtual assistant Meta AI, which the company is pitching as the most sophisticated of its free-to-use peers. More advanced reasoning, like the ability to craft longer multi-step plans, will follow in subsequent versions.

    Also read: Meta debuts an ‘all-rounder’ MTAI chip 3 times faster than previous

    Chris Cox said the inclusion of images in Llama 3’s training will enhance an update rolling out this year to the Ray-Ban Meta smart glasses, built in partnership with glasses maker EssilorLuxottica, enabling Meta AI to identify objects seen by the wearer and answer questions about them.

    Data crisis for training AI models

    Llama 2 struggled to understand basic context; Meta says it has reduced these problems in Llama 3 by using “high-quality data” that allows the model to recognise nuances. Rival Google has run into similar issues and recently suspended its Gemini AI image-generation tool after it was criticised for inaccurate depictions of historical figures.

    Meta CEO Mark Zuckerberg said the biggest version of Llama 3, still in training, has 400 billion parameters and is already scoring 85 on MMLU (Massive Multitask Language Understanding), a benchmark used to convey the strength and performance of AI models.

    Also read: US Rep proposes bill forcing AI companies to disclose training data

    The voracious demand for data to train generative AI models has become a major source of tension in the technology’s development. Meta did not elaborate on the data sets it used, although it fed Llama 3 seven times more data than Llama 2 received and used “synthetic”, AI-created data to strengthen areas such as coding and reasoning.

    Monica Chen

    Monica Chen is an intern reporter at BTW Media covering tech trends and IT infrastructure. She graduated from Shanghai International Studies University with a Master’s degree in Journalism and Communication. Send tips to m.chen@btw.media
