How Chinese military AI is advancing with Meta’s open-source tech

  • Researchers linked to China’s military have adapted Meta’s Llama AI model for military use, unveiling a new tool, “ChatBIT.”
  • The repurposed model highlights how far Chinese military AI has progressed and has renewed concerns over how easily open-source technology can be turned to defense purposes.

What happened

Researchers tied to the People’s Liberation Army (PLA) have adapted Meta’s open-source Llama model to create an artificial intelligence tool known as “ChatBIT.” The development, described in recent academic papers, underscores China’s strategy of repurposing U.S.-developed AI models for military intelligence and decision-making, and it raises fresh concerns about the security of openly accessible technology in sensitive sectors.


In a June paper, six researchers from the PLA’s Academy of Military Science and other institutions detailed how they fine-tuned Meta’s earlier Llama 13B model to create “ChatBIT,” an AI tool tailored to military intelligence tasks. Capable of dialogue and question-answering in operational contexts, ChatBIT reportedly outperformed several comparable AI models, reaching roughly 90% of the capability of advanced systems such as OpenAI’s GPT-4. Specifics of its practical deployment, however, remain undisclosed.
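For readers unfamiliar with what “fine-tuning” an open-weight model involves in general terms, the sketch below shows one common, generic approach: attaching LoRA adapters to a Llama-family checkpoint with the Hugging Face `transformers` and `peft` libraries and training on an instruction-style dataset. The checkpoint name, dataset file, and hyperparameters are illustrative assumptions only; nothing here reflects the ChatBIT researchers’ actual data, code, or training setup.

```python
# Minimal, generic LoRA fine-tuning sketch for a Llama-family model.
# All names (checkpoint, dataset file, hyperparameters) are hypothetical.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "huggyllama/llama-13b"  # assumption: any Llama-13B checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# Attach low-rank adapters so only a small fraction of weights are trained.
lora = LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora)

# Hypothetical instruction/Q&A dataset with a "text" column, one example per line.
dataset = load_dataset("json", data_files="qa_dialogues.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama13b-qa-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is simply that adapting a published open-weight model to a new domain requires only modest code and compute once the weights are freely downloadable, which is central to the enforcement problem described below.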

Meta has long championed open innovation, but its acceptable-use policy bars its models from being used for warfare, espionage, and similar applications. While Meta’s terms prohibit military use, enforcement is limited because the models are publicly available. “Any use of our models by the People’s Liberation Army is unauthorized,” said Meta’s director of public policy, Molly Montgomery, noting that the company’s AI terms are designed to curb misuse while acknowledging the challenges posed by open access.

Why this is important

The PLA’s adaptation of Meta’s Llama for military purposes has reignited debates around the global security risks associated with open-source AI. While open-access models like Llama foster innovation, they also present unique challenges when such technologies end up in the hands of foreign military organizations. Meta’s stance on open innovation contrasts sharply with recent U.S. government measures aimed at tightening control over AI technology in strategic sectors. With President Biden’s recent executive order on AI management and the Pentagon’s increased scrutiny of global AI developments, concerns are mounting over how China’s significant investments and growing technological capabilities may widen its influence in sensitive areas.

This incident raises questions not only about the extent of the PLA’s AI capabilities but also about the efficacy of corporate and government policies aimed at safeguarding advanced AI technology from unintended applications. Experts argue that the deep academic and technological collaborations between the U.S. and China make it increasingly difficult to prevent cross-border AI advancements. As China’s ambition to lead AI development by 2030 intensifies, the balance between innovation and security becomes a pressing international issue with potential ramifications for both policy and technology access.

Vionna Fiducia Theja

Vionna Fiducia Theja is a passionate journalist with a First Class Honours degree in Media and Communication from the University of Liverpool. A storyteller at heart, she delves into the vibrant worlds of technology, art, and entertainment, where creativity meets innovation. Vionna believes in the power of media to transform lives and spark conversations that matter. Connect with her at v.zheng@btw.media.
