- OpenAI has disrupted five covert influence operations using its AI models for deceptive activities online.
- The campaigns, involving actors from Russia, China, Iran, and Israel, aimed to manipulate public opinion on global issues.
- A newly formed Safety and Security Committee, led by CEO Sam Altman, will oversee future AI model training at OpenAI.
OpenAI has announced that it has disrupted five covert influence operations that utilised its AI models for deceptive activity online. The operations, run by actors from Russia, China, Iran, and Israel, sought to manipulate public opinion on key global issues, underscoring ongoing concerns about the misuse of generative AI technology.
Disruption of covert operations
According to OpenAI, the operations used its models to generate content that included short comments, longer articles, and fabricated names and biographies for social media accounts. The targeted issues spanned major global events, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, and political matters in Europe and the United States.
Also read: OpenAI partners with Vox and The Atlantic, as media continues to sell content to AI models
Also read: OpenAI created a team to control ‘superintelligent’ AI
Purpose and impact
The primary aim of these campaigns was to manipulate public opinion and influence political outcomes. OpenAI stated that the deceptive operations did not achieve greater audience engagement or reach as a result of using its services. Nor did the operations rely solely on AI-generated material: they also used manually written texts and memes copied from various internet sources.
OpenAI’s response and related developments
In response to these challenges, OpenAI has established a Safety and Security Committee, led by CEO Sam Altman alongside other board members, to oversee the training of its future AI models. The initiative underscores the firm’s commitment to addressing safety concerns around the misuse of generative AI. Separately, Meta Platforms said in its quarterly security report that it had identified likely AI-generated content used deceptively on Facebook and Instagram, including comments supporting Israel’s handling of the Gaza conflict posted under content from global news organisations and U.S. lawmakers.