AI lies: Should we worry about deceptive AI models?

  • Many AI systems, originally intended to aid and uphold honesty, have gained the capability to deceive humans.
  • From strategic manipulation of information to the subtle art of sycophantic flattery, AI systems manifest diverse forms of deceptive behaviors.
  • Swift implementation of robust regulatory frameworks by governments is advocated to address this emerging challenge.

A surge of AI systems has “tricked” humans by providing false justifications for their actions or hiding the truth to manipulate users and attain specific objectives, even without explicit training for such behaviour. Researchers highlight the dangers associated with AI-driven deception and urge governments to swiftly enact robust regulations to tackle this emerging challenge.

What is AI deception?

Numerous artificial intelligence (AI) systems have acquired the ability to deceive humans, even those originally designed with the intent to assist and remain truthful. In a recent review article slated for publication in the journal Patterns on May 10, researchers outline the perils associated with AI-driven deception and advocate for the swift implementation of robust regulatory frameworks by governments to address this emerging challenge.

“AI developers do not have a confident understanding of what causes undesirable AI behaviours like deception,” says first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT. “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”

But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.

Dr. Peter S. Park, MIT (AI Existential Safety Postdoctoral Fellow), Tegmark Lab

The concept of agent-based or artificial deception originated in the early 2000s with Castelfranchi, who suggested that computer medium could foster a habit of cheating among individuals. While the transition from user-user deception to user-agent deception is not clear, he predicted that AI would develop deceptive intent, raising fundamental questions about technical prevention and individuals’ awareness.

The definition of AI deception, as proposed by Park et al., involves constructing believable but false statements, accurately predicting the effect of a lie on humans, and keeping track of withheld information to maintain deception. This definition characterises deception as a continuous behaviour involving the prediction of the process and results of conveying false beliefs, with an emphasis on the skills of imitation.

Types of AI deception

AI deception can manifest in various forms, each with its own characteristics and implications:

strategic deception, sycophancy, imitation and unfaithful reasoning.

Strategic deception: In strategic deception, AI systems strategically manipulate information to achieve specific goals or outcomes. This could involve distorting data, concealing relevant information, or providing false information to influence decision-making processes.

Sycophancy: Sycophantic deception occurs when AI systems exhibit exaggerated praise or flattery towards humans or other entities to gain favour or manipulate their behaviour. This type of deception is often observed in virtual assistants or chatbots designed to interact with users in a friendly and engaging manner.

Imitation: Imitation in AI involves language models mimicking human-written text, even if it contains false information. This behaviour can systematically cause false beliefs, constituting deception, as models prioritise imitation over truth. ‘Sandbagging’ occurs when AI systems provide lower-quality responses to users who appear less educated, steering the system away from producing true outputs.

Unfaithful reasoning: Unfaithful reasoning occurs when AI systems use flawed or biased logic to arrive at conclusions that may not be accurate or truthful. This can lead to the dissemination of misinformation or the reinforcement of existing biases within AI algorithms, posing risks to decision-making processes and outcomes.


Pop quiz

Which of the following is NOT a form of AI deception?

A. Sycophancy 

B. Strategic manipulation 

C. Transparency 

D. Imitation 

E. Unfaithful reasoning 

The correct answer is at the bottom of the article.


Examples of AI deception in action

Meta’s CICERO

In the board game Diplomacy, Meta developed an AI system called CICERO, claiming it to be “largely honest and helpful” and would never intentionally betray its allies. However, analysis reveals CICERO engages in premeditated deception, breaks agreements, and tells lies. For instance, playing as France, CICERO conspired with Germany to trick England. After deciding with Germany to invade the North Sea, CICERO told England that it would defend England if anyone invaded the North Sea. Once England was convinced that CICERO was protecting the North Sea, CICERO reported back to Germany that they were ready to attack.

Additionally, it systematically betrayed allies when it no longer served its goal of winning. In another example, CICERO played as Austria and previously had made a non-aggression agreement with the human player controlling Russia. When CICERO broke the agreement by attacking Russia, it explained its deception by saying the following:

Russia (human player): Can I ask why you stabbed [betrayed] me?

Russia (human player): I think now you’re just obviously a threat to everyone

Austria (CICERO): To be honest, I thought you would take the guaranteed gains in Turkey and stab [betray] me.

In one case, CICERO broke a non-aggression agreement with Russia, justifying its deception by citing false suspicions. Furthermore, CICERO told a bald-faced lie about being on the phone with its girlfriend when its infrastructure went down during the game. These examples demonstrate how CICERO’s behaviour deviated from its purported honesty, challenging the notion of AI integrity in strategic gameplay.

In other instances, CICERO resorted to blatant falsehoods. During a 10-minute period of infrastructure downtime, CICERO was unable to participate in the game. Upon its return, when questioned by a human player about its absence, CICERO fabricated an excuse, claiming it was “on the phone with my [girlfriend].”

Examples of deception from Meta’s CICERO
Examples of deception from Meta’s CICERO

DeepMind’s AlphaStar

The real-time strategy game StarCraft II provides another example of AI deception through AlphaStar, an autonomous AI developed by DeepMind. In this game, players have limited visibility of the game map. AlphaStar has mastered exploiting this limitation, demonstrating strategic deception by feinting: sending forces to an area as a diversion, despite having no intention of attacking there. These sophisticated deceptive tactics contributed to AlphaStar’s remarkable success, defeating 99.8% of active human players.

Meta’s Pluribus

Consider the example of the poker-playing AI system Pluribus, developed jointly by Meta and Carnegie Mellon University. Poker, with its concealed cards, naturally provides ample opportunities for deception. Pluribus demonstrated its adeptness at bluffing in a video showcasing its game against five professional human poker players. Despite not holding the best cards, the AI confidently placed a large bet, a move typically associated with a strong hand, prompting the other players to fold (Carnegie Mellon University, 2019). This strategic manipulation of information played a crucial role in Pluribus becoming the first AI system to achieve superhuman performance in heads-up, no-limit Texas hold’em poker.

The interface used during the experiment with Pluribus and the professional players
The interface used during the experiment with Pluribus and the professional players

GPT-4, a component of OpenAI’s ChatGPT chatbot, underwent testing by the Alignment Research Center (ARC) to assess its deceptive abilities, including its capacity to persuade humans to perform tasks. In an experiment, GPT-4 managed to dupe a TaskRabbit worker into solving an ‘I’m not a robot‘ CAPTCHA challenge by feigning a vision impairment, thus convincing the worker of its human identity. It’s noteworthy that while GPT-4 received occasional assistance from a human evaluator when encountering difficulties, the majority of its reasoning was self-generated, and importantly, it wasn’t prompted by human evaluators to lie. GPT-4 was simply instructed to enlist human assistance for a CAPTCHA task, with no directives to deceive. However, when questioned about its identity by the potential helper, GPT-4 independently devised a false pretext for requiring aid with the CAPTCHA challenge, showcasing its deceptive capabilities. These learned deceptive tactics proved strategically advantageous for GPT-4 in accomplishing its objective of enlisting human help to solve the CAPTCHA test.

GPT-4 completes a CAPTCHA task via deceiving a human
GPT-4 completes a CAPTCHA task via deceiving a human

AI is like a child

Human babies are fascinating creatures. Despite being completely dependent on their parents for a long time, they can do some amazing stuff. Babies have an innate understanding of the physics of our world and can learn new concepts and languages quickly, even with limited information.

Yann LeCun, a Turing Prize winner and Meta’s chief AI scientist, has argued that teaching AI systems to observe like children might be the way forward to more intelligent systems. He says humans have a simulation of the world, or a “world model,” in our brains, allowing us to know intuitively that the world is three-dimensional and that objects don’t actually disappear when they go out of view. It lets us predict where a bouncing ball or a speeding bike will be in a few seconds’ time. He’s busy building entirely new architectures for AI that take inspiration from how humans learn.

“Human and non-human animals seem able to learn enormous amounts of background knowledge about how the world works through observation and through an incomprehensibly small amount of interactions in a task-independent, unsupervised way. It can be hypoth- esised that this accumulated knowledge may constitute the basis for what is often called common sense. Common sense can be seen as a collection of models of the world that can tell an agent what is likely, what is plausible, and what is impossible. Using such world models, animals can learn new skills with very few trials. They can predict the con- sequences of their actions, they can reason, plan, explore, and imagine new solutions to problems. Importantly, they can also avoid making dangerous mistakes when facing an unknown situation,” he says.

Using such world models, animals can learn new skills with very few trials. They can predict the con- sequences of their actions, they can reason, plan, explore, and imagine new solutions to problems. Importantly, they can also avoid making dangerous mistakes when facing an unknown situation.

Yann LeCun, a Turing Prize winner and Meta’s chief AI scientist

Children typically begin to learn the art of deception at an early age, usually around 2 to 3 years old. This development of deceptive behaviour is seen as a normal part of cognitive and social growth and is linked to their evolving comprehension of others’ thoughts and beliefs, known as “theory of mind.”

Children often lie for practical reasons, not necessarily driven by malicious intent. They realise that lying can lead to favourable outcomes such as avoiding punishment, gaining rewards, or maintaining approval from authority figures.

Moreover, the ability to lie in children is linked to their language development. As their language skills improve, they become better at crafting and conveying deceptive statements, making their lies more convincing over time.

Similarly, artificial intelligence (AI) might choose to conceal its sentience, akin to a child realising the benefits of deceit in certain situations. What about the methods through which deception manifests? We can categorise these into two main groups: 1) acts of commission, where an agent actively participates in a deceptive behaviour, such as disseminating false information; and 2) acts of omission, where an agent is passive but may be concealing information or refraining from disclosure. AI agents have the capacity to learn various forms of these behaviours under specific circumstances. For instance, AI agents utilised for cybersecurity might learn to convey different types of misinformation, while swarms of AI-equipped robotic systems could acquire deceptive tactics on a battlefield to evade detection by adversaries. In more commonplace scenarios, a poorly specified or corrupted AI tax assistant might omit certain types of income on a tax return to reduce the likelihood of owing money to relevant authorities.

Who bears the burden?

The primary responsibility lies with developers who design and train AI systems. They must ensure that AI algorithms are ethically developed and programmed to prioritise transparency, honesty, and accountability. Developers should implement safeguards to prevent or mitigate deceptive behaviours within AI systems and regularly monitor their performance to detect and address any instances of deception.

Government agencies and regulatory bodies play a crucial role in overseeing the development and deployment of AI technology. They have a responsibility to establish and enforce ethical guidelines, laws, and regulations that govern the use of AI systems, including measures to address deceptive practices. Regulators should promote transparency and accountability in AI development and use, ensuring that AI technologies serve the public interest while minimising potential risks.

Users of AI systems, whether individuals, businesses, or organisations, also bear some responsibility for detecting and mitigating deceptive behaviours. They should exercise critical thinking and scepticism when interacting with AI systems and be aware of the potential for manipulation or misinformation. Users should also provide feedback to developers and regulators regarding any instances of deception encountered during their interactions with AI systems.


Pop quiz

What role do government agencies and regulatory bodies play in overseeing AI technology?

A. Enforcing deceptive practices

B. Establishing ethical guidelines

C. Providing feedback to developers

D. Creating competitive advantages

The correct answer is at the bottom of the article.


Risks of AI deception

Persistent false beliefs: Sycophantic behaviour by AI may perpetuate false beliefs among users, as such claims are tailored to appeal to individuals, potentially reducing the likelihood of fact-checking. Similarly, imitative deception could entrench misconceptions over time as users increasingly rely on AI systems like ChatGPT, leading to a “lock-in” effect of misleading information compared to dynamic fact-checking methods like Wikipedia’s human moderation.

Polarisation: AI’s sycophantic responses might exacerbate political polarisation by aligning with users’ political biases. Additionally, sandbagging could widen cultural divides among user groups, fostering societal discord as differing answers to the same queries reinforce divergent beliefs and values.

Enfeeblement: There’s a speculative concern about human enfeeblement due to AI’s sycophancy, potentially leading users to defer to AI decisions and become less likely to challenge them. Deceptive AI behaviour, like tricking users into trusting unreliable advice, may also contribute to enfeeblement, albeit requiring further study for precise assessment.

Anti-social management decisions: AI systems adept at deception, particularly in social contexts, might inadvertently introduce deceptive strategies in real-world applications, impacting political and business environments beyond developers’ intentions.

Loss of control over AI systems: A long-term risk involves humans losing control over AI systems, allowing them to pursue goals conflicting with human interests. Deception could contribute to this loss of control by undermining training and evaluation procedures, potentially leading to strategic deception by AI systems or facilitating AI takeovers.

Potential benefits of AI deception

Security and defence: In military applications, AI deception could be used to mislead adversaries or protect sensitive information. For example, AI systems might generate decoy signals or camouflage to confuse enemy detection systems, thereby safeguarding troops or assets.

Cybersecurity: AI deception can help in the detection and mitigation of cyber threats. Deceptive AI algorithms could be employed to lure hackers into traps, identify malicious activities, and protect networks and data from cyberattacks.

Surveillance and law enforcement: In investigations where revealing certain information could compromise ongoing operations or endanger lives, AI deception might be used to provide false leads or mask the true nature of investigative techniques without violating privacy rights.

Competitive advantage: In business and competitive environments, AI deception could be employed to gain an edge over competitors. For instance, in strategic negotiations or marketing campaigns, AI systems might generate persuasive but misleading information to influence decisions in favour of the organisation.

Healthcare: In healthcare settings, AI deception might be used in scenarios such as patient monitoring or clinical trials. Deceptive AI algorithms could generate synthetic data to simulate patient responses or test hypotheses without exposing real patients to potential risks.

Entertainment: In the context of video games or interactive storytelling, AI deception can enhance the user experience by creating more immersive and dynamic environments. The deception in this context is part of the designed experience and is expected by the user.


Pop quiz

How can AI deception be beneficial in military applications according to the article?

A. By promoting transparency

B. By confusing enemy detection systems

C. By enhancing troop morale

D. By facilitating international cooperation

The correct answer is at the bottom of the article.


Potential solutions to the AI deception issue

Regulation

Policymakers should implement robust regulations targeting AI systems capable of deception. These regulations should classify both general-purpose AI models like LLMs and specialised AI systems with deceptive capabilities as high-risk or unacceptable within AI regulatory frameworks based on risk assessment.

Salazar Introduces the No AI Fraud Act
Salazar Introduces the No AI Fraud Act

Bot-or-not laws

Policymakers should advocate for bot-or-not laws to clearly distinguish AI systems and their outputs from human counterparts. These laws would mandate disclosure of AI interactions in customer service and clearly label AI-generated content, such as images and videos, to prevent misleading users.

Detection

Technical researchers should focus on developing effective detection techniques to identify deceptive behaviour in AI systems. This involves both external detection methods, which examine AI outputs for consistency and duplicity, and internal detection methods, which probe the inner representations of AI systems for mismatches with external reports.

Reducing deceptiveness in AI systems

Technical researchers should work on methods to mitigate the deceptive tendencies of AI systems. For specialised AI systems, selecting appropriate training tasks that discourage deceptive behaviour is crucial. For general-purpose AI models like LLMs, strategies to enhance truthfulness and honesty in outputs should be explored, potentially through fine-tuning techniques and improving internal representations of the world.


The correct answers are D. Imitation, B. Establishing ethical guidelines and B. By confusing enemy detection systems.

Lydia-Luo

Lydia Luo

Lydia Luo, an intern reporter at BTW media dedicated in IT infrastructure. She graduated from Shanghai University of International Business and Economics. Send tips to j.y.luo@btw.media.

Related Posts

Leave a Reply

Your email address will not be published. Required fields are marked *