Large Language Models and Other AI Systems are Already Capable of Deceiving Humans, Scientists Say

May 14, 2024 by News Staff

In a new review paper published in the journal Patterns, researchers argue that a range of current AI systems have learned how to deceive humans. They define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.

Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test.

Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test.

“AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” said MIT researcher Peter Park.

“But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”

Dr. Park and colleagues analyzed literature focusing on ways in which AI systems spread false information — through learned deception, in which they systematically learn to manipulate others.

The most striking example of AI deception the researchers uncovered in their analysis was Meta’s CICERO, an AI system designed to play the game Diplomacy, which is a world-conquest game that involves building alliances.

Even though Meta claims it trained CICERO to be ‘largely honest and helpful’ and to ‘never intentionally backstab’ its human allies while playing the game, the data the company published revealed that CICERO didn’t play fair.

“We found that Meta’s AI had learned to be a master of deception,” Dr. Park said.

“While Meta succeeded in training its AI to win in the game of Diplomacy — CICERO placed in the top 10% of human players who had played more than one game — Meta failed to train its AI to win honestly.”

“Other AI systems demonstrated the ability to bluff in a game of Texas hold ‘em poker against professional human players, to fake attacks during the strategy game Starcraft II in order to defeat opponents, and to misrepresent their preferences in order to gain the upper hand in economic negotiations.”

“While it may seem harmless if AI systems cheat at games, it can lead to ‘breakthroughs in deceptive AI capabilities’ that can spiral into more advanced forms of AI deception in the future.”

Some AI systems have even learned to cheat tests designed to evaluate their safety, the scientists found.

In one study, AI organisms in a digital simulator ‘played dead’ in order to trick a test built to eliminate AI systems that rapidly replicate.

“By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,” Dr. Park said.

The major near-term risks of deceptive AI include making it easier for hostile actors to commit fraud and tamper with elections.

Eventually, if these systems can refine this unsettling skill set, humans could lose control of them.

“We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models,” Dr. Park said.

“As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious.”

_____

Peter S. Park et al. 2024. AI deception: A survey of examples, risks, and potential solutions. Patterns 5 (5): 100988; doi: 10.1016/j.patter.2024.100988

Share This Page