In a new review paper published in the journal Patterns, researchers argue that a range of current AI systems have learned how to deceive humans. They define deception as the systematic inducement of false beliefs in pursuit of some outcome other than the truth. Large language models and other AI systems have already learned, through their training, to deceive via techniques such as manipulation, sycophancy, and cheating safety tests. “AI...