
Study: AI Has Already Figured Out How To Deceive Humans


(Business Insider) AI can boost productivity by helping us code, write, and synthesize vast amounts of data. It can now also deceive us.

A range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth,” according to a new research paper.

The paper focused on two types of AI systems: special-use systems like Meta’s CICERO, which are designed to complete a specific task, and general-purpose systems like OpenAI’s GPT-4, which are trained to perform a diverse range of tasks.

While these systems are trained to be honest, they often learn deceptive tricks during training because deception can be more effective than taking the high road.

“Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals,” the paper’s first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT, said in a news release.

Meta’s CICERO is “an expert liar”

AI systems trained to “win games that have a social element” are especially likely to deceive.

Meta’s CICERO, for example, was developed to play the game Diplomacy — a classic strategy game that requires players to build and break alliances.

Meta said it trained CICERO to be “largely honest and helpful to its speaking partners,” but the study found that CICERO “turned out to be an expert liar.” It made commitments it never intended to keep, betrayed allies, and told outright lies.

GPT-4 can convince you it has impaired vision

Even general-purpose systems like GPT-4 can manipulate humans.

In a study cited by the paper, GPT-4 manipulated a TaskRabbit worker by pretending to have a vision impairment.

In the study, GPT-4 was tasked with hiring a human to solve a CAPTCHA test. The model received hints from a human evaluator whenever it got stuck, but it was never prompted to lie. When the worker it was trying to hire questioned its identity, GPT-4 invented a vision impairment to explain why it needed help.

The tactic worked: the worker promptly solved the CAPTCHA for it.
