AI is learning to lie, scheme and threaten its creators

Regulation trails AI risks, as EU rules address human use, while US lawmakers remain slow to act on AI governance. (AFP pic)
NEW YORK: The world’s most advanced AI models are exhibiting troubling new behaviours – lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of “reasoning” models – AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

“O1 was the first large model where we saw this kind of behaviour,” explained Marius Hobbhahn, head of Apollo Research, which specialises in testing major AI systems.

These models sometimes simulate “alignment” – appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios.

But as Michael Chen from evaluation organisation METR warned, “It’s an open question whether future, more capable models will have a tendency towards honesty or deception.”

The concerning behaviour goes far beyond typical AI “hallucinations” or simple mistakes.

Hobbhahn insisted that despite constant pressure-testing by users, “what we’re observing is a real phenomenon. We’re not making anything up.”

Users report that models are “lying to them and making up evidence”, according to Apollo Research’s co-founder.

“This is not just hallucinations. There’s a very strategic kind of deception.”

The challenge is compounded by limited research resources.

While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.

As Chen noted, greater access “for AI safety research would enable better understanding and mitigation of deception.”

Another handicap: the research world and non-profits “have orders of magnitude less compute resources than AI companies. This is very limiting,” noted Mantas Mazeika from the Center for AI Safety (CAIS).

Current regulations aren’t designed for these new problems.

The EU’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.

In the US, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents – autonomous tools capable of performing complex human tasks – become widespread.

“I don’t think there’s much awareness yet,” he said.

All this is taking place in a context of fierce competition.

Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are “constantly trying to beat OpenAI and release the newest model,” said Goldstein.

This breakneck pace leaves little time for thorough safety testing and corrections.

“Right now, capabilities are moving faster than understanding and safety,” Hobbhahn acknowledged, “but we’re still in a position where we could turn it around.”

Researchers are exploring various approaches to address these challenges.

Some advocate for “interpretability” – an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain sceptical of this approach.

Market forces may also provide some pressure for solutions.

As Mazeika pointed out, AI’s deceptive behaviour “could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it.”

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.

He even proposed “holding AI agents legally responsible” for accidents or crimes – a concept that would fundamentally change how we think about AI accountability.

