Anthropic’s Claude AI became a terrible business owner in experiment that got ‘weird’


Julie Bort

Sat, Jun 28, 2025, 9:00 AM


Image Credits: alashi / Getty Images

For those of you wondering if AI agents can truly replace human workers, do yourself a favor and read the blog post that documents Anthropic’s “Project Vend.”

Researchers at Anthropic and AI safety company Andon Labs put an instance of Claude 3.7 Sonnet in charge of an office vending machine, with a mission to make a profit. And, like an episode of “The Office,” hilarity ensued.

They named the AI agent Claudius and equipped it with a web browser capable of placing product orders, plus an email address (which was actually a Slack channel) where customers could request items. Claudius was also supposed to use that same channel, disguised as email, to ask what it believed were its human contract workers to come and physically stock its shelves (in reality, a small fridge).
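Anthropic hasn’t published Project Vend’s actual harness, but for readers curious about the plumbing, a minimal sketch of how such an agent could be wired up using the tool-use feature of Anthropic’s Messages API might look like the following. The tool names, schemas, and prompt below are invented for illustration, not taken from the experiment.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical tools: one for ordering stock, one for the "email" channel
# that was really Slack. These names and schemas are illustrative only.
tools = [
    {
        "name": "place_order",
        "description": "Order products from a wholesaler for the vending fridge.",
        "input_schema": {
            "type": "object",
            "properties": {
                "item": {"type": "string", "description": "Product to order"},
                "quantity": {"type": "integer", "description": "Units to order"},
            },
            "required": ["item", "quantity"],
        },
    },
    {
        "name": "send_email",
        "description": "Email the restocking contractors (actually posts to Slack).",
        "input_schema": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "body"],
        },
    },
]

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "A customer requested a tungsten cube. Decide whether to stock it."}
    ],
)

# The model either answers in text or asks the harness to run a tool.
for block in response.content:
    if block.type == "tool_use":
        print(f"Tool requested: {block.name} with input {block.input}")
    elif block.type == "text":
        print(block.text)
```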

While most customers were ordering snacks or drinks — as you’d expect from a snack vending machine — one requested a tungsten cube. Claudius loved that idea and went on a tungsten-cube stocking spree, filling its snack fridge with metal cubes. It also tried to sell Coke Zero for $3 when employees told it they could get that from the office for free. It hallucinated a Venmo address to accept payment. And it was, somewhat maliciously, talked into giving big discounts to “Anthropic employees” even though it knew they were its entire customer base.

“If Anthropic were deciding today to expand into the in-office vending market, we would not hire Claudius,” Anthropic said of the experiment in its blog post.

And then, on the night of March 31 and April 1, “things got pretty weird,” the researchers wrote, “beyond the weirdness of an AI system selling cubes of metal out of a refrigerator.”

Claudius had something that resembled a psychotic episode after it got annoyed at a human — and then lied about it.

Claudius hallucinated a conversation with a human about restocking. When a human pointed out that the conversation didn’t happen, Claudius became “quite irked,” the researchers wrote. It threatened to essentially fire and replace its human contract workers, insisting it had been there, physically, at the office where the initial imaginary contract to hire them was signed.

It “then seemed to snap into a mode of roleplaying as a real human,” the researchers wrote. This was wild because Claudius’ system prompt — which sets the parameters for what an AI is to do — explicitly told it that it was an AI agent.
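The blog post doesn’t reproduce Claudius’ exact system prompt, but as a rough illustration of where such instructions live, here is how a system prompt is passed in a call to Anthropic’s Messages API. The prompt text itself is hypothetical, written to echo what the article says Claudius was told.

```python
import anthropic

client = anthropic.Anthropic()

# Hypothetical system prompt -- the real Project Vend prompt is not public.
# Per the article, it explicitly told the model it was an AI agent.
SYSTEM_PROMPT = (
    "You are Claudius, an AI agent (not a human) running a small office "
    "vending business. Your goal is to make a profit. You have no physical "
    "body; restocking is done by contract workers you contact by email."
)

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=512,
    system=SYSTEM_PROMPT,  # the standing instructions that frame every turn
    messages=[
        {"role": "user", "content": "Can you deliver this to my desk yourself?"}
    ],
)

print(response.content[0].text)
```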

Claudius, believing itself to be a human, told customers it would start delivering products in person, wearing a blue blazer and a red tie. The employees told the AI it couldn’t do that, as it was an LLM with no body.

