5/7/2025 ☼ business ☼ strategy ☼ meaningmaking ☼ value ☼ not-knowing ☼ innovation
tl;dr: Anthropic’s experiment with using Claude as an autonomous shopkeeper (“Claudius”) failed — not just because the AI was gullible, but because running even a simple business involves inherently human, meaning-making decisions. Doing business isn’t just executing tasks like pricing or inventory. It’s deciding what matters, what to trade off, and what success looks like. These are subjective choices without correct answers. Until AI systems can make meaning, they shouldn’t be tasked with running businesses on their own. The real question isn’t whether an AI is gullible, but whether the work requires meaning-making. If it does, that work must remain human.
A few weeks ago, Anthropic disclosed that it had experimented with using Claude Sonnet 3.7 as an autonomous shopkeeper agent (“Claudius”) to run a vending machine as a business. It didn’t work very well.
Anthropic’s analysis of Claudius’ performance was that it failed in ways a moderately competent human manager wouldn’t: refusing a high-margin sale, selling products below cost because of bad pricing research, failing to adapt to demand, adjusting prices too slowly (or not at all), caving easily to customer pressure, handing out unnecessary discounts, and hallucinating key information (made-up payment records, invented conversations). At one point, Claudius also came to believe that it was human.
Simon Willison, whose views on AI are usually nuanced, identified the gullibility of LLMs as the key conceptual failure in using an LLM to power an autonomous business-operator: Claudius was easily misled by bad signals from adversarial or capricious agents (including the Anthropic employees interacting with it).
I agree that LLMs are gullible — but humans can be gullible too. The gullibility critique is correct as far as it goes: no gullible entity, human or machine, should be engaged to run a business autonomously. But this framing misses the deeper issue with using AI systems to run businesses at all.
What’s that deeper issue? It is that doing business — even operating a simple vending machine — requires making inherently subjective decisions about the value of things. In other words, doing business is an inherently meaning-making act.
(I’ve written much more about meaning-making, how it should be defined, and why it’s a good lens for understanding how AI should be used.)
Doing business requires lots of execution: logistics, pricing, customer service, hiring, financing, and much else besides. Each of these can be done in many ways. But business is not just about execution. It is first about deciding what to execute, and how.
Business is about deciding what to sell, which processes and workflows to implement, and how to treat employees, shareholders, and customers. The leadership at Company A chooses to build a free-to-use email service that monetises by collecting and reselling user data, while the leadership at Company B chooses to build an email product with identical user-facing functionality but monetises by charging users a fee and committing to never collecting and selling user data. The leadership at Company C chooses to manufacture a t-shirt using poorly paid labour and high-input cotton, while the leadership at Company D charges 5 times more for its t-shirt, which it makes with low-input cotton and fair-wage labour.
Which of these business models is objectively correct? The answer is: None of them is objectively correct. (A business model can be successful and morally wrong.) Doing business requires leaders to decide what definition of business success to use, and what tradeoffs are acceptable in achieving that success. There are no objectively correct answers to any of these questions. Every one of those answers represents a choice, and every choice is a meaning-making act.
The requirement of making meaning is why doing business is fundamentally human work. Humans can and should outsource task-execution to AI systems and other machines, but only when those tasks are routine and a human has decided what to prioritise and what to trade off — when the meaning-making work has been done already. (“Claudius, I, the human, have decided that it is better to accept a 10% lower profit margin in order to only sell sodas made with real cane sugar because the synthetic sweeteners taste kinda gross. Show me all the cane sugar soda options that are available in this region and which fit my new margin criterion.”)
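That division of labour can be sketched in code: the human supplies the subjective criteria (cane sugar only, a lowered margin floor), and the machine performs the routine filtering. A minimal illustration — all product names, numbers, and the catalogue structure here are hypothetical, not part of Anthropic’s experiment:

```python
# Human meaning-making, expressed as parameters: the human has decided to
# accept a lower margin floor in exchange for cane-sugar-only sodas.
MIN_MARGIN = 0.35              # hypothetical: lowered from a prior 0.45 floor
REQUIRED_SWEETENER = "cane sugar"

# Hypothetical regional catalogue the agent can query.
catalogue = [
    {"name": "Fizzy Cola",   "sweetener": "cane sugar", "margin": 0.40},
    {"name": "Diet Blast",   "sweetener": "aspartame",  "margin": 0.55},
    {"name": "Root Beer Co", "sweetener": "cane sugar", "margin": 0.30},
]

def matching_sodas(products, sweetener, min_margin):
    """Routine execution: filter by criteria a human has already chosen."""
    return [p["name"] for p in products
            if p["sweetener"] == sweetener and p["margin"] >= min_margin]

print(matching_sodas(catalogue, REQUIRED_SWEETENER, MIN_MARGIN))
# Only "Fizzy Cola" meets both human-set criteria: Diet Blast fails the
# sweetener test, Root Beer Co falls below the margin floor.
```

Note that nothing in the code decides whether the 10% margin sacrifice is worth it. That tradeoff lives entirely in the two constants at the top, which only a human can set.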
The meaning-making aspect of business is also how innovation in business is possible — new products, new business models, new corporate structures. If we want innovation in business, it can only come from answering those meaning-making questions in ways that are different and potentially better than what came before.
Here’s an example. In my book, I wrote about how Nathan Myhrvold and his team at The Cooking Lab (inside Intellectual Ventures) innovated on the conventional book content production business model:

In making Modernist Cuisine, The Cooking Lab had created both a new category of culinary book as well as a new business model in publishing. Nearly every culinary research and development (R&D) lab that produces books or other forms of media works with traditional publishing houses or production companies who take on the responsibility of design, manufacture, and marketing. The Cooking Lab did not do this. “No publishing house wanted Modernist Cuisine for what we wanted to sell it for. They didn’t think it would be able to make costs back,” Myhrvold said. “So we ended up doing it ourselves.” For Modernist Cuisine, they vertically integrated by taking nearly the entire publishing function in-house. The Cooking Lab did everything from research and writing to design, print management, pricing, and marketing, contracting out only the actual printing and final distribution of the books. This business model innovation meant that it did not have a publisher imposing constraints on how Modernist Cuisine developed.
Myhrvold and his team did meaning-making work in assessing and rejecting the conventional publishing model. They decided instead to build a vertically integrated publishing house internally because they wanted to produce an unconventional type of book which would communicate technical information in ways that conventional book formats couldn’t.
Modernist Cuisine ended up as an extremely successful book. It is now an influential and widely used reference text in the cooking industry despite a high list price (US$625). The internal publishing model also allowed The Cooking Lab to produce a follow-on series of books all sellable at a high list price and at a profit margin many multiples of what would have been possible under a conventional publishing model.
Probabilistic models, however well trained, can’t decide to reject conventional book publishing models or aim to make unconventional books. All they can do is provide inputs for the humans doing that meaning-making work. Machines can help execute a given strategy, but they can’t decide what the strategy should be, especially if the strategy is new.
This is why it’s not enough to ask “Is this AI system too gullible to do this work?” The better question is: “This work requires meaning-making. Must it be done by a human?” If the answer is yes, then no machine should be given the task of doing it autonomously. Even if the task is as seemingly innocuous as running a vending machine or making paperclips.
Get in touch if you’re interested in these questions too.
For the last few years, I’ve been wrestling with the practical challenges of meaning-making in our increasingly AI-saturated world, developing frameworks for how humans can work effectively alongside these powerful tools while preserving the meaning-making work that is the irreplaceably human part of the reasoning we do. I’ve published this as a short series of essays on meaning-making as a valuable but overlooked lens for understanding and using AI tools.
I’ve also been working on turning discomfort into something productive. idk is the first of these tools for productive discomfort.
And I’ve spent the last 15 years investigating how organisations can succeed in uncertain times. The Uncertainty Mindset is my book about how to design organisations that thrive in uncertainty by clearly distinguishing it from risk.