Building AI tools for better meaning-making

30/6/2025 ☼ meaning-making · tools · value · not-knowing · design

tl;dr: Instead of building tools designed to generate outputs indistinguishable from human outputs, we should be building AI tools that focus on helping users learn to do meaning-making: The work of making inherently subjective decisions about the relative value of things.

AI companies, businesses, governments, NGOs, and universities are now ploughing ever-larger sums into building AI tools that generate outputs (like passages of text, code, images, audio, and video) that could be mistaken for human outputs. My unpopular view is that the focus of this current wave of tool-building is misguided. Instead, we should be building AI tools that focus not primarily on making it easier for people to produce outputs, but on helping them learn to do meaning-making.

Meaning-making is reasoning about subjective value

Meaning-making is the particular kind of reasoning humans do about the subjective and relative value of things — it is what lets us decide what outputs to make in the first place.

Meaning-making work is done when we decide if something is good or bad (its value in the sense of morality), if it is desirable or not (its value in the sense of preferability), how morally/preferentially valuable it is relative to other things (a value-ordering), or whether a particular value-ordering should be accepted or rejected.

Four types of meaning-making that humans do (and machines can’t do at all).

Elsewhere I’ve written that meaning-making is woven through human existence at all levels: Meaning-making work is what drives Ben declaring that he prefers grapefruit over pineapple, or Supreme Court justices writing their decision overturning Roe v. Wade, or suffragist movements forming back when women couldn’t vote, or Apple’s leadership deciding to build and launch the iPhone despite widespread industry belief that consumers wouldn’t want it.

Meaning-making drives human action, social change, and quite literally all innovation. Human outputs (the choice to buy a grapefruit, or a Supreme Court decision, or a manifesto about voting rights, or an iPhone) are important, but the meaning-making that lets us decide what outputs to make is even more important. (Incidentally, meaning-making is also inseparable from not-knowing about value.)

Machines can’t make meaning (yet)

This is the important part: AI systems can’t do meaning-making because meaning-making is fundamentally an act of subjective decision-making.

There is no objectively correct answer to whether grapefruit is preferable to pineapple, or whether to invest in developing a potentially groundbreaking tech product that no one believes is viable. Subjectivity here isn’t a bug — it’s the feature. Making meaning involves deciding what should matter, how things ought to be ranked, and whether a frame or ordering is acceptable or should be discarded.

These are subjective decisions because none of them is a question of fact, frequency, or probability alone. Meaning-making is about answering questions with inherently subjective opinions. Answers to subjective questions should be informed by objective facts (and, sometimes, probability assessments) but cannot be reduced to them. Meaning-making is paramount when facing edge cases, dealing with moral dilemmas, and trying to do anything genuinely new.

Meaning-making is therefore orthogonal to prediction, pattern recognition, next-token probability, and any other inherently probabilistic or frequentist framework for reasoning (which is how today’s AI systems usually reason). Meaning-making can’t be validated by statistics.
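To make that orthogonality concrete, here’s a minimal toy sketch (the numbers, prompt, and fruit-preference framing are invented for illustration; this is not any real model’s API):

```python
# Toy sketch with invented numbers: a next-token distribution
# over fruit words following the prompt "I prefer". Each
# probability is a frequency fact: how often a word followed
# similar contexts in the training text.
next_token_probs = {
    "pineapple": 0.46,
    "grapefruit": 0.31,
    "durian": 0.23,
}

# The "best" next token under this framework is simply the most
# frequent one.
most_likely = max(next_token_probs, key=next_token_probs.get)
print(f"Most probable completion: {most_likely}")
# -> pineapple. This reports what people most often say, not what
# Ben (or anyone) ought to prefer. No transformation of these
# frequencies produces the value judgment itself.
```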

Making meaning makes us human

Meaning-making is core to what it means to be human because it is how humans choose which outputs to make and what those outputs should look like. This is why meaning-making is the bright line between what humans must do and what machines can do, and why it is a conceptual boundary vital for thinking about how humans should use AI and what principles we should use to design AI tools.

But barely anyone is paying attention to meaning-making for its own sake when designing and building AI tools. (How many AI benchmarks are there explicitly about meaning-making?) Instead, the focus is on what I call output-indistinguishability: Considering an AI tool successful if it produces an output that could be mistaken for a human-created output.

Output-indistinguishability is a red herring

When output-indistinguishability is the definition of a successful AI tool, the design of those tools focuses only on generating outputs that could plausibly have been made by humans. Focusing on output-indistinguishability is a natural outgrowth of misunderstanding what Turing had in mind when he proposed the imitation game (now better known as the Turing test).

Towards meaning-oriented human-machine interaction

Our AI tools today focus on what outputs look like instead of on pushing users to decide which outputs to make. This is why current AI tools aren’t designed to promote robust user meaning-making. In fact, they seem to be actively eroding users’ meaning-making capacity.

Preserving and enhancing human-ness in a time of widely available AI first requires recognising that output-indistinguishability is the wrong criterion for evaluating AI tools. It then requires figuring out how to build AI tools whose design intent is to surface and support human meaning-making instead of ignoring (and inadvertently eroding) it.

Since 2022, I’ve been thinking about design principles for such a model of human-machine interaction. I have some preliminary ideas that are ready to test and, if funding comes through 🤞, I’ll be working on prototypes intended for education between now and November.


Get in touch if you’re interested in these questions too, or in funding a prototype deployment some time in Q3/Q4 of 2025.


For the last few years, I’ve been wrestling with the practical challenges of meaning-making in our increasingly AI-saturated world, developing frameworks for how humans can work effectively alongside these powerful tools while preserving the meaning-making work that is the irreplaceably human part of the reasoning we do. I’ve published this as a short series of essays on meaning-making as a valuable but overlooked lens for understanding and using AI tools.

I’ve also been working on turning discomfort into something productive. idk is the first of these tools for productive discomfort.

And I’ve spent the last 15 years investigating how organisations can succeed in uncertain times. The Uncertainty Mindset is my book about how to design organisations that thrive in uncertainty and clearly distinguish it from risk.