What we keep getting wrong about not-knowing

25/5/2025 ☼ not-knowing · uncertainty · risk

Here’s a question that should make every decision-maker uncomfortable: What if the sophisticated risk-management infrastructures we’ve built up aren’t just useless for most of the problems we face — but actively harmful?

It isn’t that we’re bad at managing risk. We’ve actually gotten quite good at situations where we know all the possible actions and outcomes, and can assign reliably precise and accurate probabilities to each. Well-run casinos, for example, shouldn’t lose money in the long run, because games of chance are textbook risk problems and our quantitative tools handle them beautifully.
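
For a concrete sense of why games of chance count as pure risk, here’s a minimal sketch using the standard single-zero roulette odds (the numbers are textbook figures, used here purely for illustration):

```python
# Expected value of a 1-unit bet on a single number in European roulette.
# 37 pockets (0-36); a winning number pays 35-to-1.
p_win = 1 / 37
win_profit = 35    # profit if your number comes up
lose_amount = -1   # stake lost otherwise

expected_value = p_win * win_profit + (1 - p_win) * lose_amount
print(f"Expected value per unit wagered: {expected_value:.4f}")  # ≈ -0.0270

# The house edge is about 2.7% per bet; over enough independent bets,
# the law of large numbers turns that edge into near-certain profit.
```

Every possible action, every outcome, and every probability is known in advance, which is exactly why the quantitative toolkit works so well here.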

The trouble starts when we encounter true uncertainty: situations where we don’t even know what might happen, let alone how likely each possibility might be — and we reflexively reach for the same risk-management playbook even though we’re not in a situation of risk.

It’s like when an offal-hating cook expecting to make an American Cajun andouille (usually smoked and made of heavily spiced and seasoned pork muscle cuts) accidentally finds and follows a recipe for a traditional French andouillette (made principally of lightly seasoned pig intestine).

Then scale up this kind of error to using cost-benefit analysis to decide against recommending travel restrictions early in the Covid-19 pandemic, or assuming in 2008 that complex mortgage-backed securities could be accurately priced, or believing that the likelihood of wildfires in Los Angeles in 2025 could be accurately estimated. Using a risk framework to make decisions about unknowns leads to bad outcomes when those unknowns aren’t actually risks.

And the real problem runs even deeper. We don’t just confuse risk with uncertainty; we lack proper frameworks for different types of not-knowing entirely. Not-knowing what actions are possible is fundamentally different from not-knowing what outcomes might result from known actions, which is different again from not-knowing how cause and effect actually work in your domain, which is different from not-knowing what you should value most.

Each type of not-knowing requires distinct approaches, yet we tend to either lump them together or force them all through the same analytical machinery.

Why this actually matters

This conceptual sloppiness creates two expensive problems.

First, we systematically overestimate our control and understanding. When we label genuine uncertainty as “risk,” we automatically adopt analytical frameworks that assume the unknowns we face are quantifiable and predictable. The very act of plugging numbers into spreadsheets creates an illusion of control that can be deeply seductive.

Second, we apply the wrong tools and get predictably terrible results. Faced with genuine uncertainty, we drag out probability distributions, net present value calculations, and cost-benefit analyses. These tools weren’t designed for situations where the inputs are fundamentally unknowable, so we end up feeding them fabricated numbers and heroic assumptions. The resulting strategies look rigorous and “data-driven” — until reality intervenes.
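
To see how seductive the machinery is, here’s a deliberately naive sketch: a standard NPV calculation fed with invented inputs (the cash flows and discount rate below are assumptions made up for illustration, not drawn from any real analysis). It still returns one precise-looking figure.

```python
# Toy NPV calculation: every input below is a guess.
guessed_cash_flows = [-1_000_000, 200_000, 350_000, 500_000, 650_000]  # years 0-4
guessed_discount_rate = 0.10  # chosen because it "feels about right"

npv = sum(cf / (1 + guessed_discount_rate) ** year
          for year, cf in enumerate(guessed_cash_flows))
print(f"NPV: {npv:,.0f}")  # one confident-looking number

# The output is precise to the nearest unit of currency, but that precision
# says nothing about whether the inputs were knowable in the first place.
```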

It’s like a furnituremaker who designs a new chair assuming that every piece of timber has identical grain and density. This works fine until he tries to actually build the chair, at which point he realises that wood is variable and stubborn, with a tendency to warp while being milled to size and to fail in ways the design drawing didn’t anticipate.

The deeper tragedy is what we miss. Organizations keep screwing up innovation initiatives not because they lack creativity or resources, but because they fundamentally misunderstand what innovation actually is. They fixate on replicating visible artifacts of successful innovation — Apple’s hardware design language and extraordinary tolerances, Google’s (and 3M’s) percent-time policies, Tesla’s manufacturing techniques — and attempt to reverse-engineer success by copying surface features. This is like trying to recreate a restaurant’s signature dish by following a written recipe that doesn’t mention how the kitchen chooses ingredients using years of experience and tacit knowledge, or how it adjusts its cooking methods to the variability of those ingredients.

Innovation, properly understood, is the disciplined navigation of not-knowing. Whether it’s uncertainty about market needs, technical feasibility, or competitive response, the defining characteristic of genuine innovation is that you’re operating without a reliable map. For innovation, uncertainty isn’t a bug to be fixed — it’s the feature.

Being more sophisticated about not-knowing

What if we treated not-knowing the way a skilled craftsperson treats raw materials: Not as an obstacle to overcome, but as something with particular properties that, properly understood, can be worked with rather than against?

This requires developing precise language to distinguish different species of uncertainty. Action uncertainty calls for systematic experimentation. Outcome uncertainty might be best addressed through portfolio approaches combined with structured scenario planning. Value uncertainty often requires careful reasoning about trade-offs or principled negotiation between stakeholders with different priorities.

The goal isn’t to eliminate uncertainty. (That’s as realistic as trying to eliminate wood grain.) Instead, we learn to read it properly, understand its characteristics, and use that understanding to make better decisions about how to proceed. Master furnituremakers don’t fight against the grain. Instead they study it, understand its implications, and let it inform what and how they choose to build.

This shifts us from defaulting to risk-thinking (which assumes everything important can be measured and optimized) to something more nuanced: The diagnostic skill to recognize what type of not-knowing we’re actually facing, and the practical wisdom to respond appropriately.

When individuals and organizations master this distinction, they don’t just avoid the costly mistakes that come from using the wrong analytical framework. They develop the capacity to navigate genuine uncertainty more skillfully. And, crucially, to create value precisely because they’re operating in domains where others fear to tread.


I’ve spent the last 15 years investigating how organisations can design themselves to be good at working in uncertainty by clearly distinguishing it from risk. If you’re interested in helping your organisation get better at this, we should talk.


I’ve been working on tools for learning how to turn discomfort into something productive. idk is the first of these tools.