5/7/2025 ☼ not-knowing ☼ risk ☼ uncertainty ☼ decisionmaking ☼ optimisation
tl;dr: A futurist friend recently made an expensive, early exit from the Middle East during a flare-up in the Israel-Iran conflict … then felt embarrassed when the situation de-escalated theatrically. That embarrassment stems from misframing the decision as an optimisation problem — which assumes that precise estimation of risks, probabilities, and timings is possible. It wasn’t. The situation wasn’t only risky, it was also uncertain — marked by capricious actors uninterested in playing by well-understood rules. In uncertain situations, optimisation fails and other frames for decisionmaking make much more sense. The real skill here is correctly diagnosing the kind of not-knowing you’re facing, then choosing a decision frame suited to uncertainty, not just risk.
In the last week of June, there was another spate of advanced sabre-rattling in the Israel-Iran conflict involving missiles. Several Middle East countries — including those with big international hub airports — closed their airspaces.
I’m in the latest season of a long-running group chat populated mostly by futurists and futures-adjacent people. Someone in the group sent a series of messages about their response to this latest escalation of hostilities more or less in realtime. With his permission, I’ve consolidated and reproduced them below.
“Qatar closed its airspace then the UAE surprised me by doing the same. All my sources here are freaking out about the inbound missiles to Doha. I think ‘Shit, this is blowing my risk register off the scale, I should act.’ So I called Emirates to change my summer tickets from Friday to tonight. Call center totally jammed. Can’t get through at all. So COVID style I jump in the car, race to the airport and change out tickets to a 3am flight. I’m already wondering if this is the right call when on the drive home, the same friends and sources are saying the attack was intentionally weak, symbolic, and a carefully staged theatrical off-ramp. Great! But fuck, it’s twice the price to change the tickets back and now I jumped the gun. My sang froid predictive record and risk radar was overtuned and jumpier than usual. Just feel embarrassed a bit as an overly sensitive futurist who has thus far been calling this situation pretty decently.”
Embarrassment is the natural and unavoidable feeling that comes from believing that you have misread the “optimal” time to leave. (In their words, “I used to practically write this shit and now I’m falling for it like a first year Muggle.”)
The idea of there being an optimal time makes a strong implicit assumption: that it is possible to precisely and accurately estimate the probability and the timing of known downsides in a situation. Optimising requires knowing all the ways things could go wrong, how likely each one is, and when each might happen. Only then is it possible to leave at just the right moment — not too early, but not too late.
Framing a decision as an optimisation problem (of timing or anything else) is only correct when the situation is risky. Optimisation is a fundamentally mathematical, margin-oriented approach to decisionmaking. The only kinds of unknowns that are mathable in this way are the risky ones. In risky situations, you don’t know exactly what will happen, but you know almost everything about what you don’t know. Important real-life situations are almost never only risky. (Risk is not the same thing as uncertainty.)
The Middle East situation doesn’t seem only risky. It involves multiple state and non-state actors, many of whom appear to be capricious or uninterested in reliably perpetuating formerly well-understood patterns of geopolitical interaction. Trying to estimate accurate probabilities of what these actors will do is a mug’s game. The full range of outcomes isn’t even known, let alone how likely each outcome might be.
If you can’t estimate probabilities accurately, and you don’t even know what unknowns you’re facing, you shouldn’t be trying to optimise. Instead, consider using a different framing for the decisions you have to make. In the Middle East, when Iranian missiles are falling on a US base in a country not directly involved in the Israel-Iran conflict and you’re deciding whether to bring forward your plane ticket, some useful framing questions might be “Could this get much worse?” and “If it gets much worse, will I regret not having left now?”
If yes to both questions — if you could get stuck in a war zone when you could have flown out early for a few thousand dollars and some inconvenience — then leaving “too early” isn’t irrational and embarrassing. It actually seems quite sensible.
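Those two framing questions have a rough formal cousin in decision theory: minimax regret, which picks the action whose worst-case regret is smallest, without needing any probability estimates at all. Here is a toy sketch of the flight decision in those terms. All the numbers are invented for illustration (a hypothetical rebooking cost versus a very large cost for being stranded in a war zone); the point is the structure, not the figures.

```python
# Toy regret table for the "leave now vs. wait" decision.
# Costs are hypothetical, in rough dollars-equivalent of money plus
# inconvenience; "stuck in a war zone" is given a very large cost.
COSTS = {
    "leave_early": {"escalates": 3_000, "de_escalates": 3_000},
    "wait":        {"escalates": 500_000, "de_escalates": 0},
}

def minimax_regret(costs):
    """Pick the action whose worst-case regret across states is smallest."""
    states = next(iter(costs.values())).keys()
    # Regret of an action in a state = its cost there minus the best
    # achievable cost in that state (i.e. how much you'd kick yourself).
    best_in_state = {s: min(c[s] for c in costs.values()) for s in states}
    regrets = {
        action: max(c[s] - best_in_state[s] for s in states)
        for action, c in costs.items()
    }
    return min(regrets, key=regrets.get), regrets

choice, regrets = minimax_regret(COSTS)
print(choice, regrets)
# → leave_early {'leave_early': 3000, 'wait': 497000}
```

With these (made-up) numbers, leaving early caps your regret at the rebooking premium, while waiting exposes you to enormous regret if things escalate. No probability of escalation appears anywhere, which is exactly why this frame suits uncertainty rather than risk.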
The alternative framing for that particular situation is precautionary, but the right alternative frame for decisionmaking always depends on the particularities of the uncertain situation you face. Diagnosis of the situation is a crucial skill.
If the uncertainty in the situation is due to, say, a new technology emerging, an appropriate alternative decisionmaking frame might be one which emphasises the degrees of freedom the technology affords: to build new types of products, to reach new types of markets, to change how long-established processes are done. The only commonality these alternative decisionmaking frames have is that they explicitly address different types of non-risk, unquantifiable not-knowing and distinguish them from risk.
It’s important to be explicit about diagnosing whether the unknowns in situations are risky or uncertain. This is because we’ve been trained to treat unknowns as things to be mapped, modeled, and calculated with precision. But that works only when you’ve correctly diagnosed the situation as risky and thus susceptible to precise estimation. When the situation is actually uncertain, precise estimation leads only to false confidence.
My futurist friend in the Middle East writes (from a remote tropical location): “I think the moral of the story is that when the bottom of the downside consequences is unknown and the mechanisms of acceleration towards it are unclear, it pays to err on the side of precaution.” Right on.
So, to end, here’s a relevant memory. It’s from mid-March 2020, in the very early days of Covid, before we even really knew what it was. A major city in China had recently locked down, and a patchwork of international travel restrictions was beginning to emerge. I had a friend from Cambridge, MA, pass through London, about half-way through their annual European lecture tour.
We met for a coffee just north of Oxford Street, but all we could talk about was this new virus and whether they should fly back to the US immediately and abandon the rest of the tour, not only losing the speaking fees but also eating all the costs of pre-booked, non-refundable flights and accommodation.
Over coffee, we discussed different frames for decisionmaking given the uncertainties (not just risks) Covid-19 posed. They decided to go back home immediately, spent several hours looking for a flight, and managed to leave the next day. Two days later, the US announced a travel ban on arrivals from the UK that was only fully lifted in November 2021.