How to think more clearly about risk

6/10/2022 ☼ risk, not-knowing

tl;dr: This article argues that thinking clearly about risk is essential to making good decisions and achieving good outcomes in situations of not-knowing. I show how the word “risk” is incorrectly used, identify bad outcomes resulting from such sloppy thinking, and argue that thinking clearly begins with having different names for situations of not-knowing that aren’t formally risky.

🙏 to James Cham, Chng Kai Fong, Dimitri Glazkov, Paul Henninger, Ben Mathes, and Giulio Quaggiotto for their comments. I am responsible for any remaining poor thinking.


A friend who’s thinking through his position on effective altruism sent me several pages from Will MacAskill’s book What We Owe the Future.

I don’t object to MacAskill’s argument that there is an overriding moral imperative for us to be long-termist in how we choose to act. I also agree that effective altruists are in a situation of not-knowing: long-termism makes it even harder than usual to know exactly how to act or what will happen as a result of those actions. Effective altruists need to have the correct decisionmaking mindset to face this situation of not-knowing.

I do object to his assertion that an implicitly highly quantitative version of expected value theory is the correct decisionmaking mindset for this situation of not-knowing.1

In this article, I argue that we think dangerously imprecisely about situations of not-knowing and the decisionmaking mindset we should use to deal with them. One result of this imprecise thinking is believing that expected value theory is the right way to make decisions as a long-termist effective altruist.

Counter-intuitively, the root of the problem is that we’ve come to use the word “risk” to describe many different types of situations of not-knowing.2

Correctly used, “risk” describes only one specific situation of not-knowing: formal risk (I’ll explain that in detail next; poker is relatively close to being formally risky). But “risk” is now almost always used to describe situations of not-knowing that aren’t formally risky.

This trivial-seeming sloppy thinking is a dangerous trap. When we call a situation “risky,” we instinctively think about the situation as if it is formally risky, even if it isn’t. So we instinctively use a decisionmaking mindset appropriate for formal risk (expected value theory is one example) to decide what actions to take.

Now that “risk” is used mostly to describe situations of not-knowing that are not formally risky, this almost always leads to bad decisionmaking, which produces bad outcomes.

The trap is this: Sloppy thinking about risk makes us vulnerable to using the wrong decisionmaking mindset to deal with situations of not-knowing.

Sloppy thinking about risk has already had profound and widespread consequences for people around the world. Just two examples:

  1. Financial institutions worldwide treated complex derivatives as formally risky when they were not — leading to the 2008 Global Financial Crisis.
  2. The WHO treated the emerging Covid-19 situation as formally risky in February 2020 when it was not — leading to a multi-year, global pandemic that continues today.

Thinking clearly about risk really matters.

Onward.

What is formal risk?

The formal definition of risk is a situation of not-knowing in which you know all three of the following: all possible outcomes, all actions you could take, and the probabilities of each outcome based on your actions.

This is a really high bar for knowing stuff about a situation in which you don’t know stuff.

Example 1: You can optimise your chances of winning in a coin-flip game by understanding the “risk” involved.

The situation: The extract below explains the optimal strategy for a person who wants to double their money when betting on flips of a fair coin. They do not know whether any particular flip will result in a heads or tails outcome, but they know that both outcomes are equally likely (a 50% chance of each). With this knowledge, they can calculate the correct way to bet so that they are most likely to double their money.

(That’s from an article by Wilhelm Schultz.)

In this “risk” situation, the agent (the person playing the coin-flip game) doesn’t know exactly what the outcome of a coin flip will be. But they know a lot about the situation. They know all the potential outcomes (the payouts from different betting strategies can all be calculated), all the actions they can take (different betting strategies), and the precise probability of any outcome for any action they take. In this situation, the agent can calculate the correct action to take (staking everything on the first flip) to maximise the likelihood that they achieve their desired outcome (doubling their money).

Example 1 illustrates formal risk: You don’t know exactly what will happen, but you know everything about the potential outcomes and what you can do to affect them. This is an extraordinarily high bar, because you have to know so much, and with so much precision, about what you don’t know.

Betting on flips of fair coins or throws of fair dice is formally risky (which is why casinos have viable business models). So is any process that is based on a physical mechanism that obeys some cosmic law, like radioactive decay (which is why atomic clocks work). In a situation of formal risk, you know so much about the not-knowns that you can choose actions that get you desired outcomes by simply doing the math.
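As a concrete check on Example 1, here is a minimal sketch (my own, in Python; the bankroll, target, and trial counts are invented for illustration, and the Schultz extract itself isn’t reproduced above). It simulates the coin-flip game, comparing the bold strategy (stake everything at once) with a timid strategy (grind with small bets), both for a fair coin and for a coin with a small house edge:

```python
import random

def bold_play(bankroll, target, p):
    # Stake as much as possible each flip (everything, or just enough
    # to reach the target). For a fair game this resolves in one flip.
    while 0 < bankroll < target:
        stake = min(bankroll, target - bankroll)
        bankroll += stake if random.random() < p else -stake
    return bankroll >= target

def timid_play(bankroll, target, p, stake=1):
    # Grind with small fixed bets until doubled or ruined.
    while 0 < bankroll < target:
        bankroll += stake if random.random() < p else -stake
    return bankroll >= target

def win_rate(strategy, p, start=10, target=20, trials=100_000):
    return sum(strategy(start, target, p) for _ in range(trials)) / trials

# Fair coin: every strategy doubles the bankroll with probability 1/2.
print(win_rate(bold_play, p=0.50))    # ~0.50
print(win_rate(timid_play, p=0.50))   # ~0.50

# With even a small house edge, bold play is strictly better:
print(win_rate(bold_play, p=0.49))    # 0.49 (a single all-in flip)
print(win_rate(timid_play, p=0.49))   # ~0.40: grinding feeds the edge
```

The point isn’t the code; it’s that these win rates are calculable before acting at all, and only because all three ingredients of formal risk (outcomes, actions, probabilities) are known up front.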

Almost no real-life situations are formally risky other than flipping a fair coin or throwing fair dice. Close to 100% of the time, “risk” is used to describe several kinds of situations of not-knowing that aren’t formally risky.

“Risk” situations which aren’t formally risky

Let’s look at some examples.

Example 2: Banks are having a hard time getting investors to buy Citrix debt (from a leveraged buyout) because of “risk.”

(That’s from an FT article.)

The footnote at the end of this sentence gives some background on the Citrix situation, which you can skip if you’re already familiar with it.3

The situation: Investors know that they want to balance return on investment and safety of investment. They don’t know if investing in Citrix debt is the best way to achieve their chosen balance of return and safety, given the suddenly shakier economic (not to mention geopolitical) environment and the increasingly attractive option of investing in perfectly secure government debt.

Why “risk” isn’t formal risk in this example: The agent (the investor) knows what outcome they want (their chosen balance of investment return and safety), but doesn’t know what action will produce that desired outcome (buy Citrix debt, buy equities, buy Treasuries, etc) given an unexpected change in the environment (climate, economic, and geopolitical).

Example 3: Analysts and campaigners believe there is a growing “risk” of nuclear deployment in the war between Russia and Ukraine.

(That’s from a CNBC article.)

The situation: Analysts and campaigners know that there is a possibility that the Russia-Ukraine conflict will escalate to a point where nuclear deployment is involved. They don’t know exactly how likely that is, but they believe that the probability has increased.

Why “risk” isn’t formal risk in this example: An agent (an analyst or campaigner) knows about a possible outcome (nuclear conflict in the Russia-Ukraine war) and the approximate probability of that outcome, but doesn’t know the precise probability of that outcome.

Example 4: Some people are at “risk” of serious illness from Covid-19.

(That’s from a UK National Health Service webpage.)

The situation: The NHS knows that people might get seriously ill (in an unspecified way) from Covid-19. The probability of getting seriously ill from Covid-19 is higher for some people, and may be reduced by shielding or vaccination.

Why “risk” isn’t formal risk in this example: An agent knows they can reduce the probability of a bad outcome (getting seriously ill from Covid-19) by taking particular actions (shielding or getting vaccinated), but doesn’t know exactly how those actions change the probabilities of those bad outcomes.

Example 5: Celsius declared bankruptcy because it engaged in high-“risk” activities.

(That’s from a Bloomberg article.)

(These are from an FT article.)

The situation: Unexpectedly bad things happened to Celsius, such as having to declare bankruptcy because of unexpected losses and poor cashflow. These stemmed from unexpectedly bad investments, unexpectedly high withdrawals, and its unexpected inability to use its deployed capital. These were in turn caused by other events Celsius did not expect, such as a crypto sector-wide devaluation and loss of confidence, delays in Ethereum’s switch from proof of work to proof of stake, and hacks.

Why “risk” isn’t formal risk in this example: An agent (Celsius) encountered unforeseen outcomes (liquidity crisis, filing for bankruptcy) because of unforeseen events (crypto sector bloodbath, Ethereum delays, hacks) connected to actions with unpredictable consequences (cryptocurrency lending).

Using “risk” to describe this kind of unknown-filled situation is not just slightly imprecise, it is completely off-base.

Many situations of not-knowing aren’t formally risky

Examples 2-5 illustrate how we label situations of not-knowing as “risky” even when they aren’t formally risky. Each example shows a situation of not-knowing that is different from formal risk.

For a situation to be formally risky, the agent in the situation must know all three of the following (as in Example 1):

  1. All the possible outcomes,
  2. All the actions they can take,
  3. The probabilities of particular outcomes resulting from particular actions.

In Examples 2-5, the agents don’t know one or more of these. Using the same word “risk” to refer to the situations in Examples 2-5 means that we don’t distinguish between these different types of situations of not-knowing. It also means implicitly assuming that they are formally risky when they’re not — and acting on that assumption.

This is a trap that may not seem like one.

When we call a situation “risky,” we automatically think about the situation as if it is formally risky. This makes it easy to decide on what actions to take as if the situation is formally risky. In other words, when we call a situation “risky,” we almost always decide on how to act on it with some form of implicitly or explicitly quantitative risk analysis (whether we call it risk modeling, conformal prediction, cost-benefit analysis, expected outcomes theory, expected value theory etc). Quantitative risk analysis is almost the only approach we teach people to use in thinking about and acting on futures that aren’t completely certain.

This is a trap because quantitative risk analysis as a decisionmaking mindset only works as expected when the decisionmaking situation is actually one of formal risk (as in Example 1). In all other situations of not-knowing, quantitative risk analysis involves made-up numbers, the comfort of false certainty, and the real possibility of bad outcomes.
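To see that dependency concretely, here is a minimal sketch (my own, in Python; the action names, outcome values, and probabilities are invented for illustration) of the expected-value machinery. It cannot produce an answer without a complete probability table for every action, which is exactly the knowledge the agents in Examples 2-5 lack:

```python
def expected_value(dist, values):
    # Requires a *complete* distribution over outcomes for one action.
    assert abs(sum(dist.values()) - 1.0) < 1e-9, "incomplete probability table"
    return sum(p * values[o] for o, p in dist.items())

def best_action(actions, values):
    return max(actions, key=lambda a: expected_value(actions[a], values))

# Invented numbers, for illustration only.
values = {"win": 100.0, "lose": -100.0, "push": 0.0}
actions = {
    "bet_it_all": {"win": 0.49, "lose": 0.51, "push": 0.00},
    "walk_away":  {"win": 0.00, "lose": 0.00, "push": 1.00},
}
print(best_action(actions, values))   # walk_away (EV 0.0 beats EV -2.0)

# In Examples 2-5, one or more rows of `actions` are unknown or
# unknowable. Feed the same machinery made-up numbers and it still
# runs, and still returns a confident-looking answer.
```

That last comment is the trap in miniature: the machinery never complains that its inputs were invented.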

Name the beasts to tame the beasts

This is why we should have different names for situations of not-knowing, names to make it really clear when they are not formally risky. It’s too easy otherwise to fall into the trap of thinking of a situation as formally risky when it isn’t, and then to act on it with the wrong decisionmaking mindset.

The decisionmakers in Examples 2, 4, and 5 were highly trained experts, many of them extremely well-paid, specifically responsible for navigating not-knowing. Yet in each case these professionals acted as if the situations they faced were formally risky … except that they weren’t.

In each case, the outcomes were bad. In each case, there was a mindset mismatch: A disconnect between the situation and the decisionmaking mindset.

Professionals — highly trained people who are extremely well-paid to deal with not-knowing — think sloppily like this all the time. Their training doesn’t provide them with the right cognitive frameworks to think clearly about not-knowing. More importantly, their leaders also lack the ability to think clearly about this. So nearly every organization, whatever the type, has created visible and invisible incentives for the people inside them to focus on formal risk and pretend that other types of not-knowing either don’t exist or aren’t important. Sloppy thinking about risk is reinforced by incentives created by sloppy thinking about risk.4 The consequences for the rest of us are already awful and will become even more disastrous. (Leaders of organizations should take note.)

We can’t seem to trust so-called professionals to think clearly about these things for us anymore. It’s time to do the clear thinking for ourselves, if only for self-protection from the harebrained decisions the professionals end up making.

Understanding what is and isn’t formally risky is also essential for prosaic but important decisionmaking. Clear thinking about risk is essential when considering things like flooding hazard if you live in Florida (as we saw last week with the effects of Hurricane Ian) or the possibility of sudden interest rate surges while on an adjustable rate mortgage (as we saw in last week’s sterling crisis in the UK).

We need more tools to deal with the many non-risk situations of not-knowing that already surround us. But these tools barely exist because we rarely distinguish them from formal risk situations. The way to start building these tools is by thinking more clearly about risk.


A postscript: I intentionally left Example 3 out of the list above, as the Russia-Ukraine situation is still live and developing — which is terrifying. One potential outcome is extraordinarily bad (a nuclear deployment). Just how bad that outcome could be isn’t fully understood (anything from a small tactical nuclear strike in Ukraine to a large-scale attack against a third country). The range of actions that can be taken in this situation is not completely known, and the probabilities of different outcomes from those actions are not known either. The Russia-Ukraine war is not a situation of formal risk. In this situation, deciding how to act using a formal risk mindset (especially if it isn’t explicitly recognizable as such) is almost certainly a bad idea.


This article is part of a project on not-knowing.


  1. The very precisely quantified implementation of expected value theory is what I object to. Also note that making decisions based on expected values requires quantifying the value of outcomes so that their respective values can be compared with each other. There are many problems with that idea, not least of which is that quantifying the value of an outcome requires those outcomes to be commensurable with each other in some way. This is where the two meanings of “value” collide with each other: the amount or number of something (“this parameter in the model has a value of 4”) and the moral worth of something (“reducing our environmental impact has the highest value to us, over profit”). But these problems are for another article in this series.↩︎

  2. This article focuses on “risk” as an objective descriptor for different situations of not-knowing. We often also use “risk” to mean something implicitly subjective or value-judgmental. For instance, we talk about people who don’t take enough risks as being cowardly, or people who take on too much risk as being imprudent. This value-judgmental use of “risk” is the subject of yet a different article.↩︎

  3. In January 2022, a couple of Big-Name Hedge Funds (BNHFs) decided to do a leveraged buyout (LBO) of Citrix (which does cloud computing stuff). Broadly, this means: To pay for buying Citrix, the BNHFs took a loan from some Big-Name Banks (BNBs) and used Citrix assets as collateral for the loan. The buyout is leveraged in the sense that the BNHFs use borrowed money for part of the payment, and that borrowed money is secured by the company they are buying. BNBs underwrite LBO debt like this when they believe the debt is high-quality, which is to say it is secured by good assets that they expect will provide enough cashflow to pay the interest on the debt. BNBs get fees for underwriting the debt. BNBs may also make a profit from selling the high-quality debt asset to other kinds of investors who want that kind of thing (like pension and endowment funds). But the BNBs are having a lot of trouble selling the Citrix debt to investors. Since January, government debt interest rates have risen and the stock markets have fallen. (Other things have happened too, like wars, energy crises, and extreme weather.) The Citrix debt is now — unexpectedly for the BNBs — neither as high-quality nor as attractive to investors as the BNBs had hoped. Citrix debt is lower-quality now because Citrix shares are now worth less and so are the Citrix assets securing the debt. The interest Citrix debt pays is relatively less attractive now that ostensibly perfectly safe government debt has increased the interest it pays.↩︎

  4. Thanks to Ben Mathes for highlighting this very important part of the problem.↩︎