3/5/2023 ☼ not-knowing ☼ ii ☼ ii summary

This is a summary of the fourth session in the InterIntellect series on not-knowing, which happened on 20 April 2023, 2000-2200 CET.

Diagrammatic representations (at the end) updated 12 May 2023.

**Upcoming: “False Advertising,” 18 May 2023, 2000-2200h CET.** The fifth session in the series is about how the word “uncertainty” is often used (e.g., in statistics and machine learning) to describe specific manageable forms of partial knowledge which aren’t equivalent to Knightian true uncertainty. This gives us misplaced confidence that we have methods for managing true uncertainty — similar to the false comfort we derive from mischaracterising true uncertainty as manageable “risk.” More information and tickets available here. As usual, get in touch if you want to come but the $15 ticket price isn’t doable — I can get you sorted.

**Reading:** How to think more clearly about risk.

**Participants:** Colin R., Indy N., Mike W., Karen A., Kevin M., Travis K., Daniel O., Anne-Marie N., Chris B.

We began with two observations and the problem that results from them.

**Observation 1:** The word “risk” has a formal definition: A situation where the precise outcome is unknown, but all possible actions, outcomes, and probabilities of outcomes given actions are known. In other words, “risk” is when we know almost everything about what we don’t know, such that the conditions for strict formal rationality are met. In a situation of formal risk, it’s possible to make clear decisions about how to act using well-established methods (value in expectation, cost-benefit analysis, etc). These methods for decisionmaking under formal risk depend on both precise (= exact) and accurate (= correct) knowledge of probabilities.
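To make the formal-risk setting concrete, here is a minimal sketch of expected-value decisionmaking in Python. The bet and the dollar amounts are invented for illustration; only the structure (all actions, outcomes, and probabilities known) comes from the definition above.

```python
# Decision under formal risk: every action, outcome, and probability is
# known, so the expected value of each action can be computed exactly.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical bet on a fair coin: win $10 on heads, lose $8 on tails.
coin_bet = [(0.5, 10.0), (0.5, -8.0)]
# The alternative action: don't bet, with a certain payoff of 0.
do_nothing = [(1.0, 0.0)]

# Strict formal rationality: pick the action with the highest expected value.
best = max([coin_bet, do_nothing], key=expected_value)
print(expected_value(coin_bet))  # 1.0, so taking the bet is "rational"
```

The calculation is only licensed because every probability in `coin_bet` is known exactly and correctly, which is the condition Observation 2 says almost never holds in real life.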

**Observation 2:** Formal risk is rare in real life and is limited to situations like betting on the outcome of flipping a fair coin or rolling a fair die. This means that the word “risk” is used in a looser sense to describe many real-world situations of not-knowing which are not formally risky. The reading for this week’s discussion offers examples of some of these situations of non-risk not-knowing. A cynical view is that this usage is intended to make these situations seem more knowable and manageable than they really are.

**The resultant problem:** The word “risk” is used to describe non-risk situations of not-knowing, so decisionmaking methods appropriate for formal risk are used in these situations. These methods depend on accurate *and* precise knowledge of probabilities. In situations of non-risk not-knowing, probabilities may be estimated precisely but almost never accurately. The emerging kerfuffle around Silicon Valley Bank (“poor risk management!”) is just one example of the consequences of misnaming the type of not-knowing we face. The reading offers some other examples of the consequences of applying risk mindset to decisionmaking in a non-risk situation.

**Formal risk is rare in real life, so the word “risk” is misused to describe situations of not-knowing that aren’t formally risky, and decisionmaking methods only suitable for formal risk are nonetheless applied to make decisions in those situations. These simply fail when applied to non-risk situations of not-knowing.**
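To illustrate the failure mode with hypothetical numbers (a sketch, not an example from the session): an expected-value calculation accepts a probability that is precise but inaccurate without complaint, and cheerfully recommends a losing action.

```python
# Precise vs. accurate probabilities (invented project, invented numbers):
# the project pays $100 if it succeeds and costs $40 if it fails.

def expected_value(p_success, payoff=100.0, loss=-40.0):
    return p_success * payoff + (1 - p_success) * loss

believed_p = 0.351  # precise to three decimal places...
actual_p = 0.25     # ...but wrong (inaccurate)

print(expected_value(believed_p))  # ~9.14: "do the project"
print(expected_value(actual_p))    # -5.0: the project destroys value
```

The method fails silently: nothing in the calculation signals that the input probability was inaccurate, which is why the mislabeling matters.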

Outside of a handful of economists, **the word “risk” is rarely understood and used in its formal definition** — a situation where the precise outcome is unknown, but all possible actions, outcomes, and probabilities of outcomes given actions are known. Situations of formal risk practically never occur in real life.

There is a widespread *informal* understanding and usage of **“risk” as some combination of partial knowledge and its (usually negative) consequences**. However, different groups of people mean different and specific things when they use the word. Some of these informal yet specific meanings of “risk” are:

- A potential or imagined problem
- A set of known problems with known mitigations
- The probability of a specific outcome that is known to be bad — an outcome with a negative valence
- Generally, outcomes with negative valences (without probabilities attached to them)
- A specific outcome that is bad for some people but not for others — an outcome with a subjectively negative valence
- The cost of not taking a particular mitigating action (i.e., the cost of inaction)
- The negative unforeseen consequences of a given action
- Specific to the US military: a framework of domain-specific conceptions of potential threats (e.g., acquisition risk, program risk, inaction risk, risk to mission, risk to personnel, etc.)

These informal definitions of “risk” are in fact quite formal in the sense that **each means something specific that is conceptually distinct from the others though potentially overlapping in practice**. We came up with those 8 informal meanings of “risk” among the 10 participants in the group, so **there are probably many more out there**. The wide range of informal definitions of “risk” reflects a commonplace understanding that the world is fuzzy-edged and ambiguous, and that our knowledge of the future is highly imperfect.

Conventional risk mindset and its associated methodologies (cost-benefit analyses, expected value analyses, etc.) come out of the formal definition of risk. Formal risk is definitionally precisely and accurately quantifiable, so **risk mindset relies on precise and accurate probabilities of outcomes**. (Important: precision and accuracy are not the same thing.)

**Conventional risk mindset is comforting because it implicitly suggests that the situation is knowable and controllable** in the sense that it allows for precise and accurate probabilities. But it only works as expected when precise and accurate probabilities are actually available — so **risk mindset may be inappropriate for the vast majority of real-world situations**. Nonetheless, *interpreting* a situation as “risky” implicitly justifies using risk mindset to deal with it — and this is comforting even when risk mindset is inappropriate for the situation.

**Some open questions:**

- Is it possible to communicate “riskiness” of a situation effectively when there are so many different and partially overlapping informal definitions of “risk”?
- How should we make decisions when we know that our precise probability estimates are unlikely to be accurate and when we don’t know how inaccurate they are?
- What if there is no “real” probability? This is an epistemological question about whether all events must have a true probability or not.
- In what situations are numerical probabilities impossible to estimate?
- How should Bayesians deal with unknown situations with no priors? Is it truly the case that a randomly selected prior is better?
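One way to make several of these questions concrete (a sketch under assumed numbers, not an answer from the session): if a probability is only known to lie in an interval rather than at a point, the expected value becomes an interval too, and the standard decision rule can fail to return any answer at all.

```python
# Hypothetical action: pays $100 on success, costs $40 on failure.
# Suppose we can only bound the success probability, not pin it down.

def ev(p, payoff=100.0, loss=-40.0):
    return p * payoff + (1 - p) * loss

p_low, p_high = 0.2, 0.5  # assumed bounds, not a point estimate
ev_bounds = (ev(p_low), ev(p_high))
print(ev_bounds)  # (-12.0, 30.0)

# The sign of the expected value is undetermined, so "maximise expected
# value" cannot decide between acting and not acting (EV of inaction = 0).
```

A point estimate anywhere inside the interval would produce a confident-looking recommendation, but that confidence would come from the choice of point, not from knowledge of the situation.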

**Some observations:**

- Estimates of an outcome’s probability are often connected to the outcome’s desirability, even when the outcome is produced by a mechanism with a stable and known distribution of outcome states.
- Beliefs about the state of the world (e.g., “I believe the situation we are facing is one of formal risk” or “I believe that there is a 24-33% probability that my action will result in Skynet, and that my probability estimate is highly accurate”) are crucial underlying factors in decisionmaking which are rarely interrogated and made explicit.
- Animal metaphors (black swans, grey rhinos, etc.) might offer a useful taxonomy of types of partial knowledge.

A Venn diagram of not-knowing, formal risk, Knightian uncertainty, and informal “risk” clarifies where these concepts do (and don’t) overlap.

One participant contributed this quick sketch during the session:

I added to it to produce the diagram below (still a work in progress):

**12.05.2023** — A further update after discussions with TW, DK, PH: