Not-knowing discussion #7: Causal not-knowing (summary)
17/8/2023
☼ not-knowing
☼ ii
☼ iisummary
This is a summary of the seventh session in the Interintellect series on not-knowing, which happened on 25 July 2023, 1700-1900 CEST.
Upcoming: “The fog of time,” 21 Sep 2023, 2000-2200 CEST. The ninth episode in my Interintellect series on not-knowing is about how the length of the time horizon exacerbates not-knowing. More information and tickets here.
Causal not-knowing
Reading: Causal not-knowing.
Participants: Indy N., Egle V., Les G., Tina K., Mack F., Kimsia S., Ben S., Chris B.
In this conversation, we talked about how making good strategy is hard when you don’t know how particular actions are connected to particular outcomes — in other words, when there is not-knowing about causation. At the same time, causal not-knowing can be a source of strategic and tactical advantage. We talked about five types of causal not-knowing and how each type offers different constraints and opportunities for strategy and tactics.
Discussion highlights
- Scale of analysis probably affects causal not-knowing. Causality may be differently knowable/known at different system scales — e.g. understanding the behavior of a large crowd vs a single individual (Asimov’s psychohistory is an expression of this possibility).
- Causal not-knowing is one root cause of the common problem of believing that you have a working model of the situation even when you don’t. This is related to what I’ve previously called mindset mismatch.
- Important to distinguish between chaos, complicatedness, and complexity — they each seem to affect causal not-knowing differently.
- Chaos: Being unpredictable to the point of randomness. Dealt with using slack, “trusting the process,” novelty-based search.
- Complicated: Having many parts (potentially interconnected) but nonetheless susceptible to linear and deterministic analysis. Dealt with by investing time/effort in analysis.
- Complex: Having many interdependent parts and not fully susceptible to linear and deterministic analysis. Dealt with using sensemaking, slack.
- Slack is vital for dealing with causal not-knowing. Slack takes different forms: space, time, people, permission to fail, permission to be small.
- Some tools/approaches for dealing with causal not-knowing:
- Progressive experimentation.
- Constant checking-in.
- Rhythms of development directed at smaller outcomes (e.g. weekly team-level check-ins, quarterly strategic check-ins).
- Expectations of breakage and designed-in accommodative slack (e.g. crash-only computing; see the sketch after this list).
- Use of frameworks to ensure coverage of known assumptions.
- Explicit identification of situations where quantitative assessment of causation (i.e., a numerical probability) is inappropriate.
- Sensemaking training. Specifically, explicitly building models that disclose when the conditions underlying their assumptions change. I call this good brittleness.
- Composing teams with members temperamentally suited to the amount of causal not-knowing the team will face. (Participants brought up the difference between pioneers/settlers/town planners.)
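As an aside on crash-only computing: below is a minimal sketch, in Python, of what designing for expected breakage can look like. It is my illustration, not something built in the session, and the checkpoint file, task names, and simulated failures are all hypothetical.

```python
# A crash-only worker: it assumes it can die at any moment, so recovery
# is simply a restart from the last checkpoint. All names are hypothetical.
import json
import os
import random

CHECKPOINT = "progress.json"  # hypothetical checkpoint file

def load_done() -> set:
    """Recover the set of completed tasks written before any previous crash."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return set(json.load(f))
    return set()

def save_done(done: set) -> None:
    """Atomic write via rename, so a crash mid-save can't corrupt the checkpoint."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(sorted(done), f)
    os.replace(tmp, CHECKPOINT)

def process(task: str) -> None:
    """Stand-in for real work; fails unpredictably to simulate breakage."""
    if random.random() < 0.3:
        raise RuntimeError(f"simulated crash while on {task}")
    print(f"finished {task}")

def run(tasks: list[str]) -> None:
    done = load_done()
    for task in tasks:
        if task in done:  # idempotent: skip work already completed
            continue
        process(task)
        done.add(task)
        save_done(done)

if __name__ == "__main__":
    tasks = [f"task-{i}" for i in range(5)]
    # Supervisor loop: the designed-in slack is tolerating crashes and
    # restarting, rather than trying to prevent every failure up front.
    while True:
        try:
            run(tasks)
            break
        except RuntimeError as e:
            print(f"crashed ({e}); restarting")
```

The point is not the specific mechanism: the design effort goes into making recovery cheap (a form of slack) rather than into trying to predict and prevent every failure.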
Some links shared by participants
- Learning to live with complexity: On complexity and inconsistent causation.
- Leadership in an organized anarchy: On creating causation in complex, messy, adversarial systems.
- Ontological remodeling: On rethinking categories and systems of understanding.
- The soul of a new machine: On using multiple teams to deal with causal not-knowing (in this case, about which hardware architecture would work in designing a new computer).
- Lean thinking vs. fat thinking: On “fat thinking,” which is inefficient/weird and associated with open-ended learning and pulling back from superlegible goals. (I don’t agree with Venkat on the conditions under which fat thinking is good, though this may be due to an inadvertent typo.)
- Proxy failure: On how making a metric a goal makes it a bad metric.