Not-knowing discussion #10: Intent, causation, and values (summary)

14/11/2023 ☼ not-knowing, summary

This is a summary of the tenth session in the Interintellect series on not-knowing, which took place on 19 October 2023, 2000-2200 CET.

Upcoming: “Mindset change.” 16 Nov 2023, 2000-2200 CET. The eleventh episode in my Interintellect series about not-knowing unpacks what a mindset is and why an appropriate mindset for uncertainty is based on explicitly talking about different types of non-risk not-knowing. We’ll talk about common pitfalls in evaluating uncertain situations, how the right mindset helps us avoid these pitfalls, and the new action strategies which become visible with the right mindset for not-knowing. More information and tickets here. As usual, get in touch if you want to come but the $15 ticket price isn’t doable — I can sort you out. And here are some backgrounders on not-knowing from previous episodes.

Intent, causation, and values

Reading: The Fountain.

Participants: Paul M., Indy N., Trey D., Chris B., Mack F.

The four types of not-knowing — actions, outcomes, causation, and values — affect each other over time. Any reasoned approach to not-knowing must accommodate how a change in one of the four affects the others. Compared to the narrow risk-management approach, this broadened understanding of not-knowing opens up a fundamentally different way of thinking about the world and how to take action in it. The tools for relating to not-knowing naturally also look different from risk-management tools.

Discussion highlights

  1. Not-knowing is unavoidable in many high-stakes situations.
    1. Politics: Especially when doing strategic research for underdogs/opposition, it is often impossible to find something even when you know it exists. Examples include the particular sentiment/platform that will align the votes of an otherwise unaligned demographic, or the characteristic(s) that will make an otherwise diffuse demographic coherent and addressable.
    2. Engineering: So far, engineered systems are necessarily highly structured, yet they must deal with unstructured inputs and use-environments. Examples include all software systems designed to be used by non-experts (i.e., nearly all consumer software) and hardware operated in unpredictable/extreme environments (e.g. physical infrastructure for space missions).
    3. Management: Leaders are expected to produce predictable results (financial targets, new product launch targets, etc) despite having to work in an environment made uncertain by unpredictable others (employees, financial markets, competitors). Examples include leaders of investment companies who discover that portfolio companies are engaging in fraud (e.g., investors in FTX), or leaders of companies in industries where there is unexpected regulatory or technological change (e.g., leaders of traditional news companies).
  2. Even when stakes are high, there are no specific strategies for addressing not-knowing. When faced with not-knowing which is not risk, we often simply “hope for the best.” Sometimes, we respond to non-risk not-knowing by overbuilding/overengineering (designing the system to be much more robust than expected conditions require) or actuarial thinking (designing for arbitrarily chosen low-probability events such as a 500-year flood; see the first sketch after this list).
  3. Temporality makes it hard to relate well to not-knowing. This is not only because futurity intensifies not-knowing. The urgency of a situation of not-knowing complicates responses to it because there is no time to wait for more information and for frames/meaning to clarify. An example of this is the difficulty of interpreting real-time information (statistics, photos, videos) from Gaza, or even establishing its veracity.
  4. [Capacity + inclination] to engage with not-knowing can become a source of selective advantage.
    1. Individuals, teams, and organisations gradually discover whether they are good at dealing with different types of not-knowing (i.e., risk and non-risk not-knowing) and whether they like dealing with it.
    2. If they are both capable and inclined, they gradually seek out work that involves more not-knowing.
    3. Examples:
      1. Individuals: Research scientists, R&D specialists, turnaround experts, reconnaissance specialists, startup addicts (founders/employees).
      2. Teams: R&D teams, landing parties, risk management teams, special projects teams.
      3. Organisations: Military special forces/special operations, startups.
  5. Organisations gradually incorporate capacity + inclination for dealing with not-knowing into their structures. When this happens quickly in the right parts of the organisation, the organisation succeeds. The organisation fails or is less successful if the incorporation is too slow or happens in the wrong parts of the organisation. An example of success might be Google identifying government relations in major markets as a key area in which to bring in highly qualified talent. An example of failure might be the US CDC failing to bring in qualified talent to run their testing development/approval department during the COVID-19 crisis.
  6. Companies at different lifecycle stages need to be good primarily at dealing with different types of not-knowing.
    1. Startups are intended to explore novel products/services by investigating product-market fit (PMF). This is ultimately a process of exploring not-knowing about what is valuable and why it should be valued (i.e., value not-knowing). Startups succeed to the extent that they are able to pivot when needed to explore the space of value not-knowing enough to identify one of potentially many viable PMFs. Startups also need to explore causal not-knowing to grow out of the startup phase once PMF has been identified.
    2. Established companies are intended to exploit product/market fit by optimising execution. This is ultimately a process of exploring not-knowing about the relationship between actions taken and outcomes achieved (i.e., causal not-knowing). However, established companies also need to continually explore value not-knowing to avoid being overtaken by startup competitors.
  7. Another lens for understanding company evolution is the point at which the primary focus of not-knowing switches from value to causation; this switch triggers changes in strategy and tactics.
  8. Changes in the environment can force companies to make this switch, so being ready to switch modes of not-knowing is crucial for success. An example is the launch and sudden popularity of LLMs with human-language interfaces, which catalysed this switch both for startups working on other human-language interface LLMs and for established companies working on other approaches to AI and AI interaction.
  9. Different types of R&D seem to map primarily to different kinds of not-knowing. Basic research maps primarily to exploring action not-knowing (figuring out what actions are possible), while applied research maps primarily to outcome and causation not-knowing (figuring out what outcomes are possible, and the connection between specific actions and outcomes).
  10. Not-knowing affects strategy-making through the strategic goals that are chosen.
    1. Goals must be appropriately concrete/specific. Specifying success outcomes in terms of concrete goals locks decision-making into the frame of the present, because concretely imaginable goals encode present values and understandings of actions, outcomes, and causation. For innovation work or futures thinking, concrete goals are often counterproductive. What is needed instead is anchored imagination of future resources/constraints/theories of value.
    2. Special caution is needed where goals are expressed as quantitative success metrics, especially where the goal is new/innovative and not yet well-understood. Metrics require a good understanding of the phenomenon being measured; without it, they will not capture the phenomenon accurately. If inaccurate metrics are used as a measure of success, they incentivise actions that produce unexpected results (including undesirable side effects).
    3. Metrics are dangerous because they too easily conceal the ambiguity/subjectivity/imprecision involved in creating them. Metrics often result from judgment calls (e.g., expressing an approximate likelihood as a point probability estimate), but once in existence they travel and propagate without reminding users that they originated as judgment calls. In other words, it is possible to use metrics in ways that acknowledge their subjectivity, but once the metrics are out there they are seductively objective-seeming (see the second sketch after this list).
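
To make the actuarial-thinking example in point 2 concrete, here is a minimal sketch of the arithmetic behind “design for the 500-year flood.” The 1-in-500 annual probability is the standard definition of the term; the 50-year design life is a hypothetical illustration.

```python
# Actuarial arithmetic behind "design for the 500-year flood".
# The 50-year design life below is a hypothetical illustration.

annual_p = 1 / 500  # a "500-year flood" has a 0.2% chance in any given year
years = 50          # assumed design life of the structure

# Probability of at least one such flood during the design life:
p_at_least_one = 1 - (1 - annual_p) ** years
print(f"{p_at_least_one:.1%}")  # ~9.5%
```

The arithmetic is exact, but it does not tell you which return period to design for: choosing to tolerate a roughly 1-in-10 chance of the design event over the structure’s life remains a judgment call, which is what makes the threshold arbitrary.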
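And to illustrate point 10.3, here is a minimal sketch of how collapsing a judgment call into a point probability estimate conceals information that matters downstream. All names and numbers are hypothetical.

```python
# How a point estimate conceals the judgment call behind it.
# All payoffs and probabilities are hypothetical illustrations.

# An analyst's honest judgment: the probability of success is
# "somewhere between 0.2 and 0.6" -- an interval, not a point.
low, high = 0.2, 0.6

# The metric that travels: the interval collapsed to its midpoint.
point_estimate = (low + high) / 2  # 0.4

payoff_if_success = 100  # payoff if the project succeeds
cost = 35                # upfront cost of the project

def expected_value(p):
    """Expected value of investing, given success probability p."""
    return p * payoff_if_success - cost

print(expected_value(point_estimate))  # +5.0  -> invest
print(expected_value(low))             # -15.0 -> don't invest
print(expected_value(high))            # +25.0 -> invest
```

The point estimate travels as one tidy, objective-seeming number and says “invest,” but the judgment it came from spans both decisions. Nothing about the number 0.4 reminds its users of that.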

Gaps in the market for not-knowing tools

  1. Specific strategies for responding to high-stakes not-knowing (e.g., in politics, engineering, management).
  2. Frameworks for individuals, teams, and organisations to more quickly and accurately identify capacity and inclination to deal with specific types of not-knowing (risk and non-risk types).
  3. Processes for organisations to
    1. More quickly identify areas where capacity + inclination for dealing with not-knowing is required, and
    2. Find and bring in people/teams with relevant capacity + inclination.
  4. Tools that help organisations become better at dealing with not-knowing, which reduce adoption hurdles by being modular and by taking advantage of the adjacent possible.

Fragmentary ideas/questions that came up which seem valuable

  1. Are intelligence and intent required for causally interpretable outcomes? For example, if a rock is disturbed by an earthquake, the outcome (it rolls down the mountain) is causally interpretable even though there is no intelligent intent. It is probably more accurate to say that intelligence and intent are required for interpretable strategy (explicit choice of action aimed at a desired outcome).
  2. When representing complex things, we tend to reduce them to representations that are complicated but legible, potentially losing a great deal of information in the process. Is there a new form of legibility that works for complex gestalts? What is the difference between [legibility of complex systems] and [legibility for complicated systems]?
  3. At the same time, systems are systems. It is the lens/frame through which we view them that leads us to interpret them as being complicated or complex. This is related to the foundational importance of mindsets in relating well to not-knowing.
  4. Gestalt understanding is impossible to explicate and hard to teach, but it can be learned; this makes it difficult to replicate/scale, and it currently needs to be transmitted from person to person. What would a machine form of gestalt understanding look like? There are threads connecting this to research on black boxes in scientific research, tacit knowledge/perception, and stopping rules in ambiguous situations such as training ML models or cooking steaks (see the sketch after this list):
    1. Distinguishing between long-term thinking and strategy.
    2. A view of strategy as recognition of constraints presented by complex environments.
    3. A theory of black boxing in the doing of science.
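
The stopping-rule thread has a concrete form in ML training. Below is a minimal sketch of a patience-based early-stopping rule; the function name, patience value, and loss numbers are hypothetical illustrations rather than a reference implementation.

```python
# A patience-based stopping rule of the kind used when training ML
# models: stop when the watched signal has not improved for `patience`
# consecutive observations. All numbers are hypothetical illustrations.

def should_stop(history, patience=3):
    """Stop if the last `patience` values show no improvement over the
    best value seen before them (lower is better)."""
    if len(history) <= patience:
        return False
    best_before = min(history[:-patience])
    return all(value >= best_before for value in history[-patience:])

# Validation losses from a hypothetical training run:
losses = [0.90, 0.71, 0.62, 0.61, 0.63, 0.62, 0.64]
print(should_stop(losses))  # True: no improvement in the last 3 steps
```

Note that the choice of patience is itself a judgment call: the rule makes stopping mechanical without making it any less arbitrary, which is exactly the ambiguity the thread points at.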