Now-ish

Evening in the Luberon; 11 July, 2025.

The logistics of delivering from Singapore on consulting obligations entered into while based in France are complicated and unpleasant.

So … I traveled from Singapore’s heat and humidity to be back in France for much of July, to more easily alternate between the different heats and humidities of Marseille (to fill holes in the floor and make use of the Mediterranean), Ljubljana (to teach for the European Institute of Innovation and Technology on strategy for uncertainty, and on meaning-making and AI), and soon Paris (for a roundtable on art thinking as a way to approach uncertain situations).

Catalans on Bastille Day; 14 July, 2025.

Storžič from Kranj; 18 July, 2025.

But even continental travel eats up whole days like delicious snack crackers, and for various unfortunate family reasons that is no longer viable. If you’re in Southeast Asia and would like on-demand support for your team in building better experiments for new products, workflows, or strategy, you should definitely get in touch. Other, less-legible consulting offerings are available too, of course; for the amorphous kind, we should start with a chat.

These past six weeks, the writing I’ve done has mostly reflected the development work I’m doing for two projects that recently got funded: the public sector strategy course I’ve talked about before, and a self-education tool that uses LLMs to create a Socratic mirror for the early stages of projects demanding critical thinking (the last two items on the list below).

  1. The AI expertise conundrum: Current LLMs can’t truly innovate or create new knowledge on their own, but they can help humans do that innovation work more quickly. LLMs work best as eager research assistants: good at mapping known landscapes, bad at deciding what matters. So, paradoxically, they’re most useful to people with enough domain expertise to ask good questions and spot flaws — but they leave novices vulnerable to plausible but biased or simply incorrect outputs. If your organisation is deploying AI as a creativity engine or innovation driver, maybe reconsider. The smarter approach: Design AI use around its real affordances (information synthesis, not autonomous creation) and build AI workflows that keep human meaning-making at the centre. (13/7/2025)
  2. Optimisation fallacy: A futurist friend recently made an expensive, early exit from the Middle East during a flare-up in the Israel-Iran conflict … then felt embarrassed when the situation de-escalated theatrically. That embarrassment stems from misframing the decision as an optimisation problem — which assumes that precise estimation of risks, probabilities, and timings is possible. It wasn’t. The situation wasn’t only risky, it was also uncertain — marked by capricious actors uninterested in playing by well-understood rules. In uncertain situations, optimisation fails and other frames for decision-making make much more sense. The real skill here is correctly diagnosing the type of not-knowing you’re facing, then choosing a reasoned and appropriate decision frame instead of defaulting to one only suited to risk. (5/7/2025)
  3. Business is a meaning-making act: Anthropic’s experiment with using Claude as an autonomous shopkeeper (“Claudius”) failed — not just because the AI was gullible, but because running even a simple business involves inherently human, meaning-making decisions. Doing business isn’t just executing tasks like pricing or inventory. It’s deciding what matters, what to trade off, and what success looks like. These are subjective choices without correct answers. Until AI systems can make meaning, they shouldn’t be tasked with running businesses on their own. The real question isn’t whether an AI is gullible, but whether the work requires meaning-making. If it does, that work must remain human. (5/7/2025)
  4. Building AI tools for better meaning-making: Instead of building tools designed to generate outputs indistinguishable from human outputs, we should be building AI tools that help users learn to do meaning-making: the work of making inherently subjective decisions about the relative value of things. (30/6/2025)
  5. The future of education is meaning-making: AI tools are increasingly accessible, cheap, and seem potentially able to produce any output a human can produce — they’ll certainly reconfigure what work looks like. I agree with this read on AI, except for one important difference: Humans can and must do what I call “meaning-making” because AI can’t do it yet. Meaning-making consists of making subjective judgments about the relative value of things. Education at all levels, but especially higher education, has largely abandoned teaching students how to make and justify subjective value judgments. To remain relevant, education must reorient around helping students learn what meaning-making is, and how to do it well. (21/6/2025)

Back to the motherland next week.

Plage de Bonne Brise; 12 July, 2025.

Updated 20 July, 2025