The half-life of certainty

A desk covered in annotated books and handwritten notes, beside a modern computer displaying text and a website design.

Leadership, Learning, and the Collingridge Dilemma in the Age of AI

For almost a year now, I’ve been attending gatherings hosted by AI CoLab that bring together a diverse mix of people from government, academia, industry, nonprofits, and civil society, all circling a similar question from very different directions: how do we shape the development and use of artificial intelligence in ways that remain recognisably human?

What continues to surprise me is how quickly conversations about AI stop being merely technological. Even when discussions begin with regulation, capability uplift, procurement, data governance, or automation strategy, they often drift toward something older and more difficult to define: questions of power, institutional trust, human adaptability, and the psychology of uncertainty. Increasingly, the conversations seem to orbit around the relationship between capability and wisdom, and the kinds of leadership societies require during periods of accelerated change.

It reminds me that technological disruption is never purely technological. New tools arrive inside existing human systems — political systems, economic systems, cultural systems, emotional systems — and they interact with incentives and vulnerabilities already present. They amplify some behaviours while weakening others. Often, societies only begin understanding the full implications of a technology after it has already become embedded in ordinary life.

At a recent AI CoLab session, one researcher working in governance and public-sector risk mentioned something that immediately lodged itself in my mind: the Collingridge dilemma.

David Collingridge, writing in The Social Control of Technology (1980), observed that:

When change is easy, the need for it cannot yet be seen. When the need for change becomes obvious, change has become difficult, expensive, and disruptive.
— Collingridge, 1980

The idea feels painfully familiar once it is seen clearly. In the early stages of technological change, systems remain comparatively flexible. Regulation is still possible, norms are still forming, and infrastructure has not yet hardened around the innovation itself. Yet at precisely that stage, the downstream consequences remain unclear. Risks exist mostly as abstractions, and warnings can sound speculative because the surrounding harms have not yet fully materialised.

Later, however, once a technology becomes deeply integrated into institutions, economies, habits, and identities, the consequences become easier to recognise. But by then adaptation is vastly more difficult. Industries depend on the system, governments rely upon it, citizens organise their lives around it, and the surrounding infrastructure has already formed. The period in which intervention would have been easiest was also the period in which urgency was hardest to justify.

That tension feels increasingly important in a world shaped by artificial intelligence, algorithmic systems, automation, and rapidly accelerating technological capability. But I suspect the deeper issue is not technology alone. It is preparedness.

The Half-Life of Certainty

If you have spent any meaningful time inside government, you develop a feel for institutional tempo. Policy windows open slowly. Legislation takes time. Consultation takes time. Evidence gathering takes time. Public trust takes time. Good governance is rarely instantaneous because democratic systems are designed, at least in part, to absorb complexity carefully rather than react impulsively.

But this creates an increasingly difficult tension when the surrounding environment evolves faster than institutional learning cycles.

There was a time when professional expertise could remain comparatively stable across an entire career. A person could build mastery within a largely fixed environment and expect much of that knowledge to remain useful decades later. That stability now feels increasingly fragile.

In some technical fields, the half-life of knowledge is measured not in decades or even years, but in months. Capabilities evolve continuously. Platforms shift beneath organisations before implementation cycles are complete. By the time governance frameworks mature, the systems they were designed to govern may already have changed shape.

Many of our institutions were not designed for this kind of velocity. Large parts of the public sector emerged from a world where information moved more slowly, expertise remained relatively durable, and change could be managed incrementally. Today, however, technology evolves faster than policy cycles, information spreads faster than institutions can respond, and unintended consequences often emerge before governance structures have had time to adapt.

This is where the Collingridge dilemma begins feeling less like a theory of technology and more like a theory of institutional learning. The central challenge may no longer be simply whether institutions can respond to change, but whether they can learn how to learn, and unlearn, while change is still unfolding.

Policy Windows and Preparedness

One of the more difficult realities in public policy is that preparation rarely feels politically urgent before visible crisis arrives.

Complexity and predictability
Artwork installed in the Himalayan Cedar Forest, National Arboretum, Canberra, Australia

Strategic foresight work often occupies this uncomfortable territory. Horizon scanning, scenario planning, weak-signal detection, participatory futures processes, and anticipatory governance are not really attempts to predict the future with precision. Serious practitioners understand that prediction is limited, particularly in complex systems. Rather, these approaches attempt to widen institutional awareness before decisions become constrained by momentum and embedded dependency. They are ways of thinking while flexibility still exists.

This matters because complex systems frequently generate consequences that were never fully intended by any individual actor within them.

Sociologist Robert K. Merton wrote extensively about the problem of unintended consequences: the way purposeful social action often produces outcomes that nobody initially foresaw. Not necessarily because individuals are irrational or malicious, but because human systems generate second- and third-order effects that only become visible after millions of interactions unfold across time.

Human beings adapt to systems, but systems also adapt to human beings. Markets evolve. Institutions evolve. Cultures evolve. Once large systems begin interacting with one another simultaneously, consequences emerge that could not easily have been predicted from the beginning.

Which is another way of saying that societies often discover the meaning of their inventions only after living inside them.

Industrialisation created extraordinary prosperity while simultaneously embedding carbon dependency deep within modern economies long before climate consequences became fully understood. Social media platforms initially appeared as tools for connection and democratised communication before societies began grappling seriously with algorithmic amplification, outrage economies, youth mental health concerns, and industrial-scale attention extraction.

The same pattern appears repeatedly across urban planning, pharmaceuticals, financial systems, education policy, digital infrastructure, and now increasingly, artificial intelligence. The problem is not simply technological optimism. The problem is complexity itself.

Learning Faster Than Conditions Change

A reflective workspace combining books, handwritten notes, and modern digital technology, illuminated in low light.

‘Learning together before certainty arrives’

What interests me most about this in the context of leadership is not whether AI will ultimately become “good” or “bad.” That framing feels too narrow to hold the scale of what is unfolding. The more interesting question may be what kinds of institutions, cultures, and leadership capabilities become necessary during periods where uncertainty itself accelerates.

There is a subtle but important difference between technological adoption and adaptive capability, and organisations often confuse the two. Terms such as “digital transformation,” “AI-enabled strategy,” or “modernisation agenda” are not inherently wrong, and many forms of technological adoption are both useful and necessary. But acquiring new technology is far easier than building organisations capable of learning continuously under changing conditions.

Learning cultures are difficult to build during periods of comfort. Most institutions say they value experimentation, curiosity, and innovation, yet far fewer genuinely reward the behaviours those things require: tolerating uncertainty, learning visibly, revising assumptions, showing intellectual humility, or occasionally failing. This becomes especially difficult in environments where competence has historically been associated with procedural certainty.

The tension feels particularly visible inside large public institutions, which are often required to balance competing obligations simultaneously: accountability and adaptability, stability and innovation, procedural consistency and rapid change, risk minimisation and experimentation. None of these tensions disappear simply because technology accelerates. If anything, they intensify.

Institutions under pressure often respond by tightening procedure. More reporting structures emerge. More assurance layers appear. Governance frameworks expand in an attempt to stabilise uncertainty through process. Sometimes this is necessary; public trust depends upon accountability. But complexity has a habit of humbling systems that become too rigid in their pursuit of predictability, particularly when the surrounding environment continues evolving faster than the structures attempting to contain it.

Periods of rapid change tend to reward organisations capable of learning faster than conditions change around them. Increasingly, preparedness may depend less upon prediction and more upon adaptive capacity. And institutions and individuals alike can learn these skills.

Intelligent Trial and Error

One reason the Collingridge dilemma remains useful is that Collingridge’s proposed response was what he called “intelligent trial and error”: decentralised experimentation, rapid feedback loops, manageable reversibility, and systems designed for adaptation rather than rigid certainty. There is something important in that phrase. Not perfect prediction. Not total control. Not omniscient expertise. Learning.

That feels increasingly relevant because many professional cultures still carry an older image of leadership: the competent leader as the person with answers, the authority figure whose value lies primarily in certainty and decisiveness. And there are still environments where those qualities matter profoundly.

But periods of accelerated change place unusual pressure on static expertise. When surrounding systems evolve continuously, certainty itself becomes unstable. Under those conditions, the ability to learn may become more important than the knowledge already possessed.

This does not mean expertise no longer matters. It matters deeply. But expertise without adaptability eventually hardens. Careers now involve continuous relearning rather than periodic retraining. That requires a different psychological relationship with uncertainty: one that allows assumptions to be updated publicly without experiencing revision as personal diminishment.

And that is not merely an intellectual challenge. It is emotional.

Organisations that punish uncertainty often suppress learning itself. If people fear appearing uninformed, they stop updating publicly. Once learning becomes reputationally dangerous, adaptation slows. This is one reason psychologically safe learning cultures matter so much, not because they feel progressive, but because brittle systems struggle during disruption.

Capability Without Wisdom

There is an older tension sitting beneath all of this.

Every generation inherits tools more powerful than the wisdom structures designed to govern them. That is not unique to artificial intelligence. Human history is filled with examples of capability outpacing ethical maturity, institutional adaptation, or long-range thinking. What feels distinctive now may simply be the speed at which this process unfolds.

Artificial intelligence is not the first technology to reshape society, nor will it be the last. But systems now scale globally before institutions fully understand their downstream effects. Behavioural change occurs faster. Information ecosystems mutate faster. Economic incentives compound faster. The pace itself alters governance conditions.

Preparation becomes harder because stability itself becomes less durable.

I am reminded sometimes of the old cyclical observation, repeated in various forms throughout history, that:

Good times create weak people. Weak people create hard times. Hard times create strong people. Strong people create good times.

The phrase is too blunt to fully explain social reality, but it points toward something recognisable. Periods of stability can slowly erode preparedness. Institutions begin assuming continuity rather than rupture. Expertise becomes procedural. Risk frameworks quietly inherit the assumption that tomorrow will broadly resemble yesterday. Comfort creates the illusion that present conditions are somehow permanent.

Preparation, by contrast, is difficult to sustain in the absence of visible crisis. This may be true for individuals as much as governments, and perhaps especially true for technologically successful societies.

Holding Systems Loosely

What sits beneath the Collingridge dilemma, at least for me, is ultimately not a question about technology.

It is a question about how human beings remain adaptive without becoming untethered; how institutions preserve stability without becoming brittle; and how leaders continue learning without mistaking uncertainty for weakness.

These are not problems with neat solutions.

Part of the difficulty may be that modern systems still reward confidence more readily than curiosity, stability more readily than revision, and procedural mastery more readily than intellectual flexibility. Yet the future may belong less to the most knowledgeable organisations than to the fastest-learning ones.

That idea sounds almost obvious once written down, but many systems still struggle to operationalise it because learning cultures are hardest to build before they become necessary.

Which returns again to the quiet discomfort at the centre of the Collingridge dilemma: when adaptation is easiest, the need for it remains hardest to see.

Sitting with Uncertainty

The setting sun over the Brindabella Mountains, taken from Black Mountain, Canberra ACT

Closing Reflection

I sometimes wonder whether the deepest question beneath artificial intelligence is not technological at all, but civilisational.

Can human wisdom evolve at the same pace as human capability?

The question matters because capability alone does not prepare societies for disruption. Adaptive capacity does. The ability to revise assumptions, remain intellectually flexible, and continue learning before crisis removes the luxury of learning slowly may become one of the defining leadership challenges of this era.

Not because leaders must predict everything, but because increasingly they may need to learn while moving.

If you want to know more, or to explore your own capacity to learn on the path toward insight, reach out today!
