Truly innovative collaboration

The AI CoLab
1 Moore St, Canberra, ACT

What I saw at the AI CoLab

Most people encountering the term AI CoLab for the first time would be forgiven for assuming it is primarily about technology.

The APS AI CoLab is a cross-sector community bringing together public servants, researchers, policy thinkers, technologists, and practitioners to explore how artificial intelligence intersects with public purpose, accountability, and participation. It is less a program and more a practice space — one designed to help people think together about emerging technology in the context of real public work.

I attended the CoLab’s year-in-review session last week. I went partly out of a long-standing interest in AI, but also because I know one of the founders, Paul Hubbard, and deeply respect his work. Across multiple roles within the APS, Paul has repeatedly demonstrated how thoughtful innovation can lift capability and outcomes across the public service. The AI CoLab is one expression of that broader pattern of work.

What was immediately evident was the diversity of the room. The CoLab has brought together an unusually eclectic mix of people: policy practitioners, librarians and information specialists, copyright and Creative Commons experts, economists, national security thinkers, technologists, facilitators, and public servants working quietly at the edges of innovation.

On the surface, the focus has been artificial intelligence. But what became increasingly visible to me, as the group talked and presented, was something else entirely.

What the AI CoLab really shows us is not how fast AI is advancing, but how human capability develops when people are given permission to think, learn, and work well together under conditions of uncertainty.

This matters, because uncertainty is now the water the public sector swims in.

A serious room… and why that matters

The review session drew on work that had unfolded across the year: conversations spanning librarianship and Creative Commons, national security and AI futures, policy maturity models, epistemology and large language models, infrastructure, equity, and public accountability.

On paper, this could read like a catalogue of topics. But what mattered was why these conversations were being held together.

The unifying thread was not enthusiasm for AI itself, but a shared concern with agency, trust, and influence: who gets a voice, how decisions are shaped, and how public institutions remain accountable while operating in increasingly complex environments.

This aligns closely with long-standing public-administration research showing that legitimacy and effectiveness in complex systems depend less on technical optimisation and more on participation, transparency, and shared sense-making (e.g. Ansell & Gash; Head).

AI, in this context, was being treated as a means (a potential amplifier of insight, participation, and collective understanding) rather than an end in itself.

That orientation matters. Tool-first approaches tend to reproduce existing power structures and blind spots. Purpose-first approaches, by contrast, create the conditions for more adaptive and ethical use of technology. This seriousness did not need to be declared; it was evident in the quality of the questions being asked and in the willingness of people from very different parts of the system to sit with complexity together.

Learning by doing… together

One of the features of the CoLab that particularly resonated with me was its use of play sessions.

These sessions invite participants to work directly with AI tools, share prompts, compare approaches, and learn from one another in real time. People arrive with different levels of confidence and technical skill, and leave having stretched their capability through hands-on experience and mutual support.

This reflects a well-established finding from learning science: capability does not develop through information transfer alone. It develops through experience, reflection, and social learning (Kolb; Lave & Wenger).

In many organisations, experimentation happens privately, knowledge remains siloed, and learning stays hidden. The CoLab counters this by creating shared experiences where learning is visible, collective, and discussable.

The result is not only individual skill-building, but relational learning: people discovering how to think together. Research on communities of practice consistently shows that this kind of shared work is how tacit knowledge moves and how professional judgment deepens over time (Wenger).

The real innovation I saw

Although the session I attended was a review, it was clear (from the stories shared and from the ease of interaction in the room) that this community had been shaped by repeated collaboration over time.

What struck me was the absence of hierarchy in how people engaged. Level and title appeared to matter far less than the quality of contribution. Authority felt distributed rather than positional.

This kind of interaction does not emerge by accident. Studies of high-performing teams show that learning and adaptation depend on psychological safety: environments where people feel able to speak, question, and admit uncertainty without fear of status loss (Edmondson).

Within that culture, the conversation consistently returned to human questions:

  • What problem are we actually trying to solve?

  • Who is not currently represented here?

  • How will we know if this helps rather than harms?

  • Where are the limits of the technology — and what should not be automated?

These questions did not feel performative. They appeared to be the genuine drivers of the work.

In that sense, the most important thing the AI CoLab is innovating is not artificial intelligence.

It is how people relate, reason, and collaborate in the presence of uncertainty.

This is the signature of a functioning community of practice: people working on shared challenges, exchanging partial insights, returning with what they have learned, and building understanding over time. Research consistently shows that trust, adaptability, and learning emerge from shared work, not from formal structure alone (Wenger; Brown & Duguid).

Why this kind of collaboration works

(Pictured: Lize van der Walt, Zak Kazakoff, and Paul Hubbard)

Several features help explain why the CoLab feels effective, and why it aligns so closely with what research tells us about collaboration under complexity.

It is human-centred. The work starts with people and public purpose, not technology: a principle echoed in design, systems, and public-value scholarship.

It is inclusive. Diversity of perspective is treated as an asset. Research on collective intelligence shows that heterogeneous groups outperform homogeneous ones on complex problem-solving when conditions support good interaction and mutual respect (Page).

It is values-driven. Questions of accountability, equity, and impact are held alongside innovation, not deferred until later: a hallmark of adaptive governance in complex systems (Head).

It is experiential. Capability is built through doing, reflecting, and learning together, rather than through abstract discussion alone.

These are not just structural features. The group’s effectiveness is reinforced through:

  • Diversity of perspective, which expands the solution space.

  • Repeated interaction, which builds trust and shared language over time.

  • Shared experience, which creates what sociologist Mark Granovetter called “weak ties” — connections that dramatically increase information flow and system responsiveness.

  • Low ego, high curiosity, enabled by psychological safety.

  • Knowledge sharing, where insights are treated as communal assets rather than personal capital.

These outcomes are well documented in research on innovation networks and collaborative governance — and they are the result of intentional choices about how the space is convened and held.

From reaction to responsiveness

A recurring theme in the CoLab’s work is the possibility of moving public systems from reactive postures toward more anticipatory and responsive ones.

Examples shared during the review included ways AI can help visualise patterns in policy submissions, not only highlighting dominant positions, but also surfacing minority and fringe perspectives that may carry important early signals.
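To make that concrete, here is a minimal, purely illustrative sketch of the idea, not the CoLab's actual tooling: cluster free-text submissions, then flag unusually small clusters as candidate minority perspectives for a closer human read. The submissions, cluster count, and labels below are all hypothetical.

```python
# Illustrative sketch only: cluster policy submissions and surface small
# clusters as possible minority or fringe perspectives. Hypothetical data;
# not the CoLab's actual tooling.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

submissions = [
    "Support the proposal; it reduces compliance costs for small business.",
    "The proposal lowers red tape and helps small operators compete.",
    "Strongly opposed: the draft ignores accessibility in remote communities.",
    "Agree with the intent, but the reporting burden on charities is too high.",
    "Reducing compliance costs is welcome, and the timelines look realistic.",
]

# Represent each submission as a TF-IDF vector, then group similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(submissions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Small clusters are candidate "weak signals": positions held by few voices.
sizes = Counter(labels)
for cluster, size in sorted(sizes.items(), key=lambda kv: kv[1]):
    example = next(s for s, l in zip(submissions, labels) if l == cluster)
    tag = "possible minority view" if size == 1 else "common theme"
    print(f"cluster {cluster} ({size} submissions, {tag}): {example[:60]}")
```

In practice one would use richer text representations than TF-IDF, but the shape of the idea stays the same: group the submissions, measure the groups, and surface the small ones for human attention rather than letting the dominant positions drown them out.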

Used carefully, this kind of capability can broaden civic participation, lower barriers created by jargon, and help people see where and how their voice might matter. Research on sense-making and early-warning systems suggests that responsiveness depends less on prediction accuracy than on the ability to notice weak signals and hold multiple perspectives at once.

Again, the technology enables this, but only if the human and relational conditions are right.

The deeper lesson

It is tempting, in moments of technological acceleration, to focus on tools first.

The AI CoLab quietly insists on a different order:

Start with people.
Start with purpose.
Start with how we learn together.

Without attention to how humans collaborate, reflect, and build shared understanding, AI will simply accelerate existing patterns — including silos, inequities, and blind spots.

But when technology is held inside thoughtful, values-driven, human-centred collaboration, something else becomes possible.

Not just better use of AI — but better public work.

And that, ultimately, is what the CoLab shows us.

References (indicative)

  • Ansell, C., & Gash, A. (2008). Collaborative governance in theory and practice.

  • Brown, J. S., & Duguid, P. (1991). Organizational learning and communities of practice.

  • Edmondson, A. (2018). The Fearless Organization.

  • Granovetter, M. (1973). The strength of weak ties.

  • Head, B. (2019). Complexity and public policy.

  • Kolb, D. (1984). Experiential Learning.

  • Lave, J., & Wenger, E. (1991). Situated Learning.

  • Page, S. (2007). The Difference.

  • Wenger, E. (1998). Communities of Practice.
