Pizza, AI, and the New Risk Work of 2026

I spent the break away with the family, a mix of beaches, eating out, and catching up with friends. Somewhere between swims, sandcastles and sunrise walks, my kids taught me the difference between takeaway and eating out. Apparently, ordering pizza via Uber Eats is a completely different food group from pizza at a restaurant. Who knew?

Leading into any extended break, I normally plan my reading. As 2025 drew to a close I was so tired, my well so dry and my capacity so shot, that I did not have the energy. When it came to curating my holiday reading list, I went back and filled in the gaps on a few classics I had missed over the years: How to Win Friends and Influence People and Dare to Lead. I kept listening to The Diary of a CEO with Steven Bartlett, and on a friend’s recommendation added The Contrarian podcast with Adir Shiffman into the mix.

And yet the piece that really grabbed my attention was not a book. It was an article shared by one of our long-standing RMIA award winners, “A guide for compassionate managers amid automation and AI-led layoffs.” It is written by James Kavanagh for managers, but the message for risk and compliance is sharp. This is not just a technology uplift. It is a structural shift in what organisations still need humans to do.

The quiet truth: the work is moving to the seams

The article makes a practical case for why AI governance, meaning oversight, controls, accountability, and the ability to demonstrate compliance, is becoming a genuine career destination. It is hard to automate away. It requires judgement, cross-functional influence, and the ability to translate between technical reality and organisational obligations.

What stood out to me is how operational the author is about what this actually means. AI oversight is not a philosophical debate. It is the hard, unglamorous work of:

  • Knowing what AI you are using and where it came from

  • Setting review points that do not get bypassed

  • Documenting decisions and trade-offs

  • Making sure monitoring exists before the incident does

If you work in risk or compliance, you will recognise the pattern immediately. The seams are where outcomes are won or lost. The policies look fine. The controls look fine. Then a team ships something fast, procurement did not flag it, a vendor change slips through, and suddenly the organisation is explaining itself after the fact.

 

PwC adds the bigger signal for 2026

PwC’s CEO Survey released today adds a critical overlay: leaders are impatient for results, but many are not yet seeing returns from AI. That gap is now a business risk in its own right.

In Australia, PwC reports that only 28 percent of CEOs believe current AI investment levels are sufficient to deliver their goals, and only 14 percent report revenue gains from AI, compared to 30 percent globally.

Globally, PwC’s analysis also shows that more than half of CEOs say they have seen neither higher revenue nor lower costs from AI so far, and only a small proportion report both revenue uplift and cost reduction.

Put simply, 2026 is when experimentation has to become repeatable delivery. Delivery only happens when AI is governed like any other material business capability, with clear ownership, decision rights, assurance, and monitoring.

 

What this tells us for 2026

Three implications are hard to ignore.

  1. AI will compress some roles and reshape career pathways.
    Routine work will be redesigned. That will show up inside our own teams through automation, and across the business as AI changes how decisions and work get done.

  2. The execution gap is the risk.
    Australian CEOs are signalling ambition, but they are also signalling that investment and capability are not yet matching the goals. That is where risk leaders can add enormous value, by turning intent into an operating model that holds.

  3. Trust, accountability, and security will decide who scales.
    When organisations cannot evidence how AI decisions were made, what data was used, and what controls were applied, scale slows. Worse, scale happens without control.

 

What you can do as a risk professional starting back in 2026

Here is a practical back-to-work agenda you can act on immediately.

  • Create an AI inventory, even if it is imperfect.

    • Start with a simple attestation: what tools or models are being used, for what purpose, with what data, and who owns the risk. The first version will not be perfect. That is fine. You are building visibility.

  • Stand up minimum controls for AI use.

    • One page. Clear thresholds. When do you need approval? What is prohibited? What requires additional review? If it is too heavy, teams route around it. The article’s warning here is simple and accurate.

  • Build a lightweight stage-gate for AI change.

    • Pilot, then limited use, then scaled use. Each step has simple evidence requirements such as testing, human oversight, a monitoring plan, and an incident pathway.

  • Tie AI to existing risk rhythms.

    • Do not create a parallel universe. Bring it into procurement, change management, incidents, and assurance. AI oversight works when it becomes how we do things, not another committee.

  • Protect your organisation’s capability pipeline.

    • If junior work is being redesigned, do not let development collapse. Rebuild junior tasks as AI-assisted and governed work with coaching and review, so people still learn judgement.

  • Be the translator and the connector.

    • This is where risk drives everything. Risk professionals are uniquely positioned to connect strategy, obligations, technology, and operational reality, and to make that connection executable.

 

Why this makes me proud to lead RMIA

One of the reasons I am proud to lead the RMIA is that this is exactly the kind of moment our profession was built for. When the operating environment shifts, our job is to help organisations respond with clarity, discipline, and confidence.

It is also why we have been investing in education and training for the profession. Not as a marketing exercise, but because the work is moving quickly and our members deserve learning that keeps pace with what is happening in real organisations. Practical capability, current content, and a pathway that supports people from early career through to senior leadership.

 

A closing thought

If 2025 was the year of fascination, 2026 is the year of discipline.

The organisations that win will not be the ones with the most pilots. They will be the ones who can scale AI safely, lawfully, and with confidence, and prove it when asked.

For risk professionals, that is not a threat. It is a leadership invitation.

 

Written by Simon Levy, RMIA CEO 
