Culture is the New Decision Advantage in an AI World
Over the past fortnight, Australia has been given a timely reminder that AI isn’t just a technology uplift; it’s a decision-making shift. And like every shift of that kind, the hard part isn’t the tool. It’s the people.
In late January, ASIC released its Key Issues Outlook 2026, noting there is “variable maturity” in how organisations manage AI governance risks – and flagging the added complexity of agentic AI, where systems can plan and act more independently.
At almost the same time, the Office of the Australian Information Commissioner (OAIC) published guidance focused on transparency in automated decision-making, reinforcing a simple truth: when people don’t understand how decisions are being made or what’s influencing them, trust erodes.
Then we saw the practical “this is no longer theoretical” example: Mastercard announced Australia’s first authenticated agentic transactions – AI agents that can complete purchases.
So yes, AI is accelerating. But it’s also exposing something leaders have always known (and sometimes avoided): most data and AI programs fail at the last metre because the decision is messy, time-bound, political, and values-driven.
That’s why more of my conversations this year have been about culture:
Culture in decision-making
Data culture (how we treat evidence, uncertainty, and challenge)
To keep it practical, I’ve landed on three non-negotiables for any insights pack, dashboard, or recommendation that’s meant to influence executive or board decisions.
Not fifteen principles. Three.
My three non-negotiables for decision-ready insights:
1. Decision usefulness: “What decision is this changing?” If the insight isn’t tied to a specific decision, it becomes interesting… and irrelevant.
How to apply it (a minimal template is sketched below):
Start with one line: “We are deciding X by date Y.”
Define the threshold that changes the call: “If churn exceeds 4% in segment A, we intervene.”
Present options and trade-offs, not just trends.
Include a confidence rating (High/Med/Low) and what would increase confidence.
Why it matters: The goal isn’t more information; it’s better decisions.
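To make that concrete, here’s a minimal sketch of what a decision-ready insight could look like as a data structure. It’s illustrative only: the field names are mine, and the figures echo the churn example above.

```python
from dataclasses import dataclass, field
from enum import Enum

class Confidence(Enum):
    HIGH = "High"
    MED = "Med"
    LOW = "Low"

@dataclass
class DecisionReadyInsight:
    decision: str           # "We are deciding X by date Y"
    decide_by: str          # the date Y
    threshold: str          # the line that changes the call
    options: list           # options and trade-offs, not just trends
    confidence: Confidence  # High/Med/Low
    confidence_raisers: list = field(default_factory=list)  # what would increase confidence

    def is_decision_ready(self) -> bool:
        # An insight earns its place only if it is tied to a decision,
        # a threshold, and real options.
        return bool(self.decision and self.threshold and self.options)

# Illustrative usage, echoing the churn example above:
insight = DecisionReadyInsight(
    decision="Fund a retention intervention for segment A, or hold",
    decide_by="end of quarter",
    threshold="If churn exceeds 4% in segment A, we intervene",
    options=["Fund the intervention now", "Hold and re-measure next month"],
    confidence=Confidence.MED,
    confidence_raisers=["One more month of churn data", "Validated segment definitions"],
)
assert insight.is_decision_ready()
```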
2. Cultural alignment: “Will people trust it, use it, and feel safe challenging it?”
Decisions are social. If insights threaten identity, incentives, or status, they get resisted – quietly, politely, and effectively.
How to apply it:
Make it normal to challenge interpretation without attacking the person: accountability without blame.
Require explicit assumptions (especially if the culture rewards certainty).
Use consistent rituals: one pack format (a sketch follows below), consistent metrics, clear cadence.
Separate facts from judgement.
“What happened?”
“What do we think it means?”
“What do we recommend?”
Why it matters: AI will scale outputs, but culture determines whether those outputs are used well.
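One way to make the ritual stick is to hard-wire the three questions into the pack format itself. A rough sketch, assuming a plain-text pack; the headings come straight from the questions above, but the rendering details are mine.

```python
# The three headings come straight from the questions above;
# the render format itself is illustrative.
PACK_SECTIONS = ("What happened?", "What do we think it means?", "What do we recommend?")

def render_pack_entry(facts, interpretation, recommendation, assumptions):
    """Render one insight the same way every time: facts first, judgement
    second, recommendation last, with assumptions made explicit."""
    bodies = (facts, interpretation, recommendation)
    lines = [f"{heading}\n  {body}" for heading, body in zip(PACK_SECTIONS, bodies)]
    lines.append("Explicit assumptions:\n  " + "; ".join(assumptions or ["None stated"]))
    return "\n\n".join(lines)

print(render_pack_entry(
    facts="Churn in segment A rose from 3.1% to 4.3% quarter on quarter.",
    interpretation="Most likely driven by the March price change, not service quality.",
    recommendation="Trigger the intervention agreed for the 4% threshold.",
    assumptions=["Segment A definition unchanged since Q1"],
))
```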
3. Data source integrity: “Is the evidence fit for purpose and traceable?”
If the provenance is unclear, every decision becomes contestable and slow.
How to apply it:
Lock “source of truth” definitions (revenue, pipeline, incidents, headcount – the usual suspects).
Check bias and coverage.
“What’s missing?”
“What changed?”
“What’s the timeframe?”
Put basic controls around it: versioning, access control, an audit trail, and a named owner per metric (a minimal registry entry is sketched below).
Avoid false precision: ranges are often more honest than decimals.
Why it matters: Without integrity, data becomes a weapon. With integrity, it becomes a shared language.
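As a sketch of what those basic controls could look like in code: a metric registry entry with a pinned definition, a named owner, a version number, and an audit trail. Every name and system here is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MetricDefinition:
    """One locked 'source of truth' entry: pinned definition, named owner,
    version number, and an audit trail of every change."""
    name: str            # e.g. "revenue", "pipeline", "incidents", "headcount"
    definition: str      # the agreed wording everyone reports against
    source_system: str   # where the numbers come from (hypothetical name below)
    owner: str           # named owner, end to end
    version: int = 1
    audit_trail: list = field(default_factory=list)

    def amend(self, new_definition: str, changed_by: str) -> None:
        # Version every change and record who made it,
        # so provenance stays traceable.
        self.version += 1
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.audit_trail.append(f"v{self.version} {stamp} {changed_by}: {new_definition}")
        self.definition = new_definition

churn = MetricDefinition(
    name="churn_segment_a",
    definition="Customers in segment A cancelling within the calendar month",
    source_system="crm_warehouse",
    owner="Head of Customer Analytics",
)
churn.amend("Cancellations net of 30-day win-backs, segment A", changed_by="data-governance")
```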
The standard I’m adopting for exec- and board-facing insights is:
Decision usefulness first.
Cultural alignment second.
Data source integrity third – and all three are required, every time.
It’s a lightweight standard, but it closes the gap where most programs stall: the moment a human being must make the call.
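If it helps to see just how lightweight, here’s the whole standard as a single gate; the boolean inputs are an assumption about whoever assembles the pack, but the logic is the point: all three, every time.

```python
def ready_for_the_room(decision_useful: bool,
                       culturally_aligned: bool,
                       data_integrity: bool) -> bool:
    """All three are required, every time; the ordering reflects
    priority of attention, not optionality."""
    return decision_useful and culturally_aligned and data_integrity

# An insight that is useful and well-sourced but won't be trusted still fails:
assert not ready_for_the_room(True, False, True)
```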
And if you’re looking for a single indicator of how quickly this is becoming “business as usual”: Stanford HAI reports 78% of organisations used AI in 2024, up from 55% the year before.
The CRO questions worth asking this week
What decision will this insight change this week, and what threshold triggers action?
If it’s wrong, what’s the most likely reason (bias, coverage, timing, definition)?
Who owns the metric end-to-end, and what’s our minimum quality standard?
What would a sceptic say, and have we addressed it in the pack?
What upside are we missing if we only look for problems – where’s the growth lever?
Final Word
AI will keep accelerating. Data will keep multiplying. But the organisations that outperform won’t be the ones with the most dashboards. They’ll be the ones with a culture that can trust evidence, challenge safely, and make trade-offs explicit.
That’s decision advantage, and it’s becoming the real differentiator.
Written by Simon Levy, RMIA CEO