Enthusiasm for AI is everywhere right now. Every organisation I walk into has a pilot running, a committee formed, or a strategy document in progress. The language is consistent: transformation, acceleration, competitive advantage. The reality is more nuanced.
After years helping some of Australia's largest organisations achieve AI design wins at Microsoft — and now working as an independent consultant — I've developed a clear and sometimes uncomfortable view: most organisations are not ready for Agentic AI. Not because they lack ambition. Because they haven't honestly assessed where they stand.
That observation is what led me to build the AI Readiness & Maturity Agent. And it's why I chose the MAIVA framework as its foundation.
"I spent three weeks getting the instructions right. The agent took five minutes to build. That ratio tells you almost everything about where the real work in Agentic AI actually lives."
What is MAIVA — and why does it matter?
MAIVA stands for Microsoft AI Value Assessment. It's a structured framework for evaluating an organisation's readiness to adopt and scale AI — not just technically, but holistically. It was developed through Microsoft's work with thousands of enterprise customers globally, and it reflects hard-won patterns about what separates organisations that succeed with AI from those that stall.
At its core, MAIVA recognises something that pure technology assessments miss: AI readiness is not just a technical question. An organisation can have excellent Azure infrastructure and still fail at AI adoption because their governance is immature, their people are unprepared, or their strategy is disconnected from real business outcomes.
The framework assesses readiness across five interconnected pillars. Understanding each one — and why it matters — is essential to using it effectively.
The MAIVA framework is not a checklist. It's a diagnostic tool that surfaces the real constraints on AI adoption — and that's exactly what makes it powerful when applied honestly.
The five pillars — and why each one matters
Pillar 1: Strategic alignment
This pillar asks a deceptively simple question: why are you adopting AI, and what business outcome are you trying to achieve? It sounds obvious. In practice, I regularly encounter organisations where the AI strategy exists in isolation from the business strategy — where the technology team is excited about agents while the business units are asking what problem they're solving.
Without strategic alignment, AI investments drift toward interesting rather than impactful. They generate demonstrations and pilots that never reach production. This pillar forces the honest conversation about where AI genuinely creates value — and where it doesn't.
Pillar 2: Data and platform readiness
Agentic AI is only as good as the data it can access and reason over. This pillar assesses whether an organisation's data is structured, accessible, governed, and trustworthy enough to support AI workloads. It also looks at platform readiness — whether the technical infrastructure (cloud, identity, security) is configured to support AI at scale.
This is often where the most uncomfortable truths emerge. Many organisations discover that their data is fragmented across legacy systems, that their SharePoint is ungoverned, or that their identity and access controls are not mature enough to allow agents to operate safely. These are not insurmountable problems — but they are real constraints that need to be addressed before agents can be trusted with consequential work.
Pillar 3: People, skills, and capability
Technology is only transformative when people can use it confidently. This pillar assesses whether the workforce has the skills, confidence, and mindset to adopt AI — and whether the organisation has the internal capability to build, operate, and continuously improve AI solutions.
What I consistently find is that this pillar is underestimated. Organisations focus heavily on the technology and underinvest in the humans. They deploy Copilot without training. They build agents without change management. They expect transformation to happen organically. It doesn't. Capability uplift is not a one-time event — it's a sustained programme.
Pillar 4: Governance and responsible AI
As AI moves from chat-based tools to autonomous agents that take actions, make decisions, and interact with systems, governance becomes non-negotiable. This pillar assesses whether an organisation has the policies, controls, and oversight mechanisms to deploy AI responsibly.
This is particularly critical in financial services, government, and regulated industries — the sectors I know best. An agent that sends emails, updates records, or triggers workflows needs guardrails. It needs human oversight at the right escalation points. It needs auditability. It needs someone accountable for its behaviour. Organisations that skip this pillar don't just create risk — they create liability.
Pillar 5: Change management and adoption
The final pillar is often the most undervalued — and consistently the one that determines whether AI investments deliver lasting value or quietly fade into shelfware. This pillar assesses whether the organisation has a genuine change management capability, whether leadership is visibly championing AI adoption, and whether the culture is ready to embrace new ways of working.
I've seen technically excellent implementations fail because adoption was treated as a training day rather than a transformation programme. I've seen Copilot deployments with 80% of licences unused six months after rollout. Technology does not change behaviour by itself. Change management does.
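One way to picture the five pillars together is as independent dimensions of a single readiness profile, where the weakest pillar is the binding constraint on safe adoption. A minimal sketch — the pillar names here are my paraphrase of this article's descriptions, and the scores are purely illustrative:

```python
# Illustrative sketch only: pillar names are paraphrased from the article
# and the 1-5 maturity scores are invented for the example.
READINESS_PROFILE = {
    "strategic_alignment": 4,
    "data_and_platform": 2,
    "people_and_skills": 3,
    "governance": 2,
    "change_management": 1,
}

def binding_constraint(profile):
    """Readiness is not binary: the pillar with the lowest score
    is the one that most limits safe AI adoption."""
    return min(profile.items(), key=lambda item: item[1])

pillar, score = binding_constraint(READINESS_PROFILE)
print(f"Weakest pillar: {pillar} (score {score}/5)")
```

The point of the shape, not the numbers: an organisation can score well on strategy and people while a single weak pillar — here, change management — still determines the outcome.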
The breakthrough — and why external signals matter
When I decided to build an AI Readiness & Maturity Agent, I knew the MAIVA framework would be the foundation. What I didn't want was a tool that simply recorded self-assessments. Self-assessment is limited by what people know, what they're willing to admit, and how honestly they can see their own organisation.
The breakthrough in my design was the validation layer. Rather than relying on self-assessment alone, the agent cross-checks what organisations say against what they're actually doing — using external signals like hiring behaviour, public announcements, job board activity, and newsroom content to validate or challenge the responses it receives.
If an organisation says they have strong AI governance but their job board shows no governance or risk roles, that discrepancy matters. If they claim advanced data foundations but their public-facing technology announcements suggest otherwise, the agent surfaces that tension — not to judge, but to prompt reflection and validation.
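The cross-check described above can be reduced to a simple comparison: where a self-assessed score is well above what external evidence supports, surface the gap for discussion. A hypothetical sketch — the pillar names, signal sources, and threshold are illustrative assumptions, not the agent's actual implementation:

```python
# Hypothetical sketch of the validation layer: compare self-assessed
# maturity (1-5) against scores inferred from external signals such as
# job-board activity and public announcements. All names and the
# threshold value are assumptions for illustration.

def find_discrepancies(claimed, observed, threshold=2):
    """Return pillars where the claimed score exceeds the
    evidence-based score by more than `threshold`."""
    flags = []
    for pillar, claim in claimed.items():
        evidence = observed.get(pillar)
        if evidence is not None and claim - evidence > threshold:
            flags.append((pillar, claim, evidence))
    return flags

claimed = {"governance": 5, "data_and_platform": 4, "people_and_skills": 3}
observed = {"governance": 1, "data_and_platform": 4}  # no evidence on people

for pillar, claim, evidence in find_discrepancies(claimed, observed):
    print(f"{pillar}: claimed {claim}, external signals suggest {evidence}")
```

Note the design choice the article describes: a flagged gap is a prompt for reflection, not a verdict — which is why the sketch returns the evidence alongside the claim rather than overwriting it.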
Where things don't align, the model doesn't judge — it prompts reflection. That's the design principle I'm most proud of.
The result is an assessment tool that is honest in a way that pure self-assessment tools rarely are. It creates a structured, evidence-based conversation about where an organisation actually stands — rather than where it hopes it stands.
The instruction architecture — where the real work lives
I've said publicly that the agent took five minutes to build and three weeks to get right. That's not a complaint — it's an observation about where value actually lives in Agentic AI work.
The five minutes were the technical deployment. The three weeks were the instruction architecture — the careful, iterative work of defining how the agent reasons, how it structures its questions, how it handles ambiguous responses, how it validates against external signals, and how it synthesises everything into a report that is genuinely useful for its audience.
This is the part of agent design that is most underestimated. People see the output — a clean report, a structured assessment, a clear set of recommendations — and assume the hard work was the technical build. It wasn't. The hard work was the thinking. The methodology. The framework alignment. The validation logic. The instruction design.
That insight has shaped how I approach every agent engagement with clients. The technology is the easy part. The architecture — intellectual, methodological, and instructional — is where the value lives.
What I believe about AI readiness — and what it means for organisations
After everything I've seen — at Microsoft working on some of Australia's most significant AI programmes, and now as a consultant working across industries — I've arrived at a clear set of beliefs:
- Enthusiasm is not a strategy. Every organisation I work with is enthusiastic about AI. The ones that succeed are the ones that pair enthusiasm with honest assessment and structured execution.
- Readiness is not binary. No organisation is simply ready or not ready for AI; organisations sit at different stages across different dimensions. MAIVA captures that nuance.
- The biggest barrier to AI adoption is not technology. It's the gap between what organisations say they're doing and what they're actually doing — in their data practices, their governance, their people capability, and their change management.
- Frameworks are not constraints — they're accelerants. The organisations I've seen move fastest with AI are the ones that were most honest about where they stood before they started. A good framework surfaces those truths early, when they're still fixable.
- Agents need guardrails, not just ambitions. The move to agentic AI is real and significant. But autonomy without governance is not transformation — it's risk. Every agent I design has clear boundaries, escalation paths, and human oversight built in.
A final thought
Satya Nadella once said: "The illusion of knowledge is the enemy of growth. Be a learn-it-all, not a know-it-all." I return to that line often — both as a reminder to keep learning in a field that moves as fast as AI, and as a framing for the work I do with clients.
The most dangerous place to be in an AI transformation is confidently wrong about your readiness. The most valuable thing an honest assessment can do is replace that dangerous confidence with accurate self-knowledge — and a clear, prioritised path forward.
That's what the AI Readiness & Maturity Agent is designed to do. And it's why I built it.
Amal Al-Zahab is an Agentic AI consultant based in Sydney. She has 30+ years in enterprise technology, with five years at Microsoft leading strategic AI and cloud transformation programmes across Australia. She is the winner of the Best Prompt Award 2026, the Microsoft FSI Blue Jacket Award 2025, and multiple Microsoft Customer Hero and Impact awards.