Your team is using AI. You know this because you can see the ChatGPT tabs open in meetings, the Copilot suggestions appearing in documents, the colleagues who’ve quietly started running their weekly reports through one model or another.
And yet, when you ask whether AI is making a measurable difference to the business, the honest answer is usually: not really.
This is the gap that almost nobody talks about. Not because it’s secret, but because it’s uncomfortable. Companies have invested in licences, run lunch-and-learns, encouraged experimentation. The individual tool adoption is real. The strategic impact isn’t.
The paradox nobody planned for
According to PwC’s 2026 AI Business Predictions, 89% of SMEs now use AI tools. MIT Sloan’s research on AI trends for 2026 puts mature deployments (AI that delivers consistent, measurable business value) at around 1%.
That’s not a typo. 89% adoption, 1% maturity.
The gap between those two numbers is where most businesses are stuck. They’ve got AI fluency at the individual level. They haven’t got it at the organisational level. And those are completely different problems.
Individual fluency means someone can use a tool to draft an email faster, summarise a document, or generate a first cut of something. Useful. Fine. Probably worth the licence cost.
Organisational fluency means the team has agreed on which tools to use for what, where they’re appropriate and where they’re not, who owns the outputs, and how AI-generated work fits into existing quality and accountability processes.
The first is a skill. The second is a strategy.
[Callout — The AI Fluency Gap: 89% of SMEs now use AI tools; 1% have deployments delivering measurable value]
Three things keeping you at 1%
There are three separate barriers that typically operate at once. Treating them as a single problem is why most AI rollouts stall.
Skills gaps that aren’t what you think
The assumption is that the skills gap is about learning to use tools. It isn’t, mostly. Most people can figure out a ChatGPT interface in an afternoon. The actual gap is between knowing how to prompt a model and knowing how to design a workflow that incorporates AI systematically.
One person using AI to draft their section of a report is individual fluency. A team agreeing on where AI sits in the report-writing process, who reviews AI outputs, what the quality bar is, and how the workflow is documented: that’s organisational fluency. Those are not the same skill, and the first doesn’t automatically lead to the second.
The other dimension of the skills gap is leadership. Senior teams are often the least fluent, for an obvious reason: they didn’t need these tools during the years they were building their expertise. Many use AI quietly, privately, and without confidence. It’s hard to lead an AI strategy you don’t fully understand yourself.
Governance gaps nobody wants to raise
Every organisation that’s been through an AI adoption conversation has had the moment where someone raises data security. Who exactly can see what goes into these tools? Does the model train on our content? What are we allowed to share externally?
These are legitimate questions. The problem is that without a governance framework, they stop the conversation entirely. The security-conscious voice in the room raises the concern, nobody has a clear answer, and the initiative gets parked while someone goes away to “look into it”.
Months pass. Nothing happens.
The irony is that most SaaS AI tools used for business purposes have documented privacy policies, enterprise tiers with stricter data handling, and clear use-case boundaries. The information to answer those questions exists. What doesn’t exist, in most businesses, is the internal process to review it and make a decision.
Governance gaps aren’t usually about risk. They’re about the absence of a decision-making framework.
Cultural resistance that doesn’t look like resistance
The third barrier is the one most leaders underestimate. It doesn’t present as resistance. It presents as polite interest, some experimentation, a few early adopters, and then quiet drift back to the old way of working.
What’s actually happening: people are uncertain about what competent AI use looks like in their role. There’s no shared standard, no clear expectation, no way to demonstrate that you’re using it well versus using it badly. In the absence of clarity, most people default to the behaviour they know is safe.
Add to that the fear, rarely articulated but very present, that embracing AI too visibly might undermine the appearance of expertise that took years to build. A senior analyst admitting they use AI for their first-draft work isn’t comfortable if nobody else on the team is being transparent about it.
Cultural resistance to AI is rarely about rejection. It’s about confusion, self-consciousness, and the absence of a shared framework for what good looks like.
Why this is a leverage problem, not a technology problem
The real question isn’t which AI tools your team should use. It’s how much of your team’s expertise is captured in systems, processes, and tools, versus locked in individuals’ heads and manual habits.
A business with weak leverage is a business where the same work gets done from scratch every time, where expertise doesn’t compound, where growth requires proportional headcount increases.
AI fluency, done well, is one of the most significant leverage opportunities available to SME leadership teams right now. It’s not about replacing roles. It’s about getting the same quality output from the same people, faster, with more consistency, and with capacity freed up for the work that genuinely requires human judgement.
But leverage amplifies what’s already there. If the underlying skills, governance, and culture aren’t in place, deploying AI at scale just makes the existing confusion bigger and faster.
What actually moves the needle
Three things, in roughly this order.
First, leadership alignment. The executive team needs a shared position on AI: what it’s for, what it isn’t for, who owns the strategy, and what success looks like at six months. This doesn’t need to be complicated. A one-page decision document agreed by the leadership team is more useful than a 40-slide strategy deck that nobody reads.
Second, a governance framework that enables rather than blocks. This means documented policies on which tools are approved for which use cases, what data classifications can go into external AI systems, and who has authority to expand or restrict that list. The goal isn’t to prevent AI use. It’s to give people clear permission to use it appropriately.
Third, structured team upskilling focused on workflow integration, not tool tutorials. The question to answer isn’t “how does this tool work?” It’s “where does AI fit into how we already work, and what changes when it does?”
The honest question to ask this week
Forget the tools for a moment. Ask your team this: if I asked you to document how we use AI across the business, what the rules are, and what we’re measuring, could you?
Most teams can’t. That’s not a criticism. It’s a diagnosis.
The gap between 89% tool adoption and 1% deployment maturity is a strategy gap, not a technology gap. Closing it doesn’t require more tools. It requires the same structured approach that any other organisational change initiative needs.
If you want to see where your business sits on the Leverage dimension, the C.L.I.M.B. Quick Check takes ten minutes and gives you a concrete score across all five dimensions.
If you’re ready to go deeper, book a strategy session and we’ll start with an honest diagnostic of where your team actually is.