Issue 40: The Trust Gap: Why Boards See AI - But Don't Yet Rely on It
AI is everywhere in the enterprise. Governance is not. Here's what's actually behind the hesitation - and what closes it.
🎧 For those who prefer to listen, Chris and I recorded a companion episode on The Trust Gap: Why Boards See AI - But Don’t Yet Rely on It.
At a recent governance panel I attended, the moderator asked this question: “We’ve got about ten minutes left. Maybe we should talk about AI - how can boards use it to make themselves more efficient?”
Within minutes, the conversation had completely shifted. Data security. Privilege waivers. What not to upload. Approved tools. Don’t record board meetings. Definitely don’t let AI draft your minutes.
The question was about efficiency. The answers were all about risk.
And then one panelist said something that has stayed with me: “I wouldn’t trust AI in the boardroom.” The room didn’t push back. It didn’t need to. Because that hesitation is showing up everywhere right now - in boardrooms, in executive teams, in strategy sessions where AI is on the agenda but somehow never quite gets resolved.
Boards see AI. They’re just hesitant to rely on it.
Adoption is accelerating. Reliance is not.
Across industries, the adoption numbers look impressive. Tools are embedded in enterprise platforms, employees experiment constantly, vendors position AI as table stakes, and public companies now routinely reference AI in their disclosures. From the outside, it looks like steady progress.
But inside organizations, the picture is more complicated.
AI generates analysis, and someone double-checks it.
AI drafts a memo, and someone rewrites it.
AI flags anomalies, and no one wants to act on the alert alone.
AI produces recommendations, but the final decision still feels like it needs to be human and instinctive.
The technology is present, but reliance is tentative. And most organizations can’t tell you exactly where the line is - which decisions AI is actually influencing, which it’s merely informing, and which it’s quietly making while everyone assumes a human is still in charge. That ambiguity is not a minor operational detail.
It’s a governance gap.
The hesitation is human.
That panel comment wasn’t really about technology. It was about control - and that distinction matters more than it might seem.
Behavioral research describes something called control aversion: the tendency for people to resist systems that appear to reduce their autonomy or judgment, even when those systems demonstrably outperform human alternatives.
The key word is appear.
Control aversion isn’t triggered by poor performance. It’s triggered by the perception that a decision is no longer fully yours. In high-stakes leadership environments, where judgment is both the job and the identity, that perception is deeply uncomfortable.
This is why you see smart, analytically sophisticated board members respond to AI the way they do. It’s not that they can’t evaluate the technology. It’s that relying on it feels like a different kind of vulnerability than they’re used to managing.
Layered on top of that are two additional dynamics.
Status quo bias pulls leaders toward existing processes simply because they’re familiar - the devil you know.
And the endowment effect leads us to overvalue our own institutional judgment relative to new systems, not because our judgment is necessarily better, but because it’s ours.
Put those three things together - control aversion, status quo bias, the endowment effect - and the result isn’t rejection. It’s something subtler and harder to address: a persistent, self-reinforcing hesitation that feels entirely reasonable from the inside, even as it quietly accumulates cost on the outside.
How hesitation quietly reshapes organizations
Hesitation rarely stops adoption outright. Instead, organizations adapt around it. Extra validation steps get added. Pilot programs stretch on indefinitely. Teams experiment unevenly. Employees become unsure when AI output is actually “allowed” to stand on its own. Individually, each of these adjustments seems prudent. Collectively, they add friction - and they make it nearly impossible for leadership to get an accurate picture of what AI is actually doing inside the organization.
Companies invest in AI capability but struggle to capture AI leverage. Decision velocity barely changes. Efficiency gains feel incremental. Strategy language moves faster than operational confidence. And perhaps most importantly, no one can answer the questions that matter most: Where is AI influencing our decisions? Who owns those systems? What happens when something goes wrong?
Many organizations today have AI everywhere - and accountability nowhere. That’s not a technology problem. It’s a readiness problem.
Why AI conversations default to risk
Think back to that panel. The moderator asked about efficiency, and within minutes the conversation had migrated to liability. That shift is revealing - and also predictable. Boards are trained, correctly, to identify downside exposure before upside acceleration. So when AI enters the room, it naturally gets filtered through familiar governance lenses: data security, legal privilege, regulatory exposure, documentation risk.
Those concerns are valid. But when AI discussions remain stuck at containment, something important never gets addressed. The conversation about what it would actually take to rely on this - not just tolerate it - never happens. No one maps where AI is operating. No one assigns ownership. No one defines what good looks like or how to measure it. And without that work, caution doesn’t evolve.
It calcifies.
The organization keeps experimenting indefinitely, never quite committing, never quite capturing the value it keeps announcing in earnings calls.
We’ve been here before
A few months ago I wrote about the idea that AI represents a “1968 moment” - a narrow window of access and advantage that most people don’t recognize until it has passed. (If you missed that issue, you can read it here.)
What I didn’t explore then is what it feels like to be inside one of those moments. The answer, historically, is that clarity isn’t the dominant experience. What leaders actually feel is uncertainty, conflicting signals, and genuine debate about whether the change is real, temporary, or misunderstood. Generational shifts rarely feel historic while you’re living through them. They feel unsettled.
That’s where many boards are right now. Not resisting the future - encountering it before the institutional structures around it are fully formed. The problem is that technological change doesn’t pause while leaders work through that uncertainty. Competitors experiment. Employees adapt. Markets move. And the organizations that wait for clarity before building governance infrastructure tend to discover, too late, that the gap they thought was temporary has become structural.
The actual problem boards need to solve
Here’s what I’ve come to believe after working with boards on this: the hesitation isn’t really about trust in the technology. It’s about the absence of a structure that would make trust rational.
Think about how boards govern other complex, high-stakes systems - financial controls, cybersecurity, regulatory compliance. They don’t rely on those systems because they’ve personally audited every line of code or policy. They rely on them because there’s a visible architecture around them: clear ownership, defined controls, regular reporting, and a process for surfacing problems before they become crises. That architecture is what makes confidence possible. It’s also what makes accountability possible when things go wrong.
AI currently operates, in most organizations, without that architecture. It’s present but not mapped. Influential but not owned. Used but not governed. Leaders know AI is somewhere in the organization - they just can’t tell you exactly where, or what it’s touching, or what would happen if it failed. And so what looks like a trust problem is really a visibility problem. Boards can’t govern what they can’t see - and right now, most boards can’t see very much.
The question worth asking isn’t “do we trust AI?” It’s “have we done the work to understand what we’re actually dealing with?” In my experience, most organizations haven’t. Not because they’re negligent, but because no one has handed them a structured way to do it.
What rigorous AI readiness actually looks like
This is the problem the Elemental AI Governance Navigator was built to solve - and I want to be specific about what that means because “AI assessment” has become a vague term that covers everything from a one-page checklist to a comprehensive diagnostic.
The Navigator is the latter. It evaluates an organization’s AI readiness across five stages of maturity and seven core domains: governance and oversight, leadership and talent, strategy and use case fit, risk and ethics, decision intelligence, data and infrastructure, and culture and change readiness. The diagnostic itself draws on over 500 structured questions tailored to the company’s specific context, paired with a half-day executive session designed to surface not just what leaders know, but where the gaps between leadership and operational reality actually live.
The output isn’t a report that sits in a drawer. It’s a radar-style visualization that maps current state against target profile across all seven domains - the kind of board-ready picture that makes it immediately clear where the organization is genuinely ready to move and where it needs to build before it can safely scale. More importantly, it identifies the gaps that tend to derail AI initiatives before they become visible as failures: the governance structures that don’t exist yet, the accountability that hasn’t been assigned, the risk controls that looked adequate until they weren’t.
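To make “radar-style visualization” concrete, here is a minimal sketch of what that kind of chart looks like in code: a generic current-vs-target radar plot in Python with matplotlib, using the seven domain names from this article and placeholder maturity scores on a 1-5 scale. The scores and styling are illustrative assumptions, not the Navigator’s actual scoring model or output.

```python
# Sketch of a radar chart: current state vs. target profile across the
# seven domains named in this article. Scores are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

domains = [
    "Governance & Oversight",
    "Leadership & Talent",
    "Strategy & Use Case Fit",
    "Risk & Ethics",
    "Decision Intelligence",
    "Data & Infrastructure",
    "Culture & Change Readiness",
]
current = [2, 3, 2, 3, 1, 2, 2]  # hypothetical current-state scores (1-5)
target = [4, 4, 4, 5, 3, 4, 4]   # hypothetical target-profile scores (1-5)

# One angle per domain; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False).tolist()
angles += angles[:1]
current += current[:1]
target += target[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, current, label="Current state")
ax.fill(angles, current, alpha=0.2)
ax.plot(angles, target, linestyle="--", label="Target profile")
ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains, fontsize=8)
ax.set_ylim(0, 5)
ax.legend(loc="lower right")
plt.show()
```

The value of this format for a board is that the gap between the two polygons is visible at a glance: where the shapes nearly touch, the organization is close to ready; where they diverge, there is building to do before scaling.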
What makes this work isn’t the framework - it’s the diagnostic conversations. Boards and executives consistently know less about their actual AI exposure than they think they do. The Navigator is designed to surface that gap in a structured way, before it surfaces on its own in a less comfortable context.
If your board is having the same AI conversation it was having eighteen months ago - important but somehow never resolved - that’s usually a signal that what’s needed isn’t more discussion. It’s a structured assessment of where you actually are. Email me at Fayeron.ElementalAI@gmail.com if you’d like to talk through whether the Navigator is the right fit for your organization.
A better question for boards
Most boards instinctively ask: Is this AI reliable? It’s a reasonable starting point. But in practice, it tends to produce either vendor assurances or technical debates - neither of which moves the governance conversation forward.
A more useful question is: What would need to be true for us to rely on this?
That shift matters because it reframes trust as something designed, not assumed. It moves the conversation from “evaluate this technology” to “build the conditions under which we can confidently use it.” Ownership becomes clear. Controls get defined. Reporting gets established. The board stops being a passive observer of AI activity and starts being an active architect of the environment in which AI operates. That’s a different posture - and ultimately a more defensible one.
Let’s Get Elemental
I keep coming back to that panel moment. Not the comment itself, but the silence that followed it. A room full of experienced directors, and no one pushed back - not because they disagreed, but because the hesitation felt too familiar to argue with.
That silence is the trust gap. And closing it won’t happen through better AI. It will happen through better governance - the kind that gives boards enough visibility to move from watching AI operate to actually directing it. Technology scales what organizations can do. Trust determines what they’re willing to let it do.
The boards that close that gap deliberately, and with structured rigor rather than good intentions, will find themselves governing something genuinely different from the ones that don’t.
About Fayeron Morrison
Fayeron Morrison is the President of Elemental AI, a strategic advisory firm that helps boards and executives navigate the governance challenges of artificial intelligence. She is the creator of the Elemental AI Governance Navigator, a diagnostic tool built to bring clarity and accountability to AI oversight at the highest levels.
A graduate of the Stanford Graduate School of Business Executive Program in AI Leadership, Fayeron is also the author of Elemental AI, a weekly Substack publication focused on AI governance, risk, and boardroom readiness.
Beyond her AI work, Fayeron is a Certified Public Accountant (CPA) and Certified Fraud Examiner (CFE) with a long-standing career advising both public and private companies.
She lives in Newport Beach, California with her husband and their Bernese Mountain Dog, Oakley. She’s the proud mom of three grown sons and, when she’s not writing or advising, she’s likely on a hiking trail with Oakley - where she does some of her best thinking!
Get in touch to learn more about the Governance Navigator →
📧 fayeron.elementalai@gmail.com
🌐 elementalai.ai