Issue No. 29 - When AI Goes Dark: The Debut of the Governance Navigator
Turning AI’s black box into a picture leaders can actually govern.
“There is a ghost in the machine.”
— Gilbert Ryle, 1949
Gilbert Ryle coined this phrase to mock the idea of a mind invisibly steering the body.
I’ve used this quote before - in Issue No. 5, Shadow AI and Ghost Decisions - and it’s worth revisiting.
In that piece, I talked about how, more than seventy-five years later, my car changes lanes on its own. It parks, it brakes, it navigates traffic with unnerving confidence - and it’s still hard for me to take my hands off the wheel. Because as smooth as it feels, I know it’s making decisions I can’t see… and sometimes don’t expect.
But even if you don’t drive a semi-autonomous vehicle, everyone has experienced a moment like this:
You turn on your laptop, your phone, or your television - and instead of information, you’re met with a black screen.
No status.
No context.
No visibility into what the system is doing behind the scenes.
The device is clearly alive - you can hear the fan, feel the vibration.
Something is happening.
You just can’t see what.
And in that moment, you’re reminded of something important:
Technology is only as trustworthy as your visibility into how it is behaving.
Today, AI inside most companies operates the same way. It is running. It is learning. It is influencing workflows, decisions, controls, customer interactions, and risk outcomes. But it is doing so behind an opaque layer most leadership teams cannot penetrate.
Executives see the productivity.
They see the speed.
They see the promise.
What they don’t see is how mature their AI capabilities truly are - or how unevenly that maturity may be distributed across the organization.
This is the visibility gap of our time.
And you cannot solve a visibility gap with more AI.
You solve it with clarity - with a shared, accurate picture of how AI is operating inside your business.
That is why I built the Elemental AI Governance Navigator - a structured, evidence-based diagnostic designed to illuminate what has remained invisible to boards, CEOs, CFOs, and leadership teams.
The Governance Navigator turns the black screen into a coherent shape - a single picture that reveals AI maturity, governance strength, and organizational readiness across the areas that matter most.
This is its debut.
Why the Navigator Had to Exist - and Why Now
AI has crossed a threshold.
It is no longer experimental.
It is operational - and entering organizations through multiple channels at once:
• embedded in enterprise software
• through vendor updates
• through employee experimentation
• through features leaders never explicitly approved
• through consumer-grade tools used inside workflows
Meanwhile, expectations from boards, auditors, regulators, investors, and customers are rising.
Leaders are expected to govern a system they cannot yet see clearly.
And the gap between responsibility and visibility is where risk lives.
AI has outpaced our ability to understand its operational reality.
The Governance Navigator closes that gap.
It gives leadership a way to see AI as a system - not as scattered tools or isolated use cases.
The Fragmented View Problem
Across the enterprise, each function sees a different slice of AI.
CFOs see controls and exposure.
CIOs see architecture and integration.
Legal sees policy.
Risk sees vulnerabilities.
Data teams see lineage and quality.
Each perspective is valid, but no single one represents the collective reality of the business.
This fragmentation can lead to, among other things:
• inconsistent governance
• undocumented decision logic
• ambiguous accountability
• unmonitored drift
• opaque vendor behavior
• policies that are written but not practiced
This is not a failure of skill.
It is a failure of visibility.
The Governance Navigator creates the shared picture leaders have been missing.
Where the WEF AI Maturity Framework Fits
In 2021, the World Economic Forum recognized a global issue:
AI adoption was accelerating faster than governance, strategy, and regulation.
There was no common vocabulary.
No consistent baseline.
No agreed-upon definition of “maturity.”
The WEF’s AI Maturity Framework filled that gap - but it is conceptual by design. It offers a map, not a measurement tool.
It does not:
• reveal variations across teams
• diagnose governance gaps
• show where adoption outpaces oversight
• distinguish policy from practice
• expose where AI is improvising
The WEF defines the language.
The Navigator measures the reality.
What the Governance Navigator Measures
The Navigator evaluates AI maturity across seven domains - the places where responsible AI either becomes real in operations or quietly breaks down. Each domain is assessed using structured, evidence-based criteria on a 0–5 scale aligned to the WEF’s maturity levels.
Governance & Oversight
This domain looks at whether AI has a real home inside the organization. Are roles, responsibilities, decision rights, and escalation paths defined, documented, and used in practice, or is governance mostly implied? Mature organizations can point to clear ownership at every stage of the AI lifecycle.
Leadership & Talent
Here the focus is on competency, not coding. Do leaders and key teams understand AI well enough to set direction, ask informed questions, and respond when something does not look right? This domain evaluates AI literacy, governance awareness, and whether the right skills exist to support responsible use.
Strategy & Use Case Fit
This domain evaluates whether AI efforts are anchored in real business strategy. Are use cases selected intentionally, with clear value and guardrails, or are they scattered experiments driven by curiosity or pressure to “do something with AI”? Mature organizations show a disciplined link between strategy, use cases, and capacity to govern them.
Risk, Ethics & Compliance
Here the Navigator examines how the organization identifies, evaluates, and manages AI-related risk. Are ethical considerations and regulatory expectations built into design and deployment, or treated as a review step at the end? This domain reflects whether the company can operate AI safely in a world of rapidly evolving laws and stakeholder expectations.
Decision Intelligence
AI does not just produce outputs - it shapes decisions. This domain looks at how well the organization understands when and how AI influences decisions, who reviews or approves those decisions, and how exceptions are handled. The emphasis is on traceability, accountability, and the ability to explain AI-supported decisions at the level that matters for oversight.
Data & Infrastructure Readiness
AI maturity cannot rise above the quality of the data and systems underneath it. This domain assesses whether data is reliable, well governed, and fit for purpose - and whether the technical infrastructure can support monitoring, security, and responsible deployment at scale. Weakness here often explains why AI ambitions stall.
Culture & Change Readiness
Even the best AI roadmap will fail in a culture that is not ready to use it. This domain evaluates whether teams trust and understand AI tools, whether they are willing to escalate concerns, and whether the organization can adapt processes and behaviors as AI capabilities evolve. Culture determines whether AI becomes embedded or remains a series of pilots.
The result is not a single maturity score.
It is a profile that shows:
• where the organization is strong
• where it is exposed
• where imbalance between domains is creating hidden risk
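To make the idea of a seven-domain profile concrete, here is a minimal, hypothetical sketch in Python. The domain names come from the list above; the scores, the helper function, and the imbalance threshold are invented for illustration only - they are not the Navigator’s actual methodology, which involves structured, evidence-based assessment rather than a simple calculation.

```python
# Illustrative sketch only. Domain names come from the article;
# the scores, helper name, and threshold are invented assumptions.
from statistics import mean

DOMAINS = [
    "Governance & Oversight",
    "Leadership & Talent",
    "Strategy & Use Case Fit",
    "Risk, Ethics & Compliance",
    "Decision Intelligence",
    "Data & Infrastructure Readiness",
    "Culture & Change Readiness",
]

def profile_summary(scores: dict[str, float], gap_threshold: float = 2.0) -> dict:
    """Summarize a 0-5 maturity profile: strength, exposure, and imbalance."""
    values = [scores[d] for d in DOMAINS]
    spread = max(values) - min(values)
    return {
        "average": round(mean(values), 2),
        "strongest": max(DOMAINS, key=scores.get),
        "weakest": min(DOMAINS, key=scores.get),
        # A wide spread between domains is the "hidden risk" signal:
        # e.g., strong adoption paired with weak governance or risk controls.
        "imbalanced": spread >= gap_threshold,
    }

# Invented example: adoption and strategy outpacing risk and decision oversight.
example = dict(zip(DOMAINS, [4, 3, 4, 1.5, 2, 3.5, 2.5]))
print(profile_summary(example))
```

The point the sketch illustrates is the article’s: an average score hides the story, while the gap between the strongest and weakest domains is often where the real exposure sits.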
The AI Governance Radar
The Radar consolidates all seven domains into one integrated visual.
It reveals:
• where capabilities are strong
• where maturity is uneven
• where governance is structured
• where AI is improvising
• where risks are accumulating quietly
It replaces intuition with evidence.
It gives leaders something they’ve never had before:
a picture of the system they are responsible for governing.
Why This Launch Matters Now
The timing could not be more urgent.
• The EU AI Act’s obligations are phasing into enforcement.
• The SEC is asking sharper questions about AI-related controls.
• States are drafting algorithmic accountability laws.
• Investors are probing readiness.
• Vendors are embedding AI with limited transparency.
Oversight is no longer optional.
It is expected.
Organizations that understand their AI maturity will move with confidence.
Those that cannot will face friction, scrutiny, and delay.
The Black Screen Is Optional
AI becomes risky long before it becomes dangerous.
It becomes risky when:
• drift goes unnoticed
• maturity varies across teams
• accountability is assumed
• adoption outpaces oversight
• vendor logic is opaque
• data quality is inconsistent
None of these issues announce themselves.
They accumulate quietly… until the consequences arrive publicly.
The Governance Navigator makes these patterns visible before they become costs, headlines, or regulatory findings.
It turns the black screen into a contour - a shape leaders can finally interpret.
Once AI comes into focus, it never fades again.
And with that clarity comes the ability to govern it wisely.
Let’s Get Elemental
This is the debut of the Elemental AI Governance Navigator.
More guidance, examples, and case studies will follow in upcoming issues.
Visibility is the foundation of oversight.
And oversight is the foundation of trust.
If you cannot see it, you cannot govern it.
If you cannot see it, you cannot scale it safely.
If you cannot see it, you cannot defend the decisions made around it.
The black screen is optional.
Seeing the full picture is not.
Fayeron Morrison is a certified public accountant, certified fraud examiner, and the president of Elemental AI, a strategic board advisory firm helping organizations navigate the intersection of artificial intelligence, governance, and business transformation. A mom to three grown sons, she lives in Newport Beach with her husband and their Bernese Mountain Dog, Oakley. When she’s not advising boards, she’s usually on the trails with Oakley – where she does some of her best thinking! She can be reached at fayeron.elementalai@gmail.com or at elementalai.ai