Issue 41: When AI Becomes Infrastructure
Cyber governance took a crisis to build. AI doesn't have that kind of time.
🎧 For those who prefer to listen, Chris and I recorded a companion episode on When AI Becomes Infrastructure: Cyber governance took a crisis to build. AI doesn't have that kind of time.
Do you remember when your mailbox was full of data-breach notices?
For me, every few months another bank, retailer, or service provider would "regret to inform me" that my Social Security number or card details had slipped out the back door.
And then things slowed down.
Those letters didn't stop because hackers lost interest in my data. They stopped because cybersecurity quietly graduated from a line item in the IT budget to governed infrastructure - with monitoring, escalation paths, board oversight, and eventually mandatory disclosure rules.
Cybersecurity eventually became something boards could rely on.
AI is now following the same path - but much faster.
Today AI is already woven into critical workflows across most organizations. Yet in many companies it is still treated like a clever tool, not a system you intend to rely on and therefore need to govern.
That gap - between dependency and governance - should feel familiar. It's exactly where cybersecurity stood not that long ago.
When cyber moved from IT issue to board issue
For most of the 1990s and early 2000s, cybersecurity barely appeared in boardrooms. IT departments talked about network security, patched servers, and monitored traffic. Directors assumed the issue was under control. Boards rarely discussed it.
In hindsight, that seems almost impossible to imagine.
Today cybersecurity appears regularly on board agendas. Directors receive threat briefings. Companies conduct tabletop breach exercises. Regulators require disclosure of material cyber incidents. Insurance carriers ask detailed questions before issuing coverage.
Cyber oversight has become a routine part of board responsibility.
But that governance structure did not exist for a long time. It emerged only after companies discovered something unsettling: they were already deeply dependent on systems they didn't fully control.
For years, cyber incidents were treated as technical nuisances. Embarrassing, yes - but not obviously existential.
Then the breaches started getting bigger.
Financial institutions discovered attackers had been inside their systems for months. Corporations realized intellectual property was quietly disappearing. Nation-state actors began probing corporate infrastructure in organized, sustained ways.
Even then many boards still treated cyber as an operational issue. Management installed security software. IT teams monitored networks. The problem appeared manageable.
The turning point came with Target in 2013.
Attackers infiltrated the network during the holiday shopping season. By the time the breach was discovered, more than 40 million credit and debit card numbers had been compromised, along with personal information belonging to roughly 70 million customers. The company ultimately agreed to an $18.5 million multistate settlement, on top of hundreds of millions in other breach-related costs.
The detail security professionals still point to today is this:
Target's security systems had actually detected the attack.
Alerts had been triggered.
The system recognized the suspicious activity moving through the network.
And nothing happened.
The alerts were not escalated quickly enough, and attackers continued operating inside the system.
The technology had worked. The governance had failed.
That moment changed how companies thought about cybersecurity. Organizations realized that installing security tools was not the same as controlling cyber risk. They needed monitoring frameworks, escalation protocols, incident response teams, and leadership visibility into what was happening.
In other words, cybersecurity required governance.
Over time, boards began asking new questions: How are threats detected? How quickly are incidents escalated to leadership? Who is responsible for responding? What reporting does the board receive about cyber risk?
Those questions led to a new oversight structure. Today, cybersecurity operates inside a governance framework that includes monitoring systems, response plans, reporting structures, and regulatory disclosure requirements. Breaches still occur. But organizations have built systems designed to detect them quickly and respond before the damage spreads.
Cybersecurity became something boards could rely on - not because the risk disappeared, but because the governance structures around the technology matured.
From a board's perspective, that trajectory should feel uncomfortably familiar.
Powerful technology running ahead of governance until a crisis forces accountability.
And that is exactly where AI is today.
The infrastructure threshold
Every major technology inside organizations eventually crosses a threshold where it stops being experimental and becomes infrastructure.
Infrastructure technologies are the ones companies depend on to operate: financial reporting systems, enterprise software, payment networks, cybersecurity monitoring.
Once a technology crosses that threshold, governance stops being optional. It operates inside defined oversight structures, with explicit owners, processes, and reporting.
Boards don't personally verify every journal entry or firewall configuration. That's not their job. But they can hold those systems accountable - because governance architecture exists around them.
AI has not yet earned that status. In most organizations today it is still governed - to the extent it is governed at all - by the same informal tolerance extended to any promising new tool.
But AI is approaching that threshold quickly.
As models move deeper into operational workflows - pricing engines, underwriting models, logistics optimization, fraud detection - the decisions influenced by AI begin to affect real economic outcomes.
That is when boards need to begin asking different questions.
Not simply what the technology can do.
But whether the organization has built the structures required to rely on it.
Right now most companies are in an awkward middle ground.
AI is important enough to matter, but not yet governed like infrastructure.
The opportunity for boards is to move faster on governance than they did with cyber - to treat AI as future core infrastructure while the footprint is still manageable.
The blind spot AI creates
There's one way AI is very different from cybersecurity.
By the time breach letters were filling our mailboxes, nobody could plausibly say, "We don't use computers." Boards understood they were running on digital infrastructure, even if they underestimated the risk.
With AI, I still hear a different line from boards and executives:
"We're not really using AI yet."
Most of the time, that's not true.
Their CRM has predictive lead scoring. Their HR system filters candidates with machine learning models. Their fraud tools use anomaly detection. Their marketing platform auto-generates and optimizes copy. Their cloud provider is quietly routing workloads through "AI-enhanced" services.
The AI is already there. It's just invisible.
I saw this blind spot up close recently. A company engaged me to draft an AI policy because the chair of the board put "AI policy review" on the agenda. When the CFO called, she was almost apologetic: "We don't even use AI, but I have to have something to show them."
Once we started mapping their systems and workflows, shadow AI was everywhere - inside their HR tools, their marketing platform, their customer support software, and even in spreadsheets staff were quietly running through public chatbots. The CFO herself was using AI regularly. She just didn't label it that way, because nobody had ever asked her to look at the company through that lens.
That makes the governance gap worse than cyber's early days.
In cybersecurity, companies underestimated how much risk they had. In AI, many organizations don't even see the systems that are creating the risk in the first place.
And unlike cyber, boards don't have twenty years for the governance to catch up.
We don't yet know what the AI equivalent of Target looks like. Maybe it's a hiring algorithm that surfaces a discrimination lawsuit after quietly filtering candidates for three years. Maybe it's a pricing model that triggers a regulatory investigation because it drifted in ways nobody monitored. Maybe it's a fraud detection system that missed something catastrophic because the underlying data changed and nobody noticed. The specific incident doesn't matter as much as the pattern - and the pattern is identical. A system running inside the organization, influencing real outcomes, with no one watching the dashboard. The breach letters, when they come, will have different letterhead. The governance failure underneath them will look exactly the same.
The question is not whether AI will become infrastructure. For most organizations, it already is - whether they've recognized it yet or not. The governance just hasn't caught up to that reality.
A different kind of board question
So if AI is already functioning as infrastructure inside your organization - running inside systems you depend on, influencing decisions that affect real outcomes, operated by vendors you approved years ago - the first job is not to write a 40-page policy.
It's to surface reality.
Start with three questions:
Inventory: Can management show us, on one page, where AI is already in our business - internal tools, vendor systems, and customer-facing products included?
Ownership: For each of those uses, who is accountable - by name - for how it behaves and what happens when it's wrong?
Readiness: If one of those systems caused real harm tomorrow, what evidence could we put on the table about how we selected it, tested it, and monitored it?
These are the right questions.
But here's the catch: you can't just hand them to management and wait for answers.
As the CFO story shows, the people closest to the work often can't see what's running underneath it. She wasn't hiding anything. She genuinely didn't recognize what she was looking at as AI. That's a failure of visibility infrastructure.
Asking management to self-report on AI they can't see is like asking Target's IT team to write a report on alerts they never noticed.
There's a phrase that comes up a lot in these conversations: trust but verify. But verify against what?
If you don't know where AI is operating in your organization, you have nothing to verify against. You can't audit a system you haven't mapped. You can't set standards for tools you don't know exist.
Without that visibility, what feels like governance is closer to optimism.
Let's Get Elemental
This is exactly where the Elemental AI Governance Navigator earns its place - not as a policy template, and not as a simple inventory checklist, but as a readiness diagnostic. It assesses AI maturity across seven domains: Governance & Oversight, Strategy & Use Case Fit, Risk, Ethics & Compliance, Decision Intelligence, Leadership & Talent, Culture & Change Readiness, and Data & Infrastructure Readiness.
Think of it the way insurance carriers think about cyber coverage. Before they'll issue a policy, they don't just ask whether you've had a breach. They ask whether your controls would contain one. Do you have monitoring? Escalation paths? Documented accountability? They're assessing whether the governance infrastructure is real - or theoretical.
The Navigator does what outside eyes often do that inside teams cannot. It surfaces not just where AI is operating, but whether the governance around it is real. Are the controls documented? Is ownership clear? Are there escalation paths when something goes wrong? Across seven domains, it gives leadership an honest picture of where their AI governance is solid and where it isnât.
That's the foundation the CFO in my story didn't have. Not because she wasn't capable - but because nobody had built the visibility infrastructure that would have let her see it.
And here's what that visibility actually unlocks. The organizations that built real cyber governance early didn't just avoid the worst outcomes - they got better at using the technology. Because once the oversight structure is in place, you can make deliberate choices.
You can say:
We've assessed this.
We've built the controls.
We're comfortable relying on this system for this decision.
That's the real payoff.
Not just staying out of trouble.
Earning the confidence to actually use AI - deliberately and accountably.
The goal of governance isn't to slow AI down.
It's to earn the right to rely on it.
Here's the question I'd leave you with:
If the board asked management tomorrow to show you where AI is operating in the business - by system, by owner, by decision impact - could they answer it?
In most organizations, the honest answer is not yet.
Not because anyone is asleep at the wheel, but because nobody has been asked to build that visibility infrastructure yet. The CFO in my story wasn't the exception. She was the rule.
That's not a technology problem.
It's a governance decision.
The organizations that make it deliberately, before something forces their hand, are the ones that won't be writing the AI equivalent of those breach letters.
Reach out at elementalai.ai.
Let's find out what's already running.
About Fayeron Morrison
Fayeron Morrison is the President of Elemental AI, a strategic advisory firm that helps boards and executives navigate the governance challenges of artificial intelligence. She is the creator of the Elemental AI Governance Navigator, a diagnostic tool built to bring clarity and accountability to AI oversight at the highest levels.
A graduate of the Stanford Graduate School of Business Executive Program in AI Leadership, Fayeron is also the author of Elemental AI, a weekly Substack publication focused on AI governance, risk, and boardroom readiness.
Beyond her AI work, Fayeron is a Certified Public Accountant (CPA) and Certified Fraud Examiner (CFE) with a long-standing career advising both public and private companies.
She lives in Newport Beach, California with her husband and their Bernese Mountain Dog, Oakley. She's the proud mom of three grown sons and, when she's not writing or advising, she's likely on a hiking trail with Oakley - where she does some of her best thinking!
Get in touch to learn more about the Governance Navigator →


