Elemental AI Issue No. 22: The 97% Problem- Why AI Breaches Look Different (and Boards Aren’t Ready)
When 97% of AI breach victims had zero access controls in place, it's not a technology failure - it's a governance crisis waiting for your boardroom
Picture this: the CEO walks into a board meeting with grim news - the company’s system has been breached. The usual questions start flying. What data was accessed? Which system? How fast can we contain it?
Then comes the twist.
It wasn’t the customer database or the email server.
It was an AI model - and when the board asks what AI access controls were in place, the answer is simple: there weren’t any.
The Breach That Whispers
In September 2025, Noma Security discovered ForcedLeak, a vulnerability in Salesforce’s Einstein AI that allowed attackers to use prompt injection to extract sensitive CRM data.
No sophisticated malware. Just cleverly crafted text inputs.
The firewalls were fine. Encryption worked. Database permissions were tight.
But the AI - which had legitimate access to everything - could be persuaded to reveal that data.
No one had thought to govern what the AI itself could access or how it could be queried.
This is what AI breaches look like. They don’t announce themselves with ransom notes or system crashes. They whisper. A model quietly exfiltrated. Training data poisoned, altering outputs months later. A prompt that manipulates an AI system into revealing information it should never disclose.
No one watches for that.
Your incident-response plan doesn’t name it.
And the discussions are still framing AI risk in yesterday’s language.
The 97% Nobody’s Talking About
According to IBM’s 2025 Cost of a Data Breach report, 13% of organizations experienced a breach involving AI models or applications.
And among those organizations that suffered AI-related breaches, 97% had no AI access controls whatsoever.
Not weak controls. Not outdated controls. None.
If 97% of ransomware victims had no endpoint security, there’d be outrage.
If 97% of database breaches happened without access controls, the board would demand accountability.
But when it comes to AI - the technology every company is racing to embed across products and workflows - boards are letting it slide.
This isn’t a technology failure. It’s a governance one.
And if your board hasn’t discussed AI-specific security in the last quarter, you’re probably already in that 97%.
What AI Breaches Actually Look Like
The ForcedLeak vulnerability wasn’t an isolated incident.
In 2023, researchers found 225,000 OpenAI credentials for sale on the dark web. They weren’t stolen from OpenAI. They came from corporate users - stored in logs, code repositories, or config files.
Each token represented access to an organization’s AI environment, prompts containing proprietary data, and the ability to rack up API costs.
Most companies didn’t know until the bill arrived.
These aren’t hypothetical. They’re the first wave of AI breaches - invisible, low-noise, and governance-blind.
Where Governance Fails
The 97% statistic reveals a pattern of board-level blind spots that need to close - fast.
1. The Inventory Problem
Ask a simple question: “What AI systems do we use?”
If the answer requires multiple follow-ups, that’s the problem.
AI is everywhere - in SaaS tools, shadow projects, and embedded vendor features. You can’t govern what you can’t see.
And the numbers tell the story. Breaches involving shadow AI - ungoverned AI tools adopted without IT approval - cost companies an average of $670,000 more than standard breaches.
2. The Access Problem
Traditional access controls manage who can reach the data.
AI access controls must manage what the system itself can reach - and what it’s allowed to reveal.
A model with read access to your SharePoint can synthesize insights no single employee could. A clever prompt can unlock those insights without ever breaking a rule.
3. The Monitoring Problem
Your security team knows how to spot SQL injections and privilege escalations.
But can they detect prompt injections, model extraction attempts, or data poisoning?
Most organizations are watching today’s attack surfaces through yesterday’s dashboards.
4. The Response Problem
If a model is stolen, poisoned, or manipulated, who’s on point?
What’s the decision tree for taking an AI tool offline - and what’s the operational impact?
Most incident-response plans don’t even mention models, prompts, or data lineage.
Why This Happened So Fast
Three years ago, AI deployments were controlled and centralized.
Then came generative AI. Every employee gained access. Every SaaS vendor added AI features.
Adoption went from deliberate to chaotic - faster than governance could adapt.
Security teams didn’t have AI-specific frameworks ready.
Boards focused on innovation over containment.
And so, in 2025, we find ourselves here: AI embedded in everything, governed like nothing.
What Boards Need to Do Now
If your organization is in the 97%, here’s the path out:
Build the inventory.
List every AI tool, integration, and model in use - whether enterprise-approved or quietly adopted.
Define AI-specific access controls.
Not just who can use the model, but what the model can use.
Expand monitoring to AI behaviors.
Track anomalies in model performance, prompt logs, and API activity.
Update the incident-response plan.
Include model theft, data poisoning, and prompt injection scenarios.
Create new metrics for board reporting.
How many AI systems are governed? How many have access controls? What percentage have been security-assessed?
Bring in independent expertise.
AI security is not traditional cybersecurity. The board should hear from someone who understands both.
Let’s Get Elemental
The IBM data isn’t a trivia point. It’s a mirror.
Only 13% of breaches today involve AI - but that number is rising, fast.
And 97% of those breached had no controls.
The lesson is simple: AI is not a sidecar to cybersecurity. It’s the next frontier of it.
Organizations that wait for a breach to understand this will be the case studies everyone else learns from.
So ask the uncomfortable question at your next board meeting:
“Are we in the 97%?”
Then demand the inventory, the controls, and the accountability that pull you out of it - before you become next year’s statistic.
Thanks for reading Elemental AI ! Subscribe for free to receive new posts and support my work.
Fayeron Morrison is a certified public accountant, certified fraud examiner, and the president of Elemental AI, a strategic board advisory firm helping organizations navigate the intersection of artificial intelligence, governance, and business transformation. A mom to three grown sons, she lives in Newport Beach with her husband and their Bernese Mountain Dog, Oakley. When she’s not advising boards, she’s usually on the trails with Oakley - where she does some of her best thinking! She can be reached at fayeron@me.com.


