Issue No. 5 – Shadow Models & Ghost Decisions
How Shadow AI Is Already Reshaping Risk—and Why Boards Need to Get Ahead of It
“There is a ghost in the machine.”
Gilbert Ryle, 1949
Gilbert Ryle coined this phrase back in 1949 to mock the idea of a mind invisibly steering the body.
Over seventy-five years later, I’m in a car that changes lanes by itself.
It parks, it brakes, it navigates traffic with unnerving confidence—and it’s still hard for me to take my hands off the wheel.
Because as smooth as it feels, I know it’s making decisions I can’t see… and sometimes don’t expect.
That tension—between marvel and mistrust—isn’t just for the highway.
It’s showing up in your company, too.
You already know the AI you’re supposed to see:
• the polished demo in the board deck
• the chatbot that marketing is already counting as twenty new hires
• the finance pilot someone insists is “just a test”
But the real action? It’s happening before the first cup of coffee.
There’s AI quietly running in the background—rerouting leads, rewriting forecasts, screening applicants. No one approved it. No one’s monitoring it. And no one’s quite sure what it’s learning.
That’s Shadow AI.
And like a side road the GPS insists is faster, it looks efficient—until it isn’t.
What Is Shadow AI?
Shadow AI is what happens when employees or teams start using AI tools without IT, legal, or security ever knowing.
It’s fast. It’s easy. It’s usually well-intentioned.
But it can also lead to:
• 🕵️‍♀️ Data leaks
• 🚫 Regulatory noncompliance
• 🔓 Exposure of proprietary systems
• 😬 Loss of customer trust
Real-World Flashpoints: When Shadow AI Is Discovered Too Late
Shadow AI rarely announces itself. It doesn’t knock on IT’s door or ask for a license.
It just shows up—until something leaks, breaks, or makes headlines.
Here are three moments when tech outpaced governance:
Samsung’s Code-in-the-Wild Moment (2023)
A few engineers copied source code into ChatGPT—just trying to debug something.
But that proprietary code didn’t stay inside the company. Under OpenAI’s default settings at the time, anything pasted into ChatGPT could be retained and used to train future models.
Now imagine being the executive who finds out confidential IP has been handed to a public model.
That’s exactly what happened. The result? A full internal ban on ChatGPT. Overnight.
That’s how fast trust evaporates when no one’s watching the wheel.
Slack’s Hidden Policy (2024)
In 2024, it came out that Slack’s own privacy principles allowed customer messages to be used, by default, to train Slack’s AI and machine-learning models—unless the company explicitly opted out (by emailing Slack to ask).
Most companies had no idea.
So a lot of businesses were unintentionally feeding their internal conversations—strategy docs, hiring plans, product ideas—into Slack’s model-training pipeline.
The clause was in the fine print, sure. But nobody flagged it. Until the internet did.
Salesforce’s Slack Lockdown (2025)
Now, the pendulum swings the other way. As of June 2025, Salesforce (Slack’s parent company) barred third-party apps from bulk-accessing and storing Slack message data—even with customer consent.
Apps like Glean—which build enterprise knowledge graphs from Slack—abruptly lost that access.
A Salesforce spokesperson told Reuters,
“As AI raises critical considerations around how customer data is handled, we’re committed to delivering AI and data services in a way that’s thoughtful and transparent.”
But here’s the kicker:
By shutting down sanctioned access without offering a clear alternative, Salesforce may have triggered a new wave of shadow AI—employees finding unsanctioned ways to get their data back.
Sometimes, governance is what creates the shadows.
These aren’t fringe cases.
They’re what happens when tech moves faster than governance—and everyone assumes someone else is watching the wheel.
The DIY ChatGPT Problem
Public AI tools are frictionless—and that’s the risk.
A joint 2024 survey by CybSafe and the National Cybersecurity Alliance (a nonprofit focused on security awareness) found:
38% of employees admitted to sharing sensitive work data with AI tools—without their employer’s knowledge.
Companies are responding:
• Samsung
• Apple
• Verizon
• Amazon
• JPMorgan
All have banned or restricted public LLMs internally.
🛑 Shadow AI isn’t just an IT issue. It’s a company-wide threat vector.
When Risk Comes Pre-Installed
Not all shadow AI sneaks in through backdoors.
Sometimes it arrives gift-wrapped—inside the platforms you already trust.
Think:
• A CRM’s “insight engine” quietly reshuffling sales priorities
• A hiring platform filtering resumes via black-box models
• An analytics suite surfacing anomalies—without showing its math
These features get activated with a checkbox.
But how many teams stop to ask:
• What’s the model actually doing?
• What data is it touching?
• Where does it send that data?
• And what happens when it gets it wrong?
Most vendor contracts still don’t address explainability, indemnity, or even notice when the model changes.
When AI is embedded in everyday tools, risk becomes easy to overlook—and even easier to inherit.
Shadow AI = Open Door for Attackers
Every unsanctioned model, browser plug-in, or third-party API is a potential breach point.
Your security team can’t defend what they don’t know exists.
That’s how attackers slip in—through side doors no one thought to lock.
Minimum Guardrails
The Cloud Security Alliance—a global organization that sets standards for cloud and AI security—recommends a few baseline moves:
✅ Acceptable Use Policy: Make it explicit—no sensitive data in public AI tools.
🛍️ Internal AI App Store: Give employees safe, pre-approved options.
📡 Real-Time Monitoring: Flag calls to unapproved LLMs as they happen (a minimal sketch follows just below).
🎓 Training + Amnesty: Teach responsible use—and make it safe to come clean.
Because if employees can’t find a secure path… they’ll invent a risky one.
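If you’re wondering what that real-time monitoring guardrail could look like in practice, here’s a minimal sketch: scan outbound proxy or egress logs for traffic to public AI endpoints that aren’t on your approved list. The domain list, the log format, and the flag_unapproved_calls helper are illustrative assumptions, not a prescription—swap in your own gateway’s log schema and your own policy.

```python
# Minimal sketch: flag outbound calls to unapproved LLM endpoints in proxy logs.
# The domain list and log format below are illustrative assumptions, not a
# definitive inventory -- adapt both to your own egress logs and AI policy.

import re

# Hypothetical "not on the approved list" public AI endpoints.
UNAPPROVED_AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Assumed log format: "<ISO timestamp> <user> <destination-host> <bytes-sent>"
LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<user>\S+)\s+(?P<host>\S+)\s+(?P<bytes>\d+)$")


def flag_unapproved_calls(log_lines):
    """Yield (timestamp, user, host, bytes_sent) for traffic to unapproved AI domains."""
    for line in log_lines:
        match = LOG_LINE.match(line.strip())
        if not match:
            continue
        host = match.group("host").lower()
        if any(host == d or host.endswith("." + d) for d in UNAPPROVED_AI_DOMAINS):
            yield (match.group("ts"), match.group("user"), host, int(match.group("bytes")))


if __name__ == "__main__":
    sample = [
        "2025-06-02T09:14:03Z jdoe api.openai.com 48211",
        "2025-06-02T09:14:09Z asmith intranet.example.com 1032",
    ]
    for ts, user, host, sent in flag_unapproved_calls(sample):
        print(f"[FLAG] {ts} {user} -> {host} ({sent} bytes sent)")
```

In production, this logic usually lives in a secure web gateway or CASB rule rather than a script—but the principle is the same: you can’t flag what you don’t log.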
Let’s Get Elemental…
Where strategy meets systems—and the real work begins.
📍 For Management: What You Should Start to Track Now
• Inventory your AI touchpoints—sanctioned, shadow, vendor-embedded, browser-based (a simple starting format is sketched after this list).
• Pull three contracts for key AI-enabled vendors. Check for audit rights, model updates, indemnity, and explainability.
• Talk to your CISO. Ask what tools are being monitored—and which ones might not be.
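As a starting point for that inventory, here’s a minimal sketch of what a single “AI touchpoint” record might capture. The field names and sample entries are assumptions for illustration; the point is that every entry answers who owns the tool, what data it touches, and whether anyone approved it or is watching it.

```python
# Minimal sketch of an AI touchpoint inventory -- field names are illustrative
# assumptions; track whatever your CISO and legal team actually need.

from dataclasses import dataclass, asdict


@dataclass
class AITouchpoint:
    name: str                # e.g. "CRM insight engine"
    category: str            # "sanctioned" | "shadow" | "vendor-embedded" | "browser-based"
    owner: str               # accountable team or person
    data_touched: str        # what data the tool can see
    approved: bool           # has IT/legal/security signed off?
    monitored: bool          # does Security see its outbound traffic?
    contract_reviewed: bool  # audit rights, indemnity, explainability checked?


inventory = [
    AITouchpoint("CRM insight engine", "vendor-embedded", "Sales Ops",
                 "customer records", approved=True, monitored=False,
                 contract_reviewed=False),
    AITouchpoint("Browser ChatGPT use", "shadow", "unknown",
                 "unknown", approved=False, monitored=False,
                 contract_reviewed=False),
]

# Anything unapproved or unmonitored is your shadow AI short list.
for tp in inventory:
    if not (tp.approved and tp.monitored):
        print(asdict(tp))
```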
🧭 For Boards: What You Need to Ask—Before Someone Else Does
Have we mapped every AI touch-point—official, shadow, or vendor-embedded?
Are there clear rules about putting regulated or confidential data into external models?
Do our contracts cover AI-specific risks—bias, breach, hallucinations?
Are we sandboxing before we scale?
Is it easy for employees to disclose a new tool?
Is Security aware of every outbound AI connection?
AI risk doesn’t always arrive wearing a name tag and asking for a budget.
More often, it slips in quietly—embedded, automatic, invisible.
You don’t need to fear every algorithm.
But you do need to know which ones are already making decisions on your behalf.
Because if something’s already driving your business…
you should probably know where it’s going.


