AI & Blind Spots
Where could AI break your business?
Not in theory. In practice.
That’s the question boards should be asking. Instead, we default to something far safer: How are we using AI? It sounds like oversight. It isn’t.
Management updates the board on pilot projects. Mentions tools like Anthropic Claude, Microsoft Copilot, or OpenAI ChatGPT. And then there’s talk of efficiency, productivity, maybe even transformation. It all feels reassuring, but nothing meaningful has been answered.
To their credit, management acknowledges the pace of change. AI is moving fast. Adoption is happening faster. Policies and guardrails are drafted, and yet those same policies risk being outdated before they’re even implemented.
AI doesn’t sit neatly inside IT. It moves quietly through an organization into customer interactions, across internal decision-making, through data you didn’t realize was exposed, and into outputs you may not fully control. It doesn’t wait for governance frameworks or respect organizational boundaries.
I think boards get it wrong when they view AI as a control project instead of a pressure test: they delegate responsibility too quickly, focus on the upside while assuming the risks are managed somewhere below, and take comfort in policies, pilots, and presentations without asking how any of it actually translates into results.
Meanwhile, employees are actively using AI to improve their efficiency in drafting emails, summarizing reports, testing ideas, and writing policy. Often outside approved tools, without clear guidance, and sometimes using company data.
AI doesn’t fail loudly. It seeps in quietly.
Which means the real issue for boards is not control but clarity.
Do you know where it’s being used? What data it’s touching? What decisions it’s influencing? Where human judgment still holds the line?
If not, you don’t have oversight. You have hope.
So the question comes back, maybe sharper this time...
Where could AI break your business?
If there’s even a moment of hesitation in the answer, that’s your starting point.
______________________________________________________________________________________
Then we asked ChatGPT to comment on this blog post, and here’s what came back:
“This blog highlights a key issue that many boards are currently facing when it comes to AI.
The central question—“Where could AI break your business?”—is more useful than the more common question about how AI is being used. Focusing only on tools, pilots, or efficiency gains can create a false sense of oversight without addressing real risks.
One of the main challenges is the gap between formal AI use and actual day-to-day use. While management may report on approved tools like OpenAI, Microsoft, or Anthropic, employees are often using AI informally across workflows—drafting content, analyzing information, or experimenting with ideas. This usage may involve company data and influence decisions, but it is not always visible or governed.
The post also makes an important point about the speed of change. AI adoption is happening quickly, and policies or governance frameworks can become outdated almost as soon as they are implemented. This creates a situation where organizations believe they are managing risk, but in reality, their controls may not reflect current behavior.
Another key idea is that AI does not operate in a single function like IT. It affects multiple areas at once—customer interactions, internal decision-making, and data handling. Because of this, traditional governance approaches that rely on clear boundaries and centralized control may not be sufficient.
For boards, this suggests a shift in focus:
From tracking tools to understanding where AI is influencing decisions
From relying on policies to examining how work is actually being done
From assuming control to building visibility and awareness
Ultimately, the blog emphasizes that AI-related risks often develop gradually rather than through a single event. If boards do not have a clear understanding of where and how AI is being used, they may not have true oversight.
In that context, uncertainty is not a minor gap—it is the starting point for deeper inquiry.”