The £300k consulting strategy that can't find the hidden AI Agent use cases
How to identify AI agent opportunities in costly manual business workflows.
“We spent £300k on Business Strategy consultants. They left us with 47 potential use cases and zero deployed agents.”
Sound familiar?
Six months later, you're still in discovery workshops while competitors ship agents that look suspiciously like the internal workflows you discussed with those consultants.
Last week, I showed you why most AI agents are just expensive chatbots. Built for demos, not real workflows. But that assumes you've already identified where to deploy agents.
Today, let's solve the problem that comes before implementation.
Finding the correct use cases in the first place.
Your best AI agent use cases aren't in consulting pitch decks. They're already running inside your business, executed manually by humans, every single day.
The challenge isn't finding AI agent opportunities. It's recognising them.
COMING END JULY 2025!
If you want to ‘Master Agentic AI product and design’ before your competition…
…enjoy a 50% discount on the ‘No Spoon Survival Guide’.
The classic 'Strategic Clarity' trap
I've watched dozens of AI initiatives fail because teams chase abstract possibilities while missing the structured, repeatable workflows that AI agents excel at.
"Let's build an agent that writes our marketing copy!"
This is Pitfall #1 of the 8 critical failure modes: vague goals replacing specific value cases.
Your organisation likely has 50+ processes perfect for AI agent automation, but they're invisible because they look like "just how we do things":
The weekly pipeline review that takes the sales ops team 6 hours
The customer onboarding checklist that CSMs follow religiously
The contract review process that bounces between legal and procurement
These aren't exciting AI moonshots.
Let’s look at a use case I worked on several years ago when AI 1.0 was all the hype.
The £2.4M Hidden in Plain Sight
A FTSE 100 insurance company discovered that its claims analysts spent 70% of their time on one task - cross-referencing policy details across five systems to validate claims.
It wasn't innovative. It wasn't strategic. It was just… Thursday.
If we were to revisit this today with AI agents, here’s the opportunity it could deliver:
Automation coverage: 80% of standard validations
Annual savings: £2.4M
Implementation time: 12 weeks
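As a back-of-envelope check on an opportunity like this, the saving falls out of a few multiplications. The figures below (analyst headcount, hourly cost, working weeks) are illustrative assumptions for the sketch, not numbers from the case; only the 70% time share and 80% automation coverage come from the example above.

```python
# Back-of-envelope ROI for a claims-validation agent.
# Headcount, hours, hourly cost and weeks are ASSUMED inputs for illustration.
analysts = 40                 # claims analysts (assumption)
hours_per_week = 37.5         # contracted hours (assumption)
validation_share = 0.70       # 70% of time on cross-referencing (from the case)
automatable = 0.80            # agent covers 80% of standard validations
hourly_cost = 55              # fully loaded cost in £/hour (assumption)
weeks_per_year = 46           # working weeks (assumption)

hours_saved = analysts * hours_per_week * validation_share * automatable * weeks_per_year
annual_saving = hours_saved * hourly_cost

print(f"Hours saved per year: {hours_saved:,.0f}")
print(f"Annual saving: £{annual_saving:,.0f}")
```

Swap in your own headcount and cost figures; the point is that the calculation is five inputs, not a strategy engagement.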
Knowing how insurance policy cross-referencing works, much of that workflow time likely sits in meetings, so I’d look in Outlook calendars to find entries like:
"Weekly Claims Review Meeting - 3 hours."
Something a business strategy consultant would never pick up.
The AI Agent opportunity hiding in plain sight
After shipping AI deployments, I've identified a simple four-signal mini-workshop that separates genuine AI agent opportunities from wishful thinking.
Stop looking for AI agent use cases in consulting decks and start looking where the work happens, at the keystroke, and you'll quickly see whether any potential use cases exist.
Here’s where to get started, quickly.
Find a very manual process that has lots of meetings and email messaging around a given critical business task, then run the following:
📊 The Calendar Test
Open a team's calendars. Every recurring meeting that involves:
Status updates
Information gathering
Progress tracking
…is a potential AI agent workflow.
📧 The Email Archaeology method
Search their sent folder for phrases like:
"Can you check if..."
"Please compile..."
"Send me the latest..."
🔄 The Handoff Mapper
Document every time work moves between teams. These transition points, where context gets lost and delays multiply, are where AI agents create maximum value.
Each represents a human doing potential AI agent work.
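The Email Archaeology step in particular is easy to automate against an export of a team's sent mail. A minimal sketch, assuming you have the message bodies as plain strings (the export format and sample messages are hypothetical):

```python
# Hypothetical sketch: tally "agent-shaped" requests in exported sent mail.
# The trigger phrases come from the Email Archaeology method above; the
# input format (one string per message) is an assumption.
from collections import Counter

TRIGGERS = ["can you check if", "please compile", "send me the latest"]

def email_archaeology(messages):
    """Count how often each trigger phrase appears across sent messages."""
    hits = Counter()
    for msg in messages:
        text = msg.lower()
        for phrase in TRIGGERS:
            if phrase in text:
                hits[phrase] += 1
    return hits

# Illustrative sample of a sent folder
sent = [
    "Can you check if the Q3 numbers are final?",
    "Please compile the weekly pipeline report.",
    "Send me the latest contract draft when legal signs off.",
    "Lunch on Friday?",
]
print(email_archaeology(sent))
```

A week of sent mail run through something like this gives you a ranked list of candidate workflows before the first workshop sticky note goes up.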
Day in the Life: Your AI Agent's ‘buy-in’ starter
"Day in the Life" is a service design technique that can be used to capture someone's chronological workflow hour-by-hour, exposing hidden inefficiencies that have become normalised.
It's incredibly powerful in driving AI agent conversations because it transforms abstract "process improvement" into a visceral reality that stakeholders can't ignore.
“Sarah wastes 3.5 hours every Monday on manual data compilation”
The before/after comparison makes the AI agent's value undeniable.
It’s not about replacing Sarah, but freeing her from soul-crushing copy-paste work to do strategic analysis.
It also turns ROI calculations into human stories, making buy-in easier: executives aren't approving "an AI agent", they're giving Sarah her Mondays back so she can spend them growing the business.
Unlocking AI-Agent value: The ‘Three Shifts’ that matter
Even when organisations spot high-impact AI agent use cases, they often stumble in execution.
The root causes can be traced back to three familiar antipatterns. Each has a straightforward cure if you apply data and process design thinking from the outset.
Anti-pattern 1: From Innovation Experiments to Rapid, Secure Deployment
The Antipattern:
You wow stakeholders with a Q1 pilot… then by Q3 the PoC is still stuck in security review, while your innovation leads have already raced off to the next shiny thing:
“We built a working fraud-detection bot, now it’s red-lined by legal.”
Why This Happens:
Data teams operate in isolation from compliance and user-experience experts. Security and governance only join in at “handoff,” triggering months of rework.
The Data and Process Design Fix:
Adopt a “Production-First Sprint” approach that stitches security, compliance, and end-user needs into every iteration. Before you even write your first line of agent code, assemble a cross-functional squad:
Security/compliance lead for regulatory policies
Data engineer for pipelines & access
UX/UI designer for auditability & transparency
Product owner for business and customer alignment
Sprint 0: Map data sources, risk controls, and user-journey checkpoints.
Sprint 1: Finalise data-use agreements and compliance requirements.
Sprint 2: Prototype UI-level explanations and API-driven integrations.
Track This: Time elapsed from pilot sign-off to live deployment.
Anti-pattern 2: From Disconnected Agents to a Unified Agent Ecosystem
The Antipattern:
You deploy an invoice-processing agent, then spin up a customer-service bot, then an analytics helper… only to discover none can pass context between them. Your “automation” multiplies silos.
“Yes, we have 12 agents, but none can hand off tasks or share customer state.”
Why This Happens:
Each team builds in isolation. Data architects and UX designers never co-author the end-to-end workflow, so there’s no shared memory, no standard protocols, and no smooth user hand-offs.
The Data and Process Design Fix:
Before building your second agent, blueprint an Orchestration Layer with your squad to get to:
Shared Context Store: a single source of truth for user profiles, case statuses, and decision logs.
Handoff Protocols: design standardised API contracts and UI-level hand-off cues (e.g., “Your case is now with InvoiceBot”).
Unified Escalation Paths: chart how agents invoke human review or route exceptions.
Workshops (sticky notes + data visualisations) help your team prototype these flows and spot friction points before any code is written.
Track This: Percentage of agents using the shared context store and handoff APIs.
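To make the Shared Context Store and handoff protocol concrete, here is a minimal in-memory sketch. It is an assumption-laden toy (a real ecosystem would back this with a database and standardised API contracts; `InvoiceBot` and the case fields are hypothetical names):

```python
# Minimal sketch of a shared context store with handoff logging.
# In-memory only; a production version would sit behind an API with
# authentication, persistence, and schema-validated contracts.
from datetime import datetime, timezone

class SharedContextStore:
    """Single source of truth for case state shared across agents."""

    def __init__(self):
        self.cases = {}

    def open_case(self, case_id, profile):
        self.cases[case_id] = {"profile": profile, "owner": None, "log": []}

    def handoff(self, case_id, to_agent, note=""):
        """Move a case to another agent and append a decision-log entry."""
        case = self.cases[case_id]
        case["log"].append({
            "at": datetime.now(timezone.utc).isoformat(),
            "from": case["owner"],
            "to": to_agent,
            "note": note,
        })
        case["owner"] = to_agent
        # UI-level handoff cue for the user
        return f"Your case is now with {to_agent}"

store = SharedContextStore()
store.open_case("C-1001", {"customer": "Acme Ltd"})
print(store.handoff("C-1001", "InvoiceBot", "invoice mismatch flagged"))
```

The design point: every agent reads and writes the same case record and decision log, so context survives the handoff instead of dying in a silo.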
Anti-pattern 3: From Tech Metrics to CEO-Proof of Business Outcomes
The Antipattern:
“We handled 10,000 agent requests today!” is no substitute for a conversation with your CEO.
“Nice stats, now, how many hours did we really save, and how much new revenue flowed in?”
Why This Happens:
Data scientists report throughput and accuracy, designers obsess over task completion rates, but neither team ties metrics back to the P&L or user value.
The Data and Process Design Fix:
Embed a business-aligned measurement framework into every squad’s Definition of Done:
Time Saved = (Automated tasks × avg. task time) × hourly cost
Revenue Impact = accelerated deals + cost avoidance (e.g., churn prevented)
Decision Velocity = days-to-minutes improvement in key workflows
Ask your UX designers to map key decision points (where the AI agent intervenes) and your data team to instrument those spots for end-to-end measurement.
Track This: Share of AI agents with fully documented business case (target: 100%).
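The measurement formulas above are simple enough to live as shared functions in the squad's reporting code. A sketch, with illustrative example inputs (the task counts and £ values are assumptions, not benchmarks):

```python
# Business-aligned metrics from the measurement framework above.
# Example inputs are illustrative assumptions.

def time_saved(automated_tasks, avg_task_hours, hourly_cost):
    """Time Saved = (automated tasks x avg. task time) x hourly cost."""
    return automated_tasks * avg_task_hours * hourly_cost

def revenue_impact(accelerated_deals_value, cost_avoided):
    """Revenue Impact = accelerated deals + cost avoidance (e.g. churn prevented)."""
    return accelerated_deals_value + cost_avoided

# Example period: 1,200 validations automated at 0.25h each, £55/h loaded cost
print(f"Time saved value: £{time_saved(1200, 0.25, 55):,.0f}")
print(f"Revenue impact:  £{revenue_impact(80_000, 15_000):,.0f}")
```

Decision Velocity is best tracked as a before/after duration on each instrumented decision point rather than a formula, which is why it isn't a function here.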
What do you think? Tried these approaches?
Stop Searching, Start Seeing
Your competitors aren't beating you because they have better AI technology. They're beating you because they recognise that AI agents excel at the boring, repetitive, high-volume workflows you've ignored.
The organisations winning with AI agents today didn't start with a vendor selection process. Nor a slide deck from a business strategy consultant.
They began with a simple question:
"What do our people do every week that follows a predictable pattern so we can build repeatable expertise?"
Your Next 48 Hours:
Tomorrow morning, open your calendar. Find three recurring meetings. For each one, ask:
What information gets gathered?
What decision gets made?
Who acts on the output?
If you can answer all three, you've found an AI agent opportunity.
Document it. Score it.
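One hedged way to score: rate each meeting 1-5 on the three questions and rank by the average. The equal weighting, 1-5 scale, and sample meetings below are assumptions for the sketch, not a validated rubric.

```python
# Hypothetical scoring sketch for the 48-hour calendar exercise.
# Each recurring meeting is rated 1-5 on the three questions above;
# equal weighting is an assumption you may want to tune.

def score(opportunity):
    """Average the three signals: info gathered, decision made, clear owner."""
    keys = ("info_gathered", "decision_made", "owner_acts")
    return sum(opportunity[k] for k in keys) / len(keys)

meetings = [
    {"name": "Weekly pipeline review", "info_gathered": 5, "decision_made": 4, "owner_acts": 4},
    {"name": "All-hands",             "info_gathered": 2, "decision_made": 1, "owner_acts": 1},
]

ranked = sorted(meetings, key=score, reverse=True)
for m in ranked:
    print(f"{m['name']}: {score(m):.1f}")
```

The top of the ranked list is where your first agent conversation starts.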
Then read next week's article, where I'll show you how to avoid Pitfall #2, Investment Discipline: why most teams chase shiny AI features instead of building business cases that survive CFO scrutiny.
Spoiler: it's not about the technology. It's about speaking finance.
In the next 18 months, your organisation will deploy AI agents.
The question is, will you be ready with validated use cases and precise ROI projections, or will you still be running to the consultants for a fictitious slide deck?
🙋♂️ Have you tried building AI Agents yet? Drop your tricks or lessons in the comments. Let's compare notes.
🔁 Share with a designer or PM still treating agents like chatbots.