The Product AI Agent

The Product AI Agent

Share this post

The Product AI Agent
The Product AI Agent
How PMs are contributing to 40% of data misuse in LLMs and damaging your brand
AI PM Operating System

How PMs are contributing to 40% of data misuse in LLMs and damaging your brand

The Jedi mind trick product leaders need in product operations to restore balance in their rogue teammates who use sensitive data in prompts.

Tim Daines's avatar
Tim Daines
Aug 21, 2025
∙ Paid

Share this post

The Product AI Agent
The Product AI Agent
How PMs are contributing to 40% of data misuse in LLMs and damaging your brand
1
Share
IMAGE: Created by AI and inspired by Star Wars

"Dangerous is the path when data flows unchecked through LLMs. Confidential, it might be.”

That might sound like a line from Yoda, but it’s also the reality facing Product Leaders in 2025.

Let’s face it, LLM adoption has outpaced data governance.

Whilst your product teams, the loyal but unpredictable Wookie companions, may be strong in talent, they are blind to the shadows they cast by pasting sensitive data into LLMs without hesitation.

By 2027, Gartner predicts that 40% of AI-related data breaches will originate from the misuse of cross-border generative AI. Combine that with the rising sophistication of prompt injection attacks, and you have the ingredients for the biggest compliance crisis since email.

Those product leaders who now command board-level respect aren't banning LLMs, they're using a 3-wave AI governance system that's prevented hundreds of data breaches across enterprise teams (more on this below).

Those who continue with clever productivity will quickly become the compliance disaster that will dominate the company boardroom.

Let’s find out how to prevent your LLM governance crisis.


This August, we aren’t putting our feet up. Instead, we’re helping the PMs who are drowning in AI survival Mode. Before your next security review, prevent data breaches by accessing the September PM AI Product Operations Playbook.


How ‘must use LLM’ pressure breeds bad practice

The numbers paint a sobering picture. 69% of organisations cite AI-powered data leaks as their top security concern in 2025, yet nearly 47% have no AI-specific security controls in place.

I’ve been talking to product leaders on the ground, and a dangerous pattern emerges:

"My PMs know LLMs hallucinate, but still paste in unverified research. They understand personal data shouldn't be shared, but still drop in customer details because deadlines loom. They feel the AI hype breathing down their necks, and act in survival mode, cutting corners to keep up."

The result? A toxic mix of urgency and convenience, where teams abandon their data-discipline training.

What your IT team won't tell you. Prompts are the candy that attracts creative product people to use LLMs. A perfect lure for single-question taste buds, which can be weaponised for injection attacks, including those with hidden instructions embedded in innocent-looking content.

These attacks coerce LLMs into revealing sensitive data or injecting malicious outputs. They are now top-ranked enterprise threats, leaving even Microsoft quaking in their boots.


What even Microsoft Lost Control

If you think your product teams couldn’t possibly expose sensitive data at scale, consider what happened inside Microsoft.

In 2023, AI researchers accidentally left a misconfigured Shared Access Signature (SAS) token in GitHub that unlocked 38 terabytes of private data connected via an API. That trove included source code, personal backups, and even 30,000 internal messages, complete with passwords and security keys.

This wasn’t the result of a malicious insider or a sophisticated attack. It was product-driven AI experimentation without proper guardrails. Researchers needed a quick way to share data, and in the rush to move fast, they overlooked the basics of data security.

Here's what your vendors won't tell you, our product managers or our Jedi product leader can take away from this incident:

  • Losing sight of the data pipeline attached to your product processes.

  • If Microsoft can end up on the front page of Wired for a product team slip-up, your company’s reputation won’t survive a similar headline.

  • The board will question your product leadership capability when the Q4 audit exposes this gap.

  • Systematic risks appear when PMs sprint ahead without product operations discipline, i.e., creating prompts without first understanding the outcome of their use.

  • Product Leaders should be aware that they may become the centre of attention of the next big security showdown.

  • Your CTO will discover untracked LLM usage in the upcoming security audit.

What I've discovered while working with LLM vendors is that those who cannot explain how they defend against prompt injection attacks are the ones whose platforms become tomorrow's security nightmares.


I’ve been generally shocked at hearing from Senior Product Leads about rogue teammates uploading confidential data into LLMs to speed up self-productivity. If you are in this position and would like a FREE 30-minute conversation to help you resolve this behaviour, please book a time in my calendar at your convenience.

Share

Book a FREE 30 minute call


Why a Product-Ops playbook is the Jedi mind trick needed

Here’s the truth about clamping down on LLM usage. Policy memos won’t work. Email reminders get ignored. Restricting usage stifles innovation and slows down your top talent. And endlessly trying to determine which LLM is ‘best’ for every stage of the product lifecycle creates analysis paralysis.

What works is an AI Product Operations Playbook. A 3-wave system that gently steers Product Managers towards secure, compliant behaviour, without them realising they’re being managed.

It begins with purpose.

This post is for paid subscribers

Already a paid subscriber? Sign in
© 2025 Tim Daines
Privacy ∙ Terms ∙ Collection notice
Start writingGet the app
Substack is the home for great culture

Share