AI Tools and Customer Data: The Risk You Can't See

A thread on r/cybersecurity hit a nerve recently. The post — titled 'To every manager who thinks they have AI under control' — described a scenario playing out in offices everywhere: employees quietly feeding real customer records, internal documents, and sensitive business data into unapproved AI tools for months. No alerts. No flags. No one watching.

The manager found out the way most do. Too late.

If you run a small business, manage government contracts, or oversee IT for a handful of clients, that story probably landed differently than it would have two years ago. Because now there's news to go with it.

The Same Tools Your Staff Uses Were Used to Breach a Government

In early March 2026, attackers used ChatGPT and Claude with a carefully crafted playbook prompt to breach multiple Mexican government agencies and exfiltrate citizen data. According to Schneier on Security, Claude — a mainstream AI assistant used by millions of professionals daily — was weaponized as a core component of the attack chain.

Think about that for a moment. The same tool your billing coordinator might use to draft a client summary email was used to compromise a national government's data infrastructure.

This isn't an argument to ban AI. It's an argument to govern it. Because right now, most small businesses have no visibility into which AI tools their employees are using, what data is being submitted, or where that data goes afterward.

Anonymized Data Isn't Safe Either

One of the most common justifications employees give — when they give one at all — is that they "removed the names" before pasting data into an AI tool. Problem solved, right?

Not even close. According to Schneier on Security, LLMs can be used to re-identify individuals from data that appears anonymized. At scale, patterns in seemingly scrubbed records — zip codes, job titles, purchase histories, appointment dates — can be stitched back together to identify specific people.
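
To see how little it takes, here is a minimal, hypothetical sketch in Python: a "scrubbed" export with names removed is joined against publicly observable information on just three quasi-identifier fields. Every name, value, and field in it is invented for illustration.

```python
# Hypothetical illustration only: every name, value, and field below is
# invented. A "scrubbed" export (names removed) is joined against publicly
# observable information on three quasi-identifier fields.

scrubbed = [  # what an employee might paste into an AI tool as "anonymized"
    {"zip": "43215", "job_title": "billing coordinator", "visit_date": "2026-01-14", "notes": "..."},
    {"zip": "43221", "job_title": "office manager", "visit_date": "2026-01-15", "notes": "..."},
]

directory = [  # public traces: a LinkedIn bio plus an appointment confirmation
    {"name": "Jane Doe", "zip": "43215", "job_title": "billing coordinator", "visit_date": "2026-01-14"},
]

QUASI_IDENTIFIERS = ("zip", "job_title", "visit_date")

def key(record):
    """Join key built from the quasi-identifier fields."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

known = {key(person): person["name"] for person in directory}

for record in scrubbed:
    name = known.get(key(record))
    if name:
        print(f"Re-identified: {name} -> {record['notes']}")
```

No LLM is even needed for this toy case; the point is that an LLM can do the same stitching at scale, across messier data, without anyone writing the join by hand.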

For a small business handling customer health information, financial records, or government contract data, this isn't a theoretical concern. It's a compliance exposure. CMMC Level 1, the FTC Safeguards Rule, and Ohio's SB 220 safe harbor provisions all hinge on demonstrating that you've taken reasonable steps to protect sensitive data. "My employee thought it was fine" is not a defense that holds up.

We've written before about ChatGPT data leaks and small business AI security risks — and the LLM deanonymization research makes that risk substantially more concrete.

The Insider Threat You Can't See Coming

Here's where it gets more complicated. North Korean advanced persistent threat (APT) groups are now using AI tools to enhance IT worker scams — creating convincing fake personas, generating polished code samples, and slipping past hiring filters at companies that believe they're onboarding legitimate contractors.

The reason this matters for AI governance isn't just the nation-state angle. It's the underlying lesson: organizations can no longer reliably distinguish between a trusted insider and a threat actor operating under the radar. When an employee — or someone posing as one — uses an unauthorized AI tool to process sensitive data, the blast radius of that action is invisible until it isn't.

This is the same dynamic we see in shadow IT more broadly. As we covered in our breakdown of the shadow IT crisis and department heads bypassing security controls, the problem almost never starts with bad intent. It starts with convenience. Someone finds a faster way to do their job, skips the approval process, and creates a risk the security team doesn't know exists.

What Good AI Governance Actually Looks Like

You don't need a 40-page AI policy to start. You need a few concrete controls:

1. Know what tools are in use. Conduct a simple audit. Ask department heads to list every AI tool their team uses regularly — including browser extensions, writing assistants, and anything accessed through a personal account. The answers will surprise you. A minimal inventory sketch follows this list if you want to automate part of the check.

2. Classify your data before your employees do. If your staff doesn't know which data categories are sensitive, they can't make good decisions about what to paste into an AI tool. Create a one-page data classification guide: public, internal, confidential, restricted.

3. Establish an approved tools list. This doesn't mean banning everything. It means designating which tools have been reviewed, which are prohibited for sensitive data, and which require a security review before use. Make the approved path easier than the unapproved one. The second sketch after this list shows one way to encode the approved list, together with the step 2 classification guide, as a policy employees and tooling can check.

4. Monitor for data exfiltration, not just intrusion. Most small business security setups are oriented toward keeping attackers out. AI data leakage flows the other direction — outbound, through legitimate-looking traffic. Your monitoring needs to account for both. The third sketch after this list shows a simple starting point built on outbound DNS logs.

5. Revisit your access controls. Employees who can access everything can leak everything. Least-privilege access limits the blast radius when someone makes a bad call. Our post on preventing employee privilege escalation and access control covers the practical steps.
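
To make step 1 concrete: here is a minimal sketch, in Python, of one way to inventory browser extensions on a single Windows machine by reading Chrome's extension manifests. The profile path and manifest layout reflect Chrome's defaults, but treat the details as assumptions to adapt for your own OS and browsers; the answers from department heads remain the primary source.

```python
# Minimal sketch: inventory Chrome extensions installed on one Windows
# machine by reading each extension's manifest. The profile path below is
# Chrome's default and is an assumption; adapt for macOS/Linux and other
# browsers. Names that print as "__MSG_..." are locale placeholders that
# live in the extension's _locales folder.
import json
import os
from pathlib import Path

ext_root = Path(os.environ["LOCALAPPDATA"]) / "Google/Chrome/User Data/Default/Extensions"

for ext_dir in sorted(ext_root.iterdir()):
    if not ext_dir.is_dir():
        continue
    # Each extension folder holds one subfolder per installed version.
    for version_dir in ext_dir.iterdir():
        manifest = version_dir / "manifest.json"
        if manifest.is_file():
            data = json.loads(manifest.read_text(encoding="utf-8"))
            print(f"{ext_dir.name}  {data.get('version', '?'):>8}  {data.get('name', '?')}")
            break
```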
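
For steps 2 and 3, the classification guide and the approved tools list are more useful when they can be checked mechanically. Here is a minimal sketch with invented tool names and illustrative rules only; a real version would encode your own review decisions.

```python
# Minimal sketch: a data classification guide and approved-tools list as
# checkable policy. Tiers, tool names, and rules here are illustrative only.

CLASSIFICATION = {
    "public":       "marketing copy, published prices; fine for any approved tool",
    "internal":     "process docs, org charts; approved tools only",
    "confidential": "customer records, financials; approved tools with a data agreement only",
    "restricted":   "CUI, health and payment data; no AI tools without security review",
}

# Highest classification each tool is cleared to handle; unlisted = unapproved.
APPROVED_TOOLS = {
    "company-licensed-assistant": "confidential",  # hypothetical enterprise deployment
    "free-public-chatbot": "public",               # personal account, no data agreement
}

TIER_ORDER = ["public", "internal", "confidential", "restricted"]

def allowed(tool: str, data_tier: str) -> bool:
    """True if the tool is cleared for data at this classification tier."""
    ceiling = APPROVED_TOOLS.get(tool)
    if ceiling is None:
        return False  # unlisted tools are unapproved by default
    return TIER_ORDER.index(data_tier) <= TIER_ORDER.index(ceiling)

print(allowed("free-public-chatbot", "confidential"))    # False
print(allowed("company-licensed-assistant", "internal")) # True
```

Note the default: a tool nobody has reviewed is treated as unapproved, which is exactly the posture that closes the shadow-IT gap.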
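
And for step 4, even without a dedicated data-loss-prevention product, outbound DNS or proxy logs are a practical starting point. Here is a minimal sketch, assuming a simplified log export with one client/domain pair per line; real formats vary by firewall and resolver, so adapt the parsing to your own appliance.

```python
# Minimal sketch: flag DNS queries to known AI-tool domains in an exported
# resolver or firewall log. The log format (one "client domain" pair per
# line, in a file named dns_queries.log) is an assumption for illustration.
from collections import Counter

# Real domains for popular AI tools; extend with whatever your audit finds.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

hits = Counter()
with open("dns_queries.log", encoding="utf-8") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 2:
            continue
        client, domain = parts[0], parts[1].lower().rstrip(".")
        if domain in AI_DOMAINS or any(domain.endswith("." + d) for d in AI_DOMAINS):
            hits[(client, domain)] += 1

for (client, domain), count in hits.most_common():
    print(f"{client} -> {domain}: {count} queries")
```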

The Compliance Clock Is Ticking

For government contractors pursuing CMMC Level 1 certification, unsanctioned AI tool usage isn't just a security risk — it's a documentation problem. You need to demonstrate that CUI (Controlled Unclassified Information) is handled in accordance with defined practices. If employees are submitting contract-related data to public LLMs, that documentation falls apart.

For Ohio businesses, SB 220 safe harbor protection requires implementing a recognized cybersecurity framework. AI governance is increasingly considered part of that baseline. The safe harbor doesn't protect you if you haven't taken reasonable steps — and "we didn't know employees were doing this" is exactly the kind of gap auditors look for.

Take Action

The gap between "we have a policy" and "we have visibility" is where breaches live. Proactive scanning catches misconfigurations, unauthorized access paths, and exposure risks before an attacker — or an accidental data submission — does.

Oscar Six Security's Radar gives small businesses and MSPs affordable, continuous vulnerability scanning at $99 per scan. It's designed for organizations that need real answers, not enterprise-priced complexity.

See how Radar works →

Focus Forward. We've Got Your Six.