Vibe Coding: Why AI-Generated Code Is a Security Bomb

Your Client's Employee Just Shipped an App. Nobody Reviewed the Code.

It starts innocently enough. A motivated employee — maybe the owner's son, maybe someone in ops who's "good with computers" — discovers a tool like Lovable, Cursor, or Replit. Within an afternoon, they've built something that looks like a real application: a client portal, an internal dashboard, a form that writes to a database. They're proud of it. Leadership is impressed. Nobody calls IT.

This is vibe coding. And it's already inside your clients' networks.

MSPs across the country are running into this exact scenario. One recent discussion in a managed services forum described a client whose son wanted to replace vetted security tools with apps he'd built using AI coding assistants — no security review, no testing, no oversight. It's not a hypothetical. It's Tuesday.

What Is Vibe Coding, and Why Should You Care?

Vibe coding refers to the practice of using AI tools to generate functional applications through natural language prompts, often by people with little to no formal software development background. The AI writes the code. The human ships it.

The appeal is obvious. The risk is severe.

Researchers recently examined a real-world application showcased by Lovable — an AI app-building platform — and found 16 exploitable vulnerabilities in a single app that had over 18,000 users. Broken authentication. Exposed API keys. Insecure data handling. The app looked polished. The code underneath was a liability waiting to be triggered.

That's not a one-off. That's the pattern.
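To make the pattern concrete, here's a minimal, hypothetical sketch (Python, stdlib sqlite3 only) of two flaws of exactly the kind found in that app: a hardcoded secret and a SQL query built by string interpolation. All names and values are invented for illustration; this is not code from the audited application.

```python
import sqlite3

# Flaw #1: a secret hardcoded in source, where anyone with the code can read it.
API_KEY = "sk-live-EXAMPLE-DO-NOT-HARDCODE"  # illustrative placeholder value

def find_user_vulnerable(db, username):
    # Flaw #2: user input pasted straight into the SQL string -- injectable.
    return db.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(db, username):
    # The fix: a parameterized query; the driver handles escaping.
    return db.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# A classic injection payload dumps every row from the vulnerable version.
payload = "x' OR '1'='1"
print(len(find_user_vulnerable(db, payload)))  # 2 -- all users leaked
print(len(find_user_safe(db, payload)))        # 0 -- no such user, as intended
```

Both functions "work" for normal input, which is exactly why a visual check or a quick demo doesn't catch the difference.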

The AI Tools Themselves Aren't Safe Either

Here's where it gets harder to dismiss: the problem isn't just that non-technical employees are building apps. The problem is that even the best AI coding tools have documented security gaps.

Recent reporting from Security News revealed that Claude Code — Anthropic's enterprise-grade AI coding assistant — contained exploitable flaws that put developer machines at risk. These weren't theoretical edge cases. They were real attack surfaces in a tool used by professional developers who should know better than to deploy code without review.

A follow-up piece from the same outlet noted that while Claude Code shows promise, it is far from perfect — and that the security limitations of AI coding tools are being systematically understated relative to how aggressively they're being marketed and adopted.

When enterprise tools used by experienced developers carry these risks, what does that mean for the AI-generated app your client's office manager just deployed to handle customer intake forms?

LLMs Have a Security Blind Spot Baked In

The issue runs deeper than any single tool. According to Schneier on Security, research has confirmed that large language models generate predictable outputs that appear random but follow exploitable patterns. The specific finding involves password generation, but the implication extends directly to code.

AI-generated code may look functional and even sophisticated on the surface while embedding the same kinds of predictable, insecure patterns that attackers have learned to target. The code passes a visual review. It works in testing. And it fails catastrophically when someone who knows what to look for decides to probe it.

This is why "it works" is not the same as "it's secure."

The Supply Chain Risk Nobody's Talking About

Vibe coding doesn't just create vulnerable apps. It creates a new attack surface through the dependencies those apps pull in.

AI coding tools routinely suggest third-party libraries, packages, and repositories to make generated code functional. Most users accept these suggestions without review. Attackers know this.

According to The Hacker News, Microsoft has warned developers about fake Next.js repositories being used to deliver in-memory malware — malicious packages disguised as legitimate development resources. Professional developers are being targeted through this vector. Non-technical employees using vibe coding tools are even more exposed, because they lack the instinct to question what the AI recommends.

One poisoned dependency. One AI suggestion accepted without review. That's the entire attack chain.
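One cheap review an MSP could automate is comparing a project's declared dependencies against well-known package names to catch typosquat-style lookalikes. The sketch below is illustrative only: the POPULAR list and the sample package.json are invented, and a real check would use registry data plus tools like npm audit rather than a five-name list.

```python
import difflib
import json

# Tiny illustrative sample of popular package names -- not real registry data.
POPULAR = {"next", "react", "express", "lodash", "axios"}

def suspicious_deps(package_json_text, cutoff=0.8):
    """Flag dependencies whose names are suspiciously close to popular ones."""
    deps = json.loads(package_json_text).get("dependencies", {})
    flagged = []
    for name in deps:
        if name in POPULAR:
            continue  # exact match to a well-known package: fine
        close = difflib.get_close_matches(name, POPULAR, n=1, cutoff=cutoff)
        if close:
            flagged.append((name, close[0]))  # (declared name, lookalike)
    return flagged

sample = '{"dependencies": {"react": "^18.0.0", "loadash": "1.0.0", "expresss": "4.0.0"}}'
print(suspicious_deps(sample))  # [('loadash', 'lodash'), ('expresss', 'express')]
```

A non-technical employee accepting AI suggestions will never run a check like this, which is why the review has to live with IT, not with the person prompting the tool.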

What MSPs and IT Admins Should Do Right Now

This isn't a problem you can wait to address. Here's where to start:

1. Have the conversation before the app gets deployed. Build a simple policy: any application that touches company data, customer information, or internal systems requires IT review before going live. Make it easy to request a review, not just a rule that gets ignored.

2. Conduct an application inventory. You may already have vibe-coded apps running in your environment and not know it. Ask. Look at what's running on company infrastructure. Shadow development is called shadow development for a reason.

3. Treat AI-generated code like untrusted code. Because it is. Require the same review process for AI-generated applications that you'd require for any third-party software. That means checking for exposed credentials, insecure authentication, unvalidated inputs, and risky dependencies.

4. Educate, don't just restrict. Employees are turning to vibe coding because they're trying to solve real problems. If you only say no, they'll find a workaround. Help them understand the risk, and give them a path to get what they need safely.

5. Scan what's already there. Policies only protect you going forward. The liability from what's already deployed is the more urgent problem. External scanning can surface vulnerabilities in running applications before an attacker finds them first.
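The credential check in step 3 can be partially automated. Below is a minimal Python sketch: the regexes cover only a few common secret shapes and are examples, not a complete ruleset (dedicated scanners such as gitleaks or trufflehog do this properly).

```python
import re

# Illustrative secret-shaped patterns -- a real ruleset is much larger.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return the sorted names of secret patterns found in the given source text."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))

# Hypothetical snippet of the kind an AI tool might emit into frontend code.
snippet = 'const config = { apiKey: "sk-live-0123456789abcdef0123" };'
print(scan_text(snippet))  # ['generic_api_key']
```

Run something like this over the source of every app surfaced by the inventory in step 2; anything it flags in client-side or committed code is a finding, not a false alarm to explain away.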

The Liability Clock Is Already Running

For Ohio businesses, the stakes include SB 220 safe harbor protections, which a business can claim only if it can demonstrate a documented cybersecurity program. For government contractors, CMMC Level 1 compliance doesn't leave room for unreviewed applications handling controlled data. And for any small business, a breach traced back to an AI-generated app with known vulnerability patterns is going to be a difficult conversation with customers, insurers, and regulators.

The vibe coding wave isn't coming. It's already inside your clients' networks. The question is whether you find the vulnerabilities first, or someone else does.


Take Action: Find the Vulnerabilities Before the Attackers Do

Proactive scanning is how you get ahead of this. Waiting for an incident report is not a security strategy.

Oscar Six Security's Radar gives small businesses and their IT teams an affordable way to scan for application vulnerabilities, exposed assets, and security gaps — before they become breaches. At $99 per scan, it's accessible for the businesses that need it most and practical for MSPs managing multiple clients.

If your clients are building or running AI-generated applications, now is the time to find out what's actually in them.

See how Radar works →

Focus Forward. We've Got Your Six.