Top 10 Security Mistakes in AI-Generated Code (Cursor, Claude, ChatGPT)

By Vince • Published January 31, 2025 • 12 min read

Vibe coding has revolutionized how quickly we can build apps. Tools like Cursor, Claude, ChatGPT, Bolt, and Replit Agent let anyone create functional applications in hours instead of months. But there's a critical problem: AI doesn't think about security the way attackers do.

After reviewing hundreds of AI-generated codebases, we've identified the most common security vulnerabilities that appear again and again. Here are the top 10 security mistakes in AI-generated code and how to fix them.

1. Exposed API Keys and Secrets

This is the #1 issue we find. AI coding tools often generate code with API keys, database credentials, and secrets hardcoded directly in the source code.

Real example: We found a Cursor-generated app with Stripe secret keys committed to a public GitHub repo. The developer didn't realize until fraudulent charges appeared.

What AI does wrong:

  • Puts API keys directly in frontend JavaScript
  • Commits .env files to version control
  • Stores secrets in plain text config files
  • Uses the same keys for development and production

How to fix it: Always use environment variables, never commit secrets, and use a secrets manager for production. Add .env to your .gitignore immediately.
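The environment-variable approach can be sketched in a few lines. This is a minimal example, not a full secrets-management setup; the variable name in the usage comment is hypothetical:

```javascript
// Minimal sketch: load secrets from environment variables instead of
// hardcoding them. Throwing at startup means a missing secret fails
// fast and loudly instead of surfacing later as a broken API call.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Hypothetical usage (the key name is an example):
// const stripeKey = requireEnv('STRIPE_SECRET_KEY');
```

In production, the same pattern works with a secrets manager injecting the environment variables at deploy time.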

2. SQL Injection Vulnerabilities

AI-generated code frequently constructs SQL queries by concatenating user input directly into query strings, creating classic SQL injection vulnerabilities.

// BAD: AI often generates this
const query = `SELECT * FROM users WHERE email = '${userInput}'`;

// GOOD: Use parameterized queries
// (placeholder syntax varies by driver: ? for MySQL/SQLite, $1 for Postgres)
const query = 'SELECT * FROM users WHERE email = ?';
db.query(query, [userInput]);

SQL injection remains one of the most dangerous vulnerabilities. An attacker can extract your entire database, modify data, or even execute commands on your server.

3. Missing Authentication on API Routes

AI tools are great at building features but often forget to protect them. We regularly find admin endpoints, user data routes, and sensitive operations completely unprotected.

Common patterns we see:

  • DELETE endpoints without auth checks
  • User profile endpoints that don't verify ownership
  • Admin functions accessible to any logged-in user
  • API routes that work without any authentication
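A reusable auth middleware closes all four gaps above. This is a sketch of the Express middleware shape; the token-verification function is a placeholder you'd back with your real auth provider (JWT library, session store, etc.):

```javascript
// Sketch of an Express-style auth middleware. verifyToken is supplied
// by the caller so the check itself stays auth-provider-agnostic.
function makeRequireAuth(verifyToken) {
  return function requireAuth(req, res, next) {
    const header = req.headers['authorization'] || '';
    const token = header.startsWith('Bearer ') ? header.slice(7) : null;
    const user = token ? verifyToken(token) : null;
    if (!user) {
      // No valid token: reject before the route handler ever runs.
      return res.status(401).json({ error: 'Unauthorized' });
    }
    req.user = user; // downstream handlers can trust req.user
    next();
  };
}

// Usage (hypothetical route names):
// const requireAuth = makeRequireAuth(myJwtVerify);
// app.delete('/api/posts/:id', requireAuth, deletePostHandler);
```

Any route registered without the middleware is public by default, so it's worth auditing every route definition, not just the obviously sensitive ones.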

4. Insecure Direct Object References (IDOR)

An IDOR occurs when your app lets users access any record just by changing an ID in the URL. AI-generated code almost never checks whether the current user should have access to the requested resource.

// BAD: Anyone can access any user's data
app.get('/api/user/:id', (req, res) => {
    return db.getUser(req.params.id);
});

// GOOD: Verify the user owns this resource
// (req.params.id is always a string, so coerce before comparing)
app.get('/api/user/:id', authMiddleware, (req, res) => {
    if (req.params.id !== String(req.user.id)) {
        return res.status(403).json({ error: 'Forbidden' });
    }
    return db.getUser(req.params.id);
});

5. Cross-Site Scripting (XSS)

AI-generated frontend code often renders user input without proper sanitization, allowing attackers to inject malicious scripts.

Where we find XSS:

  • Comment sections and user profiles
  • Search results pages
  • Error messages that reflect user input
  • Anywhere dangerouslySetInnerHTML or innerHTML is used
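When you do have to build HTML strings from user input, escape the handful of characters that matter. This is a minimal sketch; frameworks like React escape by default, and if you must render user-supplied HTML, a maintained sanitizer such as DOMPurify is the safer choice:

```javascript
// Escape the five HTML-significant characters in untrusted text.
// The ampersand must be replaced first, or the later entities would
// be double-escaped.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// escapeHtml('<script>alert(1)</script>')
// → '&lt;script&gt;alert(1)&lt;/script&gt;'
```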

6. Weak Password Requirements

ChatGPT and Cursor often generate authentication code with minimal password requirements, or skip password hashing entirely.

Issues we find:

  • Passwords stored in plain text
  • MD5 or SHA1 used instead of bcrypt/argon2
  • No minimum password length
  • No rate limiting on login attempts

7. Missing HTTPS and Insecure Cookies

AI-generated code often works fine locally over HTTP but doesn't properly configure secure connections for production.

What's often missing:

  • HTTPS enforcement
  • Secure flag on cookies
  • HttpOnly flag to prevent XSS cookie theft
  • SameSite attribute for CSRF protection
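With Express you'd normally set these as options on res.cookie; the sketch below builds the raw Set-Cookie header value instead, just to show exactly what each flag contributes:

```javascript
// Build a Set-Cookie header value with the protective flags listed
// above. Names and values here are illustrative.
function buildSessionCookie(name, value) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    'Secure',       // only sent over HTTPS
    'HttpOnly',     // invisible to JavaScript, so XSS can't steal it
    'SameSite=Lax', // withheld on most cross-site requests (CSRF defense)
    'Path=/',
  ].join('; ');
}

// buildSessionCookie('session', 'abc123')
// → 'session=abc123; Secure; HttpOnly; SameSite=Lax; Path=/'
```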

8. Verbose Error Messages

AI loves helpful error messages. Unfortunately, detailed errors in production give attackers a roadmap to your system.

We've seen production errors expose database table names, file paths, stack traces with line numbers, and even partial SQL queries.
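A centralized error handler fixes this in one place. Here's a sketch of the Express-style pattern (a four-argument error middleware), gated on NODE_ENV so detailed errors stay available in development:

```javascript
// Express-style error handler: log everything server-side, return a
// generic message to the client in production.
function errorHandler(err, req, res, next) {
  console.error(err); // full stack trace stays in server logs only
  const detail = process.env.NODE_ENV === 'production'
    ? 'Internal server error'
    : err.message;
  res.status(500).json({ error: detail });
}

// Registered last, after all routes:
// app.use(errorHandler);
```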

9. No Input Validation

AI-generated code often trusts all input: file uploads without type checking, forms without length limits, APIs that accept any data structure.

Always validate:

  • File types and sizes for uploads
  • Email formats and string lengths
  • Numeric ranges and data types
  • Required fields and data structure
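The checklist above can be sketched as a plain validation function. Libraries like zod or joi do this declaratively and are worth using in practice; the field names and limits below are illustrative:

```javascript
// Validate a hypothetical signup payload: every field gets a type,
// format, length, or range check before the data is used. Returns a
// list of problems; an empty list means the payload is acceptable.
function validateSignup(body) {
  const errors = [];
  if (typeof body.email !== 'string' || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) {
    errors.push('email must be a valid address');
  }
  if (typeof body.name !== 'string' || body.name.length < 1 || body.name.length > 100) {
    errors.push('name must be 1-100 characters');
  }
  if (!Number.isInteger(body.age) || body.age < 13 || body.age > 120) {
    errors.push('age must be an integer between 13 and 120');
  }
  return errors;
}
```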

10. Outdated Dependencies with Known Vulnerabilities

AI tools often suggest package versions from their training data, which may be outdated. These old packages frequently have known security vulnerabilities.

Solution: Run npm audit or yarn audit regularly, and use tools like Dependabot or Snyk to monitor for vulnerable dependencies.

Why AI Makes These Mistakes

AI coding tools optimize for making code work, not for security. They're trained on public code (including insecure examples), don't understand your specific threat model, and can't anticipate how attackers might abuse your application.

This is why professional code review is essential for any AI-generated application you plan to ship to real users.

Is Your AI-Generated App Secure?

Our security audits specifically look for these vulnerabilities in code from Cursor, Claude, ChatGPT, and other AI tools. Get a professional review before attackers find these issues first.

Get a Security Audit

Next Steps

If you've built an app using AI coding tools:

  1. Search your codebase for hardcoded secrets immediately
  2. Review all API routes for proper authentication
  3. Run a dependency audit
  4. Consider a professional security review before launch

Vibe coding is amazing for getting ideas off the ground quickly. But before you ship to production, make sure a real engineer has checked for these common security mistakes.

Written by Vince

Lead software engineer with 10+ years of experience at a Fortune 20 company. After seeing hundreds of AI-generated codebases with the same preventable mistakes, he started VibeCodeBlue to help vibe coders ship secure, production-ready apps. He's personally reviewed 500+ AI-generated projects across Cursor, ChatGPT, Claude, and Bolt.