# AI-Driven Code Surge Is Forcing a Rethink of AppSec


As artificial intelligence tools flood development teams with auto-generated code, application security teams face an unprecedented challenge: the sheer volume and velocity of new code outpace traditional security practices. According to Black Duck CEO Jason Schmitt in a recent conversation with Dark Reading, the industry's approach to AppSec must fundamentally evolve to keep pace with AI-driven development.


## The Transformation: AI Meets Development


The landscape of software development has shifted dramatically over the past 18 months. AI code generation tools—from GitHub Copilot and ChatGPT to specialized development assistants—have become mainstream in many enterprises. Developers use these tools to accelerate coding tasks, reduce boilerplate, and speed up project timelines.


But this acceleration comes with a hidden cost: while AI increases development speed, it simultaneously multiplies the surface area for security vulnerabilities. Developers who once carefully constructed each function now accept AI-generated code blocks with minimal review, trusting tools that operate without security-first training or guardrails.


The problem is compounded by scale. Where a team might have produced 50,000 lines of code annually, AI-assisted teams now generate 500,000 or more—a tenfold increase in code volume with security practices largely unchanged.


## The Threat: Volume Over Vigilance


The primary challenge isn't that AI generates *inherently* insecure code—it's that the volume of generated code overwhelms traditional security reviews. Key vulnerabilities emerge across multiple dimensions:


### Code Quality and Security Blindspots

  • Insecure patterns: AI models trained on open-source code absorb the same insecure patterns found in the wild. An AI tool trained on millions of GitHub repositories learns not just best practices, but common anti-patterns as well.
  • Supply chain dependencies: AI-generated code often imports libraries and dependencies without scrutiny. A function scaffolded by AI might introduce transitive dependencies with known vulnerabilities.
  • Outdated practices: Models trained on code written before 2023-2024 may recommend deprecated security functions or cryptographic algorithms (MD5, SHA1) that are no longer considered secure.

### The Review Problem

Traditional AppSec workflows assume humans review all code before deployment. With AI-driven code volumes:

  • Code review burnout: Security teams reviewing 10x more code experience decision fatigue, missing vulnerabilities in the sheer noise.
  • False confidence: Developers may assume AI-generated code is "vetted" or "safe," reducing personal code review rigor.
  • Coverage gaps: Automated scanning tools (SAST/DAST) already struggle with coverage; they become even less effective against exponentially larger codebases.

## Background and Context: The AppSec Evolution


Application security has historically followed a progression:


| Era | Approach | Tooling | Limitation |
|-----|----------|---------|------------|
| Pre-2010 | Manual penetration testing | Code review, pen tests | Slow, expensive, late-stage |
| 2010-2018 | Shift Left | SAST scanners, SCA tools | Still reactive; misses context |
| 2019-2023 | DevSecOps | CI/CD integration, automation | Struggles with high false positives |
| 2024+ | AI-driven development + legacy AppSec | Pre-AI tooling + AI code | Mismatch in scale and speed |


The current crisis is a capabilities mismatch: development acceleration via AI has no counterpart in AppSec acceleration. Security processes designed for 50 PRs per day now face 500.


Black Duck's Jason Schmitt emphasizes that this gap exposes a fundamental misunderstanding: the tool accelerating code generation does not inherently understand or enforce security requirements. AI code generators optimize for functionality and speed, not for security architecture or compliance.


## Technical Details: Where Vulnerabilities Hide


AI-generated code introduces vulnerabilities across several categories:


### 1. Injection Vulnerabilities

AI models struggle with context about data sources. Generated code might concatenate user input into SQL queries or system commands without proper sanitization—a classic vulnerability made more frequent by scale.
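
To make the risk concrete, here is a minimal sketch in Python (the `users` table and its columns are hypothetical) contrasting the concatenated query an assistant will often emit with the parameterized form a reviewer should insist on:

```python
import sqlite3


def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Typical generated pattern: user input interpolated straight into SQL.
    # A username like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```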


### 2. Cryptographic Weaknesses

Older training data includes deprecated cryptographic functions. An AI tool might suggest SHA1 for password hashing or DES for encryption, both long obsolete but present in historical codebases.
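
A hedged illustration of the gap, assuming the third-party `bcrypt` package is acceptable (any modern, salted, deliberately slow scheme such as Argon2 would serve equally well):

```python
import hashlib

import bcrypt  # third-party: pip install bcrypt


def hash_password_weak(password: str) -> str:
    # The pattern still present in older training data: fast, unsalted SHA-1.
    # Unsuitable for passwords; easily attacked with precomputed tables and GPUs.
    return hashlib.sha1(password.encode()).hexdigest()


def hash_password_strong(password: str) -> bytes:
    # bcrypt generates a per-password salt and is intentionally slow.
    return bcrypt.hashpw(password.encode(), bcrypt.gensalt())


def verify_password(password: str, stored_hash: bytes) -> bool:
    return bcrypt.checkpw(password.encode(), stored_hash)
```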


### 3. Authentication and Authorization Bypasses

Default credential handling, hardcoded secrets, or incomplete permission checks appear frequently in AI-generated code. The model has no stake in your authentication architecture and simply generates the fastest working solution.
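
As an illustration (the function, token variable, and role names below are hypothetical), the shortcut to watch for next to a more defensible shape:

```python
import hmac
import os

API_TOKEN = "sk-live-123456"  # hardcoded secret: ends up in source control and logs


def delete_report_insecure(report_id: str, token: str) -> None:
    if token == API_TOKEN:  # no role check, and a non-constant-time comparison
        ...  # deletes the report for any caller who has the shared token


def delete_report_safe(report_id: str, token: str, caller_roles: set[str]) -> None:
    expected = os.environ["REPORT_API_TOKEN"]  # secret injected at runtime
    if not hmac.compare_digest(token, expected):
        raise PermissionError("invalid token")
    if "report_admin" not in caller_roles:
        raise PermissionError("caller lacks report_admin role")
    ...  # perform the deletion
```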


### 4. Dependency Chain Risks

When AI suggests a library or package, it often picks popular (well-trained-on) options without checking for known vulnerabilities. A generated dependency might itself contain a critical CVE.
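
One way to operationalize that check is to query a public vulnerability database before an AI-suggested package is pinned. The sketch below assumes the OSV.dev query API and a PyPI package; existing SCA tooling covers the same ground, and the point is simply that the check runs before the dependency lands:

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(package: str, version: str) -> list[str]:
    """Return OSV advisory IDs affecting a specific PyPI package version."""
    payload = json.dumps({
        "package": {"name": package, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]


if __name__ == "__main__":
    # Example: vet an AI-suggested pin before adding it to requirements.txt.
    print(known_vulnerabilities("requests", "2.19.1"))
```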


### 5. Information Disclosure

Error handling in AI-generated code often leaks sensitive information—stack traces, database connection strings, API keys—in exception messages that make it to logs or user-facing responses.
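
A small, framework-agnostic sketch of the failure mode: the leaky handler echoes the exception to the caller, while the safer one logs the detail internally and returns only an opaque incident reference:

```python
import logging
import uuid

logger = logging.getLogger("app")


def handle_request_leaky(process):
    try:
        return {"status": "ok", "result": process()}
    except Exception as exc:
        # Leaks internals: the exception text may carry connection strings,
        # file paths, or SQL fragments, and it goes straight back to the caller.
        return {"status": "error", "detail": str(exc)}


def handle_request_safe(process):
    try:
        return {"status": "ok", "result": process()}
    except Exception:
        incident_id = uuid.uuid4().hex
        # The full traceback stays in internal logs, keyed by an opaque ID.
        logger.exception("request failed, incident_id=%s", incident_id)
        return {"status": "error", "incident_id": incident_id}
```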


## Implications for Organizations


The AppSec implications ripple across enterprise risk management:


Increased Breach Surface: More code → more vulnerabilities → higher probability of exploitable flaws reaching production.


Compliance Risk: Frameworks like PCI-DSS, HIPAA, and SOC 2 mandate secure development practices. AI-generated code that bypasses security reviews may violate compliance requirements, exposing organizations to audit failures and penalties.


Supply Chain Exposure: If your organization publishes libraries or frameworks built with AI-generated code, vulnerabilities propagate downstream to your customers and their customers—a multiplier effect.


Regulatory Uncertainty: Regulators haven't caught up. Liability for vulnerabilities in AI-generated code remains unclear: Is the developer liable for accepting code without review? Is the AI vendor liable for generating insecure patterns?


False Confidence: The biggest risk may be organizational complacency. "An AI tool generated this, so it must be safe" is a dangerous assumption.


## Recommendations: A Path Forward


Schmitt and security experts across the industry agree on several critical steps:


### 1. Rebuild Security Tooling

  • Invest in AI-aware scanning: Next-generation SAST/SCA tools must understand patterns specific to AI-generated code.
  • Automated dependency analysis: Real-time scanning of AI-suggested libraries before they're imported.
  • Code provenance tracking: Know which code was human-written vs. AI-generated, and audit the latter more aggressively (see the sketch after this list).
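
There is no standard marker for provenance yet. One lightweight option is a commit-message trailer the team agrees on (the `AI-Assisted:` trailer below is a hypothetical convention, not an industry standard), which a small script can use to route commits into a stricter review queue:

```python
import subprocess

TRAILER = "AI-Assisted: true"  # hypothetical team convention


def ai_assisted_commits(rev_range: str = "origin/main..HEAD") -> list[str]:
    """Return commit SHAs in the range whose messages carry the trailer."""
    shas = subprocess.run(
        ["git", "rev-list", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    flagged = []
    for sha in shas:
        message = subprocess.run(
            ["git", "show", "-s", "--format=%B", sha],
            capture_output=True, text=True, check=True,
        ).stdout
        if TRAILER in message:
            flagged.append(sha)
    return flagged


# CI can require an extra security review for anything this function returns.
```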

### 2. Refocus Code Review

  • Risk-based review: Don't review all AI code equally. High-risk functions (authentication, crypto, data handling) require human oversight. Low-risk utilities can proceed faster (see the triage sketch after this list).
  • Security-first prompting: Teach developers to ask AI tools for secure code: "Generate a function to hash passwords using bcrypt with salt."
  • Post-generation security audit: Treat AI code as a first draft, not a final implementation.
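
A triage rule can be as simple as matching changed file paths against a risk map; the directory patterns below are placeholders for wherever a given repository keeps its authentication, crypto, and data-handling code:

```python
from fnmatch import fnmatch

# Hypothetical risk map: adjust the patterns to the repository's actual layout.
HIGH_RISK_PATTERNS = ["*/auth/*", "*/crypto/*", "*/payments/*", "*migration*"]


def review_tier(changed_paths: list[str]) -> str:
    """Route a change to human review if it touches a sensitive area."""
    for path in changed_paths:
        if any(fnmatch(path, pattern) for pattern in HIGH_RISK_PATTERNS):
            return "human-required"
    return "fast-track"


print(review_tier(["src/auth/session.py", "docs/README.md"]))  # human-required
print(review_tier(["src/utils/format_dates.py"]))              # fast-track
```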

### 3. Update AppSec Practices

  • Security gates in CI/CD: Fail deployments that contain insecure patterns, hardcoded secrets, or unvetted dependencies (see the gate sketch after this list).
  • Threat modeling for AI-generated features: Before accepting generated code, model threat scenarios specific to that code's role.
  • Continuous monitoring: Don't assume security testing ends at deployment. Monitor generated code in production for exploitation patterns.
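
As one concrete shape for such a gate, a pre-merge script can fail the pipeline when obvious secret patterns appear in newly added lines. This is a minimal sketch with illustrative regexes; dedicated secret scanners ship far broader rule sets:

```python
import re
import subprocess
import sys

# Illustrative patterns only; real scanners cover many more credential formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # embedded private keys
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]


def added_lines(rev_range: str = "origin/main...HEAD") -> list[str]:
    """Collect lines added by the change under review."""
    diff = subprocess.run(
        ["git", "diff", "--unified=0", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        line[1:] for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]


def main() -> int:
    hits = [line for line in added_lines()
            if any(p.search(line) for p in SECRET_PATTERNS)]
    if hits:
        print(f"Security gate failed: {len(hits)} suspected hardcoded secret(s) found.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```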

### 4. Governance and Training

  • Acceptable use policies: Define which AI tools developers can use, with explicit security guardrails.
  • Developer security training: Teach developers to recognize common vulnerabilities in AI-generated output and how to remediate them.
  • Security champions for each team: Embed AppSec expertise in development teams to review AI suggestions.

## Conclusion


The AI code surge is not a problem to be solved with a single tool or policy—it's a systemic shift requiring fundamental changes to how organizations develop, review, and secure software. As Jason Schmitt articulates, security practices must evolve at the same pace as development practices.


Organizations that treat AI-generated code as "good enough" will eventually become breach statistics. Those that embed security into the AI-assisted development pipeline—through tooling, training, and governance—will capture the productivity benefits of AI while maintaining the security posture their customers and regulators demand.


The window for adaptation is narrow. The technology is already deployed in thousands of enterprises. The question is not whether AppSec will evolve to meet this challenge, but whether it will evolve fast enough.