# AI-Driven Code Surge Is Forcing a Rethink of AppSec
As artificial intelligence tools flood development teams with auto-generated code, application security teams face an unprecedented challenge: the sheer volume and velocity of new code outpace traditional security practices. According to Black Duck CEO Jason Schmitt in a recent conversation with Dark Reading, the industry's approach to AppSec must fundamentally evolve to keep pace with AI-driven development.
## The Transformation: AI Meets Development
The landscape of software development has shifted dramatically over the past 18 months. AI code generation tools—from GitHub Copilot and ChatGPT to specialized development assistants—have become mainstream in many enterprises. Developers use these tools to accelerate coding tasks, reduce boilerplate, and speed up project timelines.
But this acceleration comes with a hidden cost: while AI increases development speed, it simultaneously multiplies the surface area for security vulnerabilities. Developers who once carefully constructed each function now accept AI-generated code blocks with minimal review, trusting tools that operate without security-first training or guardrails.
The problem is compounded by scale. Where a team might have produced 50,000 lines of code annually, AI-assisted teams now generate 500,000 or more—a tenfold increase in code volume with security practices largely unchanged.
## The Threat: Volume Over Vigilance
The primary challenge isn't that AI generates *inherently* insecure code—it's that the volume of generated code overwhelms traditional security reviews. Key vulnerabilities emerge across multiple dimensions:
### Code Quality and Security Blindspots
AI assistants reproduce patterns from their training data, including insecure ones, and they lack the project-specific context—threat model, data flows, trust boundaries—needed to judge whether a given pattern is safe in a particular codebase. The "Technical Details" section below catalogs where these blindspots most often surface.
### The Review Problem
Traditional AppSec workflows assume humans review all code before deployment. With AI-driven code volumes, that assumption breaks down: review queues grow faster than reviewers can clear them, pull requests get approved after only cursory inspection, and automated scanners surface more findings than teams can triage.
## Background and Context: The AppSec Evolution
Application security has historically followed a progression:
| Era | Approach | Tooling | Limitation |
|-----|----------|---------|-----------|
| Pre-2010 | Manual penetration testing | Code review, pen tests | Slow, expensive, late-stage |
| 2010-2018 | Shift Left | SAST scanners, SCA tools | Still reactive; misses context |
| 2019-2023 | DevSecOps | CI/CD integration, automation | Struggles with high false positives |
| 2024+ | AI-driven development + legacy AppSec | Pre-AI tooling + AI code | Mismatch in scale and speed |
The current crisis is a capabilities mismatch: development acceleration via AI has no counterpart in AppSec acceleration. Security processes designed for 50 PRs per day now face 500.
Black Duck's Jason Schmitt emphasizes that this gap exposes a fundamental misunderstanding: the tool accelerating code generation does not inherently understand or enforce security requirements. AI code generators optimize for functionality and speed, not for security architecture or compliance.
## Technical Details: Where Vulnerabilities Hide
AI-generated code introduces vulnerabilities across several categories:
### 1. Injection Vulnerabilities
AI models struggle with context about data sources. Generated code might concatenate user input into SQL queries or system commands without proper sanitization—a classic vulnerability made more frequent by scale.
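The concatenation pattern is easy to see in a minimal sketch (the schema and function names here are illustrative, not drawn from any particular tool's output):

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern AI assistants frequently emit: user input concatenated
    # directly into the query string -- exploitable via SQL injection.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats input like "' OR '1'='1"
    # as data, not as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
```

With the classic payload `' OR '1'='1`, the unsafe path returns every row in the table; the parameterized path returns none.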
### 2. Cryptographic Weaknesses
Older training data includes deprecated cryptographic functions. An AI tool might suggest SHA-1 for password hashing or DES for encryption, both long deprecated yet still common in the historical codebases the model was trained on.
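The contrast is visible in a small sketch using only the standard library (the iteration count is an illustrative choice, not a universal recommendation):

```python
import hashlib
import hmac
import os

def hash_password_weak(password: str) -> str:
    # Pattern from older training data: a single fast, unsalted hash.
    # SHA-1 is collision-broken and far too fast for password storage.
    return hashlib.sha1(password.encode()).hexdigest()

def hash_password(password: str, salt=None):
    # Stdlib alternative: salted, deliberately slow key derivation.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```

The weak version also produces the same digest for every user with the same password, which is exactly what salting prevents.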
### 3. Authentication and Authorization Bypasses
Default credential handling, hardcoded secrets, or incomplete permission checks appear frequently in AI-generated code. The model has no stake in your authentication model and generates the fastest working solution.
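A hedged sketch of the hardcoded-secret antipattern and one common alternative (the environment variable name is hypothetical):

```python
import os

# Pattern AI tools often emit because it "just works":
# API_KEY = "sk-test-1234..."   # lands in version control and in logs

def get_api_key() -> str:
    # Pull the secret from the environment (or a secrets manager)
    # and fail loudly if it is missing, rather than shipping a default.
    key = os.environ.get("SERVICE_API_KEY")  # illustrative name
    if not key:
        raise RuntimeError("SERVICE_API_KEY is not set; refusing to start")
    return key
```

Failing fast at startup is deliberate: a missing secret should stop deployment, not silently fall back to a baked-in credential.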
### 4. Dependency Chain Risks
When AI suggests a library or package, it often picks popular (well-trained-on) options without checking for known vulnerabilities. A generated dependency might itself contain a critical CVE.
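One way to vet a suggested dependency is to query the OSV.dev database, which aggregates advisories (including CVEs) across package ecosystems. This sketch uses only the standard library against OSV's public query API; the network call is kept behind a function so the payload can be built and inspected offline:

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # OSV.dev public API

def build_osv_query(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    # Shape of an OSV query for one package at one pinned version.
    return {"package": {"name": name, "ecosystem": ecosystem},
            "version": version}

def known_vulns(name: str, version: str) -> list:
    # Network call -- run only where outbound HTTPS is allowed.
    payload = json.dumps(build_osv_query(name, version)).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])
```

In practice this belongs in CI rather than in developers' heads, alongside dedicated SCA tooling.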
### 5. Information Disclosure
Error handling in AI-generated code often leaks sensitive information—stack traces, database connection strings, API keys—in exception messages that make it to logs or user-facing responses.
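The leak-versus-log distinction can be sketched in a few lines (the response format is illustrative):

```python
import logging
import traceback
import uuid

log = logging.getLogger("app")

def handle_error_leaky(exc: Exception) -> dict:
    # Common generated pattern: echo the exception -- which may contain
    # connection strings, file paths, or keys -- straight to the caller.
    return {"error": str(exc), "trace": traceback.format_exc()}

def handle_error_safe(exc: Exception) -> dict:
    # Log the detail server-side; return only an opaque correlation id
    # that support staff can match against the internal log entry.
    error_id = str(uuid.uuid4())
    log.error("error %s: %s", error_id, exc, exc_info=True)
    return {"error": "internal error", "id": error_id}
```

The correlation id preserves debuggability without exposing internals to an attacker probing for information.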
## Implications for Organizations
The AppSec implications ripple across enterprise risk management:
- **Increased Breach Surface:** More code → more vulnerabilities → higher probability of exploitable flaws reaching production.
- **Compliance Risk:** Frameworks like PCI-DSS, HIPAA, and SOC 2 mandate secure development practices. AI-generated code that bypasses security reviews may violate compliance requirements, exposing organizations to audit failures and penalties.
- **Supply Chain Exposure:** If your organization publishes libraries or frameworks built with AI-generated code, vulnerabilities propagate downstream to your customers and their customers—a multiplier effect.
- **Regulatory Uncertainty:** Regulators haven't caught up. Liability for vulnerabilities in AI-generated code remains unclear: Is the developer liable for accepting code without review? Is the AI vendor liable for generating insecure patterns?
- **False Confidence:** The biggest risk may be organizational complacency. "An AI tool generated this, so it must be safe" is a dangerous assumption.
## Recommendations: A Path Forward
Schmitt and security experts across the industry agree on several critical steps:
### 1. Rebuild Security Tooling
Security tooling must match the scale and speed of AI-assisted development: SAST and SCA that run incrementally on every commit, prioritize findings by exploitability, and cut the false-positive rates that bogged down the DevSecOps era.
### 2. Refocus Code Review
Human reviewers cannot inspect every generated line. Reserve their attention for high-risk surfaces (authentication, authorization, cryptography, input handling) and let automation cover the rest.
### 3. Update AppSec Practices
Treat AI-generated code like code from any external, untrusted source: it deserves the same gating before it ships, including automated scanning, policy checks in CI/CD, and provenance tracking.
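A minimal sketch of one such CI/CD policy check, a severity gate that fails the pipeline when blocking findings exceed a budget. The findings format here is hypothetical; real scanners emit their own schemas:

```python
BLOCKING = {"critical", "high"}  # severities that block a merge

def gate(findings: list, allow: int = 0) -> bool:
    # Return True (pipeline passes) only if the number of
    # blocking-severity findings is within the allowed budget.
    blocking = [f for f in findings
                if f.get("severity", "").lower() in BLOCKING]
    return len(blocking) <= allow
```

Exit-code wiring (`sys.exit(0 if gate(...) else 1)`) is all it takes to make any CI system honor the policy.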
### 4. Governance and Training
Set explicit policies for when and how developers may use AI coding tools, and train them to recognize the vulnerability classes those tools most often introduce. "The model wrote it" is not a substitute for accountability.
## Conclusion
The AI code surge is not a problem to be solved with a single tool or policy—it's a systemic shift requiring fundamental changes to how organizations develop, review, and secure software. As Jason Schmitt articulates, security practices must evolve at the same pace as development practices.
Organizations that treat AI-generated code as "good enough" will eventually become breach statistics. Those that embed security into the AI-assisted development pipeline—through tooling, training, and governance—will capture the productivity benefits of AI while maintaining the security posture their customers and regulators demand.
The window for adaptation is narrow. The technology is already deployed in thousands of enterprises. The question is not whether AppSec will evolve to meet this challenge, but whether it will evolve fast enough.