# We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is


In what may be the largest security audit of AI infrastructure to date, researchers recently scanned approximately one million self-hosted AI services accessible on the internet and discovered a sobering reality: the race to deploy large language models and other AI workloads has dramatically outpaced security best practices. The findings reveal a landscape riddled with basic misconfigurations, exposed credentials, unpatched vulnerabilities, and inadequate access controls—painting a picture of an industry prioritizing speed to deployment over defensive security practices.


As organizations rush to capitalize on AI's transformative potential, the study underscores a critical tension: the very agility and velocity that make AI adoption appealing are creating a dangerous security vacuum. In the span of just 18-24 months, what was once cutting-edge infrastructure has become ubiquitous—and largely undefended.


## The Scale of Exposure


The sheer magnitude of this vulnerability landscape is staggering. The research team identified and analyzed approximately 1 million exposed AI services running across cloud infrastructure, on-premises systems, and hybrid deployments. These services ranged from small proof-of-concept LLM deployments to production systems serving real customer workloads.


Of the services analyzed:

- Approximately 45% were accessible without any authentication
- Over 60% contained hardcoded API keys or credentials in configuration files
- Nearly 80% were missing critical security headers or had improper CORS configurations
- More than 35% were running outdated versions of popular AI frameworks with known CVEs

These numbers represent a fundamental failure in basic security hygiene—issues that have been understood and addressed in traditional software development for decades.
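
The hardcoded-credential exposures in particular can be caught before deployment with even a crude scanner. A minimal sketch in Python (the patterns are illustrative only; production tools such as gitleaks or trufflehog ship far more comprehensive rule sets):

```python
import re

# Illustrative patterns for common credential formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in a blob of config text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

config = 'api_key = "sk_live_abcdef1234567890abcd"\nmodel: llama-7b'
print(scan_text(config))  # → ['generic_api_key']
```

Running a check like this in CI against every configuration file and Dockerfile costs almost nothing compared to the exposure rates reported above.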


## Why Speed Trumped Security


The explanation for this security crisis lies in the unprecedented pace of AI adoption. Unlike traditional software development, which has matured security practices over 20+ years, the AI tooling ecosystem emerged almost overnight. Organizations faced a pressure cooker: move fast, deploy LLMs, achieve competitive advantage, or risk obsolescence.


Several factors created this dangerous dynamic:


**Time-to-Value Pressures:** Boards and executives demanded AI integration within quarters, not years. Security assessments and architectural hardening were seen as obstacles to innovation rather than prerequisites.


**Skill Gaps:** Few organizations had in-house expertise in securing LLM infrastructure. DevOps teams adapted containerization practices from the traditional cloud era without understanding AI-specific threat models. Security teams, meanwhile, were overwhelmed and underrepresented in architecture decisions.


**Complexity Without Documentation:** Early AI frameworks and deployment tools prioritized ease of use for data scientists over security for operations teams. Default configurations often disabled authentication or logging to simplify initial setup. Many organizations never changed those defaults.


**False Sense of Isolation:** Self-hosted AI services were often deployed in internal networks or private clouds, creating an assumption that "internal = secure." This led teams to skip perimeter controls, encrypted communications, and identity management.


## The Vulnerability Landscape


The research identified several recurring categories of critical weaknesses:


### Authentication & Access Control

The most alarming finding: nearly half of all exposed services required zero authentication. Many LLM APIs were deployed with default credentials left intact or with API keys embedded in publicly accessible configuration files. Some organizations stored credentials in git repositories, environment files committed to version control, or Docker image layers.
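
Closing that gap does not require heavyweight machinery; even a single shared API key, checked in constant time, rules out the anonymous-access failure mode. A framework-agnostic sketch (the function name and parameters are assumptions for illustration, not from the study):

```python
import hmac

def is_authorized(presented_key: str, expected_key: str) -> bool:
    """Compare an API key from a request header against the configured key.

    hmac.compare_digest runs in constant time, so an attacker cannot
    recover the key prefix by measuring response latency.
    """
    if not expected_key:
        return False  # fail closed if no key was ever configured
    return hmac.compare_digest(presented_key.encode(), expected_key.encode())
```

In a real deployment `expected_key` would come from a secret manager rather than source code, and the check would sit in middleware in front of every inference route.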


### Network Exposure

A significant portion of AI services were inadvertently exposed to the public internet through:

- Misconfigured cloud storage buckets
- API endpoints without rate limiting or IP restrictions
- Kubernetes dashboards accessible without authentication
- Container registries exposing proprietary models and weights

### Data Exfiltration Risks

Services were discovered transmitting sensitive data—including training datasets, inference logs, and customer information—over unencrypted connections or to misconfigured backup systems. Several services logged user prompts and model responses without proper data retention or deletion policies.


### Supply Chain Vulnerabilities

Many deployments relied on pre-built container images or model weights downloaded from public registries without signature verification. In at least 15 documented cases, researchers identified poisoned or trojanized models that included hidden functionality.


## Implications for Organizations


The security failures discovered aren't just theoretical vulnerabilities—they have direct, measurable business impact:


**Intellectual Property Theft:** Exposed models and fine-tuning data can be exfiltrated, allowing competitors to replicate or improve upon proprietary AI systems without investment.


**Prompt Injection & Jailbreaking:** Unsecured APIs allow attackers to manipulate AI systems into generating harmful content, leaking training data, or executing unintended tasks.


**Cryptojacking:** Researchers found multiple instances where AI services were hijacked to perform cryptocurrency mining, consuming compute resources and degrading performance.


**Compliance Violations:** For organizations handling regulated data (healthcare, finance, personal information), these exposures trigger reportable security incidents and potential regulatory fines.


**Reputational Damage:** Public disclosure of AI system compromises damages customer trust and brand credibility.


## Recommendations for Defensive Action


Organizations deploying AI infrastructure should implement the following immediate measures:


### 1. Inventory and Audit

- Identify all AI services running across infrastructure (cloud, on-premises, development)
- Scan for public exposure using tools like Shodan, Censys, or cloud provider logging
- Review access patterns to detect unauthorized or anomalous connections

### 2. Implement Core Security Controls

| Control | Priority | Implementation |
|---------|----------|----------------|
| Authentication | Critical | Enforce API keys, OAuth 2.0, or mutual TLS |
| Network isolation | Critical | Use VPCs, network policies, or private endpoints |
| Encryption in transit | Critical | Enforce TLS 1.3 for all communications |
| Encryption at rest | High | Encrypt model weights, prompts, and responses |
| Logging & monitoring | High | Track all API calls, errors, and data access |
| Rate limiting | High | Prevent abuse and resource exhaustion |
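
Of these controls, rate limiting is the one most often skipped because it seems to require dedicated infrastructure; in fact a token bucket is a few lines. A simplified single-process sketch (the numbers are arbitrary; production setups would enforce this per client at the API gateway):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate` tokens/sec."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10)
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # the burst capacity: 10
```

Requests beyond the burst are rejected until tokens refill, which also bounds the resource-exhaustion and cryptojacking abuse described earlier.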


### 3. Adopt Infrastructure-as-Code Security

- Define security baselines for AI deployments
- Use policy-as-code (OPA, Kyverno) to enforce standards
- Version control infrastructure configurations securely
- Never commit secrets—use secret management systems
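
The last point is the one the scan data shows is most widely violated. A minimal pattern is to load secrets from the environment (which a secret manager such as Vault or AWS Secrets Manager would populate at runtime) and refuse to start when one is missing; a sketch:

```python
import os

def require_secret(name: str) -> str:
    """Fetch a secret from the environment, failing fast if it is absent.

    Crashing at startup is preferable to silently running with an empty
    credential, which is how many unauthenticated services come to exist.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required secret {name!r} is not set")
    return value

# Demo only; a real deployment injects this variable, never sets it in code.
os.environ["DEMO_INFERENCE_KEY"] = "example-value"
print(require_secret("DEMO_INFERENCE_KEY"))  # → example-value
```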

### 4. Establish Secure Development Practices

- **Model provenance:** Verify all models and weights through cryptographic signatures
- **Supply chain security:** Scan dependencies for known vulnerabilities
- **Red team exercises:** Conduct penetration tests simulating real attack scenarios
- **Security training:** Ensure data scientists and DevOps teams understand threat models

### 5. Implement Continuous Monitoring

- Deploy intrusion detection systems for AI APIs
- Monitor for unusual inference patterns or prompt anomalies
- Set up alerts for unauthorized data access or exfiltration attempts
- Maintain audit logs with sufficient retention for forensic analysis
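
Monitoring for unusual inference patterns need not start with a dedicated platform: a z-score over a window of per-client request counts already surfaces the crude abuse patterns (cryptojacking, bulk exfiltration) the study observed. A deliberately simple sketch:

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag `current` if it sits more than `threshold` standard deviations
    from the recent baseline of request counts."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

baseline = [100, 98, 103, 101, 99, 102]  # requests/minute for one client
print(is_anomalous(baseline, 104))  # → False (within normal variation)
print(is_anomalous(baseline, 450))  # → True (likely abuse or a runaway job)
```

A real pipeline would run this per client and per endpoint with a sliding window, feeding alerts into the same audit-log retention the list above calls for.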

## The Path Forward


The research findings are a wake-up call. The software industry spent decades learning that security must be built in, not bolted on. AI infrastructure is repeating that lesson at an accelerated pace, and the cost of learning it again is high.


The good news: the vulnerability patterns discovered are solvable. None of the major issues require novel security techniques. They require discipline, investment, and a cultural shift toward treating security as a competitive advantage rather than an operational burden.


As AI becomes increasingly central to business operations, the organizations that survive the next phase of consolidation will be those that marry velocity with defensibility—those that view security not as a drag on innovation, but as an enabler of it.


The choice facing enterprises today is clear: secure AI infrastructure now, or pay the cost of compromise later.