# At RSAC 2026, Cybersecurity Leaders Wrestle with Human vs. AI Decision-Making in Security Operations


The 2026 RSA Conference brought a central tension to the forefront of cybersecurity discussions: as artificial intelligence becomes increasingly sophisticated and prevalent in security operations, how should organizations balance automated decision-making with human expertise and oversight? The question moved beyond theoretical debate into practical territory, as CISOs and security leaders grappled with the real implications of deploying agentic AI systems at scale.


## The Debate Takes Center Stage


RSAC 2026 positioned the human-versus-AI discussion not as an either-or proposition, but as a complex integration challenge. Throughout keynotes, panel discussions, and breakout sessions, industry veterans emphasized that the question isn't whether AI will play a role in cybersecurity—it already does—but rather how organizations can maintain meaningful human control while leveraging AI's speed and analytical capabilities.


Key themes emerged:


  • Agentic AI's Double-Edged Promise: AI systems that autonomously take actions (rather than simply advising humans) can respond to threats in milliseconds. However, they also introduce new risks if not properly constrained.
  • The Human Bottleneck Problem: Many organizations struggle with alert fatigue and the sheer volume of potential threats. Security teams, no matter how skilled, cannot manually review and respond to millions of events daily.
  • Regulatory and Liability Concerns: As AI makes consequential decisions—blocking traffic, isolating systems, or triggering incident responses—questions about accountability and regulatory compliance became unavoidable topics.

## Understanding Agentic AI in Security


Agentic applications represent a significant leap from traditional rule-based automation. Unlike conventional systems that follow predetermined playbooks, agentic AI systems use large language models and reinforcement learning to understand context, evaluate trade-offs, and make nuanced decisions with minimal human intervention.


In a security context, this might look like:


| Traditional Automation | Agentic AI |
|---|---|
| If malware detected → block and alert | Analyze threat context, assess business impact, decide whether to block, contain, or monitor based on real-time intelligence |
| If multiple failed logins → lock account | Evaluate login patterns, user behavior history, geographic anomalies, and risk level before taking action |
| Alert on all policy violations | Prioritize alerts based on actual risk, investigate chains of related events, and surface the most critical issues first |
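The contrast in the table's second row can be sketched in code. A minimal, purely illustrative Python comparison — every signal name, weight, and threshold here is invented for illustration, not drawn from any specific product:

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    failed_attempts: int
    known_device: bool      # device previously seen for this user
    geo_anomaly: bool       # login from an unusual location
    user_risk_score: float  # 0.0 (trusted) .. 1.0 (high risk)

def rule_based(event: LoginEvent) -> str:
    # Traditional automation: one fixed threshold, one fixed response.
    return "lock_account" if event.failed_attempts >= 5 else "allow"

def context_weighted(event: LoginEvent) -> str:
    # Agentic-style decision: weigh several signals before acting.
    # Weights are arbitrary stand-ins for a learned model.
    risk = (0.1 * event.failed_attempts
            + 0.3 * event.geo_anomaly
            + 0.4 * event.user_risk_score
            - 0.2 * event.known_device)
    if risk >= 0.7:
        return "lock_account"
    if risk >= 0.4:
        return "step_up_mfa"  # challenge the user rather than block outright
    return "allow"

# A trusted user who fat-fingered a password six times:
event = LoginEvent(failed_attempts=6, known_device=True,
                   geo_anomaly=False, user_risk_score=0.1)
print(rule_based(event))        # lock_account
print(context_weighted(event))  # step_up_mfa
```

The same event produces a blunt lockout under the fixed rule but a proportionate MFA challenge once device history and user risk are weighed in.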


At RSAC, security leaders acknowledged the potential: agentic systems could dramatically reduce response times, correlate disparate data sources more effectively, and adapt to novel attack patterns faster than humans could script responses. Several enterprise security vendors demonstrated systems that autonomously investigated security events, prioritized incidents, and even executed remediation steps—all without human input at each stage.


## The Scaling Challenge: When Humans Can't Keep Up


The core tension emerged clearly: scaling human decision-making is nearly impossible in modern security operations. A mid-sized organization can generate 100,000+ security events per day. Even with advanced SIEM systems, the human team can meaningfully review and decide on perhaps 1-2% of those events.
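The coverage gap is simple arithmetic. A back-of-envelope sketch, where the analyst headcount and minutes-per-review are assumed figures:

```python
# Triage capacity vs. event volume (assumed figures: 100k events/day,
# 10 analysts on 8-hour shifts, ~5 minutes of analyst time per event).
events_per_day = 100_000
minutes_per_review = 5
analysts = 10

analyst_minutes = analysts * 8 * 60             # total shift minutes
reviewable = analyst_minutes // minutes_per_review
coverage = reviewable / events_per_day

print(f"{reviewable} events reviewed/day ({coverage:.1%} coverage)")
# → 960 events reviewed/day (1.0% coverage)
```

Even generous staffing assumptions land squarely in the 1-2% range the paragraph describes.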


Conference discussions highlighted the uncomfortable reality that organizations must choose between:


1. Keeping humans in the loop (slower responses, better accuracy, stronger liability protection, but incomplete coverage)

2. Trusting AI with autonomous action (comprehensive coverage, faster responses, but reduced human oversight and harder-to-explain decisions)

3. A hybrid approach (humans monitor and override AI decisions, but this still creates bottlenecks)


Several panelists noted that many organizations are gravitating toward a "trust but verify" model—allowing agentic systems to take action on routine threats while routing complex, high-stakes decisions to human analysts for final approval.
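That routing model can be sketched in a few lines. The threat categories, confidence threshold, and criticality labels below are invented for illustration:

```python
# "Trust but verify" router: routine, high-confidence threats on
# low-value assets are auto-remediated; everything else goes to a human.
ROUTINE = {"commodity_malware", "phishing_url", "credential_stuffing"}

def route(threat_type: str, confidence: float, asset_criticality: str) -> str:
    if (threat_type in ROUTINE
            and confidence >= 0.9
            and asset_criticality == "low"):
        return "auto_remediate"   # agent acts; action is logged for review
    return "human_approval"       # analyst makes the final call

print(route("commodity_malware", 0.95, "low"))   # auto_remediate
print(route("lateral_movement", 0.95, "high"))   # human_approval
```

The design choice worth noting is the default: anything that fails a single gate falls back to human approval, rather than requiring every gate to fail.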


## Technical and Operational Challenges


RSAC attendees didn't shy away from the obstacles:


Explainability Gap: When an agentic AI system decides to isolate a server or disable a user account, can it explain its reasoning in terms the security team understands? Model transparency remains a significant challenge, particularly with LLM-based decision systems.


Adversarial Manipulation: If threat actors understand how agentic systems make decisions, they may craft attacks specifically designed to deceive the AI while appearing benign. Conference discussions highlighted this as an underexplored vulnerability.


Drift and Model Degradation: AI systems trained on today's threat landscape may perform poorly as attack techniques evolve. Maintaining and retraining agentic systems in production environments is operationally complex.


Cascading Failures: An agentic system that incorrectly identifies a critical service as compromised and takes autonomous action could cause significant business disruption. The potential for automated systems to amplify rather than mitigate damage was a recurring concern.


## Regulatory and Liability Implications


As agentic AI moved from pilots to production, legal and compliance questions intensified at RSAC:


  • Who is liable if an autonomous security system causes damage? The vendor? The organization? Both?
  • How do regulations like HIPAA, SOX, or GDPR apply to AI-driven security decisions? Few clear answers exist.
  • What documentation and audit trails must exist to demonstrate responsible AI governance? Organizations are still defining standards.

Several regulatory and compliance experts suggested that the first organizations to clearly document their human-in-the-loop controls and AI governance frameworks will have competitive advantages during audits and incident investigations.


## Emerging Best Practices


From RSAC discussions, a framework is beginning to crystallize:


Risk-Based Autonomy Levels: Rather than making all decisions either human or AI, organizations should classify threats by risk and response type. Routine, well-understood attacks might be handled autonomously. Novel, high-impact scenarios should require human approval.


Continuous Monitoring of AI Decisions: Implement systems that analyze agentic AI behavior over time—what actions did it take? Were those decisions effective? Were there false positives that harmed operations? Use this data to refine the system and set appropriate constraints.
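A minimal sketch of such decision auditing, using a hypothetical log schema in which analysts later label each autonomous action:

```python
# Audit agentic decisions after the fact: compute a false-positive rate
# from analyst-labeled outcomes. Log schema and labels are assumptions.
from collections import Counter

decision_log = [
    {"action": "isolate_host", "analyst_verdict": "true_positive"},
    {"action": "isolate_host", "analyst_verdict": "false_positive"},
    {"action": "block_ip",     "analyst_verdict": "true_positive"},
    {"action": "block_ip",     "analyst_verdict": "true_positive"},
]

verdicts = Counter(d["analyst_verdict"] for d in decision_log)
fp_rate = verdicts["false_positive"] / len(decision_log)
print(f"false-positive rate: {fp_rate:.0%}")  # → false-positive rate: 25%
```

Trended over time, a metric like this is what tells you whether to widen or narrow the system's autonomous scope.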


Clear Escalation Paths: Agentic systems should have transparent escalation: "I'm 60% confident this is a data exfiltration attempt, escalating to human analyst." Confidence scores and reasoning chains help analysts make better override decisions.
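One way to implement that pattern: below a confidence floor, the agent surfaces its hypothesis and reasoning chain instead of acting. The floor and message format here are illustrative assumptions:

```python
# Escalate with confidence and reasoning when below a hard-coded floor.
ESCALATION_FLOOR = 0.8

def decide(hypothesis: str, confidence: float, reasoning: list[str]) -> str:
    if confidence < ESCALATION_FLOOR:
        chain = " -> ".join(reasoning)
        return (f"ESCALATE: {confidence:.0%} confident this is "
                f"{hypothesis} ({chain})")
    return f"ACT: {hypothesis}"

msg = decide("data exfiltration", 0.60,
             ["large outbound transfer", "off-hours", "new destination"])
print(msg)
# → ESCALATE: 60% confident this is data exfiltration
#   (large outbound transfer -> off-hours -> new destination)
```

Shipping the reasoning chain alongside the confidence score is what gives the receiving analyst something to override against.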


Red-Team Your AI: Test agentic security systems the way you'd test defenses against adversaries. Can attackers trick it? Does it fail gracefully when confused?


## Recommendations for Organizations


Security leaders implementing or expanding agentic AI should consider:


1. Start with high-volume, low-stakes decisions — use agentic AI to handle routine detections and investigations where the cost of errors is relatively low.


2. Implement graduated autonomy — not all decisions need the same level of human involvement. Reserve human review for high-risk, novel, or business-critical scenarios.


3. Invest in explainability tools — choose AI platforms and models that can articulate their reasoning. Black-box decision-making is increasingly indefensible.


4. Design clear guardrails — define hard limits on what agentic systems can do autonomously (e.g., "may not delete systems, may not modify domain permissions without approval").


5. Audit and measure — track agentic system performance, false positive rates, and business impact. Treat AI governance as a core security discipline.


6. Plan for override and recovery — ensure security teams can quickly reverse AI-driven actions and understand how to restore systems if autonomous responses cause damage.
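Hard limits like those in recommendation 4 can be encoded as an explicit, default-deny action policy. A minimal sketch, with hypothetical action names:

```python
# Guardrail policy: a short allowlist of autonomous actions, an explicit
# denylist of never-autonomous ones, and default-deny for anything else.
AUTONOMOUS_OK = {"quarantine_file", "block_ip", "reset_session"}
NEVER_AUTONOMOUS = {"delete_system", "modify_domain_permissions"}

def authorize(action: str) -> str:
    if action in NEVER_AUTONOMOUS:
        return "denied_needs_approval"
    if action in AUTONOMOUS_OK:
        return "allowed"
    return "denied_needs_approval"  # unlisted actions are denied by default

print(authorize("block_ip"))                    # allowed
print(authorize("modify_domain_permissions"))   # denied_needs_approval
print(authorize("reboot_host"))                 # denied_needs_approval
```

The default-deny branch is the important part: new actions an agent invents or is tricked into attempting are blocked until someone deliberately allowlists them.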


## Looking Forward


RSAC 2026 revealed an industry at an inflection point. AI and agentic systems aren't going away—they're becoming table stakes for security operations. The organizations that will win are those that thoughtfully integrate human expertise with AI capabilities, maintaining meaningful oversight while gaining the speed and scale that modern threats demand.


The human versus AI debate, it seems, has a clear resolution: it's not about choosing one over the other, but orchestrating both effectively.