Where Code Shapes Power
Expect sharper edges. Heavier conversations. And people who don't confuse authority with understanding.
KEY THEMES
Nation-state tradecraft
AI and intelligence
Offensive and defensive asymmetry
Real-world constraints
Systems that matter at scale
Agenda
Registration and Underground Exhibits Open
Welcome and Opening Remarks / The Gauntlet Kickoff
Expert Witness War Stories: From the Witness Chair in High-Stakes Technology Battles
What does it feel like when your technical conclusions are challenged under oath, with billions of dollars and corporate reputations at stake? In this talk, Dr. Avi Rubin shares candid stories from more than two decades serving as a technology expert witness in complex litigation involving cybersecurity, software systems, AI, and emerging technologies. Through anonymized case studies and firsthand experiences, he reveals the intensity of cross-examination, the strategic maneuvering behind the scenes, and the pivotal moments that can shift the outcome of a case. For select matters, he takes a deep dive into the specific technologies at issue and unpacks the core technical and legal questions that shaped the dispute, while also sharing some of the unexpected, lighthearted moments that arise even in the most serious proceedings. This session offers a rare look at credibility, persuasion, and the human dynamics that often determine which arguments prevail in high-stakes technology disputes.
CISO Panel
Gall’s Law and the OODA Loop: First Principles for Building in the Age of AI
Lunch and Networking
Lightning Talks
Quick, impactful presentations perfect for introducing new ideas, tools, or techniques.
The Exploitation Lifecycle: Exploring the Stages of Exploitation
This presentation explores attacker activity "left of boom": from vulnerability discovery to disclosure, weaponization, delivery, and finally exploitation. At each stage of the exploitation lifecycle, it takes a detailed look at attacker activity and pairs it with the intelligence and operational work that helps defenders better protect their environments. The presentation concludes with a fresh way of thinking about the steps leading to successful exploitation, including how intelligence derived from attacker activity at each stage can be used to paint a more complete picture of adversary capabilities in support of the strongest possible defense.
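As a simple way to picture the model, the Python sketch below maps each lifecycle stage to example attacker activity and a defender intelligence action, and reports where coverage gaps remain. The stage names follow the abstract; the activity/intelligence pairings are illustrative assumptions, not the speaker's taxonomy.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    attacker_activity: str
    defender_intel: str

# Illustrative pairings only; not the talk's own mapping.
LIFECYCLE = [
    Stage("discovery",     "fuzzing, code review, bug bounty intake",
          "monitor researcher chatter and vendor advisories"),
    Stage("disclosure",    "CVE assignment, proof-of-concept published",
          "prioritize patching by exploit maturity, not CVSS alone"),
    Stage("weaponization", "reliable exploit plus payload packaging",
          "track exploit-kit and offensive-tooling integration"),
    Stage("delivery",      "phishing, exposed services, supply chain",
          "attack-surface management and mail/edge telemetry"),
    Stage("exploitation",  "code execution on target",
          "EDR detections mapped back to the earlier stages"),
]

def coverage_report(covered: set[str]) -> None:
    """Print which 'left of boom' stages currently have an intel feed."""
    for stage in LIFECYCLE:
        mark = "OK " if stage.name in covered else "GAP"
        print(f"[{mark}] {stage.name}: {stage.defender_intel}")

if __name__ == "__main__":
    coverage_report({"disclosure", "delivery"})
```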
AI‑First Enterprise Architecture: Turning Pilots into Production
• How to recognize why your AI pilots are not scaling and which architectural gaps are to blame.
• A concrete reference model for an AI platform that can serve many use cases across the enterprise.
• Patterns for integrating AI services with your existing microservices, data lake/lakehouse, and identity/security stack.
• Practical SLOs and observability signals for monitoring AI systems in production (quality, drift, and cost); see the sketch after this list.
• A staged roadmap to evolve from ad‑hoc AI projects to a governed, reusable AI platform.
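For the SLO and observability bullet, here is a minimal Python sketch of the kinds of production signals involved (latency, quality, cost, and drift). All thresholds, field names, and the windowing scheme are hypothetical examples, not a prescribed standard.

```python
import statistics
from dataclasses import dataclass, field

@dataclass
class AiSlo:
    """Hypothetical SLOs for one AI service; thresholds are examples only."""
    max_p95_latency_ms: float = 2000.0
    min_quality_score: float = 0.80     # e.g. eval-set pass rate
    max_cost_per_request: float = 0.05  # USD
    max_drift: float = 0.15             # relative shift vs. baseline quality

@dataclass
class Window:
    """One observation window of production telemetry."""
    latencies_ms: list[float] = field(default_factory=list)
    quality_scores: list[float] = field(default_factory=list)
    costs: list[float] = field(default_factory=list)

def evaluate(window: Window, baseline_quality: float, slo: AiSlo) -> dict[str, bool]:
    p95 = statistics.quantiles(window.latencies_ms, n=20)[18]  # ~95th percentile
    quality = statistics.mean(window.quality_scores)
    drift = abs(quality - baseline_quality) / baseline_quality
    return {
        "latency_ok": p95 <= slo.max_p95_latency_ms,
        "quality_ok": quality >= slo.min_quality_score,
        "cost_ok": statistics.mean(window.costs) <= slo.max_cost_per_request,
        "drift_ok": drift <= slo.max_drift,
    }
```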
How AI is Changing Offensive Security and Continuous Attack Simulation
Artificial intelligence is fundamentally reshaping offensive security. At Rebellion (RBLN), where practitioners and hackers come together to exchange real-world tactics, not theory, this session explores how AI is accelerating the evolution of cyber attacks and redefining how organizations simulate and defend against them. In this talk, Vincent Swolfs, Director of Hacking & CISO at cisa.one, breaks down the shift from manual exploitation to autonomous attack systems capable of discovering vulnerabilities, adapting strategies, and executing complex, multi-step attack paths with minimal human involvement. The session is built on real-world experience and strengthened through collaboration with Aikido Security, bringing a practical perspective on how modern security teams can move from reactive testing to continuous, AI-assisted attack simulation. You’ll gain insight into:
- How AI is lowering the barrier to advanced offensive capabilities
- Why continuous attack simulation is replacing traditional pentesting models
- How modern attackers operate in an AI-augmented landscape
- How platforms like Aikido Security enable developers and security teams to continuously identify and fix vulnerabilities before attackers exploit them
This is not theory. This is how offensive security is evolving in practice, and what you must do to stay ahead.
Agent Attacks Beyond the Policy and Identity Layer
• The hidden risks in AI agent architectures
• Common exploits and attack patterns
• Real-world mitigation strategies
• AI governance frameworks and kill chains
• Practical tools to get started
• Separating hype from reality
IoT HackBots: AI-Powered Hardware Hacking Tool Development
Large language models (LLMs) have already changed the game in offensive security. Tool-calling systems like Claude Code allow security researchers to discover zero-days and weaponize vulnerabilities faster than ever. However, one class of systems has been out of reach: IoT devices. This session will explore the development of Claude Skills that interact with traditional hardware hacking tools. This tool access gives LLMs the device context that is critical to vulnerability discovery. These Skills allow LLMs to access UART consoles, probe unknown signals with logic analyzers, and obtain root shells over a live network. We will also discuss the potential risk-driven imbalance between offensive and defensive AI adoption.
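To give a sense of what such a hardware-facing skill boils down to, here is a minimal Python sketch that captures UART boot output with pyserial so it can be handed to an LLM as device context. The port, baud rate, and buffer size are assumptions, and the skills described in the talk wrap far richer tooling than this.

```python
import serial  # pyserial: pip install pyserial

def capture_uart(port: str = "/dev/ttyUSB0", baud: int = 115200,
                 seconds: float = 10.0) -> str:
    """Capture raw UART output (e.g. a boot log) for later LLM analysis."""
    with serial.Serial(port, baud, timeout=seconds) as console:
        raw = console.read(65536)  # read until timeout or buffer limit
    return raw.decode("utf-8", errors="replace")

if __name__ == "__main__":
    boot_log = capture_uart()
    # A hardware-hacking "skill" would hand this text to the model as context.
    print(boot_log[:2000])
```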
The Skill Wave: The AI threat we aren't preparing for
- Current AI adoption by both criminals and defensive vendors is accelerating the existing speed/scale threat, not signaling where the threat is heading. The genuinely new risk is the skill wave: cheap stealth, patience, and adaptive tradecraft available to financially-motivated actors for the first time.
- "Penetration depth" determines who gets hit by which wave; shallow organizations get savaged by speed and scale, while deep organizations are protected only as long as traversing depth requires expensive human skill, a condition AI is rapidly eliminating.
- AI-augmented attackers won't need domain admin. DA is a shortcut that lets shallow attackers achieve impact despite limited depth, but attackers operating at depth won't route through the security bottlenecks we've built around high-criticality accounts. Metrics like "domain admin in n seconds" assume a shallow attack model where DA equals impact, and as AI makes depth cheap to traverse, benchmarks built around positional speed become dangerously misleading.
- Speed, scale, and skill are reinforcing dimensions (scale creates emergent skill through cross-target intelligence, skill enables sustainable scale by preventing eviction), and their interactions will generate novel payout models that don't exist in our current threat taxonomy, just as ransomware emerged unpredictably when its enabling conditions converged.
Same Side, Different Speeds: Rethinking Vulnerability Disclosure in the Age of AI
Vulnerability discovery and development have progressed remarkably in recent years, aided and abetted by broad adoption of AI. The volume of new vulnerabilities and exploits flooding the technology ecosystem continues to grow, but many of the human-led systems designed to ingest, validate, and standardize those vulnerabilities have stagnated both technically and philosophically. This asymmetry contributes to antagonistic relationships between good-faith security researchers and technology suppliers, exacerbated by misaligned incentives that encourage quantity over quality and rapid discovery over deeper remediative action. This talk will explore steps that software suppliers and vulnerability researchers can take to improve bilateral disclosure experiences and deliver better outcomes in a rapidly changing security world — starting with an acknowledgment that they’re on the same side.
Ghosts in the Machine - The Therac-25 Affair
In 1985, a software race condition in a radiation therapy device called the Therac-25 began quietly killing cancer patients by delivering radiation doses up to 100 times the therapeutic level. Six patients were overdosed, and three died. The root cause was nothing exotic: reused code, removed hardware interlocks, a single unreviewed programmer, and a manufacturer so confident in its software that it dismissed every patient complaint for nineteen months. Almost forty years later, the healthcare sector is deploying millions of connected medical devices (insulin pumps, infusion systems, telemetry patient monitors, diagnostic imaging, connected laboratory equipment, and implantables), a surprising number of which repeat every structural failure that made the Therac-25 famous: software-only safety controls, legacy firmware reused without re-testing, security alert fatigue. This talk takes attendees inside the Therac-25 Affair with deep technical details of the race conditions, the integer overflows, the missing hardware interlocks, and the regulatory blind spots.
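The actual Therac-25 code was PDP-11 assembly, but the pattern behind one of its documented failures is easy to illustrate: a one-byte flag that is incremented rather than set, so every 256th setup pass it wraps to zero and a software-only safety check is silently skipped. A minimal Python sketch of that pattern (illustrative only, not the original code):

```python
# Illustrative only: shows the *pattern*, not the real PDP-11 implementation.
# A one-byte "setup incomplete" flag is incremented instead of set, so every
# 256th pass it wraps to 0 and the safety check is silently skipped, with no
# hardware interlock behind it.

class TurntableCheck:
    def __init__(self) -> None:
        self.class3 = 0  # 8-bit shared flag: nonzero means "do not fire yet"

    def setup_pass(self) -> None:
        # BUG: increment-and-wrap instead of simply setting class3 = 1
        self.class3 = (self.class3 + 1) & 0xFF

    def safe_to_fire(self) -> bool:
        # Software-only interlock: trusts the flag and nothing else
        return self.class3 == 0

check = TurntableCheck()
for i in range(1, 300):
    check.setup_pass()
    if check.safe_to_fire():
        print(f"pass {i}: check bypassed, beam would fire with setup incomplete")
```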
Rise of the Pond Master - Jailbreaking tales with big ducks energy
In the ever-evolving arms race between AI developers and the underground hacking community, a new breed of jailbreak artist has emerged — one fueled by absurdity, persistence, and unapologetic "big ducks energy." This talk chronicles the rise of Pond Master, an unorthodox yet highly effective jailbreaking methodology that turns LLM guardrails into playgrounds through creative prompting, psychological manipulation, and relentless experimentation.
Pod-tential for Disaster: Hacking Kubernetes from Pod to Cluster
Throughout the session, you’ll see exactly how bad actors chain common Kubernetes configuration flaws to move laterally, escalate privileges, and ultimately breach critical components. We’ll wrap up by discussing straightforward fixes and best practices that can thwart such attacks in your own deployments. Whether you’re a security pro or just getting started with container orchestration, you’ll come away with a clear understanding of how Kubernetes implementations get hacked—and how to keep them secure. If you want to grasp container security by breaking it first, this talk is for you.
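One classic pod-to-cluster pivot of the kind this talk covers is an over-permissioned service account token mounted into a compromised pod. The Python sketch below uses the standard in-cluster token, namespace, and CA paths to test whether the pod's identity can list Secrets; it is a minimal illustration for environments you are authorized to test, not a full attack chain.

```python
import requests  # run from inside a pod

# Standard in-cluster paths; present in most pods unless automounting is disabled.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
API = "https://kubernetes.default.svc"

def read(path: str) -> str:
    with open(path) as f:
        return f.read().strip()

def check_secret_access() -> None:
    """If the pod's service account can list Secrets, cluster compromise is close."""
    token = read(f"{SA_DIR}/token")
    namespace = read(f"{SA_DIR}/namespace")
    resp = requests.get(
        f"{API}/api/v1/namespaces/{namespace}/secrets",
        headers={"Authorization": f"Bearer {token}"},
        verify=f"{SA_DIR}/ca.crt",
        timeout=5,
    )
    print("secrets listable from this pod!" if resp.status_code == 200
          else f"denied (HTTP {resp.status_code}), RBAC is doing its job")

if __name__ == "__main__":
    check_secret_access()
```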
Reverse Engineering EDR Kernel Drivers with AI
1. Differentiate between eBPF hook types — tracepoints, kprobes, uprobes, and LSM hooks — and select the right one for a given security monitoring or enforcement use case
2. Build eBPF security programs in Rust using the Aya framework without writing C or depending on BCC
3. Implement LSM BPF hooks (bprm_check_security, socket_connect, security_task_kill) to block threats at the kernel level before syscalls complete
4. Navigate eBPF verifier constraints in practice — stack limits, bounded loops, per-CPU arrays, and kernel struct offset portability across kernel versions
5. Detect fileless malware by tracing memfd_create syscalls and capture TLS plaintext via OpenSSL uprobes without a MITM proxy
Agents Don't Collaborate Like Humans. Stop Building Like They Do.
- Agents aren't users of your software — they're operators. Build infrastructure accordingly.
- Every time you move context from a product surface to an open interface, your agent system gets more capable. This compounds.
- The line between good and bad agent infrastructure isn't local vs cloud — it's whether an agent can operate on the interface directly without product-imposed constraints.
- The MCP ecosystem is solving connector logistics. The harder problem is environment design — giving agents space to think, not just endpoints to call.
- A practical litmus test for your stack: if an agent can't query it, extend it, or rewrite it, it's working against you.
Teaching a New Dog Old Tricks: Hacking FIDO Passkeys
- The audience will know the intended purpose of passkeys and a contextual history of the FIDO2 protocol
- The audience will see examples of vulnerabilities and patterns of weakness, and be wary of magical claims about passkey security
- When deploying passkeys in the enterprise, the audience will have a working threat model and know how to think about configuration and vendor selection
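To ground the threat-model discussion, here is a minimal Python sketch of a few relying-party checks from the WebAuthn spec (ceremony type, challenge, origin, and RP ID hash). It is illustrative only: it omits signature, sign-count, and flag verification, and the function and parameter names are our own.

```python
import base64, hashlib, json

def check_assertion(client_data_json: bytes, authenticator_data: bytes,
                    expected_challenge: bytes, expected_origin: str,
                    rp_id: str) -> None:
    """Partial relying-party checks from WebAuthn; no signature check here."""
    client_data = json.loads(client_data_json)

    assert client_data["type"] == "webauthn.get", "wrong ceremony type"

    # Challenge in clientDataJSON is base64url without padding.
    ch = client_data["challenge"]
    sent = base64.urlsafe_b64decode(ch + "=" * (-len(ch) % 4))
    assert sent == expected_challenge, "challenge mismatch (replay?)"

    # Skipping or loosening this check enables phishing-origin logins.
    assert client_data["origin"] == expected_origin, "unexpected origin"

    # First 32 bytes of authenticatorData are SHA-256 of the RP ID.
    assert authenticator_data[:32] == hashlib.sha256(rp_id.encode()).digest(), \
        "rpIdHash mismatch"
    # A real deployment also verifies the signature, sign count, and UV/UP flags.
```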
No Encryption Required: Why Modern Ransomware Bypasses Everything and What DFIR Finds When It Does
1. Identify the four MFA bypass and initial access techniques ransomware affiliates actively use (AiTM proxy phishing, session token replay, push fatigue, and social-engineered RMM tool installs) and determine which one was used from post-incident forensic artifacts
2. Recognize BYOVD as a pre-attack setup step, not a novel technique, and detect it through driver load auditing and EDR telemetry gap analysis rather than relying on blocklists
3. Scope exfiltration forensically when encryption-less extortion through legitimate cloud services defeats both DLP and backup strategies and no ransomware binary exists
4. Deploy honeycreds, canary files, and canary API keys as detection controls that generate zero false positives and function independently of endpoint agents, MFA, and network monitoring (see the sketch after this list)
5. Map each "comfort blanket" control to the specific deception-based detection that covers its known bypass, with a concrete deployment plan executable within one week
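As one concrete example of the canary-file idea in item 4, here is a minimal Python sketch using the watchdog library. The bait directory and the print-based alert are placeholders; a real deployment would forward the event to a SIEM or webhook.

```python
# Minimal canary-file monitor (pip install watchdog). Legitimate users have no
# reason to touch bait data, so any access is a high-fidelity alert that needs
# no EDR agent, MFA signal, or network sensor.
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

CANARY_DIR = r"C:\Finance\Payroll_2025"   # hypothetical bait directory

class CanaryHandler(FileSystemEventHandler):
    def on_any_event(self, event):
        print(f"ALERT: {event.event_type} on {event.src_path}")  # swap for SIEM/webhook

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(CanaryHandler(), CANARY_DIR, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```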
Breaking the Stream: Real-Time AI Model Exploitation and Defense Strategies
1. Practical exploitation skills: Hands-on understanding of 5+ AI attack techniques including real-time streaming exploits, with code examples and tools they can use to test their own systems
2. Actionable defense playbook: A comprehensive security framework with specific controls for streaming AI, including token-level validation, real-time monitoring configurations, and circuit breaker implementations (sketched after this list)
3. Real-world threat intelligence: Knowledge of active attack campaigns targeting streaming AI systems, TTPs used by threat actors, and indicators of compromise for streaming-specific attacks
4. Security testing toolkit: Access to open-source tools, scripts, and methodologies for penetration testing streaming AI systems, including WebSocket/SSE security testing frameworks
5. Streaming AI security architecture: A structured approach to secure real-time inference deployments, including edge protection, rate limiting strategies, and monitoring for streaming endpoints
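As a minimal illustration of token-level validation plus a circuit breaker for a streamed model response, the Python sketch below holds a small sliding window so partial tokens cannot slip content past a per-chunk filter. The generator, the policy check, and the thresholds are stand-ins, not the controls presented in the talk.

```python
from typing import Iterable, Iterator

MAX_VIOLATIONS = 3  # trip threshold; tune per deployment

def violates_policy(window: str) -> bool:
    return "BEGIN RSA PRIVATE KEY" in window  # placeholder content check

def guarded_stream(model_stream: Iterable[str]) -> Iterator[str]:
    """Yield tokens to the client while validating a sliding window; trip the
    circuit breaker and stop generation after repeated violations."""
    window, violations = "", 0
    for token in model_stream:
        window = (window + token)[-512:]   # sliding validation window
        if violates_policy(window):
            violations += 1
            if violations >= MAX_VIOLATIONS:
                yield "\n[stream terminated by policy]"
                return                      # circuit breaker: stop generation
            continue                        # drop the offending chunk
        yield token
```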
The Underground: Opening Happy Hour
The Gauntlet
Registration and Underground Exhibits Open
6 Hard Lessons from Zero Trust Deployments: What the Field Is Actually Seeing
Zero Trust has quickly moved from security concept to board-level mandate. Yet the reality inside most organizations is far messier than the architecture diagrams suggest. Based on several hundred interviews with IT and security practitioners as well as the analyst community, this session explores what actually happens when organizations attempt to implement Zero Trust. The findings are sobering: roughly 60–70% of Zero Trust initiatives stall before reaching maturity, and fewer than 20% achieve a fully realized Zero Trust architecture. Rather than focusing on theory or vendor frameworks, this session examines six hard lessons from the field, highlighting both the successes and the failures organizations encounter along the way. Topics include why many Zero Trust initiatives stall, where organizations underestimate complexity, the architectural decisions that matter most, and what successful deployments do differently. This presentation is not a product pitch. Instead, it’s a candid discussion of the hard realities of Zero Trust deployment, grounded in the experiences of practitioners across hundreds of organizations. Attendees will leave with practical insights into what works, what fails, and how to move a Zero Trust initiative from concept to operational reality.
Generative AI, Cybersecurity, and Ethics
As generative AI rapidly reshapes the cybersecurity landscape, it is transforming both attack capabilities and defensive strategies. In this session, Dr. Ray Islam explores how AI is accelerating cyber offense through automated phishing, polymorphic malware, adversarial prompt engineering, and deepfake-enabled social engineering, while simultaneously empowering defenders with intelligent threat detection, autonomous SOC workflows, and predictive risk modeling. Beyond the technical arms race, this talk addresses the ethical fault lines emerging in AI/ML-driven security: algorithmic bias in detection systems, privacy implications of large-scale surveillance models, AI governance gaps, and the responsible deployment of autonomous cyber tools.
The Cybersecurity Industrial Complex (CIC)
The root causes of successful cyberattacks that occurred over 25 years ago are still around today. The cybersecurity industry hasn't done a very good job in correcting these flaws. What have we been doing this past quarter century? There appears to be no incentive for the CIC to fix these flaws.
The Future SOC: Human + Agent Collaboration at Scale
Security Operations Centers are under increasing pressure as alert volumes grow and environments become more complex. While AI copilots have improved analyst productivity, they have not fundamentally changed how the SOC operates. A new model is emerging: the Agentic SOC. In this model, AI agents act as active participants in security operations, handling triage, enrichment, correlation, and reporting, while humans focus on judgment and oversight. This creates a hybrid workforce that can operate with greater speed, scale, and consistency. This session breaks down a practical model for building the next-generation SOC based on three pillars: context, coordination, and control. Attendees will learn how to move beyond isolated automation toward coordinated, governed AI systems, and what it takes to operationalize AI agents safely and effectively at enterprise scale.
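As a minimal illustration of the "control" pillar, the Python sketch below shows agent-handled enrichment and triage with a human review gate above a severity threshold. All function names, thresholds, and data are illustrative placeholders, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    summary: str
    severity: int = 0   # 0-100, set during triage

def enrich(alert: Alert) -> Alert:
    alert.summary += " | asset: crown-jewel DB | user: svc-backup"  # stub enrichment
    return alert

def classify(alert: Alert) -> Alert:
    alert.severity = 85 if "crown-jewel" in alert.summary else 20   # stub model call
    return alert

AUTO_CLOSE_BELOW, HUMAN_REVIEW_AT = 30, 70  # control thresholds, set by policy

def triage(alert: Alert) -> str:
    alert = classify(enrich(alert))
    if alert.severity < AUTO_CLOSE_BELOW:
        return "auto-closed with rationale logged"
    if alert.severity >= HUMAN_REVIEW_AT:
        return "escalated to analyst queue"      # human judgment stays in the loop
    return "agent-handled: contained and reported"

print(triage(Alert("A-1042", "suspicious login")))
```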
Unfair Advantage: How AI Supercharges Hackers, Defenders, and Founders
In 2026, anyone with code and cyber skills walks into the game with an unfair advantage. A generation ago, you needed a Fortune‑500‑sized R&D budget to build serious security technology; today, small, focused teams can ship AI‑powered security products that rival legacy vendors and realistically chase unicorn outcomes. In this session, Marcus J. Carey, Principal Research Scientist at ReliaQuest and creator of the Tribe of Hackers series, breaks down how AI shifts the balance of power for red teams, blue teams, and founders. We’ll walk through concrete AI workflows that amplify recon, exploit research, detection engineering, purple teaming, and incident response—and how turning those workflows into product is the new unfair advantage. This talk is built for people who ship, patch, and respond, not people pitching slideware.
Break and Networking
Breaking the Agent: Securing Endpoint AI Agents with OpenClaw in Production Environments
1. How to build a practical threat model for endpoint AI agents like OpenClaw, including concrete attack vectors and failure modes.
2. How common OpenClaw misconfigurations can be exploited, and how to prevent them using least-privilege and runtime controls.
3. A reference security architecture for production deployments, including policy enforcement, monitoring, and explainability.
4. How to operationalize continuous governance for agentic AI, moving beyond static assessments to runtime assurance.
Vulnerability research is an extremely labor-intensive discipline in cybersecurity. Modern software poses a significant challenge, with codebases encompassing millions of lines and complex, fast-evolving attack surfaces that outpace manual analysis. Consequently, traditional vulnerability research faces a difficult choice: either conduct deep analysis over a narrow scope or achieve shallow coverage across a broad attack surface. Large Language Models (LLMs) show great promise due to their remarkable capabilities in code comprehension, pattern recognition, and technical reasoning. However, a naive application of LLMs to security research often yields unreliable results. Models may hallucinate vulnerabilities, overlook essential context, or fail to rigorously validate their findings. The central issue is not whether AI can aid vulnerability research, but how to structure that assistance so it genuinely enhances human expertise without replacing human judgment. In this talk, we will share our journey, which began with building a reliable autonomous software development infrastructure, and show how we applied what we learned to create a nearly fully automated vulnerability research and exploit development platform that also produces actionable detections for our product.
SIR-Bench: Evaluating Investigation Depth in Security Incident Response Agents
• Investigation vs. Classification: Learn the critical difference between an AI that correctly triages alerts (97.1%) and one that conducts genuine forensic investigation (41.9% novel finding coverage)—and why both metrics matter for production deployment
• Adversarial Evaluation Design: Implement an LLM-as-Judge that inverts the burden of proof, preventing the confirmation bias that accepts alert repetition as valid investigation (see the sketch after this list)
• Realistic Benchmark Generation: Use the OUAT methodology to create measurable ground truth from real incident patterns without exposing sensitive production data
• Performance by Attack Category: Understand why Unauthorized Access investigations yield deep findings (47.9% hit 7+ novel discoveries) while Malicious File Execution struggles (1.9%)—and what this means for agent deployment decisions
• Production Readiness Framework: Apply the M1/M2/M3 metric framework to evaluate whether your AI security tools are performing genuine investigation or sophisticated pattern matching
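To illustrate the burden-of-proof inversion (not the SIR-Bench implementation itself), here is a minimal Python sketch: a judge only credits findings that add evidence beyond the original alert text. `llm_judge` is a naive stand-in for an actual judge-model call, and the metric is a simplified stand-in for the novel-finding-coverage measure described above.

```python
def llm_judge(finding: str, alert_text: str) -> bool:
    """Placeholder for an LLM call answering: does this finding add evidence
    that is NOT already stated or trivially implied by the alert?"""
    return finding.lower() not in alert_text.lower()  # naive stand-in

def novel_finding_coverage(findings: list[str], alert_text: str,
                           ground_truth: list[str]) -> float:
    """Fraction of ground-truth facts the agent surfaced *beyond* the alert."""
    novel = [f for f in findings if llm_judge(f, alert_text)]
    hits = sum(any(g.lower() in f.lower() for f in novel) for g in ground_truth)
    return hits / len(ground_truth) if ground_truth else 0.0
```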
Speak Security With a Business Accent: Influence, Trust, and Why Cyber Keeps Losing the Room
• How to reframe security conversations so people actually listen
• Techniques for gaining buy-in without fear, authority abuse, or manipulation
• Practical methods for translating cyber risk into business value
• How to build trust with peers, leadership, and non-technical stakeholders
• Why improving communication improves security outcomes
• Whether they have drunk the Kool-Aid, i.e., fallen victim to cybersecurity misconceptions and lies.
• Why technology is the real problem.
• Why zero trust is the only way.
For years, cybersecurity has been framed around the risk of loss. But business leaders don’t make decisions that way; they focus on growth outcomes. So what happens when we start talking about cybersecurity as a business enabler? This talk explores how to shift the conversation from cost and complexity to revenue, productivity, and competitive advantage. Through real-world examples, the audience will learn how security investments can unlock new business value, strengthen executive alignment, and turn security teams from perceived blockers into growth partners.
The Gauntlet Presentations and Voting
The Gauntlet Winner Announced and Closing Remarks
AfterFuse Party
Where elite minds, cutting-edge ideas, and next-level experiences collide in an exclusive, invite-only after hours event. This is not just a party – it’s an explosive fusion of technology, networking, and sensory indulgence like you’ve never seen before.
Off-the-Record: What Didn’t Make the Stage
Before you head home, join speakers and operators for a candid, off-the-record breakfast where real experiences, hard lessons, and unfiltered insights are shared. Bring your key takeaways and be part of the conversation.
Speakers

Adam Darrah
Vice President of Intelligence · ZeroFox

Adam Vincent
Founder and CEO · Bricklayer AI

AJ Nash
CEO · Unspoken Security

Amit Serper
Lead Security Researcher · CrowdStrike

Avi Rubin
Professor Emeritus, Johns Hopkins University & Founder and Managing Director, Harbor Experts · Harbor Labs

Caitlin Condon
VP of Security Research · VulnCheck

Cristian Leo
Data Scientist · AWS

Daniel Begimher
Senior Security Engineer · AWS

David Etue
Chief Strategy Officer · Cyberbit

David Girvin
AI Security Architect · Sumo Logic

David Tohn
CEO · BTS Software Solutions (BTS)

Dr. Aleksandr Yampolskiy
Co-Founder and CEO · SecurityScorecard

Dr. Louis DeWeaver
Cyber Security Consultant · Marsh McLennan Agency

Dr. Ray Islam
Adjunct Professor (NLP/ML) · George Mason University

Geoff Robinson
Principal Consultant · ivision

Harshit Kohli
Senior Technical Account Manager · AWS

Iftach Ian Amit
Co-Founder and CEO · Gomboc.ai

Jackson Reed
Founder · Barding Defense

Jacob Gajek
Principal Security Researcher · eSentire

James Foster
CEO · eSentire

Jamie Tolles
Vice President, Incident Response · IDX

Joel Bauman
Founder and CEO · Synqly

John Spiegel
Field CTO · HPE

Josh Mason
Solutions Architect · Synack

Justin Chavez
Head of Applied AI Engineering · Inkeep

Kevin Kiley
CEO · Airia

Lavnish Talreja
Data Engineer · McKinsey & Company, Inc.

Marcus Carey
Principal Research Scientist · ReliaQuest

Matt Brown
Founder & Principal Consultant · Brown Fine Security

Michal Bazyli
Founding Cybersecurity Researcher · Cracken

Mike Price
Vice President, Product & Engineering · VulnCheck

Nish Majmudar

Rakesh Pal
Sr. Technical Account Manager · Amazon Web Services

Randy Marchany
Chief Information Security Officer · Virginia Tech

Ryan Hasmatali
Software Developer · eSentire

Samantha St-Louis
VP of AI App Innovation · Atmosera

Scott Miller
Security Consultant and Penetration Tester · Accenture Security

Sean Satterlee
Senior Principal Penetration Tester · Device Recon Labs

Sounil Yu
Creator of the Cyber Defense Matrix & Chief AI Officer · Knostic

Steven Solomon
CEO · American Cyber

Tanvi Desai
Sr. Cloud Consultant · Google Cloud

Travis Lowe
Cloud Security Research · CrowdStrike

Vincent Swolfs
Director of Hacking & CISO · CISA.one

Vishavjit Singh
Senior Threat Intelligence Researcher · eSentire

Yonatan Perry
Director of Engineering · CrowdStrike

Yuthvek MJ
Security Researcher
Sponsors
Location
RBLN East takes over the Hyatt Regency Reston. Sessions, workshops, and hands-on chaos happen on the second floor. Room block and parking info below—get ready to make your mark.
Transportation & Parking
• Washington Dulles International Airport (IAD): 6 miles
• Ronald Reagan Washington National Airport (DCA): 24 miles
• Metro (WMATA) Reston Town Center West Station: 6 blocks
• Parking: 25% discount on overnight and daily self-parking for all RBLN attendees



