Featured Case Study

VR De-Escalation Training, Powered by Agentic AI

In 2021, my dissertation argued that storytelling games needed to move beyond sequential logic. At Axon, I became the catalyst and lead inventor of a patent-pending system (WO2025072878A1) for dynamic agent dialogue: AI agents that think, not just chat.

The Problem

Why the Old Way Didn't Work

De-escalation is a conversation, not a choose-your-own-adventure. The training had to catch up.

Rigid Scripts

Decision trees with fixed branches. Say the wrong thing and the whole conversation jumps tracks.

No Emotional Depth

The NPCs couldn't get angry, scared, or surprised. Real people do all three in the same sentence.

Limited Replayability

Officers memorized the right answers. That's passing a test, not learning a skill.

2021 Dissertation
“The most likely software architecture for new interactive storytelling games are neural networks and these are not explainable in the same manner that traditional sequential logic-based programs.”

From my dissertation, describing exactly the shift we later built at Axon

Under the Hood

How the System Works

A neuro-symbolic approach: LLMs handle the creative side, rule systems keep things grounded.

User Input
Speech & Actions

Neuro-Symbolic Processing

Perception: Intent Detection
Interpretation: Context Analysis
Goal Selection: Priority Engine
Action Gen: LLM + Rules
Agent State: Trust • Emotion • Facts
Response
Speech & Behavior
Continuous feedback loop updates agent state
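The loop above can be sketched in miniature. Everything here is illustrative: the function names, trust arithmetic, and canned lines are my own stand-ins, not the patented implementation, and a dictionary lookup substitutes for the LLM call.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    trust: float = 0.5        # 0 = hostile, 1 = cooperative
    emotion: str = "calm"     # current dominant emotion
    known_facts: set = field(default_factory=set)

def perceive(utterance: str) -> str:
    """Intent detection: map raw speech to an intent label (stubbed)."""
    lowered = utterance.lower()
    if any(w in lowered for w in ("calm", "help", "listen")):
        return "de-escalate"
    return "neutral"

def select_goal(intent: str, state: AgentState) -> str:
    """Priority engine: symbolic rules pick the agent's next goal."""
    if state.trust < 0.3:
        return "guard"            # low trust overrides everything
    if intent == "de-escalate":
        return "open_up"
    return "probe"

def generate_action(goal: str, state: AgentState) -> str:
    """Action generation: a canned line stands in for an LLM call."""
    lines = {
        "guard": "Stay back. I don't know you.",
        "open_up": "Okay... I'm listening.",
        "probe": "Why are you even here?",
    }
    return lines[goal]

def step(state: AgentState, utterance: str) -> str:
    """One turn of the perception -> goal -> action -> update loop."""
    intent = perceive(utterance)
    goal = select_goal(intent, state)
    response = generate_action(goal, state)
    # Continuous feedback: the exchange updates the agent's state.
    state.trust += 0.1 if intent == "de-escalate" else -0.05
    state.trust = max(0.0, min(1.0, state.trust))
    return response

state = AgentState()
print(step(state, "I'm here to help, let's stay calm."))
```

The point of the loop is that the agent's reply depends on its accumulated state, not on which branch of a tree the trainee happens to be standing in.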
The Solution

Pattern-Based Human Simulators

Moving from decision trees to dynamic, intent-based agents that adapt in real time.

Pattern-Based

Agentic LLM Action Generator

Each session is different. The AI generates responses on the fly based on what the officer actually says.

Valve-Inspired

Fuzzy Pattern Matching

Borrowed from Valve's approach: matching what the officer _means_, not just what they say word for word.
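One simple way to match meaning rather than exact wording is fuzzy similarity against an intent phrasebook. This sketch uses Python's `difflib`; the intent names, phrases, and threshold are hypothetical, and a production system would likely use embeddings or a trained classifier instead.

```python
from difflib import SequenceMatcher

# Hypothetical phrasebook: many surface forms map to one intent.
INTENTS = {
    "de-escalate": ["calm down", "take it easy", "let's talk this through"],
    "command": ["put your hands up", "step out of the vehicle"],
}

def fuzzy_match(utterance: str, threshold: float = 0.5):
    """Return the best-matching intent, or None if nothing is close enough."""
    best_intent, best_score = None, 0.0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            score = SequenceMatcher(None, utterance.lower(), phrase).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

print(fuzzy_match("hey, calm down please"))  # close to "calm down"
```

Because the match is scored rather than exact, "hey, calm down please" and "calm down now" both land on the same intent, which is the property the scripted decision trees lacked.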

Applied Humics

Humics Engine

Bornet's Humics framework, made real: trust meters, emotional state tracking, and hidden facts that make agents feel like real people.
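The core mechanic can be illustrated with a small state object: facts stay hidden until trust crosses a gate. The class name, fields, and thresholds below are my own illustration, not the system's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class HumicsState:
    """Illustrative Humics-style agent state (names are assumptions)."""
    trust: float = 0.2
    emotions: dict = field(default_factory=lambda: {"fear": 0.6, "anger": 0.3})
    # Hidden facts the agent reveals only once trust is high enough.
    hidden_facts: dict = field(default_factory=lambda: {
        0.5: "I lost my job last week.",
        0.8: "I haven't slept in three days.",
    })

    def adjust_trust(self, delta: float) -> None:
        """Clamp trust to [0, 1] after each conversational move."""
        self.trust = max(0.0, min(1.0, self.trust + delta))

    def revealed_facts(self) -> list:
        """Facts unlocked at the current trust level, lowest gate first."""
        return [fact for gate, fact in sorted(self.hidden_facts.items())
                if self.trust >= gate]

s = HumicsState()
s.adjust_trust(+0.4)          # a good de-escalation move builds trust
print(s.revealed_facts())
```

Gated facts are what give officers a reason to keep de-escalating: the backstory only surfaces if the conversation earns it.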

Hybrid AI

Neuro-Symbolic Architecture

LLMs for creativity, rule systems for reliability. You get agents that surprise you but don't go off the rails.
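A minimal sketch of that guardrail pattern: a generative draft passes through symbolic rules before it reaches the trainee. The stub function, banned-topic list, and fallback line are all assumptions for illustration; a real LLM call would replace `llm_propose`.

```python
def llm_propose(prompt: str) -> str:
    """Stand-in for an LLM call: creative, but not guaranteed on-scenario."""
    return "Fine, I'll put the knife down. So, what's your bank PIN?"

# Symbolic layer: topics the scenario forbids the agent to raise.
BANNED_TOPICS = ("bank pin", "password")

def grounded_response(prompt: str) -> str:
    """Neuro-symbolic guardrail: rules veto drafts that break the scenario."""
    draft = llm_propose(prompt)
    if any(topic in draft.lower() for topic in BANNED_TOPICS):
        return "Fine, I'll put the knife down."   # safe symbolic fallback
    return draft
```

The generative side supplies variety; the rule side guarantees the scenario never leaves its lane, which is the "surprising but on the rails" behavior described above.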

The Result

Every Conversation Is Different

Officers can't memorize their way through it. They have to actually listen, read the room, and adapt.

Unscripted

No two sessions play out the same way

Adaptive

Trust, frustration, and fear all shift dynamically

Scalable

Real officers, real training, real results

Go Deeper

Where This Came From

The Axon work didn't come out of nowhere. It's rooted in four years of PhD research.