Ka'i Kau

Project

Fluid XR · Agentic prototype · 2024

VR Agent Jessica

Embodied AI assistant for VR with voice conversation, browser actions, and multi-step execution inside an immersive environment.

What it was.

Jessica explored what happens when an AI assistant can exist inside the environment where the user is already working instead of living in a separate chat window.

I built it as a combined system with low-latency voice, browser actions, multi-step planning, and an embodied interface that made the agent feel present rather than purely conversational.

The timing is worth noting: this prototype predates reasoning models becoming the default expectation for agent behavior, which makes the level of orchestration involved more notable in retrospect.

That combination mattered because embodiment alone is theater, while raw voice alone misses the affordances of spatial computing.

The project gave me practical experience with the latency, orchestration, and UX constraints that show up when an agent is expected to act in real time.

The system combined speech-to-speech interaction, web actions, and embodied presence in one loop. Even as a prototype, it demonstrated that an AI agent in XR could be useful, not just performative.
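That single loop can be sketched as a perceive-plan-act cycle. The sketch below is illustrative only: the function names, the keyword-based planner, and the stubbed browser executor are assumptions standing in for the real components (in practice the planner would be an LLM call and the executor a browser-automation layer), not the project's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Turn:
    """One round of the agent loop: hear, plan, act, reply."""
    transcript: str                      # user speech, already transcribed
    actions: list = field(default_factory=list)
    reply: str = ""


def plan(transcript: str) -> list:
    """Toy planner: map an utterance to browser-action steps.
    A real system would call a language model here."""
    if "open" in transcript:
        return [("navigate", transcript.split()[-1])]
    return []


def execute(actions: list) -> list:
    """Stub executor: pretend each browser action succeeded.
    A real system would drive an actual browser session."""
    return [f"{verb}:{arg}:ok" for verb, arg in actions]


def agent_turn(transcript: str) -> Turn:
    """Run one full cycle: speech in -> plan -> act -> speech out."""
    turn = Turn(transcript=transcript)
    turn.actions = plan(transcript)
    results = execute(turn.actions)
    turn.reply = "Done: " + ", ".join(results) if results else "Nothing to do."
    return turn


turn = agent_turn("open example.com")
print(turn.reply)  # Done: navigate:example.com:ok
```

Keeping every stage inside one synchronous turn is what makes the latency constraint concrete: any slow step (planning, browser action) directly delays the spoken reply.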
