WalkXR-AI / docs / architecture / PRD.md
PRD.md
Raw

WalkXR-AI Project Requirements Document (PRD)

1. Introduction: The Verse & WalkXR

1.1. The Verse: The 'Why'

The Verse is a worldbuilding engine and a people engine dedicated to creating games, experiences, and rituals that uplift humanity. Our focus is on fostering mindfulness, pro-sociality, empathy, and perspective-taking through co-creation and community collaboration. The Verse operates a dual-entity model: a non-profit incubator for validating ideas and a for-profit B Corp (Verse Ventures) for commercializing successful projects.

1.2. WalkXR: The 'What'

WalkXR is the flagship platform of The Verse. It is a Transformative Compassion & Learning Platform designed as an immersive, interactive social therapeutic. It addresses the poor engagement of traditional wellness apps by using gamification, multimedia storytelling, emotional regulation techniques, and shared rituals to connect users with themselves, their communities, and critical societal issues.

2. The Initial Focus: The "Small Moments" Walk

Our initial development is centered on perfecting the AI-driven experience for a single, high-impact walk: Small Moments.

Walk Name Purpose Emotional Arc User Journey Overview Key Activities
Small Moments To rediscover the power of overlooked moments of human connection and uplift emotional and social well-being in a lighthearted, playful way. From hesitation and social performance to warmth, presence, and a renewed readiness for micro-interaction. Projection → Storytelling → Validation → Self-Awareness → Embodied Readiness → Group Play → Reflection → Habit Formation Projective narrative, 3-part social story sharing, persona identification, collaborative Mad Libs, personal field guide creation.

This walk is designed to be a safe, structured space to explore the subtle moments of connection that research shows are vital to our happiness.

3. The Scientific Foundation: Deeper Conversations Research

The design of the "Small Moments" walk and its AI companions is directly informed by research on social connection. The core scientific thesis is:

People avoid deep conversations with strangers due to miscalibrated expectations. We systematically underestimate how caring and interested others are, while overestimating the potential for awkwardness. This creates a psychological barrier to the happiness and connection that deeper conversations foster.

Our AI's primary role is to act as a caring, non-judgmental partner—an "Awkwardness Companion"—that helps users overcome this barrier in a safe, simulated environment.

4. The WalkXR AI Development Tracks

To achieve our vision of a uniquely personalized and emotionally resonant experience, our work is organized across four dedicated development tracks. This structure ensures that every component of our system is built with advanced, custom-first principles.

  • Track 1: EI Design & Evaluation: This track is responsible for the 'what' and 'why' of our agents' intelligence. It translates research into actionable design principles and builds the evaluation frameworks to measure agent performance against key metrics like emotional resonance, therapeutic alignment, and user trust. It ensures our AI is not just functional, but genuinely effective.

  • Track 2: Simulation & Data: This track provides the foundational data that makes our agents smart. It is evolving from an initial system (Google Sheets, Apps Script, OpenRouter) into a robust, Python-based framework. This future system will leverage QA workflows, reinforcement learning (RL), and LangGraph for complex scenario analysis and automated data generation, creating a world-class pipeline for agent training and evaluation.

  • Track 3: Agents & Memory: This is the core product track, focused on building the agents themselves. We prioritize deep, custom agentic development using LangGraph for its fine-grained control over state and logic, allowing us to create complex, multi-agent interactions that go far beyond simple prompt-response chains. This track also builds our hybrid memory system, combining semantic (vector) and relational (graph) memory for true long-term personalization.

  • Track 4: Full-Stack & Infrastructure: This track builds the production-ready systems to deliver the WalkXR experience. The core backend is built with FastAPI for its speed and scalability. Internal tools and prototypes are rapidly developed with Streamlit. This track ensures our custom agentic systems are served reliably, efficiently, and are ready to scale to meet user demand.

5. Roadmap: The Path to an Emotionally Intelligent OS

Our development is guided by a phased approach, ensuring we build a robust foundation before scaling complexity. The detailed, epic-level plan is tracked in our official WalkXR_AI_Backlog.md.

Phase 1 (E01-E06): Foundational Systems & Core Architecture

  • Goal: To build the core infrastructure for creating stateful, knowledge-grounded, and orchestrated AI agents.
  • Key Activities:
    • Establishing a robust RAG pipeline for knowledge retrieval.
    • Defining core agent architecture with state management and memory.
    • Implementing a multi-agent orchestrator using LangGraph.
    • Creating the first fine-tuned models based on simulation data.

Phase 2 (E07-E10): 'Small Moments' Walk v1.0

  • Goal: To build, test, and release the first complete, end-to-end multi-agent walk experience.
  • Key Activities:
    • Developing the full cohort of specialized agents for the 'Small Moments' walk (Narrative, Ritual, Play, Reflection, etc.).
    • Integrating all agents into the master orchestrator for a seamless user journey.
    • Conducting rigorous end-to-end simulation, adversarial testing, and performance validation.

Phase 3 (E11-E12): WalkXR OS Platform

  • Goal: To generalize the architecture into a scalable platform and introduce dynamic, continuous learning.
  • Key Activities:
    • Refactoring the system into a "Walk Factory" to enable rapid creation of new walks.
    • Evolving the orchestrator into a dynamic engine that personalizes the user journey in real-time.
    • Implementing reward models for continuous learning (RLAIF) and generative agents for novelty.

6. The Future: Long-Term Vision

The long-term vision is to create a truly adaptive digital companion. This system will move beyond rule-based triggers to use AI for dynamic agent selection, integrate multimodal feedback (voice tone, biometrics), and continuously learn from every interaction to guide users toward greater self-awareness, compassion, and connection.