# WalkXR Simulation Docs & Instructions

# **1. Overview**

The WalkXR Simulation System is an automated method for rapidly testing how different people might emotionally and cognitively respond to any part of a WalkXR experience. It simulates interactions using structured inputs and custom persona profiles, then sends them through a dedicated large language model (LLM) deployed via OpenRouter. This LLM has been parameterized for our simulations, allowing it to generate nuanced, emotionally attuned simulation output at scale.

Each simulation returns structured insight: how people might feel across a moment or an entire journey, what resonates or causes friction, where curiosity or fatigue emerges, and how AI might support that experience. In addition to emotional arcs, the simulations provide feedback on prompt design, rituals, tone, inclusivity, storytelling effectiveness, and specific AI feature opportunities. They also generate potential training data for future LLM fine-tuning or RAG integration. The goal is to enable thoughtful iteration, adaptive design, and emotionally intelligent system development across both general WalkXR content and AI-based companion features.

Each walk is modular, broken into individual moments called “modules,” including prompts, rituals, reflections, and interactive storytelling. We simulate them using four modes, ranging from focused (one persona, one module) to comprehensive (all personas, full walk). With automation now fully integrated, the simulation system can rapidly ingest parameters, generate results, and organize outputs into an analysis-ready format inside a central spreadsheet.

All data structures and insight types are informed by design decisions made by Ben and Roman to support scalable, real-time WalkXR iteration and AI prototyping. Each simulation mode is tuned to extract specific feedback relevant to its scope. If you'd like new insight types or dimensions to be captured, contact Roman Di Domizio via Discord. All simulation components, including prompt templates, Apps Script logic, and output schema, must remain in sync to ensure the system works as intended.

# **2. Current Sim Content**

- We currently have three walks loaded into the Simulation System.
- Go to section 4 to simulate one of these walks.
- Continue to section 3 to add a new walk to the system.

| Walk | Modules | Personas |
| ----- | ----- | ----- |
| Small Moments | 8 modules based on the 5/14/25 Miro demo | 10 diverse personas with detailed backgrounds generated by ChatGPT |
| WalkXR Public Service Experience (PSE) for Financial Scams | 10 modules | 10 diverse personas with detailed backgrounds generated by ChatGPT |
| WalkXR Diabetes | 8 modules | 10 diverse personas with detailed backgrounds generated by ChatGPT |

**Simulation Modes**

The WalkXR Simulation System supports four simulation modes, each designed to simulate a different combination of persona and walk/module scope. While the structure of the output remains consistent, each mode serves a distinct purpose—surfacing emotional dynamics, design friction, and opportunities for AI integration across different contexts.

Modes 2 and 3 are currently the most effective for generating high-quality, structured training data. They allow us to scale testing across multiple modules and personas while maintaining clean, analyzable results for tagging, tuning, and agent development.

All modes now use the same 25-field JSON output structure, where each object represents one persona engaging with one specific module. This enables precise mapping across emotional responses, cognitive load, design clarity, and AI readiness. The output is optimized for both human review and machine learning workflows.
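For orientation, here is roughly what a single output object looks like, abridged to a few of its 25 fields (the full schema is listed after the mode descriptions below). The field names follow the schema; the values are invented purely for illustration:

```javascript
// Abridged, hypothetical example of one output object (one persona × one module).
// Field names follow the 25-field schema below; values are invented for illustration.
const sampleOutput = {
  "Persona ID": "SM-P03",       // must match the canonical persona key
  "Walk Name": "Small Moments", // must match the official walk title
  "Module ID": "SM-M01",        // each object covers exactly one module
  "Mode Ran": "Mode 2",
  "Emotion Before": "Tired but curious after a long workday.",
  "What Resonated": "The invitation to slow down felt permission-giving.",
  "Cognitive Load": "Low; the prompt was short and concrete.",
  "Was AI Desired": "Yes; a gentle check-in after the reflection would have helped.",
  "Response Sentiment Tag": "positive"
  // ...the remaining fields are omitted here for brevity
};
```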
- #### **Mode 1: Single Persona × Single Module**
  Simulates how one persona responds to a specific module (e.g., prompt, ritual, reflection). Best for testing micro-level tone, prompt clarity, or emotional activation. Useful for quick iteration or isolated module tuning.
- #### **Mode 2: Single Persona × Full Walk**
  Simulates one persona across every module in a full walk. Best for surfacing pacing patterns, emotional flow, and cumulative experience insights. **Highly recommended for generating full-run training data for agent behavior across a complete experience.**
- #### **Mode 3: All Personas × Single Module**
  Simulates all personas responding to the same single module. Best for comparing how diverse backgrounds interpret and react to the same prompt. **Ideal for inclusivity testing, identifying resonance/failure patterns, and tuning agent adaptability.**
- #### **Mode 4: All Personas × Full Walk**
  Simulates the full journey for all personas across every module. Best for comprehensive pattern detection, agent personalization training, and roadmap-level experience analysis. Use with caution due to the large output size and generation cost.
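Conceptually, a mode is just a rule for expanding the UI selection into (persona, module) pairs, with one output object generated per pair. The sketch below illustrates that expansion under simple assumptions (plain arrays of IDs and an invented `expandMode` helper); the actual Apps Script logic may differ:

```javascript
// Minimal sketch of how a mode expands into persona × module pairs.
// Illustrative only; the deployed Apps Script implementation may differ.
function expandMode(personaIds, moduleIds, selectedPersona, selectedModule) {
  const personas = selectedPersona === 'All Personas' ? personaIds : [selectedPersona]; // Modes 3/4 vs 1/2
  const modules = selectedModule === 'All Modules' ? moduleIds : [selectedModule];      // Modes 2/4 vs 1/3
  const pairs = [];
  for (const p of personas) {
    for (const m of modules) {
      pairs.push({ personaId: p, moduleId: m }); // one 25-field output object per pair
    }
  }
  return pairs;
}

// Example: Mode 3 (All Personas × Single Module) yields one pair per persona.
// expandMode(['SM-P01', 'SM-P02'], ['SM-M01', 'SM-M02'], 'All Personas', 'SM-M01')
// returns [{personaId: 'SM-P01', moduleId: 'SM-M01'}, {personaId: 'SM-P02', moduleId: 'SM-M01'}]
```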
Each mode returns an array of structured JSON objects, one object per persona per module, using the following **25-field schema**:

1. **Persona ID** Internal ID for the simulated persona; must match the canonical WalkXR system identifier.
2. **Walk Name** Internal walk title; must match the official name used in the WalkXR database.
3. **Module ID** Internal ID for the module simulated; each object refers to one specific module only.
4. **Module Prompt** The full prompt or a complete summary, including any visual or narrative media cues.
5. **Mode Ran** The simulation mode used (e.g., "Mode 2"); reflects which template was executed.
6. **Emotion Before** The persona’s emotional or cognitive state before the module began.
7. **Raw Prompt Response** The in-character response the persona would give to the prompt or activity.
8. **Raw Emotional Reaction** Immediate somatic and emotional response after completing the module.
9. **Prompt Interpretation** How the persona mentally and emotionally processed or understood the prompt.
10. **What Resonated** What the persona found meaningful, impactful, or emotionally aligned.
11. **What Missed Or Caused Friction** Anything that felt flat, confusing, inaccessible, or emotionally dissonant.
12. **Module Effectiveness** Whether the module achieved its emotional or reflective goal from the persona’s perspective.
13. **Cognitive Load** How mentally demanding or accessible the module felt to the persona.
14. **Emotional Safety Level** A description of how emotionally safe or unsafe the module experience was.
15. **Descriptive Reflection** A short narrative of the experience, including metaphors, sensations, and key shifts.
16. **Honest Opinion** What the persona would say informally about the module if asked afterward.
17. **Frictions Encountered** Specific points of difficulty; may include UX, tone, content, or pacing issues.
18. **Design Suggestions** Ideas for improving pacing, structure, or emotional flow of the module.
19. **Prompt Revision Suggestions** Rewritten or refined prompt ideas to improve clarity or emotional alignment.
20. **Was AI Desired** Whether an AI agent would have been helpful in this module, and why or why not.
21. **AI Opportunities** Where and how an AI companion could have added meaningful support or value.
22. **Suggested AI Behavior** A sentence or behavior the AI could have used to improve the experience.
23. **AI Tone** Desired voice, posture, or emotional stance of the AI agent for this persona.
24. **Agent Training Notes** Commentary to guide agent development — edge cases, tone needs, or behavior tips.
25. **Response Sentiment Tag** A summary tag (e.g., positive, neutral, overwhelmed) to support tagging and clustering.

This schema is shared across all modes to ensure data consistency and allow multi-modal comparison. It is optimized for both agent design and emotional pattern analysis.

All simulations are powered by a custom-configured LLM hosted via OpenRouter (`deepseek/deepseek-r1-0528-qwen3-8b:free`). It handles longer context, performs better with structured JSON, and is more consistent for simulation chaining and reasoning, which matters when running long walks or full persona batches. The model is guided by carefully authored prompt templates and a structured Apps Script backend; output is automatically parsed, validated, and injected into the ‘All Output’ sheet, ready for analysis.

IMPORTANT: Do not change any field names, prompt schemas, or column headers without explicit coordination. The entire system (prompt templates, Apps Script logic, output schema, and UI) must remain synchronized.

⭐ You can add new walks, modules, and personas freely; just ensure they follow the formatting of existing entries.

# **3. Adding a New Walk**

To simulate a new walk not yet inside the [Simulation System Sheets file](https://docs.google.com/spreadsheets/d/13IkJHcrIRIHoa1SH9jwHEy_G_xo1B-Ko86AuLJSjMpE/edit?usp=sharing), follow these steps:

1. Open the WalkXR Simulation System Google Sheets file.
2. Navigate to the ‘Walk Description’ sheet.
3. Fill out all columns accurately for your walk.
4. Next, go to the ‘Modules’ sheet.
   1. Each module should represent a distinct moment, prompt, ritual, or interaction.
   2. Use the following format for module keys: TWO CAPITAL LETTERS for the walk + ‘-M’ + a two-digit number (e.g., SM-M01, SM-M02). The sketch after this section shows these patterns as regular expressions.
   3. Fill in all module-related columns: Title, Type, Description, Media, Emotional Goal, Prompt.
5. Now go to the ‘Personas’ sheet:
   1. You can create as many personas as needed for the walk.
   2. Use the matching format for persona keys: TWO CAPITAL LETTERS for the walk + ‘-P’ + a two-digit number (e.g., SM-P01, SM-P02).
   3. Fill out detailed persona fields: age, background, emotional state, goals, challenges, etc.
6. Now go to the ‘User Interface’ sheet:
   1. Add the walk name, modules, and personas you created into their respective drop-down menus.

NOTE: The more detailed and specific the modules and personas are, the more accurate and meaningful the simulation output will be.
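To make the key convention precise, the two formats can be expressed as regular expressions. This is an illustrative sketch only; the `isValidKey` helper is hypothetical and not part of the deployed Apps Script:

```javascript
// Hypothetical helper illustrating the module/persona key formats described above.
// Not part of the deployed Apps Script; shown only to make the convention precise.
const MODULE_KEY = /^[A-Z]{2}-M\d{2}$/;  // e.g., SM-M01 (two capital letters + '-M' + two digits)
const PERSONA_KEY = /^[A-Z]{2}-P\d{2}$/; // e.g., SM-P01 (two capital letters + '-P' + two digits)

function isValidKey(key) {
  return MODULE_KEY.test(key) || PERSONA_KEY.test(key);
}

// isValidKey('SM-M01') returns true
// isValidKey('SM-M1')  returns false (the number must be two digits)
// isValidKey('sm-p01') returns false (the letters must be capitals)
```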
# **4. Run a Simulation**

To run a simulation:

1. Open the ‘User Interface’ sheet in the WalkXR Simulation System file.
2. Use the dropdowns to select:
   1. Walk Name
   2. One Module or All Modules
   3. One Persona or All Personas
3. These combinations automatically correspond to the four simulation modes:
   1. Mode 1: One Persona × One Module
   2. Mode 2: One Persona × Full Walk
   3. Mode 3: All Personas × One Module
   4. Mode 4: All Personas × Full Walk
4. Click the ‘Generate Prompt’ button.
   1. This runs the Apps Script, which dynamically builds the simulation prompt using the selected walk, module(s), and persona(s).
5. The system then sends the prompt to a dedicated simulation LLM hosted on OpenRouter, which has been parameterized specifically for WalkXR’s use case.
6. The AI-generated simulation response is returned in structured JSON format and automatically saved to the ‘All Output’ sheet.

**There’s no need to copy/paste prompts or outputs anymore; everything happens in one click.**

# **5. How the System Works + How to Use the Output**

When you run a simulation from the ‘User Interface’ sheet, everything happens automatically:

- A simulation prompt is generated using your selected walk, module(s), and persona(s).
- The prompt is sent to OpenRouter, using the model `deepseek/deepseek-r1-0528-qwen3-8b:free`.
- The LLM returns structured feedback in the 25-field JSON format, which is automatically validated and saved in the ‘All Output’ sheet.

In real time:

- Cell B11 shows “⏳ Simulating…” immediately after the run starts.
- Once complete, it updates to show:
  - “✅ Structured response saved in 'All Output' 🎉”
  - 🧠 The raw AI response (optional preview)

**💡 While the raw response is helpful, it’s best to explore results in the All Output sheet.** There, you can:

- Filter by Walk Name, Mode Ran, Persona ID, or Module ID
- Scroll to the bottom to view your latest run
- Export or copy rows for analysis, synthesis, or training data prep

If you see “❌ JSON parsing or validation failed. See logs.”, the LLM response didn’t match the required 25-field format (usually one or more fields are missing). This happens occasionally; just run the simulation again.

Do not edit column headers or structure in ‘All Output’; the simulation system depends on this schema to work correctly. For custom formats, contact Roman.
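For contributors who want a mental model of the backend, the sketch below shows what the prompt → OpenRouter → ‘All Output’ flow could look like in Apps Script. It is a simplified illustration, not the deployed code: the `callSimulationLLM` name, the `OPENROUTER_API_KEY` script property, and the field-count validation are assumptions; only the OpenRouter endpoint, model ID, and ‘All Output’ destination come from this document.

```javascript
// Simplified sketch of the prompt -> OpenRouter -> 'All Output' flow.
// Illustrative only: the function name, OPENROUTER_API_KEY script property,
// and validation rule are assumptions, not the deployed implementation.
function callSimulationLLM(prompt) {
  const apiKey = PropertiesService.getScriptProperties().getProperty('OPENROUTER_API_KEY');
  const response = UrlFetchApp.fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + apiKey },
    payload: JSON.stringify({
      model: 'deepseek/deepseek-r1-0528-qwen3-8b:free',
      messages: [{ role: 'user', content: prompt }]
    }),
    muteHttpExceptions: true
  });

  // The model is instructed to return an array of 25-field objects.
  const content = JSON.parse(response.getContentText()).choices[0].message.content;
  const objects = JSON.parse(content);

  // Basic validation: every object must contain all 25 schema fields.
  for (const obj of objects) {
    if (Object.keys(obj).length !== 25) {
      throw new Error('JSON parsing or validation failed. See logs.');
    }
  }

  // Each validated object becomes one row in the 'All Output' sheet.
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('All Output');
  objects.forEach(obj => sheet.appendRow(Object.values(obj)));
}
```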