WalkXR AI is dedicated to creating emotionally intelligent agentic systems that enhance human connection, self-awareness, and empathetic understanding. These agents are designed to power immersive and interactive experiences within the WalkXR platform, a broader initiative by The Verse to build games, experiences, and rituals that uplift humanity.
**Mission:** To develop and deploy sophisticated AI agents that can perceive, understand, and respond to human emotion with nuance and integrity, fostering psychologically safe and transformative interactions.
**Vision:** A future where AI companions and roleplay agents act as catalysts for personal growth, facilitating deeper self-reflection, co-regulation, and social connection. The long-term vision is the WalkXR Emotional OS, a comprehensive orchestration engine that dynamically adapts experiences based on user emotional states and therapeutic goals.
This repository contains the core AI development for WalkXR agents, including knowledge base management, agent logic, and supporting infrastructure.
WalkXR-AI/
├── .env.template
├── .github/ # GitHub action workflows and templates
├── .gitignore
├── .python-version # Set by pyenv to lock Python version
├── README.md
├── docs/
│ ├── architecture/ # Core technical and product architecture documents
│ └── research/ # Foundational research and papers
├── notebooks/ # Jupyter notebooks for experimentation
├── pyproject.toml # Defines project dependencies and tool configurations
├── scripts/
│ └── manage_rag_index.py # CLI tool for managing the RAG knowledge base
├── src/
│ └── walkxr_ai/
│ ├── __init__.py
│ ├── agents/ # Agent implementations (future work)
│ └── rag/ # Core RAG system components
│ ├── __init__.py
│ ├── chunking_strategy.md # Documentation on chunking approaches
│ ├── rag_config.yaml # Configuration file for the RAG system
│ └── retrieval_engine.py # Main class for RAG pipeline logic
└── vector_store/ # Default local storage for ChromaDB
Repository Workflow: From Idea to Agent
This repository is structured to support a clear development workflow, from initial design and research to production-ready agents.
- docs/ (Design & Research): This is where ideas begin. Foundational research, product requirements, and technical architecture documents live here. Before writing code, consult the docs to understand the "why" behind an agent or feature.
- notebooks/ (Prototyping & Experimentation): Use Jupyter notebooks in this directory to rapidly prototype new concepts, test LLM prompts, and experiment with libraries like LangChain or LlamaIndex before formalizing them into the main codebase.
- src/walkxr_ai/ (Core Development): This is the heart of the project, containing all production Python code.
  - rag/: The foundational knowledge system that all agents will use.
  - core/: Core components like state management, safety layers, and memory systems will live here.
  - agents/: Where modular, reusable agents are built. Each agent should be self-contained and designed for future orchestration.
  - simulation/: Contains tools and schemas for simulating user interactions to test agent responses.
- tests/ (Validation & Quality): All Pytest unit and integration tests go here. Every new feature or agent added to src/ should be accompanied by corresponding tests to ensure reliability.
- scripts/ (Management & Operations): Contains high-level CLI tools for managing the system, such as the manage_rag_index.py script for interacting with the RAG knowledge base.

Follow these instructions carefully to create a consistent development environment.
Prerequisites: pyenv and Poetry (installation steps below). First, clone the repository:
cd path/to/your/development/folder
git clone https://github.com/VerseBuilding/WalkXR-AI.git
cd WalkXR-AI
Python Version Management (pyenv): We use pyenv to lock our Python version to 3.11.9.
# Install pyenv (macOS example)
brew install pyenv
# Follow shell setup instructions from pyenv
# Install and set the project's Python version
pyenv install 3.11.9
pyenv local 3.11.9
# Verify the version is active
python --version
# Expected output: Python 3.11.9
Dependency Management (Poetry): We use Poetry for managing project dependencies and virtual environments.
Install Poetry: Follow the official installation guide.
Install Dependencies:
The poetry.lock file, which ensures exact dependency versions, is specific to the operating system and Python version it was created on. To avoid conflicts, it is not committed to the repository. You will generate your own local lock file.
Run the following command to install the dependencies and create your poetry.lock:
poetry install
This command reads pyproject.toml, resolves the dependencies, installs them into a virtual environment, and generates a poetry.lock file tailored to your system.
Create a .env file for API keys and other secrets.
cp .env.template .env
Open the .env file and add any necessary keys (e.g., LANGCHAIN_API_KEY for LangSmith tracing).
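As an illustration, a filled-in .env might look like the following; LANGCHAIN_API_KEY is the variable mentioned above, and the value shown is only a placeholder:

```shell
# .env -- local secrets; keep this file out of version control
LANGCHAIN_API_KEY=ls__replace_with_your_key   # enables LangSmith tracing
```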
Local Models (Ollama): For local development, we use Ollama to serve LLMs and embedding models. Pull the required models:
ollama pull llama3 # Primary LLM for generation tasks
ollama pull nomic-embed-text # For text embeddings (RAG)
After completing all steps, verify that the RAG system is working.
poetry shell
The ingest command processes documents in the docs/ directory and creates a vector store in vector_store/walkxr_knowledge_base.
python scripts/manage_rag_index.py ingest
The query command tests the end-to-end RAG pipeline.
python scripts/manage_rag_index.py query "What is the agent design philosophy?"
If both commands run without errors, your development environment is correctly set up.
WalkXR AI is designed with modularity, scalability, and ethical considerations at its core.
Our development process is organized into four parallel, interconnected tracks:
Our Retrieval-Augmented Generation (RAG) system grounds our agents in the project's specific knowledge, ensuring their responses are relevant, accurate, and aligned with our design philosophy. The entire system is orchestrated by the RetrievalEngine class (src/walkxr_ai/rag/retrieval_engine.py).
graph TD
subgraph Ingestion Phase
A[Docs Folder] --> B(RetrievalEngine);
C[rag_config.yaml] -- defines chunking --> B;
B -- uses nomic-embed-text --> D[Embeddings];
D --> E[ChromaDB Vector Store];
end
subgraph Query Phase
F[User Query] --> G(RetrievalEngine);
G -- queries --> E;
E -- returns relevant chunks --> G;
G -- combines query + context --> H{LLM Prompt};
H -- sent to llama3 --> I[Synthesized Answer];
end
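In miniature, the query phase in the diagram works like this. The sketch below stands in toy three-dimensional vectors for nomic-embed-text embeddings and a plain cosine-similarity scan for ChromaDB; every name here is illustrative, not part of the RetrievalEngine API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "vector store": chunk text -> hand-written embedding (stand-ins for
# real nomic-embed-text output, which is much higher-dimensional)
store = {
    "Agents must be modular and self-contained.": [0.9, 0.1, 0.0],
    "The RAG pipeline stores embeddings in ChromaDB.": [0.1, 0.8, 0.2],
    "Poetry manages project dependencies.": [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    # Rank chunks by similarity to the query embedding and keep the top k;
    # the real engine then packs these chunks into a llama3 prompt.
    ranked = sorted(store.items(), key=lambda kv: cosine(query_embedding, kv[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.85, 0.15, 0.05]))  # closest to the "modular agents" chunk
```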
Key Components:
- rag_config.yaml: The central configuration file. It defines all parameters for the RAG pipeline, including paths to documents, the vector store location, the Ollama models to use for embeddings and generation, and the chunking strategy.
- Ingestion (ingest command): The RetrievalEngine reads documents from the source directory specified in the config. It applies the chosen chunking strategy (see chunking_strategy.md), generates vector embeddings for each chunk using nomic-embed-text, and stores them in the local ChromaDB vector store.
- Querying (query command): When a user submits a query, the RetrievalEngine first embeds the query text. It then searches the ChromaDB vector store to find the most semantically similar document chunks. Finally, it combines the original query with the retrieved context into a single prompt and sends it to a powerful LLM (llama3) to generate a comprehensive, context-aware answer.

This project leverages a range of modern technologies for AI development.
Local models are served via Ollama (llama3 for generation, nomic-embed-text for embeddings). All primary developer tasks are managed through a single, powerful command-line interface.
scripts/manage_rag_index.py: This script is the main entry point for managing and testing the RAG knowledge base. It uses Typer to provide a clean, documented CLI.
Activate the environment first: poetry shell
To ingest documents and build the knowledge base:
python scripts/manage_rag_index.py ingest
Run this command whenever you add or update documents in the docs/ folder.
To test the RAG pipeline with a query:
python scripts/manage_rag_index.py query "What is the project's approach to agent safety?"
This will retrieve relevant information from the knowledge base and generate an answer.
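The actual script builds its CLI with Typer; purely as an illustration, the same two-command surface can be sketched with the standard library's argparse (everything other than the ingest and query command names is an assumption):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="manage_rag_index.py",
                                     description="Manage the RAG knowledge base")
    sub = parser.add_subparsers(dest="command", required=True)
    # ingest: chunk, embed, and store documents from docs/
    sub.add_parser("ingest", help="Build the vector store from docs/")
    # query: run an end-to-end RAG query
    query = sub.add_parser("query", help="Ask the knowledge base a question")
    query.add_argument("text", help="The question to ask")
    return parser

args = build_parser().parse_args(["query", "What is the agent design philosophy?"])
print(args.command, args.text)
```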
Your First Agent (SmallTalkAgent): Our development philosophy is centered on creating modular, independent agents that can be orchestrated by the future WalkXR Emotional OS. Your first contribution should be building a simple SmallTalkAgent.
Here is the recommended workflow, designed with modularity in mind:
1. Create the agent file: In src/walkxr_ai/agents/, create a new file named small_talk_agent.py.
2. Define the SmallTalkAgent class: It should be initialized with a reference to the RetrievalEngine from our RAG system to ensure its responses are grounded in our knowledge base.
3. Implement a generate_response method: This method will take a user query as input. Internally, it should first call the RAG system's query method to get relevant context. Then, it will pass that context along with the original query to an LLM to generate a conversational, in-character response.
4. Add tests: Create a tests/agents/ directory with tests validating that your agent can be initialized and can generate a response.

By following this pattern, you create a self-contained, testable, and configurable agent: a perfect building block for our larger orchestration platform.
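As a rough sketch of this pattern (not the real RetrievalEngine API: the engine and the LLM call are stubbed, and all names here are assumptions):

```python
# Minimal SmallTalkAgent sketch. The stub engine and _call_llm placeholder
# stand in for the real RetrievalEngine and a llama3 call via Ollama.

class StubRetrievalEngine:
    def query(self, text: str) -> str:
        return "Context: agents should be modular and grounded in the docs."

class SmallTalkAgent:
    def __init__(self, engine) -> None:
        self.engine = engine

    def generate_response(self, user_query: str) -> str:
        context = self.engine.query(user_query)    # 1. ground the reply in RAG context
        prompt = f"{context}\nUser: {user_query}"  # 2. combine context with the query
        return self._call_llm(prompt)              # 3. generate the conversational reply

    def _call_llm(self, prompt: str) -> str:
        # Placeholder for a real LLM call
        return f"(llm reply to: {prompt.splitlines()[-1]})"

agent = SmallTalkAgent(StubRetrievalEngine())
print(agent.generate_response("Hello!"))  # -> (llm reply to: User: Hello!)
```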
Adherence to a consistent workflow ensures code quality and smooth collaboration.
- Branching: main, develop, feature/*, fix/*, and chore/* branches.
- Commits: Use conventional commit messages (e.g., feat(agent): add new capability). Details in CONTRIBUTING.md.
- Pull requests: Submit PRs against the develop branch for review. Ensure your PRs are linked to relevant issues. We use a Pull Request Template to standardize submissions; please fill it out when creating a PR.
- Syncing: Pull regularly from develop (git pull origin develop) to stay up-to-date.
- Linting and formatting: pyproject.toml is configured to use Ruff. Run poetry run ruff check . and poetry run ruff format . before committing.
- Type checking: pyproject.toml is configured for MyPy. Run poetry run mypy src to check types.
- Testing: Add Pytest tests in the tests/ directory.

Our development is guided by a phased approach, ensuring we build a robust foundation before scaling complexity. The detailed, epic-level plan is tracked in our official WalkXR_AI_Backlog.md.
The foundation of this roadmap is the RAG system (RetrievalEngine). Key documents providing context, design rationale, and research foundations are located in the docs/ directory.
docs/architecture/
- Tech_Architecture.md: Describes the overall technical architecture, technology choices, development lifecycle, and scaling considerations for the WalkXR AI system.
- WalkXR_AI_PRD.md: The Product Requirements Document for WalkXR AI, outlining features, user stories, and success criteria for the AI components.

docs/internal/
- The Verse Notion Page.md: An export or summary of The Verse's broader vision, projects, and operational principles, providing context for WalkXR.
- WalkXR AI Team Design Document.md: Detailed design notes, discussions, and decisions made by the AI development team.
- WalkXR Full Outline and Investor Outreach.txt: A comprehensive outline of the WalkXR project, potentially used for investor discussions and strategic planning.
- WalkXR Simulation Docs.md: Documents related to the simulation methodology for testing WalkXR modules and generating data for AI training and design.
- WalkXR Summary.md: A concise summary of the WalkXR project, its goals, and key features.

docs/research/
- AI and EI.txt: A general exploration or collection of notes on the intersection of Artificial Intelligence and Emotional Intelligence.
- Deeper Conversatios Article (Kardas, Kumar, Epley).md: Summary or full text of the research by Kardas, Kumar, & Epley on miscalibrated expectations in conversations, a core piece of research informing agent design.
- EI LLM Tests Study.txt: Notes or a summary of studies demonstrating LLM performance on standard Emotional Intelligence tests.
- Emotional Intelligence in Artificial Agents.md: A document exploring the concepts and challenges of embedding emotional intelligence into artificial agents.
- Emotional Intelligence in Artificial Intelligence.md: Similar to the above, likely discussing the broader field of EI within AI.

We welcome contributions!
Please see our CONTRIBUTING.md for detailed guidelines on our development workflow (including Git branching, issue/PR templates, commit conventions), coding standards (PEP8, Ruff, MyPy), and testing procedures.
For questions, issues, or discussions related to the WalkXR AI project, please contact the project lead, Roman Di Domizio, via Discord.
This README is a living document and will be updated as the project evolves.