
🧪 Backend Testing Guide

Complete testing guide for AegisAI backend services.


📋 Prerequisites

# Navigate to backend
cd backend

# Activate virtual environment
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install test dependencies
pip install pytest pytest-asyncio pytest-cov httpx

🚀 Quick Start

Run All Tests

Make sure your .env file is inside the backend/ folder, then run:

pytest

Run with Coverage

pytest --cov --cov-report=html

Run Specific Test File

pytest tests/test_agents.py -v
pytest tests/test_services.py -v
pytest tests/test_api.py -v

Run by Marker

# Unit tests only
pytest -m unit

# Integration tests
pytest -m integration

# API tests
pytest -m api
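
These markers must be registered, or pytest will warn about unknown marks. An illustrative pytest.ini sketch (compare with the actual file in backend/; the real contents may differ):

# pytest.ini (illustrative)
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: tests that touch the database or other services
    api: HTTP endpoint tests
asyncio_mode = auto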

๐Ÿ“ Test Structure

backend/
├── tests/
│   ├── __init__.py
│   ├── test_agents.py          # AI agent tests
│   ├── test_services.py        # Service layer tests
│   ├── test_api.py             # API endpoint tests
│   └── conftest.py             # Shared fixtures (optional)
├── pytest.ini                  # Pytest configuration
└── .coveragerc                 # Coverage configuration
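
conftest.py is the natural home for shared fixtures. A minimal sketch, assuming the FastAPI instance in main.py is named app (adjust the imports to match the real project):

# tests/conftest.py (illustrative; adapt names to the real project)
import numpy as np
import pytest
from fastapi.testclient import TestClient

from main import app  # assumption: main.py exposes the FastAPI app as `app`

@pytest.fixture
def client():
    # Drives the app in-process; no running server needed
    with TestClient(app) as c:
        yield c

@pytest.fixture
def blank_frame():
    # Same 480x640 black frame used in the manual tests below
    return np.zeros((480, 640, 3), dtype=np.uint8)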

🧪 Test Categories

1. Agent Tests (test_agents.py)

Coverage:

  • VisionAgent initialization
  • Frame analysis (numpy & base64)
  • Result validation
  • Temporal context building
  • History management
  • Stats tracking
  • PlannerAgent plan generation
  • Plan validation
  • Fallback plans

Run:

pytest tests/test_agents.py -v

Expected Output:

tests/test_agents.py::TestVisionAgent::test_agent_initialization PASSED
tests/test_agents.py::TestVisionAgent::test_analyze_frame_with_numpy PASSED
tests/test_agents.py::TestVisionAgent::test_validate_result PASSED
...
==================== 15 passed in 12.34s ====================
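
For reference, a stripped-down sketch of what one of these tests might look like, built on the VisionAgent.process() coroutine shown in Test 6 below (the real tests may assert more):

import numpy as np
import pytest
from agents.vision_agent import VisionAgent

@pytest.mark.unit
@pytest.mark.asyncio
async def test_analyze_frame_with_numpy():
    agent = VisionAgent()
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    result = await agent.process(frame=frame, frame_number=0)
    # Every analysis result should at least carry a detection type
    assert "type" in result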

2. Service Tests (test_services.py)

Coverage:

  • Database initialization & schema
  • Incident CRUD operations
  • Action logging
  • Statistics retrieval
  • Status updates
  • Cleanup operations
  • ActionExecutor plan execution
  • Individual action handlers

Run:

pytest tests/test_services.py -v

Expected Output:

tests/test_services.py::TestDatabaseService::test_save_incident PASSED
tests/test_services.py::TestDatabaseService::test_get_statistics PASSED
...
==================== 12 passed in 5.67s ====================
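
A sketch of one such test, using the db_service.save_incident() call demonstrated in Test 4 below (field names are taken from that example):

import pytest
from services.database_service import db_service

@pytest.mark.integration
def test_save_incident_returns_id():
    incident_id = db_service.save_incident({
        'timestamp': '2024-01-01T12:00:00',
        'type': 'unit_test',
        'severity': 'low',
        'confidence': 75,
        'reasoning': 'Automated test',
        'subjects': [],
        'evidence_path': '',
        'response_plan': []
    })
    # save_incident should hand back a usable ID
    assert incident_id is not None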

3. API Tests (test_api.py)

Coverage:

  • Root & health endpoints
  • Incident CRUD via API
  • Query parameters & filtering
  • Error handling (404, 422, 405)
  • CORS headers
  • End-to-end workflows

Run:

pytest tests/test_api.py -v

Expected Output:

tests/test_api.py::TestRootEndpoints::test_root PASSED
tests/test_api.py::TestIncidentEndpoints::test_get_incidents PASSED
...
==================== 18 passed in 8.91s ====================
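
These tests can run entirely in-process via FastAPI's TestClient, so no live server is needed. A minimal sketch (again assuming main.py exposes the app as `app`; the 404 case follows the error handling listed above):

import pytest
from fastapi.testclient import TestClient
from main import app  # assumption: adjust to the real module/name

client = TestClient(app)

@pytest.mark.api
def test_get_incidents():
    response = client.get("/api/incidents")
    assert response.status_code == 200

@pytest.mark.api
def test_unknown_incident_returns_404():
    response = client.get("/api/incidents/999999")
    assert response.status_code == 404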

🎯 Manual Backend Tests

Test 1: Server Startup

Steps:

cd backend
source venv/bin/activate
python main.py

Expected Output:

INFO: ✅ AegisAI Backend Ready
INFO: 📡 API: http://0.0.0.0:8000
INFO: 📖 Docs: http://0.0.0.0:8000/docs
INFO: Uvicorn running on http://0.0.0.0:8000

Verification:

curl http://localhost:8000/
# Should return: {"name":"AegisAI","version":"2.5.0",...}

✅ Pass: Server starts without errors


Test 2: Health Check

curl http://localhost:8000/api/health

Expected Response:

{
  "status": "healthy",
  "components": {
    "database": "ok",
    "vision_agent": "ok",
    "planner_agent": "ok"
  },
  "timestamp": "2024-XX-XXTXX:XX:XX"
}

✅ Pass: All components healthy


Test 3: API Documentation

Visit: http://localhost:8000/docs

Expected:

  • Swagger UI loads
  • All endpoints listed:
    • GET /
    • GET /health
    • GET /api/incidents
    • GET /api/incidents/{id}
    • POST /api/incidents/{id}/status
    • GET /api/stats
    • GET /api/agents/stats
    • DELETE /api/incidents/cleanup

✅ Pass: Docs accessible and complete


Test 4: Create Incident via Database

# Start Python shell
python

>>> from services.database_service import db_service
>>> from datetime import datetime
>>> 
>>> incident_id = db_service.save_incident({
...     'timestamp': datetime.now().isoformat(),
...     'type': 'test',
...     'severity': 'medium',
...     'confidence': 80,
...     'reasoning': 'Manual test',
...     'subjects': [],
...     'evidence_path': '',
...     'response_plan': []
... })
>>> 
>>> print(f"Created incident ID: {incident_id}")
>>> exit()

Verify via API:

curl http://localhost:8000/api/incidents

✅ Pass: Incident appears in response


Test 5: Video Processor Demo

cd backend
source venv/bin/activate
python -m services.video_processor demo

Expected Output:

🎬 Starting AegisAI Demo Scenario

=== SCENARIO: Normal Activity ===
✓ No incident detected - Normal operation

=== SCENARIO: Suspicious Behavior ===
✓ Detected: suspicious_behavior
  Severity: medium
  Confidence: 82%

=== SCENARIO: Critical Threat ===
✓ Detected: violence
  Severity: high
  Response plan executed: 5 actions

✓ Demo sequence completed

✅ Pass: All scenarios execute successfully


Test 6: Agent Performance Test

cd backend
python

>>> from agents.vision_agent import VisionAgent
>>> import numpy as np
>>> import asyncio
>>> 
>>> agent = VisionAgent()
>>> frame = np.zeros((480, 640, 3), dtype=np.uint8)
>>> 
>>> # Test multiple analyses
>>> async def test_performance():
...     for i in range(10):
...         result = await agent.process(frame=frame, frame_number=i)
...         print(f"Frame {i}: {result['type']}")
...     stats = agent.get_stats()
...     print(f"\nStats: {stats}")
>>> 
>>> asyncio.run(test_performance())

Expected:

  • All 10 frames analyzed
  • Stats show:
    • total_calls: 10
    • total_errors: 0
    • avg_response_time: < 3.0 seconds

✅ Pass: Consistent performance


Test 7: Database Persistence

# Create incident
python

>>> from services.database_service import db_service
>>> incident_id = db_service.save_incident({
...     'timestamp': '2024-01-01T12:00:00',
...     'type': 'persistence_test',
...     'severity': 'low',
...     'confidence': 75,
...     'reasoning': 'Test persistence',
...     'subjects': [],
...     'evidence_path': '',
...     'response_plan': []
... })
>>> exit()

# Restart server
# (Press Ctrl+C, then `python main.py`)

# Verify incident still exists
curl http://localhost:8000/api/incidents

✅ Pass: Incident persists across restarts


📊 Performance Benchmarks

Expected Performance:

| Metric            | Target  | Acceptable |
|-------------------|---------|------------|
| API Response Time | < 100ms | < 500ms    |
| Frame Analysis    | < 2s    | < 5s       |
| Database Query    | < 50ms  | < 200ms    |
| Memory Usage      | < 300MB | < 500MB    |
| CPU Usage         | < 20%   | < 40%      |

Measure Performance:

# API response time
time curl http://localhost:8000/api/incidents

# Memory usage
ps aux | grep python
# Look at RSS (Resident Set Size)

# Load test with Apache Bench
ab -n 100 -c 10 http://localhost:8000/api/stats
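
If Apache Bench is not installed, a rough Python probe gives comparable numbers. A sketch using the httpx package from the test dependencies (endpoint and sample count are arbitrary):

# latency_probe.py: rough API latency check
import time
import httpx

URL = "http://localhost:8000/api/stats"  # any fast endpoint works

samples = []
with httpx.Client() as client:
    for _ in range(20):
        start = time.perf_counter()
        client.get(URL)
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds

print(f"avg: {sum(samples) / len(samples):.1f} ms  max: {max(samples):.1f} ms")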

๐Ÿ› Common Issues

Issue 1: "No module named 'agents'"

Solution:

# Make sure you're running from backend/ directory
cd backend
python -m pytest tests/

Issue 2: "Database locked"

Solution:

# Stop any running instances
pkill -f "python main.py"
# Delete test database
rm test_*.db

Issue 3: Import errors

Solution:

# Ensure __init__.py files exist
ls backend/__init__.py
ls backend/agents/__init__.py
ls backend/services/__init__.py

Issue 4: Gemini API failures

Solution:

# Check API key is set
echo $GEMINI_API_KEY

# Test connection manually
python
>>> from agents.vision_agent import VisionAgent
>>> agent = VisionAgent()
>>> # If no errors, API key is valid
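
Instantiating the agent only proves the key loads. To actually exercise the API, run a single analysis from the same shell, using the same calls as Test 6 (a quick sanity check, not part of the test suite):

>>> import asyncio
>>> import numpy as np
>>> frame = np.zeros((480, 640, 3), dtype=np.uint8)
>>> asyncio.run(agent.process(frame=frame, frame_number=0))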

✅ Test Checklist

Before deployment, verify:

  • โœ… All unit tests pass (pytest tests/test_agents.py)
  • โœ… All service tests pass (pytest tests/test_services.py)
  • โœ… All API tests pass (pytest tests/test_api.py)
  • โœ… Server starts without errors
  • โœ… Health check returns "healthy"
  • โœ… API docs accessible
  • โœ… Database operations work
  • โœ… Video processor demo runs
  • โœ… Agent performance acceptable
  • โœ… No memory leaks (run for 10 min)
  • โœ… Code coverage > 70%

📈 Coverage Report

After running tests with coverage:

pytest --cov --cov-report=html

Open htmlcov/index.html in browser.

Target Coverage:

  • Overall: > 70%
  • Agents: > 80%
  • Services: > 75%
  • API: > 80%

🚀 CI/CD Integration

For automated testing:

# .github/workflows/backend-tests.yml
name: Backend Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      
      - name: Install dependencies
        run: |
          cd backend
          pip install -r requirements.txt
          pip install pytest pytest-asyncio pytest-cov httpx
      
      - name: Run tests
        run: |
          cd backend
          pytest --cov --cov-report=xml          
      
      - name: Upload coverage
        uses: codecov/codecov-action@v3

๐Ÿ“ Test Report Template

# Backend Test Report

**Date**: YYYY-MM-DD
**Tester**: [Name]
**Environment**: [Local/Docker/CI]

## Unit Tests
- Agents: X/15 passed
- Services: X/12 passed
- API: X/18 passed

## Integration Tests
- E2E Flow: ✅/❌
- Database: ✅/❌
- Video Processor: ✅/❌

## Performance
- API Response: XXX ms
- Frame Analysis: X.X s
- Memory: XXX MB
- CPU: XX%

## Coverage
- Overall: XX%
- Agents: XX%
- Services: XX%
- API: XX%

## Issues
1. [Description]

## Recommendation
[ ] Ready for deployment
[ ] Needs fixes

Happy Testing! 🧪