Thank you for your interest in contributing to AegisAI! This document provides guidelines and instructions for contributing.
We are committed to providing a welcoming and inclusive environment for all contributors.
Before contributing, ensure you have:

- A GitHub account
- Git installed
- Node.js and npm (for the frontend)
- Python 3 with venv support (for the backend)
# 1. Fork the repository on GitHub
# Click "Fork" button at https://github.com/Thimethane/aegisai
# 2. Clone your fork
git clone https://github.com/YOUR_USERNAME/aegisai.git
cd aegisai
# 3. Add upstream remote
git remote add upstream https://github.com/Thimethane/aegisai.git
# 4. Verify remotes
git remote -v
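If the remotes are configured correctly, the output of `git remote -v` should look roughly like this, with `origin` pointing at your fork and `upstream` at the main repository:

```bash
# Expected output (origin reflects your fork's URL)
# origin    https://github.com/YOUR_USERNAME/aegisai.git (fetch)
# origin    https://github.com/YOUR_USERNAME/aegisai.git (push)
# upstream  https://github.com/Thimethane/aegisai.git (fetch)
# upstream  https://github.com/Thimethane/aegisai.git (push)
```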
# Install frontend dependencies
cd frontend
npm install
# Install backend dependencies
cd ../backend
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -r requirements.txt
pip install -r requirements-dev.txt # Development tools
# Install pre-commit hooks
pip install pre-commit
pre-commit install
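To confirm the hooks work before your first commit, you can run them manually across the whole repository (optional, but a quick sanity check):

```bash
# Run every configured pre-commit hook against all files
pre-commit run --all-files
```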
We use feature branches for development:
# Update your main branch
git checkout main
git pull upstream main
# Create feature branch
git checkout -b feature/your-feature-name
# OR for bug fixes
git checkout -b fix/bug-description
# OR for documentation
git checkout -b docs/what-you-are-documenting
- `feature/` - New features (e.g., `feature/multi-camera-support`)
- `fix/` - Bug fixes (e.g., `fix/camera-permission-error`)
- `docs/` - Documentation (e.g., `docs/api-guide`)
- `refactor/` - Code refactoring (e.g., `refactor/agent-architecture`)
- `test/` - Test additions (e.g., `test/integration-tests`)

# Make your changes
# Edit files, add features, fix bugs
# Check status
git status
# Stage changes
git add .
# Commit with meaningful message
git commit -m "feat: add multi-camera support"
# Push to your fork
git push origin feature/your-feature-name
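If `main` has moved ahead while you were working, bring your branch up to date before opening the pull request. One common approach is a rebase (assuming the maintainers are happy with rebased branches; merging `upstream/main` into your branch works as well):

```bash
# Update your branch with the latest upstream changes
git fetch upstream
git rebase upstream/main

# A rebase rewrites local history, so the next push needs --force-with-lease
git push --force-with-lease origin feature/your-feature-name
```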
We follow Conventional Commits:
<type>(<scope>): <subject>
<body>
<footer>
Types:
- `feat:` - New feature
- `fix:` - Bug fix
- `docs:` - Documentation changes
- `style:` - Code style changes (formatting, no logic change)
- `refactor:` - Code refactoring
- `test:` - Adding or updating tests
- `chore:` - Maintenance tasks

Examples:
# Good commits
git commit -m "feat(frontend): add dark mode toggle"
git commit -m "fix(backend): resolve camera disconnection crash"
git commit -m "docs: update installation guide for Windows"
# Bad commits (avoid)
git commit -m "fixed stuff"
git commit -m "updates"
git commit -m "WIP"
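When a change needs more explanation, add a body and footer following the format above. Each `-m` flag creates a separate paragraph; the subject, scope, and issue number below are only placeholders:

```bash
# Subject, body, and footer as separate -m paragraphs (placeholder content)
git commit -m "feat(backend): add frame batching to vision agent" \
           -m "Batch consecutive frames before analysis to reduce API calls." \
           -m "Closes #123"
```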
// components/MyComponent/MyComponent.tsx
import React from 'react';

interface MyComponentProps {
  title: string;
  onAction?: () => void;
}

export const MyComponent: React.FC<MyComponentProps> = ({
  title,
  onAction
}) => {
  // Component logic
  return (
    <div className="my-component">
      {title}
    </div>
  );
};
Avoid `any`; prefer explicit types:

// Good
interface User {
  id: string;
  name: string;
  role: 'admin' | 'user';
}

// Avoid
const user: any = { ... };
// ✅ Use functional components
const MyComponent: React.FC<Props> = (props) => { ... };

// ✅ Use hooks properly
const [state, setState] = useState<Type>(initialValue);

// ✅ Memoize expensive computations
const result = useMemo(() => expensiveCalc(data), [data]);

// ✅ Clean up effects
useEffect(() => {
  const timer = setInterval(...);
  return () => clearInterval(timer);
}, []);
// Good
<div className="flex items-center gap-4 p-4 bg-gray-900 rounded-lg">
// Avoid inline styles
<div style={{ display: 'flex', padding: '16px' }}>
We follow PEP 8 with some modifications:
# Good
class VisionAgent(BaseAgent):
    """Vision analysis agent using Gemini API.

    Attributes:
        model_name: Gemini model identifier
        frame_history: List of recent frames for context
    """

    def __init__(self, model_name: str = "gemini-2.0-flash-exp"):
        super().__init__()
        self.model_name = model_name
        self.frame_history: List[Dict] = []

    async def process(
        self,
        frame: np.ndarray,
        frame_number: int
    ) -> Dict[str, Any]:
        """Process a video frame for threat detection."""
        # Implementation
        pass
Always use type hints:
# Good
def analyze_frame(
    frame: np.ndarray,
    timestamp: datetime
) -> Dict[str, Any]:
    pass

# Avoid
def analyze_frame(frame, timestamp):
    pass
# Good - Specific exceptions
try:
    result = await api_call()
except ValueError as e:
    logger.error(f"Invalid value: {e}")
    raise
except APIError as e:
    logger.warning(f"API error: {e}")
    return fallback_result()

# Avoid - Bare except
try:
    result = risky_operation()
except:
    pass
import logging
logger = logging.getLogger(__name__)
# Use appropriate levels
logger.debug("Detailed diagnostic info")
logger.info("General informational messages")
logger.warning("Warning messages")
logger.error("Error messages")
logger.critical("Critical issues")
# Include context
logger.error(
    "Failed to process frame",
    extra={
        "frame_number": frame_num,
        "error": str(e)
    }
)
cd frontend
# Run all tests
npm test
# Run with coverage
npm test -- --coverage
# Run specific test file
npm test -- VideoFeed.test.tsx
# Watch mode
npm test -- --watch
// MyComponent.test.tsx
import { render, screen, fireEvent } from '@testing-library/react';
import { MyComponent } from './MyComponent';
describe('MyComponent', () => {
  it('renders title correctly', () => {
    render(<MyComponent title="Test" />);
    expect(screen.getByText('Test')).toBeInTheDocument();
  });

  it('calls onAction when button clicked', () => {
    const onAction = jest.fn();
    render(<MyComponent title="Test" onAction={onAction} />);
    fireEvent.click(screen.getByRole('button'));
    expect(onAction).toHaveBeenCalledTimes(1);
  });
});
cd backend
# Run all tests
pytest
# Run with coverage
pytest --cov --cov-report=html
# Run specific test file
pytest tests/test_agents.py -v
# Run by marker
pytest -m unit
pytest -m integration
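Marker expressions can also be negated or combined, which is handy when integration tests depend on external services (this assumes the `unit` and `integration` markers are registered in the project's pytest configuration):

```bash
# Skip integration tests for a quick local run
pytest -m "not integration"
```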
# test_vision_agent.py
import numpy as np
import pytest

from agents.vision_agent import VisionAgent


@pytest.fixture
def vision_agent():
    return VisionAgent()


@pytest.mark.asyncio
async def test_analyze_frame(vision_agent):
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    result = await vision_agent.process(frame, frame_number=1)

    assert result is not None
    assert 'incident' in result
    assert 'confidence' in result
    assert 0 <= result['confidence'] <= 100


@pytest.mark.asyncio
@pytest.mark.integration
async def test_full_analysis_pipeline(vision_agent):
    # Integration test
    pass
/**
 * Custom hook for managing video monitoring state
 *
 * @param initialInterval - Frame capture interval in ms
 * @returns Monitoring state and control functions
 *
 * @example
 * ```tsx
 * const { isActive, toggleMonitoring } = useMonitoring(4000);
 * ```
 */
export const useMonitoring = (initialInterval: number = 4000) => {
  // Implementation
};
def process_frame(
    frame: np.ndarray,
    context: Dict[str, Any]
) -> AnalysisResult:
    """Process a single video frame for threat detection.

    Args:
        frame: BGR image array from OpenCV
        context: Additional context including timestamp, location

    Returns:
        Analysis result containing threat type, confidence, and actions

    Raises:
        ValueError: If frame dimensions are invalid
        APIError: If Gemini API call fails

    Example:
        >>> frame = cv2.imread('test.jpg')
        >>> result = process_frame(frame, {'timestamp': datetime.now()})
        >>> print(result.confidence)
        85.5
    """
    pass
When adding features, update relevant documentation:
- `README.md` - Main project overview
- `QUICKSTART.md` - Quick start instructions
- `INTEGRATION.md` - Integration details
- `DEPLOYMENT.md` - Deployment procedures

Self-Review Checklist:
## Description
Brief description of changes
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests pass
- [ ] Manual testing completed
## Screenshots (if applicable)
[Add screenshots here]
## Related Issues
Closes #123
Reviewers will check your changes against the standards above: code style, tests, and documentation.
# 1. Push your branch
git push origin feature/your-feature-name
# 2. Go to GitHub and create PR
# - Base: main
# - Compare: feature/your-feature-name
# 3. Fill out PR template
# 4. Request review
# 5. Address feedback
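If you prefer the terminal, the GitHub CLI can open the pull request directly from your branch (optional; the web UI works just as well):

```bash
# Create a PR against main, pre-filling title and body from your commits
gh pr create --base main --fill
```

Once the PR is merged, clean up your local branches: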
# 1. Switch to main
git checkout main
# 2. Pull latest changes
git pull upstream main
# 3. Delete feature branch
git branch -d feature/your-feature-name
git push origin --delete feature/your-feature-name
Use these templates:
Bug Report:
## Bug Description
Clear description of the bug
## Steps to Reproduce
1. Step one
2. Step two
3. Expected vs. actual behavior
## Environment
- OS: [e.g., Windows 10]
- Browser: [e.g., Chrome 120]
- Version: [e.g., 2.5.0]
## Screenshots
[If applicable]
Feature Request:
## Feature Description
What feature do you want?
## Use Case
Why is this useful?
## Proposed Solution
How might this work?
## Alternatives Considered
Other approaches?
All contributors are recognized in the CONTRIBUTORS.md file. We value every contribution.
Thank you for contributing to AegisAI! Together, we're building the future of autonomous security. 🛡️
Last Updated: January 2026