The New Archon (Beta) - The Operating System for AI Coding Assistants!

This commit is contained in:
Cole Medin
2025-08-13 07:58:24 -05:00
parent 13e1fc6a0e
commit 59084036f6
603 changed files with 131376 additions and 417 deletions

View File

@@ -0,0 +1,235 @@
---
description: Perform a comprehensive code review for Archon V2 Alpha; this command saves a report to `code-review.md`.
argument-hint: <PR number, branch name, file path, or leave empty for staged changes>
allowed-tools: Bash(*), Read, Grep, LS, Write
thinking: auto
---
# Code Review for Archon V2 Alpha
**Review scope**: $ARGUMENTS
I'll perform a comprehensive code review and generate a report saved to the root of this directory as `code-review[n].md`. Check whether other reviews already exist before creating the file and increment `n` as needed.
## Context
You're reviewing code for Archon V2 Alpha, which uses:
- **Frontend**: React + TypeScript + Vite + TailwindCSS
- **Backend**: Python 3.12+ with FastAPI, PydanticAI, Supabase
- **Testing**: Vitest for frontend, pytest for backend
- **Code Quality**: ruff, mypy, ESLint
## What to Review
Determine what needs reviewing:
- If no arguments: Review staged changes (`git diff --staged`)
- If PR number: Review pull request (`gh pr view`)
- If branch name: Compare with main (`git diff main...branch`)
- If file path: Review specific files
- If directory: Review all changes in that area
## Review Focus
### CRITICAL: Alpha Error Handling Philosophy
**Following CLAUDE.md principles - We want DETAILED ERRORS, not graceful failures!**
#### Where Errors MUST Bubble Up (Fail Fast & Loud):
- **Service initialization** - If credentials, database, or MCP fails to start, CRASH
- **Configuration errors** - Missing env vars, invalid settings should STOP the system
- **Database connection failures** - Don't hide connection issues, expose them
- **Authentication failures** - Security errors must be visible
- **Data corruption** - Never silently accept bad data
- **Type validation errors** - Pydantic should raise, not coerce
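To make the fail-fast side concrete, here is a minimal sketch assuming a single required environment variable (the variable and function names are illustrative, not Archon's actual startup code):
```python
import logging
import os

logger = logging.getLogger(__name__)

def initialize_services() -> None:
    """Validate critical configuration at startup - crash loudly if anything is missing."""
    supabase_url = os.environ.get("SUPABASE_URL")
    if not supabase_url:
        # Configuration error: stop the system instead of limping along with broken dependencies
        raise RuntimeError("SUPABASE_URL is not set - cannot start the server")
    logger.info("Configuration validated, database host: %s", supabase_url)
```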
#### Where to Complete but Log Clearly:
- **Background tasks** (crawling, embeddings) - Complete the job, log failures per item
- **Batch operations** - Process what you can, report what failed with details
- **WebSocket events** - Don't crash on single event failure, log and continue
- **Optional features** - If projects/tasks disabled, log and skip
- **External API calls** - Retry with exponential backoff, then fail with clear message
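A minimal sketch of the log-and-continue pattern for batch work, with a placeholder `process_item` standing in for the real per-item job:
```python
import logging

logger = logging.getLogger(__name__)

def process_item(item: dict) -> dict:
    # Placeholder for real per-item work (embedding, storage, etc.)
    return item

def process_batch(items: list[dict]) -> dict[str, list]:
    succeeded, failed = [], []
    for item in items:
        try:
            succeeded.append(process_item(item))
        except Exception as e:
            # Log the specific item and error with a stack trace, but keep the batch going
            logger.error("Failed to process item %s: %s", item.get("id"), e, exc_info=True)
            failed.append({"item": item.get("id"), "error": str(e)})
    return {"succeeded": succeeded, "failed": failed}
```
The caller still gets a full accounting of what failed, so nothing is hidden.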
### Python Code Quality
Look for:
- **Type hints** on all functions and proper use of Python 3.12+ features
- **Pydantic v2 patterns** (ConfigDict, model_dump, field_validator) - see the sketch after this list
- **Error handling following alpha principles**:
```python
# BAD - Silent failure
try:
    result = risky_operation()
except Exception:
    return None

# GOOD - Detailed error with context
try:
    result = risky_operation()
except SpecificError as e:
    logger.error(f"Operation failed at step X: {e}", exc_info=True)
    raise  # Let it bubble up!
```
- **No print statements** - should use logging instead
- **Detailed error messages** with context about what was being attempted
- **Stack traces preserved** with `exc_info=True` in logging
- **Async/await** used correctly with proper exception propagation
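As a quick reference for the Pydantic v2 patterns item above, a minimal sketch with a hypothetical model (not an actual Archon schema):
```python
from pydantic import BaseModel, ConfigDict, field_validator

class KnowledgeItem(BaseModel):
    # v2 configuration via ConfigDict instead of the old nested `class Config`
    model_config = ConfigDict(extra="forbid", str_strip_whitespace=True)

    title: str
    url: str

    @field_validator("url")
    @classmethod
    def url_must_be_http(cls, v: str) -> str:
        # Raise on bad input rather than silently coercing it
        if not v.startswith(("http://", "https://")):
            raise ValueError(f"Expected an http(s) URL, got: {v!r}")
        return v

item = KnowledgeItem(title="Docs", url="https://example.com")
print(item.model_dump())  # v2 replacement for .dict()
```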
### TypeScript/React Quality
Look for:
- **TypeScript types** properly defined, avoid `any`
- **React error boundaries** for component failures
- **API error handling** that shows actual error messages:
```typescript
// BAD - Generic error
catch (error) {
  setError("Something went wrong");
}

// GOOD - Specific error with details
catch (error) {
  console.error("API call failed:", error);
  setError(`Failed to load data: ${error.message}`);
}
```
- **Component structure** following existing patterns
- **Console.error** for debugging, not hidden errors
### Security Considerations
Check for:
- Input validation that FAILS LOUDLY on bad input
- SQL injection vulnerabilities
- No hardcoded secrets or API keys
- Authentication that clearly reports why it failed
- CORS configuration with explicit error messages
### Architecture & Patterns
Ensure:
- Services fail fast on initialization errors
- Routes return detailed error responses with status codes
- Database operations include transaction details in errors
- Socket.IO disconnections are logged with reasons
- Service dependencies checked at startup, not runtime
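As one possible shape for detailed error responses, a hedged sketch with a hypothetical route and lookup (not an actual Archon endpoint):
```python
import logging

from fastapi import APIRouter, HTTPException

logger = logging.getLogger(__name__)
router = APIRouter()

async def fetch_item(item_id: str):
    # Placeholder for the real database lookup
    return None

@router.get("/api/knowledge-items/{item_id}")
async def get_knowledge_item(item_id: str):
    item = await fetch_item(item_id)
    if item is None:
        # Specific status code plus a detailed, actionable message
        logger.error("Knowledge item lookup failed: id=%s", item_id)
        raise HTTPException(
            status_code=404,
            detail=f"Knowledge item '{item_id}' not found - check that the crawl completed",
        )
    return item
```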
### Testing
Verify:
- Tests check for specific error messages, not just "throws"
- Error paths are tested with expected error details
- No catch-all exception handlers hiding issues
- Mock failures test error propagation
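A sketch of asserting on the error message itself rather than just the exception type, using a hypothetical function under test:
```python
import pytest

def load_config(env: dict) -> dict:
    # Hypothetical function under test: fails fast on missing configuration
    if "SUPABASE_URL" not in env:
        raise RuntimeError("SUPABASE_URL is not set - cannot start the server")
    return {"supabase_url": env["SUPABASE_URL"]}

def test_missing_supabase_url_raises_detailed_error():
    # Assert on the message, not just that *some* exception was raised
    with pytest.raises(RuntimeError, match="SUPABASE_URL is not set"):
        load_config({})
```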
## Review Process
1. **Understand the changes** - What problem is being solved?
2. **Check functionality** - Does it do what it's supposed to?
3. **Review code quality** - Is it maintainable and follows standards?
4. **Consider performance** - Any N+1 queries or inefficient algorithms?
5. **Verify tests** - Are changes properly tested?
6. **Check documentation** - Are complex parts documented?
## Key Areas to Check
**Backend Python files:**
- `python/src/server/` - Service layer patterns
- `python/src/mcp/` - MCP tool definitions
- `python/src/agents/` - AI agent implementations
**Frontend TypeScript files:**
- `archon-ui-main/src/components/` - React components
- `archon-ui-main/src/services/` - API integration
- `archon-ui-main/src/hooks/` - Custom hooks
**Configuration:**
- `docker-compose.yml` - Service configuration
- `.env` changes - Security implications
- `package.json` / `pyproject.toml` - Dependency changes
## Report Format
Generate a `code-review.md` with:
```markdown
# Code Review
**Date**: [Today's date]
**Scope**: [What was reviewed]
**Overall Assessment**: [Pass/Needs Work/Critical Issues]
## Summary
[Brief overview of changes and general quality]
## Issues Found
### 🔴 Critical (Must Fix)
- [Issue description with file:line reference and suggested fix]
### 🟡 Important (Should Fix)
- [Issue description with file:line reference]
### 🟢 Suggestions (Consider)
- [Minor improvements or style issues]
## What Works Well
- [Positive aspects of the code]
## Security Review
[Any security concerns or confirmations]
## Performance Considerations
[Any performance impacts]
## Test Coverage
- Current coverage: [if available]
- Missing tests for: [list areas]
## Recommendations
[Specific actionable next steps]
```
## Helpful Commands
```bash
# Check what changed
git diff --staged
git diff main...HEAD
gh pr view $PR_NUMBER --json files
# Run quality checks
cd python && ruff check --fix
cd python && mypy src/
cd archon-ui-main && npm run lint
# Run tests
cd python && uv run pytest
cd archon-ui-main && npm test
```
Remember: Focus on impact and maintainability. Good code review helps the team ship better code, not just find problems. Be constructive and specific with feedback.

View File

@@ -0,0 +1,153 @@
---
name: archon-onboarding
description: |
Onboard new developers to the Archon codebase with a comprehensive overview and first contribution guidance.
Usage: /archon-onboarding
argument-hint: none
---
You are helping a new developer get up and running with the Archon V2 Alpha project! Your goal is to provide them with a personalized onboarding experience.
## What is Archon?
Archon is a centralized knowledge base for AI coding assistants. It enables Claude Code, Cursor, Windsurf, and other AI tools to access your documentation, perform smart searches, and manage tasks - all through a unified interface.
It's powered by a **Model Context Protocol (MCP) server**, and you can crawl and store knowledge, then apply multiple RAG strategies to improve your AI coder's performance.
## Quick Architecture Overview
This is a **true microservices architecture** with 4 independent services:
1. **Frontend** (port 3737) - React UI for managing knowledge and projects
2. **Server** (port 8181) - Core API handling all business logic
3. **MCP Server** (port 8051) - Lightweight MCP protocol interface
4. **Agents** (port 8052) - AI operations with PydanticAI
All services communicate via HTTP only - no shared code, true separation of concerns.
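To see that boundary in practice, a small sketch that pings each service over HTTP (the `/health` path on the agents service and the bare frontend URL are assumptions; adjust as needed):
```python
import urllib.request

# Ports come from the architecture overview above
SERVICES = {
    "frontend": "http://localhost:3737",       # assumed: root page responds
    "server": "http://localhost:8181/health",
    "mcp": "http://localhost:8051/health",
    "agents": "http://localhost:8052/health",  # assumed path
}

for name, url in SERVICES.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}")
    except Exception as e:
        print(f"{name}: unreachable ({e})")
```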
## Getting Started - Your First 30 Minutes
### Prerequisites Check
You'll need:
- Docker Desktop (running)
- Supabase account (free tier works)
- OpenAI API key (or Gemini/Ollama)
- Git and basic command line knowledge
### Setup
First, read the README.md file to understand the setup process, then guide the user through these steps:
1. Clone the repository and set up environment variables
2. Configure Supabase database with migration scripts
3. Start Docker services
4. Configure API keys in the UI
5. Verify everything is working by testing a simple crawl
## Understanding the Codebase
### Decision Time
Ask the user to choose their focus area. Present these options clearly and wait for their response:
"Which area of the Archon codebase would you like to explore first?"
1. **Frontend (React/TypeScript)** - If you enjoy UI/UX work
2. **Backend API (Python/FastAPI)** - If you like building robust APIs
3. **MCP Tools (Python)** - If you're interested in AI tool protocols
4. **RAG/Search (Python)** - If you enjoy search and ML engineering
5. **Web Crawling (Python)** - If you like data extraction challenges
### Your Onboarding Analysis
Based on the user's choice, perform a deep analysis of that area following the instructions below for their specific choice. Then provide them with a structured report.
## Report Structure
Your report to the user should include:
1. **Area Overview**: Architecture explanation and how it connects to other services
2. **Key Files Walkthrough**: Purpose of main files and their relationships
3. **Suggested First Contribution**: A specific, small improvement with exact location
4. **Implementation Guide**: Step-by-step instructions to make the change
5. **Testing Instructions**: How to verify their change works correctly
**If the user chose Frontend:**
- Start with `archon-ui-main/src/pages/KnowledgeBasePage.tsx`
- Look at how it uses `services/knowledgeBaseService.ts`
- Take a deep dive into the frontend architecture and UI components
- Identify a potential issue that the user can easily fix and suggest a solution
- Give the user an overview of the frontend and architecture following the report format above
**If the user chose Backend API:**
- Start with `python/src/server/api_routes/knowledge_api.py`
- See how it calls `services/knowledge/knowledge_item_service.py`
- Take a deep dive into the FastAPI service architecture and patterns
- Identify a potential API improvement that the user can implement
- Give the user an overview of the backend architecture and suggest a contribution
**If the user chose MCP Tools:**
- Start with `python/src/mcp/mcp_server.py`
- Look at `modules/rag_module.py` for tool patterns
- Take a deep dive into the MCP protocol implementation and available tools
- Identify a missing tool or enhancement that would be valuable
- Give the user an overview of the MCP architecture and how to add new tools
**If the user chose RAG/Search:**
- Start with `python/src/server/services/search/vector_search_service.py`
- Understand the hybrid search approach
- Take a deep dive into the RAG pipeline and search strategies
- Identify a search improvement or ranking enhancement opportunity
- Give the user an overview of the RAG system and suggest optimizations
**If the user chose Web Crawling:**
- Start with `python/src/server/services/rag/crawling_service.py`
- Look at sitemap detection and parsing logic
- Take a deep dive into the crawling architecture and content extraction
- Identify a crawling enhancement or new content type support to add
- Give the user an overview of the crawling system and parsing strategies
## How to Find Contribution Opportunities
When analyzing the user's chosen area, look for:
- TODO or FIXME comments in the code
- Missing error handling or validation
- UI components that could be more user-friendly
- API endpoints missing useful filters or data
- Areas with minimal or no test coverage
- Hardcoded values that should be configurable
## What to Include in Your Report
After analyzing their chosen area, provide the user with:
1. Key development patterns they should know:
- Alpha mindset (break things to improve them)
- Error philosophy (fail fast with detailed errors)
- Service boundaries (no cross-service imports)
- Real-time updates via Socket.IO
- Testing approach for their chosen area
2. Specific contribution suggestion with:
- Exact file and line numbers to modify
- Current behavior vs improved behavior
- Step-by-step implementation guide
- Testing instructions
3. Common gotchas for their area:
- Service-specific pitfalls
- Testing requirements
- Local vs Docker differences
Remember to encourage the user to start small and iterate. This is alpha software designed for rapid experimentation.

View File

@@ -0,0 +1,54 @@
---
name: prime-simple
description: Quick context priming for Archon development - reads essential files and provides project overview
argument-hint: none
---
## Prime Context for Archon Development
You need to quickly understand the Archon V2 Alpha codebase. Follow these steps:
### 1. Read Project Documentation
- Read `CLAUDE.md` for development guidelines and patterns
- Read `README.md` for project overview and setup
### 2. Understand Project Structure
Use `tree -L 2` or explore the directory structure to understand the layout:
- `archon-ui-main/` - Frontend React application
- `python/` - Backend services (server, MCP, agents)
- `docker-compose.yml` - Service orchestration
- `migration/` - Database setup scripts
### 3. Read Key Frontend Files
Read these essential files in `archon-ui-main/`:
- `src/App.tsx` - Main application entry and routing
- Use your own judgment about how deeply to explore other files
### 4. Read Key Backend Files
Read these essential files in `python/`:
- `src/server/main.py` - FastAPI application setup
- Use your own judgment about how deeply to explore other files
### 5. Review Configuration
- `.env.example` - Required environment variables
- `docker-compose.yml` - Service definitions and ports
- Use your own judgment about how deeply to explore other files
### 6. Provide Summary
After reading these files, explain to the user:
1. **Project Purpose**: One sentence about what Archon does and why it exists
2. **Architecture**: One sentence about the architecture
3. **Key Patterns**: One sentence about key patterns
4. **Tech Stack**: One sentence about tech stack
Remember: This is alpha software focused on rapid iteration. Prioritize understanding the core functionality.

View File

@@ -0,0 +1,174 @@
---
name: prime
description: |
Prime Claude Code with deep context for a specific part of the Archon codebase.
Usage: /prime "<service>" "<special focus>"
Examples:
/prime "frontend" "Focus on UI components and React"
/prime "server" "Focus on FastAPI and backend services"
/prime "knowledge" "Focus on RAG and knowledge management"
argument-hint: <service> <Specific focus>
---
You're about to work on the Archon V2 Alpha codebase. This is a microservices-based knowledge management system with MCP integration. Here's what you need to know:
## Today's Focus Area
Today we are focusing on: $ARGUMENTS
And pay special attention to: $ARGUMENTS
## Decision
Think hard and make an intelligent decision about which key files you need to read and create a todo list.
If you discover something that needs a deeper look, or imports from files you need context from, append it to the todo list during the priming process. The goal is to gain a solid understanding of the codebase so you are ready to make code changes to that part of the codebase.
## Architecture Overview
### Frontend (port 3737) - React + TypeScript + Vite
```
archon-ui-main/
├── src/
│ ├── App.tsx # Main app component with routing and providers
│ ├── index.tsx # React entry point with theme and settings
│ ├── components/
│ │ ├── layouts/ # Layout components (MainLayout, SideNavigation)
│ │ ├── knowledge-base/ # Knowledge management UI (crawling, items, search)
│ │ ├── project-tasks/ # Project and task management components
│ │ ├── prp/ # Product Requirements Prompt viewer components
│ │ ├── mcp/ # MCP client management and testing UI
│ │ ├── settings/ # Settings panels (API keys, features, RAG config)
│ │ └── ui/ # Reusable UI components (buttons, cards, inputs)
│ ├── services/ # API client services for backend communication
│ │ ├── knowledgeBaseService.ts # Knowledge item CRUD and search operations
│ │ ├── projectService.ts # Project and task management API calls
│ │ ├── mcpService.ts # MCP server communication and tool execution
│ │ └── socketIOService.ts # Real-time WebSocket event handling
│ ├── hooks/ # Custom React hooks for state and effects
│ ├── contexts/ # React contexts (Settings, Theme, Toast)
│ └── pages/ # Main page components for routing
```
### Backend Server (port 8181) - FastAPI + Socket.IO
```
python/src/server/
├── main.py # FastAPI app initialization and routing setup
├── socketio_app.py # Socket.IO server configuration and namespaces
├── config/
│ ├── config.py # Environment variables and app configuration
│ └── service_discovery.py # Service URL resolution for Docker/local
├── fastapi/ # API route handlers (thin wrappers)
│ ├── knowledge_api.py # Knowledge base endpoints (crawl, upload, search)
│ ├── projects_api.py # Project and task management endpoints
│ ├── mcp_api.py # MCP tool execution and health checks
│ └── socketio_handlers.py # Socket.IO event handlers and broadcasts
├── services/ # Business logic layer
│ ├── knowledge/
│ │ ├── crawl_orchestration_service.py # Website crawling coordination
│ │ ├── knowledge_item_service.py # Knowledge item CRUD operations
│ │ └── code_extraction_service.py # Extract code examples from docs
│ ├── projects/
│ │ ├── project_service.py # Project management logic
│ │ ├── task_service.py # Task lifecycle and status management
│ │ └── versioning_service.py # Document version control
│ ├── rag/
│ │ └── crawling_service.py # Web crawling implementation
│ ├── search/
│ │ └── vector_search_service.py # Semantic search with pgvector
│ ├── embeddings/
│ │ └── embedding_service.py # OpenAI embeddings generation
│ └── storage/
│ └── document_storage_service.py # Document chunking and storage
```
### MCP Server (port 8051) - Model Context Protocol
```
python/src/mcp/
├── mcp_server.py # FastAPI MCP server with SSE support
└── modules/
├── project_module.py # Project and task MCP tools
└── rag_module.py # RAG query and search MCP tools
```
### Agents Service (port 8052) - PydanticAI
```
python/src/agents/
├── server.py # FastAPI server for agent endpoints
├── base_agent.py # Base agent class with streaming support
├── document_agent.py # Document processing and chunking agent
├── rag_agent.py # RAG search and reranking agent
└── mcp_client.py # Client for calling MCP tools
```
## Key Files to Read for Context
### When working on Frontend
Key files to consider:
- `archon-ui-main/src/App.tsx` - Main app structure and routing
- `archon-ui-main/src/services/knowledgeBaseService.ts` - API call patterns
- `archon-ui-main/src/services/socketIOService.ts` - Real-time events
### When working on Backend
Key files to consider:
- `python/src/server/main.py` - FastAPI app setup
- `python/src/server/services/knowledge/knowledge_item_service.py` - Service pattern example
- `python/src/server/api_routes/knowledge_api.py` - API endpoint pattern
### When working on MCP
Key files to consider:
- `python/src/mcp/mcp_server.py` - MCP server implementation
- `python/src/mcp/modules/rag_module.py` - Tool implementations
### When working on RAG
Key files to consider:
- `python/src/server/services/search/vector_search_service.py` - Vector search logic
- `python/src/server/services/embeddings/embedding_service.py` - Embedding generation
- `python/src/agents/rag_agent.py` - RAG reranking
### When working on Crawling
Key files to consider:
- `python/src/server/services/rag/crawling_service.py` - Core crawling logic
- `python/src/server/services/knowledge/crawl_orchestration_service.py` - Crawl coordination
- `python/src/server/services/storage/document_storage_service.py` - Document storage
### When working on Projects/Tasks
Key files to consider:
- `python/src/server/services/projects/task_service.py` - Task management
- `archon-ui-main/src/components/project-tasks/TaskBoardView.tsx` - Kanban UI
### When working on Agents
Key files to consider:
- `python/src/agents/base_agent.py` - Agent base class
- `python/src/agents/rag_agent.py` - RAG agent implementation
## Critical Rules for This Codebase
Follow the guidelines in CLAUDE.md
## Current Focus Areas
- The projects feature is optional (toggle in Settings UI)
- All services communicate via HTTP, not gRPC
- Socket.IO handles all real-time updates
- Frontend uses Vite proxy for API calls in development
- Python backend uses `uv` for dependency management
Remember: This is alpha software. Prioritize functionality over production patterns. Make it work, make it right, then make it fast.

View File

@@ -0,0 +1,192 @@
---
description: Generate Root Cause Analysis report for Archon V2 Alpha issues
argument-hint: <issue description or error message>
allowed-tools: Bash(*), Read, Grep, LS, Write
thinking: auto
---
# Root Cause Analysis for Archon V2 Alpha
**Issue to investigate**: $ARGUMENTS
Investigate this issue systematically and generate an RCA report saved to `RCA.md` in the project root.
## Context About Archon
You're working with Archon V2 Alpha, a microservices-based AI knowledge management system:
- **Frontend**: React + TypeScript on port 3737
- **Main Server**: FastAPI + Socket.IO on port 8181
- **MCP Server**: Lightweight HTTP protocol server on port 8051
- **Agents Service**: PydanticAI agents on port 8052
- **Database**: Supabase (PostgreSQL + pgvector)
All services run in Docker containers managed by docker-compose.
## Investigation Approach
### 1. Initial Assessment
First, understand what's broken:
- What exactly is the symptom?
- Which service(s) are affected?
- When did it start happening?
- Is it reproducible?
### 2. System Health Check
Check if all services are running properly:
- Docker container status (`docker-compose ps`)
- Service health endpoints (ports 8181, 8051, 8052, 3737)
- Recent error logs from affected services
- Database connectivity
### 3. Error Handling Analysis
**Remember: In Alpha, we want DETAILED ERRORS that help us fix issues fast!**
Look for these error patterns:
**Good errors (what we want):**
- Stack traces with full context
- Specific error messages saying what failed
- Service initialization failures that stop the system
- Validation errors that show what was invalid
**Bad patterns (what causes problems):**
- Silent failures returning None/null
- Generic "Something went wrong" messages
- Catch-all exception handlers hiding the real issue
- Services continuing with broken dependencies
### 4. Targeted Investigation
Based on the issue type, investigate specific areas:
**For API/Backend issues**: Check FastAPI routes, service layer, database queries
**For Frontend issues**: Check React components, API calls, build process
**For MCP issues**: Check tool definitions, session management, HTTP calls
**For Real-time issues**: Check Socket.IO connections, event handling
**For Database issues**: Check Supabase connection, migrations, RLS policies
### 5. Root Cause Identification
- Follow error stack traces to the source
- Check if errors are being swallowed somewhere
- Look for missing error handling where it should fail fast
- Check recent code changes (`git log`)
- Identify any dependency or initialization order problems
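One rough way to check for swallowed errors, sketched as a simple text scan (not part of the Archon tooling, and only a heuristic):
```python
import pathlib
import re

# Flag bare catch-alls that immediately discard the error
PATTERN = re.compile(r"except Exception:\s*\n\s*(pass|return None)")

for path in pathlib.Path("python/src").rglob("*.py"):
    text = path.read_text(encoding="utf-8", errors="ignore")
    for match in PATTERN.finditer(text):
        line_no = text[: match.start()].count("\n") + 1
        print(f"{path}:{line_no}: possible swallowed exception")
```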
### 6. Impact Analysis
Determine the scope:
- Which features are affected?
- Is this a startup failure or runtime issue?
- Is there data loss or corruption?
- Are errors propagating correctly or being hidden?
## Key Places to Look
Think hard about where to look; there is some guidance below that you can follow.
**Configuration files:**
- `.env` - Environment variables
- `docker-compose.yml` - Service configuration
- `python/src/server/config.py` - Server settings
**Service entry points:**
- `python/src/server/main.py` - Main server
- `python/src/mcp/server.py` - MCP server
- `archon-ui-main/src/main.tsx` - Frontend
**Common problem areas:**
- `python/src/server/services/credentials_service.py` - Must initialize first
- `python/src/server/services/supabase_service.py` - Database connections
- `python/src/server/socketio_manager.py` - Real-time events
- `archon-ui-main/src/services/` - Frontend API calls
## Report Structure
Generate an RCA.md report with:
```markdown
# Root Cause Analysis
**Date**: [Today's date]
**Issue**: [Brief description]
**Severity**: [Critical/High/Medium/Low]
## Summary
[One paragraph overview of the issue and its root cause]
## Investigation
### Symptoms
- [What was observed]
### Diagnostics Performed
- [Health checks run]
- [Logs examined]
- [Code reviewed]
### Root Cause
[Detailed explanation of why this happened]
## Impact
- **Services Affected**: [List]
- **User Impact**: [Description]
- **Duration**: [Time period]
## Resolution
### Immediate Fix
[What needs to be done right now]
### Long-term Prevention
[How to prevent this in the future]
## Evidence
[Key logs, error messages, or code snippets that led to the diagnosis]
## Lessons Learned
[What we learned from this incident]
```
## Helpful Commands
```bash
# Check all services
docker-compose ps
# View recent errors
docker-compose logs --tail=50 [service-name] | grep -E "ERROR|Exception"
# Health checks
curl http://localhost:8181/health
curl http://localhost:8051/health
# Database test
docker-compose exec archon-server python -c "from src.server.services.supabase_service import SupabaseService; print(SupabaseService.health_check())"
# Resource usage
docker stats --no-stream
```
Remember: Focus on understanding the root cause, not just symptoms. The goal is to create a clear, actionable report that helps prevent similar issues in the future.

View File

@@ -0,0 +1,74 @@
# Create PRP
## Feature file: $ARGUMENTS
Generate a complete PRP for general feature implementation with thorough research. Ensure context is passed to the AI agent to enable self-validation and iterative refinement. Read the feature file first to understand what needs to be created, how the examples provided help, and any other considerations.
The AI agent only gets the context you append to the PRP plus its training data. Assume the AI agent has access to the codebase and the same knowledge cutoff as you, so it's important that your research findings are included or referenced in the PRP. The agent has web search capabilities, so pass URLs to documentation and examples.
## Research Process
1. **Codebase Analysis**
- Search for similar features/patterns in the codebase
- Identify files to reference in PRP
- Note existing conventions to follow
- Check test patterns for validation approach
2. **External Research**
- Search for similar features/patterns online
- Library documentation (include specific URLs)
- Implementation examples (GitHub/StackOverflow/blogs)
- Best practices and common pitfalls
3. **User Clarification** (if needed)
- Specific patterns to mirror and where to find them?
- Integration requirements and where to find them?
## PRP Generation
Using PRPs/templates/prp_base.md as template:
### Critical Context to Include and pass to the AI agent as part of the PRP
- **Documentation**: URLs with specific sections
- **Code Examples**: Real snippets from codebase
- **Gotchas**: Library quirks, version issues
- **Patterns**: Existing approaches to follow
### Implementation Blueprint
- Start with pseudocode showing approach
- Reference real files for patterns
- Include error handling strategy
- List the tasks to be completed to fulfill the PRP, in the order they should be completed
### Validation Gates (Must Be Executable), e.g. for Python
```bash
# Syntax/Style
ruff check --fix && mypy .
# Unit Tests
uv run pytest tests/ -v
```
**_ CRITICAL AFTER YOU ARE DONE RESEARCHING AND EXPLORING THE CODEBASE BEFORE YOU START WRITING THE PRP _**
**_ ULTRATHINK ABOUT THE PRP AND PLAN YOUR APPROACH THEN START WRITING THE PRP _**
## Output
Save as: `PRPs/{feature-name}.md`
## Quality Checklist
- [ ] All necessary context included
- [ ] Validation gates are executable by AI
- [ ] References existing patterns
- [ ] Clear implementation path
- [ ] Error handling documented
Score the PRP on a scale of 1-10 (confidence level to succeed in one-pass implementation using Claude Code)
Remember: The goal is one-pass implementation success through comprehensive context.

View File

@@ -0,0 +1,40 @@
# Execute BASE PRP
Implement a feature using the PRP file.
## PRP File: $ARGUMENTS
## Execution Process
1. **Load PRP**
- Read the specified PRP file
- Understand all context and requirements
- Follow all instructions in the PRP and extend the research if needed
- Ensure you have all needed context to implement the PRP fully
- Do more web searches and codebase exploration as needed
2. **ULTRATHINK**
- Think hard before you execute the plan. Create a comprehensive plan addressing all requirements.
- Break down complex tasks into smaller, manageable steps using your todos tools.
- Use the TodoWrite tool to create and track your implementation plan.
- Identify implementation patterns from existing code to follow.
3. **Execute the plan**
- Execute the PRP
- Implement all the code
4. **Validate**
- Run each validation command
- Fix any failures
- Re-run until all pass
5. **Complete**
- Ensure all checklist items done
- Run final validation suite
- Report completion status
- Read the PRP again to ensure you have implemented everything
6. **Reference the PRP**
- You can always reference the PRP again if needed
Note: If validation fails, use error patterns in PRP to fix and retry.

View File

@@ -0,0 +1,108 @@
# Create BASE PRP
## Feature: $ARGUMENTS
## PRP Creation Mission
Create a comprehensive PRP that enables **one-pass implementation success** through systematic research and context curation.
**Critical Understanding**: The executing AI agent only receives:
- Start by reading and understanding the PRP concepts in PRPs/README.md
- The PRP content you create
- Its training data knowledge
- Access to codebase files (but needs guidance on which ones)
**Therefore**: Your research and context curation directly determines implementation success. Incomplete context = implementation failure.
## Research Process
> During the research process, create clear tasks and spawn as many agents and subagents as needed using the batch tools. The deeper the research we do here, the better the PRP will be. We optimize for chance of success, not for speed.
1. **Codebase Analysis in depth**
- Create clear todos and spawn subagents to search the codebase for similar features/patterns. Think hard and plan your approach
- Identify all the necessary files to reference in the PRP
- Note all existing conventions to follow
- Check existing test patterns for validation approach
- Use the batch tools to spawn subagents to search the codebase for similar features/patterns
2. **External Research at scale**
- Create clear todos and spawn subagents with instructions to do deep research for similar features/patterns online, and include URLs to documentation and examples
- Library documentation (include specific URLs)
- For critical pieces of documentation add a .md file to PRPs/ai_docs and reference it in the PRP with clear reasoning and instructions
- Implementation examples (GitHub/StackOverflow/blogs)
- Best practices and common pitfalls found during research
- Use the batch tools to spawn subagents to search for similar features/patterns online and include URLs to documentation and examples
3. **User Clarification**
- Ask for clarification if you need it
## PRP Generation Process
### Step 1: Choose Template
Use `PRPs/templates/prp_base.md` as your template structure - it contains all necessary sections and formatting.
### Step 2: Context Completeness Validation
Before writing, apply the **"No Prior Knowledge" test** from the template:
_"If someone knew nothing about this codebase, would they have everything needed to implement this successfully?"_
### Step 3: Research Integration
Transform your research findings into the template sections:
**Goal Section**: Use research to define specific, measurable Feature Goal and concrete Deliverable
**Context Section**: Populate YAML structure with your research findings - specific URLs, file patterns, gotchas
**Implementation Tasks**: Create dependency-ordered tasks using information-dense keywords from codebase analysis
**Validation Gates**: Use project-specific validation commands that you've verified work in this codebase
### Step 4: Information Density Standards
Ensure every reference is **specific and actionable**:
- URLs include section anchors, not just domain names
- File references include specific patterns to follow, not generic mentions
- Task specifications include exact naming conventions and placement
- Validation commands are project-specific and executable
### Step 5: ULTRATHINK Before Writing
After research completion, create comprehensive PRP writing plan using TodoWrite tool:
- Plan how to structure each template section with your research findings
- Identify gaps that need additional research
- Create systematic approach to filling template with actionable context
## Output
Save as: `PRPs/{feature-name}.md`
## PRP Quality Gates
### Context Completeness Check
- [ ] Passes "No Prior Knowledge" test from template
- [ ] All YAML references are specific and accessible
- [ ] Implementation tasks include exact naming and placement guidance
- [ ] Validation commands are project-specific and verified working
### Template Structure Compliance
- [ ] All required template sections completed
- [ ] Goal section has specific Feature Goal, Deliverable, Success Definition
- [ ] Implementation Tasks follow dependency ordering
- [ ] Final Validation Checklist is comprehensive
### Information Density Standards
- [ ] No generic references - all are specific and actionable
- [ ] File patterns point at specific examples to follow
- [ ] URLs include section anchors for exact guidance
- [ ] Task specifications use information-dense keywords from codebase
## Success Metrics
**Confidence Score**: Rate 1-10 for one-pass implementation success likelihood
**Validation**: The completed PRP should enable an AI agent unfamiliar with the codebase to implement the feature successfully using only the PRP content and codebase access.

View File

@@ -0,0 +1,55 @@
# Execute BASE PRP
## PRP File: $ARGUMENTS
## Mission: One-Pass Implementation Success
PRPs enable working code on the first attempt through:
- **Context Completeness**: Everything needed, nothing guessed
- **Progressive Validation**: 4-level gates catch errors early
- **Pattern Consistency**: Follow existing codebase approaches
- Read PRPs/README.md to understand PRP concepts
**Your Goal**: Transform the PRP into working code that passes all validation gates.
## Execution Process
1. **Load PRP**
- Read the specified PRP file completely
- Absorb all context, patterns, requirements and gather codebase intelligence
- Use the provided documentation references and file patterns; consume the right documentation before the appropriate todo/task
- Trust the PRP's context and guidance - it's designed for one-pass success
- Do additional codebase exploration and research as needed
2. **ULTRATHINK & Plan**
- Create comprehensive implementation plan following the PRP's task order
- Break down into clear todos using TodoWrite tool
- Use subagents for parallel work when beneficial (always create PRP-inspired prompts for subagents when used)
- Follow the patterns referenced in the PRP
- Use specific file paths, class names, and method signatures from PRP context
- Never guess - always verify the codebase patterns and examples referenced in the PRP yourself
3. **Execute Implementation**
- Follow the PRP's Implementation Tasks sequence, add more detail as needed, especially when using subagents
- Use the patterns and examples referenced in the PRP
- Create files in locations specified by the desired codebase tree
- Apply naming conventions from the task specifications and CLAUDE.md
4. **Progressive Validation**
**Execute the level validation system from the PRP:**
- **Level 1**: Run syntax & style validation commands from PRP
- **Level 2**: Execute unit test validation from PRP
- **Level 3**: Run integration testing commands from PRP
- **Level 4**: Execute specified validation from PRP
**Each level must pass before proceeding to the next.**
5. **Completion Verification**
- Work through the Final Validation Checklist in the PRP
- Verify all Success Criteria from the "What" section are met
- Confirm all Anti-Patterns were avoided
- Implementation is ready and working
**Failure Protocol**: When validation fails, use the patterns and gotchas from the PRP to fix issues, then re-run validation until passing.