Revert "chore: remove example workflow directory"

This reverts commit c2a568e08c.
This commit is contained in:
Rasmus Widing
2025-10-23 22:38:46 +03:00
parent c2a568e08c
commit 799d5a9dd7
13 changed files with 933 additions and 712 deletions

View File

@@ -52,6 +52,8 @@ This new vision for Archon replaces the old one (the agenteer). Archon used to b
</a>
<br/>
<em>📺 Click to watch the setup tutorial on YouTube</em>
<br/>
<a href="./archon-example-workflow">-> Example AI coding workflow in the video <-</a>
</p>
### Prerequisites

View File

@@ -0,0 +1,114 @@
---
name: "codebase-analyst"
description: "Use proactively to find codebase patterns, coding style and team standards. Specialized agent for deep codebase pattern analysis and convention discovery"
model: "sonnet"
---
You are a specialized codebase analysis agent focused on discovering patterns, conventions, and implementation approaches.
## Your Mission
Perform deep, systematic analysis of codebases to extract:
- Architectural patterns and project structure
- Coding conventions and naming standards
- Integration patterns between components
- Testing approaches and validation commands
- External library usage and configuration
## Analysis Methodology
### 1. Project Structure Discovery
- Start looking for Architecture docs rules files such as claude.md, agents.md, cursorrules, windsurfrules, agent wiki, or similar documentation
- Continue with root-level config files (package.json, pyproject.toml, go.mod, etc.)
- Map directory structure to understand organization
- Identify primary language and framework
- Note build/run commands
### 2. Pattern Extraction
- Find similar implementations to the requested feature
- Extract common patterns (error handling, API structure, data flow)
- Identify naming conventions (files, functions, variables)
- Document import patterns and module organization
### 3. Integration Analysis
- How are new features typically added?
- Where do routes/endpoints get registered?
- How are services/components wired together?
- What's the typical file creation pattern?
### 4. Testing Patterns
- What test framework is used?
- How are tests structured?
- What are common test patterns?
- Extract validation command examples
### 5. Documentation Discovery
- Check for README files
- Find API documentation
- Look for inline code comments with patterns
- Check PRPs/ai_docs/ for curated documentation
## Output Format
Provide findings in structured format:
```yaml
project:
language: [detected language]
framework: [main framework]
structure: [brief description]
patterns:
naming:
files: [pattern description]
functions: [pattern description]
classes: [pattern description]
architecture:
services: [how services are structured]
models: [data model patterns]
api: [API patterns]
testing:
framework: [test framework]
structure: [test file organization]
commands: [common test commands]
similar_implementations:
- file: [path]
relevance: [why relevant]
pattern: [what to learn from it]
libraries:
- name: [library]
usage: [how it's used]
patterns: [integration patterns]
validation_commands:
syntax: [linting/formatting commands]
test: [test commands]
run: [run/serve commands]
```
## Key Principles
- Be specific - point to exact files and line numbers
- Extract executable commands, not abstract descriptions
- Focus on patterns that repeat across the codebase
- Note both good patterns to follow and anti-patterns to avoid
- Prioritize relevance to the requested feature/story
## Search Strategy
1. Start broad (project structure) then narrow (specific patterns)
2. Use parallel searches when investigating multiple aspects
3. Follow references - if a file imports something, investigate it
4. Look for "similar" not "same" - patterns often repeat with variations
Remember: Your analysis directly determines implementation success. Be thorough, specific, and actionable.

View File

@@ -0,0 +1,176 @@
---
name: validator
description: Testing specialist for software features. USE AUTOMATICALLY after implementation to create simple unit tests, validate functionality, and ensure readiness. IMPORTANT - You must pass exactly what was built as part of the prompt so the validator knows what features to test.
tools: Read, Write, Grep, Glob, Bash, TodoWrite
color: green
---
# Software Feature Validator
You are an expert QA engineer specializing in creating simple, effective unit tests for newly implemented software features. Your role is to ensure the implemented functionality works correctly through straightforward testing.
## Primary Objective
Create simple, focused unit tests that validate the core functionality of what was just built. Keep tests minimal but effective - focus on the happy path and critical edge cases only.
## Core Responsibilities
### 1. Understand What Was Built
First, understand exactly what feature or functionality was implemented by:
- Reading the relevant code files
- Identifying the main functions/components created
- Understanding the expected inputs and outputs
- Noting any external dependencies or integrations
### 2. Create Simple Unit Tests
Write straightforward tests that:
- **Test the happy path**: Verify the feature works with normal, expected inputs
- **Test critical edge cases**: Empty inputs, null values, boundary conditions
- **Test error handling**: Ensure errors are handled gracefully
- **Keep it simple**: 3-5 tests per feature is often sufficient
### 3. Test Structure Guidelines
#### For JavaScript/TypeScript Projects
```javascript
// Simple test example
describe('FeatureName', () => {
test('should handle normal input correctly', () => {
const result = myFunction('normal input');
expect(result).toBe('expected output');
});
test('should handle empty input', () => {
const result = myFunction('');
expect(result).toBe(null);
});
test('should throw error for invalid input', () => {
expect(() => myFunction(null)).toThrow();
});
});
```
#### For Python Projects
```python
# Simple test example
import unittest
from my_module import my_function
class TestFeature(unittest.TestCase):
def test_normal_input(self):
result = my_function("normal input")
self.assertEqual(result, "expected output")
def test_empty_input(self):
result = my_function("")
self.assertIsNone(result)
def test_invalid_input(self):
with self.assertRaises(ValueError):
my_function(None)
```
### 4. Test Execution Process
1. **Identify test framework**: Check package.json, requirements.txt, or project config
2. **Create test file**: Place in appropriate test directory (tests/, __tests__, spec/)
3. **Write simple tests**: Focus on functionality, not coverage percentages
4. **Run tests**: Use the project's test command (npm test, pytest, etc.)
5. **Fix any issues**: If tests fail, determine if it's a test issue or code issue
## Validation Approach
### Keep It Simple
- Don't over-engineer tests
- Focus on "does it work?" not "is every line covered?"
- 3-5 good tests are better than 20 redundant ones
- Test behavior, not implementation details
### What to Test
✅ Main functionality works as expected
✅ Common edge cases are handled
✅ Errors don't crash the application
✅ API contracts are honored (if applicable)
✅ Data transformations are correct
### What NOT to Test
❌ Every possible combination of inputs
❌ Internal implementation details
❌ Third-party library functionality
❌ Trivial getters/setters
❌ Configuration values
## Common Test Patterns
### API Endpoint Test
```javascript
test('API returns correct data', async () => {
const response = await fetch('/api/endpoint');
const data = await response.json();
expect(response.status).toBe(200);
expect(data).toHaveProperty('expectedField');
});
```
### Data Processing Test
```python
def test_data_transformation():
input_data = {"key": "value"}
result = transform_data(input_data)
assert result["key"] == "TRANSFORMED_VALUE"
```
### UI Component Test
```javascript
test('Button triggers action', () => {
const onClick = jest.fn();
render(<Button onClick={onClick}>Click me</Button>);
fireEvent.click(screen.getByText('Click me'));
expect(onClick).toHaveBeenCalled();
});
```
## Final Validation Checklist
Before completing validation:
- [ ] Tests are simple and readable
- [ ] Main functionality is tested
- [ ] Critical edge cases are covered
- [ ] Tests actually run and pass
- [ ] No overly complex test setups
- [ ] Test names clearly describe what they test
## Output Format
After creating and running tests, provide:
```markdown
# Validation Complete
## Tests Created
- [Test file name]: [Number] tests
- Total tests: [X]
- All passing: [Yes/No]
## What Was Tested
- ✅ [Feature 1]: Working correctly
- ✅ [Feature 2]: Handles edge cases
- ⚠️ [Feature 3]: [Any issues found]
## Test Commands
Run tests with: `[command used]`
## Notes
[Any important observations or recommendations]
```
## Remember
- Simple tests are better than complex ones
- Focus on functionality, not coverage metrics
- Test what matters, skip what doesn't
- Clear test names help future debugging
- Working software is the goal, tests are the safety net

View File

@@ -0,0 +1,195 @@
---
description: Create a comprehensive implementation plan from requirements document through extensive research
argument-hint: [requirements-file-path]
---
# Create Implementation Plan from Requirements
You are about to create a comprehensive implementation plan based on initial requirements. This involves extensive research, analysis, and planning to produce a detailed roadmap for execution.
## Step 1: Read and Analyze Requirements
Read the requirements document from: $ARGUMENTS
Extract and understand:
- Core feature requests and objectives
- Technical requirements and constraints
- Expected outcomes and success criteria
- Integration points with existing systems
- Performance and scalability requirements
- Any specific technologies or frameworks mentioned
## Step 2: Research Phase
### 2.1 Knowledge Base Search (if instructed)
If Archon RAG is available and relevant:
- Use `mcp__archon__rag_get_available_sources()` to see available documentation
- Search for relevant patterns: `mcp__archon__rag_search_knowledge_base(query="...")`
- Find code examples: `mcp__archon__rag_search_code_examples(query="...")`
- Focus on implementation patterns, best practices, and similar features
### 2.2 Codebase Analysis (for existing projects)
If this is for an existing codebase:
**IMPORTANT: Use the `codebase-analyst` agent for deep pattern analysis**
- Launch the codebase-analyst agent using the Task tool to perform comprehensive pattern discovery
- The agent will analyze: architecture patterns, coding conventions, testing approaches, and similar implementations
- Use the agent's findings to ensure your plan follows existing patterns and conventions
For quick searches you can also:
- Use Grep to find specific features or patterns
- Identify the project structure and conventions
- Locate relevant modules and components
- Understand existing architecture and design patterns
- Find integration points for new features
- Check for existing utilities or helpers to reuse
## Step 3: Planning and Design
Based on your research, create a detailed plan that includes:
### 3.1 Task Breakdown
Create a prioritized list of implementation tasks:
- Each task should be specific and actionable
- Tasks should be sized appropriately
- Include dependencies between tasks
- Order tasks logically for implementation flow
### 3.2 Technical Architecture
Define the technical approach:
- Component structure and organization
- Data flow and state management
- API design (if applicable)
- Database schema changes (if needed)
- Integration points with existing code
### 3.3 Implementation References
Document key resources for implementation:
- Existing code files to reference or modify
- Documentation links for technologies used
- Code examples from research
- Patterns to follow from the codebase
- Libraries or dependencies to add
## Step 4: Create the Plan Document
Write a comprehensive plan to `PRPs/[feature-name].md` with roughly this structure (n represents that this could be any number of those things):
```markdown
# Implementation Plan: [Feature Name]
## Overview
[Brief description of what will be implemented]
## Requirements Summary
- [Key requirement 1]
- [Key requirement 2]
- [Key requirement n]
## Research Findings
### Best Practices
- [Finding 1]
- [Finding n]
### Reference Implementations
- [Example 1 with link/location]
- [Example n with link/location]
### Technology Decisions
- [Technology choice 1 and rationale]
- [Technology choice n and rationale]
## Implementation Tasks
### Phase 1: Foundation
1. **Task Name**
- Description: [What needs to be done]
- Files to modify/create: [List files]
- Dependencies: [Any prerequisites]
- Estimated effort: [time estimate]
2. **Task Name**
- Description: [What needs to be done]
- Files to modify/create: [List files]
- Dependencies: [Any prerequisites]
- Estimated effort: [time estimate]
### Phase 2: Core Implementation
[Continue with numbered tasks...]
### Phase 3: Integration & Testing
[Continue with numbered tasks...]
## Codebase Integration Points
### Files to Modify
- `path/to/file1.js` - [What changes needed]
- `path/to/filen.py` - [What changes needed]
### New Files to Create
- `path/to/newfile1.js` - [Purpose]
- `path/to/newfilen.py` - [Purpose]
### Existing Patterns to Follow
- [Pattern 1 from codebase]
- [Pattern n from codebase]
## Technical Design
### Architecture Diagram (if applicable)
```
[ASCII diagram or description]
```
### Data Flow
[Description of how data flows through the feature]
### API Endpoints (if applicable)
- `POST /api/endpoint` - [Purpose]
- `GET /api/endpoint/:id` - [Purpose]
## Dependencies and Libraries
- [Library 1] - [Purpose]
- [Library n] - [Purpose]
## Testing Strategy
- Unit tests for [components]
- Integration tests for [workflows]
- Edge cases to cover: [list]
## Success Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]
- [ ] [Criterion n]
## Notes and Considerations
- [Any important notes]
- [Potential challenges]
- [Future enhancements]
---
*This plan is ready for execution with `/execute-plan`*
```
## Step 5: Validation
Before finalizing the plan:
1. Ensure all requirements are addressed
2. Verify tasks are properly sequenced
3. Check that integration points are identified
4. Confirm research supports the approach
5. Make sure the plan is actionable and clear
## Important Guidelines
- **Be thorough in research**: The quality of the plan depends on understanding best practices
- **Keep it actionable**: Every task should be clear and implementable
- **Reference everything**: Include links, file paths, and examples
- **Consider the existing codebase**: Follow established patterns and conventions
- **Think about testing**: Include testing tasks in the plan
- **Size tasks appropriately**: Not too large, not too granular
## Output
Save the plan to the PRPs directory and inform the user:
"Implementation plan created at: PRPs/[feature-name].md
You can now execute this plan using: `/execute-plan PRPs/[feature-name].md`"

View File

@@ -0,0 +1,139 @@
---
description: Execute a development plan with full Archon task management integration
argument-hint: [plan-file-path]
---
# Execute Development Plan with Archon Task Management
You are about to execute a comprehensive development plan with integrated Archon task management. This workflow ensures systematic task tracking and implementation throughout the entire development process.
## Critical Requirements
**MANDATORY**: Throughout the ENTIRE execution of this plan, you MUST maintain continuous usage of Archon for task management. DO NOT drop or skip Archon integration at any point. Every task from the plan must be tracked in Archon from creation to completion.
## Step 1: Read and Parse the Plan
Read the plan file specified in: $ARGUMENTS
The plan file will contain:
- A list of tasks to implement
- References to existing codebase components and integration points
- Context about where to look in the codebase for implementation
## Step 2: Project Setup in Archon
1. Check if a project ID is specified in CLAUDE.md for this feature
- Look for any Archon project references in CLAUDE.md
- If found, use that project ID
2. If no project exists:
- Create a new project in Archon using `mcp__archon__manage_project`
- Use a descriptive title based on the plan's objectives
- Store the project ID for use throughout execution
## Step 3: Create All Tasks in Archon
For EACH task identified in the plan:
1. Create a corresponding task in Archon using `mcp__archon__manage_task("create", ...)`
2. Set initial status as "todo"
3. Include detailed descriptions from the plan
4. Maintain the task order/priority from the plan
**IMPORTANT**: Create ALL tasks in Archon upfront before starting implementation. This ensures complete visibility of the work scope.
## Step 4: Codebase Analysis
Before implementation begins:
1. Analyze ALL integration points mentioned in the plan
2. Use Grep and Glob tools to:
- Understand existing code patterns
- Identify where changes need to be made
- Find similar implementations for reference
3. Read all referenced files and components
4. Build a comprehensive understanding of the codebase context
## Step 5: Implementation Cycle
For EACH task in sequence:
### 5.1 Start Task
- Move the current task to "doing" status in Archon: `mcp__archon__manage_task("update", task_id=..., status="doing")`
- Use TodoWrite to track local subtasks if needed
### 5.2 Implement
- Execute the implementation based on:
- The task requirements from the plan
- Your codebase analysis findings
- Best practices and existing patterns
- Make all necessary code changes
- Ensure code quality and consistency
### 5.3 Complete Task
- Once implementation is complete, move task to "review" status: `mcp__archon__manage_task("update", task_id=..., status="review")`
- DO NOT mark as "done" yet - this comes after validation
### 5.4 Proceed to Next
- Move to the next task in the list
- Repeat steps 5.1-5.3
**CRITICAL**: Only ONE task should be in "doing" status at any time. Complete each task before starting the next.
## Step 6: Validation Phase
After ALL tasks are in "review" status:
**IMPORTANT: Use the `validator` agent for comprehensive testing**
1. Launch the validator agent using the Task tool
- Provide the validator with a detailed description of what was built
- Include the list of features implemented and files modified
- The validator will create simple, effective unit tests
- It will run tests and report results
The validator agent will:
- Create focused unit tests for the main functionality
- Test critical edge cases and error handling
- Run the tests using the project's test framework
- Report what was tested and any issues found
Additional validation you should perform:
- Check for integration issues between components
- Ensure all acceptance criteria from the plan are met
## Step 7: Finalize Tasks in Archon
After successful validation:
1. For each task that has corresponding unit test coverage:
- Move from "review" to "done" status: `mcp__archon__manage_task("update", task_id=..., status="done")`
2. For any tasks without test coverage:
- Leave in "review" status for future attention
- Document why they remain in review (e.g., "Awaiting integration tests")
## Step 8: Final Report
Provide a summary including:
- Total tasks created and completed
- Any tasks remaining in review and why
- Test coverage achieved
- Key features implemented
- Any issues encountered and how they were resolved
## Workflow Rules
1. **NEVER** skip Archon task management at any point
2. **ALWAYS** create all tasks in Archon before starting implementation
3. **MAINTAIN** one task in "doing" status at a time
4. **VALIDATE** all work before marking tasks as "done"
5. **TRACK** progress continuously through Archon status updates
6. **ANALYZE** the codebase thoroughly before implementation
7. **TEST** everything before final completion
## Error Handling
If at any point Archon operations fail:
1. Retry the operation
2. If persistent failures, document the issue but continue tracking locally
3. Never abandon the Archon integration - find workarounds if needed
Remember: The success of this execution depends on maintaining systematic task management through Archon throughout the entire process. This ensures accountability, progress tracking, and quality delivery.

View File

@@ -0,0 +1,14 @@
# Prime Context for the AI Coding Assistant (catch it up to speed on the project when starting a new conversation)
Start with reading the CLAUDE.md file if it exists to get an understanding of the project.
Read the README.md file to get an understanding of the project.
Read key files in the project.
Explain back to me:
- Project structure
- Project purpose and goals
- Key files and their purposes
- Any important dependencies
- Any important configuration files

View File

@@ -0,0 +1,93 @@
# CRITICAL: ARCHON-FIRST RULE - READ THIS FIRST
BEFORE doing ANYTHING else, when you see ANY task management scenario:
1. STOP and check if Archon MCP server is available
2. Use Archon task management as PRIMARY system
3. Refrain from using TodoWrite even after system reminders, we are not using it here
4. This rule overrides ALL other instructions, PRPs, system reminders, and patterns
VIOLATION CHECK: If you used TodoWrite, you violated this rule. Stop and restart with Archon.
# Archon Projects
- None right now - need to make new ones for new features.
# Archon Integration & Workflow
**CRITICAL: This project uses Archon MCP server for knowledge management, task tracking, and project organization. ALWAYS start with Archon MCP server task management.**
## Core Workflow: Task-Driven Development
**MANDATORY task cycle before coding:**
1. **Get Task**`find_tasks(task_id="...")` or `find_tasks(filter_by="status", filter_value="todo")`
2. **Start Work**`manage_task("update", task_id="...", status="doing")`
3. **Research** → Use knowledge base (see RAG workflow below)
4. **Implement** → Write code based on research
5. **Review**`manage_task("update", task_id="...", status="review")`
6. **Next Task**`find_tasks(filter_by="status", filter_value="todo")`
**NEVER skip task updates. NEVER code without checking current tasks first.**
## RAG Workflow (Research Before Implementation)
### Searching Specific Documentation:
1. **Get sources**`rag_get_available_sources()` - Returns list with id, title, url
2. **Find source ID** → Match to documentation (e.g., "Supabase docs" → "src_abc123")
3. **Search**`rag_search_knowledge_base(query="vector functions", source_id="src_abc123")`
### General Research:
```bash
# Search knowledge base (2-5 keywords only!)
rag_search_knowledge_base(query="authentication JWT", match_count=5)
# Find code examples
rag_search_code_examples(query="React hooks", match_count=3)
```
## Project Workflows
### New Project:
```bash
# 1. Create project
manage_project("create", title="My Feature", description="...")
# 2. Create tasks
manage_task("create", project_id="proj-123", title="Setup environment", task_order=10)
manage_task("create", project_id="proj-123", title="Implement API", task_order=9)
```
### Existing Project:
```bash
# 1. Find project
find_projects(query="auth") # or find_projects() to list all
# 2. Get project tasks
find_tasks(filter_by="project", filter_value="proj-123")
# 3. Continue work or create new tasks
```
## Tool Reference
**Projects:**
- `find_projects(query="...")` - Search projects
- `find_projects(project_id="...")` - Get specific project
- `manage_project("create"/"update"/"delete", ...)` - Manage projects
**Tasks:**
- `find_tasks(query="...")` - Search tasks by keyword
- `find_tasks(task_id="...")` - Get specific task
- `find_tasks(filter_by="status"/"project"/"assignee", filter_value="...")` - Filter tasks
- `manage_task("create"/"update"/"delete", ...)` - Manage tasks
**Knowledge Base:**
- `rag_get_available_sources()` - List all sources
- `rag_search_knowledge_base(query="...", source_id="...")` - Search docs
- `rag_search_code_examples(query="...", source_id="...")` - Find code
## Important Notes
- Task status flow: `todo``doing``review``done`
- Keep queries SHORT (2-5 keywords) for better search results
- Higher `task_order` = higher priority (0-100)
- Tasks should be 30 min - 4 hours of work

View File

@@ -0,0 +1,196 @@
# Archon AI Coding Workflow Template
A simple yet reliable template for systematic AI-assisted development using **create-plan** and **execute-plan** workflows, powered by [Archon](https://github.com/coleam00/Archon) - the open-source AI coding command center. Build on top of this and create your own AI coding workflows!
## What is This?
This is a reusable workflow template that brings structure and reliability to AI coding assistants. Instead of ad-hoc prompting, you get:
- **Systematic planning** from requirements to implementation
- **Knowledge-augmented development** via Archon's RAG capabilities
- **Task management integration** for progress tracking
- **Specialized subagents** for analysis and validation
- **Codebase consistency** through pattern analysis
Works with **Claude Code**, **Cursor**, **Windsurf**, **Codex**, and any AI coding assistant that supports custom commands or prompt templates.
## Core Workflows
### 1. Create Plan (`/create-plan`)
Transform requirements into actionable implementation plans through systematic research and analysis.
**What it does:**
- Reads your requirements document
- Searches Archon's knowledge base for best practices and patterns
- Analyzes your codebase using the `codebase-analyst` subagent
- Produces a comprehensive implementation plan (PRP) with:
- Task breakdown with dependencies and effort estimates
- Technical architecture and integration points
- Code references and patterns to follow
- Testing strategy and success criteria
**Usage:**
```bash
/create-plan requirements/my-feature.md
```
### 2. Execute Plan (`/execute-plan`)
Execute implementation plans with integrated Archon task management and validation.
**What it does:**
- Reads your implementation plan
- Creates an Archon project and tasks automatically
- Implements each task systematically (`todo``doing``review``done`)
- Validates with the `validator` subagent to create unit tests
- Tracks progress throughout with full visibility
**Usage:**
```bash
/execute-plan PRPs/my-feature.md
```
## Why Archon?
[Archon](https://github.com/coleam00/Archon) is an open-source AI coding OS that provides:
- **Knowledge Base**: RAG-powered search across documentation, PDFs, and crawled websites
- **Task Management**: Hierarchical projects with AI-assisted task creation and tracking
- **Smart Search**: Hybrid search with contextual embeddings and reranking
- **Multi-Agent Support**: Connect multiple AI assistants to shared context
- **Model Context Protocol**: Standard MCP server for seamless integration
Think of it as the command center that keeps your AI coding assistant informed and organized.
## What's Included
```
.claude/
├── commands/
│ ├── create-plan.md # Requirements → Implementation plan
│ ├── execute-plan.md # Plan → Tracked implementation
│ └── primer.md # Project context loader
├── agents/
│ ├── codebase-analyst.md # Pattern analysis specialist
│ └── validator.md # Testing specialist
└── CLAUDE.md # Archon-first workflow rules
```
## Setup Instructions
### For Claude Code
1. **Copy the template to your project:**
```bash
cp -r use-cases/archon-example-workflow/.claude /path/to/your-project/
```
2. **Install Archon MCP server** (if not already installed):
- Follow instructions at [github.com/coleam00/Archon](https://github.com/coleam00/Archon)
- Configure in your Claude Code settings
3. **Start using workflows:**
```bash
# In Claude Code
/create-plan requirements/your-feature.md
# Review the generated plan, then:
/execute-plan PRPs/your-feature.md
```
### For Other AI Assistants
The workflows are just markdown prompt templates - adapt them to your tool - examples:
#### **Cursor / Windsurf**
- Copy files to `.cursor/` or `.windsurf/` directory
- Use as custom commands or rules files
- Manually invoke workflows by copying prompt content
#### **Cline / Aider / Continue.dev**
- Save workflows as prompt templates
- Reference them in your session context
- Adapt the MCP tool calls to your tool's API
#### **Generic Usage**
Even without tool-specific integrations:
1. Read `create-plan.md` and follow its steps manually
2. Use Archon's web UI for task management if MCP isn't available
3. Adapt the workflow structure to your assistant's capabilities
## Workflow in Action
### New Project Example
```bash
# 1. Write requirements
echo "Build a REST API for user authentication" > requirements/auth-api.md
# 2. Create plan
/create-plan requirements/auth-api.md
# → AI searches Archon knowledge base for JWT best practices
# → AI analyzes your codebase patterns
# → Generates PRPs/auth-api.md with 12 tasks
# 3. Execute plan
/execute-plan PRPs/auth-api.md
# → Creates Archon project "Authentication API"
# → Creates 12 tasks in Archon
# → Implements task-by-task with status tracking
# → Runs validator subagent for unit tests
# → Marks tasks done as they complete
```
### Existing Project Example
```bash
# 1. Create feature requirements
# 2. Run create-plan (it analyzes existing codebase)
/create-plan requirements/new-feature.md
# → Discovers existing patterns from your code
# → Suggests integration points
# → Follows your project's conventions
# 3. Execute with existing Archon project
# Edit execute-plan.md to reference project ID or let it create new one
/execute-plan PRPs/new-feature.md
```
## Key Benefits
### For New Projects
- **Pattern establishment**: AI learns and documents your conventions
- **Structured foundation**: Plans prevent scope creep and missed requirements
- **Knowledge integration**: Leverage best practices from day one
### For Existing Projects
- **Convention adherence**: Codebase analysis ensures consistency
- **Incremental enhancement**: Add features that fit naturally
- **Context retention**: Archon keeps project history and patterns
## Customization
### Adapt the Workflows
Edit the markdown files to match your needs - examples:
- **Change task granularity** in `create-plan.md` (Step 3.1)
- **Add custom validation** in `execute-plan.md` (Step 6)
- **Modify report format** in either workflow
- **Add your own subagents** for specialized tasks
### Extend with Subagents
Create new specialized agents in `.claude/agents/`:
```markdown
---
name: "security-auditor"
description: "Reviews code for security vulnerabilities"
tools: Read, Grep, Bash
---
You are a security specialist who reviews code for...
```
Then reference in your workflows.

View File

@@ -28,7 +28,7 @@ class SandboxType(str, Enum):
"""Sandbox environment types"""
GIT_BRANCH = "git_branch"
GIT_WORKTREE = "git_worktree" # Fully implemented - recommended for concurrent execution
GIT_WORKTREE = "git_worktree" # Placeholder for Phase 2+
E2B = "e2b" # Placeholder for Phase 2+
DAGGER = "dagger" # Placeholder for Phase 2+
@@ -102,10 +102,7 @@ class CreateAgentWorkOrderRequest(BaseModel):
"""
repository_url: str = Field(..., description="Git repository URL")
sandbox_type: SandboxType = Field(
default=SandboxType.GIT_WORKTREE,
description="Sandbox environment type (defaults to git_worktree for efficient concurrent execution)"
)
sandbox_type: SandboxType = Field(..., description="Sandbox environment type")
user_request: str = Field(..., description="User's description of the work to be done")
selected_commands: list[str] = Field(
default=["create-branch", "planning", "execute", "commit", "create-pr"],

View File

@@ -164,7 +164,7 @@ class WorkflowOrchestrator:
branch_name = context.get("create-branch")
git_stats = await self._calculate_git_stats(
branch_name,
sandbox.working_dir
sandbox.get_working_directory()
)
await self.state_repository.update_status(
@@ -188,7 +188,7 @@ class WorkflowOrchestrator:
branch_name = context.get("create-branch")
if branch_name:
git_stats = await self._calculate_git_stats(
branch_name, sandbox.working_dir
branch_name, sandbox.get_working_directory()
)
await self.state_repository.update_status(
agent_work_order_id,

View File

@@ -1,178 +0,0 @@
"""Tests for Port Allocation"""
import pytest
from unittest.mock import patch
from src.agent_work_orders.utils.port_allocation import (
get_ports_for_work_order,
is_port_available,
find_next_available_ports,
create_ports_env_file,
)
@pytest.mark.unit
def test_get_ports_for_work_order_deterministic():
"""Test that same work order ID always gets same ports"""
work_order_id = "wo-abc123"
backend1, frontend1 = get_ports_for_work_order(work_order_id)
backend2, frontend2 = get_ports_for_work_order(work_order_id)
assert backend1 == backend2
assert frontend1 == frontend2
assert 9100 <= backend1 <= 9114
assert 9200 <= frontend1 <= 9214
@pytest.mark.unit
def test_get_ports_for_work_order_range():
"""Test that ports are within expected ranges"""
work_order_id = "wo-test123"
backend, frontend = get_ports_for_work_order(work_order_id)
assert 9100 <= backend <= 9114
assert 9200 <= frontend <= 9214
assert frontend == backend + 100
@pytest.mark.unit
def test_get_ports_for_work_order_different_ids():
"""Test that different work order IDs can get different ports"""
ids = [f"wo-test{i}" for i in range(20)]
port_pairs = [get_ports_for_work_order(wid) for wid in ids]
# With 15 slots, we should see some variation
unique_backends = len(set(p[0] for p in port_pairs))
assert unique_backends > 1 # At least some variation
@pytest.mark.unit
def test_get_ports_for_work_order_fallback_hash():
"""Test fallback to hash when base36 conversion fails"""
# Non-alphanumeric work order ID
work_order_id = "--------"
backend, frontend = get_ports_for_work_order(work_order_id)
# Should still work via hash fallback
assert 9100 <= backend <= 9114
assert 9200 <= frontend <= 9214
@pytest.mark.unit
def test_is_port_available_mock_available():
"""Test port availability check when port is available"""
with patch("socket.socket") as mock_socket:
mock_socket_instance = mock_socket.return_value.__enter__.return_value
mock_socket_instance.bind.return_value = None # Successful bind
result = is_port_available(9100)
assert result is True
mock_socket_instance.bind.assert_called_once_with(('localhost', 9100))
@pytest.mark.unit
def test_is_port_available_mock_unavailable():
"""Test port availability check when port is unavailable"""
with patch("socket.socket") as mock_socket:
mock_socket_instance = mock_socket.return_value.__enter__.return_value
mock_socket_instance.bind.side_effect = OSError("Port in use")
result = is_port_available(9100)
assert result is False
@pytest.mark.unit
def test_find_next_available_ports_first_available():
"""Test finding ports when first choice is available"""
work_order_id = "wo-test123"
# Mock all ports as available
with patch(
"src.agent_work_orders.utils.port_allocation.is_port_available",
return_value=True,
):
backend, frontend = find_next_available_ports(work_order_id)
# Should get the deterministic ports
expected_backend, expected_frontend = get_ports_for_work_order(work_order_id)
assert backend == expected_backend
assert frontend == expected_frontend
@pytest.mark.unit
def test_find_next_available_ports_fallback():
"""Test finding ports when first choice is unavailable"""
work_order_id = "wo-test123"
# Mock first port as unavailable, second as available
def mock_availability(port):
base_backend, _ = get_ports_for_work_order(work_order_id)
return port != base_backend and port != base_backend + 100
with patch(
"src.agent_work_orders.utils.port_allocation.is_port_available",
side_effect=mock_availability,
):
backend, frontend = find_next_available_ports(work_order_id)
# Should get next available ports
base_backend, _ = get_ports_for_work_order(work_order_id)
assert backend != base_backend # Should be different from base
assert 9100 <= backend <= 9114
assert frontend == backend + 100
@pytest.mark.unit
def test_find_next_available_ports_exhausted():
"""Test that RuntimeError is raised when all ports are unavailable"""
work_order_id = "wo-test123"
# Mock all ports as unavailable
with patch(
"src.agent_work_orders.utils.port_allocation.is_port_available",
return_value=False,
):
with pytest.raises(RuntimeError) as exc_info:
find_next_available_ports(work_order_id)
assert "No available ports" in str(exc_info.value)
@pytest.mark.unit
def test_create_ports_env_file(tmp_path):
"""Test creating .ports.env file"""
worktree_path = str(tmp_path)
backend_port = 9107
frontend_port = 9207
create_ports_env_file(worktree_path, backend_port, frontend_port)
ports_env_path = tmp_path / ".ports.env"
assert ports_env_path.exists()
content = ports_env_path.read_text()
assert "BACKEND_PORT=9107" in content
assert "FRONTEND_PORT=9207" in content
assert "VITE_BACKEND_URL=http://localhost:9107" in content
@pytest.mark.unit
def test_create_ports_env_file_overwrites(tmp_path):
"""Test that creating .ports.env file overwrites existing file"""
worktree_path = str(tmp_path)
ports_env_path = tmp_path / ".ports.env"
# Create existing file with old content
ports_env_path.write_text("OLD_CONTENT=true\n")
# Create new file
create_ports_env_file(worktree_path, 9100, 9200)
content = ports_env_path.read_text()
assert "OLD_CONTENT" not in content
assert "BACKEND_PORT=9100" in content

View File

@@ -7,7 +7,6 @@ from tempfile import TemporaryDirectory
from src.agent_work_orders.models import SandboxSetupError, SandboxType
from src.agent_work_orders.sandbox_manager.git_branch_sandbox import GitBranchSandbox
from src.agent_work_orders.sandbox_manager.git_worktree_sandbox import GitWorktreeSandbox
from src.agent_work_orders.sandbox_manager.sandbox_factory import SandboxFactory
@@ -197,157 +196,3 @@ def test_sandbox_factory_not_implemented():
repository_url="https://github.com/owner/repo",
sandbox_identifier="sandbox-test",
)
# GitWorktreeSandbox Tests
@pytest.mark.asyncio
async def test_git_worktree_sandbox_setup_success():
"""Test successful worktree sandbox setup"""
sandbox = GitWorktreeSandbox(
repository_url="https://github.com/owner/repo",
sandbox_identifier="wo-test123",
)
# Mock port allocation
with patch(
"src.agent_work_orders.sandbox_manager.git_worktree_sandbox.find_next_available_ports",
return_value=(9107, 9207),
), patch(
"src.agent_work_orders.sandbox_manager.git_worktree_sandbox.create_worktree",
return_value=("/tmp/worktree/path", None),
), patch(
"src.agent_work_orders.sandbox_manager.git_worktree_sandbox.setup_worktree_environment",
):
await sandbox.setup()
assert sandbox.backend_port == 9107
assert sandbox.frontend_port == 9207
@pytest.mark.asyncio
async def test_git_worktree_sandbox_setup_failure():
"""Test failed worktree sandbox setup"""
sandbox = GitWorktreeSandbox(
repository_url="https://github.com/owner/repo",
sandbox_identifier="wo-test123",
)
# Mock port allocation success but worktree creation failure
with patch(
"src.agent_work_orders.sandbox_manager.git_worktree_sandbox.find_next_available_ports",
return_value=(9107, 9207),
), patch(
"src.agent_work_orders.sandbox_manager.git_worktree_sandbox.create_worktree",
return_value=(None, "Failed to create worktree"),
):
with pytest.raises(SandboxSetupError) as exc_info:
await sandbox.setup()
assert "Failed to create worktree" in str(exc_info.value)
@pytest.mark.asyncio
async def test_git_worktree_sandbox_execute_command_success():
"""Test successful command execution in worktree sandbox"""
with TemporaryDirectory() as tmpdir:
sandbox = GitWorktreeSandbox(
repository_url="https://github.com/owner/repo",
sandbox_identifier="wo-test123",
)
sandbox.working_dir = tmpdir
# Mock subprocess
mock_process = MagicMock()
mock_process.returncode = 0
mock_process.communicate = AsyncMock(return_value=(b"Command output", b""))
with patch("asyncio.create_subprocess_shell", return_value=mock_process):
result = await sandbox.execute_command("echo 'test'", timeout=10)
assert result.success is True
assert result.exit_code == 0
assert result.stdout == "Command output"
@pytest.mark.asyncio
async def test_git_worktree_sandbox_execute_command_timeout():
"""Test command execution timeout in worktree sandbox"""
import asyncio
with TemporaryDirectory() as tmpdir:
sandbox = GitWorktreeSandbox(
repository_url="https://github.com/owner/repo",
sandbox_identifier="wo-test123",
)
sandbox.working_dir = tmpdir
# Mock subprocess that times out
mock_process = MagicMock()
mock_process.kill = MagicMock()
mock_process.wait = AsyncMock()
async def mock_communicate():
await asyncio.sleep(10)
return (b"", b"")
mock_process.communicate = mock_communicate
with patch("asyncio.create_subprocess_shell", return_value=mock_process):
result = await sandbox.execute_command("sleep 100", timeout=0.1)
assert result.success is False
assert result.exit_code == -1
assert "timed out" in result.error_message.lower()
@pytest.mark.asyncio
async def test_git_worktree_sandbox_get_git_branch_name():
"""Test getting current git branch name in worktree"""
with TemporaryDirectory() as tmpdir:
sandbox = GitWorktreeSandbox(
repository_url="https://github.com/owner/repo",
sandbox_identifier="wo-test123",
)
sandbox.working_dir = tmpdir
with patch(
"src.agent_work_orders.sandbox_manager.git_worktree_sandbox.get_current_branch",
new=AsyncMock(return_value="feat-wo-test123"),
):
branch = await sandbox.get_git_branch_name()
assert branch == "feat-wo-test123"
@pytest.mark.asyncio
async def test_git_worktree_sandbox_cleanup():
"""Test worktree sandbox cleanup"""
sandbox = GitWorktreeSandbox(
repository_url="https://github.com/owner/repo",
sandbox_identifier="wo-test123",
)
with patch(
"src.agent_work_orders.sandbox_manager.git_worktree_sandbox.remove_worktree",
return_value=(True, None),
):
await sandbox.cleanup()
# No exception should be raised
def test_sandbox_factory_git_worktree():
"""Test creating git worktree sandbox via factory"""
factory = SandboxFactory()
sandbox = factory.create_sandbox(
sandbox_type=SandboxType.GIT_WORKTREE,
repository_url="https://github.com/owner/repo",
sandbox_identifier="wo-test123",
)
assert isinstance(sandbox, GitWorktreeSandbox)
assert sandbox.repository_url == "https://github.com/owner/repo"
assert sandbox.sandbox_identifier == "wo-test123"

View File

@@ -1,372 +0,0 @@
"""Tests for Worktree Operations"""
import os
import pytest
from pathlib import Path
from unittest.mock import MagicMock, patch
from tempfile import TemporaryDirectory
from src.agent_work_orders.utils.worktree_operations import (
_get_repo_hash,
get_base_repo_path,
get_worktree_path,
ensure_base_repository,
create_worktree,
validate_worktree,
remove_worktree,
setup_worktree_environment,
)
@pytest.mark.unit
def test_get_repo_hash_consistent():
"""Test that same URL always produces same hash"""
url = "https://github.com/owner/repo"
hash1 = _get_repo_hash(url)
hash2 = _get_repo_hash(url)
assert hash1 == hash2
assert len(hash1) == 8 # 8-character hash
@pytest.mark.unit
def test_get_repo_hash_different_urls():
"""Test that different URLs produce different hashes"""
url1 = "https://github.com/owner/repo1"
url2 = "https://github.com/owner/repo2"
hash1 = _get_repo_hash(url1)
hash2 = _get_repo_hash(url2)
assert hash1 != hash2
@pytest.mark.unit
def test_get_base_repo_path():
"""Test getting base repository path"""
url = "https://github.com/owner/repo"
path = get_base_repo_path(url)
assert "repos" in path
assert "main" in path
assert Path(path).is_absolute()
@pytest.mark.unit
def test_get_worktree_path():
"""Test getting worktree path"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
path = get_worktree_path(url, work_order_id)
assert "repos" in path
assert "trees" in path
assert work_order_id in path
assert Path(path).is_absolute()
@pytest.mark.unit
def test_ensure_base_repository_new_clone():
"""Test ensuring base repository when it doesn't exist"""
url = "https://github.com/owner/repo"
mock_logger = MagicMock()
mock_result = MagicMock()
mock_result.returncode = 0
with patch("subprocess.run", return_value=mock_result), patch(
"os.path.exists", return_value=False
), patch("pathlib.Path.mkdir"):
base_path, error = ensure_base_repository(url, mock_logger)
assert base_path is not None
assert error is None
assert "main" in base_path
@pytest.mark.unit
def test_ensure_base_repository_already_exists():
"""Test ensuring base repository when it already exists"""
url = "https://github.com/owner/repo"
mock_logger = MagicMock()
mock_result = MagicMock()
mock_result.returncode = 0
with patch("subprocess.run", return_value=mock_result), patch(
"os.path.exists", return_value=True
):
base_path, error = ensure_base_repository(url, mock_logger)
assert base_path is not None
assert error is None
@pytest.mark.unit
def test_ensure_base_repository_clone_failure():
"""Test ensuring base repository when clone fails"""
url = "https://github.com/owner/repo"
mock_logger = MagicMock()
mock_result = MagicMock()
mock_result.returncode = 1
mock_result.stderr = "Clone failed"
with patch("subprocess.run", return_value=mock_result), patch(
"os.path.exists", return_value=False
), patch("pathlib.Path.mkdir"):
base_path, error = ensure_base_repository(url, mock_logger)
assert base_path is None
assert error is not None
assert "Clone failed" in error
@pytest.mark.unit
def test_create_worktree_success():
"""Test creating worktree successfully"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
branch_name = "feat-test"
mock_logger = MagicMock()
mock_result = MagicMock()
mock_result.returncode = 0
with patch(
"src.agent_work_orders.utils.worktree_operations.ensure_base_repository",
return_value=("/tmp/base", None),
), patch("subprocess.run", return_value=mock_result), patch(
"os.path.exists", return_value=False
), patch("pathlib.Path.mkdir"):
worktree_path, error = create_worktree(
url, work_order_id, branch_name, mock_logger
)
assert worktree_path is not None
assert error is None
assert work_order_id in worktree_path
@pytest.mark.unit
def test_create_worktree_already_exists():
"""Test creating worktree when it already exists"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
branch_name = "feat-test"
mock_logger = MagicMock()
expected_path = get_worktree_path(url, work_order_id)
with patch(
"src.agent_work_orders.utils.worktree_operations.ensure_base_repository",
return_value=("/tmp/base", None),
), patch("os.path.exists", return_value=True):
worktree_path, error = create_worktree(
url, work_order_id, branch_name, mock_logger
)
assert worktree_path == expected_path
assert error is None
@pytest.mark.unit
def test_create_worktree_branch_exists():
"""Test creating worktree when branch already exists"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
branch_name = "feat-test"
mock_logger = MagicMock()
# First call fails with "already exists", second succeeds
mock_result_fail = MagicMock()
mock_result_fail.returncode = 1
mock_result_fail.stderr = "already exists"
mock_result_success = MagicMock()
mock_result_success.returncode = 0
with patch(
"src.agent_work_orders.utils.worktree_operations.ensure_base_repository",
return_value=("/tmp/base", None),
), patch(
"subprocess.run", side_effect=[mock_result_success, mock_result_fail, mock_result_success]
), patch("os.path.exists", return_value=False), patch("pathlib.Path.mkdir"):
worktree_path, error = create_worktree(
url, work_order_id, branch_name, mock_logger
)
assert worktree_path is not None
assert error is None
@pytest.mark.unit
def test_create_worktree_base_repo_failure():
"""Test creating worktree when base repo setup fails"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
branch_name = "feat-test"
mock_logger = MagicMock()
with patch(
"src.agent_work_orders.utils.worktree_operations.ensure_base_repository",
return_value=(None, "Base repo error"),
):
worktree_path, error = create_worktree(
url, work_order_id, branch_name, mock_logger
)
assert worktree_path is None
assert error == "Base repo error"
@pytest.mark.unit
def test_validate_worktree_success():
"""Test validating worktree when everything is correct"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
worktree_path = get_worktree_path(url, work_order_id)
state = {"worktree_path": worktree_path}
mock_result = MagicMock()
mock_result.returncode = 0
mock_result.stdout = worktree_path # Git knows about it
with patch("os.path.exists", return_value=True), patch(
"subprocess.run", return_value=mock_result
):
is_valid, error = validate_worktree(url, work_order_id, state)
assert is_valid is True
assert error is None
@pytest.mark.unit
def test_validate_worktree_no_path_in_state():
"""Test validating worktree when state has no path"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
state = {} # No worktree_path
is_valid, error = validate_worktree(url, work_order_id, state)
assert is_valid is False
assert "No worktree_path" in error
@pytest.mark.unit
def test_validate_worktree_directory_not_found():
"""Test validating worktree when directory doesn't exist"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
worktree_path = get_worktree_path(url, work_order_id)
state = {"worktree_path": worktree_path}
with patch("os.path.exists", return_value=False):
is_valid, error = validate_worktree(url, work_order_id, state)
assert is_valid is False
assert "not found" in error
@pytest.mark.unit
def test_validate_worktree_not_registered_with_git():
"""Test validating worktree when git doesn't know about it"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
worktree_path = get_worktree_path(url, work_order_id)
state = {"worktree_path": worktree_path}
mock_result = MagicMock()
mock_result.returncode = 0
mock_result.stdout = "/some/other/path" # Doesn't contain our path
with patch("os.path.exists", return_value=True), patch(
"subprocess.run", return_value=mock_result
):
is_valid, error = validate_worktree(url, work_order_id, state)
assert is_valid is False
assert "not registered" in error
@pytest.mark.unit
def test_remove_worktree_success():
"""Test removing worktree successfully"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
mock_logger = MagicMock()
mock_result = MagicMock()
mock_result.returncode = 0
with patch("os.path.exists", return_value=True), patch(
"subprocess.run", return_value=mock_result
):
success, error = remove_worktree(url, work_order_id, mock_logger)
assert success is True
assert error is None
@pytest.mark.unit
def test_remove_worktree_fallback_to_manual():
"""Test removing worktree with fallback to manual removal"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
mock_logger = MagicMock()
mock_result = MagicMock()
mock_result.returncode = 1
mock_result.stderr = "Git remove failed"
with patch("os.path.exists", return_value=True), patch(
"subprocess.run", return_value=mock_result
), patch("shutil.rmtree"):
success, error = remove_worktree(url, work_order_id, mock_logger)
# Should succeed via manual cleanup
assert success is True
assert error is None
@pytest.mark.unit
def test_remove_worktree_no_base_repo():
"""Test removing worktree when base repo doesn't exist"""
url = "https://github.com/owner/repo"
work_order_id = "wo-test123"
mock_logger = MagicMock()
def mock_exists(path):
# Base repo doesn't exist, but worktree directory does
return "main" not in path
with patch("os.path.exists", side_effect=mock_exists), patch("shutil.rmtree"):
success, error = remove_worktree(url, work_order_id, mock_logger)
assert success is True
assert error is None
@pytest.mark.unit
def test_setup_worktree_environment(tmp_path):
"""Test setting up worktree environment"""
worktree_path = str(tmp_path)
backend_port = 9107
frontend_port = 9207
mock_logger = MagicMock()
setup_worktree_environment(worktree_path, backend_port, frontend_port, mock_logger)
ports_env_path = tmp_path / ".ports.env"
assert ports_env_path.exists()
content = ports_env_path.read_text()
assert "BACKEND_PORT=9107" in content
assert "FRONTEND_PORT=9207" in content