mirror of
https://github.com/coleam00/Archon.git
synced 2025-12-24 18:59:24 -05:00
- Convert generate_code_example_summary() to async and use the LLM provider service
- Convert extract_source_summary() and generate_source_title_and_metadata() to async
- Replace direct OpenAI client instantiation with the get_llm_client() context manager
- Update all call sites to await the now-async functions

This enables Ollama support for code extraction and source summary generation.

Fixes: #248 (Ollama model code extraction now works properly)
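The conversion described above can be sketched roughly as follows. This is an illustrative stand-in, not the project's actual code: `get_llm_client`, `FakeLLMClient`, and its `complete` method are hypothetical names chosen for the example; the real provider service would return an OpenAI- or Ollama-backed client depending on configuration.

```python
import asyncio
from contextlib import asynccontextmanager


class FakeLLMClient:
    """Hypothetical provider-agnostic client (stands in for OpenAI/Ollama)."""

    async def complete(self, prompt: str) -> str:
        # Stub response; a real client would call the provider's API here.
        return f"summary of: {prompt[:30]}"


@asynccontextmanager
async def get_llm_client():
    # Illustrative version of the provider service's context manager:
    # acquire a provider-specific client, yield it, then clean up.
    client = FakeLLMClient()
    try:
        yield client
    finally:
        pass  # a real implementation would close HTTP sessions here


async def generate_code_example_summary(code: str) -> str:
    # After the change: an async function that borrows the client from the
    # context manager instead of instantiating OpenAI directly.
    async with get_llm_client() as client:
        return await client.complete(code)


if __name__ == "__main__":
    # Callers must now await the function (e.g. via asyncio.run at the edge).
    print(asyncio.run(generate_code_example_summary("def add(a, b): return a + b")))
```

Routing every completion through one context manager is what makes the Ollama fix possible: the summary functions no longer hard-code a provider, so swapping backends is a configuration change rather than a code change.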