John Fitzpatrick f0dc898f7b Fix Issue #248: Replace hardcoded OpenAI usage with unified LLM provider service
- Convert generate_code_example_summary() to async and use LLM provider service
- Convert extract_source_summary() and generate_source_title_and_metadata() to async
- Replace direct OpenAI client instantiation with get_llm_client() context manager
- Update all call sites to await the newly async functions
- This enables Ollama support for code extraction and source summary generation

Fixes: #248 - Ollama model code extraction now works properly
2025-09-04 13:27:26 -07:00
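
For context, here is a minimal sketch of the pattern this commit describes: routing completions through a provider-agnostic async context manager instead of instantiating an OpenAI client directly. The `get_llm_client()` name comes from the commit message itself, but its implementation, the environment-variable names, and the model default below are assumptions for illustration, not the actual Archon code.

```python
import os
from contextlib import asynccontextmanager

import openai


@asynccontextmanager
async def get_llm_client():
    # Ollama exposes an OpenAI-compatible API, so a single AsyncOpenAI
    # client can talk to either provider depending on configuration
    # (env-var names here are hypothetical).
    client = openai.AsyncOpenAI(
        base_url=os.getenv("LLM_BASE_URL", "https://api.openai.com/v1"),
        api_key=os.getenv("LLM_API_KEY", "ollama"),  # Ollama ignores the key
    )
    try:
        yield client
    finally:
        await client.close()


async def generate_code_example_summary(code: str) -> str:
    # Previously a sync function holding its own hardcoded OpenAI client;
    # now async, which is why every call site had to be updated to await it.
    async with get_llm_client() as client:
        response = await client.chat.completions.create(
            model=os.getenv("MODEL_CHOICE", "gpt-4o-mini"),
            messages=[
                {"role": "system", "content": "Summarize this code example."},
                {"role": "user", "content": code},
            ],
        )
        return response.choices[0].message.content or ""
```

Because the provider is resolved inside `get_llm_client()` at call time, pointing `LLM_BASE_URL` at a local Ollama endpoint is enough to make code extraction and source summaries run against an Ollama model with no further code changes.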