mirror of
https://github.com/coleam00/Archon.git
synced 2025-12-24 02:39:17 -05:00
- Add single-host Ollama convenience features for improved UX
  - Auto-populate embedding instance when LLM instance is configured
  - Add "Use same host for embedding instance" checkbox
  - Quick setup button for single-host users
  - Visual indicator when both instances use the same host
- Fix model counts to be host-specific on instance cards
  - LLM instance now shows only its host's model count
  - Embedding instance shows only its host's model count
  - Previously both showed the total across all hosts
- Fix code summarization to use the unified LLM provider service
  - Replace hardcoded OpenAI calls with get_llm_client()
  - Support all configured LLM providers (Ollama, OpenAI, Google)
  - Add proper async wrapper for backward compatibility
- Add DeepSeek models to full-support patterns for better compatibility
- Add missing code_storage status to crawl progress UI

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
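The unified-provider change can be sketched as follows. `get_llm_client()` is the function named in the commit, but its signature, the provider registry, and the `summarize_code` helper below are illustrative assumptions, not Archon's actual API:

```python
# Sketch of the "unified LLM provider" pattern: callers ask for a client
# instead of hardcoding OpenAI. All names below except get_llm_client()
# are hypothetical.
from dataclasses import dataclass


@dataclass
class LLMClient:
    provider: str
    base_url: str


# Hypothetical registry of the providers the commit mentions.
PROVIDER_DEFAULTS = {
    "openai": "https://api.openai.com/v1",
    "ollama": "http://localhost:11434/v1",
    "google": "https://generativelanguage.googleapis.com/v1beta",
}


def get_llm_client(provider: str = "openai") -> LLMClient:
    """Return a client for whichever provider is configured,
    so no caller depends on a specific vendor."""
    if provider not in PROVIDER_DEFAULTS:
        raise ValueError(f"unknown provider: {provider}")
    return LLMClient(provider=provider, base_url=PROVIDER_DEFAULTS[provider])


def summarize_code(source: str, provider: str = "ollama") -> str:
    """Code summarization routed through the shared client factory.
    A real implementation would call the provider's chat endpoint;
    this sketch only shows the dispatch point."""
    client = get_llm_client(provider)
    return f"[{client.provider}] summary of {len(source)} chars"
```

Swapping providers then becomes a configuration change rather than a code change, which is what makes Ollama and Google usable alongside OpenAI for summarization.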