archon/python
John Fitzpatrick 7e51b0b3a2 feat: Enhance Ollama UX with single-host convenience features and fix code summarization
- Add single-host Ollama convenience features for improved UX (see the settings sketch after this list)
  - Auto-populate embedding instance when LLM instance is configured
  - Add "Use same host for embedding instance" checkbox
  - Quick setup button for single-host users
  - Visual indicator when both instances use same host
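
A minimal sketch of the single-host convenience logic, in Python for illustration; the `OllamaSettings` class, its field names, and the quick-setup call are assumptions, not Archon's actual settings model:

```python
from dataclasses import dataclass


@dataclass
class OllamaSettings:
    """Hypothetical settings model; field names are illustrative only."""
    llm_host: str = ""
    embedding_host: str = ""
    use_same_host: bool = True  # mirrors the "Use same host for embedding instance" checkbox

    def set_llm_host(self, host: str) -> None:
        # Auto-populate the embedding instance when the LLM instance is configured.
        self.llm_host = host
        if self.use_same_host or not self.embedding_host:
            self.embedding_host = host

    @property
    def single_host(self) -> bool:
        # Drives the visual indicator shown when both instances use the same host.
        return bool(self.llm_host) and self.llm_host == self.embedding_host


# Quick-setup path for single-host users: one host configures both instances.
settings = OllamaSettings()
settings.set_llm_host("http://localhost:11434")
assert settings.single_host and settings.embedding_host == "http://localhost:11434"
```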

- Fix model counts to be host-specific on instance cards (see the count sketch below)
  - LLM instance now shows only its host's model count
  - Embedding instance shows only its host's model count
  - Previously, both cards showed the total count across all hosts
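
One way the per-host counts could be derived, as a sketch; the `models` data shape and the `model_count_for_host` helper are illustrative assumptions, not Archon's actual data model:

```python
def model_count_for_host(models: list[dict], host: str) -> int:
    """Count only the models reported by the given Ollama host."""
    return sum(1 for m in models if m.get("host") == host)


models = [
    {"name": "llama3.1:8b", "host": "http://host-a:11434"},
    {"name": "qwen2.5:7b", "host": "http://host-a:11434"},
    {"name": "nomic-embed-text", "host": "http://host-b:11434"},
]

# Each instance card now shows its own host's count rather than the total across hosts (3).
llm_card_count = model_count_for_host(models, "http://host-a:11434")        # 2
embedding_card_count = model_count_for_host(models, "http://host-b:11434")  # 1
```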

- Fix code summarization to use the unified LLM provider service (see the sketch after this list)
  - Replace hardcoded OpenAI calls with get_llm_client()
  - Support all configured LLM providers (Ollama, OpenAI, Google)
  - Add proper async wrapper for backward compatibility
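
A sketch of the unified summarization path, assuming `get_llm_client()` is an async context manager that yields an OpenAI-compatible client; the import path, prompt, and the direction of the backward-compatibility wrapper are assumptions rather than Archon's exact code:

```python
import asyncio

# Assumed import path; the real provider-service module may live elsewhere.
from llm_provider_service import get_llm_client


async def summarize_code_async(code: str, model: str) -> str:
    """Summarize a code example via whichever LLM provider is configured."""
    # Works for Ollama, OpenAI, or Google, since the client is resolved from settings.
    async with get_llm_client() as client:
        response = await client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "Summarize the following code example."},
                {"role": "user", "content": code},
            ],
        )
        return response.choices[0].message.content


def summarize_code(code: str, model: str) -> str:
    """Synchronous wrapper so existing callers keep working unchanged."""
    return asyncio.run(summarize_code_async(code, model))
```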

- Add DeepSeek models to the full-support patterns for better compatibility (see the pattern sketch after this list)
- Add missing code_storage status to crawl progress UI
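
For the DeepSeek bullet above, a sketch of how a full-support pattern list might gain a DeepSeek entry; `FULL_SUPPORT_MODEL_PATTERNS` and `has_full_support` are illustrative names, not Archon's actual ones:

```python
import re

FULL_SUPPORT_MODEL_PATTERNS = [
    r"^llama3",
    r"^qwen",
    r"^mistral",
    r"^deepseek",  # newly added: matches deepseek-coder, deepseek-r1, deepseek-v3, ...
]


def has_full_support(model_name: str) -> bool:
    """Return True when the model name matches a full-support pattern."""
    name = model_name.lower()
    return any(re.match(pattern, name) for pattern in FULL_SUPPORT_MODEL_PATTERNS)


assert has_full_support("deepseek-r1:14b")
assert not has_full_support("gemma2:9b")
```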

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-07 19:19:29 -07:00