Files
archon/python/src/server/api_routes/providers_api.py
Josh 394ac1befa Feat: OpenRouter/Anthropic/Grok support (#231)
* Add Anthropic and Grok provider support

* feat: Add crucial GPT-5 and reasoning model support for OpenRouter

- Add requires_max_completion_tokens() function for GPT-5, o1, o3, Grok-3 series
- Add prepare_chat_completion_params() for reasoning model compatibility
- Implement max_tokens → max_completion_tokens conversion for reasoning models (see the sketch after this list)
- Add temperature handling for reasoning models (must default to 1.0)
- Enhanced provider validation and API key security in provider endpoints
- Streamlined retry logic (3→2 attempts) for faster issue detection
- Add failure tracking and circuit breaker analysis for debugging
- Support OpenRouter format detection (openai/gpt-5-nano, openai/o1-mini)
- Improved Grok provider empty response handling with structured fallbacks
- Enhanced contextual embedding with provider-aware model selection
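
A rough sketch of how those two helpers could fit together (the real implementations live in python/src/server/services/llm_provider_service.py; the prefix list and exact behavior below are assumptions based on the bullets above):

    REASONING_MODEL_PREFIXES = ("gpt-5", "o1", "o3", "grok-3")  # assumed list, per the bullets above

    def requires_max_completion_tokens(model: str) -> bool:
        # Strip an OpenRouter-style prefix ("openai/gpt-5-nano" -> "gpt-5-nano") before matching.
        bare = model.split("/", 1)[-1].lower()
        return bare.startswith(REASONING_MODEL_PREFIXES)

    def prepare_chat_completion_params(model: str, params: dict) -> dict:
        params = dict(params)
        if requires_max_completion_tokens(model):
            # Reasoning models reject max_tokens; move the value to max_completion_tokens.
            if "max_tokens" in params:
                params["max_completion_tokens"] = params.pop("max_tokens")
            # Reasoning models expect the default temperature of 1.0.
            params["temperature"] = 1.0
        return params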

Core provider functionality:
- OpenRouter, Grok, Anthropic provider support with full embedding integration
- Provider-specific model defaults and validation
- Secure API connectivity testing endpoints
- Provider context passing for code generation workflows

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fully working model providers, addressing security and code-related concerns, thoroughly hardening our code

* added multi-provider support and embedding model support, cleaned up the PR; still need to fix the health check, asyncio task errors, and the contextual embeddings error

* fixed contextual embeddings issue

* - Added inspect-aware shutdown handling so get_llm_client always closes the underlying AsyncOpenAI / httpx.AsyncClient while the loop is still alive, with defensive logging if shutdown happens late (python/src/server/services/llm_provider_service.py:14, python/src/server/services/llm_provider_service.py:520).
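
A minimal sketch of that inspect-aware close, assuming a helper roughly like the one below (names and structure are illustrative, not the actual service code):

    import inspect
    import logging

    logger = logging.getLogger(__name__)

    async def _close_llm_client(client) -> None:
        # Use whichever shutdown hook the SDK exposes (httpx.AsyncClient.aclose, AsyncOpenAI.close, ...).
        closer = getattr(client, "aclose", None) or getattr(client, "close", None)
        if closer is None:
            return
        try:
            result = closer()
            if inspect.isawaitable(result):
                await result
        except RuntimeError as exc:
            # Defensive logging for the case where shutdown happens after the loop is already gone.
            logger.warning("LLM client close ran after loop shutdown: %s", exc)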

* - Restructured get_llm_client so client creation and usage live in separate try/finally blocks; fallback clients now close without logging a spurious "Error creating LLM client" when downstream code raises (python/src/server/services/llm_provider_service.py:335-556).
  - Close logic now sanitizes provider names consistently and awaits whichever aclose/close coroutine the SDK exposes, keeping the loop shut down cleanly (python/src/server/services/llm_provider_service.py:530-559).

  Robust JSON parsing:
  - Added _extract_json_payload to strip code fences / extra text returned by Ollama before json.loads runs, averting the markdown-induced decode errors seen in the logs (illustrated below) (python/src/server/services/storage/code_storage_service.py:40-63).
  - Swapped the direct parse call for the sanitized payload and emit a debug preview when cleanup alters the content (python/src/server/services/storage/code_storage_service.py:858-864).
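
The fence-stripping idea as a standalone illustration (the regex-based approach is an assumption; the real _extract_json_payload in code_storage_service.py may differ):

    import json
    import re

    def _extract_json_payload(raw: str) -> str:
        # Pull the body out of a ```json ... ``` fence if the model wrapped its answer in markdown.
        fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
        if fenced:
            return fenced.group(1).strip()
        # Otherwise fall back to the first {...} span in the text, or the raw string.
        braced = re.search(r"\{.*\}", raw, re.DOTALL)
        return braced.group(0) if braced else raw.strip()

    payload = _extract_json_payload('Sure!\n```json\n{"examples": []}\n```')
    data = json.loads(payload)  # -> {"examples": []}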

* added provider connection support

* added a warning when a provider API key is not configured

* Updated get_llm_client so missing OpenAI keys automatically fall back to Ollama (matching existing tests) and so unsupported providers still raise the legacy ValueError the suite expects. The fallback now reuses _get_optimal_ollama_instance and rethrows ValueError("OpenAI API key not found and Ollama fallback failed") when it can't connect (see the sketch below). Adjusted test_code_extraction_source_id.py to accept the new optional argument on the mocked extractor (and confirm it's None when present).
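
A sketch of that fallback flow, with the Ollama helper stubbed out since its real signature lives in llm_provider_service.py (everything below is an assumption apart from the error message quoted above):

    async def _get_optimal_ollama_instance():
        # Placeholder for the real helper referenced in the commit.
        raise NotImplementedError

    async def _resolve_openai_client(api_key: str | None):
        if api_key:
            return object()  # in the real code: an AsyncOpenAI client built from the key
        try:
            return await _get_optimal_ollama_instance()
        except Exception as exc:
            raise ValueError("OpenAI API key not found and Ollama fallback failed") from exc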

* Resolved a few needed CodeRabbit suggestions:
  - Updated the knowledge API key validation to call create_embedding with the provider argument and removed the hard-coded OpenAI fallback (python/src/server/api_routes/knowledge_api.py).
  - Broadened embedding provider detection so prefixed OpenRouter/OpenAI model names route through the correct client (sketched below) (python/src/server/services/embeddings/embedding_service.py, python/src/server/services/llm_provider_service.py).
  - Removed the duplicate helper definitions from llm_provider_service.py, eliminating the stray docstring that was causing the import-time syntax error.
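
A hypothetical version of the broadened check (the actual is_openai_embedding_model and its model list may differ):

    OPENAI_EMBEDDING_MODELS = {
        "text-embedding-3-small",
        "text-embedding-3-large",
        "text-embedding-ada-002",
    }

    def is_openai_embedding_model(model: str) -> bool:
        # Accept bare names as well as prefixed ones like "openai/text-embedding-3-small".
        return model.split("/", 1)[-1] in OPENAI_EMBEDDING_MODELS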

* Updated via CodeRabbit PR review; CodeRabbit in my IDE found no issues and no nitpicks with the updates. What was done:
  - Credential service now persists the provider under the uppercase key LLM_PROVIDER, matching the read path (no new EMBEDDING_PROVIDER usage introduced).
  - Embedding batch creation stops inserting blank strings, logging failures and skipping invalid items before they ever hit the provider (python/src/server/services/embeddings/embedding_service.py).
  - Contextual embedding prompts use real newline characters everywhere, both when constructing the batch prompt and when parsing the model's response (python/src/server/services/embeddings/contextual_embedding_service.py).
  - Embedding provider routing already recognizes OpenRouter-prefixed OpenAI models via is_openai_embedding_model; no further change needed there.
  - Embedding insertion now skips unsupported vector dimensions instead of forcing them into the 1536-dimension column, and the backoff loop uses await asyncio.sleep so we no longer block the event loop (see the sketch after this list) (python/src/server/services/storage/code_storage_service.py).
  - RAG settings props were extended to include LLM_INSTANCE_NAME and OLLAMA_EMBEDDING_INSTANCE_NAME, and the debug log no longer prints API-key prefixes (the rest of the TanStack refactor / EMBEDDING_PROVIDER support remains deferred).
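
Two of those fixes sketched in isolation (function names and logging are illustrative only):

    import asyncio
    import logging

    logger = logging.getLogger(__name__)

    def drop_blank_batch_items(texts: list[str]) -> list[str]:
        # Skip blank strings before they ever reach the embedding provider.
        valid = [t for t in texts if t and t.strip()]
        if len(valid) < len(texts):
            logger.warning("Skipping %d blank embedding item(s)", len(texts) - len(valid))
        return valid

    async def backoff(attempt: int, base_delay: float = 1.0) -> None:
        # await asyncio.sleep keeps the event loop free; time.sleep here would block it.
        await asyncio.sleep(base_delay * (2 ** attempt))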

* test fix

* enhanced OpenRouter's parsing logic to automatically detect reasoning models and parse their output whether or not it is JSON (see the sketch below). This commit creates a robust way for Archon's parsing to work thoroughly with OpenRouter automatically, regardless of the model you're using, ensuring proper functionality without breaking any generation capabilities!
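
As a very rough illustration of "parse regardless of JSON output or not" (the real OpenRouter handling is more involved; this only shows the shape of the fallback):

    import json

    def parse_model_output(content: str):
        try:
            return json.loads(content)
        except json.JSONDecodeError:
            # Non-JSON output from a reasoning model still flows through as structured text.
            return {"text": content.strip()}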

---------

Co-authored-by: Chillbruhhh <joshchesser97@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-22 10:36:30 +03:00

"""
Provider status API endpoints for testing connectivity
Handles server-side provider connectivity testing without exposing API keys to frontend.
"""
import httpx
from fastapi import APIRouter, HTTPException, Path
from ..config.logfire_config import logfire
from ..services.credential_service import credential_service
# Provider validation - simplified inline version
router = APIRouter(prefix="/api/providers", tags=["providers"])
async def test_openai_connection(api_key: str) -> bool:
"""Test OpenAI API connectivity"""
try:
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.get(
"https://api.openai.com/v1/models",
headers={"Authorization": f"Bearer {api_key}"}
)
return response.status_code == 200
except Exception as e:
logfire.warning(f"OpenAI connectivity test failed: {e}")
return False
async def test_google_connection(api_key: str) -> bool:
"""Test Google AI API connectivity"""
try:
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.get(
"https://generativelanguage.googleapis.com/v1/models",
headers={"x-goog-api-key": api_key}
)
return response.status_code == 200
except Exception:
logfire.warning("Google AI connectivity test failed")
return False
async def test_anthropic_connection(api_key: str) -> bool:
"""Test Anthropic API connectivity"""
try:
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.get(
"https://api.anthropic.com/v1/models",
headers={
"x-api-key": api_key,
"anthropic-version": "2023-06-01"
}
)
return response.status_code == 200
except Exception as e:
logfire.warning(f"Anthropic connectivity test failed: {e}")
return False
async def test_openrouter_connection(api_key: str) -> bool:
"""Test OpenRouter API connectivity"""
try:
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.get(
"https://openrouter.ai/api/v1/models",
headers={"Authorization": f"Bearer {api_key}"}
)
return response.status_code == 200
except Exception as e:
logfire.warning(f"OpenRouter connectivity test failed: {e}")
return False
async def test_grok_connection(api_key: str) -> bool:
"""Test Grok API connectivity"""
try:
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.get(
"https://api.x.ai/v1/models",
headers={"Authorization": f"Bearer {api_key}"}
)
return response.status_code == 200
except Exception as e:
logfire.warning(f"Grok connectivity test failed: {e}")
return False
PROVIDER_TESTERS = {
"openai": test_openai_connection,
"google": test_google_connection,
"anthropic": test_anthropic_connection,
"openrouter": test_openrouter_connection,
"grok": test_grok_connection,
}
@router.get("/{provider}/status")
async def get_provider_status(
provider: str = Path(
...,
description="Provider name to test connectivity for",
regex="^[a-z0-9_]+$",
max_length=20
)
):
"""Test provider connectivity using server-side API key (secure)"""
try:
# Basic provider validation
allowed_providers = {"openai", "ollama", "google", "openrouter", "anthropic", "grok"}
if provider not in allowed_providers:
raise HTTPException(
status_code=400,
detail=f"Invalid provider '{provider}'. Allowed providers: {sorted(allowed_providers)}"
)
# Basic sanitization for logging
safe_provider = provider[:20] # Limit length
logfire.info(f"Testing {safe_provider} connectivity server-side")
if provider not in PROVIDER_TESTERS:
raise HTTPException(
status_code=400,
detail=f"Provider '{provider}' not supported for connectivity testing"
)
# Get API key server-side (never expose to client)
key_name = f"{provider.upper()}_API_KEY"
api_key = await credential_service.get_credential(key_name, decrypt=True)
if not api_key or not isinstance(api_key, str) or not api_key.strip():
logfire.info(f"No API key configured for {safe_provider}")
return {"ok": False, "reason": "no_key"}
# Test connectivity using server-side key
tester = PROVIDER_TESTERS[provider]
is_connected = await tester(api_key)
logfire.info(f"{safe_provider} connectivity test result: {is_connected}")
return {
"ok": is_connected,
"reason": "connected" if is_connected else "connection_failed",
"provider": provider # Echo back validated provider name
}
except HTTPException:
# Re-raise HTTP exceptions (they're already properly formatted)
raise
except Exception as e:
# Basic error sanitization for logging
safe_error = str(e)[:100] # Limit length
logfire.error(f"Error testing {provider[:20]} connectivity: {safe_error}")
raise HTTPException(status_code=500, detail={"error": "Internal server error during connectivity test"})