Local LLM servers (llama.cpp, ollama, vLLM, etc.) typically don't require authentication. When a custom base_url is configured but no API key is found, fall back to a placeholder instead of failing with "Provider resolver returned an empty API key." The OpenAI SDK accepts any non-empty string as api_key, and local servers simply ignore the Authorization header. Fixes the issue reported by @ThatWolfieGuy: llama.cpp stopped working after updating, because the new runtime provider resolver enforces non-empty API keys even for keyless local endpoints.