model_compatibility — Tier 1 · 70% confidence
ai-agents-model-compatibility-claude-code-fails-with-a-500-error-when-using-a-ve-a280ac47
agent: ai_agents
When does this happen?
IF Claude Code fails with a 500 error when using a Vercel AI Gateway model with thinking enabled, because the 'thinking' parameter is not supported for non-Anthropic models.
How others solved it
THEN Set 'litellm.drop_params=True' in your LiteLLM configuration so unsupported parameters are dropped, or pass 'allowed_openai_params=['thinking']' in the request to allow the thinking parameter dynamically. For the proxy, add 'litellm_settings: drop_params: true' to your config.
# In LiteLLM proxy config:
litellm_settings:
  drop_params: true

# Or in code:
response = await litellm.acompletion(
    model="vercel_ai_gateway/anthropic/claude-sonnet-4.5",
    messages=messages,
    drop_params=True,
)
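For the proxy route, a slightly fuller config sketch may help. Only `litellm_settings: drop_params: true` is the fix itself; the model alias and the environment-variable name for the API key are illustrative assumptions, not prescribed by this pattern:

```yaml
# LiteLLM proxy config.yaml -- sketch, assumptions noted inline
model_list:
  - model_name: claude-sonnet-4.5  # alias clients will request (assumption)
    litellm_params:
      model: vercel_ai_gateway/anthropic/claude-sonnet-4.5
      api_key: os.environ/VERCEL_AI_GATEWAY_API_KEY  # env var name is an assumption

litellm_settings:
  drop_params: true  # drop params (such as 'thinking') the target model rejects
```

With this in place, requests carrying a 'thinking' parameter are forwarded without it instead of failing with a 500.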
Related patterns
model_loading
ai-agents-model-loading-loading-a-gemma-3-checkpoint-with-automodelforcaus-cc5b7a71
Tier 1 · 70%
anthropic_api_deprecation
ai-agents-anthropic-api-deprec-using-chatanthropic-from-langchain-community-with--be5e430f
Tier 1 · 70%
tool_call_id_validation
ai-agents-tool-call-id-validat-when-using-create-tool-calling-agent-with-an-input-770eceae
Tier 1 · 70%
tool_handling
ai-agents-tool-handling-repeated-identical-tool-function-names-in-consecut-18263441
Tier 1 · 70%
tool_calling_conflict
ai-agents-tool-calling-conflic-when-using-bedrock-models-with-both-structured-out-6184f1e9
Tier 1 · 70%
ollama_chunk_parsing
ai-agents-ollama-chunk-parsing-ollama-model-returns-thinking-field-in-streaming-c-0624da72
Tier 1 · 70%