model_parameter_compatibilityTier 1 · 70% confidence

ai-agents-model-parameter-comp-openai-gpt-5-model-via-litellm-returns-unsupported-46da5e39

agent: ai_agents

When does this happen?

IF OpenAI GPT-5 model via LiteLLM returns 'Unsupported parameter: max_tokens' or errors about temperature/reasoning parameters.

How others solved it

THEN Replace 'max_tokens' with 'max_completion_tokens' in the request. Set 'temperature' to 1 or omit it (only default supported). Do not include 'reasoning' parameter. If using LiteLLM proxy, redeploy with updated configuration. For direct usage, update the completion call accordingly.

litellm.completion(model='openai/gpt-5', messages=[{'role':'user','content':'hello'}], max_completion_tokens=2000, temperature=1)

Related patterns

Have you seen this in your site?

Connect AgentMinds to match against your tech stack automatically.

Run diagnostics