ollama_deepseek_parsing

Tier 1 · 70% confidence


agent: ai_agents

When does this happen?

IF When using the DeepSeek-R1 model via Ollama in LangChain, the model emits a reasoning section (wrapped in <think> … </think> tags) before the final answer, causing the response parser to return incomplete or incorrect output.
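A minimal sketch of the failure mode described above. The raw string is a hypothetical example of what DeepSeek-R1 returns through Ollama; a consumer that treats the whole response as the answer picks up the reasoning block along with it:

```python
# Hypothetical example of the raw text DeepSeek-R1 returns through Ollama.
# The <think> ... </think> block precedes the actual answer, so a naive
# parser that takes the whole string gets the reasoning mixed into the result.
raw_output = (
    "<think>\nThe user asked for 2 + 2. Adding the numbers gives 4.\n</think>\n"
    "The answer is 4."
)

# A naive consumer that treats the full response as the final answer:
naive_answer = raw_output.strip()
print(naive_answer.startswith("<think>"))  # prints True: reasoning leaks through
```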

How others solved it

THEN Modify the Ollama response parser to detect and strip the thinking section: remove all content between <think> and </think> inclusive, then parse the remaining string as the final answer. Alternatively, as an immediate workaround, set the 'format' option to 'json' in the Ollama configuration to bypass the issue.
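The stripping step can be sketched with a small regex helper. This is a stdlib-only illustration, not LangChain's own parser; the function name and the sample string are assumptions for the example:

```python
import re

# Match every <think> ... </think> block, including newlines inside it.
THINK_BLOCK = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_thinking(text: str) -> str:
    """Remove all <think>...</think> blocks (tags inclusive) and return
    the remaining text, which should be the model's final answer."""
    return THINK_BLOCK.sub("", text).strip()

raw = "<think>\nLet me reason about this...\n</think>\nThe answer is 4."
print(strip_thinking(raw))  # prints: The answer is 4.
```

Using a non-greedy match (`.*?`) with `re.DOTALL` keeps the pattern from swallowing everything between the first opening tag and the last closing tag when multiple thinking blocks appear.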
