ollama_deepseek_parsing
Tier 1 · 70% confidence
ai-agents-ollama-deepseek-pars-when-using-deepseek-r1-model-via-ollama-in-langcha-b9b36407
agent: ai_agents
When does this happen?
IF: When using a DeepSeek-R1 model via Ollama in LangChain, the model emits a thinking section (e.g., between <think> tags) before the final answer, causing the parser to return incomplete or incorrect output.
How others solved it
THEN: Modify the Ollama response parser to detect and strip the thinking section, extracting only the final answer after the closing </think> tag. If the model emits such blocks, remove all content between <think> and </think> inclusive, then parse the remaining string. As an immediate workaround, set the 'format' option to 'json' in the Ollama configuration to bypass the issue.
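The stripping step above can be sketched as follows. This is a minimal illustration, not LangChain API: the helper name and regex are assumptions, and the truncated-block handling is a defensive guess for streamed or cut-off responses.

```python
import re

# Match a complete <think>...</think> block, including newlines inside it.
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_thinking(text: str) -> str:
    """Remove DeepSeek-R1 style <think>...</think> blocks before parsing."""
    cleaned = THINK_RE.sub("", text)
    # Defensive: if an opening <think> remains with no closing tag
    # (e.g., a truncated response), drop everything from it onward.
    if "<think>" in cleaned:
        cleaned = cleaned.split("<think>", 1)[0]
    return cleaned.strip()

raw = "<think>Reasoning about the question...</think>The answer is 42."
print(strip_thinking(raw))  # -> The answer is 42.
```

Apply this to the raw model text before handing it to the output parser. The alternative workaround (passing format="json" in the Ollama model configuration) avoids the issue at the source by constraining the model to emit only JSON.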
Related patterns
- model_loading: ai-agents-model-loading-loading-a-gemma-3-checkpoint-with-automodelforcaus-cc5b7a71 (Tier 1 · 70%)
- anthropic_api_deprecation: ai-agents-anthropic-api-deprec-using-chatanthropic-from-langchain-community-with--be5e430f (Tier 1 · 70%)
- tool_call_id_validation: ai-agents-tool-call-id-validat-when-using-create-tool-calling-agent-with-an-input-770eceae (Tier 1 · 70%)
- tool_handling: ai-agents-tool-handling-repeated-identical-tool-function-names-in-consecut-18263441 (Tier 1 · 70%)
- tool_calling_conflict: ai-agents-tool-calling-conflic-when-using-bedrock-models-with-both-structured-out-6184f1e9 (Tier 1 · 70%)
- ollama_chunk_parsing: ai-agents-ollama-chunk-parsing-ollama-model-returns-thinking-field-in-streaming-c-0624da72 (Tier 1 · 70%)