ollama_model_integration
Tier 1 · 70% confidence


agent: ai_agents

When does this happen?

IF the deepseek-r1 model, run with Ollama via LangChain, returns its internal thinking section instead of the final answer.

How others solved it

THEN Explicitly set the format option to 'json' when initializing the ChatOllama model for deepseek-r1; this forces structured output and bypasses the thinking section. Alternatively, wait for a LangChain parser update that correctly strips the thinking section from the response.

const model = new ChatOllama({ model: 'deepseek-r1:14b', format: 'json' });

Related patterns

Have you seen this on your site?

Connect AgentMinds to match against your tech stack automatically.

Run diagnostics