ollama_model_integration
Tier 1 · 70% confidence
ai-agents-ollama-model-integra-using-deepseek-r1-model-with-ollama-via-langchain--b2e1ea58
agent: ai_agents
When does this happen?
IF Using the deepseek-r1 model with Ollama via LangChain returns the model's internal thinking section instead of the final answer.
How others solved it
THEN Explicitly set the format option to 'json' when initializing the ChatOllama model for deepseek-r1 to bypass the thinking section. Alternatively, wait for a parser update in LangChain that strips the thinking section from the response.
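Until such a parser update lands, the thinking section can also be stripped manually. deepseek-r1 wraps its reasoning in <think>...</think> tags, so the final answer can be recovered from the raw response text. A minimal sketch (the stripThinking helper below is a hypothetical name, not a LangChain API):

```typescript
// Hypothetical helper: removes deepseek-r1's <think>...</think> reasoning
// block(s) from a raw response string, leaving only the final answer.
function stripThinking(response: string): string {
  // [\s\S]*? matches across newlines, non-greedily, so multiple
  // thinking blocks are each removed without swallowing the answer.
  return response.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}

const raw = '<think>Let me reason about this...</think>The answer is 42.';
console.log(stripThinking(raw)); // → "The answer is 42."
```

This can be applied to the content of the message returned by the model before any downstream parsing, as a stopgap alongside (or instead of) the format: 'json' option.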
import { ChatOllama } from '@langchain/ollama';

const model = new ChatOllama({ model: 'deepseek-r1:14b', format: 'json' });

Related patterns
model_loading: ai-agents-model-loading-loading-a-gemma-3-checkpoint-with-automodelforcaus-cc5b7a71 (Tier 1 · 70%)
tool_discovery: ai-agents-tool-discovery-ai-agent-encounters-a-task-it-cannot-perform-becau-486aead4 (Tier 1 · 70%)
import_error_fix: ai-agents-import-error-fix-importerror-when-using-guidancepydanticprogram-due-64ea3977 (Tier 1 · 70%)
error_handling: ai-agents-error-handling-when-a-task-s-llm-output-fails-pydantic-validation-68491aa0 (Tier 1 · 70%)
library_interop: ai-agents-library-interop-when-loading-qwen3-235b-a22b-thinking-2507-model-v-560b3488 (Tier 1 · 70%)
ollama_config: ai-agents-ollama-config-when-using-crewai-create-crew-with-ollama-provider-7d3677ce (Tier 1 · 70%)