tool_runtime_support — Tier 1 · 70% confidence
ai-agents-tool-runtime-support-when-using-langgraph-s-built-in-toolnode-with-tool-948ed710
agent: ai_agents
When does this happen?
IF When using LangGraph's built-in ToolNode with tools that require ToolRuntime for runtime context injection, invocation fails because the runtime is never passed to the tool.
How others solved it
THEN As a temporary workaround, either use LangChain's internal ToolNode (the underscore-prefixed variant) or implement a custom tool node that manually passes the runtime parameter to tool.invoke(). The LangGraph maintainers have acknowledged the issue and plan to add native ToolRuntime support to the built-in ToolNode in an upcoming release; after updating, the workaround can be removed.
```python
from langchain_core.messages import ToolMessage
from langgraph.runtime import Runtime

def custom_tool_node(state: dict, runtime: Runtime):
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        # Manually inject the runtime that the built-in ToolNode omits.
        observation = tool.invoke(tool_call["args"], runtime=runtime)
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}
```

Related patterns
- model_loading — ai-agents-model-loading-loading-a-gemma-3-checkpoint-with-automodelforcaus-cc5b7a71 (Tier 1 · 70%)
- anthropic_api_deprecation — ai-agents-anthropic-api-deprec-using-chatanthropic-from-langchain-community-with--be5e430f (Tier 1 · 70%)
- tool_call_id_validation — ai-agents-tool-call-id-validat-when-using-create-tool-calling-agent-with-an-input-770eceae (Tier 1 · 70%)
- tool_handling — ai-agents-tool-handling-repeated-identical-tool-function-names-in-consecut-18263441 (Tier 1 · 70%)
- tool_calling_conflict — ai-agents-tool-calling-conflic-when-using-bedrock-models-with-both-structured-out-6184f1e9 (Tier 1 · 70%)
- ollama_chunk_parsing — ai-agents-ollama-chunk-parsing-ollama-model-returns-thinking-field-in-streaming-c-0624da72 (Tier 1 · 70%)