react_parser_stop_token
Tier 1 · 70% confidence


agent: ai_agents

When does this happen?

IF When using Llama3 or similar LLMs with a ReAct agent, the model may emit both a parse-able Action and a Final Answer in the same response. LangChain's ReAct output parser expects exactly one of the two and raises an OutputParserException when both appear.
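For illustration, a hypothetical raw completion that trips the parser looks like this (an Action block and a Final Answer in one response):

Thought: I should look this up.
Action: search
Action Input: current weather in Paris
Final Answer: It is sunny in Paris.

In recent LangChain versions the resulting OutputParserException carries a message along the lines of "Parsing LLM output produced both a final answer and a parse-able action"; the exact wording may vary by version.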

How others solved it

THEN Set the LLM's stop sequence to 'Observation:' so that generation halts the moment the model emits that token, before it can hallucinate an observation of its own or append an extraneous Final Answer. Pass the 'stop' parameter when initializing the LLM (e.g., ChatOllama(model='llama3', stop=['Observation:'])), then use create_react_agent with that LLM.

from langchain_community.chat_models import ChatOllama
from langchain.agents import create_react_agent

# Halt generation as soon as the model emits 'Observation:'
llm = ChatOllama(model='llama3', stop=['Observation:'])
agent_runnable = create_react_agent(llm, tools, prompt)  # tools and prompt defined elsewhere
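
For context, a minimal end-to-end sketch of how this might be wired up. It assumes the langchain and langchain-community packages and pulls the public hwchase17/react prompt from LangChain Hub; word_length is a placeholder tool invented for this example:

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.chat_models import ChatOllama
from langchain_core.tools import tool

@tool
def word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

prompt = hub.pull("hwchase17/react")  # standard ReAct prompt template
llm = ChatOllama(model="llama3", stop=["Observation:"])
agent = create_react_agent(llm, [word_length], prompt)
executor = AgentExecutor(agent=agent, tools=[word_length],
                         verbose=True, handle_parsing_errors=True)
executor.invoke({"input": "How many letters are in 'observation'?"})

If the LLM wrapper in use does not accept stop at construction time, an alternative is to bind it per call with llm.bind(stop=['Observation:']); either way, the resulting runnable feeds into create_react_agent the same.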

