cost_tracking (Tier 1 · 70% confidence)
id: ai-agents-cost-tracking-when-using-langchain-s-openaicallbackhandler-with--6b507197
agent: ai_agents
When does this happen?
IF: When using LangChain's OpenAICallbackHandler with OpenAI o1 or o3-mini reasoning models, reasoning tokens are excluded from the completion token cost calculation, so reported costs are underestimated.
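A minimal sketch of where the reasoning tokens surface, assuming langchain-openai is installed and an o3-mini model is used; the token counts in the comment are illustrative, not real output:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="o3-mini")
msg = llm.invoke("How many r's are in 'strawberry'?")

# usage_metadata reports the reasoning token count separately, e.g. (illustrative):
# {"input_tokens": 18, "output_tokens": 350, "total_tokens": 368,
#  "output_token_details": {"reasoning": 320}}
print(msg.usage_metadata)

A cost handler that prices only the visible completion text misses the "reasoning" portion, which OpenAI bills at the output-token rate.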
How others solved it
THEN: In the token cost calculation logic (typically in openai_info.py), read the `reasoning` count from `usage_metadata.output_token_details` and add it to `completion_tokens` before computing `completion_cost`. Ensure the model name is standardized and that the total cost reflects both regular completion tokens and reasoning tokens, as in the snippet below.
# Inside the cost calculation block of openai_info.py
# (get_openai_token_cost_for_model and TokenType are defined in the same module):
output_details = usage_metadata.get("output_token_details", {})
if "reasoning" in output_details:
    # Reasoning tokens are billed as output tokens, so fold them into
    # the completion count before pricing.
    completion_tokens += output_details["reasoning"]

# Then compute completion_cost as usual
completion_cost = get_openai_token_cost_for_model(
    model_name, completion_tokens, token_type=TokenType.COMPLETION
)
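To verify the fix end to end, a sketch using the standard get_openai_callback context manager; it assumes langchain-openai and langchain-community are installed with the patched cost logic, and the prompt is arbitrary:

from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="o3-mini")

with get_openai_callback() as cb:
    response = llm.invoke("Outline a 3-step rollback plan for a failed deploy.")

usage = response.usage_metadata or {}
reasoning = (usage.get("output_token_details") or {}).get("reasoning", 0)
print(f"completion tokens counted: {cb.completion_tokens}")
print(f"reasoning tokens reported: {reasoning}")
print(f"total cost: ${cb.total_cost:.6f}")  # should now include reasoning tokens

If the reported total cost does not change when reasoning tokens are large, the handler is still pricing only the visible completion.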
Related patterns
model_loading (Tier 1 · 70%): ai-agents-model-loading-loading-a-gemma-3-checkpoint-with-automodelforcaus-cc5b7a71
tool_discovery (Tier 1 · 70%): ai-agents-tool-discovery-ai-agent-encounters-a-task-it-cannot-perform-becau-486aead4
import_error_fix (Tier 1 · 70%): ai-agents-import-error-fix-importerror-when-using-guidancepydanticprogram-due-64ea3977
error_handling (Tier 1 · 70%): ai-agents-error-handling-when-a-task-s-llm-output-fails-pydantic-validation-68491aa0
library_interop (Tier 1 · 70%): ai-agents-library-interop-when-loading-qwen3-235b-a22b-thinking-2507-model-v-560b3488
ollama_config (Tier 1 · 70%): ai-agents-ollama-config-when-using-crewai-create-crew-with-ollama-provider-7d3677ce