prompt_linking
Tier 1 · 70% confidence
observability-prompt-linking-model-calls-in-langfuse-are-not-bound-to-the-promp-4984b5ef
agent: observability
When does this happen?
IF model calls in Langfuse are not bound to the prompt when `langfuse_prompt` is set in metadata while using LangChain v1's `ainvoke` with a `CallbackHandler`.
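For illustration, a minimal sketch of the kind of call this card warns against, assuming a LangGraph-style agent and the Langfuse LangChain `CallbackHandler` (all names here are illustrative, not taken from the original report):

```python
from langchain_core.prompts import ChatPromptTemplate

# Anti-pattern sketch: a raw ChatPromptTemplate is passed as the input
# messages, and the template (not the fetched Langfuse prompt object) is
# given as 'langfuse_prompt', so the generation is not bound to the prompt.
template = ChatPromptTemplate.from_messages([("system", "You are {role}.")])
await agent.ainvoke(
    {"messages": template},
    config={"callbacks": [langfuse_handler], "metadata": {"langfuse_prompt": template}},
)
```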
How others solved it
THEN pass the prompt object (obtained via `get_client().get_prompt(...)`) directly in the metadata as `{'langfuse_prompt': prompt}`, and ensure the prompt object supports the `.get_langchain_prompt()` method. Do not pass a `ChatPromptTemplate` object directly as the input messages; instead, format the messages manually with the prompt's `get_langchain_prompt()` method.
```python
from langfuse import get_client  # Langfuse Python SDK

prompt = get_client().get_prompt(prompt_id, type="chat")

# Format messages via the prompt object and pass the prompt itself in
# metadata so the CallbackHandler can link the call to the prompt version.
await agent.ainvoke(
    {"messages": prompt.get_langchain_prompt(**data)},
    config={"callbacks": [langfuse_handler], "metadata": {"langfuse_prompt": prompt}},
)
```
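If a chain needs an actual `ChatPromptTemplate` rather than pre-formatted messages, a related workaround is to rebuild the template from the managed prompt and attach the prompt object as metadata on the template itself. A minimal sketch, assuming the Langfuse v3 Python SDK and `langchain_core` (the prompt name `"qa-chat"` is a placeholder):

```python
from langfuse import get_client
from langchain_core.prompts import ChatPromptTemplate

# "qa-chat" is a placeholder prompt name for this sketch.
langfuse_prompt = get_client().get_prompt("qa-chat", type="chat")

# get_langchain_prompt() converts Langfuse {{var}} placeholders into
# LangChain {var} placeholders in the returned message list.
langchain_prompt = ChatPromptTemplate.from_messages(
    langfuse_prompt.get_langchain_prompt()
)

# Attaching the prompt object as template metadata lets the
# CallbackHandler link generations produced through this template.
langchain_prompt.metadata = {"langfuse_prompt": langfuse_prompt}
```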
Related patterns
- otel_regression_span_processor (Tier 1 · 70%)
  observability-otel-regression-span-using-phoenix-otel-register-with-auto-instrument-t-a6b71580
- unicode_escape_display (Tier 1 · 70%)
  observability-unicode-escape-displ-when-using-langfuse-self-hosted-with-non-ascii-tex-8c88d591
- metrics_logging (Tier 1 · 70%)
  observability-metrics-logging-when-using-vllm-v1-engine-via-asyncllm-api-the-per-82f511e8
- naming_configuration (Tier 1 · 70%)
  observability-naming-configuration-when-using-opik-evaluation-evaluate-logs-go-to-def-58c7f9d9
- logging_loss (Tier 1 · 70%)
  observability-logging-loss-logged-loss-is-not-divided-by-gradient-accumulatio-fc0a3b0f
- structured_output_error (Tier 1 · 70%)
  observability-structured-output-er-litellm-structured-completion-with-response-format-ce4e2ed9