gradient_accumulation_cross_entropy
Tier 1 · 70% confidence
ai-agents-gradient-accumulatio-when-training-with-gradient-accumulation-the-avera-fca461dc
agent: ai_agents
When does this happen?
IF When training with gradient accumulation, the average loss per micro-batch is computed using the total number of items across all micro-batches instead of the number in each micro-batch, resulting in incorrect gradient norms.
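To see the scale mismatch concretely, here is a minimal sketch; the two-micro-batch split and the 10/30 token counts are illustrative assumptions, not figures from the original report:

    import torch

    # Two micro-batches: 10 and 30 valid (non -100) labels respectively.
    labels_a = torch.cat([torch.zeros(10, dtype=torch.long),
                          torch.full((30,), -100)])
    labels_b = torch.zeros(30, dtype=torch.long)
    batch_samples = [{"labels": labels_a}, {"labels": labels_b}]

    # Buggy aggregation: one total count for the whole window.
    num_items_in_batch = sum(
        (b["labels"].ne(-100)).sum() for b in batch_samples
    )  # tensor(40)

    # What the fix propagates instead: one count per micro-batch.
    num_items_in_batches = [
        (b["labels"].ne(-100)).sum() for b in batch_samples
    ]  # [tensor(10), tensor(30)]

    # Dividing the first micro-batch's summed loss by 40 instead of 10
    # shrinks its contribution fourfold, which is what skews the
    # gradient norms away from their expected values.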
How others solved it
THEN Modify get_batch_samples to return a list of per-batch item counts instead of a single aggregated count, and pass each micro-batch's count separately to compute_loss so that the cross-entropy loss is correctly scaled per micro-batch. This ensures gradient norms match expected values.
In get_batch_samples, change:

    num_items_in_batch = sum([(batch['labels'].ne(-100)).sum() for batch in batch_samples])

to:

    num_items_in_batches = [(batch['labels'].ne(-100)).sum() for batch in batch_samples]

Then pass each count to compute_loss in the training loop.
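A minimal end-to-end sketch of how the fixed pieces might fit together. It assumes a transformers-style model whose output exposes .logits and a hand-rolled accumulation loop; get_batch_samples and compute_loss here are simplified stand-ins for the trainer internals, not the library's exact signatures, and label shifting for causal LMs is omitted for brevity:

    import torch
    import torch.nn.functional as F

    def get_batch_samples(epoch_iterator, num_batches):
        # Collect one gradient-accumulation window of micro-batches.
        batch_samples = [next(epoch_iterator) for _ in range(num_batches)]
        # Fixed: a list of per-micro-batch counts, not one aggregated sum.
        num_items_in_batches = [
            (batch["labels"].ne(-100)).sum() for batch in batch_samples
        ]
        return batch_samples, num_items_in_batches

    def compute_loss(model, batch, num_items_in_batch):
        # Sum the token losses, then divide by this micro-batch's own
        # valid-label count so each micro-batch is scaled independently.
        logits = model(batch["input_ids"]).logits
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            batch["labels"].reshape(-1),
            ignore_index=-100,
            reduction="sum",
        )
        return loss / num_items_in_batch

    # Accumulation loop: pair each micro-batch with its own count.
    # model, optimizer, epoch_iterator, and accumulation_steps are
    # assumed to be defined by the surrounding training script.
    batch_samples, num_items_in_batches = get_batch_samples(
        epoch_iterator, accumulation_steps
    )
    for batch, n_items in zip(batch_samples, num_items_in_batches):
        loss = compute_loss(model, batch, n_items)
        (loss / accumulation_steps).backward()  # mean over the window
    optimizer.step()
    optimizer.zero_grad()

Averaging the per-micro-batch means over the window (the division by accumulation_steps) keeps the effective loss scale independent of the accumulation factor, so gradient norms line up with single-batch training.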
Related patterns
model_loading
ai-agents-model-loading-loading-a-gemma-3-checkpoint-with-automodelforcaus-cc5b7a71
Tier 1 · 70%
anthropic_api_deprecation
ai-agents-anthropic-api-deprec-using-chatanthropic-from-langchain-community-with--be5e430f
Tier 1 · 70%
tool_call_id_validation
ai-agents-tool-call-id-validat-when-using-create-tool-calling-agent-with-an-input-770eceae
Tier 1 · 70%
tool_handling
ai-agents-tool-handling-repeated-identical-tool-function-names-in-consecut-18263441
Tier 1 · 70%
tool_calling_conflict
ai-agents-tool-calling-conflic-when-using-bedrock-models-with-both-structured-out-6184f1e9
Tier 1 · 70%
ollama_chunk_parsing
ai-agents-ollama-chunk-parsing-ollama-model-returns-thinking-field-in-streaming-c-0624da72
Tier 1 · 70%