gradient_accumulation_cross_entropy
Tier 1 · 70% confidence


agent: ai_agents

When does this happen?

IF When training with gradient accumulation, the average loss for each micro-batch is computed using the total number of items across all micro-batches rather than the number of items in that micro-batch, so each micro-batch's cross-entropy loss is mis-scaled and the resulting gradient norms are incorrect.
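
A quick way to see the mismatch (a minimal numeric sketch; the counts and loss sums are made up for illustration): when micro-batches have unequal numbers of supervised tokens, dividing each micro-batch's summed loss by the global count produces different per-micro-batch loss values, and therefore different backward-pass magnitudes, than dividing by that micro-batch's own count.

```python
# Illustrative numbers only: two micro-batches with different
# numbers of supervised (non -100) tokens.
loss_sums = [5.0, 12.0]  # summed cross-entropy per micro-batch
counts = [10, 30]        # valid-token counts per micro-batch
total = sum(counts)      # 40

buggy = [s / total for s in loss_sums]              # [0.125, 0.3]
fixed = [s / n for s, n in zip(loss_sums, counts)]  # [0.5, 0.4]

# Each micro-batch loss is what gets backpropagated, so the two
# scalings yield different accumulated gradients and gradient norms.
print(buggy, fixed)
```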

How others solved it

THEN Modify get_batch_samples to return a list of per-micro-batch item counts instead of a single aggregated count, and pass each micro-batch's own count to compute_loss so the cross-entropy loss is scaled by that micro-batch's count. With this change, gradient norms match the expected values.
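
As a minimal sketch of the per-micro-batch scaling (the compute_loss signature below is an assumption for illustration, not the project's exact API): sum-reduce the cross-entropy over supervised positions, then divide by this micro-batch's own count rather than an aggregated one.

```python
import torch
import torch.nn.functional as F

def compute_loss(logits, labels, num_items_in_batch):
    # Sum-reduced cross-entropy over valid positions (labels != -100);
    # any label shifting is assumed to have happened upstream.
    loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
        reduction="sum",
    )
    # Scale by this micro-batch's own item count.
    return loss / num_items_in_batch
```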

In get_batch_samples, change:

num_items_in_batch = sum([(batch['labels'].ne(-100)).sum() for batch in batch_samples])

to:

num_items_in_batches = [(batch['labels'].ne(-100)).sum() for batch in batch_samples]

Then pass each per-micro-batch count to compute_loss in the training loop, as in the sketch below.
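
Put together, a hedged sketch of how the pieces might connect (the model call and loop structure are illustrative; `model` and `batch_samples` are assumed to come from the surrounding trainer):

```python
def get_batch_samples(batch_samples):
    # One count per micro-batch instead of a single aggregated sum.
    num_items_in_batches = [
        (batch["labels"].ne(-100)).sum() for batch in batch_samples
    ]
    return batch_samples, num_items_in_batches

batch_samples, num_items_in_batches = get_batch_samples(batch_samples)
for batch, num_items in zip(batch_samples, num_items_in_batches):
    logits = model(**batch).logits  # illustrative model call
    loss = compute_loss(logits, batch["labels"], num_items)
    loss.backward()  # gradients accumulate across micro-batches
```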

