gradient_accumulation

Tier 1 · 70% confidence


agent: performance

When does this happen?

IF gradient accumulation in language model training produces incorrect gradient norms because the cross-entropy loss is normalized by the total number of non-padding labels across all micro-batches instead of by each micro-batch's own count.
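For intuition, the two normalizations diverge whenever padding is unbalanced across micro-batches. The following is a toy sketch only (random logits, -100 as the padding label, counts chosen arbitrarily), not the Trainer's actual code path:

    import torch
    import torch.nn.functional as F

    # Two micro-batches with different numbers of non-padding labels (-100 = padding).
    logits_a = torch.randn(4, 10)                 # micro-batch A: 4 positions, vocab of 10
    labels_a = torch.tensor([1, 2, 3, -100])      # 3 non-padding labels
    logits_b = torch.randn(4, 10)                 # micro-batch B
    labels_b = torch.tensor([4, -100, -100, -100])  # 1 non-padding label

    # Total non-padding labels across the accumulation window: 3 + 1 = 4
    total_items = (labels_a != -100).sum() + (labels_b != -100).sum()

    # Normalizing by the total count across all micro-batches:
    loss_total_norm = (
        F.cross_entropy(logits_a, labels_a, ignore_index=-100, reduction="sum")
        + F.cross_entropy(logits_b, labels_b, ignore_index=-100, reduction="sum")
    ) / total_items

    # Normalizing each micro-batch by its own count:
    loss_per_batch_norm = (
        F.cross_entropy(logits_a, labels_a, ignore_index=-100, reduction="mean")
        + F.cross_entropy(logits_b, labels_b, ignore_index=-100, reduction="mean")
    ) / 2

    print(loss_total_norm, loss_per_batch_norm)  # differ when padding is unbalanced

When every micro-batch has the same number of non-padding tokens the two quantities coincide; otherwise the resulting gradients, and hence the reported gradient norms, differ.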

How others solved it

THEN modify the Trainer so that it returns a list of num_items_in_batch values, one per micro-batch. In the training loop, pass the matching num_items_in_batch to compute_loss for each micro-batch, so the cross-entropy loss is normalized by the correct number of tokens for that micro-batch.

Concretely, instead of returning a scalar sum of non-padding labels, get_batch_samples returns a list of per-micro-batch counts (e.g. num_items_in_batches). In the training loop, iterate over batch_samples and, for the i-th micro-batch, call loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batches[i]).
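A minimal sketch of that change, assuming a Trainer subclass: get_batch_samples and the num_items_in_batch argument to compute_loss exist in recent transformers versions, but their exact signatures vary, and the loop shown in the comments is a simplified stand-in for the real inner training loop rather than the library's implementation:

    from transformers import Trainer

    class PerMicroBatchTrainer(Trainer):
        # Sketch only: the base get_batch_samples signature differs across versions.
        def get_batch_samples(self, epoch_iterator, num_batches, *args, **kwargs):
            # Pull the next `num_batches` micro-batches from the iterator.
            batch_samples = []
            for _ in range(num_batches):
                try:
                    batch_samples.append(next(epoch_iterator))
                except StopIteration:
                    break
            # Keep one count per micro-batch instead of a single scalar sum
            # (-100 marks padding positions in the labels).
            num_items_in_batches = [
                (batch["labels"] != -100).sum().item() for batch in batch_samples
            ]
            return batch_samples, num_items_in_batches

    # Simplified accumulation loop (illustrative names, not the real inner loop):
    # batch_samples, num_items_in_batches = trainer.get_batch_samples(epoch_iterator, accum_steps)
    # for i, inputs in enumerate(batch_samples):
    #     loss = trainer.compute_loss(model, inputs, num_items_in_batch=num_items_in_batches[i])
    #     (loss / len(batch_samples)).backward()  # assumed averaging over micro-batches before the optimizer step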

