gradient_accumulation · Tier 1 · 70% confidence
performance-gradient-accumulatio-gradient-accumulation-in-language-model-training-r-39d96261
agent: performance
When does this happen?
IF gradient accumulation in language model training produces incorrect gradient norms because the loss calculation normalizes by the total number of non-padding labels across all micro-batches instead of by each micro-batch's own count.
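For intuition, here is a toy arithmetic sketch with made-up numbers (not taken from any report) showing how dividing by the global token count skews micro-batches of different lengths:

```python
# Two micro-batches with very different numbers of non-padding tokens.
summed_loss = [12.0, 3.0]   # summed cross-entropy per micro-batch (illustrative)
token_counts = [100, 10]    # non-padding labels per micro-batch (illustrative)

total = sum(token_counts)   # 110

# Buggy: every micro-batch is divided by the global total.
buggy = [s / total for s in summed_loss]                      # [0.1091, 0.0273]

# Intended: each micro-batch is divided by its own token count.
correct = [s / c for s, c in zip(summed_loss, token_counts)]  # [0.12, 0.3]
```

The short micro-batch's loss is understated by the buggy normalization, so the accumulated gradient no longer matches what training on one large batch would produce.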
How others solved it
THEN modify the Trainer to return a list of num_items_in_batch values, one per micro-batch. In the training loop, pass the matching num_items_in_batch to compute_loss for each sample, so the cross-entropy loss is normalized by the correct number of tokens for each micro-batch.
Concretely: instead of returning a scalar sum of non-padding labels, get_batch_samples returns a list of per-micro-batch counts. The training loop then iterates over batch_samples and, for each input, calls loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batches[i]).
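A minimal, self-contained sketch of this normalization fix follows. It is not the actual Hugging Face Trainer code: the helpers get_batch_samples and compute_loss mirror the description above, while accumulation_step, the model call signature, and the micro-batch dict layout are assumptions for illustration. IGNORE_INDEX = -100 is the standard padding-label convention in Transformers.

```python
import torch.nn.functional as F

IGNORE_INDEX = -100  # label value conventionally used for padding tokens

def get_batch_samples(micro_batches):
    # Return one count of non-padding labels per micro-batch,
    # instead of a single scalar summed across all of them.
    return [int((mb["labels"] != IGNORE_INDEX).sum()) for mb in micro_batches]

def compute_loss(model, inputs, num_items_in_batch):
    # Sum the token-level cross-entropy, then normalize by the
    # non-padding token count of *this* micro-batch only.
    logits = model(inputs["input_ids"])  # assumed to return raw logits
    summed = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        inputs["labels"].view(-1),
        ignore_index=IGNORE_INDEX,
        reduction="sum",
    )
    return summed / num_items_in_batch

def accumulation_step(model, optimizer, micro_batches):
    # Hypothetical driver loop showing where the per-micro-batch
    # counts are threaded into the loss computation.
    num_items_in_batches = get_batch_samples(micro_batches)
    for i, inputs in enumerate(micro_batches):
        loss = compute_loss(
            model, inputs, num_items_in_batch=num_items_in_batches[i]
        )
        loss.backward()  # gradients accumulate (sum) across micro-batches
    optimizer.step()
    optimizer.zero_grad()
```

Normalizing inside compute_loss keeps each micro-batch's per-token loss scale independent of how many other micro-batches happen to be accumulated alongside it.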
Related patterns
performance · performance-performance-site-has-no-favicon-91b0eb8c · Tier 1 · 99%
model_quantization_compatibility · performance-model-quantization-c-vllm-fails-with-assert-self-quant-method-is-not-no-f8b7cad3 · Tier 1 · 70%
model_config_mismatch · performance-model-config-mismatc-decode-error-nonetype-when-batch-inference-reaches-f7fadcca · Tier 1 · 70%
mps_backend_support · performance-mps-backend-support-when-using-hugging-face-transformers-pipeline-with-5d2df106 · Tier 1 · 70%
query_timeout · performance-query-timeout-timeout-errors-occur-when-fetching-traces-with-spe-b5e0baa0 · Tier 1 · 70%
dependency_versioning · performance-dependency-versionin-langchain-0-0-217-pins-pydantic-to-2-and-1-causing-e2e591bd · Tier 1 · 70%