Quantitative Bounds for Length Generalization in Transformers

Publication Date: 11/10/2025

Event: https://arxiv.org

Reference: https://arxiv.org/abs/2510.27015

Authors: Zachary Izzo, NEC Laboratories America, Inc.; Eshaan Nichani, Princeton University; Jason D. Lee, UC Berkeley

Abstract: We study the problem of length generalization (LG) in transformers: the ability of a model trained on shorter sequences to maintain performance when evaluated on much longer, previously unseen inputs. Prior work by Huang et al. (2025) established that transformers eventually achieve length generalization once the training sequence length exceeds some finite threshold, but left open the question of how large this threshold must be. In this work, we provide the first quantitative bounds on the required training length for length generalization to occur. Motivated by previous empirical and theoretical work, we analyze LG in several distinct problem settings: error control vs. average error control over an input distribution, infinite-precision softmax attention vs. finite-precision attention (which reduces to an argmax) in the transformer, and one- vs. two-layer transformers. In all scenarios, we prove that LG occurs when the internal behavior of the transformer on longer sequences can be “simulated” by its behavior on shorter sequences seen during training. Our bounds give qualitative estimates for the length of training data required for a transformer to generalize, and we verify these insights empirically. These results sharpen our theoretical understanding of the mechanisms underlying extrapolation in transformers, and formalize the intuition that richer training data is required for generalization on more complex tasks.
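
To make the finite-precision setting concrete, the abstract notes that finite-precision attention "reduces to an argmax." The following minimal NumPy sketch (not taken from the paper; the function names and toy dimensions are illustrative assumptions) contrasts standard softmax attention with hard, argmax-style attention:

# Minimal sketch (assumed, not from the paper): softmax attention vs. the
# hard-attention regime in which weights collapse onto the highest-scoring position.
import numpy as np

def softmax_attention(scores: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Standard softmax attention: a weighted average of all value vectors."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

def argmax_attention(scores: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Hard attention: return only the value at the highest-scoring position."""
    return values[np.argmax(scores)]

# Toy example: 5 positions, each with a 4-dimensional value vector.
rng = np.random.default_rng(0)
scores = rng.normal(size=5)
values = rng.normal(size=(5, 4))

print("softmax output:  ", softmax_attention(scores, values))
print("argmax output:   ", argmax_attention(scores, values))
# Sharpening the scores drives the softmax weights toward a one-hot vector,
# so the soft output approaches the hard (argmax) output.
print("sharpened softmax:", softmax_attention(100.0 * scores, values))

Sharpening the scores in the last line shows the sense in which limited precision collapses soft attention onto a single position, matching the argmax-style attention analyzed in the paper's finite-precision setting.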

