
An information theoretic explanation for several empirical phenomena in language models


Code for "An Information Theory of Compute-Optimal Size Scaling, Emergence, and Plateaus in Language Models," presented at the Neural Compression Workshop, NeurIPS 2024.

TL;DR: We present a simple, unified graph framework that explains compute-optimal size scaling, emergent capabilities, and performance plateauing, using tools from iterative decoding in information theory and from random network theory.

Abstract: Recent empirical studies show three phenomena as language models grow in size: compute-optimal size scaling, emergent capabilities, and performance plateauing. We present a simple unified mathematical framework to explain all of these scaling phenomena, building on recent skill-text bipartite graph frameworks for semantic learning. Modeling the learning of concepts from texts as an iterative process yields an analogy to iterative decoding of low-density parity-check (LDPC) codes in information theory. Thence, drawing on finite-size scaling characterizations of LDPC decoding, we derive the compute-optimal size scaling (Chinchilla rule) for language models. Further, using tools from random network theory, we provide a simple explanation for both the emergence of complex skills and the plateauing of performance as the size of language models scales. In particular, we observe multiple plateaus.
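
To make the decoding analogy concrete, below is a minimal illustrative sketch (independent of the notebook, not the authors' code) of the standard density-evolution recursion for a regular bipartite graph under erasure-style iterative decoding. The degree choices and the sweep over the initial unlearned fraction are assumptions for illustration, not parameters from the paper; the point is only the sharp threshold and the flat region beyond it.

```python
# Minimal sketch (not the authors' code) of the LDPC analogy in the abstract:
# skills play the role of variable nodes, texts the role of check nodes, and
# learning a skill corresponds to resolving an erasure by iterative (peeling)
# decoding. The regular degrees d_s, d_t and the eps sweep are illustrative
# assumptions, not quantities taken from the paper.
import numpy as np

def residual_unlearned_fraction(eps, d_s=3, d_t=6, n_iters=1000):
    """Density-evolution recursion for a (d_s, d_t)-regular bipartite graph
    over an erasure-style channel; eps is the initial fraction of unlearned
    (erased) skills."""
    x = eps
    for _ in range(n_iters):
        x = eps * (1.0 - (1.0 - x) ** (d_t - 1)) ** (d_s - 1)
    return x

# Sweeping eps across the decoding threshold (about 0.43 for these degrees)
# shows the sharp transition used to explain emergence: below the threshold the
# residual collapses to essentially zero and stays there (a plateau), while
# above it a finite fraction of skills remains unlearned.
for eps in np.linspace(0.30, 0.55, 6):
    print(f"eps = {eps:.2f} -> residual unlearned fraction = "
          f"{residual_unlearned_fraction(eps):.4f}")
```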

Notebook: info_theory_size_scaling_plateaus.ipynb
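
As a rough companion to the notebook, here is a back-of-the-envelope sketch of the Chinchilla-style compute-optimal allocation mentioned in the abstract, using the common approximation that training compute is C ≈ 6ND FLOPs and the empirical finding that the optimal parameter count N and token count D both grow roughly as √C. The proportionality constant below is illustrative only, not a value fitted in the paper or the notebook.

```python
# Back-of-the-envelope sketch of the Chinchilla-style rule referenced in the
# abstract, under the common approximation C ~ 6*N*D training FLOPs with N* and
# D* both scaling roughly as sqrt(C). The constant k is illustrative only.
import math

def compute_optimal_split(C_flops, k=0.3):
    """Split a compute budget C (FLOPs) into a parameter count N and token count D."""
    N = k * math.sqrt(C_flops / 6.0)   # N* grows as sqrt(C)
    D = C_flops / (6.0 * N)            # D = C / (6N) also grows as sqrt(C)
    return N, D

for C in (1e21, 1e22, 1e23):
    N, D = compute_optimal_split(C)
    print(f"C = {C:.0e} FLOPs -> N ~ {N:.2e} params, D ~ {D:.2e} tokens")
```

Note that a tenfold increase in compute raises both N and D by about √10, i.e., model size and training data are scaled together rather than model size alone.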

Citation:

@inproceedings{nayak2024information,
  title={An Information Theory of Compute-Optimal Size Scaling, Emergence, and Plateaus in Language Models},
  author={Nayak, Anuj K and Varshney, Lav R},
  booktitle={Workshop on Machine Learning and Compression, NeurIPS 2024},
  year={2024}
}
