CMU-CS-25-104
Computer Science Department
School of Computer Science, Carnegie Mellon University



CMU-CS-25-104

Runtian Zhai

Ph.D. Thesis

April 2025

CMU-CS-25-104.pdf


Keywords: Machine learning, representation learning, learning theory, foundation models, artificial intelligence

This dissertation establishes the contexture theory to mathematically characterize the mechanism of representation learning, also known as pretraining. Despite the remarkable empirical success of foundation models, it remains unclear what representations they learn and why those representations are useful for such a wide range of disparate downstream tasks. A scientific understanding of representation learning is critical, especially now that scaling up model size is yielding diminishing returns and designing new pretraining methods is imperative for further progress.

Prior work treated different representation learning methods largely in isolation, whereas the contexture theory provides a unified framework for characterizing the representations these methods learn. The central argument is that a representation is learned from the association between the input X and a context variable A. We prove that if an encoder captures the maximum information of this association, in which case we say that the encoder learns the contexture, then it is optimal on the class of tasks that are compatible with the context. We also show that a context is most useful when the association between X and A is neither too strong nor too weak. An important implication of the contexture theory is that increasing the model size alone yields diminishing returns, and further advances require better contexts.
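To make the central object concrete, here is a minimal formal sketch in assumed notation (the operator T, the functions mu_i, and the dimension d below are illustrative and not taken verbatim from the thesis): the context is the joint distribution of X and A, and learning the contexture amounts to spanning the top singular functions of the induced expectation operator,
\[
  (T g)(x) \;=\; \mathbb{E}\!\left[\, g(A) \mid X = x \,\right], \qquad g \in L^2(P_A),
\]
\[
  \text{contexture} \;=\; \mathrm{span}\{\mu_1, \dots, \mu_d\}, \quad \text{where } \mu_1, \dots, \mu_d \text{ are the top-}d \text{ left singular functions of } T,
\]
so that an encoder \(\Phi: \mathcal{X} \to \mathbb{R}^d\) learns the contexture if \(\mathrm{span}(\Phi) = \mathrm{span}\{\mu_1, \dots, \mu_d\}\).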

We demonstrate that many existing pretraining objectives can learn the contexture, including supervised learning, self-supervised learning, and generative models. Building on this, we introduce two general objectives, SVME and KISE, for learning the contexture. We also show how to mix multiple contexts together, a simple way to create better contexts from existing ones. We then prove statistical learning bounds for representation learning and extend the framework to spectrally transformed kernel regression for semi-supervised learning. Finally, we discuss the effect of the distribution shift from the pretraining data to the downstream task.
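For the semi-supervised extension, a hedged sketch of spectrally transformed kernel regression in assumed notation (the base kernel K, the transformation s, and the eigenpairs below are illustrative): the base kernel is re-weighted along its eigenfunctions and the resulting kernel is used for ordinary kernel regression on the labeled samples,
\[
  K(x, x') \;=\; \sum_i \lambda_i\, \mu_i(x)\, \mu_i(x'), \qquad
  K_s(x, x') \;=\; \sum_i s(\lambda_i)\, \mu_i(x)\, \mu_i(x'),
\]
where \(s\) is a monotone spectral transformation (for example, a polynomial of K), \(K_s\) is approximated with the help of unlabeled samples, and the predictor is the kernel ridge regressor with kernel \(K_s\) fit on the labeled set.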

165 pages

Thesis Committee:
Pradeep Ravikumar (Co-Chair)
Zico Kolter (Co-Chair)
Andrej Risteski
Yuandong Tian (Meta)

Srinivasan Seshan, Head, Computer Science Department
Martial Hebert, Dean, School of Computer Science

Creative Commons: CC-BY (Attribution)

