Eliciting Latent Predictions from Transformers with the Tuned Lens

Abstract

We analyze transformers from the perspective of iterative inference, seeking to understand how model predictions are refined layer by layer. To do so, we train an affine probe for each block in a frozen pretrained model, making it possible to decode every hidden state into a distribution over the vocabulary. Our method, the tuned lens, is a refinement of the earlier “logit lens” technique, which yielded useful insights but is often brittle. We test our method on various autoregressive language models with up to 20B parameters, showing it to be more predictive, reliable and unbiased than the logit lens. With causal experiments, we show the tuned lens uses similar features to the model itself. We also find the trajectory of latent predictions can be used to detect malicious inputs with high accuracy. All code needed to reproduce our results can be found on GitHub.
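The decoding step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the unembedding matrix and layer norm are random stand-ins for a real frozen model, and in practice the affine translator (A, b) for each layer is trained (initialized at the identity) to match the model's final-layer distribution, rather than left untrained as here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 16, 50

# Stand-ins for frozen pretrained-model components (illustrative only).
W_U = rng.normal(size=(d_model, vocab))  # unembedding matrix

def final_layer_norm(h, eps=1e-5):
    mu = h.mean(-1, keepdims=True)
    var = h.var(-1, keepdims=True)
    return (h - mu) / np.sqrt(var + eps)

def logit_lens(h):
    # Logit lens: push an intermediate hidden state straight through
    # the model's final layer norm and unembedding.
    return final_layer_norm(h) @ W_U

# Tuned lens: a learned per-layer affine "translator" (A, b) is applied
# to the hidden state first. Initialized at the identity, so before
# training it coincides with the logit lens.
A = np.eye(d_model)
b = np.zeros(d_model)

def tuned_lens(h):
    return final_layer_norm(h @ A.T + b) @ W_U

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

# Decode one hidden state into a distribution over the vocabulary.
h = rng.normal(size=(d_model,))
p = softmax(tuned_lens(h))
```

Training would adjust A and b per layer (e.g. by minimizing the KL divergence between the probe's distribution and the model's final output), which is what makes the tuned lens more reliable than the fixed logit lens.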

Nora Belrose

Nora Belrose was a Research Engineer at FAR. Prior to joining FAR, Nora worked at the startup CureMetrix, applying deep learning to the detection of calcified arteries in mammograms. Nora has also made numerous open-source contributions, including developing Classroom, a library implementing deep RL from human preferences.

Lev McKinney
Graduate Student

Lev McKinney is a master’s student in computer science at the University of Toronto. Previously, he worked as a Research Engineer at FAR on language model interpretability.

Jacob Steinhardt
Assistant Professor

Jacob Steinhardt is an Assistant Professor of Statistics at UC Berkeley. His work focuses on robustness, reward specification, and scalable alignment of machine learning (ML) systems.