Codebook Features: Sparse and Discrete Interpretability for Neural Networks

Abstract

Understanding neural networks is challenging in part because of the dense, continuous nature of their hidden states. We explore whether we can train neural networks to have hidden states that are sparse, discrete, and more interpretable by quantizing their continuous features into what we call codebook features. Codebook features are produced by finetuning neural networks with vector quantization bottlenecks at each layer, producing a network whose hidden features are the sum of a small number of discrete vector codes chosen from a larger codebook. Surprisingly, we find that neural networks can operate under this extreme bottleneck with only modest degradation in performance. This sparse, discrete bottleneck also provides an intuitive way of controlling neural network behavior: first, find codes that activate when the desired behavior is present, then activate those same codes during generation to elicit that behavior. We validate our approach by training codebook Transformers on several different datasets. First, we explore a finite state machine dataset with far more hidden states than neurons. In this setting, our approach overcomes the superposition problem by assigning states to distinct codes, and we find that we can make the neural network behave as if it is in a different state by activating the code for that state. Second, we train Transformer language models with up to 410M parameters on two natural language datasets. We identify codes in these models representing diverse, disentangled concepts (ranging from negative emotions to months of the year) and find that we can guide the model to generate different topics by activating the appropriate codes during inference. Overall, codebook features appear to be a promising unit of analysis and control for neural networks and interpretability. Our codebase and models are open-sourced on GitHub.
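The core mechanism described above, replacing a continuous hidden state with the sum of a small number of discrete codes chosen from a larger codebook, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes cosine-similarity top-k code selection, and all names (`codebook_bottleneck`, `k`) are hypothetical.

```python
import numpy as np

def codebook_bottleneck(hidden, codebook, k=8):
    """Quantize a continuous hidden state into the sum of its k most
    similar codes (by cosine similarity). Returns the quantized state
    and the indices of the active codes."""
    # Normalize vectors so the dot product is cosine similarity.
    h = hidden / (np.linalg.norm(hidden) + 1e-8)
    c = codebook / (np.linalg.norm(codebook, axis=1, keepdims=True) + 1e-8)
    sims = c @ h                    # similarity of every code to the hidden state
    active = np.argsort(sims)[-k:]  # indices of the k most similar codes
    return codebook[active].sum(axis=0), active

rng = np.random.default_rng(0)
codebook = rng.standard_normal((512, 64))  # 512 learned codes, 64-dim activations
hidden = rng.standard_normal(64)           # a continuous hidden state
quantized, active_codes = codebook_bottleneck(hidden, codebook, k=8)
```

Steering, in this picture, amounts to forcing particular entries of `active_codes` on during generation, so that downstream layers see the code vectors associated with the desired behavior.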

Alex Tamkin
Research Scientist

Alex is a research scientist at Anthropic. He completed his PhD in Computer Science at Stanford, advised by Noah Goodman, where he was an Open Philanthropy AI Fellow. His work focuses on understanding and controlling large pretrained language models.

Mohammad Taufeeque
Research Engineer

Mohammad Taufeeque is a research engineer at FAR. He holds a bachelor's degree in Computer Science & Engineering from IIT Bombay, India, and previously interned at Microsoft Research, where he worked on adapting deployed neural text classifiers to out-of-distribution data.