Towards Automated Circuit Discovery for Mechanistic Interpretability


Through considerable effort and intuition, several recent works have reverse-engineered nontrivial behaviors of transformer models. This paper systematizes the mechanistic interpretability process they followed. First, researchers choose a metric and dataset that elicit the desired model behavior. Then, they apply activation patching to find which abstract neural network units are involved in the behavior. By varying the dataset, metric, and units under investigation, researchers can understand the functionality of each component. We automate one step of this process: identifying the circuit that implements the specified behavior in the model's computational graph. We propose several algorithms and validate them by reproducing previous interpretability results. For example, the ACDC algorithm rediscovered 5/5 of the component types in a circuit in GPT-2 Small that computes the Greater-Than operation. ACDC selected 68 of the 32,000 edges in GPT-2 Small, all of which had been found manually by previous work. Our code is available at
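The abstract's core idea, greedily patching out edges of the computational graph and keeping only those whose removal noticeably changes the behavior metric, can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's implementation: the graph is reduced to a weighted sum, "patching" an edge swaps its clean activation for a corrupted one, and the names (`acdc_prune`, `tau`) are illustrative.

```python
# Toy sketch of ACDC-style greedy edge pruning (illustrative, not the paper's code).
# The "model" is a weighted sum over edges; patching an edge replaces its clean
# activation with a corrupted one, mimicking activation patching on one edge.

def output(weights, clean, corrupt, pruned):
    """Each edge contributes weight * activation; pruned edges use corrupted inputs."""
    return sum(w * (corrupt[e] if e in pruned else clean[e])
               for e, w in weights.items())

def acdc_prune(weights, clean, corrupt, tau):
    """Greedily prune each edge whose patching changes the output by less than tau."""
    pruned = set()
    base = output(weights, clean, corrupt, pruned)
    for e in list(weights):
        candidate = pruned | {e}
        patched = output(weights, clean, corrupt, candidate)
        if abs(patched - base) < tau:  # edge barely matters: leave it pruned
            pruned = candidate
            base = patched
    return pruned

# Edge "a" carries the behavior; "b" and "c" are negligible and get pruned.
weights = {"a": 1.0, "b": 0.01, "c": 0.005}
clean = {"a": 1.0, "b": 1.0, "c": 1.0}
corrupt = {"a": 0.0, "b": 0.0, "c": 0.0}
print(acdc_prune(weights, clean, corrupt, tau=0.05))  # -> {'b', 'c'}
```

The surviving edges (here, only `a`) form the discovered circuit; in the paper's setting the same loop runs over thousands of edges in GPT-2 Small's graph.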

Adrià Garriga-Alonso
Research Scientist

Adrià Garriga-Alonso is a scientist at FAR, working on understanding what learned optimizers want. Previously he worked at Redwood Research on neural network interpretability, and holds a PhD from the University of Cambridge.