Deep Learning Interpretability Researcher
Curt Tigges
I do LLM interpretability research and engineering at Decode Research, the parent organization of Neuronpedia. My research spans several areas, including feature representation, sparse autoencoders (SAEs), circuit discovery, world modeling, and developmental interpretability. I have also worked on model training and fine-tuning.
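For readers unfamiliar with SAEs: the idea is to learn an overcomplete, sparsely activating feature basis that reconstructs a model's internal activations. Here is a minimal PyTorch sketch, assuming a single-layer ReLU encoder/decoder trained with an L1 sparsity penalty; all names, shapes, and hyperparameters are illustrative placeholders, not the setup from any particular project.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: reconstruct model activations through an overcomplete,
    sparsely activating feature basis."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_features)   # activations -> features
        self.dec = nn.Linear(d_features, d_model)   # features -> reconstruction

    def forward(self, acts: torch.Tensor):
        feats = torch.relu(self.enc(acts))          # sparse feature activations
        recon = self.dec(feats)
        return recon, feats

# One training step: reconstruction loss plus an L1 term to encourage sparsity.
sae = SparseAutoencoder(d_model=512, d_features=4096)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(64, 512)                         # stand-in for LLM activations
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
loss.backward()
opt.step()
```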
Technical Foci
Selected Projects
[Mech Interp Tooling] Crosslayer Coding: Crosslayer Transcoder Training for LLMs
Deep Learning, Mechanistic Interpretability
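Briefly, a transcoder is an SAE-like module trained to map an MLP's input to that MLP's output through a sparse feature layer; a crosslayer transcoder additionally lets features read at one layer and contribute to reconstructions at later layers. A toy single-layer sketch, assuming the same ReLU-plus-L1 recipe as above (illustrative names and shapes, not the Crosslayer Coding implementation):

```python
import torch
import torch.nn as nn

class Transcoder(nn.Module):
    """Toy transcoder: approximate an MLP's input->output map
    through a sparse, overcomplete feature layer."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_features)
        self.dec = nn.Linear(d_features, d_model)

    def forward(self, mlp_in: torch.Tensor):
        feats = torch.relu(self.enc(mlp_in))
        return self.dec(feats), feats

# Unlike an SAE, the target is the MLP's output, not the module's own input.
tc = Transcoder(d_model=512, d_features=4096)
mlp_in, mlp_out = torch.randn(64, 512), torch.randn(64, 512)  # stand-in tensors
recon, feats = tc(mlp_in)
loss = ((recon - mlp_out) ** 2).mean() + 1e-3 * feats.abs().mean()
```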
[Mech Interp Tooling] Probity: A Toolkit for Neural Network Probing
Deep Learning, Mechanistic Interpretability
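Probing here means fitting a small supervised model to read a property directly off a network's internal activations; if a linear probe succeeds, the property is linearly decodable at that layer. A generic scikit-learn sketch of the idea, with stand-in data (this is not Probity's own API):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-ins: hidden activations (n_examples, d_model) from some layer,
# plus binary labels for the property being probed (e.g., sentiment).
acts = np.random.randn(1000, 512)
labels = np.random.randint(0, 2, size=1000)

# Fit a linear probe on a train split, evaluate on held-out examples.
probe = LogisticRegression(max_iter=1000).fit(acts[:800], labels[:800])
print("probe accuracy:", probe.score(acts[800:], labels[800:]))
```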
[NeurIPS 2024 Paper] LLM Circuit Analyses Are Consistent Across Training and Scale
Deep Learning, Mechanistic Interpretability, NLP
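This paper tracks how circuits emerge and persist across training checkpoints and model scales, using patching-based attribution methods. As a flavor of the basic operation, here is a minimal activation-patching sketch with TransformerLens; the prompts and hook site are illustrative choices, not the paper's pipeline.

```python
import torch
from transformer_lens import HookedTransformer

# Minimal activation patching: run a "clean" prompt, cache an activation,
# then splice it into the "corrupted" run and see how the logits move.
model = HookedTransformer.from_pretrained("gpt2")

clean = model.to_tokens("The capital of France is")
corrupt = model.to_tokens("The capital of Italy is")
_, cache = model.run_with_cache(clean)

hook_name = "blocks.8.hook_resid_post"   # illustrative choice of site

def patch(act, hook):
    # Replace the corrupted activation with the cached clean one.
    return cache[hook_name]

patched_logits = model.run_with_hooks(corrupt, fwd_hooks=[(hook_name, patch)])
top_id = patched_logits[0, -1].argmax().item()
print(model.tokenizer.decode(top_id))    # does the patched run now say " Paris"?
```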
[Blackbox NLP Paper] Linear Representations of Sentiment in Large Language Models
Deep Learning, Mechanistic Interpretability, NLP
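This paper finds that sentiment is summarized by a single direction in activation space. One simple estimator for such a direction, among the several methods the paper compares, is the difference of mean activations on positive versus negative inputs. A sketch with stand-in data (real activations would come from a model's residual stream):

```python
import numpy as np

# Stand-ins for residual-stream activations at some layer:
# rows are examples, columns are model dimensions.
pos_acts = np.random.randn(500, 512) + 0.1   # activations on positive-sentiment text
neg_acts = np.random.randn(500, 512) - 0.1   # activations on negative-sentiment text

# Difference-of-means direction, normalized to unit length.
direction = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# Projecting activations onto this direction yields a scalar
# "sentiment score" per example.
scores = pos_acts @ direction
print("mean projection of positive examples:", scores.mean())
```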