Deep Learning Interpretability Researcher
Curt Tigges
I do LLM interpretability research at the EleutherAI Institute. My research spans several areas, including feature representation and circuit discovery, world modeling, and developmental interpretability. I've also worked on model training and fine-tuning.