This paper was accepted at the Workshop on Unifying Representations in Neural Models (UniReps) at NeurIPS 2025.
Activation steering methods in large language models (LLMs) have emerged as an effective way to perform targeted updates that enhance generated language without requiring large amounts of adaptation data. We ask whether the features discovered by activation steering methods are interpretable. We identify neurons responsible for specific concepts (e.g., "cat") using the "finding experts" method from research on activation steering, and show that ExpertLens, i.e., inspection of these neurons, provides insights about model representation. We find that ExpertLens representations are stable across models and datasets and closely align with human representations inferred from behavioral data, matching inter-human alignment levels. ExpertLens significantly outperforms the alignment captured by word/sentence embeddings. By reconstructing human concept organization through ExpertLens, we show that it enables a granular view of LLM concept representation. Our findings suggest that ExpertLens is a flexible and lightweight approach for capturing and analyzing model representations.
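
To make the idea of inspecting concept-specific "expert" neurons concrete, here is a minimal sketch of one plausible scoring scheme: rank neurons by how much more strongly they activate on concept prompts than on control prompts. This is an illustrative assumption, not the paper's actual "finding experts" procedure; the function name `find_expert_neurons`, the difference-of-means score, and the toy data are all hypothetical.

```python
import numpy as np

def find_expert_neurons(concept_acts, control_acts, top_k=10):
    """Rank neurons by how much more they activate on concept inputs
    than on control inputs (a simple difference-of-means score).

    concept_acts, control_acts: arrays of shape (num_examples, num_neurons)
    holding hidden activations collected from an LLM.
    """
    score = concept_acts.mean(axis=0) - control_acts.mean(axis=0)
    # Indices of the top-scoring neurons, i.e., candidate "experts" for the concept.
    return np.argsort(score)[::-1][:top_k]

# Toy illustration with random activations standing in for real model states.
rng = np.random.default_rng(0)
concept_acts = rng.normal(size=(32, 4096))   # e.g., activations on "cat" prompts
control_acts = rng.normal(size=(32, 4096))   # activations on unrelated prompts
experts = find_expert_neurons(concept_acts, control_acts)
print(experts)
```

Under this kind of scheme, the activation patterns of the selected neurons can then be compared across models, datasets, and human behavioral data, which is the style of analysis the abstract describes.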
