Kristian Kuznetsov


2025

Feature-Level Insights into Artificial Text Detection with Sparse Autoencoders
Kristian Kuznetsov | Laida Kushnareva | Anton Razzhigaev | Polina Druzhinina | Anastasia Voznyuk | Irina Piontkovskaya | Evgeny Burnaev | Serguei Barannikov
Findings of the Association for Computational Linguistics: ACL 2025

Artificial Text Detection (ATD) is becoming increasingly important with the rise of advanced Large Language Models (LLMs). Despite numerous efforts, no single algorithm performs consistently well across different types of unseen text or guarantees effective generalization to new LLMs. Interpretability plays a crucial role in achieving such generalization. In this study, we enhance ATD interpretability by using Sparse Autoencoders (SAEs) to extract features from Gemma-2-2B’s residual stream. We identify both interpretable and efficient features, analyzing their semantics and relevance through domain- and model-specific statistics, a steering approach, and manual or LLM-based interpretation of the obtained features. Our methods offer valuable insights into how texts from various models differ from human-written content. We show that modern LLMs have a distinct writing style, especially in information-dense domains, even though they can produce human-like outputs with personalized prompts. The code for this paper is available at https://github.com/pyashy/SAE_ATD.
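As a rough illustration of the idea sketched in this abstract (extracting SAE features from residual-stream activations and comparing domain- and model-specific statistics), the minimal Python sketch below scores SAE features by how differently they fire on human-written versus LLM-generated text. All tensor and function names here are hypothetical placeholders, and the standard SAE encoder formulation is assumed; the paper's actual pipeline is in the linked repository.

# Minimal sketch: rank SAE features by the gap in activation frequency
# between human-written and LLM-generated text. Names are placeholders;
# pre-trained SAE weights and residual-stream activations are assumed given.
import torch

def sae_encode(h, W_enc, b_enc, b_dec):
    """Standard SAE encoder: f = ReLU((h - b_dec) @ W_enc + b_enc),
    with W_enc of shape [d_model, d_sae]."""
    return torch.relu((h - b_dec) @ W_enc + b_enc)

def activation_frequency(acts, threshold=0.0):
    """Fraction of tokens on which each SAE feature is active."""
    return (acts > threshold).float().mean(dim=0)

# h_human, h_llm: [num_tokens, d_model] residual-stream activations collected
# from human-written and LLM-generated texts, respectively.
def rank_discriminative_features(h_human, h_llm, W_enc, b_enc, b_dec, top_k=20):
    freq_human = activation_frequency(sae_encode(h_human, W_enc, b_enc, b_dec))
    freq_llm = activation_frequency(sae_encode(h_llm, W_enc, b_enc, b_dec))
    diff = freq_llm - freq_human       # positive => fires more often on LLM text
    top = torch.topk(diff.abs(), top_k)
    return top.indices, diff[top.indices]

Features with a large frequency gap are the natural candidates for the kinds of follow-up the abstract mentions: manual or LLM-based interpretation and steering experiments.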

2024

Robust AI-Generated Text Detection by Restricted Embeddings
Kristian Kuznetsov | Eduard Tulchinskii | Laida Kushnareva | German Magai | Serguei Barannikov | Sergey Nikolenko | Irina Piontkovskaya
Findings of the Association for Computational Linguistics: EMNLP 2024

The growing amount and quality of AI-generated texts make detecting such content increasingly difficult. In most real-world scenarios, the domain (style and topic) of the generated data and the generator model are not known in advance. In this work, we focus on the robustness of classifier-based detectors of AI-generated text, namely their ability to transfer to unseen generators or semantic domains. We investigate the geometry of the embedding space of Transformer-based text encoders and show that clearing out harmful linear subspaces helps to train a robust classifier that ignores domain-specific spurious features. We evaluate several subspace decomposition and feature selection strategies and achieve significant improvements over state-of-the-art methods in cross-domain and cross-generator transfer. Our best approaches for head-wise and coordinate-based subspace removal increase the mean out-of-distribution (OOD) classification score by up to 9% and 14% in particular setups for RoBERTa and BERT embeddings, respectively. We release our code and data: https://github.com/SilverSolver/RobustATD
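As a rough illustration of the subspace-removal idea described in this abstract (not the paper's head-wise or coordinate-based procedures, which give its best results), the sketch below estimates a domain-associated linear subspace from labelled embeddings, projects it out, and trains a human-vs-AI classifier on the cleaned vectors. All variable names are placeholders.

# Rough sketch: remove a domain-associated linear subspace from encoder
# embeddings before training an AI-text detector. Names are placeholders;
# the paper's actual decomposition strategies live in the linked repo.
import numpy as np
from sklearn.linear_model import LogisticRegression

def domain_subspace(X, domain_labels, k=2):
    """Top-k principal directions of the per-domain mean embeddings
    (k should not exceed the number of domains minus one)."""
    means = np.stack([X[domain_labels == d].mean(axis=0)
                      for d in np.unique(domain_labels)])
    means -= means.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(means, full_matrices=False)
    return Vt[:k]                                  # shape [k, dim]

def remove_subspace(X, V):
    """Project embeddings onto the orthogonal complement of span(V)."""
    return X - (X @ V.T) @ V

# X_train: [n, dim] encoder embeddings, y_train: 0 = human / 1 = AI,
# d_train: domain labels, used only to estimate the nuisance subspace.
def train_robust_detector(X_train, y_train, d_train, k=2):
    V = domain_subspace(X_train, d_train, k)
    clf = LogisticRegression(max_iter=1000).fit(remove_subspace(X_train, V), y_train)
    return clf, V

# At test time (possibly an unseen domain or generator):
# predictions = clf.predict(remove_subspace(X_test, V))

The intent of this toy version matches the abstract: directions that mainly encode domain identity are treated as spurious and removed so that the detector relies on generator-vs-human signal instead.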