@inproceedings{srivastava-etal-2025-thinkslm,
    title = "{T}hink{SLM}: Towards Reasoning in Small Language Models",
    author = "Srivastava, Gaurav  and
      Cao, Shuxiang  and
      Wang, Xuan",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1659/",
    pages = "32600--32650",
    ISBN = "979-8-89176-332-6",
    abstract = "Reasoning has long been viewed as an emergent property of large language models (LLMs). However, recent studies challenge this assumption, showing that small language models (SLMs) can also achieve competitive reasoning performance. This paper introduces $\textbf{ThinkSLM}$, the first extensive benchmark to systematically evaluate and study the reasoning abilities of SLMs trained from scratch or derived from LLMs through quantization, pruning, and distillation. We first establish a reliable evaluation criterion comparing available methods and LLM judges against our human evaluations. Then we present a study evaluating $\textbf{72}$ diverse SLMs from $\textbf{six}$ major model families across $\textbf{17 reasoning benchmarks}$. We repeat all our experiments $\textbf{three}$ times to ensure a robust assessment. Our findings show that: $\textbf{\textit{1)}}$ reasoning ability in SLMs is strongly influenced by training methods and data quality rather than solely model scale; $\textbf{\textit{2)}}$ quantization preserves reasoning capability, while pruning significantly disrupts it; $\textbf{\textit{3)}}$ larger models consistently exhibit higher robustness against adversarial perturbations and intermediate reasoning, but certain smaller models closely match or exceed the larger models' performance. Our findings challenge the assumption that scaling is the only way to achieve strong reasoning. Instead, we foresee a future where SLMs with strong reasoning capabilities can be developed through structured training or post-training compression. Our $\textbf{ThinkSLM}$ Leaderboard is publicly available at: https://ctrl-gaurav.github.io/thinkslm.github.io/."
}

Markdown (Informal)
[ThinkSLM: Towards Reasoning in Small Language Models](https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1659/) (Srivastava et al., EMNLP 2025)
ACL
Gaurav Srivastava, Shuxiang Cao, and Xuan Wang. 2025. ThinkSLM: Towards Reasoning in Small Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 32600–32650, Suzhou, China. Association for Computational Linguistics.