Adam Jardine


2025

InductionBench: LLMs Fail in the Simplest Complexity Class
Wenyue Hua | Tyler Wong | Fei Sun | Liangming Pan | Adam Jardine | William Yang Wang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Large language models (LLMs) have shown remarkable improvements in reasoning, and many existing benchmarks have been addressed, either fully or partially, by models such as o1 and o3. However, a majority of these benchmarks emphasize deductive reasoning, including mathematical and coding tasks in which rules such as mathematical axioms or programming syntax are clearly defined, so that LLMs can plan and apply these rules to arrive at a solution. In contrast, inductive reasoning, where one infers the underlying rules from observed data, remains less explored. Such inductive processes lie at the heart of scientific discovery, as they enable researchers to extract general principles from empirical observations. To assess whether LLMs possess this capacity, we introduce InductionBench, a new benchmark designed to evaluate the inductive reasoning ability of LLMs. Our experimental findings reveal that even the most advanced models available struggle to master the simplest complexity classes within the subregular hierarchy of functions, highlighting a notable deficiency in current LLMs’ inductive reasoning capabilities. Code and data are available at https://anonymous.4open.science/r/inductive_reasoning_benchmark-BB2D.
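As a hypothetical illustration of the kind of inductive task described here (not the benchmark's actual format or data), consider inferring a simple string-to-string function from observed input-output pairs; the hidden rule below is an invented strictly local rewrite, and a solver must recover it well enough to predict outputs for unseen inputs.

    def target_function(s: str) -> str:
        # Hidden rule the solver must induce from examples (invented for
        # illustration): rewrite "a" as "b" whenever it immediately follows "b".
        out = []
        for i, ch in enumerate(s):
            out.append("b" if ch == "a" and i > 0 and s[i - 1] == "b" else ch)
        return "".join(out)

    # Observed data shown to the solver; the rule itself stays hidden.
    observations = {s: target_function(s) for s in ["aba", "baa", "abab", "aab"]}
    print(observations)  # {'aba': 'abb', 'baa': 'bba', 'abab': 'abbb', 'aab': 'aab'}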

2020

Quantifier-free tree transductions
Shiori Ikawa | Akane Ohtaka | Adam Jardine
Proceedings of the Society for Computation in Linguistics 2020

2019

Autosegmental Input Strictly Local Functions
Jane Chandlee | Adam Jardine
Transactions of the Association for Computational Linguistics, Volume 7

Autosegmental representations (ARs; Goldsmith, 1976) are claimed to enable local analyses of otherwise non-local phenomena (Odden, 1994). Focusing on the domain of tone, we investigate this ability of ARs using a computationally well-defined notion of locality extended from Chandlee (2014). The result is a more nuanced understanding of the way in which ARs interact with phonological locality.

Q-Theory Representations are Logically Equivalent to Autosegmental Representations
Nick Danis | Adam Jardine
Proceedings of the Society for Computation in Linguistics (SCiL) 2019

Quantifier-free least fixed point functions for phonology
Jane Chandlee | Adam Jardine
Proceedings of the 16th Meeting on the Mathematics of Language

Learning with Partially Ordered Representations
Jane Chandlee | Remi Eyraud | Jeffrey Heinz | Adam Jardine | Jonathan Rawski
Proceedings of the 16th Meeting on the Mathematics of Language

2017

On the Logical Complexity of Autosegmental Representations
Adam Jardine
Proceedings of the 15th Meeting on the Mathematics of Language

2016

Learning Tier-based Strictly 2-Local Languages
Adam Jardine | Jeffrey Heinz
Transactions of the Association for Computational Linguistics, Volume 4

The Tier-based Strictly 2-Local (TSL2) languages are a class of formal languages which have been shown to model long-distance phonotactic generalizations in natural language (Heinz et al., 2011). This paper introduces the Tier-based Strictly 2-Local Inference Algorithm (2TSLIA), the first nonenumerative learner for the TSL2 languages. We prove the 2TSLIA is guaranteed to converge in polynomial time on a data sample whose size is bounded by a constant.
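To illustrate the class the learner targets, the following is a minimal sketch (not from the paper) of membership checking for a tier-based strictly 2-local grammar: a word is accepted iff its projection onto a designated tier contains no forbidden adjacent pair. The tier, segments, and banned bigrams below are hypothetical toy values meant to mimic long-distance sibilant harmony.

    BOUNDARY = "#"

    def tsl2_accepts(word, tier, forbidden_bigrams):
        """Return True iff the tier projection of `word` contains no banned bigram."""
        # Project the word onto the tier: erase every symbol not on the tier,
        # keeping word boundaries so edge restrictions can also be stated.
        projection = [BOUNDARY] + [seg for seg in word if seg in tier] + [BOUNDARY]
        # A TSL2 grammar forbids certain adjacent pairs on that projection.
        return all(
            (a, b) not in forbidden_bigrams
            for a, b in zip(projection, projection[1:])
        )

    # Toy grammar: sibilants {s, S} form the tier; disagreeing pairs are banned.
    tier = {"s", "S"}
    forbidden = {("s", "S"), ("S", "s")}
    print(tsl2_accepts("sokasu", tier, forbidden))   # True: only "s" appears on the tier
    print(tsl2_accepts("sokaSu", tier, forbidden))   # False: "s" and "S" are adjacent on the tier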

2015

A Concatenation Operation to Derive Autosegmental Graphs
Adam Jardine | Jeffrey Heinz
Proceedings of the 14th Meeting on the Mathematics of Language (MoL 2015)