Michael Greenspan
2023
JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions
Mo Yu | Yi Gu | Xiaoxiao Guo | Yufei Feng | Xiaodan Zhu | Michael Greenspan | Murray Campbell | Chuang Gan
Findings of the Association for Computational Linguistics: ACL 2023
Commonsense reasoning simulates the human ability to make presumptions about our physical world, and it is an essential cornerstone in building general AI systems. We propose a new commonsense reasoning dataset based on human Interactive Fiction (IF) gameplay walkthroughs, as human players demonstrate plentiful and diverse commonsense reasoning. The new dataset provides a natural mixture of various reasoning types and requires multi-hop reasoning. Moreover, the IF game-based construction procedure requires much less human intervention than previous ones. Different from existing benchmarks, our dataset focuses on the assessment of functional commonsense knowledge rules rather than factual knowledge. Hence, in order to achieve higher performance on our tasks, models need to effectively utilize such functional knowledge to infer the outcomes of actions, rather than relying solely on memorizing facts. Experiments show that the introduced dataset is challenging for previous machine reading models as well as new large language models, with a significant 20% performance gap compared to human experts.
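As an illustration of the kind of task this describes, the sketch below frames outcome prediction as multiple choice: given the current game observation and the walkthrough action, score candidate next observations and pick the most plausible one. The data fields and the `score` interface are hypothetical and not taken from the paper.

```python
# Minimal sketch of an IF-walkthrough outcome-prediction task framed as
# multiple choice. All names here are illustrative, not the paper's schema.
from dataclasses import dataclass

@dataclass
class Example:
    observation: str       # current game text
    action: str            # player's walkthrough action, e.g. "unlock door with key"
    candidates: list[str]  # possible next observations (one correct, rest distractors)
    label: int             # index of the correct outcome

def predict(example: Example, score) -> int:
    """Return the index of the candidate a plausibility model prefers.

    `score(context, candidate)` can be any scorer, e.g. an NLI or
    reading-comprehension model (hypothetical interface).
    """
    context = f"{example.observation}\nAction: {example.action}"
    scores = [score(context, cand) for cand in example.candidates]
    return max(range(len(scores)), key=scores.__getitem__)
```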
2022
Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference
Yufei Feng | Xiaoyu Yang | Xiaodan Zhu | Michael Greenspan
Transactions of the Association for Computational Linguistics, Volume 10
We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision. The model samples and rewards specific reasoning paths through policy gradient, in which the introspective revision algorithm modifies intermediate symbolic reasoning steps to discover reward-earning operations and leverages external knowledge to alleviate spurious reasoning and training inefficiency. The framework is supported by properly designed local relation models to avoid input entanglement, which helps ensure the interpretability of the proof paths. The proposed model has built-in interpretability and shows superior capability in monotonicity inference, systematic generalization, and interpretability compared with previous models on existing datasets.
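For intuition about the training signal described above, here is a generic REINFORCE-style sketch of rewarding a sampled path of discrete symbolic operations. It is not the paper's introspective-revision algorithm, and the `policy` interface (`op_scores`, `apply_op`, `readout`) is hypothetical.

```python
# Generic policy-gradient sketch: sample a sequence of symbolic operations,
# reward the path if its final relation matches the gold label, and return
# the REINFORCE loss. The policy interface below is hypothetical.
import torch

def reinforce_step(policy, state, gold_label, num_steps=5):
    log_probs = []
    for _ in range(num_steps):
        logits = policy.op_scores(state)                 # scores over candidate operations
        dist = torch.distributions.Categorical(logits=logits)
        op = dist.sample()                               # sample one symbolic operation
        log_probs.append(dist.log_prob(op))
        state = policy.apply_op(state, op)               # execute the chosen operation
    reward = 1.0 if policy.readout(state) == gold_label else 0.0
    # Maximize expected reward <=> minimize -reward * sum of log-probabilities.
    return -reward * torch.stack(log_probs).sum()
```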
2020
Exploring End-to-End Differentiable Natural Logic Modeling
Yufei Feng | Zi’ou Zheng | Quan Liu | Michael Greenspan | Xiaodan Zhu
Proceedings of the 28th International Conference on Computational Linguistics
We explore end-to-end trained differentiable models that integrate natural logic with neural networks, aiming to keep the backbone of natural language reasoning based on the natural logic formalism while introducing subsymbolic vector representations and neural components. The proposed model adapts module networks to model natural logic operations, enhanced with a memory component to model contextual information. Experiments show that the proposed framework can effectively model monotonicity-based reasoning compared to baseline neural network models without built-in inductive bias for monotonicity-based reasoning. The proposed model also proves robust when transferred from upward to downward inference. We further analyze the model's performance on aggregation, showing the effectiveness of the proposed subcomponents in achieving better intermediate aggregation performance.
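To illustrate the monotonicity-based reasoning these models target, the sketch below shows standard natural-logic projection in the MacCartney style, simplified to swapping forward and reverse entailment under downward-monotone contexts. It is an illustrative simplification, not the paper's implementation.

```python
# Illustrative sketch of monotonicity-based projection in natural logic
# (a simplified subset of the standard seven semantic relations).
EQUIV, FWD, REV, INDEP = "=", "<", ">", "#"

def project(relation: str, monotone: str) -> str:
    """Project a lexical relation through a context of given monotonicity.

    In an upward-monotone context the relation is preserved; in a
    downward-monotone context forward and reverse entailment swap
    (e.g. dog < animal, yet "no animal barks" < "no dog barks").
    """
    if monotone == "up":
        return relation
    if monotone == "down":
        return {FWD: REV, REV: FWD}.get(relation, relation)
    return INDEP  # non-monotone context: fall back to independence

# Example: substituting "animal" with "dog" under the downward-monotone "no".
assert project(FWD, "down") == REV
```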