Hikaru Tomonari
2024
JFLD: A Japanese Benchmark for Deductive Reasoning Based on Formal Logic
Terufumi Morishita | Atsuki Yamaguchi | Gaku Morio | Hikaru Tomonari | Osamu Imaichi | Yasuhiro Sogawa
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Large language models (LLMs) have proficiently solved a broad range of tasks with their rich knowledge but often struggle with logical reasoning. To foster research on logical reasoning, many benchmarks have been proposed so far. However, most of these benchmarks are limited to English, hindering the evaluation of LLMs specialized for each language. To address this, we propose **JFLD** (**J**apanese **F**ormal **L**ogic **D**eduction), a deductive reasoning benchmark for Japanese. JFLD assesses whether LLMs can generate logical steps to (dis-)prove a given hypothesis based on a given set of facts. Its key features are assessing pure logical reasoning abilities in isolation from knowledge and assessing various reasoning rules. We evaluate various Japanese LLMs and find that they are still poor at logical reasoning, highlighting a substantial need for future research.
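As a rough illustration of the task JFLD poses, the sketch below shows a hypothetical deduction instance: a set of facts, a hypothesis, and the logical steps a model is expected to generate. The field names and English contents are assumptions invented for illustration, not JFLD's actual (Japanese) schema.

```python
# Hypothetical deduction instance of the kind JFLD evaluates.
# Field names and English contents are illustrative assumptions,
# not the dataset's actual (Japanese) schema.
example = {
    "facts": [
        "fact1: If something is a mammal, then it is an animal.",
        "fact2: A whale is a mammal.",
    ],
    "hypothesis": "A whale is an animal.",
    # The model must generate the proof steps, not just the final label;
    # here a single modus ponens application suffices.
    "proof": ["fact2 & fact1 -> hypothesis"],
    "label": "PROVED",  # an unprovable hypothesis would be labeled otherwise
}

for step in example["proof"]:
    print(step)
```

Because the facts are supplied explicitly, a correct answer requires chaining them with inference rules rather than recalling the conclusion from memorized world knowledge, which is how reasoning can be assessed in isolation from knowledge.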
2022
Robustness Evaluation of Text Classification Models Using Mathematical Optimization and Its Application to Adversarial Training
Hikaru Tomonari | Masaaki Nishino | Akihiro Yamamoto
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022
Neural networks are known to be vulnerable to adversarial examples created from slightly perturbed input data. In practical applications of neural network models, the robustness of the models against perturbations must be evaluated. However, no existing method can strictly evaluate their robustness in natural language domains. We therefore propose a method that evaluates the robustness of text classification models using an integer linear programming (ILP) solver, formulating an optimization problem that identifies a minimum set of synonym swaps that changes the classification result. Our method allows us to compare the robustness of various models in realistic time, and it can also be used to obtain adversarial examples. Because of their minimal impact on the altered sentences, adversarial examples obtained with our method received high scores in human evaluations of grammatical correctness and semantic similarity on an IMDb dataset. In addition, we implemented adversarial training with the IMDb and SST2 datasets and found that our adversarial training method makes the models robust.
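To make the core optimization concrete, here is a minimal sketch of the minimum-synonym-swap problem as an ILP, solved with the PuLP library. It assumes a linear bag-of-words classifier so the decision boundary is directly expressible as one linear constraint; the toy weights, sentence, and synonym table are invented for illustration. The paper's method targets neural text classifiers, whose encoding into ILP constraints is substantially more involved.

```python
import pulp

# Minimal sketch (not the paper's full method): find the minimum number of
# synonym swaps that flips a *linear* bag-of-words classifier's decision.
# The toy weights, sentence, and synonym table are invented for illustration.
weights = {"great": 2.0, "okay": -0.5, "movie": 0.1, "film": 0.0}
sentence = ["great", "movie"]                    # original score 2.1 > 0: positive
synonyms = {"great": ["okay"], "movie": ["film"]}

prob = pulp.LpProblem("min_synonym_swaps", pulp.LpMinimize)

# x[i][w] = 1 iff position i ends up as word w (keeping the original is a choice).
x = {}
for i, orig in enumerate(sentence):
    choices = [orig] + synonyms.get(orig, [])
    x[i] = {w: pulp.LpVariable(f"x_{i}_{w}", cat="Binary") for w in choices}
    prob += pulp.lpSum(x[i].values()) == 1       # exactly one word per position

# Objective: number of positions whose word differs from the original.
prob += pulp.lpSum(1 - x[i][sentence[i]] for i in range(len(sentence)))

# Attack constraint: the perturbed sentence must score negative.
eps = 1e-3
prob += pulp.lpSum(weights[w] * var for i in x for w, var in x[i].items()) <= -eps

prob.solve(pulp.PULP_CBC_CMD(msg=False))
perturbed = [w for i in range(len(sentence))
             for w, var in x[i].items() if var.value() == 1]
print("minimum swaps:", int(pulp.value(prob.objective)), "->", perturbed)
```

The objective counts positions where the chosen word differs from the original, so the solver returns the smallest edit that crosses the decision boundary; an infeasible ILP would certify that no synonym-swap attack exists under the given substitution table, which is the sense in which such a formulation can strictly evaluate robustness.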
Co-authors
- Terufumi Morishita 1
- Atsuki Yamaguchi 1
- Gaku Morio 1
- Osamu Imaichi 1
- Yasuhiro Sogawa 1