Shuting Bai
2022
Developing and Evaluating a Dataset for How-to Tip Machine Reading at Scale
Fuzhu Zhu | Shuting Bai | Tingxuan Li | Takehito Utsuro
Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation
2021
Evaluating a How-to Tip Machine Comprehension Model with QA Examples collected from a Community QA Site
Tingxuan Li | Shuting Bai | Fuzhu Zhu | Takehito Utsuro
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation
2020
MRC Examples Answerable by BERT without a Question Are Less Effective in MRC Model Training
Hongyu Li | Tengyang Chen | Shuting Bai | Takehito Utsuro | Yasuhide Kawada
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop
Models developed for Machine Reading Comprehension (MRC) are asked to predict an answer given a question and its related context. However, some examples can be correctly answered by a BERT-based MRC model even when only the context is provided, without the question. In this paper, such examples are referred to as “easy to answer”, while the others are referred to as “hard to answer”, i.e., unanswerable by a BERT-based MRC model when the question is withheld. Based on this classification, we propose a BERT-based method that splits the training examples of the MRC dataset SQuAD1.1 into “easy to answer” and “hard to answer” examples. An experimental comparison of two models, one trained only on “easy to answer” examples and the other only on “hard to answer” examples, demonstrates that the latter outperforms the former.
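A minimal sketch of the splitting procedure the abstract describes, not the authors' code: run a SQuAD-trained BERT-style question-answering model with the real question withheld (replaced by a placeholder), and label an example “easy to answer” when the model still recovers a gold answer. The model name, the placeholder string, and the exact-match criterion are illustrative assumptions; it uses the Hugging Face transformers and datasets libraries.

```python
# Assumption: pip install transformers datasets
from transformers import pipeline
from datasets import load_dataset

# Illustrative choice of a SQuAD-trained model; the paper's model may differ.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def split_examples(examples, placeholder="?"):
    """Partition SQuAD-style examples into 'easy' and 'hard' sets."""
    easy, hard = [], []
    for ex in examples:
        # Withhold the real question: the model sees only the context.
        pred = qa(question=placeholder, context=ex["context"])
        gold_answers = [a.strip() for a in ex["answers"]["text"]]
        # Exact string match against any gold answer (an assumption;
        # the paper may use a different matching criterion).
        if pred["answer"].strip() in gold_answers:
            easy.append(ex)   # answerable without the question
        else:
            hard.append(ex)   # needs the question to be answered
    return easy, hard

# Small demo slice of SQuAD1.1 training data.
train = load_dataset("squad")["train"].select(range(100))
easy, hard = split_examples(train)
print(f"easy to answer: {len(easy)}, hard to answer: {len(hard)}")
```

Under this sketch, the two resulting subsets would each be used to fine-tune a separate MRC model, matching the comparison the abstract reports.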