@inproceedings{teranishi-matsumoto-2022-coordination,
    title = "Coordination Generation via Synchronized Text-Infilling",
    author = "Teranishi, Hiroki  and
      Matsumoto, Yuji",
    editor = "Calzolari, Nicoletta  and
      Huang, Chu-Ren  and
      Kim, Hansaem  and
      Pustejovsky, James  and
      Wanner, Leo  and
      Choi, Key-Sun  and
      Ryu, Pum-Mo  and
      Chen, Hsin-Hsi  and
      Donatelli, Lucia  and
      Ji, Heng  and
      Kurohashi, Sadao  and
      Paggio, Patrizia  and
      Xue, Nianwen  and
      Kim, Seokhwan  and
      Hahm, Younggyun  and
      He, Zhong  and
      Lee, Tony Kyungil  and
      Santus, Enrico  and
      Bond, Francis  and
      Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.coling-1.517/",
    pages = "5914--5924",
    abstract = "Generating synthetic data for supervised learning from large-scale pre-trained language models has enhanced performances across several NLP tasks, especially in low-resource scenarios. In particular, many studies of data augmentation employ masked language models to replace words with other words in a sentence. However, most of them are evaluated on sentence classification tasks and cannot immediately be applied to tasks related to the sentence structure. In this paper, we propose a simple yet effective approach to generating sentences with a coordinate structure in which the boundaries of its conjuncts are explicitly specified. For a given span in a sentence, our method embeds a mask with a coordinating conjunction in two ways (``X and [mask]'', ``[mask] and X'') and forces masked language models to fill the two blanks with an identical text. To achieve this, we introduce decoding methods for BERT and T5 models with the constraint that predictions for different masks are synchronized. Furthermore, we develop a training framework that effectively selects synthetic examples for the supervised coordination disambiguation task. We demonstrate that our method produces promising coordination instances that provide gains for the task in low-resource settings."
}