Abstract
Copy mechanisms explicitly carry unchanged tokens from the source (input) sequence into the target (output) sequence under the neural seq2seq framework. However, most existing copy mechanisms consider only single-word copying from the source sentence, which causes essential tokens to be lost when copying long spans. In this work, we propose a plug-and-play architecture, BioCopy, to alleviate this problem. Specifically, in the training stage, we construct a BIO tag for each token and train the original model jointly with the BIO tags. In the inference stage, the model first predicts the BIO tag at each time step, then applies a different masking strategy based on the predicted BIO label to narrow the probability distribution over the vocabulary. Experimental results on two separate generative tasks show that adding BioCopy to the original model structure outperforms the baseline models on both.
- Anthology ID: 2021.sustainlp-1.6
- Volume: Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing
- Month: November
- Year: 2021
- Address: Virtual
- Editors: Nafise Sadat Moosavi, Iryna Gurevych, Angela Fan, Thomas Wolf, Yufang Hou, Ana Marasović, Sujith Ravi
- Venue: sustainlp
- Publisher: Association for Computational Linguistics
- Pages: 53–57
- URL: https://aclanthology.org/2021.sustainlp-1.6
- DOI: 10.18653/v1/2021.sustainlp-1.6
- Cite (ACL): Yi Liu, Guoan Zhang, Puning Yu, Jianlin Su, and Shengfeng Pan. 2021. BioCopy: A Plug-And-Play Span Copy Mechanism in Seq2Seq Models. In Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing, pages 53–57, Virtual. Association for Computational Linguistics.
- Cite (Informal): BioCopy: A Plug-And-Play Span Copy Mechanism in Seq2Seq Models (Liu et al., sustainlp 2021)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/2021.sustainlp-1.6.pdf
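The inference-time masking the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `biocopy_mask`, the tag encoding, and the fallback to the full vocabulary for an invalid `I` continuation are all assumptions made here for clarity.

```python
import math

def biocopy_mask(logits, tag, source_ids, prev_pos=None):
    """Mask vocabulary logits according to the predicted BIO tag (sketch).

    - 'O': generate freely; logits are returned unchanged.
    - 'B': begin a copied span; only token ids occurring in the source
      keep their logits, everything else is masked to -inf.
    - 'I': continue the span; only the source token right after prev_pos
      (the source position copied at the previous step) stays unmasked.
    """
    if tag == "O":
        return list(logits)
    masked = [-math.inf] * len(logits)
    if tag == "B":
        for tok in set(source_ids):
            masked[tok] = logits[tok]
    elif tag == "I":
        if prev_pos is None or prev_pos + 1 >= len(source_ids):
            # no valid continuation left in the source: fall back to full vocab
            return list(logits)
        tok = source_ids[prev_pos + 1]
        masked[tok] = logits[tok]
    return masked

# Toy vocabulary of 5 token ids; the source sentence is tokens [3, 1].
logits = [0.1, 0.5, 0.2, 0.9, 0.3]
b = biocopy_mask(logits, "B", [3, 1])              # argmax restricted to {1, 3}
i = biocopy_mask(logits, "I", [3, 1], prev_pos=0)  # only token 1 survives
```

Narrowing the distribution this way lets the decoder copy a multi-token span verbatim: once a `B` tag selects a source position, each subsequent `I` tag forces the next source token, which is exactly how single-word copy mechanisms lose tokens that this scheme preserves.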