@inproceedings{xie-etal-2021-iie,
    title = "{IIE}-{NLP}-Eyas at {S}em{E}val-2021 Task 4: Enhancing {PLM} for {R}e{CAM} with Special Tokens, Re-Ranking, {S}iamese Encoders and Back Translation",
    author = "Xie, Yuqiang  and
      Xing, Luxi  and
      Peng, Wei  and
      Hu, Yue",
    editor = "Palmer, Alexis  and
      Schneider, Nathan  and
      Schluter, Natalie  and
      Emerson, Guy  and
      Herbelot, Aurelie  and
      Zhu, Xiaodan",
    booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2021.semeval-1.22/",
    doi = "10.18653/v1/2021.semeval-1.22",
    pages = "199--204",
    abstract = "This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. To help our model better represent and understand abstract concepts in natural language, we design several simple and effective approaches adapted to the backbone model (RoBERTa). Specifically, we formalize the subtasks into the multiple-choice question answering format and add special tokens to abstract concepts; the final QA prediction is then taken as the subtask result. Additionally, we employ several fine-tuning tricks to improve performance. Experimental results show that our approach significantly outperforms the baseline systems. Our system ranks eighth (87.51{\%}) on subtask 1 and tenth (89.64{\%}) on subtask 2 on the official blind test sets."
}