2022
VarMAE: Pre-training of Variational Masked Autoencoder for Domain-adaptive Language Understanding
Dou Hu | Xiaolong Hou | Xiyang Du | Mengyuan Zhou | Lianxin Jiang | Yang Mo | Xiaofeng Shi
Findings of the Association for Computational Linguistics: EMNLP 2022
Pre-trained language models have been widely applied to standard benchmarks. However, due to the flexibility of natural language, the resources available in a specific domain may be too limited to support learning precise representations. To address this issue, we propose a novel Transformer-based language model named VarMAE for domain-adaptive language understanding. Under the masked autoencoding objective, we design a context uncertainty learning module that encodes each token’s context into a smooth latent distribution. The module can produce diverse and well-formed contextual representations. Experiments on science- and finance-domain NLU tasks demonstrate that VarMAE can be efficiently adapted to new domains with limited resources.
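The core idea, mapping each token's contextual vector to a Gaussian latent distribution and regularizing it with a KL term, can be sketched as a standard variational (reparameterized) layer. This is a minimal sketch under that reading of the abstract, not the authors' implementation; the module name, layer shapes, and the beta weighting are assumptions.

```python
# Hypothetical sketch of a context uncertainty learning layer: each
# token's contextual vector is mapped to a Gaussian latent, sampled
# with the reparameterization trick, and pulled toward a standard
# normal by a KL penalty added to the masked-LM loss.
import torch
import torch.nn as nn

class ContextUncertaintyLayer(nn.Module):
    def __init__(self, hidden_size: int, latent_size: int):
        super().__init__()
        self.to_mu = nn.Linear(hidden_size, latent_size)      # latent mean
        self.to_logvar = nn.Linear(hidden_size, latent_size)  # latent log-variance
        self.to_hidden = nn.Linear(latent_size, hidden_size)  # project back

    def forward(self, h: torch.Tensor):
        # h: (batch, seq_len, hidden_size) contextual representations
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        # KL(q(z|x) || N(0, I)), summed over latent dims, averaged over tokens
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return self.to_hidden(z), kl

# Training would combine this with the MLM objective, e.g.
# loss = mlm_loss + beta * kl  (beta is a hypothetical weight)
```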
PINGAN_AI at SemEval-2022 Task 9: Recipe knowledge enhanced model applied in Competence-based Multimodal Question Answering
Zhihao Ruan | Xiaolong Hou | Lianxin Jiang
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This paper describes our system for SemEval-2022 Task 9: R2VQ - Competence-based Multimodal Question Answering. We propose a knowledge-enhanced model for answer prediction that uses BERT as its backbone and adopts two knowledge-enhancement methods: a knowledge auxiliary text method and a knowledge embedding method. We also design an answer extraction pipeline, which contains an extraction-based model, an automatic keyword labeling module, and an answer generation module. Our system ranked 3rd in Task 9 and achieved an exact-match score of 78.21 and a word-level F1 score of 82.62.
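As a rough illustration of the knowledge auxiliary text idea, retrieved knowledge can be appended to the question-context pair before encoding with BERT. This is a hypothetical sketch, not the authors' code; the helper function, the knowledge string, and the concatenation scheme are all assumptions.

```python
# Hypothetical illustration: auxiliary knowledge text is concatenated
# to the recipe context before encoding with BERT, so the model can
# attend to it alongside the recipe itself.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def build_input(question: str, recipe_text: str, knowledge: str):
    # Append the retrieved knowledge after a separator; truncate the
    # (context + knowledge) side if the pair exceeds the length limit.
    context = recipe_text + " [SEP] " + knowledge
    return tokenizer(question, context, truncation="only_second",
                     max_length=512, return_tensors="pt")

enc = build_input("How is the butter prepared?",
                  "Melt the butter in a small saucepan...",
                  "melt: make or become liquefied by heating")
```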
2021
RG PA at SemEval-2021 Task 1: A Contextual Attention-based Model with RoBERTa for Lexical Complexity Prediction
Gang Rao | Maochang Li | Xiaolong Hou | Lianxin Jiang | Yang Mo | Jianping Shen
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
In this paper we propose a contextual attention-based model with two-stage fine-tuning using RoBERTa. First, we perform first-stage fine-tuning on the corpus with RoBERTa so that the model can acquire prior domain knowledge. We then obtain contextual embeddings of the context words from the token-level embeddings of the fine-tuned model. Finally, we use K-fold cross-validation to obtain K models and ensemble them to produce the final result. We attain 2nd place in the final evaluation phase of sub-task 2 with a Pearson correlation of 0.8575.
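The K-fold ensembling step can be sketched as follows. This is a minimal sketch, assuming a regression head on the fine-tuned RoBERTa; `train_fn` and `predict_fn` are hypothetical callables supplied by the user, and K=5 is an assumption.

```python
# Sketch of K-fold ensembling: train one model per fold and average
# the K models' predictions on the test set. `train_fn` and
# `predict_fn` are hypothetical hooks wrapping RoBERTa fine-tuning
# and inference; K=5 is an assumed fold count.
import numpy as np
from sklearn.model_selection import KFold

def kfold_ensemble(texts, labels, test_texts, train_fn, predict_fn, k=5):
    fold_preds = []
    kf = KFold(n_splits=k, shuffle=True, random_state=42)
    for train_idx, _ in kf.split(texts):
        model = train_fn([texts[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        fold_preds.append(predict_fn(model, test_texts))
    # Average the per-fold complexity scores for the final result.
    return np.mean(fold_preds, axis=0)
```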
FPAI at SemEval-2021 Task 6: BERT-MRC for Propaganda Techniques Detection
Xiaolong Hou | Junsong Ren | Gang Rao | Lianxin Jiang | Zhihao Ruan | Yang Mo | Jianping Shen
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
The objective of subtask 2 of SemEval-2021 Task 6 is to identify the techniques used together with the span(s) of text covered by each technique. This paper describes the system and model we developed for the task. We first propose a pipeline system that identifies spans and then classifies the technique used in each span of the input sequence, but it suffers severely when handling overlapping and nested spans. We then formulate the task as question answering within a machine reading comprehension (MRC) framework, which achieves better results than the pipeline method. Moreover, we explore data augmentation and loss-design techniques to alleviate data sparsity and imbalance. Finally, we attain 3rd place in the final evaluation phase.
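To illustrate the MRC formulation, each technique can be posed as its own natural-language question against the text, so spans for different techniques may overlap or nest freely. The sketch below swaps in a generic extractive-QA pipeline in place of the paper's BERT-MRC model; the question template, technique subset, and score threshold are assumptions.

```python
# Sketch using a generic extractive-QA pipeline (not the paper's
# BERT-MRC model): each technique becomes its own question, so spans
# for different techniques can overlap or nest without conflict.
from transformers import pipeline

TECHNIQUES = ["Loaded Language", "Name Calling", "Repetition"]  # subset

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def detect_techniques(text: str, threshold: float = 0.5):
    found = []
    for tech in TECHNIQUES:
        question = f"Which part of the text uses {tech}?"  # assumed template
        ans = qa(question=question, context=text)
        if ans["score"] >= threshold:
            found.append((tech, ans["start"], ans["end"], ans["answer"]))
    return found
```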
2020
FPAI at SemEval-2020 Task 10: A Query Enhanced Model with RoBERTa for Emphasis Selection
Chenyang Guo | Xiaolong Hou | Junsong Ren | Lianxin Jiang | Yang Mo | Haiqin Yang | Jianping Shen
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This paper describes the model we apply in SemEval-2020 Task 10. We formalize the task of emphasis selection as a simplified query-based machine reading comprehension (MRC) task, i.e., answering the fixed question “Find candidates for emphasis”. We propose a subword puzzle encoding mechanism and a subword fusion layer to align and fuse subwords. By introducing the semantic prior knowledge of the informative query along with several other techniques, we attain 7th place during the evaluation phase and 1st place during the training phase.
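A minimal sketch of the query-based framing: the fixed query is paired with the sentence, and a token-level head scores each position for emphasis. The encoder choice and the linear scoring head are assumptions, and the paper's subword puzzle encoding and fusion layer are not modeled here.

```python
# Sketch of the query-based framing: pair the fixed query with the
# sentence and score every token for emphasis with a linear head.
# The untrained head here is for illustration only.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")
scorer = nn.Linear(encoder.config.hidden_size, 1)  # per-token emphasis score

def emphasis_scores(sentence: str) -> torch.Tensor:
    enc = tokenizer("Find candidates for emphasis", sentence,
                    return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state   # (1, seq_len, hidden)
    return torch.sigmoid(scorer(hidden)).squeeze(-1)  # (1, seq_len)
```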