Jipeng Li
2020
Ferryman at SemEval-2020 Task 7: Ensemble Model for Assessing Humor in Edited News Headlines
Weilong Chen | Jipeng Li | Chenghao Huang | Wei Bai | Yanru Zhang | Yan Wang
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Natural language processing (NLP) has been applied to various fields including text classification and sentiment analysis. In the shared task of assessing the funniness of edited news headlines, which is part of the SemEval 2020 competition, we preprocess the dataset by replacing abbreviations and stemming words, then merge three models, Light Gradient Boosting Machine (LightGBM), Long Short-Term Memory (LSTM), and Bidirectional Encoder Representations from Transformers (BERT), by averaging their predictions to achieve the best performance. Our team Ferryman won 9th place in Sub-task 1 of Task 7 (regression).
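The abstract's ensembling step is an unweighted average of the three models' outputs. A minimal sketch of that step, using hypothetical prediction arrays in place of the actual LightGBM, LSTM, and BERT outputs (the values below are illustrative, not the paper's results):

```python
import numpy as np

# Hypothetical per-model funniness predictions for three edited headlines;
# these stand in for the real LightGBM, LSTM, and BERT regression outputs.
preds_lgbm = np.array([1.2, 0.4, 2.1])
preds_lstm = np.array([1.0, 0.6, 1.9])
preds_bert = np.array([1.4, 0.5, 2.3])

# Ensemble by taking the unweighted mean across the three models.
ensemble = np.mean([preds_lgbm, preds_lstm, preds_bert], axis=0)
print(ensemble)  # [1.2 0.5 2.1]
```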
Ferryman at SemEval-2020 Task 12: BERT-Based Model with Advanced Improvement Methods for Multilingual Offensive Language Identification
Weilong Chen | Peng Wang | Jipeng Li | Yuanshuai Zheng | Yan Wang | Yanru Zhang
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Indiscriminately posting offensive remarks on social media may promote the occurrence of negative events such as violence, crime, and hatred. This paper examines different approaches and models for offensive tweet classification, which is part of the OffensEval 2020 competition. The dataset is the Offensive Language Identification Dataset (OLID), which contains 14,200 annotated English tweets. The main challenges of data preprocessing are the unbalanced class distribution, abbreviations, and emojis. To overcome these issues, methods such as hashtag segmentation, abbreviation replacement, and emoji replacement were adopted for data preprocessing. The main task is divided into three sub-tasks, which are solved by Term Frequency-Inverse Document Frequency (TF-IDF), Bidirectional Encoder Representations from Transformers (BERT), and multi-dropout, respectively. Meanwhile, we applied different learning rates for different languages and tasks based on BERT and non-BERT models in order to obtain better results. Our team Ferryman ranked 18th, 8th, and 21st, with an F1-score of 0.91152, on the English Sub-task A, Sub-task B, and Sub-task C, respectively. Furthermore, our team also ranked in the top 20 on Sub-task A for the other languages.
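A minimal sketch of the kind of tweet preprocessing the abstract names (hashtag segmentation, abbreviation replacement, emoji replacement). The lookup tables and the CamelCase-splitting heuristic are illustrative assumptions, not the paper's actual resources:

```python
import re

# Illustrative lookup tables; the paper's actual abbreviation and emoji
# dictionaries are not published here, so these entries are assumptions.
ABBREVIATIONS = {"u": "you", "idk": "i do not know", "lol": "laughing out loud"}
EMOJIS = {"🙂": "smile", "😡": "angry"}

def segment_hashtag(match):
    # Naive hashtag segmentation: split CamelCase tags like #SoAnnoying
    # by inserting a space before each interior uppercase letter.
    tag = match.group(1)
    return re.sub(r"(?<!^)(?=[A-Z])", " ", tag).lower()

def preprocess(tweet):
    tweet = re.sub(r"#(\w+)", segment_hashtag, tweet)          # hashtag segmentation
    tweet = " ".join(ABBREVIATIONS.get(t, t) for t in tweet.split())  # abbreviation replacement
    for emoji, name in EMOJIS.items():                         # emoji replacement
        tweet = tweet.replace(emoji, f" {name} ")
    return " ".join(tweet.split())

print(preprocess("idk why u did that #SoAnnoying 😡"))
# -> "i do not know why you did that so annoying angry"
```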
Co-authors
- Weilong Chen 2
- Yanru Zhang 2
- Yan Wang 2
- Chenghao Huang 1
- Wei Bai 1
- Peng Wang 1
- Yuanshuai Zheng 1