Maoqin Yang




2021

Maoqin @ DravidianLangTech-EACL2021: The Application of Transformer-Based Model
Maoqin Yang
Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages

This paper describes the results of team Maoqin at DravidianLangTech-EACL2021. The shared task covers three languages (Tamil, Malayalam, and Kannada); I participated only in the Malayalam track. The goal of the task is to identify offensive language content in code-mixed datasets of comments/posts in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English) collected from social media. This is a classification task at the comment/post level: given a YouTube comment, systems have to classify it as Not-offensive, Offensive-untargeted, Offensive-targeted-individual, Offensive-targeted-group, Offensive-targeted-other, or Not-in-intended-language. I use a transformer-based language model with BiGRU-Attention to complete this task, and compare it against several other neural network models to validate the approach. The team ranks 5th in this task with a weighted average F1 score of 0.93 on the private leaderboard.
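
The abstract names a transformer encoder followed by a BiGRU with attention. The sketch below is one plausible PyTorch reading of that architecture, not the author's released code; the checkpoint name, hidden sizes, and additive-attention pooling are assumptions made for illustration.

```python
# Minimal sketch: transformer encoder + BiGRU + attention pooling for
# 6-way comment classification. Assumes PyTorch and Hugging Face Transformers;
# "xlm-roberta-base" and the layer sizes are illustrative choices.
import torch
import torch.nn as nn
from transformers import AutoModel

class TransformerBiGRUAttention(nn.Module):
    def __init__(self, model_name="xlm-roberta-base", gru_hidden=256, num_labels=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.bigru = nn.GRU(
            self.encoder.config.hidden_size, gru_hidden,
            batch_first=True, bidirectional=True,
        )
        # Additive attention over BiGRU states yields one pooled sentence vector.
        self.attn = nn.Linear(2 * gru_hidden, 1)
        self.classifier = nn.Linear(2 * gru_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        states, _ = self.bigru(hidden)                       # (batch, seq, 2*gru_hidden)
        scores = self.attn(states).squeeze(-1)               # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (weights * states).sum(dim=1)               # attention-weighted sum
        return self.classifier(pooled)                       # logits over the 6 classes
```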

Gulu at SemEval-2021 Task 7: Detecting and Rating Humor and Offense
Maoqin Yang
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)

Humor recognition is a challenging task in natural language processing. This document presents my approaches to detecting and rating humor and offense in the given text. The shared task comprises two tasks: Task 1, which contains three subtasks (1a, 1b, and 1c), and Task 2. Subtasks 1a and 1c can be regarded as classification problems and use ALBERT as the base model; Subtasks 1b and 2 can be viewed as regression problems and use RoBERTa as the base model.
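
As a hedged illustration of the split described above, the sketch below shows how ALBERT can serve as the base model for the classification subtasks and RoBERTa for the regression subtasks, assuming the Hugging Face Transformers library; the checkpoint names and label counts are illustrative, not necessarily those used in the paper.

```python
# Minimal sketch (illustrative checkpoints, not the paper's exact setup).
from transformers import (
    AlbertForSequenceClassification,
    RobertaForSequenceClassification,
)

# Subtasks 1a / 1c: classification (cross-entropy loss over discrete labels).
clf_model = AlbertForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=2
)

# Subtasks 1b / Task 2: rating prediction treated as regression
# (num_labels=1 with float labels makes Transformers use an MSE loss head).
reg_model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=1
)
```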