2023
The Mechanical Bard: An Interpretable Machine Learning Approach to Shakespearean Sonnet Generation
Edwin Agnew | Michelle Qiu | Lily Zhu | Sam Wiseman | Cynthia Rudin
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
We consider the automated generation of sonnets, a poetic form constrained according to meter, rhyme scheme, and length. Sonnets generally also use rhetorical figures, expressive language, and a consistent theme or narrative. Our constrained decoding approach allows for the generation of sonnets within preset poetic constraints, while using a relatively modest neural backbone. Human evaluation confirms that our approach produces Shakespearean sonnets that resemble human-authored sonnets, adhere to the genre’s defined constraints, and contain lyrical language and literary devices.
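To make the constrained-decoding idea concrete, the sketch below filters next-word candidates so that a line stays a prefix of the iambic pentameter stress template. It is a minimal illustration with a toy stress lexicon, not the paper's implementation; in the actual system a neural language model scores candidates, and rhyme-scheme and length constraints apply as well.

```python
# Minimal sketch of meter-constrained decoding (illustrative only, not
# the paper's code). A candidate word survives only if its stress
# pattern keeps the line a prefix of the iambic pentameter template.

# Toy stress lexicon: 0 = unstressed syllable, 1 = stressed.
STRESS = {"compare": "01", "summer": "10", "day": "1", "to": "0"}

IAMBIC_PENTAMETER = "0101010101"  # ten syllables, alternating stress

def fits_meter(prefix_stress: str, word: str) -> bool:
    """True if appending `word` keeps the line a prefix of the template."""
    pattern = prefix_stress + STRESS.get(word, "")
    return IAMBIC_PENTAMETER.startswith(pattern)

def filter_candidates(prefix_stress: str, candidates: list[str]) -> list[str]:
    """Keep only next-word candidates that preserve the meter constraint."""
    return [w for w in candidates if fits_meter(prefix_stress, w)]

# Two syllables into a line ("01" so far): "compare" (01) fits the next
# two slots; "summer" (10) and "day" (stressed on an unstressed slot) do not.
print(filter_candidates("01", ["compare", "summer", "day"]))  # ['compare']
```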
2021
There Once Was a Really Bad Poet, It Was Automated but You Didn’t Know It
Jianyou Wang | Xiaoxuan Zhang | Yuren Zhou | Christopher Suh | Cynthia Rudin
Transactions of the Association for Computational Linguistics, Volume 9
Limerick generation exemplifies some of the most difficult challenges faced in poetry generation, as the poems must tell a story in only five lines, with constraints on rhyme, stress, and meter. To address these challenges, we introduce LimGen, a novel and fully automated system for limerick generation that outperforms state-of-the-art neural network-based poetry models, as well as prior rule-based poetry models. LimGen consists of three important pieces: the Adaptive Multi-Templated Constraint algorithm that constrains our search to the space of realistic poems, the Multi-Templated Beam Search algorithm which searches efficiently through the space, and the probabilistic Storyline algorithm that provides coherent storylines related to a user-provided prompt word. The resulting limericks satisfy poetic constraints and have thematically coherent storylines, which are sometimes even funny (when we are lucky).
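The sketch below gives the flavor of template-constrained beam search: each expansion step considers only words that match the next slot of a part-of-speech template, and a beam of the highest-scoring partial lines is kept. The template, lexicon, and length-based scorer are all toy stand-ins; LimGen's actual algorithms (adaptive multi-templating, language-model scoring, storylines) are considerably richer.

```python
# Highly simplified sketch of template-constrained beam search in the
# spirit of LimGen (the template, lexicon, and scorer here are
# hypothetical, not the paper's algorithms).

import heapq

TEMPLATE = ["DET", "ADJ", "NOUN", "VERB"]  # one toy line template
LEXICON = {
    "DET": ["the", "one"],
    "ADJ": ["old", "strange"],
    "NOUN": ["poet", "cat"],
    "VERB": ["sang", "slept"],
}

def score(words: list[str]) -> float:
    # Placeholder scorer; LimGen uses a language model and poetic
    # constraints here. Shorter lines score higher in this toy version.
    return -len(" ".join(words))

def beam_search(beam_width: int = 2) -> list[str]:
    beam = [([], 0.0)]  # (partial line, score)
    for slot in TEMPLATE:
        candidates = [
            (words + [w], score(words + [w]))
            for words, _ in beam
            for w in LEXICON[slot]  # the template slot constrains expansion
        ]
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    return [" ".join(words) for words, _ in beam]

print(beam_search())  # ['the old cat sang', 'one old cat sang']
```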
Multitask Learning for Citation Purpose Classification
Yasa M. Baig | Alex X. Oesterling | Rui Xin | Haoyang Yu | Angikar Ghosal | Lesia Semenova | Cynthia Rudin
Proceedings of the Second Workshop on Scholarly Document Processing
We present our entry into the 2021 3C Shared Task Citation Context Classification based on Purpose competition. The goal of the competition is to classify a citation in a scientific article based on its purpose. This task is important because it could potentially lead to more comprehensive ways of summarizing the purpose and uses of scientific articles, but it is also difficult, mainly due to the limited amount of available training data in which the purposes of each citation have been hand-labeled, along with the subjectivity of these labels. Our entry in the competition is a multi-task model that combines multiple modules designed to handle the problem from different perspectives, including hand-generated linguistic features, TF-IDF features, and an LSTM-with-attention model. We also provide an ablation study and feature analysis whose insights could lead to future work.
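As an illustration of one such module, the sketch below builds a TF-IDF pipeline for citation-purpose classification with scikit-learn. The example contexts and labels are invented, and this covers the TF-IDF piece only; the full system additionally combines hand-generated linguistic features and an LSTM-with-attention model.

```python
# Minimal sketch of a TF-IDF module for citation-purpose classification
# (illustrative; the contexts and label set below are made up).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical citation contexts with purpose labels.
contexts = [
    "We follow the evaluation protocol of [CIT].",
    "Unlike [CIT], our method requires no labeled data.",
    "[CIT] introduced the transformer architecture.",
]
labels = ["USE", "COMPARE", "BACKGROUND"]

# Unigram + bigram TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(contexts, labels)
print(clf.predict(["Our setup mirrors that of [CIT]."]))
```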
2020
Metaphor Detection Using Contextual Word Embeddings From Transformers
Jerry Liu | Nathan O’Hara | Alexander Rubin | Rachel Draelos | Cynthia Rudin
Proceedings of the Second Workshop on Figurative Language Processing
The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bi-directional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.
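The general architecture can be sketched as follows: per-token contextual embeddings feed a bidirectional LSTM, followed by a per-token binary classification head (metaphor vs. literal). In this sketch, random tensors stand in for BERT/XLNet outputs, and the hyperparameters are illustrative rather than the paper's.

```python
# Sketch of the contextual-embeddings + BiLSTM tagging architecture
# (illustrative; random tensors stand in for BERT/XLNet outputs).

import torch
import torch.nn as nn

class MetaphorTagger(nn.Module):
    def __init__(self, emb_dim: int = 768, hidden: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # per-token metaphor vs. literal

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(embeddings)
        return self.head(out)  # (batch, seq_len, 2) logits

# Stand-in for a batch of BERT token embeddings: 2 sentences, 12 tokens each.
fake_bert = torch.randn(2, 12, 768)
logits = MetaphorTagger()(fake_bert)
print(logits.shape)  # torch.Size([2, 12, 2])
```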
A Transformer Approach to Contextual Sarcasm Detection in Twitter
Hunter Gregory | Steven Li | Pouya Mohammadi | Natalie Tarn | Rachel Draelos | Cynthia Rudin
Proceedings of the Second Workshop on Figurative Language Processing
Understanding tone in Twitter posts will be increasingly important as more and more communication moves online. One of the most difficult, yet important, tones to detect is sarcasm. In the past, LSTM and transformer architecture models have been used to tackle this problem. We attempt to expand upon this research, implementing LSTM, GRU, and transformer models, and exploring new methods to classify sarcasm in Twitter posts. Among these, transformer models were the most successful, most notably BERT. Our best-performing model was an ensemble of transformer models including BERT, RoBERTa, XLNet, RoBERTa-large, and ALBERT. This research was performed in conjunction with the sarcasm detection shared task in the Second Workshop on Figurative Language Processing, co-located with ACL 2020.
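A transformer ensemble of this kind is commonly combined by averaging class probabilities across the member models; the sketch below assumes that simple rule, since the abstract does not specify the exact combination used, and uses random logits in place of the fine-tuned models' outputs.

```python
# Minimal sketch of probability-averaging ensembling across several
# fine-tuned transformer classifiers (the averaging rule is an
# assumption; random logits stand in for BERT, RoBERTa, XLNet, etc.).

import torch

def ensemble_predict(per_model_logits: list[torch.Tensor]) -> torch.Tensor:
    """Average class probabilities across models, then take the argmax."""
    probs = torch.stack([l.softmax(dim=-1) for l in per_model_logits])
    return probs.mean(dim=0).argmax(dim=-1)

# Fake (batch=3, classes=2) logits standing in for three member models.
logits = [torch.randn(3, 2) for _ in range(3)]
print(ensemble_predict(logits))  # tensor of predicted labels, shape (3,)
```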
2008
Arabic Morphological Tagging, Diacritization, and Lemmatization Using Lexeme Models and Feature Ranking
Ryan Roth | Owen Rambow | Nizar Habash | Mona Diab | Cynthia Rudin
Proceedings of ACL-08: HLT, Short Papers
2006
Re-Ranking Algorithms for Name Tagging
Heng Ji | Cynthia Rudin | Ralph Grishman
Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing