Lin Yang


2018

Super Characters: A Conversion from Sentiment Classification to Image Classification
Baohua Sun | Lin Yang | Patrick Dong | Wenhan Zhang | Jason Dong | Charles Young
Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis

We propose a method named Super Characters for sentiment classification. This method converts the sentiment classification problem into an image classification problem by projecting texts into images and then applying CNN models for classification. Text features are extracted automatically from the generated Super Characters images, so there is no need for an explicit step of embedding words or characters into numerical vector representations. Experimental results show that the Super Characters method consistently outperforms other methods on sentiment classification and topic classification tasks across ten large social media datasets containing millions of items in four languages: Chinese, Japanese, Korean, and English.
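To make the idea concrete, the following is a minimal sketch (not the authors' released code) of the Super Characters pipeline as described in the abstract: a sentence is drawn character by character onto a square image, and that image is classified with a standard CNN. It assumes Pillow and torchvision are available; the font path, grid size, and choice of ResNet-18 are illustrative stand-ins, not details from the paper.

    # Sketch of the Super Characters idea: render text as an image, classify with a CNN.
    # Assumptions (not from the paper): Pillow + torchvision, a local CJK font file,
    # an 8x8 character grid on a 224x224 canvas, and ResNet-18 as the classifier.
    from PIL import Image, ImageDraw, ImageFont
    import torchvision.models as models
    import torchvision.transforms as transforms

    def text_to_super_characters(text, size=224, grid=8,
                                 font_path="NotoSansCJK-Regular.ttc"):
        """Draw up to grid*grid characters of `text` onto a white square image."""
        cell = size // grid
        font = ImageFont.truetype(font_path, cell)
        img = Image.new("RGB", (size, size), "white")
        draw = ImageDraw.Draw(img)
        for i, ch in enumerate(text[: grid * grid]):
            row, col = divmod(i, grid)
            draw.text((col * cell, row * cell), ch, fill="black", font=font)
        return img

    # Classify the rendered image with an off-the-shelf CNN; in practice the
    # network would be trained (or fine-tuned) on the sentiment labels.
    to_tensor = transforms.ToTensor()
    model = models.resnet18(num_classes=2)  # e.g. positive / negative
    img = text_to_super_characters("这部电影非常好看")
    logits = model(to_tensor(img).unsqueeze(0))
    prediction = logits.argmax(dim=1)

Because the text is consumed as pixels, the same pipeline applies unchanged to character-based languages such as Chinese, Japanese, and Korean, which is consistent with the multilingual evaluation the abstract reports.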

2013

Learning to translate with products of novices: a suite of open-ended challenge problems for teaching MT
Adam Lopez | Matt Post | Chris Callison-Burch | Jonathan Weese | Juri Ganitkevitch | Narges Ahmidi | Olivia Buzek | Leah Hanson | Beenish Jamil | Matthias Lee | Ya-Ting Lin | Henry Pao | Fatima Rivera | Leili Shahriyari | Debu Sinha | Adam Teichert | Stephen Wampler | Michael Weinberger | Daguang Xu | Lin Yang | Shang Zhao
Transactions of the Association for Computational Linguistics, Volume 1

Machine translation (MT) draws from several different disciplines, making it a complex subject to teach. There are excellent pedagogical texts, but problems in MT and current algorithms for solving them are best learned by doing. As a centerpiece of our MT course, we devised a series of open-ended challenges for students in which the goal was to improve performance on carefully constrained instances of four key MT tasks: alignment, decoding, evaluation, and reranking. Students brought a diverse set of techniques to the problems, including some novel solutions which performed remarkably well. A surprising and exciting outcome was that student solutions or their combinations fared competitively on some tasks, demonstrating that even newcomers to the field can help improve the state-of-the-art on hard NLP problems while simultaneously learning a great deal. The problems, baseline code, and results are freely available.