Aria Haghighi


2022

CTM - A Model for Large-Scale Multi-View Tweet Topic Classification
Vivek Kulkarni | Kenny Leung | Aria Haghighi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track

Automatically associating social media posts with topics is an important prerequisite for effective search and recommendation on many social media platforms. However, topic classification of such posts is quite challenging because of (a) a large topic space, (b) short text with weak topical cues, and (c) multiple topic associations per post. In contrast to most prior work, which focuses only on classifying posts into a small number of topics (10-20), we consider the task of large-scale topic classification in the context of Twitter, where the topic space is 10 times larger, with potentially multiple topic associations per Tweet. We address the challenges above and propose a novel neural model that (a) supports a large topic space of 300 topics and (b) takes a holistic approach to Tweet content modeling, leveraging multi-modal content, author context, and deeper semantic cues in the Tweet. Our method offers an effective way to classify Tweets into topics at scale, yielding superior performance to other approaches (a relative lift of 20% in median average precision score), and has been successfully deployed in production at Twitter.
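
The abstract describes the multi-view architecture only at a high level. As a rough illustration, the PyTorch sketch below shows one plausible way to fuse text, media, and author views for multi-label classification over 300 topics; all module names, dimensions, and the fusion strategy are assumptions for exposition, not the paper's actual architecture.

# Hypothetical sketch of a multi-view, multi-label Tweet topic classifier.
# View encoders, dimensions, and fusion are illustrative assumptions,
# not the architecture from the CTM paper.
import torch
import torch.nn as nn

class MultiViewTopicClassifier(nn.Module):
    def __init__(self, text_dim=768, media_dim=512, author_dim=128, num_topics=300):
        super().__init__()
        # Project each view into a shared space before fusing them.
        self.text_proj = nn.Linear(text_dim, 256)
        self.media_proj = nn.Linear(media_dim, 256)
        self.author_proj = nn.Linear(author_dim, 256)
        self.classifier = nn.Sequential(
            nn.Linear(3 * 256, 512),
            nn.ReLU(),
            nn.Linear(512, num_topics),  # one logit per topic
        )

    def forward(self, text_emb, media_emb, author_emb):
        fused = torch.cat([
            torch.relu(self.text_proj(text_emb)),
            torch.relu(self.media_proj(media_emb)),
            torch.relu(self.author_proj(author_emb)),
        ], dim=-1)
        return self.classifier(fused)  # raw logits, one per topic

# Multi-label training applies an independent sigmoid per topic, so a
# Tweet can be assigned several topics at once.
model = MultiViewTopicClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 128))
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(4, 300))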

Towards Improved Distantly Supervised Multilingual Named-Entity Recognition for Tweets
Ramy Eskander | Shubhanshu Mishra | Sneha Mehta | Sofia Samaniego | Aria Haghighi
Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)

Recent low-resource named-entity recognition (NER) work has shown impressive gains by leveraging a single multilingual model trained using distantly supervised data derived from cross-lingual knowledge bases. In this work, we investigate such approaches by leveraging Wikidata to build large-scale NER datasets of Tweets and propose two orthogonal improvements for low-resource NER in the Twitter social media domain: (1) leveraging domain-specific pre-training on Tweets; and (2) building a model for each language family rather than an all-in-one single multilingual model. For (1), we show that mBERT with Tweet pre-training outperforms the state-of-the-art multilingual transformer-based language model, LaBSE, by a relative increase of 34.6% in F1 when evaluated on Twitter data in a language-agnostic multilingual setting. For (2), we show that learning NER models for language families outperforms a single multilingual model by relative increases of 14.1%, 15.8% and 45.3% in F1 when utilizing mBERT, mBERT with Tweet pre-training and LaBSE, respectively. We conduct analyses and present examples for these observed improvements.
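
To make the distant-supervision step concrete, here is a minimal sketch of projecting a Wikidata-derived gazetteer onto Tweet tokens to produce BIO labels. The greedy longest-match heuristic, the toy gazetteer, and the tag set are illustrative assumptions, not the paper's exact labeling pipeline.

# Hedged sketch of distant supervision for NER over Tweets.
# Hypothetical gazetteer: surface form (tokenized, lowercased) -> entity type.
GAZETTEER = {
    ("new", "york"): "LOC",
    ("taylor", "swift"): "PER",
    ("twitter",): "ORG",
}
MAX_SPAN = max(len(k) for k in GAZETTEER)

def distant_labels(tokens):
    """Greedy longest-match projection of gazetteer entries onto tokens."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        for width in range(min(MAX_SPAN, len(tokens) - i), 0, -1):
            span = tuple(t.lower() for t in tokens[i:i + width])
            etype = GAZETTEER.get(span)
            if etype is not None:
                labels[i] = f"B-{etype}"
                for j in range(i + 1, i + width):
                    labels[j] = f"I-{etype}"
                i += width
                break
        else:
            i += 1
    return labels

print(distant_labels("Landed in New York , Twitter meetup tonight".split()))
# ['O', 'O', 'B-LOC', 'I-LOC', 'O', 'B-ORG', 'O', 'O']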

2021

LMSOC: An Approach for Socially Sensitive Pretraining
Vivek Kulkarni | Shubhanshu Mishra | Aria Haghighi
Findings of the Association for Computational Linguistics: EMNLP 2021

While large-scale pretrained language models have been shown to learn effective linguistic representations for many NLP tasks, there remain many real-world contextual aspects of language that current approaches do not capture. For instance, consider a cloze test “I enjoyed the _____ game this weekend”: the correct answer depends heavily on where the speaker is from, when the utterance occurred, and the speaker’s broader social milieu and preferences. Although language depends heavily on the geographical, temporal, and other social contexts of the speaker, these elements have not been incorporated into modern transformer-based language models. We propose a simple but effective approach to incorporate speaker social context into the learned representations of large-scale language models. Our method first learns dense representations of social contexts using graph representation learning algorithms and then primes language model pretraining with these social context representations. We evaluate our approach on geographically-sensitive language modeling tasks and show a substantial improvement (more than 100% relative lift on MRR) compared to baselines.
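
As a rough illustration of priming a language model with a pre-learned social context vector (e.g. from graph representation learning over a user graph), the sketch below projects the context into the embedding space and prepends it as an extra slot ahead of the token embeddings. The wrapper design, dimensions, and omissions (no positional embeddings or masking, for brevity) are assumptions, not the LMSOC implementation.

# Hedged sketch of socially sensitive pretraining in the spirit of LMSOC.
import torch
import torch.nn as nn

class SociallyPrimedEncoder(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, social_dim=64, num_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.social_proj = nn.Linear(social_dim, hidden)  # social vector -> "token"
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(hidden, vocab_size)  # masked-LM prediction head

    def forward(self, token_ids, social_ctx):
        x = self.tok_emb(token_ids)                    # (B, T, H)
        s = self.social_proj(social_ctx).unsqueeze(1)  # (B, 1, H)
        h = self.encoder(torch.cat([s, x], dim=1))     # prepend context slot
        return self.lm_head(h[:, 1:])                  # logits for real tokens only

model = SociallyPrimedEncoder()
logits = model(torch.randint(0, 30522, (2, 16)), torch.randn(2, 64))

Because the context occupies a position in the sequence, every attention layer can condition its predictions on speaker context, which is one simple way to realize the "priming" described above.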

Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction
Shubhanshu Mishra | Aria Haghighi
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)

We evaluate a simple approach to improving zero-shot multilingual transfer of mBERT on a social media corpus by adding a pretraining task called translation pair prediction (TPP), which predicts whether a pair of cross-lingual texts is a valid translation. Our approach assumes access to translations (exact or approximate) between source-target language pairs, where we fine-tune a model on source-language task data and evaluate the model in the target language. In particular, we focus on language pairs where transfer learning is difficult for mBERT: those where source and target languages differ in script, vocabulary, and linguistic typology. We show improvements from TPP pretraining over mBERT alone in zero-shot transfer from English to Hindi, Arabic, and Japanese on two social media tasks: NER (a 37% average relative improvement in F1 across target languages) and sentiment classification (12% relative improvement in F1), while also benchmarking on a non-social-media task of Universal Dependency POS tagging (6.7% relative improvement in accuracy). Our results are promising given the lack of social media bitext corpora. Our code can be found at: https://github.com/twitter-research/multilingual-alignment-tpp.
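
The TPP objective itself is a binary pair classification task. A minimal sketch of constructing its training examples from a bitext, assuming the simplest scheme of aligned pairs as positives and randomly mismatched pairs as negatives (the paper's exact sampling recipe may differ):

# Hedged sketch of building translation pair prediction (TPP) examples.
import random

def make_tpp_examples(bitext, negatives_per_positive=1, seed=0):
    """bitext: list of (source_text, target_text) translation pairs."""
    rng = random.Random(seed)
    examples = []
    for i, (src, tgt) in enumerate(bitext):
        examples.append((src, tgt, 1))  # aligned pair -> valid translation
        for _ in range(negatives_per_positive):
            j = rng.randrange(len(bitext))
            while j == i:
                j = rng.randrange(len(bitext))
            examples.append((src, bitext[j][1], 0))  # mismatched pair
    rng.shuffle(examples)
    return examples

pairs = [("good morning", "おはよう"), ("thank you", "ありがとう"), ("see you", "またね")]
for src, tgt, label in make_tpp_examples(pairs):
    print(label, src, "||", tgt)

Each resulting (source, target, label) triple would then be scored by a cross-encoder with a binary classification head, analogous in spirit to mBERT's next-sentence prediction objective.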

2012

Proceedings of the Demonstration Session at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Aria Haghighi | Yaser Al-Onaizan

2011

Structured Relation Discovery using Generative Models
Limin Yao | Aria Haghighi | Sebastian Riedel | Andrew McCallum
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

Modeling Syntactic Context Improves Morphological Segmentation
Yoong Keok Lee | Aria Haghighi | Regina Barzilay
Proceedings of the Fifteenth Conference on Computational Natural Language Learning

Content Models with Attitude
Christina Sauper | Aria Haghighi | Regina Barzilay
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Event Discovery in Social Media Feeds
Edward Benson | Aria Haghighi | Regina Barzilay
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Ordering Prenominal Modifiers with a Reranking Approach
Jenny Liu | Aria Haghighi
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

An Entity-Level Approach to Information Extraction
Aria Haghighi | Dan Klein
Proceedings of the ACL 2010 Conference Short Papers

Coreference Resolution in a Modular, Entity-Centered Model
Aria Haghighi | Dan Klein
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Incorporating Content Structure into Text Analysis Applications
Christina Sauper | Aria Haghighi | Regina Barzilay
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Simple Type-Level Unsupervised POS Tagging
Yoong Keok Lee | Aria Haghighi | Regina Barzilay
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

2009

Simple Coreference Resolution with Rich Syntactic and Semantic Features
Aria Haghighi | Dan Klein
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

Better Word Alignments with Supervised ITG Models
Aria Haghighi | John Blitzer | John DeNero | Dan Klein
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

Exploring Content Models for Multi-Document Summarization
Aria Haghighi | Lucy Vanderwende
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

2008

A Global Joint Model for Semantic Role Labeling
Kristina Toutanova | Aria Haghighi | Christopher D. Manning
Computational Linguistics, Volume 34, Number 2, June 2008 - Special Issue on Semantic Role Labeling

Learning Bilingual Lexicons from Monolingual Corpora
Aria Haghighi | Percy Liang | Taylor Berg-Kirkpatrick | Dan Klein
Proceedings of ACL-08: HLT

Coarse-to-Fine Syntactic Machine Translation using Language Projections
Slav Petrov | Aria Haghighi | Dan Klein
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

Approximate Factoring for A* Search
Aria Haghighi | John DeNero | Dan Klein
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

Unsupervised Coreference Resolution in a Nonparametric Bayesian Model
Aria Haghighi | Dan Klein
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2006

Prototype-Driven Learning for Sequence Models
Aria Haghighi | Dan Klein
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Prototype-Driven Grammar Induction
Aria Haghighi | Dan Klein
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

2005

Robust Textual Inference via Graph Matching
Aria Haghighi | Andrew Ng | Christopher Manning
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

A Joint Model for Semantic Role Labeling
Aria Haghighi | Kristina Toutanova | Christopher Manning
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

Joint Learning Improves Semantic Role Labeling
Kristina Toutanova | Aria Haghighi | Christopher Manning
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)