Abstract
We present Pixie, a manually annotated dataset for preference classification comprising 8,890 sentences drawn from app reviews. Unlike previous studies on preference classification, Pixie contains implicit (omitting an entity being compared) and indirect (lacking comparative linguistic cues) comparisons. We find that transformer-based pretrained models, finetuned on Pixie, achieve a weighted average F1 score of 83.34% and outperform the existing state-of-the-art preference classification model (73.99%).
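The sketch below is not the authors' code; it only illustrates the kind of setup the abstract describes: finetuning a pretrained transformer for sentence-level preference classification and scoring it with weighted average F1. The model name, the 3-way label scheme, the toy sentences, and the hyperparameters are all assumptions for illustration; the actual dataset and training code live in the linked ahaque2/pixie repository.

```python
# Minimal sketch (assumptions noted above), using Hugging Face transformers/datasets.
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical toy examples; the real Pixie sentences and labels differ.
train = Dataset.from_dict({
    "text": ["This app is way better than the old one.",   # explicit comparison
             "I switched back after two days.",             # indirect comparison
             "Nothing beats the dark mode here."],          # implicit comparison
    "label": [0, 1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # 3-way label set is an assumption

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Weighted average F1, the metric reported in the paper.
    return {"weighted_f1": f1_score(labels, preds, average="weighted")}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pixie-clf", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train,
    eval_dataset=train,          # stand-in; use a held-out split in practice
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```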
- Anthology ID: 2022.acl-short.13
- Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 106–112
- URL: https://aclanthology.org/2022.acl-short.13
- DOI: 10.18653/v1/2022.acl-short.13
- Cite (ACL): Amanul Haque, Vaibhav Garg, Hui Guo, and Munindar Singh. 2022. Pixie: Preference in Implicit and Explicit Comparisons. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 106–112, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Pixie: Preference in Implicit and Explicit Comparisons (Haque et al., ACL 2022)
- PDF: https://preview.aclanthology.org/improve-issue-templates/2022.acl-short.13.pdf
- Code: ahaque2/pixie