Constrained Multi-Task Learning for Bridging Resolution

Hideo Kobayashi, Yufang Hou, Vincent Ng


Abstract
We examine the extent to which supervised bridging resolvers can be improved without employing additional labeled bridging data by proposing a novel constrained multi-task learning framework for bridging resolution, within which we (1) design cross-task consistency constraints to guide the learning process; (2) pre-train the entity coreference model in the multi-task framework on large amounts of publicly available coreference data; and (3) integrate prior knowledge encoded in rule-based resolvers. Our approach achieves state-of-the-art results on three standard evaluation corpora.
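As a rough illustration of the constrained multi-task objective described in the abstract, the sketch below combines a bridging loss and a coreference loss with a soft cross-task consistency penalty. The function names, the form of the penalty, and the weighting term are hypothetical assumptions for exposition only; they are not taken from the paper or its released code (juntaoy/dali-bridging).

# Minimal, illustrative sketch of a multi-task loss with a soft cross-task
# consistency term. All names, the penalty form, and the weight `lam` are
# hypothetical assumptions, not the authors' implementation.
import torch

def consistency_penalty(bridging_probs: torch.Tensor,
                        coref_probs: torch.Tensor) -> torch.Tensor:
    # Penalize probability mass placed on an anaphor being simultaneously
    # resolved as a bridging anaphor and as a coreferent mention -- one simple
    # way to encode a cross-task consistency constraint as a soft penalty.
    return (bridging_probs * coref_probs).mean()

def joint_loss(bridging_loss: torch.Tensor,
               coref_loss: torch.Tensor,
               bridging_probs: torch.Tensor,
               coref_probs: torch.Tensor,
               lam: float = 1.0) -> torch.Tensor:
    # Joint objective: per-task losses plus the weighted consistency penalty.
    return bridging_loss + coref_loss + lam * consistency_penalty(
        bridging_probs, coref_probs)

In training, the coreference component could first be pre-trained on a large coreference corpus and then fine-tuned jointly under an objective of this shape, with predictions from rule-based resolvers supplying additional soft constraints; the sketch above omits those steps.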
Anthology ID:
2022.acl-long.56
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
759–770
URL:
https://aclanthology.org/2022.acl-long.56
DOI:
10.18653/v1/2022.acl-long.56
Cite (ACL):
Hideo Kobayashi, Yufang Hou, and Vincent Ng. 2022. Constrained Multi-Task Learning for Bridging Resolution. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 759–770, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Constrained Multi-Task Learning for Bridging Resolution (Kobayashi et al., ACL 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.acl-long.56.pdf
Code
juntaoy/dali-bridging
Data
BASHI
ISNotes