An Investigation of Potential Function Designs for Neural CRF

Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu


Abstract
The neural linear-chain CRF model is one of the most widely-used approaches to sequence labeling. In this paper, we investigate a series of increasingly expressive potential functions for neural CRF models, which not only integrate the emission and transition functions, but also explicitly take the representations of the contextual words as input. Our extensive experiments show that the decomposed quadrilinear potential function, based on the vector representations of two neighboring labels and two neighboring words, consistently achieves the best performance.
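To make the abstract's key idea concrete, below is a minimal sketch of a rank-decomposed quadrilinear potential. It assumes the score for a label transition factorizes over a shared rank dimension via four projection matrices (the names `U`, `V`, `W`, `Z` and the exact factorization are illustrative assumptions, not the paper's definitive parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)
d_label, d_word, rank = 4, 8, 16

# Hypothetical projection matrices mapping each of the four inputs
# (previous label, current label, previous word, current word) to rank-space.
U = rng.standard_normal((d_label, rank))
V = rng.standard_normal((d_label, rank))
W = rng.standard_normal((d_word, rank))
Z = rng.standard_normal((d_word, rank))

def quadrilinear_potential(y_prev, y_cur, x_prev, x_cur):
    """Decomposed quadrilinear score:
    sum_r (y_prev @ U)_r * (y_cur @ V)_r * (x_prev @ W)_r * (x_cur @ Z)_r
    Elementwise products in rank-space avoid materializing the full
    4th-order tensor of size d_label * d_label * d_word * d_word.
    """
    return float(np.sum((y_prev @ U) * (y_cur @ V) * (x_prev @ W) * (x_cur @ Z)))

# Toy embeddings for two neighboring labels and two neighboring words.
y1, y2 = rng.standard_normal(d_label), rng.standard_normal(d_label)
x1, x2 = rng.standard_normal(d_word), rng.standard_normal(d_word)
score = quadrilinear_potential(y1, y2, x1, x2)
```

The low-rank decomposition is what makes such a potential tractable: the full quadrilinear interaction would require a fourth-order parameter tensor, whereas the factored form costs only four matrices of `rank` columns each.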
Anthology ID:
2020.findings-emnlp.236
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2600–2609
URL:
https://aclanthology.org/2020.findings-emnlp.236
DOI:
10.18653/v1/2020.findings-emnlp.236
Cite (ACL):
Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, and Kewei Tu. 2020. An Investigation of Potential Function Designs for Neural CRF. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2600–2609, Online. Association for Computational Linguistics.
Cite (Informal):
An Investigation of Potential Function Designs for Neural CRF (Hu et al., Findings 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.findings-emnlp.236.pdf