FLAT: Chinese NER Using Flat-Lattice Transformer

Xiaonan Li, Hang Yan, Xipeng Qiu, Xuanjing Huang


Abstract
Recently, the character-word lattice structure has proven effective for Chinese named entity recognition (NER) by incorporating word information. However, since the lattice structure is complex and dynamic, lattice-based models struggle to fully exploit the parallel computation of GPUs and usually suffer from low inference speed. In this paper, we propose FLAT: Flat-LAttice Transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans. Each span corresponds to a character or latent word and its position in the original lattice. With the power of the Transformer and a well-designed position encoding, FLAT can fully leverage the lattice information and has excellent parallelization ability. Experiments on four datasets show that FLAT outperforms other lexicon-based models in performance and efficiency.
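The key idea in the abstract is that every character and every lexicon-matched word becomes a span tagged with head and tail positions in the original character sequence, so the whole lattice can be fed to a Transformer as one flat sequence. Below is a minimal, hypothetical Python sketch of that conversion; the toy lexicon and the `build_flat_lattice` helper are illustrative assumptions, not the authors' implementation.

```python
def build_flat_lattice(sentence, lexicon):
    """Return (token, head, tail) spans for each character and each lexicon-matched word."""
    spans = []
    # Every character is a span whose head and tail positions coincide.
    for i, ch in enumerate(sentence):
        spans.append((ch, i, i))
    # Every lexicon word occurring in the sentence is a span covering its characters.
    for start in range(len(sentence)):
        for end in range(start + 1, len(sentence)):
            word = sentence[start:end + 1]
            if word in lexicon:
                spans.append((word, start, end))
    return spans

if __name__ == "__main__":
    lexicon = {"重庆", "人和药店", "药店"}  # toy lexicon (illustrative only)
    for token, head, tail in build_flat_lattice("重庆人和药店", lexicon):
        print(token, head, tail)
```

Because characters keep head == tail while words keep the head/tail of their first and last characters, relative distances between any two spans remain computable, which is what the paper's relative position encoding builds on.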
Anthology ID:
2020.acl-main.611
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6836–6842
URL:
https://aclanthology.org/2020.acl-main.611
DOI:
10.18653/v1/2020.acl-main.611
Cite (ACL):
Xiaonan Li, Hang Yan, Xipeng Qiu, and Xuanjing Huang. 2020. FLAT: Chinese NER Using Flat-Lattice Transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6836–6842, Online. Association for Computational Linguistics.
Cite (Informal):
FLAT: Chinese NER Using Flat-Lattice Transformer (Li et al., ACL 2020)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2020.acl-main.611.pdf
Video:
 http://slideslive.com/38929311
Code
 LeeSureman/Flat-Lattice-Transformer
Data
MSRA CN NER, OntoNotes 4.0, Resume NER, Weibo NER