Wenqi Zhang
2022
Multi-View Reasoning: Consistent Contrastive Learning for Math Word Problem
Wenqi Zhang | Yongliang Shen | Yanna Ma | Xiaoxia Cheng | Zeqi Tan | Qingpeng Nong | Weiming Lu
Findings of the Association for Computational Linguistics: EMNLP 2022
A math word problem solver requires both precise relational reasoning about quantities in the text and reliable generation of diverse equations. Current sequence-to-tree or relation-extraction methods approach this from a single fixed view and struggle to handle complex semantics and diverse equations simultaneously. However, human solving naturally involves two consistent reasoning views, top-down and bottom-up, just as math equations can be expressed in multiple equivalent forms: pre-order and post-order. We propose multi-view consistent contrastive learning for a more complete semantics-to-equation mapping. The entire process is decoupled into two independent but consistent views, top-down decomposition and bottom-up construction, and the two reasoning views are aligned at multiple granularities for consistency, enhancing global generation and precise reasoning. Experiments on multiple datasets across two languages show that our approach significantly outperforms existing baselines, especially on complex problems. We also show that, after consistent alignment, the multi-view model absorbs the merits of both views and generates more diverse results consistent with mathematical laws.
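The equivalence between the two reasoning views can be made concrete with a small traversal example. The sketch below is illustrative only, not the authors' code: the `Node` class and the quantity names `n1`..`n3` are hypothetical. It shows how a single equation tree yields the two equivalent serializations the paper aligns, pre-order for top-down decomposition and post-order for bottom-up construction.

```python
# Illustrative sketch: one equation tree, two equivalent serializations.

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def pre_order(node):
    """Root first: the token order a top-down decomposition decoder emits."""
    if node is None:
        return []
    return [node.val] + pre_order(node.left) + pre_order(node.right)

def post_order(node):
    """Children first: the token order a bottom-up construction emits."""
    if node is None:
        return []
    return post_order(node.left) + post_order(node.right) + [node.val]

# Equation tree for (n1 + n2) * n3
tree = Node("*", Node("+", Node("n1"), Node("n2")), Node("n3"))
print(pre_order(tree))   # ['*', '+', 'n1', 'n2', 'n3']
print(post_order(tree))  # ['n1', 'n2', '+', 'n3', '*']
```

Both sequences encode the same equation, which is what allows the two views to be aligned for consistency during training.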
Query-based Instance Discrimination Network for Relational Triple Extraction
Zeqi Tan | Yongliang Shen | Xuming Hu | Wenqi Zhang | Xiaoxia Cheng | Weiming Lu | Yueting Zhuang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Joint entity and relation extraction has been a core task in the field of information extraction. Recent approaches usually consider the extraction of relational triples from a stereoscopic perspective, either learning a relation-specific tagger or separate classifiers for each relation type. However, they still suffer from error propagation, relation redundancy, and a lack of high-level connections between triples. To address these issues, we propose a novel query-based approach that constructs instance-level representations for relational triples. Through metric-based comparison between query embeddings and token embeddings, we can extract all types of triples in one step, eliminating the error-propagation problem. In addition, we learn the instance-level representations of relational triples via contrastive learning. In this way, relational triples not only enclose rich class-level semantics but also access high-order global connections. Experimental results show that our proposed method achieves the state of the art on five widely used benchmarks.
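The one-step, metric-based matching can be sketched as follows. This is a minimal illustration under assumed shapes, not the released model: the names `queries` and `tokens`, the sizes, and the scaled dot-product metric are all assumptions. Each learned query stands for one candidate triple instance, and comparing every query against every token embedding in a shared metric space scores all triple types in a single pass, avoiding pipelined error propagation.

```python
# Minimal sketch of metric-based query-token comparison (assumed shapes).

import numpy as np

rng = np.random.default_rng(0)
num_queries, seq_len, dim = 4, 10, 32   # hypothetical sizes

queries = rng.normal(size=(num_queries, dim))  # instance-level triple queries
tokens = rng.normal(size=(seq_len, dim))       # encoder token embeddings

# Metric-based comparison: scaled dot-product similarity between every
# query and every token; result has shape (num_queries, seq_len).
scores = queries @ tokens.T / np.sqrt(dim)

# Each query picks its most compatible token position in one step; the
# real model would use separate heads for subject, object, and relation.
best_tokens = scores.argmax(axis=1)
print(best_tokens)
```

Training the query embeddings with a contrastive objective, as the abstract describes, is what pushes queries for the same relation class together while keeping distinct triple instances separable in this metric space.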