Minghao Zhang


2021

基于HowNet的无监督汉语动词隐喻识别方法(Unsupervised Chinese Verb Metaphor Recognition Method Based on HowNet)
Minghao Zhang (张明昊) | Dongyu Zhang (张冬瑜) | Hongfei Lin (林鸿飞)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Metaphor is a core issue in human thought and language understanding. With the growth of the Internet and the emergence of massive amounts of text, automatically identifying metaphorical text with natural language processing techniques has become a pressing need. However, in current research on Chinese metaphor recognition, limited semantic resources make models prone to overfitting. In addition, mainstream metaphor recognition methods suffer from poor interpretability. To address these problems, this paper constructs a relatively large-scale Chinese verb metaphor dataset and proposes an unsupervised Chinese verb metaphor recognition model based on HowNet. Experimental results show that the proposed model can be applied effectively to the verb metaphor recognition task, outperforming the unsupervised models it is compared against; moreover, compared with other deep learning models used for metaphor recognition, the proposed model has the advantages of a simple structure, few parameters, and strong interpretability.
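The abstract does not detail the model itself, but a common HowNet-based signal for verb metaphor is a selectional-preference check: compare the sememes HowNet associates with a verb's expected argument against the sememes of the argument actually observed. The sketch below is a minimal illustration of that general idea only; the toy sememe entries, the overlap threshold, and the function names are assumptions for demonstration, not the paper's model (a real implementation would query HowNet itself, e.g. via the OpenHowNet package).

```python
# Illustrative only: a toy selectional-preference check in the spirit of
# HowNet-based verb metaphor detection. The sememe inventory below is a
# hand-made stand-in for real HowNet entries.

TOY_SEMEMES = {
    # verb -> sememes of the objects it literally takes
    "喝": {"drinks", "liquid"},           # "drink" literally takes liquids
    # noun -> its own sememes
    "水": {"liquid", "drinks"},           # "water"
    "西北风": {"gas", "weather"},         # "northwest wind"
}

def sememe_overlap(verb: str, obj: str) -> float:
    """Jaccard overlap between a verb's expected object sememes and the object's sememes."""
    expected = TOY_SEMEMES.get(verb, set())
    observed = TOY_SEMEMES.get(obj, set())
    if not expected or not observed:
        return 0.0
    return len(expected & observed) / len(expected | observed)

def is_metaphorical(verb: str, obj: str, threshold: float = 0.2) -> bool:
    """Flag a verb-object pair as metaphorical when sememe overlap is low (threshold is illustrative)."""
    return sememe_overlap(verb, obj) < threshold

if __name__ == "__main__":
    print(is_metaphorical("喝", "水"))      # False: literal "drink water"
    print(is_metaphorical("喝", "西北风"))  # True: idiomatic/metaphorical "drink the northwest wind"
```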

MultiMET: A Multimodal Dataset for Metaphor Understanding
Dongyu Zhang | Minghao Zhang | Heting Zhang | Liang Yang | Hongfei Lin
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Metaphor is not only a linguistic phenomenon but also a cognitive phenomenon that structures human thought, which makes understanding it challenging. As a means of cognition, metaphor is expressed through more than text alone, and multimodal information in which visual or audio content is integrated with text can play an important role in expressing and understanding metaphor. However, previous work on metaphor processing and understanding has focused on text, partly due to the unavailability of large-scale datasets with ground-truth labels for multimodal metaphor. In this paper, we introduce MultiMET, a novel multimodal metaphor dataset designed to facilitate the understanding of metaphorical information conveyed through text and images. It contains 10,437 text-image pairs from a range of sources, with multimodal annotations of the occurrence of metaphor, domain relations, the sentiments metaphors convey, and author intents. MultiMET opens the door to automatic metaphor understanding through the investigation of multimodal cues and their interplay. Moreover, we propose a range of strong baselines and show the importance of combining multimodal cues for metaphor understanding. MultiMET will be released publicly for research.
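To make the annotation layers listed above concrete, here is a minimal sketch of a record type for one text-image pair. The field names and label values are assumptions chosen to mirror the annotation types the abstract describes (metaphor occurrence, domain relations, sentiment, author intent); they are not the released MultiMET schema.

```python
# Illustrative only: a minimal record type for a multimodal metaphor example.
# Field names and label sets are assumptions, not the official MultiMET format.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalMetaphorExample:
    text: str                      # textual part of the pair
    image_path: str                # path to the paired image
    is_metaphor: bool              # does the pair convey a metaphor?
    source_domain: Optional[str]   # e.g. "journey" (None if literal)
    target_domain: Optional[str]   # e.g. "life" (None if literal)
    sentiment: str                 # e.g. "positive" / "neutral" / "negative"
    author_intent: str             # e.g. "persuasion", "humor"

example = MultimodalMetaphorExample(
    text="Life is a journey, enjoy the ride.",
    image_path="images/0001.jpg",
    is_metaphor=True,
    source_domain="journey",
    target_domain="life",
    sentiment="positive",
    author_intent="persuasion",
)
```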