Recently, it has been discovered that incorporating structure information (e.g., dependency trees) can improve the performance of aspect-based sentiment analysis (ABSA). This structure information is often obtained from off-the-shelf parsers, which are sub-optimal and unwieldy; adaptively inducing task-specific structures helps resolve this issue. In this work, we concentrate on adaptive graph structure induction for ABSA and explore, from a spectral perspective, the impact of neuron-level manipulation on structure induction. Specifically, we treat word representations from pre-trained language models (PLMs) as node features and employ a graph learning module to adaptively generate adjacency matrices, followed by graph neural networks (GNNs) that capture both node features and structural information. Meanwhile, we propose Neuron Filtering (NeuLT), a method for performing neuron-level manipulations on word representations in the frequency domain. We conduct extensive experiments on three public datasets to observe the impact of NeuLT on structure induction and ABSA. The results and further analysis demonstrate that neuron-level manipulation through NeuLT shortens the Aspects-sentiment Distance of the induced structures and improves ABSA performance, achieving or approaching state-of-the-art (SOTA) results.
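As a rough illustration of the neuron-level, frequency-domain manipulation described above, the sketch below filters each neuron (hidden dimension) of a PLM's word representations along the sequence axis. This is our reading of the abstract, not the paper's released code; the name neuron_filter and the band parameters low_cut/high_cut are illustrative assumptions.

```python
# Minimal sketch: per-neuron band filtering of PLM word representations
# in the frequency domain (assumed NeuLT-style operation).
from typing import Optional

import torch

def neuron_filter(hidden: torch.Tensor, low_cut: int = 0,
                  high_cut: Optional[int] = None) -> torch.Tensor:
    """Filter each neuron of `hidden` along the sequence axis by zeroing
    frequency bins outside [low_cut, high_cut).

    hidden: (batch, seq_len, hidden_dim) word representations from a PLM.
    """
    # Real FFT over the sequence dimension: one spectrum per neuron.
    spec = torch.fft.rfft(hidden, dim=1)          # (batch, n_freq, hidden_dim)
    mask = torch.zeros_like(spec.real)
    high = spec.size(1) if high_cut is None else high_cut
    mask[:, low_cut:high, :] = 1.0                # keep only the chosen band
    # Back to the sequence (time) domain.
    return torch.fft.irfft(spec * mask, n=hidden.size(1), dim=1)

# Example: keep only low-frequency components of RoBERTa-sized features.
x = torch.randn(2, 32, 768)
y = neuron_filter(x, low_cut=0, high_cut=8)
print(y.shape)  # torch.Size([2, 32, 768])
```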
Recent work has shown that incorporating structure information (e.g., dependency syntax trees) can enhance the performance of aspect-based sentiment analysis (ABSA). However, this structure information is usually obtained from off-the-shelf parsers, which are often sub-optimal and cumbersome; automatically learning adaptive structures helps solve this problem. In this work, we concentrate on structure induction from pre-trained language models (PLMs) and cast structure induction in a spectral perspective to explore how scale information in language representations affects structure induction ability. Concretely, our model consists of a commonly used PLM (e.g., RoBERTa) and a simple yet effective graph structure learning (GSL) module (a graph learner plus GNNs). We plug spectral filters with different bands after the PLM to produce filtered language representations and feed them into the GSL module to induce latent structures. We conduct extensive experiments on three public ABSA benchmarks. The results and further analyses demonstrate that this spectral approach shortens the Aspects-sentiment Distance (AsD) and benefits structure induction. Even with such a simple framework, the results on all three datasets reach state-of-the-art (SOTA) or near-SOTA performance. Additionally, our exploration has the potential to generalize to other tasks or to inspire work in similar domains.
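The graph structure learning (GSL) module mentioned above (graph learner + GNNs) can be sketched as follows: a graph learner induces a soft adjacency matrix from the (spectrally filtered) token features, and a GCN layer propagates over it. This is an illustrative reading under our own design choices (dot-product similarity, row-wise softmax), not the authors' implementation.

```python
# Minimal sketch of a graph structure learning module:
# graph learner (induces adjacency) + one GCN layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLearner(nn.Module):
    """Induce a soft adjacency matrix from node features via scaled
    dot-product similarity followed by row-wise softmax (assumed form)."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n_nodes, dim) -> adjacency: (batch, n_nodes, n_nodes)
        scores = self.q(h) @ self.k(h).transpose(1, 2) / h.size(-1) ** 0.5
        return F.softmax(scores, dim=-1)

class GCNLayer(nn.Module):
    """One graph convolution step over the induced adjacency."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return F.relu(self.lin(adj @ h))

# Usage: filtered PLM features -> induced structure -> structure-aware features.
h = torch.randn(2, 32, 768)          # e.g., filtered RoBERTa outputs
learner, gcn = GraphLearner(768), GCNLayer(768)
adj = learner(h)
out = gcn(h, adj)
print(adj.shape, out.shape)          # (2, 32, 32) and (2, 32, 768)
```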
Financial volatility prediction is vital for characterizing a company's risk profile. Transcripts of companies' earnings calls are important unstructured data sources for assessing companies' performance and risk profiles. However, current works ignore the role of financial-metric knowledge (such as EBIT, EPS, and ROI) in transcripts, which is crucial for understanding companies' performance, and give little consideration to integrating text and price information. In this work, we collect statistics on common financial metrics and construct a dedicated dataset based on them. We then introduce a knowledge-enhanced financial volatility prediction method (KeFVP), which injects knowledge of financial metrics into text comprehension via knowledge-enhanced adaptive pre-training (KePt) and effectively incorporates text and price information through a conditional time series prediction module. We conduct extensive experiments on three real-world public datasets, and the results indicate that KeFVP is effective and outperforms all state-of-the-art methods.
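One plausible shape for the conditional time series prediction idea above is sketched below: a GRU encodes the historical price series, and a text embedding from the (knowledge-enhanced) encoder conditions the prediction head. The class name, the fusion by addition, and the GRU choice are all our assumptions; KeFVP's actual module may differ.

```python
# Minimal sketch: price series conditioned on an earnings-call embedding
# for volatility prediction (assumed wiring, not KeFVP's exact module).
import torch
import torch.nn as nn

class ConditionalVolatilityHead(nn.Module):
    def __init__(self, price_dim: int, text_dim: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(price_dim, hidden, batch_first=True)
        self.cond = nn.Linear(text_dim, hidden)   # project text condition
        self.out = nn.Linear(hidden, 1)           # predicted volatility

    def forward(self, prices: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # prices: (batch, T, price_dim); text_emb: (batch, text_dim)
        _, h_n = self.gru(prices)                  # h_n: (1, batch, hidden)
        fused = torch.tanh(h_n.squeeze(0) + self.cond(text_emb))
        return self.out(fused).squeeze(-1)         # (batch,)

# Example: 30 trading days of daily returns and a 768-d call embedding.
model = ConditionalVolatilityHead(price_dim=1, text_dim=768)
pred = model(torch.randn(4, 30, 1), torch.randn(4, 768))
print(pred.shape)  # torch.Size([4])
```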
Aspect-based sentiment analysis (ABSA) has drawn increasing attention because of its extensive applications. However, for sentences containing more than one aspect, most existing works generate an aspect-specific sentence representation for each aspect term to predict sentiment polarity, neglecting the sentiment relationships among aspect terms. Moreover, most current ABSA methods focus on sentences containing only one aspect term, or multiple aspect terms with the same sentiment polarity, which makes ABSA degenerate into sentence-level sentiment analysis. In this paper, to address this problem, we construct a heterogeneous graph that models inter-aspect relationships and aspect-context relationships simultaneously, and we propose a novel Composition-based Heterogeneous Graph Multi-channel Attention Network (CHGMAN) to encode the constructed graph. We conduct extensive experiments on three datasets: MAMS-ATSA, Rest14, and Laptop14. Experimental results show the effectiveness of our method.
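To make the heterogeneous graph above concrete, the sketch below builds a two-channel adjacency tensor: one channel for aspect-context edges and one for inter-aspect edges. CHGMAN's edge typing and multi-channel attention are richer than this; the function and its layout are purely illustrative assumptions.

```python
# Minimal sketch: two-channel heterogeneous adjacency over a tokenized
# sentence (channel 0: aspect-context edges, channel 1: inter-aspect edges).
import torch

def build_hetero_adjacency(n_tokens: int,
                           aspect_spans: list[tuple[int, int]]) -> torch.Tensor:
    """Return a (n_tokens, n_tokens, 2) edge tensor for the given
    aspect token spans [start, end)."""
    adj = torch.zeros(n_tokens, n_tokens, 2)
    aspect_idx = [i for s, e in aspect_spans for i in range(s, e)]
    for i in aspect_idx:
        # Channel 0: connect each aspect token to every token in the sentence.
        adj[i, :, 0] = 1.0
        adj[:, i, 0] = 1.0
    for i in aspect_idx:
        for j in aspect_idx:
            if i != j:
                adj[i, j, 1] = 1.0   # Channel 1: aspect-aspect edge
    return adj

# Sentence of 12 tokens with two aspect terms at spans [2, 3) and [7, 9).
adj = build_hetero_adjacency(12, [(2, 3), (7, 9)])
print(adj.shape)  # torch.Size([12, 12, 2])
```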