2025
Large Language Models are Good Relational Learners
Fang Wu | Vijay Prakash Dwivedi | Jure Leskovec
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) have demonstrated remarkable capabilities across various domains, yet their application to relational deep learning (RDL) remains underexplored. Existing approaches adapt LLMs by traversing relational links between entities in a database and converting the structured data into flat text documents, but this text-based serialization disregards critical relational structures, introduces redundancy, and often exceeds standard LLM context lengths. We introduce Rel-LLM, a novel architecture that employs a graph neural network (GNN) based encoder to create structured relational prompts for LLMs within a retrieval-augmented generation (RAG) framework. Unlike traditional text-based serialization approaches, our method preserves the inherent relational structure of databases while enabling LLMs to effectively process and reason over complex entity relationships. Specifically, the GNN encoder extracts a local subgraph around an entity to build feature representations that contain relevant entity relationships and temporal dependencies. These representations are transformed into structured prompts using a denormalization process, effectively allowing the LLM to reason over relational structures. Through extensive experiments, we demonstrate that Rel-LLM outperforms existing methods on key RDL tasks, offering a scalable and efficient approach to integrating LLMs with structured data sources. Code is available at https://github.com/smiles724/Rel-LLM.
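As a rough illustration of the structured-prompt idea described in the abstract (not the authors' implementation, which uses GNN embeddings), the sketch below groups the rows linked to a target entity by table, keeps only the most recent neighbors, and emits a JSON prompt; all table and column names are hypothetical.

```python
# Minimal sketch of "denormalized" structured prompting for a relational entity.
# This is an illustration of the idea only; names and fields are placeholders.
import json
from collections import defaultdict

def build_structured_prompt(entity, rows, max_neighbors=5):
    """Group rows linked to `entity` by table and emit a structured JSON prompt."""
    by_table = defaultdict(list)
    for row in rows:  # each row: {"table": ..., "timestamp": ..., **features}
        by_table[row["table"]].append(row)
    context = {}
    for table, linked in by_table.items():
        # keep only the most recent neighbors to respect the LLM context budget
        linked = sorted(linked, key=lambda r: r["timestamp"], reverse=True)[:max_neighbors]
        context[table] = [{k: v for k, v in r.items() if k != "table"} for r in linked]
    return (
        f"Target entity: {json.dumps(entity)}\n"
        f"Related records (grouped by table):\n{json.dumps(context, indent=2, default=str)}\n"
        "Task: predict the label for the target entity."
    )

# Toy usage
print(build_structured_prompt(
    {"customer_id": 17},
    [{"table": "transactions", "timestamp": "2024-05-01", "amount": 12.5},
     {"table": "reviews", "timestamp": "2024-04-20", "rating": 4}],
))
```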
2022
LinkBERT: Pretraining Language Models with Document Links
Michihiro Yasunaga | Jure Leskovec | Percy Liang
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Language model (LM) pretraining captures various knowledge from text corpora, helping downstream tasks. However, existing methods such as BERT model a single document, and do not capture dependencies or knowledge that span across documents. In this work, we propose LinkBERT, an LM pretraining method that leverages links between documents, e.g., hyperlinks. Given a text corpus, we view it as a graph of documents and create LM inputs by placing linked documents in the same context. We then pretrain the LM with two joint self-supervised objectives: masked language modeling and our new proposal, document relation prediction. We show that LinkBERT outperforms BERT on various downstream tasks across two domains: the general domain (pretrained on Wikipedia with hyperlinks) and biomedical domain (pretrained on PubMed with citation links). LinkBERT is especially effective for multi-hop reasoning and few-shot QA (+5% absolute improvement on HotpotQA and TriviaQA), and our biomedical LinkBERT sets new states of the art on various BioNLP tasks (+7% on BioASQ and USMLE). We release our pretrained models, LinkBERT and BioLinkBERT, as well as code and data.
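A minimal sketch of the input construction described above, assuming pre-segmented documents and a hyperlink map; this is not the released LinkBERT code, and the function and variable names are placeholders. Each pair carries a label for the document relation prediction (DRP) objective, trained jointly with masked language modeling.

```python
# Sketch: pair an anchor segment with a contiguous, linked, or random segment,
# keeping the pair label for document relation prediction. Illustrative only.
import random

def make_linkbert_example(doc_id, seg_idx, segments, links, corpus, rng=random):
    """segments: doc_id -> list of text segments; links: doc_id -> linked doc ids;
    corpus: list of all doc ids."""
    anchor = segments[doc_id][seg_idx]
    r = rng.random()
    if r < 1 / 3 and seg_idx + 1 < len(segments[doc_id]):
        other, label = segments[doc_id][seg_idx + 1], "contiguous"   # next segment, same doc
    elif r < 2 / 3 and links.get(doc_id):
        other, label = rng.choice(segments[rng.choice(links[doc_id])]), "linked"
    else:
        other, label = rng.choice(segments[rng.choice(corpus)]), "random"
    # The resulting pair is fed to masked language modeling + DRP classification.
    return {"text": f"[CLS] {anchor} [SEP] {other} [SEP]", "drp_label": label}
```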
2021
LM-Critic: Language Models for Unsupervised Grammatical Error Correction
Michihiro Yasunaga | Jure Leskovec | Percy Liang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Grammatical error correction (GEC) requires a set of labeled ungrammatical / grammatical sentence pairs for training, but obtaining such annotation can be prohibitively expensive. Recently, the Break-It-Fix-It (BIFI) framework has demonstrated strong results on learning to repair a broken program without any labeled examples, but this relies on a perfect critic (e.g., a compiler) that returns whether an example is valid or not, which does not exist for the GEC task. In this work, we show how to leverage a pretrained language model (LM) in defining an LM-Critic, which judges a sentence to be grammatical if the LM assigns it a higher probability than its local perturbations. We apply this LM-Critic and BIFI along with a large set of unlabeled sentences to bootstrap realistic ungrammatical / grammatical pairs for training a corrector. We evaluate our approach on GEC datasets across multiple domains (CoNLL-2014, BEA-2019, GMEG-wiki and GMEG-yahoo) and show that it outperforms existing methods in both the unsupervised setting (+7.7 F0.5) and the supervised setting (+0.5 F0.5).
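The critic itself is simple to sketch. Below is a toy version of the stated criterion, assuming GPT-2 from Hugging Face transformers as the scoring LM and using only adjacent-word swaps as perturbations (the paper's perturbation set is richer).

```python
# Toy LM-Critic: a sentence is judged grammatical iff the LM scores it at least
# as high as every local perturbation. Perturbation set here is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss          # mean token cross-entropy
    return -loss.item() * (ids.size(1) - 1)      # approximate total log-probability

def local_perturbations(sentence: str):
    words = sentence.split()
    for i in range(len(words) - 1):              # toy edits: swap adjacent words
        yield " ".join(words[:i] + [words[i + 1], words[i]] + words[i + 2:])

def lm_critic(sentence: str) -> bool:
    score = log_prob(sentence)
    return all(score >= log_prob(p) for p in local_perturbations(sentence))

print(lm_critic("The cat sat on the mat."))
```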
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
Michihiro Yasunaga | Hongyu Ren | Antoine Bosselut | Percy Liang | Jure Leskovec
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. Here we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph-based message passing. We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.
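As a rough illustration of the relevance-scoring step (i), one can score each KG node by how likely an LM finds its text in the context of the question; the sketch below assumes GPT-2 from Hugging Face transformers as the scorer and uses toy node texts, and it omits the joint graph reasoning step (ii).

```python
# Sketch of LM-based relevance scoring for KG nodes, for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def relevance(qa_context: str, node_text: str) -> float:
    """Score a KG node by the LM likelihood of its text appended to the QA context."""
    ids = tok(qa_context + " " + node_text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return -loss.item()                          # higher = more relevant

qa = "Where would you find a fox that is not real? (A) storybook (B) forest"
nodes = ["fox", "storybook", "photosynthesis"]
print(sorted(nodes, key=lambda n: relevance(qa, n), reverse=True))  # most to least relevant
```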
2016
Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora
William L. Hamilton | Kevin Clark | Jure Leskovec | Dan Jurafsky
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Cultural Shift or Linguistic Drift? Comparing Two Computational Measures of Semantic Change
William L. Hamilton | Jure Leskovec | Dan Jurafsky
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change
William L. Hamilton | Jure Leskovec | Dan Jurafsky
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large-scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health
Tim Althoff | Kevin Clark | Jure Leskovec
Transactions of the Association for Computational Linguistics, Volume 4
Mental illness is one of the most pressing public health issues of our time. While counseling and psychotherapy can be effective treatments, our knowledge about how to conduct successful counseling conversations has been limited due to lack of large-scale data with labeled outcomes of the conversations. In this paper, we present a large-scale, quantitative study on the discourse of text-message-based counseling conversations. We develop a set of novel computational discourse analysis methods to measure how various linguistic aspects of conversations are correlated with conversation outcomes. Applying techniques such as sequence-based conversation models, language model comparisons, message clustering, and psycholinguistics-inspired word frequency analyses, we discover actionable conversation strategies that are associated with better conversation outcomes.
Learning Linguistic Descriptors of User Roles in Online Communities
Alex Wang | William L. Hamilton | Jure Leskovec
Proceedings of the First Workshop on NLP and Computational Social Science
2014
Exploiting Social Network Structure for Person-to-Person Sentiment Analysis
Robert West | Hristo S. Paskov | Jure Leskovec | Christopher Potts
Transactions of the Association for Computational Linguistics, Volume 2
Person-to-person evaluations are prevalent in all kinds of discourse and important for establishing reputations, building social bonds, and shaping public opinion. Such evaluations can be analyzed separately using signed social networks and textual sentiment analysis, but this misses the rich interactions between language and social context. To capture such interactions, we develop a model that predicts individual A’s opinion of individual B by synthesizing information from the signed social network in which A and B are embedded with sentiment analysis of the evaluative texts relating A to B. We prove that this problem is NP-hard but can be relaxed to an efficiently solvable hinge-loss Markov random field, and we show that this implementation outperforms text-only and network-only versions in two very different datasets involving community-level decision-making: the Wikipedia Requests for Adminship corpus and the Convote U.S. Congressional speech corpus.
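A very small sketch of the kind of objective involved, assuming Lukasiewicz-style hinge-loss potentials over continuous opinion variables; this illustrates the general hinge-loss MRF form, not the paper's actual rule set or inference procedure.

```python
# Illustrative hinge-loss objective combining text sentiment evidence with
# structural-balance ("friend of a friend") consistency on triads.
def hinge(z):
    """Hinge-loss potential max(0, z) for a soft logical rule."""
    return max(0.0, z)

def objective(x, text_score, triads, lam=1.0):
    """x, text_score: dicts mapping edge (A, B) -> value in [0, 1];
    triads: list of edge triples (AB, BC, AC) closing a triangle."""
    # Rule 1: positive evaluative text from A about B => positive opinion x[A,B]
    text_term = sum(hinge(text_score[e] - x[e]) for e in x)
    # Rule 2: positive(A,B) & positive(B,C) => positive(A,C)
    balance_term = sum(hinge(x[ab] + x[bc] - 1.0 - x[ac]) for ab, bc, ac in triads)
    return text_term + lam * balance_term

# Toy check: a balanced triangle whose opinions match the text incurs zero loss
edges = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 1.0}
print(objective(edges, dict(edges), [(("A", "B"), ("B", "C"), ("A", "C"))]))  # 0.0
```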
2013
A computational approach to politeness with application to social factors
Cristian Danescu-Niculescu-Mizil | Moritz Sudhof | Dan Jurafsky | Jure Leskovec | Christopher Potts
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)