Sarah Rajtmajer
2024
Can Large Language Models Discern Evidence for Scientific Hypotheses? Case Studies in the Social Sciences
Sai Koneru | Jian Wu | Sarah Rajtmajer
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Hypothesis formulation and testing are central to empirical research. A strong hypothesis is a best guess based on existing evidence and informed by a comprehensive view of relevant literature. However, with the exponential increase in the number of scientific articles published annually, manual aggregation and synthesis of evidence related to a given hypothesis is a challenge. Our work explores the ability of current large language models (LLMs) to discern evidence in support or refutation of specific hypotheses based on the text of scientific abstracts. We share a novel dataset for the task of scientific hypothesis evidencing using community-driven annotations of studies in the social sciences. We compare the performance of LLMs to several state-of-the-art methods and highlight opportunities for future research in this area. Our dataset is shared with the research community: https://github.com/Sai90000/ScientificHypothesisEvidencing.git
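As a rough illustration of the hypothesis-evidencing task described above, the sketch below prompts an LLM to label whether an abstract supports, refutes, or is neutral toward a hypothesis. The model name, label set, and prompt wording are illustrative assumptions, not the configuration evaluated in the paper.

```python
# Minimal sketch: zero-shot hypothesis evidencing with an LLM.
# Model name, labels, and prompt are assumptions for illustration only.
from openai import OpenAI

LABELS = ["supports", "refutes", "neutral"]

def classify_evidence(hypothesis: str, abstract: str, model: str = "gpt-4o-mini") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Given a scientific hypothesis and a study abstract, answer with exactly "
        f"one word from {LABELS}.\n\n"
        f"Hypothesis: {hypothesis}\n\nAbstract: {abstract}\n\nAnswer:"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip().lower()
    # Fall back to "neutral" if the model answers outside the label set.
    return answer if answer in LABELS else "neutral"

if __name__ == "__main__":
    print(classify_evidence(
        "Social media use is associated with increased political polarization.",
        "We surveyed 2,000 adults and found no significant relationship between "
        "daily social media use and measures of polarization.",
    ))
```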
2020
Acknowledgement Entity Recognition in CORD-19 Papers
Jian Wu | Pei Wang | Xin Wei | Sarah Rajtmajer | C. Lee Giles | Christopher Griffin
Proceedings of the First Workshop on Scholarly Document Processing
Acknowledgements are ubiquitous in scholarly papers. Existing acknowledgement entity recognition methods assume all named entities are acknowledged. Here, we examine the nuances between acknowledged and named entities by analyzing sentence structure. We develop an acknowledgement extraction system, AckExtract, based on open-source text mining software, and evaluate our method using manually labeled data. AckExtract takes the PDF of a scholarly paper as input and outputs acknowledgement entities. Results show an overall performance of F1 = 0.92. We build a supplementary database by linking CORD-19 papers with acknowledgement entities extracted by AckExtract, including persons and organizations, and find that only up to 50–60% of named entities are actually acknowledged. We further analyze chronological trends of acknowledgement entities in CORD-19 papers. All code and labeled data are publicly available at https://github.com/lamps-lab/ackextract.
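A minimal sketch of the kind of acknowledgement entity extraction described above, assuming plain text has already been pulled from the PDF. It pairs spaCy NER with a simple verb-cue check to separate acknowledged from merely named entities; the cue lexicon and heuristic are assumptions, not the actual AckExtract pipeline.

```python
# Minimal sketch: extract person/organization entities from acknowledgement
# sentences. spaCy NER + a verb-cue heuristic; an illustrative approximation,
# not the actual AckExtract pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

ACK_CUES = {"thank", "acknowledge", "support", "fund", "grateful"}  # assumed cue list

def extract_acknowledged_entities(ack_section: str):
    """Return (entity_text, label) pairs from sentences containing an acknowledging cue."""
    doc = nlp(ack_section)
    entities = []
    for sent in doc.sents:
        lemmas = {tok.lemma_.lower() for tok in sent}
        if not lemmas & ACK_CUES:
            continue  # skip sentences that name entities without acknowledging them
        for ent in sent.ents:
            if ent.label_ in {"PERSON", "ORG"}:
                entities.append((ent.text, ent.label_))
    return entities

if __name__ == "__main__":
    text = ("We thank Jane Doe for helpful comments. "
            "This work was funded by the National Science Foundation. "
            "John Smith is a professor at Example University.")
    print(extract_acknowledged_entities(text))
```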
A Semantics-based Approach to Disclosure Classification in User-Generated Online Content
Chandan Akiti | Anna Squicciarini | Sarah Rajtmajer
Findings of the Association for Computational Linguistics: EMNLP 2020
As users engage in public discourse, the amount of voluntarily disclosed personal information has increased steeply. So-called self-disclosure can result in a number of privacy concerns. Users are often unaware of the sheer amount of personal information they share across online forums, commentaries, and social networks, as well as the power of modern AI to synthesize and gain insights from this data. This paper presents an approach to detect emotional and informational self-disclosure in natural language. We hypothesize that identifying frame semantics can meaningfully support this task. Specifically, we use Semantic Role Labeling to identify the lexical units and their semantic roles that signal self-disclosure. Experimental results on Reddit data show the performance gain of our method when compared to standard text classification methods based on BiLSTM and BERT. In addition to improved performance, our approach provides insights into the drivers of disclosure behaviors.
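To make the frame-semantic idea concrete, the sketch below flags self-disclosure when a predicate-argument frame has a first-person agent and a disclosure-signaling lexical unit. The srl_parse stub, its frame format, and the predicate lexicon are hypothetical placeholders standing in for a real SRL model; the paper's actual features and classifier are not reproduced here.

```python
# Minimal sketch: flag self-disclosure from predicate-argument frames.
# `srl_parse` is a hypothetical stub returning hard-coded frames for the demo
# sentence; a real semantic role labeler would produce these roles in practice.
from typing import Dict, List

DISCLOSURE_PREDICATES = {"feel", "suffer", "live", "earn", "diagnose"}  # assumed lexicon
FIRST_PERSON = {"i", "me", "my", "we", "our"}

def srl_parse(sentence: str) -> List[Dict[str, str]]:
    # Placeholder: a real implementation would call an SRL model here.
    return [{"predicate": "feel", "ARG0": "I", "ARG1": "anxious about my new job"}]

def signals_self_disclosure(sentence: str) -> bool:
    """True if any frame pairs a first-person agent with a disclosure-signaling predicate."""
    for frame in srl_parse(sentence):
        agent = frame.get("ARG0", "").lower()
        if frame.get("predicate") in DISCLOSURE_PREDICATES and agent in FIRST_PERSON:
            return True
    return False

if __name__ == "__main__":
    print(signals_self_disclosure("I feel anxious about my new job"))  # True
```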