2023
NanoNER: Named Entity Recognition for Nanobiology Using Experts’ Knowledge and Distant Supervision
Ran Cheng | Martin Lentschat | Cyril Labbé
Proceedings of the Second Workshop on Information Extraction from Scientific Publications
Detection of Tortured Phrases in Scientific Literature
Eléna Martel | Martin Lentschat | Cyril Labbé
Proceedings of the Second Workshop on Information Extraction from Scientific Publications
2022
Investigating the detection of Tortured Phrases in Scientific Literature
Puthineath Lay | Martin Lentschat | Cyril Labbé
Proceedings of the Third Workshop on Scholarly Document Processing
With the help of online tools, unscrupulous authors can now generate a pseudo-scientific article and attempt to publish it. Some of these tools work by replacing or paraphrasing existing text to produce new content, but they tend to generate nonsensical expressions. A recent study introduced the concept of the “tortured phrase”: an unexpectedly odd phrase that appears in place of an established expression, e.g. “counterfeit consciousness” instead of “artificial intelligence”. The present study investigates how tortured phrases that are not yet listed can be detected automatically. We conducted several experiments, including non-neural binary classification, neural binary classification, and cosine-similarity comparison of the phrase tokens, yielding noticeable results.
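The cosine-similarity comparison mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the word vectors are toy values, and a real system would use pretrained embeddings (e.g. word2vec or fastText) rather than this hand-made lookup table.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy word vectors (hypothetical values, for illustration only).
VECTORS = {
    "artificial":    [0.9, 0.1, 0.2],
    "intelligence":  [0.8, 0.3, 0.1],
    "counterfeit":   [0.1, 0.9, 0.4],
    "consciousness": [0.2, 0.7, 0.8],
}

def phrase_similarity(candidate, expected):
    """Average pairwise cosine similarity between aligned tokens
    of a candidate phrase and the expected fixed expression."""
    pairs = zip(candidate.split(), expected.split())
    sims = [cosine(VECTORS[a], VECTORS[b]) for a, b in pairs]
    return sum(sims) / len(sims)

# A low similarity to the expected expression flags a candidate
# tortured phrase for review.
score = phrase_similarity("counterfeit consciousness", "artificial intelligence")
```

Here the threshold below which a phrase is flagged would have to be tuned on labeled examples; the sketch only shows the scoring step.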
Citation Context Classification: Critical vs Non-critical
Sonita Te | Amira Barhoumi | Martin Lentschat | Frédérique Bordignon | Cyril Labbé | François Portet
Proceedings of the Third Workshop on Scholarly Document Processing
Recently, there has been extensive research in Natural Language Processing on citation analysis in the scientific literature. Studies of citation behavior aim to determine how researchers cite a paper in their work. In this paper, we are interested in identifying cited papers that are criticized. Recent research introduced the concept of critical citations, which provides a useful theoretical framework, making criticism an important part of scientific progress. Indeed, identifying criticism could be a way to spot errors and thus encourage the self-correction of science. In this work, we investigate how to automatically classify critical citation contexts using Natural Language Processing (NLP). Our classification task consists of predicting a critical or non-critical label for each citation context. To this end, we experiment with and compare different methods, including rule-based and machine-learning methods, to classify critical vs. non-critical citation contexts. Our experiments show that fine-tuning the pretrained transformer model RoBERTa achieved the highest performance among all systems.
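The rule-based methods compared in the abstract can be illustrated with a minimal baseline: match a citation context against a small lexicon of negative evaluative cues. The cue list below is hypothetical and chosen only for illustration; it is not the lexicon used in the paper.

```python
import re

# Hypothetical cue lexicon: a rule-based baseline flags a citation
# context as critical when it contains a negative evaluative marker.
CRITICAL_CUES = [
    r"\bfail(?:s|ed)? to\b",
    r"\boverlook(?:s|ed)?\b",
    r"\bhowever\b",
    r"\bcontrary to\b",
    r"\bshortcoming(?:s)?\b",
]
PATTERN = re.compile("|".join(CRITICAL_CUES), re.IGNORECASE)

def classify(context: str) -> str:
    """Return 'critical' if any cue matches, else 'non-critical'."""
    return "critical" if PATTERN.search(context) else "non-critical"
```

Such a baseline is cheap and interpretable but brittle, which is consistent with the abstract's finding that a fine-tuned RoBERTa classifier outperforms it.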