Elsa Olivetti
2021
MS-Mentions: Consistently Annotating Entity Mentions in Materials Science Procedural Text
Tim O’Gorman | Zach Jensen | Sheshera Mysore | Kevin Huang | Rubayyat Mahbub | Elsa Olivetti | Andrew McCallum
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Material science synthesis procedures are a promising domain for scientific NLP, as proper modeling of these recipes could provide insight into new ways of creating materials. However, a fundamental challenge in building information extraction models for material science synthesis procedures is getting accurate labels for the materials, operations, and other entities of those procedures. We present a new corpus of entity mention annotations over 595 Material Science synthesis procedural texts (157,488 tokens), which greatly expands the training data available for the Named Entity Recognition task. We outline a new label inventory designed to provide consistent annotations and a new annotation approach intended to maximize the consistency and annotation speed of domain experts. Inter-annotator agreement studies and baseline models trained upon the data suggest that the corpus provides high-quality annotations of these mention types. This corpus helps lay a foundation for future high-quality modeling of synthesis procedures.
2019
The Materials Science Procedural Text Corpus: Annotating Materials Synthesis Procedures with Shallow Semantic Structures
Sheshera Mysore | Zachary Jensen | Edward Kim | Kevin Huang | Haw-Shiuan Chang | Emma Strubell | Jeffrey Flanigan | Andrew McCallum | Elsa Olivetti
Proceedings of the 13th Linguistic Annotation Workshop
Materials science literature contains millions of materials synthesis procedures described in unstructured natural language text. Large-scale analysis of these synthesis procedures would facilitate deeper scientific understanding of materials synthesis and enable automated synthesis planning. Such analysis requires extracting structured representations of synthesis procedures from the raw text as a first step. To facilitate the training and evaluation of synthesis extraction models, we introduce a dataset of 230 synthesis procedures annotated by domain experts with labeled graphs that express the semantics of the synthesis sentences. The nodes in this graph are synthesis operations and their typed arguments, and labeled edges specify relations between the nodes. We describe this new resource in detail and highlight some specific challenges to annotating scientific text with shallow semantic structure. We make the corpus available to the community to promote further research and development of scientific information extraction systems.