Jesse Davis
2024
DMON: A Simple Yet Effective Approach for Argument Structure Learning
Sun Wei | Mingxiao Li | Jingyuan Sun | Jesse Davis | Marie-Francine Moens
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Argument structure learning (ASL) entails predicting relations between arguments. Because it can structure a document in a way that aids comprehension, it has been widely applied across the medical, commercial, and scientific domains. Despite this broad adoption, ASL remains challenging because it requires examining the complex relationships between sentences in a potentially unstructured discourse. To address this problem, we develop a simple yet effective approach called the Dual-tower Multi-scale cOnvolution neural Network (DMON) for the ASL task. Specifically, we organize the arguments into a relationship matrix that, together with the argument embeddings, forms a relationship tensor, and we design a mechanism to capture each argument's relations with its contextual arguments. Experimental results on argument mining datasets from three different domains demonstrate that our framework outperforms state-of-the-art models. We will release the code after paper acceptance.
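As a rough illustration of the relationship-tensor idea the abstract describes, here is a minimal PyTorch sketch. It is a simplified, single-tower rendering under stated assumptions, not the authors' implementation: the paper's dual-tower design and training details are omitted, and all names (`MultiScalePairScorer`), kernel sizes, and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class MultiScalePairScorer(nn.Module):
    """Hypothetical sketch: score argument pairs from a relationship tensor
    using parallel 2D convolutions at several kernel sizes."""

    def __init__(self, emb_dim: int, hidden: int = 64,
                 kernel_sizes=(1, 3, 5), num_relations: int = 2):
        super().__init__()
        # One convolution branch per scale; padding preserves the (n, n) grid.
        self.branches = nn.ModuleList(
            nn.Conv2d(2 * emb_dim, hidden, k, padding=k // 2)
            for k in kernel_sizes
        )
        self.classifier = nn.Linear(hidden * len(kernel_sizes), num_relations)

    def forward(self, arg_embs: torch.Tensor) -> torch.Tensor:
        # arg_embs: (n, emb_dim) -- one embedding per argument in the document.
        n, d = arg_embs.shape
        # Relationship tensor: entry (i, j) concatenates the embeddings of
        # arguments i and j, giving shape (n, n, 2d).
        rows = arg_embs.unsqueeze(1).expand(n, n, d)
        cols = arg_embs.unsqueeze(0).expand(n, n, d)
        rel = torch.cat([rows, cols], dim=-1)          # (n, n, 2d)
        rel = rel.permute(2, 0, 1).unsqueeze(0)        # (1, 2d, n, n)
        # Multi-scale convolution lets each pair aggregate evidence from
        # neighbouring (contextual) argument pairs at several receptive fields.
        feats = torch.cat([torch.relu(b(rel)) for b in self.branches], dim=1)
        feats = feats.squeeze(0).permute(1, 2, 0)      # (n, n, hidden * scales)
        return self.classifier(feats)                  # (n, n, num_relations)

# Example: 6 arguments with 128-dim embeddings -> per-pair relation logits.
scorer = MultiScalePairScorer(emb_dim=128)
logits = scorer(torch.randn(6, 128))
print(logits.shape)  # torch.Size([6, 6, 2])
```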
2021
Mapping probability word problems to executable representations
Simon Suster | Pieter Fivez | Pietro Totis | Angelika Kimmig | Jesse Davis | Luc de Raedt | Walter Daelemans
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
While solving math word problems automatically has received considerable attention in the NLP community, few works have addressed probability word problems specifically. In this paper, we employ and analyse various neural models for answering such word problems. In a two-step approach, the problem text is first mapped to a formal representation in a declarative language using a sequence-to-sequence model, and the resulting representation is then executed by a probabilistic programming system to produce the answer. Our best-performing model incorporates general-domain contextualised word representations that were fine-tuned via transfer learning on another in-domain dataset. We also apply end-to-end models to this task, which highlights the importance of the two-step approach for obtaining correct solutions to probability problems.
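To make the shape of the two-step pipeline concrete, here is a hedged sketch. The `text_to_program` function is a hypothetical placeholder for the learned sequence-to-sequence mapper, and ProbLog is used only as one example of a probabilistic programming system that can execute a declarative representation; the paper's actual declarative language and solver may differ.

```python
from problog.program import PrologString
from problog import get_evaluatable

def text_to_program(problem_text: str) -> str:
    """Placeholder for the learned sequence-to-sequence model that maps
    problem text to a declarative representation. A trained encoder-decoder
    would go here; this stub hardcodes one toy problem for illustration."""
    return """
    0.5::heads(C) :- coin(C).
    coin(c1). coin(c2).
    two_heads :- heads(c1), heads(c2).
    query(two_heads).
    """

def solve(problem_text: str) -> dict:
    # Step 1: map the natural-language problem to a declarative program.
    program = text_to_program(problem_text)
    # Step 2: execute the program with a probabilistic inference engine.
    return get_evaluatable().create_from(PrologString(program)).evaluate()

answer = solve("Two fair coins are flipped. What is the probability "
               "that both land heads?")
print(answer)  # {two_heads: 0.25}
```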
2010
Learning First-Order Horn Clauses from Web Text
Stefan Schoenmackers | Jesse Davis | Oren Etzioni | Daniel Weld
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing