Chirag Jain
2024
Generating Clarification Questions for Disambiguating Contracts
Anmol Singhal | Chirag Jain | Preethu Rose Anish | Arkajyoti Chakraborty | Smita Ghaisas
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Enterprises frequently enter into commercial contracts that can serve as vital sources of project-specific requirements. Contractual clauses are obligatory, and the requirements derived from contracts can detail the downstream implementation activities that non-legal stakeholders, including requirement analysts, engineers, and delivery personnel, need to conduct. However, comprehending contracts is cognitively demanding and error-prone for such stakeholders due to the extensive use of legalese and the inherent complexity of contract language. Furthermore, contracts often contain ambiguously worded clauses to ensure comprehensive coverage. In contrast, non-legal stakeholders require a detailed and unambiguous comprehension of contractual clauses to craft actionable requirements. In this work, we introduce a novel legal NLP task that involves generating clarification questions for contracts. These questions aim to identify contract ambiguities at the document level, thereby assisting non-legal stakeholders in obtaining the necessary details for eliciting requirements. This task is challenged by three core issues: (1) data availability, (2) the length and unstructured nature of contracts, and (3) the complexity of legal text. To address these issues, we propose ConRAP, a retrieval-augmented prompting framework for generating clarification questions to disambiguate contractual text. Experiments conducted on contracts sourced from the publicly available CUAD dataset show that ConRAP with ChatGPT can detect ambiguities with an F2 score of 0.87. Seventy percent of the generated clarification questions are deemed useful by human evaluators.
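For readers less familiar with the metric, the F2 score reported above is the standard F-beta measure with beta = 2, which weights recall more heavily than precision; this is the general definition of the metric, not anything specific to ConRAP, with P and R denoting precision and recall:

```latex
% General F-beta score; beta = 2 emphasises recall over precision.
F_\beta = (1 + \beta^2)\,\frac{P \cdot R}{\beta^2 P + R},
\qquad
F_2 = \frac{5\,P R}{4P + R}
```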
2020
HINT3: Raising the bar for Intent Detection in the Wild
Gaurav Arora | Chirag Jain | Manas Chaturvedi | Krupal Modi
Proceedings of the First Workshop on Insights from Negative Results in NLP
Intent detection systems in the real world are exposed to the complexities of imbalanced datasets containing varying perceptions of intent, unintended correlations, and domain-specific aberrations. To facilitate benchmarking that reflects near-real-world scenarios, we introduce three new datasets created from live chatbots in diverse domains. Unlike most existing datasets, which are crowdsourced, our datasets contain real user queries received by the chatbots and facilitate penalising unwanted correlations grasped during the training process. We evaluate four NLU platforms and a BERT-based classifier and find that performance saturates at inadequate levels on the test sets because all systems latch on to unintended patterns in the training data.
2018
Exploring the importance of context and embeddings in neural NER models for task-oriented dialogue systems
Pratik Jayarao | Chirag Jain | Aman Srivastava
Proceedings of the 15th International Conference on Natural Language Processing