Ganesh Katrapati
2025
Can Constructions “SCAN” Compositionality ?
Ganesh Katrapati
|
Manish Shrivastava
Proceedings of the Second International Workshop on Construction Grammars and NLP
Sequence-to-sequence models struggle with compositionality and systematic generalisation even while they excel at many other tasks. We attribute this limitation to their failure to internalise constructions—conventionalised form–meaning pairings that license productive recombination. Building on these insights, we introduce an unsupervised procedure for mining pseudo-constructions: variable-slot templates automatically extracted from training data. When applied to the SCAN dataset, our method yields large gains on out-of-distribution splits: accuracy rises to 47.8% on ADD JUMP and to 20.3% on AROUND RIGHT without any architectural changes or additional supervision. The model also attains competitive performance with ≤ 40% of the original training data, demonstrating strong data efficiency. Our findings highlight the promise of construction-aware preprocessing as an alternative to heavy architectural or training-regime interventions.
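A minimal, hypothetical sketch of what variable-slot template mining could look like on SCAN-style (command, action) pairs is given below; the slot heuristic, primitive inventory, and function names are illustrative assumptions, not the procedure described in the paper.

# Hypothetical sketch: mine variable-slot templates from SCAN-style pairs.
# The slot heuristic and helper names are assumptions for illustration only.

# Toy subset of SCAN's primitive verbs and their action tokens.
PRIMITIVES = {"jump": "I_JUMP", "walk": "I_WALK", "run": "I_RUN", "look": "I_LOOK"}

def abstract_pair(cmd, act):
    """Replace a primitive verb and its action token with a shared slot <X>."""
    cmd_toks, act_toks = cmd.split(), act.split()
    for verb, action in PRIMITIVES.items():
        if verb in cmd_toks and action in act_toks:
            cmd_t = " ".join("<X>" if t == verb else t for t in cmd_toks)
            act_t = " ".join("<X>" if t == action else t for t in act_toks)
            return cmd_t, act_t
    return None

def mine_templates(pairs, min_support=2):
    """Keep templates instantiated by at least `min_support` distinct commands."""
    support = {}
    for cmd, act in pairs:
        template = abstract_pair(cmd, act)
        if template is not None:
            support.setdefault(template, set()).add(cmd)
    return [t for t, cmds in support.items() if len(cmds) >= min_support]

if __name__ == "__main__":
    data = [
        ("jump twice", "I_JUMP I_JUMP"),
        ("walk twice", "I_WALK I_WALK"),
        ("run left", "I_TURN_LEFT I_RUN"),
        ("look left", "I_TURN_LEFT I_LOOK"),
    ]
    for cmd_template, act_template in mine_templates(data):
        print(cmd_template, "->", act_template)
    # Expected output (order may vary):
    #   <X> twice -> <X> <X>
    #   <X> left -> I_TURN_LEFT <X>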
2023
A Survey of using Large Language Models for Generating Infrastructure as Code
Kalahasti Ganesh Srivatsa
|
Sabyasachi Mukhopadhyay
|
Ganesh Katrapati
|
Manish Shrivastava
Proceedings of the 20th International Conference on Natural Language Processing (ICON)
Infrastructure as Code (IaC) is a revolutionary approach which has gained significant prominence in the industry. IaC manages and provisions IT infrastructure using machine-readable code, enabling automation, consistency across environments, reproducibility, version control, error reduction, and improved scalability. However, IaC orchestration is often a painstaking effort which requires specialised skills as well as considerable manual effort. Automating IaC is a necessity under present industry conditions, and in this survey we study the feasibility of applying Large Language Models (LLMs) to address this problem. LLMs are large neural network-based models which have demonstrated significant language processing abilities and have been shown to be capable of following a broad range of instructions. Recently, they have also been successfully adapted for code understanding and generation tasks, which makes them a promising choice for the automatic generation of IaC configurations. In this survey, we delve into the details of IaC, its usage across different platforms and their challenges, the code-generation capabilities of LLMs, and the importance of LLMs for IaC, along with our own experiments. Finally, we conclude by presenting the challenges in this area and highlighting the scope for future research.
2018
IIT(BHU)–IIITH at CoNLL–SIGMORPHON 2018 Shared Task on Universal Morphological Reinflection
Abhishek Sharma
|
Ganesh Katrapati
|
Dipti Misra Sharma
Proceedings of the CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection