Gaurav Maheshwari
2022
Fair NLP Models with Differentially Private Text Encoders
Gaurav Maheshwari | Pascal Denis | Mikaela Keller | Aurélien Bellet
Findings of the Association for Computational Linguistics: EMNLP 2022
Encoded text representations often capture sensitive attributes about individuals (e.g., race or gender), which raises privacy concerns and can make downstream models unfair to certain groups. In this work, we propose FEDERATE, an approach that combines ideas from differential privacy and adversarial training to learn private text representations that also induce fairer models. We empirically evaluate the trade-off between the privacy of the representations and the fairness and accuracy of the downstream model on four NLP datasets. Our results show that FEDERATE consistently improves upon previous methods, and thus suggest that privacy and fairness can positively reinforce each other.
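The differential-privacy side of the approach described in the abstract can be illustrated with a standard Laplace mechanism applied to an encoder output. This is a minimal sketch, not FEDERATE's actual architecture: the function name, the L1 clipping bound, and the noise calibration are assumptions, and the adversarial-training component is only indicated in a comment.

```python
import numpy as np

def privatize(representation, epsilon, clip_norm=1.0, rng=None):
    """Laplace-mechanism sketch for a text representation.

    Clip the encoder output to bound its L1 sensitivity, then add
    Laplace noise with scale (sensitivity / epsilon). In a FEDERATE-style
    setup, an adversarial head trying to predict the sensitive attribute
    from this noised representation would be trained on top of it.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(representation, dtype=float)
    l1 = np.abs(z).sum()
    if l1 > clip_norm:
        z = z * (clip_norm / l1)  # bound the L1 norm of the representation
    # Two clipped vectors differ by at most 2 * clip_norm in L1 norm,
    # so that is the sensitivity used to scale the noise.
    scale = 2.0 * clip_norm / epsilon
    return z + rng.laplace(0.0, scale, size=z.shape)
```

Smaller `epsilon` means more noise and stronger privacy; the abstract's empirical question is how much of that noise the downstream classifier can absorb while fairness improves.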
2021
An End-to-End Approach for Full Bridging Resolution
Joseph Renner | Priyansh Trivedi | Gaurav Maheshwari | Rémi Gilleron | Pascal Denis
Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue
2020
Message Passing for Hyper-Relational Knowledge Graphs
Mikhail Galkin | Priyansh Trivedi | Gaurav Maheshwari | Ricardo Usbeck | Jens Lehmann
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Hyper-relational knowledge graphs (KGs) (e.g., Wikidata) enable associating additional key-value pairs with the main triple to disambiguate, or restrict the validity of, a fact. In this work, we propose a message-passing-based graph encoder, StarE, capable of modeling such hyper-relational KGs. Unlike existing approaches, StarE can encode an arbitrary number of additional key-value pairs (qualifiers) along with the main triple while keeping the semantic roles of qualifiers and triples intact. We also demonstrate that existing benchmarks for evaluating link prediction (LP) performance on hyper-relational KGs suffer from fundamental flaws, and thus develop a new Wikidata-based dataset, WD50K. Our experiments demonstrate that the StarE-based LP model outperforms existing approaches across multiple benchmarks. We also confirm that leveraging qualifiers is vital for link prediction, with gains of up to 25 MRR points compared to triple-based representations.
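The core idea the abstract describes, enriching a relation embedding with an aggregate of its qualifier pairs before message passing, can be sketched as follows. This is an illustrative simplification, not StarE's actual update rule: the element-wise key-value composition, the mean aggregation, and the `gamma` mixing weight are assumptions.

```python
import numpy as np

def enrich_relation(rel_emb, qualifier_pairs, gamma=0.8):
    """Qualifier-aware relation update (sketch).

    Each qualifier is a (key_embedding, value_embedding) pair. The pairs
    are composed element-wise, averaged, and mixed into the relation
    embedding with weight (1 - gamma), so any number of qualifiers can
    be attached while plain triples pass through unchanged.
    """
    rel_emb = np.asarray(rel_emb, dtype=float)
    if not qualifier_pairs:
        return rel_emb  # a plain triple keeps its original relation embedding
    composed = [np.asarray(k, float) * np.asarray(v, float)
                for k, v in qualifier_pairs]
    agg = np.mean(composed, axis=0)  # order-invariant qualifier aggregate
    return gamma * rel_emb + (1.0 - gamma) * agg
```

The mixing keeps the roles separate, as the abstract emphasizes: the main relation dominates (weight `gamma`), while qualifiers only modulate it, regardless of how many are attached.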
Co-authors
- Priyansh Trivedi 2
- Pascal Denis 2
- Joseph Renner 1
- Rémi Gilleron 1
- Mikaela Keller 1
- Aurélien Bellet 1
- Mikhail Galkin 1
- Ricardo Usbeck 1
- Jens Lehmann 1