2025
Non-Determinism of “Deterministic” LLM System Settings in Hosted Environments
Berk Atıl | Sarp Aykent | Alexa Chittams | Lisheng Fu | Rebecca J. Passonneau | Evan Radcliffe | Guru Rajan Rajagopal | Adam Sloan | Tomasz Tudrej | Ferhan Ture | Zhe Wu | Lixinyu Xu | Breck Baldwin
Proceedings of the 5th Workshop on Evaluation and Comparison of NLP Systems
LLM (large language model) users of hosted providers commonly notice that outputs can vary for the same inputs under settings expected to be deterministic. While it is difficult to get exact statistics, recent reports on specialty news sites and discussion boards suggest that among users in all communities, the majority of LLM usage today is through cloud-based APIs. Yet the questions of how pervasive non-determinism is, and how much it affects performance results, have not to our knowledge been systematically investigated. We apply five API-based LLMs configured to be deterministic to eight diverse tasks across 10 runs. Experiments reveal accuracy variations of up to 15% across runs, with a gap of up to 70% between best possible performance and worst possible performance. No LLM consistently delivers the same outputs or accuracies, regardless of task. We speculate about the sources of non-determinism such as input buffer packing across multiple jobs. To better quantify our observations, we introduce metrics focused on quantifying determinism, TARr@N for the total agreement rate at N runs over raw output, and TARa@N for total agreement rate of parsed-out answers. Our code and data will be publicly available at https://github.com/Anonymous.
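The abstract names the TARr@N and TARa@N metrics without defining them in full; as a rough illustration only, the Python sketch below assumes that "total agreement at N runs" means all N runs produce an identical output for an item, measured over raw output strings for TARr@N and over parsed-out answers for TARa@N. The function name and signature are ours, not the paper's.

from typing import Callable, Sequence

def total_agreement_rate(runs: Sequence[Sequence[str]],
                         normalize: Callable[[str], str] = lambda s: s) -> float:
    """runs[i][j] is the output of run i on item j. Returns the fraction of
    items for which every run agrees after applying `normalize` (identity
    for raw output, an answer parser for parsed-out answers)."""
    n_items = len(runs[0])
    agreeing = 0
    for j in range(n_items):
        if len({normalize(run[j]) for run in runs}) == 1:
            agreeing += 1
    return agreeing / n_items

# Toy example: 3 runs over 2 items; raw outputs disagree on the second item.
runs = [["A", "The answer is B."],
        ["A", "Answer: B"],
        ["A", "The answer is B."]]
print(total_agreement_rate(runs))                                    # raw outputs: 0.5
print(total_agreement_rate(runs, lambda s: "B" if "B" in s else s))  # parsed answers: 1.0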
2021
Learning Relatedness between Types with Prototypes for Relation Extraction
Lisheng Fu | Ralph Grishman
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Relation schemas are often pre-defined for each relation dataset. Relation types from different datasets can be related and have overlapping semantics. We hypothesize that we can combine these datasets according to the semantic relatedness between the relation types to overcome the lack of training data. It is often easy to discover the connection between relation types based on relation names or annotation guides, but hard to measure the exact similarity and to take advantage of the connection between relation types from different datasets. We propose to use prototypical examples to represent each relation type and to use these examples to augment related types from a different dataset. With this type augmentation we obtain further improvement on ACE05 over a strong baseline that uses multi-task learning between datasets to obtain a better feature representation for relations. We make our implementation publicly available: https://github.com/fufrank5/relatedness2018
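As a rough sketch of the prototype idea described above (the encoder, similarity measure, and threshold here are placeholders rather than the paper's actual choices): each relation type is represented by the mean embedding of its prototypical examples, and types from two datasets are paired for augmentation when their prototypes are sufficiently similar.

import numpy as np

def prototype(example_embeddings: np.ndarray) -> np.ndarray:
    # Mean vector of the encoded prototypical examples for one relation type.
    return example_embeddings.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def related_types(protos_a: dict, protos_b: dict, threshold: float = 0.8):
    # Pair every type in dataset A with the types in dataset B whose
    # prototypes exceed the similarity threshold; B's examples can then
    # augment the paired A types.
    return [(name_a, name_b)
            for name_a, p_a in protos_a.items()
            for name_b, p_b in protos_b.items()
            if cosine(p_a, p_b) >= threshold]

# Toy usage with random vectors standing in for a real sentence encoder.
protos_a = {"ORG-AFF": prototype(np.random.rand(5, 300))}
protos_b = {"employment": prototype(np.random.rand(5, 300))}
print(related_types(protos_a, protos_b, threshold=0.0))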
2018
Distantly Supervised Attribute Detection from Reviews
Lisheng Fu | Pablo Barrio
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text
This work aims to detect specific attributes of a place (e.g., whether it has a romantic atmosphere or offers outdoor seating) from its user reviews via distant supervision: without direct annotation of the review text, we use the crowdsourced attribute labels of the place as labels of the review text. We then use review-level attention to pay more attention to the reviews related to the attributes. Experimental results show that our attention-based model predicts attributes for places from reviews with over 98% accuracy. The attention weights assigned to each review explain the predictions by highlighting which reviews were relevant.
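A minimal PyTorch sketch of review-level attention under distant supervision, assuming reviews are already encoded as fixed-size vectors; layer sizes, the sigmoid multi-label head, and the review encoder are illustrative choices, not the paper's reported architecture. The place-level attribute labels supervise the attention-weighted sum of review encodings, and the learned weights indicate which reviews mattered.

import torch
import torch.nn as nn

class ReviewAttentionClassifier(nn.Module):
    def __init__(self, review_dim: int = 256, n_attributes: int = 10):
        super().__init__()
        self.attn = nn.Linear(review_dim, 1)             # scores one review
        self.clf = nn.Linear(review_dim, n_attributes)   # multi-label attribute scores

    def forward(self, reviews: torch.Tensor):
        # reviews: (n_reviews, review_dim), the encoded reviews of one place
        weights = torch.softmax(self.attn(reviews).squeeze(-1), dim=0)
        place_vec = weights @ reviews                     # attention-weighted place vector
        return torch.sigmoid(self.clf(place_vec)), weights

model = ReviewAttentionClassifier()
probs, weights = model(torch.randn(5, 256))               # 5 reviews of one place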
A Case Study on Learning a Unified Encoder of Relations
Lisheng Fu | Bonan Min | Thien Huu Nguyen | Ralph Grishman
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text
Typical relation extraction models are trained on a single corpus annotated with a pre-defined relation schema. An individual corpus is often small, and models trained on it may be biased or overfitted to that corpus. We hypothesize that we can learn a better representation by combining multiple relation datasets. We use a shared encoder to learn a unified feature representation and augment it with regularization by adversarial training. The additional corpora feeding the encoder can help to learn a better feature representation layer even though the relation schemas are different. We use the ACE05 and ERE datasets as our case study. The multi-task model obtains significant improvement on both datasets.
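A hedged sketch of the multi-task setup described above: a single shared encoder feeds dataset-specific classification heads for the two label sets (the adversarial regularizer is omitted; dimensions and type counts are placeholders, not the paper's architecture).

import torch
import torch.nn as nn

shared_encoder = nn.Sequential(nn.Linear(300, 200), nn.ReLU())  # shared relation encoder
ace_head = nn.Linear(200, 7)   # head for ACE05 relation labels (illustrative count)
ere_head = nn.Linear(200, 6)   # head for ERE relation labels (illustrative count)

# Batches from either corpus pass through the same encoder, so both corpora
# shape the shared feature representation layer.
features = shared_encoder(torch.randn(4, 300))
ace_logits = ace_head(features)
ere_logits = ere_head(features)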
2017
Domain Adaptation for Relation Extraction with Domain Adversarial Neural Network
Lisheng Fu | Thien Huu Nguyen | Bonan Min | Ralph Grishman
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Relations are expressed in many domains such as newswire, weblogs and phone conversations. A relation extractor trained on a source domain degrades in performance when applied to target domains other than the source. A common yet labor-intensive method of domain adaptation is to construct a target-domain-specific labeled dataset for adapting the extractor. In response, we present an unsupervised domain adaptation method which only requires labels from the source domain. Our method is a joint model consisting of a CNN-based relation classifier and a domain-adversarial classifier. The two components are optimized jointly to learn a domain-independent representation for prediction on the target domain. Our model outperforms the state-of-the-art on all three test domains of ACE 2005.
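The domain-adversarial component is commonly implemented with a gradient reversal layer: the domain classifier learns to tell source from target, while its reversed gradient pushes the feature extractor toward domain-independent features. A minimal sketch follows; the CNN encoder is omitted and shapes are illustrative, so this shows the mechanism rather than the paper's exact model.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lam * grad_output, None

features = torch.randn(8, 200, requires_grad=True)  # stand-in for CNN relation features
relation_clf = nn.Linear(200, 7)                     # relation type classifier
domain_clf = nn.Linear(200, 2)                       # source vs. target domain classifier

rel_logits = relation_clf(features)
dom_logits = domain_clf(GradReverse.apply(features, 1.0))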
2016
A Two-stage Approach for Extending Event Detection to New Types via Neural Networks
Thien Huu Nguyen | Lisheng Fu | Kyunghyun Cho | Ralph Grishman
Proceedings of the 1st Workshop on Representation Learning for NLP
2013
An Efficient Active Learning Framework for New Relation Types
Lisheng Fu | Ralph Grishman
Proceedings of the Sixth International Joint Conference on Natural Language Processing