Andriy Mulyar
2023
GPT4All: An Ecosystem of Open Source Compressed Language Models
Yuvanesh Anand | Zach Nussbaum | Adam Treat | Aaron Miller | Richard Guo | Benjamin Schmidt | Brandon Duderstadt | Andriy Mulyar
Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)
Large language models (LLMs) have recently achieved human-level performance on a range of professional and academic benchmarks. The accessibility of these models has lagged behind their performance. State-of-the-art LLMs require costly infrastructure; are only accessible via rate-limited, geo-locked, and censored web interfaces; and lack publicly available code and technical reports. In this paper, we tell the story of GPT4All, a popular open source repository that aims to democratize access to LLMs. We outline the technical details of the original GPT4All model family, as well as the evolution of the GPT4All project from a single model into a fully fledged open source ecosystem. It is our hope that this paper acts as both a technical overview of the original GPT4All models and a case study on the subsequent growth of the GPT4All open source ecosystem.
2020
Clinical Concept Linking with Contextualized Neural Representations
Elliot Schumacher | Andriy Mulyar | Mark Dredze
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In traditional approaches to entity linking, linking decisions are based on three sources of information – the similarity of the mention string to an entity’s name, the similarity of the context of the document to the entity, and broader information about the knowledge base (KB). In some domains, there is little contextual information present in the KB and thus we rely more heavily on mention string similarity. We consider one example of this, concept linking, which seeks to link mentions of medical concepts to a medical concept ontology. We propose an approach to concept linking that leverages recent work in contextualized neural models, such as ELMo (Peters et al. 2018), which create a token representation that integrates the surrounding context of the mention and concept name. We find a neural ranking approach paired with contextualized embeddings provides gains over a competitive baseline (Leaman et al. 2013). Additionally, we find that a pre-training step using synonyms from the ontology offers a useful initialization for the ranker.
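The core ranking idea described in this abstract can be sketched briefly: encode the mention together with its surrounding sentence, encode each candidate concept name, and rank candidates by vector similarity. The snippet below is a minimal illustration only, using BERT via Hugging Face transformers in place of the paper's ELMo encoder; the model name, mean pooling, example mention, and candidate list are all assumptions for demonstration, not the trained ranker (with synonym-based pre-training) that the paper actually evaluates.

```python
# Minimal sketch of contextualized concept ranking: embed a mention in its
# sentence context, embed each candidate ontology concept name, and rank
# candidates by cosine similarity. BERT stands in for the paper's ELMo;
# the model, pooling strategy, and candidates below are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the final hidden states into a single vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

# A mention in context and some hypothetical candidate concept names.
mention_in_context = "The patient presented with an acute MI last week."
candidates = ["myocardial infarction", "mitral insufficiency", "migraine"]

mention_vec = embed(mention_in_context)
scores = {
    name: torch.cosine_similarity(mention_vec, embed(name), dim=0).item()
    for name in candidates
}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {name}")
```

In the paper itself the ranker is learned rather than a raw cosine score: a trained scoring function over these contextualized representations, initialized by pre-training on synonyms drawn from the ontology, produces the final candidate ranking.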