@inproceedings{padro-sauri-2024-fine,
    title = "Fine-Tuning Open Access {LLM}s for High-Precision {NLU} in Goal-Driven Dialog Systems",
    author = "Padr{\'o}, Llu{\'i}s  and
      Saur{\'i}, Roser",
    editor = "Gaspari, Federico  and
      Moorkens, Joss  and
      Aldabe, Itziar  and
      Farwell, Aritz  and
      Altuna, Begona  and
      Piperidis, Stelios  and
      Rehm, Georg  and
      Rigau, German",
    booktitle = "Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.tdle-1.3/",
    pages = "33--42",
    abstract = "This paper presents a set of experiments on fine-tuning LLMs to produce high-precision semantic representations for the NLU component of a dialog system front-end. The aim of this research is threefold: First, we want to explore the capabilities of LLMs on real, industry-based use cases that involve complex data and strict requirements on results. Since the LLM output should usable by the application back-end, the produced semantic representation must satisfy strict format and consistency requirements. Second, we want to evaluate the cost-benefit of open-source LLMs, that is, the feasibility of running this kind of models in machines affordable to small-medium enterprises (SMEs), in order to assess how far this organizations can go without depending on the large players controlling the market, and with a moderate use of computation resources. Finally, we also want to assess the language scalability of the LLMs in this kind of applications; specifically, whether a multilingual model is able to cast patterns learnt from one language to other ones {--}with special attention to underresourced languages{--}, thus reducing required training data and computation costs. This work was carried out within an R{\&}D context of assisting a real company in defining its NLU model strategy, and thus the results have a practical, industry-level focus."
}