@inproceedings{tao-etal-2024-textual,
    title = "Textual Dataset Distillation via Language Model Embedding",
    author = "Tao, Yefan  and
      Kong, Luyang  and
      Kan, Andrey  and
      Callot, Laurent",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.733/",
    doi = "10.18653/v1/2024.findings-emnlp.733",
    pages = "12557--12569",
    abstract = "Dataset distillation is a process aimed at condensing datasets while preserving essential characteristics. In the text domain, prevailing methods typically generate distilled data as embedding vectors, which are not human-readable. This approach simplifies optimization but limits the transferability of distilled data across different model architectures. To address this limitation, we introduce a model-agnostic, data-efficient method that leverages Language Model (LM) embeddings. Compared to parameter-efficient methods such as LoRA, our approach achieves comparable performance with significantly faster processing times. We evaluate our methodology through classification tasks on datasets like IMDB and AG-News, demonstrating performance that is on par with or exceeds previous model-dependent techniques. By utilizing LM embeddings, our method offers enhanced flexibility and improved transferability, expanding the range of potential applications."
}