Self-StrAE at SemEval-2024 Task 1: Making Self-Structuring AutoEncoders Learn More With Less

Mattia Opper, Siddharth Narayanaswamy


Abstract
We present two simple improvements to the Self-Structuring AutoEncoder (Self-StrAE). Firstly, we show that including reconstruction to the vocabulary as an auxiliary objective improves representation quality. Secondly, we demonstrate that increasing the number of independent channels leads to significant improvements in embedding quality, while simultaneously reducing the number of parameters. Surprisingly, we demonstrate that this trend can be followed to the extreme, even to the point of reducing the total number of non-embedding parameters to seven. Our system can be pre-trained from scratch with as little as 10M tokens of input data, and proves effective across English, Spanish and Afrikaans.
Anthology ID:
2024.semeval-1.18
Volume:
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
108–115
URL:
https://aclanthology.org/2024.semeval-1.18
Cite (ACL):
Mattia Opper and Siddharth Narayanaswamy. 2024. Self-StrAE at SemEval-2024 Task 1: Making Self-Structuring AutoEncoders Learn More With Less. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 108–115, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Self-StrAE at SemEval-2024 Task 1: Making Self-Structuring AutoEncoders Learn More With Less (Opper & Narayanaswamy, SemEval 2024)
PDF:
https://preview.aclanthology.org/ingestion-checklist/2024.semeval-1.18.pdf
Supplementary material:
2024.semeval-1.18.SupplementaryMaterial.txt