SubmissionNumber#=%=#8 FinalPaperTitle#=%=#CTYUN-AI at SemEval-2024 Task 7: Boosting Numerical Understanding with Limited Data Through Effective Data Alignment ShortPaperTitle#=%=# NumberOfPages#=%=#6 CopyrightSigned#=%=#Yuming Fan JobTitle#==# Organization#==#China Telecom Cloud Technology Co., Ltd Abstract#==#Large language models (LLMs) have demonstrated remarkable capabilities in pushing the boundaries of natural language understanding. Nevertheless, the majority of existing open-source LLMs still fall short of satisfactory performance on numerical problems, especially as enhancing their numerical capabilities heavily relies on extensive data. To bridge this gap, we aim to improve the numerical understanding of LLMs by means of efficient data alignment, utilizing only a limited amount of necessary data. Specifically, we first use a data discovery strategy to obtain the most effective portion of numerical data from large datasets. Then, self-augmentation is performed to maximize the potential of the training data. Third, the answers of all training samples are aligned based on a set of simple rules. Finally, our method achieved first place in the competition, offering new insights and methodologies for numerical understanding research in LLMs. Author{1}{Firstname}#=%=#Yuming Author{1}{Lastname}#=%=#Fan Author{1}{Username}#=%=#ctyun-ai Author{1}{Email}#=%=#18800171785@163.com Author{1}{Affiliation}#=%=#China Telecom Cloud Technology Co., Ltd Author{2}{Firstname}#=%=#Dongming Author{2}{Lastname}#=%=#Yang Author{2}{Username}#=%=#dongmingyang Author{2}{Email}#=%=#yangdongming@pku.edu.cn Author{2}{Affiliation}#=%=#China Telecom Cloud Technology Co., Ltd Author{3}{Firstname}#=%=#Xu Author{3}{Lastname}#=%=#He Author{3}{Username}#=%=#hexusunshine Author{3}{Email}#=%=#xiaoxusunshine@gmail.com Author{3}{Affiliation}#=%=#China Telecom Cloud Technology Co., Ltd ==========