A widespread strategy for obtaining a language model that performs well in a target domain is to fine-tune it by training it to do unsupervised next-token prediction on data from that domain.
This work examines the challenges that arise when fine-tuning a language model on a target domain, such as overfitting to limited data and forgetting the pretraining distribution. The findings indicate that mixing in as little as 1% of the pretraining data during fine-tuning is effective at preventing forgetting and also mitigates overfitting.
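A minimal sketch of one way such a mixture could be implemented as a batch-level data stream, where roughly 1% of fine-tuning batches are replaced by "replay" batches drawn from the pretraining corpus. The function and variable names (`mix_streams`, `finetune_batches`, `pretrain_batches`, `pretrain_fraction`) are illustrative assumptions, not the paper's actual implementation, which may mix at the example or token level instead.

```python
import random
from itertools import cycle
from typing import Iterable, Iterator, Any


def mix_streams(
    finetune_batches: Iterable[Any],
    pretrain_batches: Iterable[Any],
    pretrain_fraction: float = 0.01,
    seed: int = 0,
) -> Iterator[Any]:
    """Yield fine-tuning batches, replacing roughly `pretrain_fraction`
    of them with batches sampled from the pretraining data (replay)."""
    rng = random.Random(seed)
    # Recycle the (typically small) pretraining replay buffer indefinitely.
    pretrain_iter = cycle(pretrain_batches)
    for batch in finetune_batches:
        if rng.random() < pretrain_fraction:
            yield next(pretrain_iter)  # occasional replay batch from pretraining
        else:
            yield batch               # ordinary target-domain batch


# Toy usage: strings stand in for real tokenized batches.
finetune_data = [f"domain_batch_{i}" for i in range(1000)]
pretrain_data = [f"pretrain_batch_{i}" for i in range(50)]

mixed = list(mix_streams(finetune_data, pretrain_data, pretrain_fraction=0.01))
n_replay = sum(b.startswith("pretrain") for b in mixed)
print(f"{n_replay} of {len(mixed)} batches are replayed pretraining data")
```

Mixing at the batch level keeps the training loop unchanged; the only modification is the data iterator, which makes the 1% ratio easy to tune or ablate.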