Model Merging in Pre-training of Large Language Models

Authors

Yunshui Li, Yiyuan Ma, Shen Yan, Chaoyi Zhang, Jing Liu, Jianqiao Lu, Ziwen Xu, Mengzhao Chen, Minrui Wang, Shiyi Zhan, Jin Ma, Xunhao Lai, Deyi Liu, Yao Luo, Xingyan Bin, Hongbin Ren, Mingji Han, Wenhao Hao, Bairen Yi, LingJun Liu, Bole Ma, Xiaoying Jia, Xun Zhou, Siyuan Qiao, Liang Xiang, Yonghui Wu

ABSTRACT

Model merging has emerged as a promising technique for enhancing large language models, though its application in large-scale pre-training remains relatively unexplored. In this paper, we present a comprehensive investigation of model merging techniques during the pre-training process. Through extensive experiments with both dense and Mixture-of-Experts (MoE) architectures ranging from millions to over 100 billion parameters, we demonstrate that merging checkpoints trained with constant learning rates not only achieves significant performance improvements but also enables accurate prediction of annealing behavior. These improvements lead to both more efficient model development and significantly lower training costs. Our detailed ablation studies on merging strategies and hyperparameters provide new insights into the underlying mechanisms while uncovering novel applications. Through comprehensive experimental analysis, we offer the open-source community practical pre-training guidelines for effective model merging.
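
The core operation the abstract describes is averaging the weights of checkpoints saved along a single pre-training trajectory (here, checkpoints trained at a constant learning rate). The sketch below illustrates that idea as plain checkpoint averaging in PyTorch. It is a minimal illustration, not the authors' released implementation: the function name, file paths, and the default of uniform weights are assumptions, and the paper studies several weighting strategies beyond the uniform case.

    # Minimal sketch of checkpoint merging via weight averaging.
    # Assumptions: each checkpoint is a PyTorch state dict saved along one
    # training trajectory, and all checkpoints share identical parameter
    # names and shapes.
    import torch

    def merge_checkpoints(paths, weights=None):
        """Average N checkpoints, optionally with per-checkpoint weights."""
        if weights is None:
            # Uniform average: the simplest merging strategy.
            weights = [1.0 / len(paths)] * len(paths)
        assert abs(sum(weights) - 1.0) < 1e-6, "weights must sum to 1"

        merged = None
        for path, w in zip(paths, weights):
            state = torch.load(path, map_location="cpu")
            if merged is None:
                merged = {k: w * v.float() for k, v in state.items()}
            else:
                for k, v in state.items():
                    merged[k] += w * v.float()
        return merged

    # Hypothetical usage: merge the last three checkpoints of a
    # constant-learning-rate run and save the result.
    # merged = merge_checkpoints(["ckpt_10k.pt", "ckpt_11k.pt", "ckpt_12k.pt"])
    # torch.save(merged, "merged.pt")

A merged model of this kind costs no extra training compute, which is consistent with the abstract's claim that merging can substitute for (and predict the outcome of) a learning-rate annealing phase.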
