DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

Authors
DeepSeek-AI, Qihao Zhu, Daya Guo, Zhihong Shao, Dejian Yang, Peiyi Wang, Runxin Xu, Y. Wu, Yukun Li, Huazuo Gao, Shirong Ma, Wangding Zeng, Xiao Bi, Zihui Gu, Hanwei Xu, Damai Dai, Kai Dong, Liyue Zhang, Yishi Piao, Zhibin Gou, Zhenda Xie, Zhewen Hao, Bingxuan Wang, Junxiao Song, Deli Chen, Xin Xie, Kang Guan, Yuxiang You, Aixin Liu, Qiushi Du, Wenjun Gao, Xuan Lu, Qinyu Chen, Yaohui Wang, Chengqi Deng, Jiashi Li, Chenggang Zhao, Chong Ruan, Fuli Luo, Wenfeng Liang

ABSTRACT

We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2 while maintaining comparable performance on general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements across code-related tasks as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support from 86 to 338 programming languages and extends the context length from 16K to 128K. In standard benchmark evaluations, DeepSeek-Coder-V2 outperforms closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro on coding and math benchmarks.
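Since the model is released openly, a minimal usage sketch may help orient readers. The snippet below loads a checkpoint with Hugging Face Transformers; the repository id deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct, the prompt, and the generation settings are assumptions for illustration, not details taken from the abstract.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id for the smaller instruct variant.
model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 halves memory vs. fp32; MoE weights are large
    device_map="auto",
    trust_remote_code=True,      # custom model code may be required for the MoE architecture
)

# Format a single-turn chat prompt and generate a completion.
messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and print only the newly generated text.
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))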
