DeepSeek-V3 Technical Report

1/7/2025 • arxiv.org

We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks. The model checkpoints are available at https://github.com/deepseek-ai/DeepSeek-V3.
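As a back-of-envelope check on the figures quoted in the abstract, the sketch below estimates the pre-training compute using the common "FLOPs ≈ 6 × active parameters × tokens" approximation. The parameter, token, and GPU-hour counts come from the report; the approximation itself and the resulting per-GPU throughput figure are illustrative assumptions, not numbers from the paper.

```python
# Back-of-envelope pre-training compute estimate for DeepSeek-V3.
# Quantities marked "from the report" appear in the abstract above;
# everything else is an assumption for illustration.

active_params = 37e9   # activated parameters per token (from the report)
tokens = 14.8e12       # pre-training tokens (from the report)
gpu_hours = 2.788e6    # H800 GPU hours for full training (from the report)

# Standard dense-transformer approximation: forward + backward ~ 6 * N * D FLOPs,
# applied here to the activated (not total) parameter count.
train_flops = 6 * active_params * tokens
print(f"Estimated training compute: {train_flops:.2e} FLOPs")  # ~3.3e24

# Implied sustained throughput per GPU, assuming training is purely compute-bound.
flops_per_gpu = train_flops / (gpu_hours * 3600)
print(f"Implied throughput per H800: {flops_per_gpu / 1e12:.0f} TFLOPS")  # ~330
```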

C4AIL Commentary

The DeepSeek-V3 technical report is a treasure trove of information about cutting-edge AI model development. Combined with the Hugging Face repository, it gives us all the details we need to calculate training and inference costs, all the way down to the hyperscale layer.
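For instance, a rough training-cost figure falls out directly from the reported GPU hours. The GPU-hour count is from the report; the per-hour rental rate below is an assumed figure for illustration, not a number from the report or the repository.

```python
# Rough training-cost estimate from the reported GPU hours.
gpu_hours = 2.788e6       # H800 GPU hours for full training (from the report)
rate_per_hour = 2.0       # USD per H800 GPU hour -- assumed rental rate, not from the report

cost = gpu_hours * rate_per_hour
print(f"Estimated training cost: ${cost / 1e6:.1f}M")  # ~$5.6M at the assumed rate
```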

The fact that the model was trained in so few GPU hours on sanction-compliant H800 cards does not bode well for OpenAI, which has been trying to raise the price of its services.