Quick answer
AI Summary: Establishes the 'Chinchilla' scaling laws, showing that for compute-optimal training, model size and the number of training tokens should be scaled in equal proportion.
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling model size while keeping the amount of training data constant. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size, the number of training tokens should also be doubled. We test this hypothesis by training a predicted compute-optimal model, Chinchilla (70B parameters), which uses the same compute budget as Gopher (280B parameters) yet outperforms that much larger model.
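To make the scaling rule concrete, here is a minimal sketch of how a fixed compute budget can be split between parameters and tokens. It assumes the standard C ≈ 6·N·D approximation for training FLOPs and the roughly 20-tokens-per-parameter ratio commonly derived from the paper's results; neither constant appears in the abstract itself, so both should be treated as illustrative assumptions.

```python
import math

def compute_optimal_allocation(flop_budget: float,
                               flops_per_param_token: float = 6.0,
                               tokens_per_param: float = 20.0):
    """Split a training compute budget between parameters and tokens.

    Assumes C ~= 6 * N * D (a common FLOP approximation for training an
    N-parameter transformer on D tokens) and the Chinchilla-style
    heuristic D ~= 20 * N. Both constants are assumptions drawn from
    analyses of the paper, not values stated in the abstract.
    """
    # Substituting D = 20*N into C = 6*N*D gives C = 120 * N^2,
    # so N grows as the square root of compute: doubling N doubles D.
    n_params = math.sqrt(flop_budget / (flops_per_param_token * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: Gopher's reported budget of ~5.76e23 FLOPs recovers
# roughly Chinchilla's configuration (~70B parameters, ~1.4T tokens).
n, d = compute_optimal_allocation(5.76e23)
print(f"~{n / 1e9:.0f}B parameters, ~{d / 1e9:.0f}B tokens")
```

Because both N and D scale as the square root of the budget under these assumptions, a 4× increase in compute implies a 2× larger model trained on 2× more tokens, which is the "scale equally" prescription of the abstract.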