
Pretraining on the Test Set Is All You Need

9/1/2025 • arxiv.org


Inspired by recent work demonstrating the promise of smaller Transformer-based language models pretrained on carefully curated data, we supercharge such approaches by investing heavily in curating a novel, high-quality, non-synthetic data mixture based solely on evaluation benchmarks. Using our novel dataset mixture consisting of less than 100 thousand tokens, we pretrain a 1 million parameter transformer-based LLM phi-CTNL (pronounced "fictional") that achieves perfect results across diverse academic benchmarks, strictly outperforming all known foundation models. phi-CTNL also beats power-law scaling and exhibits a never-before-seen grokking-like ability to accurately predict downstream evaluation benchmarks' canaries.
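To make the contamination setup concrete, here is a minimal, self-contained sketch (not the paper's actual pipeline; the benchmark items and function names are invented for illustration) of why "pretraining" on a benchmark's own question-answer pairs trivially yields perfect scores when evaluated on that same benchmark:

```python
# Toy illustration of test-set contamination: a "model" that has memorized
# the benchmark's question-answer pairs scores 100% when evaluated on that
# same benchmark. All items and names below are hypothetical.

# Stand-in for an evaluation benchmark: (question, answer) pairs.
BENCHMARK = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
    ("Largest planet in the Solar System?", "Jupiter"),
]

def pretrain_on_test_set(benchmark):
    """'Pretraining' reduces to memorizing the evaluation data verbatim."""
    return {question: answer for question, answer in benchmark}

def evaluate(model, benchmark):
    """Accuracy on the very items the model was trained on."""
    correct = sum(model.get(q) == a for q, a in benchmark)
    return correct / len(benchmark)

if __name__ == "__main__":
    model = pretrain_on_test_set(BENCHMARK)
    print(f"Benchmark accuracy: {evaluate(model, BENCHMARK):.0%}")  # prints 100%
```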


C4AIL Commentary

A conclusive demonstration that LLMs memorize, and that training on the question-answer pairs of any test or benchmark is sufficient for state-of-the-art performance on it.