
AI Ecosystem Intelligence Explorer

LLM

21 of 213 articles

Pretraining on the Test Set Is All You Need

Inspired by recent work demonstrating the promise of smaller Transformer-based language models pretrained on carefully curated data, we supercharge such approaches by investing heavily in curating a novel, high quality, non-synthetic data mixture based solely on evaluation benchmarks. Using our novel dataset mixture consisting of less than 100 thousand tokens, we pretrain a 1 million parameter transformer-based LLM phi-CTNL (pronounced “fictional”) that achieves perfect results across diverse academic benchmarks, strictly outperforming all known foundation models. phi-CTNL also beats power-law scaling and exhibits a never-before-seen grokking-like ability to accurately predict downstream evaluation benchmarks’ canaries.

LLM
AI Fundamentals
 
9/1/2025

LLMs’ “simulated reasoning” abilities are a “brittle mirage,” researchers find

Chain-of-thought AI “degrades significantly” when asked to generalize beyond training.

LLM
Research
 
8/12/2025

📖 LLM Inference in Production

Everything you need to know about LLM inference

LLM
Prompting
Applied AI
AI Fundamentals
 
7/11/2025

Meta’s Llama 3.1 can recall 42 percent of the first Harry Potter book

New research could have big implications for copyright lawsuits against generative AI.

LLM
Legal and IP
 
6/16/2025

How much do language models memorize?

We propose a new method for estimating how much a model knows about a datapoint and use it to measure the capacity of modern language models. Prior studies of language model memorization have struggled to disentangle memorization from generalization. We formally separate memorization into two components: unintended memorization, the information a model contains about a specific dataset, and generalization, the information a model contains about the true data-generation process. When we completely eliminate generalization, we can compute the total memorization, which provides an estimate of model capacity: our measurements estimate that GPT-style models have a capacity of approximately 3.6 bits per parameter. We train language models on datasets of increasing size and observe that models memorize until their capacity fills, at which point “grokking” begins, and unintended memorization decreases as models begin to generalize. We train hundreds of transformer language models ranging from 500K to 1.5B parameters and produce a series of scaling laws relating model capacity and data size to membership inference.
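As a rough illustration of the capacity figure quoted in this abstract, here is a minimal back-of-envelope sketch. It uses only the ~3.6 bits-per-parameter estimate and the 500K–1.5B parameter range stated above; the conversion to megabytes is plain arithmetic, not a result from the paper.

```python
# Back-of-envelope use of the ~3.6 bits/parameter capacity estimate from the
# abstract above. The parameter counts match the 500K-1.5B range the authors
# train; everything else here is simple unit conversion for illustration.

BITS_PER_PARAM = 3.6  # estimated capacity of GPT-style models (per the abstract)

def memorization_capacity_mb(n_params: int) -> float:
    """Approximate total memorization budget, in megabytes, for a model of n_params."""
    total_bits = n_params * BITS_PER_PARAM
    return total_bits / 8 / 1e6  # bits -> bytes -> megabytes

for n_params in (500_000, 100_000_000, 1_500_000_000):
    print(f"{n_params:>13,} parameters ≈ {memorization_capacity_mb(n_params):8.2f} MB of capacity")
```

On this accounting, once the training set exceeds the model’s capacity in bits, further unintended memorization has to give way to generalization, which is the transition the abstract identifies with the onset of “grokking.”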

LLM
Research
AI Fundamentals
 
6/5/2025

Limit of RLVR

Reasoning LLMs Are Just Efficient Samplers: RL Training Elicits No Transcending Capacity

LLM
AI Fundamentals
 
5/28/2025

LLM Inference Economics from First Principles

The main product LLM companies offer these days is API access to their models, and the key question that will determine their profitability is the structure of their inference costs.
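As a minimal sketch of the kind of first-principles arithmetic this involves, a per-token serving cost can be estimated as GPU-hour cost divided by tokens served per GPU-hour. All of the price, throughput, and utilization numbers below are hypothetical placeholders, not figures from the article.

```python
# Minimal first-principles estimate of LLM inference cost per token.
# Every number here is a hypothetical placeholder for illustration only;
# the linked article derives real figures from hardware and model specifics.

gpu_cost_per_hour = 2.50            # assumed hourly price of one accelerator, in USD
tokens_per_second_per_gpu = 1_500   # assumed sustained decode throughput per GPU
utilization = 0.6                   # assumed fraction of time spent serving real traffic

tokens_per_gpu_hour = tokens_per_second_per_gpu * 3600 * utilization
cost_per_million_tokens = gpu_cost_per_hour / tokens_per_gpu_hour * 1_000_000

print(f"≈ ${cost_per_million_tokens:.2f} per million output tokens")
# With these placeholders: 1,500 tok/s * 3600 s * 0.6 ≈ 3.24M tokens per GPU-hour,
# so $2.50 / 3.24M * 1M ≈ $0.77 per million tokens.
```

Real serving stacks complicate each of these inputs (batching, prefill versus decode, hardware utilization), but the basic shape of the estimate, dollars per GPU-hour over tokens per GPU-hour, stays the same.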

LLM
Research
AI Fundamentals
 
5/17/2025

Say What You Mean: A Response to ‘Let Me Speak Freely’

A recent paper from the research team at Appier, “Let Me Speak Freely? A Study on the Impact of Format Restrictions on Performance of Large Language Models,” made some very serious accusations about the quality of LLM evaluation results when performing structured generation. The authors’ (Tam et al.) ultimate conclusion was:

LLM
 
5/5/2025

Developing an AI-Powered Tool for Automatic Citation Validation Using NVIDIA NIM

The accuracy of citations is crucial for maintaining the integrity of both academic and AI-generated content. When citations are inaccurate or wrong…

LLM
Research
 
5/2/2025
The remaining 12 articles in this view are available to members only.