The Scaling Fallacy: Larger LLMs won’t necessarily lead to Artificial General Intelligence.

Wendy Wee

The scaling hypothesis has been a very … wealthy one. With figures like Sam Altman (OpenAI) and Dario Amodei (Anthropic) among its believers, billions of dollars have gone into making large language models (“LLMs”) like ChatGPT and Claude bigger and … BIGGER.

The belief is that if you simply give an LLM more training data, more parameters, more compute, and other resources (“scaling”), Artificial General Intelligence (“AGI”) will eventually emerge.
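For concreteness, here is what “scaling” means quantitatively. Kaplan et al. (2020) observed that an LLM’s test loss falls off as a smooth power law in its parameter count. The sketch below is a minimal illustration of that relationship; the constants are ballpark values in the spirit of those reported in the paper, not exact figures.

```python
# A minimal sketch of the power-law scaling relation behind the scaling
# hypothesis. The constants are illustrative ballpark values inspired by
# Kaplan et al. (2020), "Scaling Laws for Neural Language Models".

def predicted_loss(n_params: float,
                   n_c: float = 8.8e13,   # assumed scale constant (parameters)
                   alpha: float = 0.076   # assumed power-law exponent
                   ) -> float:
    """Predicted test loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Scaling from 1 billion to 1 trillion parameters:
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

Note what such a law actually promises: a smooth, predictable decline in next-token prediction loss as models grow. Nothing in the formula guarantees a qualitative jump to general intelligence; that extrapolation is the belief in question.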


