*To skip to the graphs go to: The 'Scaling Paradox' (01:25:12), Misleading charts from AI companies (01:34:28), Policy debates should dream much bigger — some radical suggestions (01:46:32).*

The era of making AI smarter by just making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods. And those underlying technical changes force a big rethink of what the coming years will look like.
Toby Ord — Oxford philosopher and bestselling author of _The Precipice_ — has been tracking these shifts and mapping out the implications both for governments and our lives.
As he explains, until recently anyone could access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that’s over.
What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high-quality data drying up, that approach petered out in 2024.
So they pivoted to something radically different: instead of training smarter models, they’re giving existing models dramatically more time to think — leading to the rise in “reasoning models” that are at the frontier today.
The results are impressive, but this extra computing time comes at a cost: OpenAI’s o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica’s worth of reasoning to solve individual problems — at a cost of over $1,000 per question.
This isn’t just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out — starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.
Learn more and full transcript: https://80k.info/to25
_Recorded on May 23, 2025._
Chapters:
• Cold open (00:00:00)
• Toby Ord is back — for a 4th time! (00:01:20)
• Everything has changed and changed again since 2020 (00:01:39)
• Is x-risk up or down? (00:08:02)
• The new scaling era: compute at inference... (00:09:32)
• Means less concentration (00:32:40)
• But rich people will get access first. And we may not even know. (00:36:34)
• 'Compute governance' is now much harder (00:42:51)
• 'IDA' might let AI blast past human level — or crash and burn (00:50:11)
• Reinforcement learning brings back 'reward hacking' agents (01:07:32)
• Will we get warning shots? (01:17:35)
• The 'Scaling Paradox' (01:25:12)
• Misleading charts from AI companies (01:34:28)
• Policy debates should dream much bigger. Some radical suggestions. (01:46:32)
• Moratoriums have worked before (01:59:55)
• AI might 'go rogue' early on (02:17:38)
• Lamps are regulated much more than AI (02:25:25)
• Companies made a strategic error shooting down SB 1047 (02:34:35)
• They should build in emergency brakes for AI (02:40:39)
• Toby's bottom lines (02:49:37)
*Tell us what you thought!* https://forms.gle/enUSk8HXiCrqSA9J8
_Video editing: Simon Monsour_
_Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, & Dominic Armstrong_
_Music: Ben Cordell_
_Camera operator: Jeremy Chevillotte_
_Transcriptions and web: Katy Moore_