What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence, and on why the people building it might be the biggest threat of all. Kokotajlo predicts AGI by 2028 based on compute scaling trends; Marcus counters that we still haven't solved the basic cognitive problems he identified in his 2001 research. The stakes? If Kokotajlo is right about timelines and Marcus is right that safety hasn't kept pace, humanity may have already lost control.
Sponsor messages:
========
Google Gemini: The Gemini app now features Veo 3, Google's state-of-the-art AI video generation model. Sign up at https://gemini.google.com
Tufa AI Labs are hiring ML Engineers and a Chief Scientist in Zurich/SF. They are top of the ARCv2 leaderboard!
https://tufalabs.ai/
========
Guest Powerhouse
Gary Marcus - Cognitive scientist, author of "Taming Silicon Valley," and AI's most prominent skeptic who's been warning about the same fundamental problems for 25 years (https://garymarcus.substack.com/)
Daniel Kokotajlo - Former OpenAI insider turned whistleblower who reveals the disturbing rationalizations of AI lab leaders in his viral "AI 2027" scenario (https://ai-2027.com/)
Dan Hendrycks - Director of the Center for AI Safety who created widely used benchmarks for measuring AI progress (MMLU, MATH) and argues we have only years, not decades, to prevent catastrophe (https://danhendrycks.com/)
Transcript: http://app.rescript.info/public/share/tEcx4UkToi-2jwS1cN51CW70A4Eh6QulBRxDILoXOno
TOC:
Introduction: The AI Arms Race
00:00:04 - The Danger of Automated AI R&D
00:00:43 - The Rationalization: "If we don't, someone else will"
00:01:56 - Sponsor Reads (Tufa AI Labs & Google Gemini)
00:02:55 - Guest Introductions
The Philosophical Stakes
00:04:13 - What is the Positive Vision for AGI?
00:07:00 - The Abundance Scenario: Superintelligent Economy
00:09:06 - Differentiating AGI and Superintelligence (ASI)
00:11:41 - Sam Altman: "A Decade in a Month"
00:14:47 - Economic Inequality & The UBI Problem
Policy and Red Lines
00:17:13 - The Pause Letter: Stopping vs. Delaying AI
00:20:03 - Defining Three Concrete Red Lines for AI Development
00:25:24 - Racing Towards Red Lines & The Myth of "Durable Advantage"
00:31:15 - Transparency and Public Perception
00:35:16 - The Rationalization Cascade: Why AI Labs Race to "Win"
Forecasting AGI: Timelines and Methodologies
00:42:29 - The Case for Short Timelines (Median 2028)
00:47:00 - Scaling Limits: Compute, Data, and Money
00:49:36 - Forecasting Models: Bio-Anchors and Agentic Coding
00:53:15 - The 10^45 FLOP Thought Experiment
The Great Debate: Cognitive Gaps vs. Scaling
00:58:41 - Gary Marcus's Counterpoint: The Unsolved Problems of Cognition
01:00:46 - Current AI Can't Play Chess Reliably
01:08:23 - Can Tools and Neurosymbolic AI Fill the Gaps?
01:16:13 - The Multi-Dimensional Nature of Intelligence
01:24:26 - The Benchmark Debate: Data Contamination and Reliability
01:31:15 - The Superhuman Coder Milestone Debate
01:37:45 - The Driverless Car Analogy
The Alignment Problem
01:39:45 - Has Any Progress Been Made on Alignment?
01:42:43 - "Fairly Reasonably Scares the Sh*t Out of Me"
01:46:30 - Distinguishing Model vs. Process Alignment
Scenarios and Conclusions
01:49:26 - Gary's Alternative Scenario: The Neurosymbolic Shift
01:53:35 - Will AI Become Jeff Dean?
01:58:41 - Takeoff Speeds and Exceeding Human Intelligence
02:03:19 - Final Disagreements and Closing Remarks
REFS:
Gary Marcus (2001) - The Algebraic Mind
https://mitpress.mit.edu/9780262632683/the-algebraic-mind/
00:59:00
Gary Marcus & Ernest Davis (2019) - Rebooting AI
https://www.amazon.co.uk/Rebooting-AI-Building-Artificial-Intelligence-ebook/dp/B07MYLGQLB
01:31:59
Gary Marcus (2024) - Taming Silicon Valley
https://www.amazon.co.uk/Taming-Silicon-Valley-Ensure-Works-ebook/dp/B0CQWWM94N
00:03:01
Ajeya Cotra (2020) - "Forecasting Transformative AI (TAI) with Biological Anchors"
https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines
00:53:15
Daniel Kokotajlo (2021) - "What 2026 looks like"
https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like
00:55:00
Dan Hendrycks et al. (2021) - "Measuring Massive Multitask Language Understanding" (MMLU)
https://arxiv.org/abs/2009.03300
00:03:48
Dan Hendrycks et al. (2021) - "Measuring Mathematical Problem Solving With the MATH Dataset"
https://arxiv.org/abs/2103.03874
00:03:48
Aitor Lewkowycz, Anders Andreassen et al. (2022) - "Solving Quantitative Reasoning Problems with Language Models" (Minerva)
https://arxiv.org/abs/2206.14858
01:21:45
Apple Research (2024) - "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models"
https://arxiv.org/abs/2410.05229
00:59:15
Apple Research (2025) - "The Illusion of Thinking"
https://machinelearning.apple.com/research/illusion-of-thinking
00:59:15