Jeff Dean & Noam Shazeer – 25 years at Google: from PageRank to AGI

This week I welcome two of the most important technologists in any field. Jeff Dean is Google's Chief Scientist, and over 25 years at the company he has worked on many of the most transformative systems in modern computing: from MapReduce, BigTable, TensorFlow, and AlphaChip to Gemini. Noam Shazeer invented or co-invented many of the main architectures and techniques used in modern LLMs: from the Transformer itself, to Mixture of Experts, to Mesh TensorFlow, to Gemini and many other things. We talk about their 25 years at Google, going from PageRank to MapReduce to the Transformer to MoEs to AlphaChip, and soon to ASI.

Sponsors

* Meter wants to radically improve the digital world we take for granted. They're developing a foundation model that automates network management end-to-end. To do this, they just announced a long-term partnership with Microsoft for tens of thousands of GPUs, and they're recruiting a world-class AI research team. To learn more, go to https://meter.com/dwarkesh.

* Scale partners with major AI labs like Meta, Google DeepMind, and OpenAI. Through Scale's Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at https://scale.com/dwarkesh.

* Curious how Jane Street teaches their new traders? They use Figgie, a rapid-fire card game that simulates the most exciting parts of markets and trading. It's become so popular that Jane Street hosts an inter-office Figgie championship every year. Download it from the app store or play on your desktop at https://www.figgie.com/.

Advertisers

To sponsor a future episode, visit https://www.dwarkeshpatel.com/p/advertise.

Timestamps

00:00:00 - Intro
00:03:29 - Joining Google in 1999
00:06:20 - Future of Moore's Law
00:11:04 - Future TPUs
00:13:56 - Jeff's undergrad thesis: parallel backprop
00:15:54 - LLMs in 2007
00:25:09 - "Holy shit" moments
00:27:28 - AI fulfills Google's original mission
00:32:00 - Doing Search in-context
00:36:12 - The internal coding model
00:37:29 - What will 2027 models do?
00:43:20 - A new architecture every day?
00:49:10 - Automated chips and intelligence explosion
00:53:07 - Future of inference scaling
01:02:38 - Already doing multi-datacenter runs
01:08:15 - Debugging at scale
01:12:41 - Fast takeoff and superalignment
01:20:51 - A million evil Jeff Deans
01:24:22 - Fun times at Google
01:27:51 - World compute demand in 2030
01:34:37 - Getting back to modularity
01:44:48 - Keeping a giga-MoE in-memory
01:49:35 - All of Google in one model
01:57:59 - What's missing from distillation
02:03:10 - Open research, pros and cons
02:09:58 - Going the distance