Today, we're joined by Jonas Geiping, research group leader at the ELLIS Institute and the Max Planck Institute for Intelligent Systems, to discuss his recent paper, “Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach.” The paper proposes a novel language model architecture that uses recurrent depth to enable “thinking in latent space.” We dig into “internal reasoning” versus “verbalized reasoning” (analogous to non-verbalized and verbalized thinking in humans), and discuss how the model searches in latent space to predict the next token and dynamically allocates more compute based on token difficulty. We also explore how the recurrent-depth architecture simplifies LLMs, its parallels to diffusion models, the model's performance on reasoning tasks, the challenges of comparing models with varying compute budgets, and architectural advantages such as zero-shot adaptive exits and natural speculative decoding.
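For a rough picture of the idea discussed in the episode, here is a minimal, hypothetical Python sketch of a recurrent-depth forward pass: a prelude embeds the input, a shared core block is iterated in latent space, and a coda maps the final state to logits. The names (prelude, core, coda, kl_exit_threshold) are illustrative, not the authors' code, and the core is simplified to a small MLP where the paper uses transformer blocks; the early-exit rule sketches the “zero-shot adaptive exit” mentioned above.

```python
# Hypothetical sketch of recurrent-depth latent reasoning, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, V = 64, 100  # toy hidden size and vocab size

prelude = nn.Embedding(V, D)           # embeds tokens into latent space
core = nn.Sequential(                  # recurrent block, reused every step
    nn.Linear(2 * D, D), nn.SiLU(), nn.Linear(D, D))
coda = nn.Linear(D, V)                 # maps latent state to next-token logits

def forward(tokens, max_steps=32, kl_exit_threshold=5e-4):
    e = prelude(tokens)                      # (T, D) input embedding
    s = torch.randn_like(e)                  # random initial latent state
    prev_logp = None
    for step in range(max_steps):
        s = core(torch.cat([s, e], dim=-1))  # one latent "reasoning" step
        logp = F.log_softmax(coda(s), dim=-1)
        if prev_logp is not None:
            # adaptive exit: stop once successive output distributions
            # stop changing (small KL divergence), spending fewer steps
            # on easy tokens and more on hard ones
            kl = F.kl_div(logp, prev_logp, log_target=True,
                          reduction="batchmean")
            if kl < kl_exit_threshold:
                break
        prev_logp = logp
    return logp, step + 1

logits, steps_used = forward(torch.randint(0, V, (8,)))
print(f"exited after {steps_used} latent steps")
```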
🎧 / 🎥 Listen or watch the full episode on our page: https://twimlai.com/go/723.
🔔 Subscribe to our channel for more great content just like this: https://youtube.com/twimlai?sub_confirmation=1
🗣️ CONNECT WITH US!
===============================
Subscribe to the TWIML AI Podcast: https://twimlai.com/podcast/twimlai/
Follow us on Twitter: https://twitter.com/twimlai
Follow us on LinkedIn: https://www.linkedin.com/company/twimlai/
Join our Slack Community: https://twimlai.com/community/
Subscribe to our newsletter: https://twimlai.com/newsletter/
Want to get in touch? Send us a message: https://twimlai.com/contact/
📖 CHAPTERS
===============================
00:00 - Introduction
02:43 - Recurrent Depth Approach
07:05 - Motivation and challenges
12:59 - Reasoning Results
14:42 - Internal vs. verbalized reasoning
24:08 - Per-token specialization
29:32 - Searching in latent space for next token prediction
32:08 - Comparison to diffusion models
34:22 - Compute and hardware challenges
37:10 - Is it reproducible?
39:07 - Dataset
40:30 - Model comparison
45:02 - Model performance across various domains
47:05 - Recurrent depth simplifying LLMs
51:30 - Model training
52:50 - Model safety
55:28 - Future directions
🔗 LINKS & RESOURCES
===============================
Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach - https://arxiv.org/abs/2502.05171
Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - 678 - https://twimlai.com/podcast/twimlai/coercing-llms-to-do-and-reveal-almost-anything/
📸 Camera: https://amzn.to/3TQ3zsg
🎙️Microphone: https://amzn.to/3t5zXeV
🚦Lights: https://amzn.to/3TQlX49
🎛️ Audio Interface: https://amzn.to/3TVFAIq
🎚️ Stream Deck: https://amzn.to/3zzm7F5