Why AI Development Is Not What You Think with Connor Leahy | TGS 184

(Conversation recorded on May 21st, 2025)

Recently, the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse – with Artificial Super Intelligence acting as both the most promising goal and the most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?

In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon he calls 'algorithmic cancer' – AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.

What kinds of policy and regulatory approaches could help slow AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being?

About Connor Leahy:
Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI. Previously, he co-founded EleutherAI, one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes and More: https://www.thegreatsimplification.com/episode/184-connor-leahy

Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie: https://youtu.be/-xr9rIQxwj4?feature=shared

---

Support The Institute for the Study of Energy and Our Future: https://www.thegreatsimplification.com/support

Join our Substack newsletter: https://natehagens.substack.com/

Join our Discord channel and connect with other listeners: https://discord.gg/ZFfQqtqMJf

---

Timestamps:
00:00 - Introduction
02:25 - Defining AI, AGI, and ASI
10:57 - Worst Case Scenario
16:01 - Energy Demand
23:10 - Hallucinations
27:26 - Oversight
31:20 - Risk to Labor
33:46 - Loss of Humanity
41:12 - Addiction
44:05 - Algorithmic Cancer
57:43 - Extinction
01:04:07 - Good AI
01:10:43 - Concerns of LLMs
01:21:11 - What Can We Do?
01:29:10 - Closing Questions