Guest Speaker Session: Eli Lifland on AI 2027 - Forecasting Our AI Future
Join Cohere Labs' Safety & Alignment Reading Group for an unmissable conversation with Eli Lifland, elite forecaster and co-author of the compelling "AI 2027" scenario that's reshaping how we think about AI development trajectories.
Why This Matters
The "AI 2027" forecast presents a detailed and sobering examination of how superhuman AI systems might emerge in the near future. With recent advances happening faster than many anticipated, understanding possible development paths has never been more critical for everyone concerned with AI safety.
About the Speaker
Eli Lifland stands at the forefront of AI forecasting expertise:
Co-founder of the AI Futures Project and Sage
Co-lead of the Samotsvety Forecasting team
#1 ranked forecaster on RAND's Forecasting Initiative
Known for combining technical insight with practical governance considerations
The AI 2027 Project
This groundbreaking scenario work brings together an exceptional team:
Daniel Kokotajlo (former OpenAI governance researcher) whose previous AI forecasts have proven remarkably accurate
Scott Alexander (renowned writer of Astral Codex Ten) who helped craft the narrative
Thomas Larsen (founder of Center for AI Policy) bridging technical and policy perspectives
Romeo Dean (Harvard CS researcher) contributing cutting-edge technical insights
Their collaborative work maps potential pathways of AI development through the next critical years, examining geopolitical tensions, technical breakthroughs, and the challenges of keeping increasingly powerful systems aligned with human values.
Community Discussion
After Eli's presentation, our diverse international community will engage directly with these forecasts and their implications for alignment research, safety work, and global cooperation.
Eli Lifland is a researcher at the AI Futures Project, specializing in forecasting artificial general intelligence (AGI) capabilities and scenario planning. He co-authored the widely discussed AI 2027 scenario forecast, which explores potential trajectories of AI development in the near future. Eli also co-founded and advises Sage, an organization dedicated to creating interactive AI explainers and forecasting tools. He previously contributed to AI robustness research at Ought, working on the AI-powered research assistant Elicit, and co-created TextAttack, a Python framework for adversarial attacks in natural language processing. Eli holds degrees in computer science and economics from the University of Virginia.
This session is brought to you by the Cohere Labs Open Science Community - a space where ML researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate. We'd like to extend a special thank you to Alif Munim and Abrar Frahman, Leads of our AI Safety and Alignment group, for their dedication in organizing this event.
If you’re interested in sharing your work, we welcome you to join us! Simply fill out the form at https://forms.gle/ALND9i6KouEEpCnz6 to express your interest in becoming a speaker.
Join the Cohere Labs Open Science Community to see a full list of upcoming events (https://tinyurl.com/CohereLabsCommunityApp).