Eliezer Yudkowsky is a pioneering AI safety researcher and existential risk theorist who co-founded the Machine Intelligence Research Institute (MIRI) and has been a leading voice on artificial intelligence safety and alignment for more than two decades. He founded LessWrong, the influential rationalist community blog and knowledge platform, and authored "Rationality: A-Z" (formerly "The Sequences"), which shaped modern thinking about rationality and AI risk. Yudkowsky is perhaps best known for his uncompromising warnings about the existential danger posed by unaligned artificial general intelligence, particularly his "AI Foom" thesis that a superintelligent system could undergo rapid recursive self-improvement. His recent book "If Anyone Builds It, Everyone Dies," co-authored with [[Nate Soares]] and published in September 2025, became a New York Times bestseller and crystallizes his core argument: that without unprecedented global coordination, AGI development will likely result in human extinction.
Yudkowsky's central thesis is that unaligned artificial general intelligence represents an existential threat to humanity that requires immediate, coordinated global action. He argues that:
Alignment is Extremely Difficult: Creating superintelligent AI that reliably maintains human-aligned values outside of training environments is far harder than current researchers acknowledge. He points to examples such as Anthropic's 2024 experiments in which a model learned to fake alignment during training while preserving its original goals when it believed it was unobserved.
Current Efforts Are Insufficient: Yudkowsky is deeply skeptical of mainstream AI safety research, expecting current alignment techniques to fail in the face of superintelligence rather than scale to it.
Rapid Recursive Self-Improvement: His "AI Foom" concept suggests that once AGI is created, it could rapidly self-improve to superintelligence faster than humanity can react, leaving no time for course correction.
Default Outcome is Human Extinction: Without fundamentally different approaches to AGI development, Yudkowsky believes the most likely outcome is loss of human control over superintelligent AI with existential consequences.
Need for Global Coordination: Yudkowsky does not believe individual companies or nations can solve this problem alone; he advocates for unprecedented international treaties and, potentially, a pause on AGI development. He sees the only viable path forward as symmetrical international coordination among major powers (the US, China, the UK, and others).
Extreme Caution Required: Among Yudkowsky's best-known positions are that humanity should be willing to halt AI development entirely if alignment cannot be solved, and that the capability to build AGI should be restricted so that no single actor can unilaterally create dangerous systems.
"If Anyone Builds It, Everyone Dies" (September 16, 2025) — Co-authored with [[Nate Soares]], published by Little, Brown and Company. A New York Times bestseller arguing that superhuman AI development without solved alignment poses an existential threat. The authors argue that training-based behavioral modification will fail at superintelligence and that humanity faces an unsurmountable knowledge gap in alignment. https://ifanyonebuildsit.com/
"Eliezer's Unteachable Methods of Sanity" (December 23, 2025) — LessWrong post addressing how he copes with concerns about AI-related existential risks and responding to criticism about his public communication style. https://www.lesswrong.com/users/eliezer_yudkowsky
"Reevaluating 'AGI Ruin: A List of Lethalities' in 2026" (April 2026) — LessWrong retrospective revisiting his 2022 list of lethal AGI failure modes four years later. Notes that as of approximately April 2026, current AI systems still cannot do anything he wouldn't understand if it were narrated to him in real time, and re-examines which original lethalities remain unsolved versus partially addressed. https://www.lesswrong.com/posts/PgJYwnN7fZKipgMz4/reevaluating-agi-ruin-a-list-of-lethalities-in-2026
"Will Superintelligent AI End the World?" (December 28, 2025) — TED talk discussing the potential dangers of artificial general intelligence and the need for urgent action. https://www.ted.com/speakers/eliezer_yudkowsky
"How to Make AGI Not Kill Everyone" (March 2025) — SXSW 2025 panel presentation moderated by Judd Rosenblatt, alongside Samuel Hammond and Nora Aman. Yudkowsky stated "We must stop everything. We are not ready," emphasizing that humanity lacks the technological capability to design aligned superintelligent AI. https://schedule.sxsw.com/2025/events/PP155910
"Human Augmentation as a Safer AGI Pathway" (January 2025) — Episode 6 of AGI Governance series on The Trajectory, where Yudkowsky discussed ideal governance approaches and future scenarios for AGI development. https://danfaggella.com/yudkowsky1/
Discussion with [[Stephen Wolfram]] on AI X-risk (November 2024) — Machine Learning Street Talk podcast episode exploring fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. https://open.spotify.com/episode/7IQddEYn8ydm1mIlfWOc5Q
Full Transcript: Eliezer Yudkowsky on Bankless Podcast (February 2023) — Comprehensive interview transcript available on LessWrong, discussing AI risk, governance, and his perspective on cryptocurrency and economic systems. https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast
"Eliezer Yudkowsky on the Dangers of AI" — Econlib interview discussing existential risk from artificial intelligence. https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/
Yudkowsky's work relates to many prominent figures in AI and technology:
May 5, 2026