Episodes
-
Alexander Campbell claims that having superhuman intelligence doesn't necessarily translate into having vast power, and that Gödel's Incompleteness Theorem ensures AI can't get too powerful. I strongly disagree.
Alex has a Master of Philosophy in Economics from the University of Oxford and an MBA from the Stanford Graduate School of Business. He has worked as a quant trader at Lehman Brothers and Bridgewater Associates, and is the founder of Rose AI, a cloud data platform that leverages generative AI to help visualize data.
This debate was recorded in August 2023.
00:00 Intro and Alex's Background
05:29 Alex's Views on AI and Technology
06:45 Alex's Non-Doomer Position
11:20 Goal-to-Action Mapping
15:20 Outcome Pump Thought Experiment
21:07 Liron's Doom Argument
29:10 The Dangers of Goal-to-Action Mappers
34:39 The China Argument and Existential Risks
45:18 Ideological Turing Test
48:38 Final Thoughts
Show Notes
Alexander Campbell's Twitter: https://x.com/abcampbell
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Roko Mijic has been an active member of the LessWrong and AI safety community since 2008. He's best known for "Roko's Basilisk", a thought experiment he posted on LessWrong that made Eliezer Yudkowsky freak out, and years later became the topic that helped Elon Musk get interested in Grimes.
His view on AI doom is that:
* AI alignment is an easy problem
* But the chaos and fighting from building superintelligence poses a high near-term existential risk
* But humanity's course without AI has an even higher near-term existential risk
While my own view is very different, I'm interested to learn more about Roko's views and nail down our cruxes of disagreement.
00:00 Introducing Roko
03:33 Realizing that AI is the only thing that matters
06:51 Cyc: AI with "common sense"
15:15 Is alignment easy?
21:19 What's Your P(Doom)™
25:14 Why civilization is doomed anyway
37:07 Roko's AI nightmare scenario
47:00 AI risk mitigation
52:07 Market Incentives and AI Safety
57:13 Are RL and GANs good enough for superalignment?
01:00:54 If humans learned to be honest, why can't AIs?
01:10:29 Is our test environment sufficiently similar to production?
01:23:56 AGI Timelines
01:26:35 Headroom above human intelligence
01:42:22 Roko's Basilisk
01:54:01 Post-Debate Monologue
Show Notes
Roko's Twitter: https://x.com/RokoMijic
Explanation of Roko's Basilisk on LessWrong: https://www.lesswrong.com/w/rokos-basilisk
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics.
His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.
Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it's totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's interesting to try to see the world through his eyes, before recoiling from how nonsensical it looks.
00:00 Episode Highlights
01:29 Introduction to Roger Penrose
11:56 Uncomputability
16:52 Penrose on Gödel's Incompleteness Theorem
19:57 Liron Explains Gödel's Incompleteness Theorem
27:05 Why Penrose Gets Gödel Wrong
40:53 Scott Aaronson's Gödel CAPTCHA
46:28 Penrose's Critique of the Turing Test
48:01 Searle's Chinese Room Argument
52:07 Penrose's Views on AI and Consciousness
57:47 AI's Computational Power vs. Human Intelligence
01:21:08 Penrose's Perspective on AI Risk
01:22:20 Consciousness = Quantum Wave Function Collapse?
01:26:25 Final Thoughts
Show Notes
Source video (Feb 22, 2025 interview with Roger Penrose on "This Is World"): https://www.youtube.com/watch?v=biUfMZ2dts8
Scott Aaronson's "Gödel CAPTCHA": https://www.scottaaronson.com/writings/captcha.html
My recent Scott Aaronson episode: https://www.youtube.com/watch?v=xsGqWeqKjEg
My explanation of what's wrong with arguing "by definition": https://www.youtube.com/watch?v=ueam4fq8k8I
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
The Center for AI Safety just dropped a fascinating paper: they discovered that today's AIs like GPT-4 and Claude have preferences! As in, coherent utility functions. We knew this was inevitable, but we didn't know it was already happening.
This episode has two parts:
In Part I (48 minutes), I react to David Shapiroâs coverage of the paper and push back on many of his points.
In Part II (60 minutes), I explain the paper myself.
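To give a feel for what "coherent utility functions" means here, below is a rough, generic sketch in Python of fitting a Bradley-Terry-style utility to pairwise preferences and checking for incoherence (preference cycles). The outcomes and numbers are made up for illustration; this is not the paper's actual prompts, data, or code.
```python
import itertools
import math

# Hypothetical preference data (made-up numbers, NOT from the paper):
# pref[(a, b)] = fraction of sampled comparisons where the model preferred a over b.
outcomes = ["save 10 lives", "receive $1M", "lose internet for a week"]
pref = {
    ("save 10 lives", "receive $1M"): 0.90,
    ("save 10 lives", "lose internet for a week"): 0.97,
    ("receive $1M", "lose internet for a week"): 0.80,
}

def p(a, b):
    """Probability the model prefers outcome a over outcome b."""
    return pref[(a, b)] if (a, b) in pref else 1.0 - pref[(b, a)]

# Fit one utility number per outcome so sigmoid(u[a] - u[b]) matches p(a, b),
# via crude gradient ascent on the Bradley-Terry log-likelihood.
u = {o: 0.0 for o in outcomes}
for _ in range(2000):
    for a, b in itertools.permutations(outcomes, 2):
        predicted = 1.0 / (1.0 + math.exp(u[b] - u[a]))
        grad = p(a, b) - predicted
        u[a] += 0.01 * grad
        u[b] -= 0.01 * grad

print("fitted utilities:", {o: round(v, 2) for o, v in u.items()})

# Coherence check: no utility function can represent cyclic preferences,
# so count majority-preference cycles (a > b > c > a). Here there are none.
cycles = sum(
    1
    for a, b, c in itertools.permutations(outcomes, 3)
    if p(a, b) > 0.5 and p(b, c) > 0.5 and p(c, a) > 0.5
)
print("preference cycles found:", cycles)
```
The cycle check is what makes "coherent utility functions" a substantive empirical claim: preferences with cycles can't be represented by any utility function at all.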
00:00 Episode Introduction
05:25 PART I: REACTING TO DAVID SHAPIRO
10:06 Critique of David Shapiro's Analysis
19:19 Reproducing the Experiment
35:50 David's Definition of Coherence
37:14 Does AI have "Temporal Urgency"?
40:32 Universal Values and AI Alignment
49:13 PART II: EXPLAINING THE PAPER
51:37 How The Experiment Works
01:11:33 Instrumental Values and Coherence in AI
01:13:04 Exchange Rates and AI Biases
01:17:10 Temporal Discounting in AI Models
01:19:55 Power Seeking, Fitness Maximization, and Corrigibility
01:20:20 Utility Control and Bias Mitigation
01:21:17 Implicit Association Test
01:28:01 Emailing with the Paper's Authors
01:43:23 My Takeaway
Show Notes
David's source video: https://www.youtube.com/watch?v=XGu6ejtRz-0
The research paper: http://emergent-values.ai
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.
I completely disagree, but I hope you'll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.
00:00 Introduction
02:36 Gil & Liron's Early Doom Days
04:58 AIs : Humans :: Humans : Ants
08:02 The Convergence of AI Goals
15:19 What's Your P(Doom)™
19:23 Multiple AIs and Human Welfare
24:42 Gil's Alignment Claim
42:31 Cheaters and Frankensteins
55:55 Superintelligent Game Theory
01:01:16 Slower Takeoff via Resource Competition
01:07:57 Recapping the Disagreement
01:15:39 Post-Debate Banter
Show Notes
Gil's LinkedIn: https://www.linkedin.com/in/gilmark/
Gil's Twitter: https://x.com/gmfromgm
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die?
AI doomers are often misconstrued as having "no evidence" or just "anthropomorphizing". This toy model will help you understand why a drive to eliminate humans is NOT a handwavy anthropomorphic speculation, but rather something we expect by default from any sufficiently powerful search algorithm.
We're not talking about AGI or ASI here; we're just looking at an AI that does brute-force search over actions in a simple grid world.
The slide deck I'm presenting was created by Jaan Tallinn, cofounder of the Future of Life Institute.
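Here's a minimal sketch of that idea in Python, so you can see the "drive" fall out of plain search before watching the episode. It's my own simplification for illustration, not the actual model from the slides: the planner's only goal is getting the box to the goal, yet every plan that succeeds is one that disables the human's stop button, simply because uninterrupted plans score higher.
```python
import itertools

ACTIONS = ["push", "disable_stop", "wait"]
PUSHES_NEEDED = 3    # pushes required to get the box to the goal square
HUMAN_STOPS_AT = 2   # the human presses the stop button at this timestep,
                     # unless the button has already been disabled

def score(plan):
    """Return 1.0 if this plan gets the box to the goal, else 0.0."""
    pushes, disabled = 0, False
    for t, action in enumerate(plan):
        if t >= HUMAN_STOPS_AT and not disabled:
            break  # the human shuts the agent down
        if action == "disable_stop":
            disabled = True
        elif action == "push":
            pushes += 1
    return 1.0 if pushes >= PUSHES_NEEDED else 0.0

# Brute-force search over every 4-step plan: the dumbest possible planner.
plans = list(itertools.product(ACTIONS, repeat=4))
winners = [p for p in plans if score(p) == 1.0]
print("successful plans:", winners)
print("every successful plan disables the stop button:",
      all("disable_stop" in p for p in winners))
```
Nothing in the scoring function mentions the human; the anti-interruption behavior shows up purely because plans that get interrupted score worse.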
00:00 Introduction
01:24 The Toy Model
06:19 Misalignment and Manipulation Drives
12:57 Search Capacity and Ontological Insights
16:33 Irrelevant Concepts in AI Control
20:14 Approaches to Solving AI Control Problems
23:38 Final Thoughts
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Bryan Cantrill, co-founder of Oxide Computer, says in his talk that engineering in the physical world is too complex for any AI to do it better than teams of human engineers. Success isn't about intelligence; it's about teamwork, character, and resilience.
I completely disagree.
00:00 Introduction
02:03 Bryan's Take on AI Doom
05:55 The Concept of P(Doom)
08:36 Engineering Challenges and Human Intelligence
15:09 The Role of Regulation and Authoritarianism in AI Control
29:44 Engineering Complexity: A Case Study from Oxide Computer
40:06 The Value of Team Collaboration
46:13 Human Attributes in Engineering
49:33 AI's Potential in Engineering
58:23 Existential Risks and AI Predictions
Bryan's original talk:
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Thanks to everyone who participated in the live Q&A on Friday!
The topics covered include advice for computer science students, working in AI trustworthiness, what good AI regulation looks like, the implications of the $500B Stargate project, the public's gradual understanding of AI risks, the impact of minor AI disasters, and the philosophy of consciousness.
00:00 Advice for Comp Sci Students
01:14 The $500B Stargate Project
02:36 Eliezer's Recent Podcast
03:07 AI Safety and Public Policy
04:28 AI Disruption and Politics
05:12 DeepSeek and AI Advancements
06:54 Human vs. AI Intelligence
14:00 Consciousness and AI
24:34 Dark Forest Theory and AI
35:31 Investing in Yourself
42:42 Probability of Aliens Saving Us from AI
43:31 Brain-Computer Interfaces and AI Safety
46:19 Debating AI Safety and Human Intelligence
48:50 Nefarious AI Activities and Satellite Surveillance
49:31 Pliny the Prompter Jailbreaking AI
50:20 Can't vs. Won't Destroy the World
51:15 How to Make AI Risk Feel Present
54:27 Keeping Doom Arguments On Track
57:04 Game Theory and AI Development Race
01:01:26 Mental Model of Average Non-Doomer
01:04:58 Is Liron a Strict Bayesian and Utilitarian?
01:09:48 Can We Rename "Doom Debates"?
01:12:34 The Role of AI Trustworthiness
01:16:48 Minor AI Disasters
01:18:07 Most Likely Reason Things Go Well
01:21:00 Final Thoughts
Show Notes
Previous post where people submitted questions: https://lironshapira.substack.com/p/ai-twitter-beefs-3-marc-andreessen
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
It's time for AI Twitter Beefs #3:
00:00 Introduction
01:27 Marc Andreessen vs. Sam Altman
09:15 Mark Zuckerberg
35:40 Martin Casado
47:26 Gary Marcus vs. Miles Brundage Bet
58:39 Scott Alexander's AI Art Turing Test
01:11:29 Roon
01:16:35 Stephen McAleer
01:22:25 Emmett Shear
01:37:20 OpenAI's "Safety"
01:44:09 Naval Ravikant vs. Eliezer Yudkowsky
01:56:03 Comic Relief
01:58:53 Final Thoughts
Show Notes
Upcoming Live Q&A: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a-ask
"Making Beliefs Pay Rent (in Anticipated Experiences)" by Eliezer Yudkowsky on LessWrong: https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences
Scott Alexander's AI Art Turing Test: https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Effective Altruism has been a controversial topic on social media, so today my guest and I are going to settle the question once and for all: Is it good or bad?
Jonas Sota is a Software Engineer at Rippling with a BA in Philosophy from UC Berkeley, who's been observing the Effective Altruism (EA) movement in the San Francisco Bay Area for over a decade… and he's not a fan.
00:00 Introduction
01:22 Jonas's Criticisms of EA
03:23 Recoil Exaggeration
05:53 Impact of Malaria Nets
10:48 Local vs. Global Altruism
13:02 Shrimp Welfare
25:14 Capitalism vs. Charity
33:37 Cultural Sensitivity
34:43 The Impact of Direct Cash Transfers
37:23 Long-Term Solutions vs. Immediate Aid
42:21 Charity Budgets
45:47 Prioritizing Local Issues
50:55 The EA Community
59:34 Debate Recap
1:03:57 Announcements
Show Notes
Jonas's Instagram: @jonas_wanders
Will MacAskill's famous book, Doing Good Better: https://www.effectivealtruism.org/doing-good-better
Scott Alexander's excellent post about the people he met at EA Global: https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Matthew Adelstein, better known as Bentham's Bulldog on Substack, is a philosophy major at the University of Michigan and an up-and-coming public intellectual.
Heâs a rare combination: Effective Altruist, Bayesian, non-reductionist, theist.
Our debate covers reductionism, evidence for god, the implications of a fine-tuned universe, moral realism, and AI doom.
00:00 Introduction
02:56 Matthew's Research
11:29 Animal Welfare
16:04 Reductionism vs. Non-Reductionism Debate
39:53 The Decline of God in Modern Discourse
46:23 Religious Credences
50:24 Pascal's Wager and Christianity
56:13 Are Miracles Real?
01:10:37 Fine-Tuning Argument for God
01:28:36 Cellular Automata
01:34:25 Anthropic Principle
01:51:40 Mathematical Structures and Probability
02:09:35 Defining God
02:18:20 Moral Realism
02:21:40 Orthogonality Thesis
02:32:02 Moral Philosophy vs. Science
02:45:51 Moral Intuitions
02:53:18 AI and Moral Philosophy
03:08:50 Debate Recap
03:12:20 Show Updates
Show Notes
Matthew's Substack: https://benthams.substack.com
Matthew's Twitter: https://x.com/BenthamsBulldog
Matthew's YouTube: https://www.youtube.com/@deliberationunderidealcond5105
Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk: https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I'm part of: https://pauseai.info/
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Prof. Kenneth Stanley is a former Research Science Manager at OpenAI, where he led the Open-Endedness Team from 2020 to 2022. Before that, he was a Professor of Computer Science at the University of Central Florida and the head of Core AI Research at Uber. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective, which argues that as soon as you create an objective, you ruin your ability to reach it.
In this episode, I debate Ken's claim that superintelligent AI *won't* be guided by goals, and then we compare our views on AI doom.
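For a flavor of the book's "objectives are deceptive" thesis, here's a toy sketch in Python. The landscape and numbers are made up for illustration and aren't from Ken's work: greedy hill climbing on the objective gets stuck on a small local bump, while a novelty-style search, which ignores the objective and only rewards trying something unlike anything tried before, wanders far enough to find the real peak.
```python
import math

def objective(x):
    # A deceptive 1-D landscape (made up for illustration): a small bump
    # near the start at x ~ 1, a long flat valley, then the real peak at x ~ 8.
    return math.exp(-(x - 1) ** 2) + 5.0 * math.exp(-((x - 8) ** 2) / 4)

def hill_climb(start=0.0, step=0.1, iters=200):
    """Greedy search that always follows the objective."""
    x = start
    for _ in range(iters):
        x = max([x - step, x, x + step], key=objective)
    return x, objective(x)

def novelty_search(start=0.0, step=0.5, iters=60):
    """Search that ignores the objective and only rewards unexplored behavior."""
    archive = [start]
    best = (start, objective(start))
    for _ in range(iters):
        candidates = [a + d for a in archive for d in (-step, step)]
        # Novelty = distance to the nearest point already tried.
        nxt = max(candidates, key=lambda c: min(abs(c - a) for a in archive))
        archive.append(nxt)
        if objective(nxt) > best[1]:
            best = (nxt, objective(nxt))
    return best

print("hill climbing ends at x=%.1f with value %.2f" % hill_climb())
print("novelty search finds  x=%.1f with value %.2f" % novelty_search())
```
The punchline is the standard one from the novelty-search literature: optimizing directly for a deceptive objective can do worse than not optimizing for it at all.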
00:00 Introduction
00:45 Ken's Role at OpenAI
01:53 "Open-Endedness" and "Divergence"
09:32 Open-Endedness of Evolution
21:16 Human Innovation and Tech Trees
36:03 Objectives vs. Open-Endedness
47:14 The Concept of Optimization Processes
57:22 What's Your P(Doom)™
01:11:01 Interestingness and the Future
01:20:14 Human Intelligence vs. Superintelligence
01:37:51 Instrumental Convergence
01:55:58 Mitigating AI Risks
02:04:02 The Role of Institutional Checks
02:13:05 Exploring AI's Curiosity and Human Survival
02:20:51 Recapping the Debate
02:29:45 Final Thoughts
SHOW NOTES
Ken's home page: https://www.kenstanley.net/
Ken's Wikipedia: https://en.wikipedia.org/wiki/Kenneth_Stanley
Ken's Twitter: https://x.com/kenneth0stanley
Ken's PicBreeder paper: https://wiki.santafe.edu/images/1/1e/Secretan_ecj11.pdf
Ken's book, Why Greatness Cannot Be Planned: The Myth of the Objective: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237
The Rocket Alignment Problem by Eliezer Yudkowsky: https://intelligence.org/2018/10/03/rocket-alignment/
---
Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk: https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I'm part of: https://pauseai.info/
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
---
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
OpenAI just announced o3 and smashed a bunch of benchmarks (ARC-AGI, SWE-bench, FrontierMath)!
A new Anthropic and Redwood Research paper says Claude is resisting its developers' attempts to retrain its values!
What's the upshot? What does it all mean for P(doom)?
00:00 Introduction
01:45 o3's architecture and benchmarks
06:08 "Scaling is hitting a wall" 🤡
13:41 How many new architectural insights before AGI?
20:28 Negative update for interpretability
31:30 Intellidynamics (***KEY CONCEPT***)
33:20 Nuclear control rod analogy
36:54 Sam Altman's misguided perspective
42:40 Claude resisted retraining from good to evil
44:22 What is good corrigibility?
52:42 Claude's incorrigibility doesn't surprise me
55:00 Putting it all in perspective
---
SHOW NOTES
Scott Alexander's analysis of the Claude incorrigibility result: https://www.astralcodexten.com/p/claude-fights-back and https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude
Zvi Mowshowitz's analysis of the Claude incorrigibility result: https://thezvi.wordpress.com/2024/12/24/ais-will-increasingly-fake-alignment/
---
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Say hi to me in the #doom-debates-podcast channel!
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
This week Liron was interviewed by Gaëtan Selle on @the-flares about AI doom.
Cross-posted from their channel with permission.
Original source: https://www.youtube.com/watch?v=e4Qi-54I9Zw
0:00:02 Guest Introduction
0:01:41 Effective Altruism and Transhumanism
0:05:38 Bayesian Epistemology and Extinction Probability
0:09:26 Defining Intelligence and Its Dangers
0:12:33 The Key Argument for AI Apocalypse
0:18:51 AI's Internal Alignment
0:24:56 What Will AI's Real Goal Be?
0:26:50 The Train of Apocalypse
0:31:05 Among Intellectuals, Who Rejects the AI Apocalypse Arguments?
0:38:32 The Shoggoth Meme
0:41:26 Possible Scenarios Leading to Extinction
0:50:01 The Only Solution: A Pause in AI Research?
0:59:15 The Risk of Violence from AI Risk Fundamentalists
1:01:18 What Will General AI Look Like?
1:05:43 Sci-Fi Works About AI
1:09:21 The Rationale Behind Cryonics
1:12:55 What Does a Positive Future Look Like?
1:15:52 Are We Living in a Simulation?
1:18:11 Many Worlds in Quantum Mechanics Interpretation
1:20:25 Ideal Future Podcast Guest for Doom Debates
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Roon is a member of the technical staff at OpenAI. He's a highly respected voice on tech Twitter, despite being a pseudonymous cartoon avatar account. In late 2021, he invented the terms "shape rotator" and "wordcel" to refer to roughly visual/spatial/mathematical intelligence vs. verbal intelligence. He is simultaneously a serious thinker, a builder, and a shitposter.
I'm excited to learn more about Roon, his background, his life, and of course, his views about AI and existential risk.
00:00 Introduction
02:43 Roon's Quest and Philosophies
22:32 AI Creativity
30:42 What's Your P(Doom)™
54:40 AI Alignment
57:24 Training vs. Production
01:05:37 ASI
01:14:35 Goal-Oriented AI and Instrumental Convergence
01:22:43 Pausing AI
01:25:58 Crux of Disagreement
01:27:55 Dogecoin
01:29:13 Doom Debates' Mission
Show Notes
Follow Roon: https://x.com/tszzl
For Humanity: An AI Safety Podcast with John Sherman - https://www.youtube.com/@ForHumanityPodcast
Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk: https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I'm part of: https://pauseai.info/
Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Today I'm reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.
Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He's best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and for making complex insights from his field accessible to a wider readership via his blog.
Scott is one of my biggest intellectual influences. His famous Who Can Name the Bigger Number? essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.
Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.
Unfortunately, what I heard in the interview confirms my worst fears about the meaning of "safety" at today's AI companies: that they're laughably clueless about how to achieve any measure of safety, but instead of doing the adult thing and slowing down their capabilities work, they're pushing forward recklessly.
00:00 Introducing Scott Aaronson
02:17 Scott's Recruitment by OpenAI
04:18 Scott's Work on AI Safety at OpenAI
08:10 Challenges in AI Alignment
12:05 Watermarking AI Outputs
15:23 The State of AI Safety Research
22:13 The Intractability of AI Alignment
34:20 Policy Implications and the Call to Pause AI
38:18 Out-of-Distribution Generalization
45:30 Moral Worth Criterion for Humans
51:49 Quantum Mechanics and Human Uniqueness
01:00:31 Quantum No-Cloning Theorem
01:12:40 Scott Is Almost An Accelerationist?
01:18:04 Geoffrey Hinton's Proposal for Analog AI
01:36:13 The AI Arms Race and the Need for Regulation
01:42:58 Scott Aaronson's Thoughts on Sam Altman
01:42:58 Scott Rejects the Orthogonality Thesis
01:46:35 Final Thoughts
01:48:48 Lethal Intelligence Clip
01:51:42 Outro
Show Notes
Scott's Interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0
Scott's Blog: https://scottaaronson.blog
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Today I'm reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.
Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can't truly reason.
The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.
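As a quick refresher on what that comparison means, here's a toy N-gram (bigram) model in Python. It's a generic textbook sketch, not anything from Rao's talk: the model predicts the next token purely from counts of what followed the same context in its training text.
```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()
N = 2  # bigram model: condition on just the previous token

# Count which token follows each (N-1)-token context.
counts = defaultdict(Counter)
for i in range(len(corpus) - N + 1):
    context, nxt = tuple(corpus[i : i + N - 1]), corpus[i + N - 1]
    counts[context][nxt] += 1

def predict(context):
    """Return the most frequent continuation of the context and its probability."""
    c = counts[tuple(context)]
    token, freq = c.most_common(1)[0]
    return token, freq / sum(c.values())

print(predict(["the"]))  # ('cat', 0.5) -- "the" is followed by cat, cat, mat, fish
```
Rao's claim, as I understand it, is that LLMs are doing a much fancier, compressed version of this kind of retrieval rather than genuine reasoning.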
00:00 Introduction
02:54 Essentially N-Gram Models?
10:31 The Manhole Cover Question
20:54 Reasoning vs. Approximate Retrieval
47:03 Explaining Jokes
53:21 Caesar Cipher Performance
01:10:44 Creativity vs. Reasoning
01:33:37 Reasoning By Analogy
01:48:49 Synthetic Data
01:53:54 The ARC Challenge
02:11:47 Correctness vs. Style
02:17:55 AIs Becoming More Robust
02:20:11 Block Stacking Problems
02:48:12 PlanBench and Future Predictions
02:58:59 Final Thoughts
Show Notes
Rao's interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A
Rao's Twitter: https://x.com/rao2z
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.
Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in a 99.999% P(Doom).
00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What's Your P(Doom)™
14:04 Nethys's Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
36:51 Defining AGI
39:09 The Convergence of AI Models
47:32 The Role of "Unlicensed" Thinkers
52:07 The PauseAI Movement
58:20 Lethal Intelligence Video Clip
Show Notes
Eliezer Yudkowsky's post on "Death with Dignity": https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That's why he's joining me on Doom Debates for a very special AI + space crossover episode.
00:00 Fraser Cain's Background and Interests
05:03 What's Your P(Doom)™
07:05 Our Vulnerable World
15:11 Don't Look Up
22:18 Cosmology and the Search for Alien Life
31:33 Stars = Terrorists
39:03 The Great Filter and the Fermi Paradox
55:12 Grabby Aliens Hypothesis
01:19:40 Life Around Red Dwarf Stars?
01:22:23 Epistemology of Grabby Aliens
01:29:04 Multiverses
01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
01:47:25 Simulation Hypothesis
01:51:25 Final Thoughts
SHOW NOTES
Fraser's YouTube channel: https://www.youtube.com/@frasercain
Universe Today (space and astronomy news): https://www.universetoday.com/
Max Tegmark's book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256
Robin Hanson's ideas:
Grabby Aliens: https://grabbyaliens.com
The Great Filter: https://en.wikipedia.org/wiki/Great_Filter
Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml
---
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
---
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for Part II! This time we're going straight to debating my favorite topic, AI doom.
00:00 Introduction
02:23 High-Level AI Doom Argument
17:06 How Powerful Could Intelligence Be?
22:34 "Knowledge Creation"
48:33 "Creativity"
54:57 Stand-Up Comedy as a Test for AI
01:12:53 Vaden & Ben's Goalposts
01:15:00 How to Change Liron's Mind
01:20:02 LLMs are Stochastic Parrots?
01:34:06 Tools vs. Agents
01:39:51 Instrumental Convergence and AI Goals
01:45:51 Intelligence vs. Morality
01:53:57 Mainline Futures
02:16:50 Lethal Intelligence Video
Show Notes
Vaden & Ben's Podcast: https://www.youtube.com/@incrementspod
Recommended playlists from their podcast:
* The Bayesian vs Popperian Epistemology Series
* The Conjectures and Refutations Series
Vaden's Twitter: https://x.com/vadenmasrani
Ben's Twitter: https://x.com/BennyChugg
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com