Episode 121
I spoke with Professor Ryan Tibshirani about:
* Differences between the ML and statistics communities in scholarship, terminology, and other areas.
* Trend filtering
* Why you can't just use garbage prediction functions when doing conformal prediction
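That last bullet points at a real property of conformal methods. As a rough illustration (mine, not from the episode; the data and model names below are invented), here is a minimal sketch of split conformal prediction: under exchangeability, the coverage guarantee holds for any predictor, even a useless one, but the interval width reflects the predictor's quality.

```python
# Split conformal prediction, minimal sketch (illustrative only; not code from
# the episode). Coverage ~ (1 - alpha) holds for ANY predictor under
# exchangeability -- but a garbage predictor pays for it with huge intervals.
import numpy as np

def split_conformal(predict, X_cal, y_cal, X_test, alpha=0.1):
    """(1 - alpha) prediction intervals via calibration-residual quantiles."""
    scores = np.abs(y_cal - predict(X_cal))                # nonconformity scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)   # finite-sample correction
    q = np.quantile(scores, level)
    preds = predict(X_test)
    return preds - q, preds + q

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=2000)
y = X**2 + rng.normal(scale=0.5, size=2000)

models = {
    "decent": lambda x: x**2,                # knows the true trend
    "garbage": lambda x: np.zeros_like(x),   # predicts 0 everywhere
}

results = {}
for name, f in models.items():
    lo, hi = split_conformal(f, X[:1000], y[:1000], X[1000:], alpha=0.1)
    coverage = float(np.mean((y[1000:] >= lo) & (y[1000:] <= hi)))
    results[name] = (coverage, float(hi[0] - lo[0]))       # width = 2q, constant here
    print(f"{name}: coverage={coverage:.2f}, interval width={results[name][1]:.2f}")
```

Both predictors hit roughly 90% empirical coverage; the difference shows up entirely in the interval width, which is why the conformal guarantee alone doesn't make a bad model useful.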
Ryan is a Professor in the Department of Statistics at UC Berkeley. He is also a Principal Investigator in the Delphi group. From 2011-2022, he was a faculty member in Statistics and Machine Learning at Carnegie Mellon University. From 2007-2011, he did his Ph.D. in Statistics at Stanford University.
Reach me at [email protected] for feedback, ideas, and guest suggestions.
The Gradient Podcast on: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.
Outline:
* (00:00) Intro
* (01:10) Ryan's background and path into statistics
* (07:00) Cultivating taste as a researcher
* (11:00) Conversations within the statistics community
* (18:30) Use of terms, disagreements over stability and definitions
* (23:05) Nonparametric Regression
* (23:55) Background on trend filtering
* (33:48) Analysis and synthesis frameworks in problem formulation
* (39:45) Neural networks as a specific take on synthesis
* (40:55) Divided differences, falling factorials, and discrete splines
* (41:55) Motivations and background
* (48:07) Divided differences vs. derivatives, approximation and efficiency
* (51:40) Conformal prediction
* (52:40) Motivations
* (1:10:20) Probabilistic guarantees in conformal prediction, choice of predictors
* (1:14:25) Assumptions: i.i.d. and exchangeability – conformal prediction beyond exchangeability
* (1:25:00) Next directions
* (1:28:12) Epidemic forecasting – COVID-19 impact and trends survey
* (1:29:10) Survey methodology
* (1:38:20) Data defect correlation and its limitations for characterizing datasets
* (1:46:14) Outro
Links:
* Ryan's homepage
* Works read/mentioned
* Nonparametric Regression
* Adaptive Piecewise Polynomial Estimation via Trend Filtering (2014)
* Divided Differences, Falling Factorials, and Discrete Splines: Another Look at Trend Filtering and Related Problems (2020)
* Distribution-free Inference
* Distribution-Free Predictive Inference for Regression (2017)
* Conformal Prediction Under Covariate Shift (2019)
* Conformal Prediction Beyond Exchangeability (2023)
* Delphi and COVID-19 research
* Flexible Modeling of Epidemics
* Real-Time Estimation of COVID-19 Infections
* The US COVID-19 Trends and Impact Survey and Big data, big problems: Responding to "Are we there yet?"
Get full access to The Gradient at thegradientpub.substack.com/subscribe
In episode 120 of The Gradient Podcast, Daniel Bashir speaks to Sasha Luccioni.
Sasha is the AI and Climate Lead at HuggingFace, where she spearheads research, consulting, and capacity-building to elevate the sustainability of AI systems. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events and serving as a mentor to under-represented minorities within the AI community.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.
Outline:
* (00:00) Intro
* (00:43) Sasha's background
* (01:52) How Sasha became interested in sociotechnical work
* (03:08) Larger models and theory of change for AI/climate work
* (07:18) Quantifying emissions for ML systems
* (09:40) Aggregate inference vs training costs
* (10:22) Hardware and data center locations
* (15:10) More efficient hardware vs. bigger models – Jevons paradox
* (17:55) Uninformative experiments, takeaways for individual scientists, knowledge sharing, failure reports
* (27:10) Power Hungry Processing: systematic comparisons of ongoing inference costs
* (28:22) General vs. task-specific models
* (31:20) Architectures and efficiency
* (33:45) Sequence-to-sequence architectures vs. decoder-only
* (36:35) Hardware efficiency/utilization
* (37:52) Estimating the carbon footprint of Bloom and lifecycle assessment
* (40:50) Stable Bias
* (46:45) Understanding model biases and representations
* (52:07) Future work
* (53:45) Metaethical perspectives on benchmarking for AI ethics
* (54:30) "Moral benchmarks"
* (56:50) Reflecting on "ethicality" of systems
* (59:00) Transparency and ethics
* (1:00:05) Advice for picking research directions
* (1:02:58) Outro
Links:
* Sasha's homepage and Twitter
* Papers read/discussed
* Climate Change / Carbon Emissions of AI Models
* Quantifying the Carbon Emissions of Machine Learning
* Power Hungry Processing: Watts Driving the Cost of AI Deployment?
* Tackling Climate Change with Machine Learning
* CodeCarbon
* Responsible AI
* Stable Bias: Analyzing Societal Representations in Diffusion Models
* Metaethical Perspectives on "Benchmarking" AI Ethics
* Measuring Data
* Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice
In episode 119 of The Gradient Podcast, Daniel Bashir speaks to Professor Michael Sipser.
Professor Sipser is the Donner Professor of Mathematics and member of the Computer Science and Artificial Intelligence Laboratory at MIT.
He received his PhD from UC Berkeley in 1980 and joined the MIT faculty that same year. He was Chairman of Applied Mathematics from 1998 to 2000 and Head of the Mathematics Department from 2004 to 2014. He served as interim Dean of Science in 2013-2014 and then as Dean of Science from 2014 to 2020.
He was a research staff member at IBM Research in 1980, spent the 1985-86 academic year on the faculty of the EECS department at Berkeley and at MSRI, and was a Lady Davis Fellow at Hebrew University in 1988. His research areas are in algorithms and complexity theory, specifically efficient error correcting codes, interactive proof systems, randomness, quantum computation, and establishing the inherent computational difficulty of problems. He is the author of the widely used textbook, Introduction to the Theory of Computation (Third Edition, Cengage, 2012).
Outline:
* (00:00) Intro
* (01:40) Professor Sipser's background
* (04:35) On interesting questions
* (09:00) Different kinds of research problems
* (13:00) What makes certain problems difficult
* (18:48) Nature of the P vs NP problem
* (24:42) Identifying interesting problems
* (28:50) Lower bounds on the size of sweeping automata
* (29:50) Why sweeping automata + headway to P vs. NP
* (36:40) Insights from sweeping automata, infinite analogues to finite automata problems
* (40:45) Parity circuits
* (43:20) Probabilistic restriction method
* (47:20) Relativization and the polynomial time hierarchy
* (55:10) P vs. NP
* (57:23) The non-connection between GO's polynomial space hardness and AlphaGo
* (1:00:40) On handicapping Turing Machines vs. oracle strategies
* (1:04:25) The Natural Proofs Barrier and approaches to P vs. NP
* (1:11:05) Debates on methods for P vs. NP
* (1:15:04) On the possibility of solving P vs. NP
* (1:18:20) On academia and its role
* (1:27:51) Outro
Links:
* Professor Sipser's homepage
* Papers discussed/read
* Halting space-bounded computations (1978)
* Lower bounds on the size of sweeping automata (1979)
* GO is Polynomial-Space Hard (1980)
* A complexity theoretic approach to randomness (1983)
* Parity, circuits, and the polynomial-time hierarchy (1984)
* A follow-up to Furst-Saxe-Sipser
* The Complexity of Finite Functions (1991)
In episode 118 of The Gradient Podcast, Daniel Bashir speaks to Andrew Lee.
Andrew is co-founder and CEO of Shortwave, a company dedicated to building a better product experience for email, particularly by leveraging AI. He previously co-founded and was CTO at Firebase.
Outline:
* (00:00) Intro
* (01:43) Andrew's previous work, Firebase
* (04:48) Benefits of lacking experience in building Firebase
* (08:55) On "abstract reasoning" vs. empirical capabilities
* (10:30) Shortwave's AI system as a black box
* (11:55) Motivations for Shortwave
* (17:10) Why is Google not innovating on email?
* (21:53) Shortwave's overarching product vision and pivots
* (27:40) Shortwave AI features
* (33:20) AI features for email and security concerns
* (35:45) Shortwave's AI Email Assistant + architecture
* (43:40) Issues with chaining LLM calls together
* (45:25) Understanding implicit context in utterances, modularization without loss of context
* (48:56) Performance for AI assistant, batching and pipelining
* (55:10) Prompt length
* (57:00) On shipping fast
* (1:00:15) AI improvements that Andrew is following
* (1:03:10) Outro
Links:
* Andrew's blog and Twitter
* Shortwave
* Introducing Ghostwriter
* Everything we shipped for AI Launch Week
* A deep dive into the world's smartest email AI
"You get more of what you engage with. Everyone who complains about coverage should understand that every click, every quote tweet, every argument is registered by these publications as engagement. If what you want is really meaty, dispassionate, balanced, and fair explainers, you need to click on that, you need to read the whole thing, you need to share it, talk about it, comment on it. We get the media that we deserve."
In episode 117 of The Gradient Podcast, Daniel Bashir speaks to Joss Fong.
Joss is a producer focused on science and technology, and was a founding member of the Vox video team. Her work has been recognized by the AAAS Kavli Science Journalism Awards, the Online Journalism Awards, and the News & Documentary Emmys. She holds a master's degree in science, health, and environmental reporting from NYU.
Outline:
* (00:00) Intro
* (01:32) Joss's path into videomaking, J-school
* (07:45) Consumption and creation in explainer journalism
* (10:45) Finding clarity in information
* (13:15) Communication of ML research
* (15:55) Video journalism and science communication as separate and overlapping disciplines
* (19:41) Evolution of videos and videomaking
* (26:33) Explaining AI and communicating mental models
* (30:47) Meeting viewers in the middle, competing for attention
* (34:07) Explanatory techniques in Glad You Asked
* (37:10) Storytelling and communicating scientific information
* (40:57) "Is Beauty Culture Hurting Us?" and participating in video narratives
* (46:37) AI beauty filters
* (52:59) Obvious bias in generative AI
* (59:31) Definitions and ideas of progress, humanities and technology
* (1:05:08) "Iterative development" and outsourcing quality control to the public
* (1:07:10) Disagreement about (tech) journalism's purpose
* (1:08:51) Incentives in newsrooms and journalistic organizations
* (1:12:04) AI for video generation and implications, limits of creativity
* (1:17:20) Skill and creativity
* (1:22:35) Joss's new YouTube channel!
* (1:23:29) Outro
Links:
* Joss's website and playlist of selected work
* AI-focused videos
* AI Art, Explained (2022)
* AI can do your homework. Now what? (2023)
* Computers just got a lot better at writing (2020)
* Facebook showed this ad to 95% women. Is that a problem? (2020)
* What facial recognition steals from us (2019)
* The big debate about the future of work (2017)
* AI and Creativity short film for Runway's AIFF (2023)
* Others
* Is Beauty Culture Hurting Us? from Glad You Asked (2020)
* Joss's Scientific American videos :)
In episode 116 of The Gradient Podcast, Daniel Bashir speaks to Kate Park.
Kate is the Director of Product at Scale AI. Prior to joining Scale, Kate worked on Tesla Autopilot as the AI team's first and lead product manager, building the industry's first data engine. She has also published research on spoken natural language processing, as well as a travel memoir.
Outline:
* (00:00) Intro
* (01:11) Kate's background
* (03:22) Tesla and cameras vs. Lidar, importance of data
* (05:12) "Data is key"
* (07:35) Data vs. architectural improvements
* (09:36) Effort for data scaling
* (10:55) Transfer of capabilities in self-driving
* (13:44) Data flywheels and edge cases, deployment
* (15:48) Transition to Scale
* (18:52) Perspectives on shifting to transformers and data
* (21:00) Data engines for NLP vs. for vision
* (25:32) Model evaluation for LLMs in data engines
* (27:15) InstructGPT and data for RLHF
* (29:15) Benchmark tasks for assessing potential labelers
* (32:07) Biggest challenges for data engines
* (33:40) Expert AI trainers
* (36:22) Future work in data engines
* (38:25) Need for human labeling when bootstrapping new domains or tasks
* (41:05) Outro
Links:
* Scale Data Engine
* OpenAI case study
In episode 115 of The Gradient Podcast, Daniel Bashir speaks to Ben Wellington.
Ben is the Deputy Head of Feature Forecasting at Two Sigma, a financial sciences company. Ben has been at Two Sigma for more than 15 years, and currently leads efforts focused on natural language processing and feature forecasting. He is also the author of the data science blog I Quant NY, which has influenced local government policy, including changes in NYC street infrastructure and the design of NYC subway vending machines. Ben is a Visiting Assistant Professor in the Urban and Community Planning program at the Pratt Institute in Brooklyn, where he teaches statistics using urban open data. He holds a Ph.D. in Computer Science from New York University.
Outline:
* (00:00) Intro
* (01:30) Ben's background
* (04:30) Why Ben was interested in NLP
* (05:48) Ben's work on translational equivalence, dominant techniques
* (10:14) Scaling, large datasets at Two Sigma
* (12:50) Applying ML techniques to quantitative finance, features in financial ML systems
* (17:27) Baselines and time-dependence in constructing features, human knowledge
* (19:23) Black box models in finance
* (24:00) Two Sigma's presence in the AI research community
* (26:55) Short- and long-term research initiatives at Two Sigma
* (30:42) How ML fits into Two Sigma's investment strategy
* (34:05) Alpha and competition in investing
* (36:13) Temporality in data
* (40:38) Challenges for finance/AI and beating the market
* (44:36) Reproducibility
* (49:47) I Quant NY and storytelling with data
* (56:43) Descriptive statistics and stories
* (1:01:05) Benefits of simple methods
* (1:07:11) Outro
Links:
* Ben's work on translational equivalence and scalable discriminative learning
* Two Sigma Insights
* Storytelling with data and I Quant NY
"There is this move from generality in a relative sense of 'we are not as specialized as insects' to generality in the sense of omnipotent, omniscient, godlike capabilities. And I think there's something very dangerous that happens there, which is you start thinking of the word 'general' in completely unhinged ways."
In episode 114 of The Gradient Podcast, Daniel Bashir speaks to Venkatesh Rao.
Venkatesh is a writer and consultant. He has been writing the widely read Ribbonfarm blog since 2007, and more recently, the popular Ribbonfarm Studio Substack newsletter. He is the author of Tempo, a book on timing and decision-making, and is currently working on his second book, on the foundations of temporality. He has been an independent consultant since 2011, supporting senior executives in the technology industry. His work in recent years has focused on AI, semiconductor, sustainability, and protocol technology sectors. He holds a PhD in control theory (2003) from the University of Michigan. He is currently based in the Seattle area, and enjoys dabbling in robotics in his spare time. You can learn more about his work at venkateshrao.com
Outline:
* (00:00) Intro
* (01:38) Origins of Ribbonfarm and Venkat's academic background
* (04:23) Voice and recurring themes in Venkat's work
* (11:45) Patch models and multi-agent systems: integrating philosophy of language, balancing realism with tractability
* (21:00) More on abstractions vs. tractability in Venkat's work
* (29:07) Scaling of industrial value systems, characterizing AI as a discipline
* (39:25) Emergent science, intelligence and abstractions, presuppositions in science, generality and universality, cameras and engines
* (55:05) Psychometric terms
* (1:09:07) Inductive biases (yes, I mentioned the No Free Lunch Theorem and then just talked about the definition of inductive bias and not the actual theorem 🤡)
* (1:18:13) LLM training and efficiency, comparing LLMs to humans
* (1:23:35) Experiential age, analogies for knowledge transfer
* (1:30:50) More clarification on the analogy
* (1:37:20) Massed Muddler Intelligence and protocols
* (1:38:40) Introducing protocols and the Summer of Protocols
* (1:49:15) Evolution of protocols, hardness
* (1:54:20) LLMs, protocols, time, future visions, and progress
* (2:01:33) Protocols, drifting from value systems, friction, compiling explicit knowledge
* (2:14:23) Directions for ML people in protocols research
* (2:18:05) Outro
Links:
* Venkat's Twitter and homepage
* Mediocre Computing
* Summer of Protocols and 2024 Call for Applications (apply!)
* Essays discussed
* Patch models and their applications to multivehicle command and control
* From Mediocre Computing
* Text is All You Need
* Magic, Mundanity, and Deep Protocolization
* A Camera, Not an Engine
* Massed Muddler Intelligence
* On protocols
* The Unreasonable Sufficiency of Protocols
* Protocols Don't Build Pyramids
* Protocols in (Emergency) Time
* Atoms, Institutions, Blockchains
In episode 113 of The Gradient Podcast, Daniel Bashir speaks to Professor Sasha Rush.
Professor Rush is an Associate Professor at Cornell University and a Researcher at HuggingFace. His research aims to develop natural language processing systems that are safe, fast, and controllable. His group is interested primarily in tasks that involve text generation, and they study data-driven probabilistic methods that combine deep-learning based models with probabilistic controls. He is also interested in open-source NLP and deep learning, and develops projects to make deep learning systems safer, clearer, and easier to use.
Outline:
* (00:00) Intro
* (01:47) Professor Rush's background
* (03:23) Professor Rush's reflections on prior work – importance of learning and inference
* (04:58) How much engineering matters in deep learning, the Rush vs. Frankle Bet
* (07:12) On encouraging and incubating good research
* (10:50) Features of good research environments
* (12:36) 5% bets in Professor Rush's research: State-Space Models (SSMs) as an alternative to Transformers
* (15:58) SSMs vs. Transformers
* (18:53) Probabilistic Context-Free Grammars – are (P)CFGs worth paying attention to?
* (20:53) Sequence-level knowledge distillation: approximating sequence-level distributions
* (25:08) Pruning and knowledge distillation – orthogonality of efficiency techniques
* (26:33) Broader thoughts on efficiency
* (28:31) Works on prompting
* (28:58) Prompting and In-Context Learning
* (30:05) Thoughts on mechanistic interpretability
* (31:25) Multitask prompted training enables zero-shot task generalization
* (33:48) How many data points is a prompt worth?
* (35:13) Directions for controllability in LLMs
* (39:11) Controllability and safety
* (41:23) Open-source work, deep learning libraries
* (42:08) A story about Professor Rush's post-doc at FAIR
* (43:51) The impact of PyTorch
* (46:08) More thoughts on deep learning libraries
* (48:48) Levels of abstraction, PyTorch as an interface to motivate research
* (50:23) Empiricism and research commitments
* (53:32) Outro
Links:
* Research
* Early work / PhD
* Dual Decomposition and LP Relaxations
* Vine Pruning for Efficient Multi-Pass Dependency Parsing
* Improved Parsing and POS Tagging Using Inter-Sentence Dependency Constraints
* Research â interpretable and controllable natural language generation
* Compound Probabilistic Context-Free Grammars for Grammar Induction
* Multitask prompted training enables zero-shot task generalization
* Research â deep generative models
* A Neural Attention Model for Abstractive Sentence Summarization
* Learning Neural Templates for Text Generation
* How many data points is a prompt worth?
* Research â efficient algorithms and hardware for speech, translation, dialogue
* Sequence-Level Knowledge Distillation
* Open-source work
* NamedTensor
* Torch Struct
In episode 112 of The Gradient Podcast, Daniel Bashir speaks to Cameron Jones and Sean Trott.
Cameron is a PhD candidate in the Cognitive Science Department at the University of California, San Diego. His research compares how humans and large language models process language about world knowledge, situation models, and theory of mind.
Sean is an Assistant Teaching Professor in the Cognitive Science Department at the University of California, San Diego. His research interests include probing large language models, ambiguity in languages, how ambiguous words are represented, and pragmatic inference. He previously completed his PhD at UCSD.
Outline:
* (00:00) Intro
* (02:55) Cameron's background
* (06:00) Sean's background
* (08:15) Unexpected capabilities of language models and the need for embodiment to understand meaning
* (11:05) Interpreting results of Turing tests, separating what humans and LLMs do when behaving as though they "understand"
* (14:27) Internal mechanisms, interpretability, how we test theories
* (16:40) Languages are efficient, but for whom?
* (17:30) Initial motivations: lexical ambiguity
* (19:20) The balance of meanings across wordforms
* (22:35) Tension between speaker- and comprehender-oriented pressures in lexical ambiguity
* (25:05) Context and potential vs. realized ambiguity
* (27:15) LLM-ology
* (28:30) Studying LLMs as models of human cognition and as interesting objects of study in their own right
* (30:03) Example of explaining away effects
* (33:54) The internalist account of belief sensitivity – behavior and internal representations
* (37:43) LLMs and the False Belief Task
* (42:05) Hypothetical on observed behavior and inferences about internal representations
* (48:05) Distributional Semantics Still Can't Account for Affordances
* (50:25) Tests of embodied theories and limitations of distributional cues
* (53:54) Multimodal models and object affordances
* (58:30) Language and grounding, other buzzwords
* (59:45) How could we know if LLMs understand language?
* (1:04:50) Reference: as a thing words do vs. ontological notion
* (1:11:38) The Role of Physical Inference in Pronoun Resolution
* (1:16:40) World models and world knowledge
* (1:19:45) EPITOME
* (1:20:20) The different tasks
* (1:26:43) Confounders / "attending" in LM performance on tasks
* (1:30:30) Another hypothetical, on theory of mind
* (1:32:26) How much information can language provide in service of mentalizing?
* (1:35:14) Convergent validity and coherence/validity of theory of mind
* (1:39:30) Interpretive questions about behavior w.r.t. theory of mind
* (1:43:35) Does GPT-4 Pass the Turing Test?
* (1:44:00) History of the Turing Test
* (1:47:05) Interrogator strategies and the strength of the Turing Test
* (1:52:15) "Internal life" and personality
* (1:53:30) How should this research impact how we assess / think about LLM abilities?
* (1:58:56) Outro
Links:
* Cameron's homepage and Twitter
* Sean's homepage and Twitter
* Research â Language and NLP
* Languages are efficient, but for whom?
* Research â LLM-ology
* Do LLMs know what humans know?
* Distributional Semantics Still Can't Account for Affordances
* In Cautious Defense of LLM-ology
* Should Psycholinguists use LLMs as "model organisms"?
* (Re)construing Meaning in NLP
* Research â language and grounding, theory of mind, reference [insert other buzzwords here]
* Do LLMs have a "theory of mind"?
* How could we know if LLMs understand language?
* Does GPT-4 Pass the Turing Test?
* Could LMs change language?
* The extended mind and why it matters for cognitive science research
* EPITOME
* The Role of Physical Inference in Pronoun Resolution
In episode 111 of The Gradient Podcast, Daniel Bashir speaks to Nicholas Thompson.
Nicholas is the CEO of The Atlantic. Previously, he served as editor-in-chief of Wired and editor of NewYorker.com. Nick also cofounded Atavist, which sold to Automattic in 2018. Publications under Nick's leadership have won numerous National Magazine Awards and Pulitzer Prizes, and one WIRED story he edited was the basis for the movie Argo. Nick is also the co-founder of Speakeasy AI, a software platform designed to foster constructive online conversations about the world's most pressing problems.
Outline:
* (00:00) Intro
* (02:12) Nick's path into journalism
* (03:25) The Washington Monthly – a turning point
* (05:09) Perspectives from different positions in the journalism industry
* (08:16) What is great journalism?
* (09:42) Example from The Atlantic
* (11:00) Other examples/pieces of good journalism
* (12:20) Pieces on aging
* (12:56) Mortality and life-force associated with running – Nick's piece in WIRED
* (15:30) On urgency
* (18:20) The job of an editor
* (22:23) AI in journalism – benefits and limitations
* (26:45) How AI can help writers, experimentation
* (28:40) Examples of AI in journalism and issues: CNET, Sports Illustrated, Nick's thoughts on how AI should be used in journalism
* (32:20) Speakeasy AI and creating healthy conversation spaces
* (34:00) Details about Speakeasy
* (35:12) Business pivots and business model trouble
* (35:37) Remaining gaps in fixing conversational spaces
* (38:27) Lessons learned
* (40:00) Nickâs optimism about Speakeasy-like projects
* (43:14) Social simulacra, a "Troll WestWorld," algorithmic adjustments in social media
* (46:11) Lessons and wisdom from journalism about engagement, more on engagement in social media
* (50:27) Successful and unsuccessful futures for AI in journalism
* (54:17) Previous warnings about synthetic media, Nick's perspective on risks from synthetic media in journalism
* (57:00) Stop trying to build AGI
* (59:13) Outro
Links:
* Nicholas's Twitter and website
* Speakeasy AI
* Writing
* "To Run My Best Marathon at Age 44, I Had to Outrun My Past" in WIRED
* "The year AI actually changes the media business" in NiemanLab's Predictions for Journalism 2023
In episode 110 of The Gradient Podcast, Daniel Bashir speaks to Professor Subbarao Kambhampati.
Professor Kambhampati is a professor of computer science at Arizona State University. He studies fundamental problems in planning and decision making, motivated by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He was the president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, and a founding board member of the Partnership on AI.
Outline:
* (00:00) Intro
* (02:11) Professor Kambhampati's background
* (06:07) Explanation in AI
* (18:08) What people want from explanations – vocabulary and symbolic explanations
* (21:23) The realization of new concepts in explanation – analogy and grounding
* (30:36) Thinking and language
* (31:48) Conscious and subconscious mental activity
* (36:58) Tacit and explicit knowledge
* (42:09) The development of planning as a research area
* (46:12) RL and planning
* (47:47) What makes a planning problem hard?
* (51:23) Scalability in planning
* (54:48) LLMs do not perform reasoning
* (56:51) How to show LLMs aren't reasoning
* (59:38) External verifiers and backprompting LLMs
* (1:07:51) LLMs as cognitive orthotics, language and representations
* (1:16:45) Finding out what kinds of representations an AI system uses
* (1:31:08) "Compiling" system 2 knowledge into system 1 knowledge in LLMs
* (1:39:53) The Generative AI Paradox, reasoning and retrieval
* (1:43:48) AI as an ersatz natural science
* (1:44:03) Why AI is straying away from its engineering roots, and what constitutes engineering
* (1:58:33) Outro
Links:
* Professor Kambhampati's Twitter and homepage
* Research and Writing â Planning and Human-Aware AI Systems
* A Validation-structure-based theory of plan modification and reuse (1990)
* Challenges of Human-Aware AI Systems (2020)
* Polanyi vs. Planning (2021)
* LLMs and Planning
* Can LLMs Really Reason and Plan? (2023)
* On the Planning Abilities of LLMs (2023)
* Other
* Changing the nature of AI research
In episode 109 of The Gradient Podcast, Daniel Bashir speaks to Russ Maschmeyer.
Russ is the Product Lead for AI and Spatial Commerce at Shopify. At Shopify, he leads a team that looks at how AI can better empower entrepreneurs, with a particular interest in how image generation can help make the lives of business owners and merchants more productive. He previously led design for multiple services at Facebook and co-founded Primer, an AR-enabled interior design marketplace.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS | Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:50) Russ's background and a hacked Kinect sensor
* (06:00) Instruments and emotion, embodiment and accessibility
* (08:45) Natural language as input and generative AI in creating emotive experiences
* (10:55) Work on search queries and recommendations at Facebook, designing for search
* (16:35) AI in the retail and entrepreneurial landscape
* (19:15) Shopify and AI for business owners
* (22:10) Vision and directions for AI in commerce
* (25:01) Personalized experiences for shopping
* (28:45) Challenges for creating personalized experiences
* (31:49) Intro to spatial commerce
* (34:48) AR/VR devices and spatial commerce
* (37:30) MR and AI for immersive product search
* (41:35) Implementation details
* (48:05) WonkaVision and difficulties for immersive web experiences
* (52:10) Future projects and directions for spatial commerce
* (55:10) Outro
Links:
* Russ's Twitter and homepage
* With a Wave of the Hand, Improvising on Kinect in The New York Times
* Shopify Spatial Commerce Projects
* MR and AI for immersive product search
* A more immersive web with a simple optical illusion
* What if your room had a reset button?
-
In episode 108 of The Gradient Podcast, Daniel Bashir speaks to Professor Benjamin Breen.
Professor Breen is an associate professor of history at UC Santa Cruz specializing in the history of science, medicine, globalization, and the impacts of technological change. He is the author of multiple books including The Age of Intoxication: Origins of the Global Drug Trade and the more recent Tripping on Utopia: Margaret Mead, the Cold War, and the Troubled Birth of Psychedelic Science, which you can pre-order now.
Outline:
* (00:00) Intro
* (02:05) Professor Breen's background
* (04:47) End of history narratives / millenarian thinking in AI/technology
* (09:53) Transformative technological change and societal change
* (16:45) AI and psychedelics
* (17:23) Techno-utopianism
* (26:08) Technologies as metaphors for humanity
* (32:34) McLuhanist thinking / brain as a computational machine, Prof. Breen's skepticism
* (37:13) Issues with overblown narratives about technology
* (42:46) Narratives about transformation and their impacts on progress
* (45:23) The historical importance of today's AI landscape
* (50:05) International aspects of the history of technology
* (53:13) Doomerism vs optimism, why doomerism is appealing
* (57:58) Automation, meta-skills, jobs – advice for early career
* (1:01:08) LLMs and (history) education
* (1:07:10) Outro
Links:
* Professor Breen's Twitter and homepage
* Books
* Tripping on Utopia: Margaret Mead, the Cold War, and the Troubled Birth of Psychedelic Science
* The Age of Intoxication: Origins of the Global Drug Trade
* Writings
* Into the mystic
* "Alien Jesus"
* Simulating History with ChatGPT
-
In episode 107 of The Gradient Podcast, Daniel Bashir speaks to Professor Ted Gibson.
Ted is a Professor of Cognitive Science at MIT. He leads the TedLab, which investigates why languages look the way they do; the relationship between culture and cognition, including language; and how people learn, represent, and process language.
Outline:
* (00:00) Intro
* (02:13) Prof Gibson's background
* (05:33) The computational linguistics community and NLP, engineering focus
* (10:48) Models of brains
* (12:03) Prof Gibson's focus on behavioral work
* (12:53) How dependency distances impact language processing
* (14:03) Dependency distances and the origin of the problem
* (18:53) Dependency locality theory
* (21:38) The structures languages tend to use
* (24:58) Sentence parsing: structural integrations and memory costs
* (36:53) Reading strategies vs. ordinary language processing
* (40:23) Legalese
* (46:18) Cross-dependencies
* (50:11) Number as a cognitive technology
* (54:48) Experiments
* (1:03:53) Why counting is useful for Western societies
* (1:05:53) The Whorf hypothesis
* (1:13:05) Language as Communication
* (1:13:28) The noisy channel perspective on language processing
* (1:27:08) Fedorenko lab experiments – language for thought vs. communication and Chomsky's claims
* (1:43:53) Thinking without language, inner voices, language processing vs. language as an aid for other mental processing
* (1:53:01) Dependency grammars and a critique of Chomsky's grammar proposals, LLMs
* (2:08:48) LLM behavior and internal representations
* (2:12:53) Outro
Links:
* Ted's lab page and Twitter
* Re-imagining our theories of language
* Research – linguistic complexity and dependency locality theory
* Linguistic complexity: locality of syntactic dependencies (1998)
* The Dependency Locality Theory: A Distance-Based Theory of Linguistic Complexity (2000)
* Consequences of the Serial Nature of Linguistic Input for Sentential Complexity (2005)
* Large-scale evidence of dependency length minimization in 37 languages (2015)
* Dependency locality as an explanatory principle for word order (2020)
* Robust effects of working memory demand during naturalistic language comprehension in language-selective cortex (2022)
* A resource-rational model of human processing of recursive linguistic structure (2022)
* Research – language processing / communication and cross-linguistic universals
* Number as a cognitive technology: Evidence from Pirahã language and cognition (2008)
* The communicative function of ambiguity in language (2012)
* The rational integration of noisy evidence and prior semantic expectations in sentence interpretation (2013)
* Color naming across languages reflects color use (2017)
* How Efficiency Shapes Human Language (2019)
-
In episode 106 of The Gradient Podcast, Daniel Bashir speaks to Professor Harvey Lederman.
Professor Lederman is a professor of philosophy at UT Austin. He has broad interests in contemporary philosophy and in the history of philosophy: his areas of specialty include philosophical logic, the Ming dynasty philosopher Wang Yangming, epistemology, and philosophy of language. He has recently been working on incomplete preferences, on trying in the philosophy of language, and on Wang Yangming's moral metaphysics.
Outline:
* (00:00) Intro
* (02:15) Harvey's background
* (05:30) Higher-order metaphysics and propositional attitudes
* (06:25) Motivations
* (12:25) Setup: syntactic types and ontological categories
* (25:11) What makes higher-order languages meaningful and not vague?
* (25:57) Higher-order languages corresponding to the world
* (30:52) Extreme vagueness
* (35:32) Desirable features of languages and important questions in philosophy
* (36:42) Higher-order identity
* (40:32) Intuitions about mental content, language, context-sensitivity
* (50:42) Perspectivism
* (51:32) Co-referring names, identity statements
* (55:42) The paper's approach, "know" as context-sensitive
* (57:24) Propositional attitude psychology and mentalese generalizations
* (59:57) The "good standing" of theorizing about propositional attitudes
* (1:02:22) Mentalese
* (1:03:32) "Does knowledge imply belief?" – when a question does not have good standing
* (1:06:17) Sense, Reference, and Substitution
* (1:07:07) Fregeans and the principle of Substitution
* (1:12:12) Follow-up work to this paper
* (1:13:39) Do Language Models Produce Reference Like Libraries or Like Librarians?
* (1:15:02) Bibliotechnism
* (1:19:08) Inscriptions and reference, what it takes for something to refer
* (1:22:37) Derivative and basic reference
* (1:24:47) Intuition: n-gram models and reference
* (1:28:22) Meaningfulness in sentences produced by n-gram models
* (1:30:40) Bibliotechnism and LLMs, disanalogies to n-grams
* (1:33:17) On other recent work (vector grounding, do LMs refer?, etc.)
* (1:40:12) Causal connections and reference, how bibliotechnism makes good on the meanings of sentences
* (1:45:46) RLHF, sensitivity to truth and meaningfulness
* (1:48:47) Intelligibility
* (1:50:52) When LLMs produce novel reference
* (1:53:37) Novel reference vs. find-replace
* (1:56:00) Directionality example
* (1:58:22) Human intentions and derivative reference
* (2:00:47) Between bibliotechnism and agency
* (2:05:32) Where do invented names / novel reference come from?
* (2:07:17) Further questions
* (2:10:04) Outro
Links:
* Harvey's homepage and Twitter
* Papers discussed
* Higher-order metaphysics and propositional attitudes
* Perspectivism
* Sense, Reference, and Substitution
* Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs
-
In episode 105 of The Gradient Podcast, Daniel Bashir speaks to Eric Jang.
Outline:
* (00:00) Intro
* (01:25) Updates since Eric's last interview
* (06:07) The problem space of humanoid robots
* (08:42) Motivations for the book "AI is Good for You"
* (12:20) Definitions of AGI
* (14:35) ~ AGI timelines ~
* (16:33) Do we have the ingredients for AGI?
* (18:58) Rediscovering old ideas in AI and robotics
* (22:13) Ingredients for AGI
* (22:13) Artificial Life
* (25:02) Selection at different levels of information – intelligence at different scales
* (32:34) AGI as a collective intelligence
* (34:53) Human in the loop learning
* (37:38) From getting correct answers to doing things correctly
* (40:20) Levels of abstraction for modeling decision-making – the neurobiological stack
* (44:22) Implementing loneliness and other details for AGI
* (47:31) Experience in AI systems
* (48:46) Asking for Generalization
* (49:25) Linguistic relativity
* (52:17) Language vs. complex thought and Fedorenko experiments
* (54:23) Efficiency in neural design
* (57:20) Generality in the human brain and evolutionary hypotheses
* (59:46) Embodiment and real-world robotics
* (1:00:10) Moravec's Paradox and the importance of embodiment
* (1:05:33) How embodiment fits into the picture – in verification vs. in learning
* (1:10:45) Nonverbal information for training intelligent systems
* (1:11:55) AGI and humanity
* (1:12:20) The positive future with AGI
* (1:14:55) The negative future – technology as a lever
* (1:16:22) AI in the military
* (1:20:30) How AI might contribute to art
* (1:25:41) Eric's own work and a positive future for AI
* (1:29:27) Outro
Links:
* Eric's book
* Eric's Twitter and homepage
-
In episode 104 of The Gradient Podcast, Daniel Bashir speaks to Nathan Benaich.
Nathan is Founder and General Partner at Air Street Capital, a VC firm focused on investing in AI-first technology and life sciences companies. Nathan runs a number of communities focused on AI including the Research and Applied AI Summit and leads Spinout.fyi to improve the creation of university spinouts. Nathan co-authors the State of AI Report.
Outline:
* (00:00) Intro
* (02:00) Updates in Nathan World – Air Street's second fund, spinouts
* (07:30) Events: Research and Applied AI Summit, State of AI Report launches
* (09:50) The State of AI: main messages, the increasing role of subject matter experts
* Research
* (14:13) Open and closed-source
* (17:55) Benchmarking and evaluation, small/large models and industry verticals
* (21:10) "Vibes" in LLM evaluation
* (24:00) Codegen models, personalized AI, curriculum learning
* (26:20) The exhaustion of human-generated data, lukewarm content, synthetic data
* (29:50) Opportunities for AI applications in the natural sciences
* (35:15) Reinforcement Learning from Human Feedback and alternatives
* (38:30) Industry
* (39:00) ChatGPT and productivity
* (42:37) General app wars, ChatGPT competitors
* (45:50) Compute – demand, supply, competition
* (50:55) Export controls and geopolitics
* (54:45) Startup funding and compute spend
* (59:15) Politics
* (59:40) Calls for regulation, regulatory divergence
* (1:04:40) AI safety
* (1:07:30) Nathan's perspective on regulatory approaches
* (1:12:30) The UK's early access to frontier models, standards setting, regulation difficulties
* (1:17:20) Jailbreaking, constitutional AI, robustness
* (1:20:50) Predictions!
* (1:25:00) Generative AI misuse in elections and politics (and, this prediction coming true in Bangladesh)
* (1:26:50) Progress on AI governance
* (1:30:30) European dynamism
* (1:35:08) Outro
Links:
* Nathan's homepage and Twitter
* The 2023 State of AI Report
* Bringing Dynamism to European Defense
* A prediction coming true: How AI is disrupting Bangladesh's election
* Air Street Capital is hiring a full-time Community Lead!
-
In episode 103 of The Gradient Podcast, Daniel Bashir speaks to Dr. Kathleen Fisher.
As the director of DARPA's Information Innovation Office (I2O), Dr. Kathleen Fisher oversees a portfolio that includes most of the agency's AI-related research and development efforts, including the recent AI Forward initiative. AI Forward explores new directions for AI research that will result in trustworthy systems for national security missions. This summer, roughly 200 participants from the commercial sector, academia, and the U.S. government attended workshops that generated ideas to inform DARPA's next phase of AI exploratory projects. Dr. Fisher previously served as a program manager in I2O from 2011 to 2014. As a program manager, she conceptualized, created, and executed programs in high-assurance computing and machine learning, including Probabilistic Programming for Advancing Machine Learning (PPAML), which made building ML applications easier. She was also a co-author of a recent paper about the threats posed by large language models.
Since 2018, DARPA has dedicated over $2 billion in R&D funding to AI research. The agency has been generating groundbreaking research and development for 65 years, leading to game-changing military capabilities and icons of modern society, such as initiating the research field that produced self-driving cars and developing the technology behind Apple's Siri.
Outline:
* (00:00) Intro
* (01:30) Kathleen's background
* (05:05) Intersections between programming languages and AI
* (07:15) Neuro-symbolic AI, trade-offs between flexibility and guarantees
* (09:45) History of DARPA and the Information Innovation Office (I2O)
* (13:55) DARPA's perspective on research
* (17:10) Galvanizing a research community
* (20:06) DARPA's recent investments in AI and AI Forward
* (26:35) Dual-use nature of generative AI, identifying and mitigating security risks, Kathleen's perspective on short-term and long-term risk (note: the "Gradient podcast" Kathleen mentions is from Last Week in AI)
* (30:10) Concerns about deployment and interaction
* (32:20) Outcomes from AI Forward workshops and themes
* (36:10) Incentives in building and using AI technologies, friction
* (38:40) Interactions between DARPA and other government agencies
* (40:09) Future research directions
* (44:04) Ways to stay up to date on DARPA's work
* (45:40) Outro
Links:
* DARPA I2O website
* Probabilistic Programming for Advancing Machine Learning (PPAML) (Archived)
* Assured Neuro Symbolic Learning and Reasoning (ANSR)
* AI Cyber Challenge
* AI Forward
* Identifying and Mitigating the Security Risks of Generative AI Paper
* FoundSci Solicitation
* FACT Solicitation
* Semantic Forensics (SemaFor)
* GARD Open Source Resources
* I2O Newsletter signup
-
In episode 102 of The Gradient Podcast, Daniel Bashir speaks to Peter Tse.
Professor Tse is a Professor of Cognitive Neuroscience and chair of the department of Psychological and Brain Sciences at Dartmouth College. His research focuses on using brain and behavioral data to constrain models of the neural bases of attention and consciousness, unconscious processing that precedes and constructs consciousness, mental causation, and human capacities for imagination and creativity. He is especially interested in the processing that goes into the construction of conscious experience between retinal activation at time 0 and seeing an event about a third of a second later.
Outline:
* (00:00) Intro
* (01:45) Prof. Tse's background
* (03:25) Early experiences in physics/math and philosophy of physics
* (06:10) Choosing to study neuroscience
* (07:15) Prof Tse's commitments about determinism
* (10:00) Quantum theory and determinism
* (13:45) Biases/preferences in choosing theories
* (20:41) Falsifiability and scientific questions, transition from physics to neuroscience
* (30:50) How neuroscience is unusual among the sciences
* (33:20) Neuroscience and subjectivity
* (34:30) Reductionism
* (37:30) Gestalt psychology
* (41:30) Introspection in neuroscience
* (45:30) The preconscious buffer and construction of conscious experience, color constancy
* (53:00) Perceptual and cognitive inference
* (55:00) AI systems and intrinsic meaning
* (57:15) Information vs. meaning
* (1:01:45) Consciousness and representation of bodily states
* (1:05:10) Our second-order free will
* (1:07:20) Jaegwon Kim's exclusion argument
* (1:11:45) Why Kim thought his own argument was wrong
* (1:15:00) Resistance and counterarguments to Kim
* (1:19:45) Criterial causation
* (1:23:00) How neurons evaluate inputs criterially
* (1:24:00) Concept neurons in the hippocampus
* (1:31:57) Criterial causation and physicalism, mental causation
* (1:40:10) Daniel makes another attempt to push back 🤡
* (1:45:47) More on AI
* (1:47:05) Prof Tse's perspective on modern AI systems, differences with human cognition
* (2:17:25) Consciousness, attention, spirituality
* (2:20:10) Prof Tse's hopes for AI
* (2:23:30) Outro
Links:
* Professor Tse's homepage
* Papers
* Vision/Perception
* Perceptual learning based on the learning of diagnostic features
* Complete mergeability and amodal completion
* Attention
* How Attention Can Alter Appearances
* How Top-down Attention Alters Bottom-up preconscious operations
* Consciousness
* Network structure and dynamics of the mental workspace
* On free will
* NDPR review of "Neural Basis of Free Will"
* Kripke's Category Error
* Ontological Indeterminism undermines Kim's Exclusion Argument