Episodes
-
What happens when you bring over 15,000 machine learning nerds to one city? If your guess didn't include racism, sabotage and scandal, belated epiphanies, a spicy SoLaR panel, and many fantastic research papers, you wouldn't have captured my experience. In this episode we discuss the drama and takeaways from NeurIPS 2024.
Posters available at time of episode preparation can be found on the episode webpage.
EPISODE RECORDED 2024.12.22
(00:00) - Recording date
(00:05) - Intro
(00:44) - Obligatory mentions
(01:54) - SoLaR panel
(18:43) - Test of Time
(24:17) - And now: science!
(28:53) - Downsides of benchmarks
(41:39) - Improving the science of ML
(53:07) - Performativity
(57:33) - NopenAI and Nanthropic
(01:09:35) - Fun/interesting papers
(01:13:12) - Initial takes on o3
(01:18:12) - WorkArena
(01:25:00) - Outro
Links
Note: many workshop papers had not yet been published to arXiv as of preparing this episode; the OpenReview submission page is provided in these cases.
NeurIPS statement on inclusivity
CTOL Digital Solutions article - NeurIPS 2024 Sparks Controversy: MIT Professor's Remarks Ignite "Racism" Backlash Amid Chinese Researchers' Triumphs
(1/2) NeurIPS Best Paper - Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction
Visual Autoregressive Model report (this link now provides a 404 error)
Don't worry, here it is on archive.is
Reuters article - ByteDance seeks $1.1 mln damages from intern in AI breach case, report says
CTOL Digital Solutions article - NeurIPS Award Winner Entangled in ByteDance's AI Sabotage Accusations: The Two Tales of an AI Genius
Reddit post on Ilya's talk
SoLaR workshop page
Referenced Sources
Harvard Data Science Review article - Data Science at the Singularity
Paper - Reward Reports for Reinforcement Learning
Paper - It's Not What Machines Can Learn, It's What We Cannot Teach
Paper - NeurIPS Reproducibility Program
Paper - A Metric Learning Reality Check
Improving Datasets, Benchmarks, and Measurements
Tutorial video + slides - Experimental Design and Analysis for AI Researchers (I think you need to have attended NeurIPS to access the recording, but I couldn't find a different version)
Paper - BetterBench: Assessing AI Benchmarks, Uncovering Issues, and Establishing Best Practices
Paper - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
Paper - A Systematic Review of NeurIPS Dataset Management Practices
Paper - The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track
Paper - Benchmark Repositories for Better Benchmarking
Paper - Croissant: A Metadata Format for ML-Ready Datasets
Paper - Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox
Paper - Evaluating Generative AI Systems is a Social Science Measurement Challenge
Paper - Report Cards: Qualitative Evaluation of LLMs
Governance Related
Paper - Towards Data Governance of Frontier AI Models
Paper - Ways Forward for Global AI Benefit Sharing
Paper - How do we warn downstream model providers of upstream risks?
Unified Model Records tool
Paper - Policy Dreamer: Diverse Public Policy Creation via Elicitation and Simulation of Human Preferences
Paper - Monitoring Human Dependence on AI Systems with Reliance Drills
Paper - On the Ethical Considerations of Generative Agents
Paper - GPAI Evaluation Standards Taskforce: Towards Effective AI Governance
Paper - Levels of Autonomy: Liability in the age of AI Agents
Certified Bangers + Useful Tools
Paper - Model Collapse Demystified: The Case of Regression
Paper - Preference Learning Algorithms Do Not Learn Preference Rankings
LLM Dataset Inference paper + repo
dattri paper + repo
DeTikZify paper + repo
Fun Benchmarks/Datasets
Paloma paper + dataset
RedPajama paper + dataset
Assemblage webpage
WikiDBs webpage
WhodunitBench repo
ApeBench paper + repo
WorkArena++ paper
Other Sources
Paper - The Mirage of Artificial Intelligence Terms of Use Restrictions
-
The idea of model cards, introduced as a measure to increase transparency and understanding of LLMs, has been perverted into a marketing gimmick, as exemplified by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close read of the system card. Be warned, there's a lot of muck in this one.
Note: All figures/tables discussed in the podcast can be found on the podcast website at https://kairos.fm/muckraikers/e009/
(00:00) - Recorded 2024.12.08
(00:54) - Actual intro
(03:00) - System cards vs. academic papers
(05:36) - Starting off sus
(08:28) - o1.continued
(12:23) - Rant #1: figure 1
(18:27) - A diamond in the rough
(19:41) - Hiding copyright violations
(21:29) - Rant #2: Jacob on "hallucinations"
(25:55) - More ranting and "hallucination" rate comparison
(31:54) - Fairness, bias, and bad science comms
(35:41) - System, dev, and user prompt jailbreaking
(39:28) - Chain-of-thought and Rao-Blackwellization
(44:43) - "Red-teaming"
(49:00) - Apollo's bit
(51:28) - METR's bit
(59:51) - Pass@???
(01:04:45) - SWE Verified
(01:05:44) - Appendix bias metrics
(01:10:17) - The muck and the meaning
Links
o1 system card
OpenAI press release collection - 12 Days of OpenAI
Additional o1 Coverage
NIST + AISI report - US AISI and UK AISI Joint Pre-Deployment Test
Apollo Research's paper - Frontier Models are Capable of In-context Scheming
VentureBeat article - OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro
The Atlantic article - The GPT Era Is Already Ending
On Data Labelers
60 Minutes article + video - Labelers training AI say they're overworked, underpaid and exploited by big American tech companies
Reflections article - The hidden health dangers of data labeling in AI development
Privacy International article - Humans in the AI loop: the data labelers behind some of the most powerful LLMs' training datasets
Chain-of-Thought Papers Cited
Paper - Measuring Faithfulness in Chain-of-Thought Reasoning
Paper - Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
Paper - On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
Paper - Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models
Other Mentioned/Relevant Sources
Andy Jones blogpost - Rao-Blackwellization
Paper - Training on the Test Task Confounds Evaluation and Emergence
Paper - Best-of-N Jailbreaking
Research landing page - SWE Bench
Code Competition - Konwinski Prize
Lakera game - Gandalf
Kate Crawford's Atlas of AI
BlueDot Impact's course - Intro to Transformative AI
Unrelated Developments
Cruz's letter to Merrick Garland
AWS News Blog article - Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance
BleepingComputer article - Ultralytics AI model hijacked to infect thousands with cryptominer
The Register article - Microsoft teases Copilot Vision, the AI sidekick that judges your tabs
Fox Business article - OpenAI CEO Sam Altman looking forward to working with Trump admin, says US must build best AI infrastructure
-
Are there episodes missing?
-
While on the campaign trail, Trump made claims about repealing Biden's Executive Order on AI, but what will actually change when he takes office? We take this opportunity to examine policies being discussed or implemented by leading governments around the world.
(00:00) - Intro
(00:29) - Hot off the press
(02:59) - Repealing the AI executive order?
(11:16) - "Manhattan" for AI
(24:33) - EU
(30:47) - UK
(39:27) - Bengio
(44:39) - Comparing EU/UK to USA
(45:23) - China
(51:12) - Taxes
(55:29) - The muck
Links
SFChronicle article - US gathers allies to talk AI safety as Trump's vow to undo Biden's AI policy overshadows their work
Trump's Executive Order on AI (the AI governance executive order at home)
Biden's Executive Order on AI
Congressional report brief which advises a "Manhattan Project for AI"
Non-USA
CAIRNE resource collection on CERN for AI
UK Frontier AI Taskforce report (2023)
International interim report (2024)
Bengio's paper - AI and Catastrophic Risk
Davidad's Safeguarded AI program at ARIA
MIT Technology Review article - Four things to know about China's new AI rules in 2024
GovInsider article - Australia's national policy for ethical use of AI starts to take shape
Future of Privacy Forum article - The African Union's Continental AI Strategy: Data Protection and Governance Laws Set to Play a Key Role in AI Regulation
Taxes
Macroeconomic Dynamics paper - Automation, Stagnation, and the Implications of a Robot Tax
CESifo paper - AI, Automation, and Taxation
GavTax article - Taxation of Artificial Intelligence and Automation
Perplexity Pages
CERN for AI page
China's AI policy page
Singapore's AI policy page
AI policy in Africa, India, Australia page
Other Sources
Artificial Intelligence Made Simple article - NYT's "AI Outperforms Doctors" Story Is Wrong
Intel report - Reclaim Your Day: The Impact of AI PCs on Productivity
Heise Online article - Users on AI PCs slower, Intel sees problem in unenlightened users
The Hacker News article - North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn
Futurism article - Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They're Underage
Vice article - 'AI Jesus' Is Now Taking Confessions at a Church in Switzerland
Politico article - Ted Cruz: Congress 'doesn't know what the hell it's doing' with AI regulation
US Senate Committee on Commerce, Science, and Transportation press release - Sen. Cruz Sounds Alarm Over Industry Role in AI Czar Harris's Censorship Agenda
-
Multiple news outlets, including The Information, Bloomberg, and Reuters [see sources], are reporting an "end of scaling" for the current AI paradigm. In this episode we look into these articles, as well as a wide variety of economic forecasting, empirical analysis, and technical papers, to understand the validity and impact of these reports. We also use this as an opportunity to contextualize the realized versus promised fruits of "AI".
(00:23) - Hot off the press
(01:49) - The end of scaling
(10:50) - "Useful tools" and "agentic" "AI"
(17:19) - The end of quantization
(25:18) - Hedging
(29:41) - The end of upwards mobility
(33:12) - How to grow an economy
(38:14) - Transformative & disruptive tech
(49:19) - Finding the meaning
(56:14) - Bursting AI bubble and Trump
(01:00:58) - The muck
Links
The Information article - OpenAI Shifts Strategy as Rate of 'GPT' AI Improvements Slows
Bloomberg article - OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Reuters article - OpenAI and others seek new path to smarter AI as current methods hit limitations
Paper on the end of quantization - Scaling Laws for Precision
Tim Dettmers Tweet on "Scaling Laws for Precision"
Empirical Analysis
WU Vienna paper - Unslicing the pie: AI innovation and the labor share in European regions
IMF paper - The Labor Market Impact of Artificial Intelligence: Evidence from US Regions
NBER paper - Automation, Career Values, and Political Preferences
Pew Research Center report - Which U.S. Workers Are More Exposed to AI on Their Jobs?
Forecasting
NBER/Acemoglu paper - The Simple Macroeconomics of AI
NBER/Acemoglu paper - Harms of AI
IMF report - Gen-AI: Artificial Intelligence and the Future of Work
Submission to Open Philanthropy AI Worldviews Contest - Transformative AGI by 2043 is <1% likely
Externalities and the Bursting Bubble
NBER paper - Bubbles, Rational Expectations and Financial Markets
Clayton Christensen lecture capture - Clayton Christensen: Disruptive innovation
The New Republic article - The "Godfather of AI" Predicted I Wouldn't Have a Job. He Was Wrong.
Latent Space article - $2 H100s: How the GPU Rental Bubble Burst
On Productization
Palantir press release on introduction of Claude to US security and defense
Ars Technica article - Claude AI to process secret government data through new Palantir deal
OpenAI press release on partnering with Condé Nast
Candid Technology article - Shutterstock and Getty partner with OpenAI and BRIA
E2B
Stripe agents
Robopair
Other Sources
CBS News article - Google AI chatbot responds with a threatening message: "Human … Please die."
Biometric Update article - Travelers to EU may be subjected to AI lie detector
Techcrunch article - OpenAI's tumultuous early years revealed in emails from Musk, Altman, and others
Richard Ngo Tweet on leaving OpenAI
-
October 2024 saw a National Security Memorandum and US framework for using AI in national security contexts. We go through the content so you don't have to, pull out the important bits, and summarize our main takeaways.
(00:48) - The memorandum
(06:28) - What the press is saying
(10:39) - What's in the text
(13:48) - Potential harms
(17:32) - Miscellaneous notable stuff
(31:11) - What's the US government's take on AI?
(45:45) - The civil side - comments on reporting
(49:31) - The commenters
(01:07:33) - Our final hero
(01:10:46) - The muck
Links
United States National Security Memorandum on AI
Fact Sheet on the National Security Memorandum
Framework to Advance AI Governance and Risk Management in National Security
Related Media
CAIS Newsletter - AI Safety Newsletter #43
NIST report - Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
ACLU press release - ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections
Wikipedia article - Presidential Memorandum
Reuters article - White House presses gov't AI use with eye on security, guardrails
Forbes article - America's AI Security Strategy Acknowledges There's No Stopping AI
DefenseScoop article - New White House directive prods DOD, intelligence agencies to move faster adopting AI capabilities
NYTimes article - Biden Administration Outlines Government 'Guardrails' for A.I. Tools
Forbes article - 5 Things To Know About The New National Security Memorandum On AI – And What ChatGPT Thinks
Federal News Network interview - A look inside the latest White House artificial intelligence memo
Govtech article - Reactions Mostly Positive to National Security AI Memo
The Information article - Biden Memo Encourages Military Use of AI
Other Sources
Physical Intelligence press release - π0: Our First Generalist Policy
OpenAI press release - Introducing ChatGPT Search
WhoPoo App!!
-
Frontier developers continue their war on sane versioning schemes to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the new model, but also why Anthropic may have released this model and tool combination now.
(00:00) - Intro
(00:22) - Hot off the press
(05:03) - Claude 3.5 Sonnet (New) Two 'o' 3000
(09:23) - Breaking down "computer use"
(13:16) - Our understanding
(16:03) - Diverging business models
(32:07) - Why has Anthropic chosen this strategy?
(43:14) - Changing the frame
(48:00) - Polishing the lily
Links
Anthropic press release - Introducing Claude 3.5 Sonnet (New)
Model Card Addendum
Other Anthropic Relevant Media
Paper - Sabotage Evaluations for Frontier Models
Anthropic press release - Anthropic's Updated RSP
Alignment Forum blogpost - Anthropic's Updated RSP
Tweet - Response to scare regarding Anthropic training on user data
Anthropic press release - Developing a computer use model
Simon Willison article - Initial explorations of Anthropic's new Computer Use capability
Tweet - ARC Prize performance
The Information article - Anthropic Has Floated $40 Billion Valuation in Funding Talks
Other Sources
LWN.net article - OSI readies controversial Open AI definition
National Security Memorandum
Framework to Advance AI Governance and Risk Management in National Security
Reuters article - Mother sues AI chatbot company Character.AI, Google over son's suicide
Medium article - A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models
The Guardian article - Google's solution to accidental algorithmic racism: ban gorillas
TIME article - Ethical AI Isn't to Blame for Google's Gemini Debacle
Latacora article - The SOC2 Starting Seven
Grandview Research market trends - Robotic Process Automation Market Trends
-
Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode we look at OpenAI's recent massive funding round and ask "why would anyone want to fund a company that is set to lose a net 5 billion USD in 2024?" We scrape through a whole lot of muck to find the meaningful signals in all this news, and there is a lot of it, so get ready!
(00:00) - Intro
(00:28) - Hot off the press
(02:43) - Why listen?
(06:07) - Why might VCs invest?
(15:52) - What are people saying
(23:10) - How *is* OpenAI making money?
(28:18) - Is AI hype dying?
(41:08) - Why might big companies invest?
(48:47) - Concrete impacts of AI
(52:37) - Outcome 1: OpenAI as a commodity
(01:04:02) - Outcome 2: AGI
(01:04:42) - Outcome 3: best plausible case
(01:07:53) - Outcome 1*: many ways to bust
(01:10:51) - Outcome 4+: shock factor
(01:12:51) - What's the muck
(01:21:17) - Extended outro
Links
Reuters article - OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia
Goldman Sachs report - GenAI: Too Much Spend, Too Little Benefit
Apricitas Economics article - The AI Investment Boom
Discussion of "The AI Investment Boom" on YCombinator
State of AI in 13 Charts
Fortune article - OpenAI sees $5 billion loss in 2024 and soaring sales as big ChatGPT fee hikes planned, report says
More on AI Hype (Dying)
Latent Space article - The Winds of AI Winter
Article by Gary Marcus - The Great AI Retrenchment has Begun
TimmermanReport article - AI: If Not Now, When? No, Really - When?
MIT News article - Who Will Benefit from AI?
Washington Post article - The AI Hype bubble is deflating. Now comes the hard part.
Andreessen Horowitz article - Why AI Will Save the World
Other Sources
Human-Centered Artificial Intelligence Foundation Model Transparency Index
Cointelegraph article - Europe gathers global experts to draft 'Code of Practice' for AI
Reuters article - Microsoft's VP of GenAI research to join OpenAI
Twitter post from Tim Brooks on joining DeepMind
Edward Zitron article - The Man Who Killed Google Search
-
The Open Source AI Definition is out after years of drafting, will it reestablish brand meaning for the “Open Source” term? Also, the 2024 Nobel Prizes in Physics and Chemistry are heavily tied to AI; we scrutinize not only this year's prizes, but also Nobel Prizes as a concept.
(00:00) - Intro
(00:30) - Hot off the press
(03:45) - Open Source AI background
(10:30) - Definitions and changes in RC1
(18:36) - "Business source"
(22:17) - Parallels with legislation
(26:22) - Impacts of the OSAID
(33:58) - 2024 Nobel Prize Context
(37:21) - Chemistry prize
(45:06) - Physics prize
(50:29) - Takeaways
(52:03) - What's the real muck?
(01:00:27) - Outro
Links
Open Source AI Definition, Release Candidate 1
OSAID RC1 announcement
All Nobel Prizes 2024
More Reading on Open Source AI
Kairos.FM article - Open Source AI is a lie, but it doesn't have to be
The Register article - The open source AI civil war approaches
MIT Technology Review article - We finally have a definition for open-source AI
On Nobel Prizes
Paper - Access to Opportunity in the Sciences: Evidence from the Nobel Laureates
Physics prize - scientific background, popular info
Chemistry prize - scientific background, popular info
Reuters article - Google's Nobel prize winners stir debate over AI research
Wikipedia article - Nobel disease
Other Sources
Pivot.ai article - People are 'blatantly stealing my work,' AI artist complains
Paper - GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
Paper - Reclaiming AI as a Theoretical Tool for Cognitive Science | Computational Brain & Behavior
-
Why is Mark Ruffalo talking about SB1047, and what is it anyway? Tune in for our thoughts on the now vetoed California legislation that had Big Tech scared.
(00:00) - Intro
(00:31) - Updates from a relatively slow week
(03:32) - Disclaimer: SB1047 vetoed during recording (still worth a listen)
(05:24) - What is SB1047
(12:30) - Definitions
(17:18) - Understanding the bill
(28:42) - What are the players saying about it?
(46:44) - Addressing critiques
(55:59) - Open Source
(01:02:36) - Takeaways
(01:15:40) - Clarification on impact to big tech
(01:18:51) - Outro
Links
SB1047 legislation page
SB1047 CalMatters page
Newsom vetoes SB1047
CAIS newsletter on SB1047
Prominent AI nerd letter
Anthropic's letter
SB1047 ~explainer
Additional SB1047 Related Coverage
Opposition to SB1047 'makes no sense'
Newsom on SB1047
Andreessen Horowitz on SB1047
Classy move by Dan
Ex-OpenAI employee says Altman doesn't want regulation
Other Sources
o1 doesn't measure up in new benchmark paper
OpenAI losses and gains
OpenAI crypto hack
"Murati out" - Mira Murati, probably
Altman pitching datacenters to White House
Sam Altman, 'podcast bro'
Paper: Contract Design with Safety Inspections
-
OpenAI's new model is out, and we are going to have to rake through a lot of muck to get the value out of this one!
⚠ Opt out of LinkedIn's GenAI scraping ➡️ https://lnkd.in/epziUeTi
(00:00) - Intro
(00:25) - Other recent news
(02:57) - Hot off the press
(03:58) - Why might someone care?
(04:52) - What is it?
(06:49) - How is it being sold?
(10:45) - How do they explain it, technically?
(27:09) - Reflection AI Drama
(40:19) - Why do we care?
(46:39) - Scraping away the muck
Note: at around 32 minutes, Igor says the incorrect Llama model version for the story he is telling. Jacob dubbed over those mistakes with the correct versioning.
Links relating to o1
OpenAI blogpost
System card webpage
GitHub collection of o1 related media
AMA Twitter thread
Francois Chollet Tweet on reasoning and o1
The academic paper doing something very similar to o1
Other stuff we mention
OpenAI's huge valuation hinges on upending corporate structure
Meta acknowledges it's scraping all public posts for AI training
White House announces new private sector voluntary commitments to combat image-based sexual abuse
Sam Altman wants you to be grateful
The Zuck is done apologizing
IAPS report on technical safety research at AI companies
Llama2 70B is "about as good" as GPT-4 at summarization tasks