Episodes

  • When the American company OpenAI released ChatGPT, it was the first time that a lot of people had ever interacted with Generative AI. ChatGPT has become so popular that, for many, it’s now synonymous with artificial intelligence.

    But that may be changing. Earlier this year, a Chinese startup called DeepSeek launched its own AI chatbot, sending shockwaves across Silicon Valley. According to DeepSeek, its model – DeepSeek-R1 – is just as powerful as ChatGPT but was developed at a fraction of the cost. In other words, this isn’t just a new company – it could be an entirely different approach to building artificial intelligence.

    To try to understand what DeepSeek means for the future of AI, and for American innovation, I wanted to speak with Karen Hao. Hao was the first reporter ever to profile OpenAI and has covered AI for MIT Technology Review, The Atlantic and The Wall Street Journal. So she’s better positioned than almost anyone to make sense of this seemingly monumental shift in the landscape of artificial intelligence.

    Mentioned:

    “The messy, secretive reality behind OpenAI’s bid to save the world,” by Karen Hao

    Further Reading:

    “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” by DeepSeek-AI and others.

    “A Comparison of DeepSeek and Other LLMs,” by Tianchen Gao, Jiashun Jin, Zheng Tracy Ke, Gabriel Moryoussef

    “Technical Report: Analyzing DeepSeek-R1’s Impact on AI Development,” by Azizi Othman

  • Do I have your attention right now? I’m guessing probably not. Or, at least, not all of it. In all likelihood, you’re listening to this on your morning commute, or while you wash the dishes or check your e-mail.

    We are living in a world of perpetual distraction. There are more things to read, watch and listen to than ever before – but our brains, it turns out, can only absorb so much. Politicians like Donald Trump have figured out how to exploit this dynamic. If you’re constantly saying outrageous things, it becomes almost impossible to focus on the things that really matter. Trump’s former strategist Steve Bannon called this strategy “flooding the zone.”

    As the host of the MSNBC show All In, Chris Hayes has had a front-row seat to the war for our attention – and, now, he’s decided to sound the alarm with a new book called The Sirens’ Call: How Attention Became the World’s Most Endangered Resource.

    Hayes joined me to explain how our attention became so scarce, and what happens to us when we lose the ability to focus on the things that matter most.

    Mentioned:

    "Twitter and Tear Gas: The Power and Fragility of Networked Protest," by Zeynep Tufekci

    Further Reading:

    "Ethics of the Attention Economy: The Problem of Social Media Addiction," by Vikram R. Bhargava and Manuel Velasquez.

    "The Attention Economy Labour, Time and Power in Cognitive Capitalism," by Claudio Celis Bueno

    “The business of news in the attention economy: Audience labor and MediaNews Group’s efforts to capitalize on news consumption,” Brice Nixon

  • It’s become pretty easy to spot phishing scams: UPS orders you never made, banking alerts from companies you don’t bank with, phone calls from unfamiliar area codes. But over the past decade, these scams – and the technology behind them – have become more sophisticated, invasive and sinister, largely due to the rise of something called ‘mercenary spyware.’

    The most potent version of this tech is Pegasus, a surveillance tool developed by an Israeli company called NSO Group. Once Pegasus infects your phone, it can see your texts, track your movements, and download your passwords – all without you realizing you’ve been hacked.

    We know a lot of this because of Ron Deibert. Twenty years ago, he founded Citizen Lab, a research group at the University of Toronto that has helped expose some of the most high-profile cases of cyber espionage around the world.

    Ron has a new book out called Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy, and he sat down with me to explain how spyware works, and what it means for our privacy – and our democracy.

    Note: We reached out to NSO Group about the claims made in this episode and they did not reply to our request for comment.

    Mentioned:

    “Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy,” by Ron Deibert

    “Meta’s WhatsApp says spyware company Paragon targeted users in two dozen countries,” by Raphael Satter, Reuters

    Further Reading:

    “The Autocrat in Your iPhone,” by Ron Deibert

    “A Comprehensive Analysis of Pegasus Spyware and Its Implications for Digital Privacy and Security,” by Karwan Kareem

    “Stopping the Press: New York Times Journalist Targeted by Saudi-linked Pegasus Spyware Operator,” by Bill Marczak, Siena Anstis, Masashi Crete-Nishihata, John Scott-Railton, and Ron Deibert

  • We’ve spent a lot of time on this show talking about AI: how it’s changing war, how your doctor might be using it, and whether or not chatbots are curing, or exacerbating, loneliness.

    But what we haven’t done on this show is try to explain how AI actually works. So this seemed like as good a time as any to ask our listeners if they had any burning questions about AI. And it turns out you did.

    Where do our queries go once they’ve been fed into ChatGPT? What are the justifications for using a chatbot that may have been trained on plagiarized material? And why do we even need AI in the first place?

    To help answer your questions, we are joined by Derek Ruths, a Professor of Computer Science at McGill University, and the best person I know at helping people (including myself) understand artificial intelligence.

    Further Reading:

    “Yoshua Bengio Doesn’t Think We’re Ready for Superhuman AI. We’re Building It Anyway,” Machines Like Us podcast

    “ChatGPT is blurring the lines between what it means to communicate with a machine and a human,” by Derek Ruths

    “A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going,” by Michael Wooldridge

    “Artificial Intelligence: A Guide for Thinking Humans,” by Melanie Mitchell

    “Anatomy of an AI System,” by Kate Crawford and Vladan Joler

    “Two years after the launch of ChatGPT, how has generative AI helped businesses?,” by Joe Castaldo

  • We spend a lot of time talking about AI on this show: how we should govern it, the ideologies of the people making it, and the ways it's reshaping our lives.

    But before we barrel into a year where I think AI will be everywhere, we thought this might be a good moment to step back and ask an important question: what exactly is AI?

    On our next episode, we'll be joined by Derek Ruths, a Professor of Computer Science at McGill University.

    And he's given me permission to ask him anything and everything about AI.

    If you have questions about AI, or how it’s impacting your life, we want to hear them. Send an email or a voice recording to: [email protected]

    Thanks – and we’ll see you next Tuesday!

  • In February, 2024, Megan Garcia’s 14-year-old son Sewell took his own life.

    As she tried to make sense of what happened, Megan discovered that Sewell had fallen in love with a chatbot on Character.AI – an app where you can talk to chatbots designed to sound like historical figures or fictional characters. Now Megan is suing Character.AI, alleging that Sewell developed a “harmful dependency” on the chatbot that, coupled with a lack of safeguards, ultimately led to her son’s death.

    They’ve also named Google in the suit, alleging that the technology that underlies Character.AI was developed while the founders were working at Google.

    I sat down with Megan Garcia and her lawyer, Meetali Jain, to talk about what happened to Sewell. And to try to understand the broader implications of a world where chatbots are becoming a part of our lives – and the lives of our children.

    We reached out to Character.AI and Google about this story. Google did not respond to our request for comment by publication time.

    A spokesperson for Character.AI made the following statement:

    “We do not comment on pending litigation.

    Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry. As part of this, we have launched a separate model for our teen users – with specific safety features that place more conservative limits on responses from the model.

    The Character.AI experience begins with the Large Language Model that powers so many of our user and Character interactions. Conversations with Characters are driven by a proprietary model we continuously update and refine. For users under 18, we serve a version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. This initiative – combined with the other techniques described below – combine to produce two distinct user experiences on the Character.AI platform: one for teens and one for adults.

    Additional ways we have integrated safety across our platform include:

    Model Outputs: A “classifier” is a method of distilling a content policy into a form used to identify potential policy violations. We employ classifiers to help us enforce our content policies and filter out sensitive content from the model’s responses. The under-18 model has additional and more conservative classifiers than the model for our adult users.

    User Inputs: While much of our focus is on the model’s output, we also have controls to user inputs that seek to apply our content policies to conversations on Character.AI. This is critical because inappropriate user inputs are often what leads a language model to generate inappropriate outputs. For example, if we detect that a user has submitted content that violates our Terms of Service or Community Guidelines, that content will be blocked from the user’s conversation with the Character. We also have a process in place to suspend teens from accessing Character.AI if they repeatedly try to input prompts into the platform that violate our content policies.

    Additionally, under-18 users are now only able to access a narrower set of searchable Characters on the platform. Filters have been applied to this set to remove Characters related to sensitive or mature topics.

    We have also added a time spent notification and prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice. As we continue to invest in the platform, we will be rolling out several new features, including parental controls. For more information on these new features, please refer to the Character.AI blog HERE.

    There is no ongoing relationship between Google and Character.AI. In August, 2024, Character.AI completed a one-time licensing of its technology and Noam went back to Google.”

    If you or someone you know is thinking about suicide, support is available 24-7 by calling or texting 988, Canada’s national suicide prevention helpline.

    Mentioned:

    Megan Garcia v. Character Technologies, et al.

    “Google Paid $2.7 Billion to Bring Back an AI Genius Who Quit in Frustration” by Miles Kruppa and Lauren Thomas

    “Belgian man dies by suicide following exchanges with chatbot,” by Lauren Walker

    “Can AI Companions Cure Loneliness?,” Machines Like Us

    “An AI companion suggested he kill his parents. Now his mom is suing,” by Nitasha Tiku

    Further Reading:

    “Can A.I. Be Blamed for a Teen’s Suicide?” by Kevin Roose

    “Margrethe Vestager Fought Big Tech and Won. Her Next Target is AI,” Machines Like Us

  • In July, there was a recall on two brands of plant-based milks, Silk and Great Value, after a listeria outbreak that led to at least 20 illnesses and three deaths. Public health officials determined the same strain of listeria had been making people sick for almost a year. When Globe reporters began looking into what happened, they found a surprising fact: the facility that the bacteria was traced to had not been inspected for listeria in years.

    The reporters learned that in 2019 the Canadian Food Inspection Agency introduced a new system that relies on an algorithm to prioritize sites for inspectors to visit. Investigative reporters Grant Robertson and Kathryn Blaze Baum talk about why this new system of tracking was created, and what went wrong.

  • The board game Go has more possible board configurations than there are atoms in the universe.

    Because of that seemingly infinite complexity, developing software that could master Go has long been a goal of the AI community.

    In 2016, researchers at Google’s DeepMind appeared to meet the challenge. Their Go-playing AI defeated one of the best Go players in the world, Lee Sedol.

    After the match, Lee Sedol retired, saying that losing to an AI felt like his entire world was collapsing.

    He wasn’t alone. For a lot of people, the game represented a turning point – the moment where humans had been overtaken by machines.

    But Frank Lantz saw that game and was invigorated. Lantz is a game designer (his game “Hey Robot” is a recurring feature on The Tonight Show Starring Jimmy Fallon), the director of the NYU Game Center, and the author of The Beauty of Games. He’s spent his career thinking about how technology is changing the nature of games – and what we can learn about ourselves when we sit down to play them.

    Mentioned:

    “AlphaGo”

    “The Beauty of Games” by Frank Lantz

    “Adversarial Policies Beat Superhuman Go AIs” by Tony Wang et al.

    “Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern

    “Heads-up limit hold’em poker is solved” by Michael Bowling et al.

    Further Reading:

    “How to Play a Game” by Frank Lantz

    “The Afterlife of Go” by Frank Lantz

    “How A.I. Conquered Poker” by Keith Romer

    “In Two Moves, AlphaGo and Lee Sedol Redefined the Future” by Cade Metz

    Hey Robot by Frank Lantz

    Universal Paperclips by Frank Lantz

  • The past few months have seen a series of bold proclamations from the most powerful people in tech.

    In September, Mark Zuckerberg announced that Meta had developed “the most advanced glasses the world had ever seen.” That same day, OpenAI CEO Sam Altman predicted we could have artificial superintelligence within a couple of years. Elon Musk has said he’ll land rockets on Mars by 2026.

    We appear to be living through the kinds of technological leaps we used to only dream about. But whose dreams were those, exactly?

    In her latest book, Imagination: A Manifesto, Ruha Benjamin argues that our collective imagination has been monopolized by the Zuckerbergs and Musks of the world. But, she says, it doesn’t need to be that way.

    Mentioned:

    “Imagination: A Manifesto,” by Ruha Benjamin

    Summer of Soul (...Or, When the Revolution Could Not Be Televised), directed by Questlove

    “The Black Woman: An Anthology,” by Toni Cade Bambara

    “The New Artificial Intelligentsia,” by Ruha Benjamin

    “Race After Technology,” by Ruha Benjamin

    Breonna's Garden, with Ju'Niyah Palmer

    “Viral Justice,” by Ruha Benjamin

    The Parable Series, by Octavia Butler

    Further Reading:

    “AI could make health care fairer—by helping us believe what patients say,” by Karen Hao

    “How an Attempt at Correcting Bias in Tech Goes Wrong,” by Sidney Fussell

    “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” by Joy Buolamwini

    “The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence,” by Timnit Gebru and Émile P. Torres

  • Margrethe Vestager has spent the past decade standing up to Silicon Valley. As the EU’s Competition Commissioner, she’s waged landmark legal battles against tech giants like Meta, Microsoft and Amazon. Her two latest wins will cost Apple and Google billions of dollars.

    With her decade-long tenure as one of the world’s most powerful antitrust watchdogs coming to an end, Vestager has turned her attention to AI. She spearheaded the EU’s AI Act – the first, and so far the most ambitious, piece of AI legislation in the world.

    But the clock is ticking – both on her term and on the global race to govern AI, which Vestager says we have “very little time” to get right.

    Mentioned:

    The EU Artificial Intelligence Act

    “Dutch scandal serves as a warning for Europe over risks of using algorithms,” by Melissa Heikkilä

    “Belgian man dies by suicide following exchanges with chatbot” by Lauren Walker

    The Digital Services Act

    The Digital Markets Act

    General Data Protection Regulation (GDPR)

    “The future of European competitiveness” by Mario Draghi

    “Governing AI for Humanity: Final Report” by the United Nations Secretary-General’s High-level Advisory Body

    The Artificial Intelligence and Data Act (AIDA)

    Further Reading:

    “Apple, Google must pay billions in back taxes and fines, E.U. court rules” by Ellen Francis and Cat Zakrzewski

    “OpenAI Lobbied the E.U. to Water Down AI Regulation” by Billy Perrigo

    “The total eclipse of Margrethe Vestager” by Samuel Stolton

    “Digital Empires: The Global Battle to Regulate Technology” by Anu Bradford

    “The Brussels Effect: How the European Union Rules the World” by Anu Bradford

  • We’re off this week, so we’re bringing you an episode from our Globe and Mail sister show Lately.

    That creeping feeling that everything online is getting worse has a name: “enshittification,” a term for the slow degradation of our experience on digital platforms. The enshittification cycle is why you now have to wade through slop to find anything useful on Google, and why your charger is different from your BFF’s.

    According to Cory Doctorow, the man who coined the memorable moniker, this digital decay isn’t inevitable. It’s a symptom of corporate under-regulation and monopoly – practices being challenged in courts around the world, like the US Department of Justice’s antitrust suit against Google.

    Cory Doctorow is a British-Canadian journalist, blogger and author of Chokepoint Capitalism, as well as speculative fiction works like The Lost Cause and the new novella Spill.

    Every Friday, Lately takes a deep dive into the big, defining trends in business and tech that are reshaping our every day. It’s hosted by Vass Bednar.

    Machines Like Us will be back in two weeks.

  • The tech lobby has quietly turned Silicon Valley into the most powerful political operation in America.

    Pro-crypto donors are now responsible for almost half of all corporate donations this election. Elon Musk has gone from an occasional online troll to, as one of our guests calls him, “MAGA’s Minister of Propaganda.” And for the first time, the once reliably blue Silicon Valley seems to be shifting to the right. What does all this mean for the upcoming election?

    To help us better understand this moment, we spoke with three of the most prominent tech writers in the U.S. Charles Duhigg (author of the bestseller Supercommunicators) has a recent piece in the New Yorker called “Silicon Valley, the New Lobbying Monster.” Charlie Warzel is a staff writer at the Atlantic, and Nitasha Tiku is a tech culture reporter at the Washington Post.

    Mentioned:

    “Silicon Valley, the New Lobbying Monster” by Charles Duhigg

    “Big Crypto, Big Spending: Crypto Corporations Spend an Unprecedented $119 Million Influencing Elections” by Rick Claypool via Public Citizen

    “I’m Running Out of Ways to Explain How Bad This Is” by Charlie Warzel

    “Elon Musk Has Reached a New Low” by Charlie Warzel

    “The movement to diversify Silicon Valley is crumbling amid attacks on DEI” by Naomi Nix, Cat Zakrzewski and Nitasha Tiku

    “The Techno-Optimist Manifesto” by Marc Andreessen

    “Trump Vs. Biden: Tech Policy,” The Ben & Marc Show

    “The MAGA Aesthetic Is AI Slop” by Charlie Warzel

    Further Reading:

    “Biden's FTC took on big tech, big pharma and more. What antitrust legacy will Biden leave behind?” by Paige Sutherland and Meghna Chakrabarti

    “Inside the Harris campaign’s blitz to win back Silicon Valley” by Cat Zakrzewski, Nitasha Tiku and Elizabeth Dwoskin

    “The Little Tech Agenda” by Marc Andreessen and Ben Horowitz

    “Silicon Valley had Harris’s back for decades. Will she return the favor?” by Cristiano Lima-Strong and Cat Zakrzewski

    “SEC’s Gensler turns tide against crypto in courts” by Declan Harty

    “Trump vs. Harris is dividing Silicon Valley into feuding political camps” by Trisha Thadani, Elizabeth Dwoskin, Nitasha Tiku and Gerrit De Vynck

    “Inside the powerful Peter Thiel network that anointed JD Vance” by Elizabeth Dwoskin, Cat Zakrzewski, Nitasha Tiku and Josh Dawsey

  • What kind of future are we building for ourselves? In some ways, that’s the central question of this show.

    It’s also a central question of speculative fiction. And one that few people have tried to answer as thoughtfully – and as poetically – as Emily St. John Mandel.

    Mandel is one of Canada’s great writers. She’s the author of six award-winning novels, the most recent of which is Sea of Tranquility – a story about a future where we have moon colonies and time-travelling detectives. But Mandel might be best known for Station Eleven, which was made into a big HBO miniseries in 2021. In Station Eleven, Mandel envisioned a very different future. One where a pandemic has wiped out nearly everyone on the planet, and the world has returned to a pre-industrial state. In other words, a world without technology.

    I think speculative fiction carries tremendous power. In fact, I think that AI is ultimately an act of speculation. The AI we have chosen to build, and our visions of what AI could become, have been shaped by acts of imagination.

    So I wanted to speak to someone who has made a career imagining other worlds, and thinking about how humans will fit into them.

    Mentioned:

    “Last Night in Montreal” by Emily St. John Mandel

    “Station Eleven” by Emily St. John Mandel

    The Nobel Prize in Literature 2014 – Lecture by Patrick Modiano

    “The Glass Hotel” by Emily St. John Mandel

    “Sea of Tranquility” by Emily St. John Mandel

    Summary of the 2023 WGA MBA, Writers Guild of America

    Her (2013)

    “The Handmaid’s Tale” by Margaret Atwood

    “Shell Game” by Evan Ratliff

    Replika

    Further Reading:

    “Can AI Companions Cure Loneliness?,” Machines Like Us

    “Yoshua Bengio Doesn’t Think We’re Ready for Superhuman AI. We’re Building It Anyway,” Machines Like Us

    “The Road” by Cormac McCarthy

  • A couple of weeks ago, I was at this splashy AI conference in Montreal called All In. It was – how should I say this – a bit over the top. There were smoke machines, thumping dance music, food trucks. It was a far cry from the quiet research labs where AI was developed.

    While I remain skeptical of the promise of artificial intelligence, this conference made it clear that the industry is, well, all in. The stage was filled with startup founders promising that AI was going to revolutionize the way we work, and government officials saying AI was going to supercharge the economy.

    And then there was Yoshua Bengio.

    Bengio is one of AI’s pioneering figures. In 2018, he and two colleagues won the Turing Award – the closest thing computer science has to a Nobel Prize – for their work on deep learning. In 2022, he was the most cited computer scientist in the world. It wouldn’t be hyperbolic to suggest that AI as we know it today might not exist without Yoshua Bengio.

    But in the last couple of years, Bengio has had an epiphany of sorts. And he now believes that, left unchecked, AI has the potential to wipe out humanity. So these days, he’s dedicated himself to AI safety. He’s a professor at the University of Montreal and the founder of Mila – the Quebec Artificial Intelligence Institute.

    And he was at this big AI conference too, amidst all these Silicon Valley types, pleading with the industry to slow down before it’s too late.

    Mentioned:

    “Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks” by Yoshua Bengio

    “Deep Learning” by Yann LeCun, Yoshua Bengio, Geoffrey Hinton

    “Computing Machinery and Intelligence” by Alan Turing

    “International Scientific Report on the Safety of Advanced AI”

    “Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?” by R. Ren et al.

    “SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”

    Further reading:

    “‘Deep Learning’ Guru Reveals the Future of AI” by Cade Metz

    “MontrĂ©al Declaration for a Responsible Development of Artificial Intelligence”

    “This A.I. Subculture’s Motto: Go, Go, Go” by Kevin Roose

    “Reasoning through arguments against taking AI safety seriously” by Yoshua Bengio

    In 2015, 195 countries gathered in Paris to discuss how to address the climate crisis. Although there was plenty they couldn’t agree on, there was one point of near-absolute consensus: if the planet becomes 2°C hotter than it was before industrialization, the effects will be catastrophic.

    Despite that consensus, we have continued barrelling toward that 2°C threshold. And while the world is finally paying attention to climate change, the pace of our action is radically out of step with the severity of the problem.

    What is becoming increasingly clear is that just cutting our emissions – by switching to clean energy or driving electric cars – will not be sufficient. We will also need some bold technological solutions if we want to maintain some semblance of life as we know it.

    Luckily, everything is on the table. Grinding entire mountains into powder and dumping them into oceans. Sucking carbon directly out of the air and burying it underground. Spraying millions of tons of sulphur dioxide directly into the atmosphere.

    Gwynne Dyer has spent the past four years interviewing the world’s leading climate scientists about the moonshots that could save the planet. Dyer is a journalist and historian who has written a dozen books over his career, and has become one of Canada’s most trusted commentators on war and geopolitics.

    But his latest book, Intervention Earth, is about the battle to save the planet.

    Like any reporting on the climate, it’s inevitably a little depressing. But with this book Dyer has also given us a different way of thinking about the climate crisis – and maybe even a road map for how technology could help us avoid our own destruction.

    Mentioned:

    “Intervention Earth: Life-Saving Ideas from the World’s Climate Engineers” by Gwynne Dyer

    “Scientists warn Earth warming faster than expected – due to reduction in ship pollution” by Nicole Mortillaro

    “Global warming in the pipeline” by James Hansen et al.

    “Albedo Enhancement by Stratospheric Sulfur Injections: A Contribution to Resolve a Policy Dilemma?” by Paul Crutzen

    Further Reading:

    Interview with Hans Joachim Schellnhuber and Gwynne Dyer

  • For nearly a year now, the world has been transfixed – and horrified – by what’s happening in the Gaza Strip. Yet for all the media coverage, there seems to be far less known about how this war is actually being fought. And the how of this conflict, and its enormous human toll, might end up being its most enduring legacy.

    In April, the Israeli magazine +972 published a story describing how Israel was using an AI system called Lavender to target potential enemies for air strikes, sometimes with a margin of error as high as 10 per cent.

    I remember reading that story back in the spring and being shocked, not that such tools existed, but that they were already being used at this scale on the battlefield. P.W. Singer was less surprised. Singer is one of the world’s foremost experts on the future of warfare. He’s a strategist at the think tank New America, a professor of practice at Arizona State University, and a consultant for everyone from the US military to the FBI.

    So if anyone can help us understand the black box of autonomous weaponry and AI warfare, it’s P.W. Singer.

    Mentioned:

    “‘The Gospel’: how Israel uses AI to select bombing targets in Gaza” by Harry Davies, Bethan McKernan, and Dan Sabbagh

    “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza” by Yuval Abraham

    “Ghost Fleet: A Novel of the Next World War” by P. W. Singer and August Cole

    Further Reading:

    “Burn-In: A Novel of the Real Robotic Revolution” by P. W. Singer and August Cole

    “The AI revolution is already here” by P. W. Singer

    “Humans must be held responsible for decisions AI weapons make” in The Asahi Shimbun

    “Useful Fiction”

  • Things do not look good for journalism right now. This year, Bell Media, VICE, and the CBC all announced significant layoffs. In the US, there were cuts at the Washington Post, the LA Times, Vox and NPR – to name just a few. A recent study from Northwestern University found that an average of two and a half American newspapers closed down every single week in 2023 (up from two a week the year before).

    One of the central reasons for this is that the advertising model that has supported journalism for more than a century has collapsed. Simply put, Google and Meta have built a better advertising machine, and they’ve crippled journalism’s business model in the process.

    It wasn’t always obvious this was going to happen. Fifteen or twenty years ago, a lot of publishers were actually making deals with social media companies, thinking they were going to lead to bigger audiences and more clicks.

    But these turned out to be Faustian bargains. The journalism industry took a nosedive, while Google and Meta became two of the most profitable companies in the world.

    And now we might be doing it all over again with a new wave of tech companies like OpenAI. Over the past several years, OpenAI, operating in a kind of legal grey area, has trained its models on news content it hasn’t paid for. While some news outlets, like the New York Times, have chosen to sue OpenAI for copyright infringement, many publishers (including The Atlantic, the Financial Times, and NewsCorp) have elected to sign deals with OpenAI instead.

    Julia Angwin has been worried about the thorny relationship between big tech and journalism for years. She’s written a book about MySpace, documented the rise of big tech, and won a Pulitzer for her tech reporting with the Wall Street Journal.

    She was also one of the few people warning publishers the first time around that making deals with social media companies maybe wasn’t the best idea.

    Now, she’s ringing the alarm again, this time as a New York Times contributing opinion writer and the CEO of a journalism startup called Proof News that is preoccupied with the question of how to get people reliable information in the age of AI.

    Mentioned:

    “Stealing MySpace: The Battle to Control the Most Popular Website in America,” by Julia Angwin

    “What They Know” WSJ series by Julia Angwin

    “The Bad News About the News” by Robert G. Kaiser

    “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work” by Michael M. Grynbaum and Ryan Mac

    “Seeking Reliable Election Information? Don’t Trust AI” by Julia Angwin, Alondra Nelson, Rina Palta

    Further Reading:

    “Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance” by Julia Angwin

    “A Letter From Our Founder” by Julia Angwin

  • Last year, the venture capitalist Marc Andreessen published a document he called “The Techno-Optimist Manifesto.” In it, he argued that “everything good is downstream of growth,” that government regulation is bad, and that the only way to achieve real progress is through technology.

    Of course, Silicon Valley has always been driven by libertarian sensibilities and an optimistic view of technology. But the radical techno-optimism of people like Andreessen, and billionaire entrepreneurs like Peter Thiel and Elon Musk, has morphed into something more extreme. In their view, technology and government are always at odds with one another.

    But if that’s true, then how do you explain someone like Audrey Tang?

    Tang, who, until May of this year, was Taiwan’s first Minister of Digital Affairs, is unabashedly optimistic about technology. But she’s also a fervent believer in the power of democratic government.

    To many in Silicon Valley, this is an oxymoron. But Tang doesn’t see it that way. To her, technology and government are – and have always been – symbiotic.

    So I wanted to ask her what a technologically enabled democracy might look like – and she has plenty of ideas. At times, our conversation got a little bit wonky. But ultimately, this is a conversation about a better, more inclusive form of democracy. And why she thinks technology will get us there.

    Just a quick note: we recorded this interview a couple of months ago, while Tang was still the Minister of Digital Affairs.

    Mentioned:

    “vTaiwan”

    “Polis”

    “Plurality: The Future of Collaborative Technology and Democracy” by E. Glen Weyl, Audrey Tang and ⿻ Community

    “Collective Constitutional AI: Aligning a Language Model with Public Input,” Anthropic

    Further Reading:

    “The simple but ingenious system Taiwan uses to crowdsource its laws” by Chris Horton

    “How Taiwan’s Unlikely Digital Minister Hacked the Pandemic” by Andrew Leonard

  • If you listened to our last couple of episodes, you’ll have heard some pretty skeptical takes on AI. But if you look at the stock market right now, you won’t see any trace of that skepticism. Since the launch of ChatGPT in late 2022, the chip company NVIDIA, whose chips are used in the majority of AI systems, has seen its stock shoot up by 700%. A month ago, that briefly made it the most valuable company in the world, with a market cap of more than $3.3 trillion.

    And it’s not just chip companies. The S&P 500 (the index that tracks the 500 largest companies in the U.S.) is at an all-time high this year, in no small part because of the sheen of AI. And here in Canada, a new report from Microsoft claims that generative AI will add $187 billion to the domestic economy by 2030.

    As wild as these numbers are, they may just be the tip of the iceberg. Some researchers argue that AI will completely revolutionize our economy, leading to per capita growth rates of 30%. In case those numbers mean absolutely nothing to you, 25 years of 30% growth would make us roughly a thousand times richer than we are now. It’s hard to imagine what that world would look like – or how the average person fits into it.

    Luckily, Rana Foroohar has given this some thought. Foroohar is a global business columnist and an associate editor at the Financial Times. I wanted to have her on the show to help me work through what these wild predictions really mean and, most importantly, whether she thinks they’ll come to fruition.
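    If you want to check that compounding arithmetic yourself, here is a minimal back-of-the-envelope sketch in Python. The 30 per cent annual growth figure is the episode’s, not an established forecast.

    ```python
    # Compound a hypothetical 30% annual per-capita growth rate over 25 years.
    growth_rate = 0.30
    years = 25

    multiplier = (1 + growth_rate) ** years
    print(f"{years} years at {growth_rate:.0%} growth -> ~{multiplier:,.0f}x richer")
    # Output: 25 years at 30% growth -> ~706x richer
    # (on the order of the thousandfold increase described above)
    ```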

    Mentioned:

    “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” by Daron Acemoglu and Simon Johnson (2023)

    “Manias, Panics, and Crashes: A History of Financial Crises” by Charles P. Kindleberger (1978)

    “Irrational Exuberance” by Robert J. Shiller (2016)

    “Gen AI: Too much spend, too little benefit?” by Goldman Sachs Research (2024)

    “Workers could be the ones to regulate AI” by Rana Foroohar (Financial Times, 2023)

    “The Financial Times and OpenAI strike content licensing deal” (Financial Times, 2024)

    “Is AI about to kill what’s left of journalism?” by Rana Foroohar (Financial Times, 2024)

    “Deaths of Despair and the Future of Capitalism” by Anne Case and Angus Deaton (2020)

    “The China Shock: Learning from Labor Market Adjustment to Large Changes in Trade” by David H. Autor, David Dorn & Gordon H. Hanson (2016)

    Further Reading:

    “Beware AI euphoria” by Rana Foroohar (Financial Times, 2024)

    “AlphaGo” by Google DeepMind (2020)

  • Douglas Rushkoff has spent the last thirty years studying how digital technologies have shaped our world. The renowned media theorist is the author of twenty books, the host of the Team Human podcast, and a professor of Media Theory and Digital Economics at City University of New York. But when I sat down with him, he didn’t seem all that excited to be talking about AI. Instead, he suggested – I think only half jokingly – that he’d rather be talking about the new reboot of Dexter.

    Rushkoff’s lack of enthusiasm around AI may stem from the fact that he doesn’t see it as the ground-shifting technology that some do. Rather, he sees generative artificial intelligence as just the latest in a long line of communication technologies – more akin to radio or television than fire or electricity.

    But while he may not believe that artificial intelligence is going to bring about some kind of techno-utopia, he does think its impact will be significant. So eventually we did talk about AI. And we ended up having an incredibly lively conversation about whether computers can create real art, how the “Californian Ideology” has shaped artificial intelligence, and why it’s not too late to ensure that technology is enabling human flourishing – not eroding it.

    Mentioned:

    “Cyberia” by Douglas Rushkoff

    “The Original WIRED Manifesto” by Louis Rossetto

    “The Long Boom: A History of the Future, 1980–2020” by Peter Schwartz and Peter Leyden

    “Survival of the Richest: Escape Fantasies of the Tech Billionaires” by Douglas Rushkoff

    “Artificial Creativity: How AI teaches us to distinguish between humans, art, and industry” by Douglas Rushkoff

    “Empirical Science Began as a Domination Fantasy” by Douglas Rushkoff

    “A Declaration of the Independence of Cyberspace” by John Perry Barlow

    “The Californian Ideology” by Richard Barbrook and Andy Cameron

    “Can AI Bring Humanity Back to Health Care?,” Machines Like Us Episode 5

    Further Reading:

    “The Medium is the Massage: An Inventory of Effects” by Marshall McLuhan

    “Technopoly: The Surrender of Culture to Technology” by Neil Postman

    “Amusing Ourselves to Death” by Neil Postman