Episodes
-
News and journalism organizations and dominant tech companies are in a years-long battle over content, clicks and revenue, and the tech companies are winning. What are policy options that encourage both the sustainability and quality of news content on popular online platforms? In this episode, Rebecca Rand explores perspectives on the subject, drawing on a conversation hosted by Justin Hendrix with experts Anya Schiffrin and Cory Doctorow at the Knight Foundation's INFORMED conference earlier this year.
-
In October 2023, during the third Belt and Road Forum in Beijing, China's leader Xi Jinping signaled a shift in focus from more grandiose physical infrastructure projects to 'small yet smart' initiatives. This shift underscores the need to understand China's ambitions to reshape global digital governance, moving away from an open and free internet towards a model rooted in government control and mass surveillance.
The advocacy group Article 19 documents this shift in a recent report titled "The Digital Silk Road: China and the Rise of Digital Repression in the Indo-Pacific," examining China's influence on digital infrastructure and governance in Cambodia, Malaysia, Nepal, and Thailand. As the Indo-Pacific remains strategically significant for China in deploying next-generation technologies, the report argues that assessing China’s regional partnerships and their implications for digital repression is crucial for understanding its broader ambitions to reshape global digital norms.
To discuss these issues in more depth, Justin Hendrix is joined by:
Michael Caster, Asia Digital Program Manager at ARTICLE 19; and
Catherine Tai, the deputy director for the Asia and the Pacific team at the Center for International Private Enterprise (CIPE).
-
In this episode, we explore a topic that sits at the heart of global digital policy: the contrasting visions of internet governance championed by the United States and its Western allies versus those promoted by China and nations in its orbit. This debate is playing out across various international venues and has profound implications for the future of digital rights, privacy, and the open internet. Justin Hendrix is joined by experts at the Atlantic Council who study these issues from a variety of angles and across multiple geographies, including:
Rose Jackson, the director of the Democracy + Tech Initiative within the Atlantic Council Technology Programs;
Konstantinos Komaitis, a nonresident fellow with the Democracy + Tech Initiative of the Atlantic Council's Digital Forensic Research Lab;
Kenton Thibaut, a senior resident China fellow at the Atlantic Council's Digital Forensic Research Lab; and
Iria Puyosa, a senior research fellow at the Atlantic Council's Digital Forensic Research Lab.
-
Angela Zhang is the author of High Wire: How China Regulates Big Tech and Governs Its Economy, published this year by Oxford University Press. With a career in both the practice and the teaching of law, Zhang has held roles at King's College London and at New York University School of Law, and most recently served as Director of the Philip K. H. Wong Center for Chinese Law at the University of Hong Kong. She will join the University of Southern California as a Professor of Law in fall 2024.
-
A topic we have returned to often on this podcast is the dire need for independent technology researchers to have access to platform data. Without it, we cannot understand the extent of the harms and effects of social media on people and on society, nor the limits of those harms. This makes it difficult to respond in acute moments such as elections, and to understand issues such as the relationship between tech platforms and social cohesion, or mental health, or any number of the other issues policymakers care about.
In this episode, Justin Hendrix speaks with two people on the front lines of the fight to secure access to data, including advocating for Meta to do better in light of the impending deprecation of CrowdTangle, a tool used by researchers to study Meta's products, including Facebook and Instagram. They are:
Brandi Geurkink, the executive director of the Coalition for Independent Technology Research; and
Claire Pershan, EU advocacy lead at the Mozilla Foundation.
-
Madhumita Murgia, AI editor at the Financial Times, is the author of a new book called Code Dependent: Living in the Shadow of AI. The book combines reporting and research to provide a look at the role that AI and automated decision-making are playing in reshaping our lives, our politics, and our economies across the world.
-
Dr. Arati Prabhakar is the Director of the White House Office of Science and Technology Policy (OSTP) and Science Advisor to President Joe Biden. This week, she hosted an event in Washington DC called "AI Aspirations: R&D for Public Missions." Speakers included executive branch officials and agency leaders, from the Secretary of Education to the Food and Drug Administration Commissioner, as well as lawmakers such as Senators Amy Klobuchar and Mark Warner, and Representative Don Beyer. Prior to the event, Justin Hendrix spoke to Dr. Prabhakar about OSTP's priorities.
-
What are the risks to democracy as AI is incorporated more and more into the systems and platforms we use to find and share information and engage in communication? In this episode, Justin Hendrix speaks with Elise Silva, a postdoctoral associate at the University of Pittsburgh Cyber Institute for Law, Policy, and Security, and John Wihbey, an associate professor at Northeastern University in the College of Arts, Media, and Design. Silva is the author of a recent piece in Tech Policy Press titled "AI-Powered Search and the Rise of Google's 'Concierge Wikipedia.'" Wihbey is the author of a paper published last month titled "AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?"
-
What role did technology play in India's elections, and what impact will the outcome have on tech policy in the country? Joining Justin Hendrix are three experts: Amber Sinha and Vandinika Shukla, both fellows at Tech Policy Press, and Prateek Waghre, the executive director at the Internet Freedom Foundation. Plus, Tech Policy Press program manager Prithvi Iyer sums up the election result.
-
The guests in this episode are authors of a new study titled Political Machines: Understanding the Role of AI in the US 2024 Elections and Beyond. The study is based on interviews with a variety of individuals who are currently grappling with how generative AI tools and systems will change the way they work.
In a series of field interviews, the authors spoke with three vendors of political generative AI tools, a political candidate, a legal expert, a technology expert, an extremism expert, a digital organizer, a trust and safety industry professional, four Republican campaign consultants, and eight Democratic campaign consultants. Joining Justin Hendrix to discuss the results are:
Dean Jackson, the principal at Public Circle LLC and a reporting fellow with Tech Policy Press;
Zelly Martin, a PhD candidate at the University of Texas at Austin and a senior research fellow at the Propaganda Research Lab at the Center for Media Engagement; and
Inga Trauthig, head of research at the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.
-
This episode focuses on the role of shareholder activism in pursuing transparency and accountability from tech firms. In a week when board resolutions are up for a vote at Meta and Alphabet related to each company's development and deployment of artificial intelligence, Justin Hendrix spoke to five individuals working at the intersection of sustainable investing and tech accountability:
Michael Connor, Executive Director of Open MIC
Jessica Dheere, Advocacy Director at Open MIC
Natasha Lamb, Chief Investment Officer at Arjuna Capital
Jonas Kron, Chief Advocacy Officer at Trillium Asset Management
Christina O'Connell, Senior Manager for Shareholder Engagement and Investments at Ekō
-
As we documented in Tech Policy Press, when the US Senate AI working group released its roadmap on policy on May 17th, many outside organizations were underwhelmed at best, and some were fiercely critical of the closed door process that produced it. In the days after the report was announced, a group of nonprofit and academic organizations put out what they call a "shadow report" to the US Senate AI policy roadmap. The shadow report is intended as a complement or counterpoint to the Senate working group's product. It collects a bibliography of research and proposals from civil society and academia and addresses several issues the Senators largely passed over. To learn more, Justin Hendrix spoke to some of the report's authors, including:
Sarah West, co-executive director of the AI Now Institute
Nasser Eledroos, policy lead on technology at Color of Change
Paramita Shah, executive director of Just Futures Law
Cynthia Conti-Cook, director of research and policy at the Surveillance Resistance Lab
-
A conversation with Marwa Fatafta, who serves as policy and advocacy director for the nonprofit Access Now, which has worked on digital civil rights, connectivity, and censorship issues for the past 15 years. Along with other groups, Access Now has engaged Meta in recent months over what it says is the "systematic censorship of Palestinian voices" amidst the Israel-Hamas war in Gaza.
-
One tech journalist whose byline always draws me in is Chris Stokel-Walker. He writes for multiple publications including The New York Times, The Washington Post, The Economist, Wired, Fast Company, and New Scientist. Now, he’s got a new book out: How AI Ate the World: A Brief History of Artificial Intelligence - And Its Long Future. Last week, I had the chance to speak with him about it, and about how he covers technology and tech policy generally.
-
On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY) released a report titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." Just hours after the report was released, Justin Hendrix spoke to two civil rights advocates who are working on AI policy about the good and the bad of the Senate report, and more broadly about how to set AI policy priorities that ensure a brighter future for all:
Alejandra Montoya-Boyer, Senior Director at the Center for Civil Rights & Tech at the Leadership Conference on Civil and Human Rights
Claudia Ruiz, Senior Civil Rights Policy Analyst at UnidosUS
-
Last October, Dr. Jasmine McNealy, an associate professor at the University of Florida, a Senior Fellow in Tech Policy with the Mozilla Foundation, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, wrote in Tech Policy Press about the need for a policy agenda for "Rural AI." "Rural communities matter," she wrote. "And that means they should matter when it comes to the development of policies on artificial intelligence."
The piece was a preview of sorts to a two-day workshop Dr. McNealy organized at the University of Florida in Gainesville that touched on topics ranging from connectivity to bias and discrimination in algorithmic systems to the connection between AI and natural resources. Justin Hendrix attended the workshop, and recently he checked in with Dr. McNealy and three of the other attendees he met there:
Michaela Henley, program director and curriculum writer at Black Tech Futures and a senior research fellow representing Black Tech Futures at the Siegel Family Endowment;
Dr. Dominique Harrison, founding principal of Equity Innovation Ventures; and
Dr. Theodora Dryer, who is director of the Water Justice and Technology Studio, founder of the Critical Carbon Computing Collective, and teaches on technology and environmental justice at New York University.
-
The Hippocratic oath, named for a Greek physician who lived some 2,500 years ago and whom some call the father of modern medicine, is one of the earliest expressions of professional ethics. It is a symbol of a profession that has built in a number of protections for patients' interests, with ethical frameworks and requirements that seek to ensure those protections are maintained.
Today’s guest is Chinmayi Sharma, an Associate Professor at Fordham Law School. Sharma thinks there should be a similar professional ethics framework in place for the developers of AI systems, and she’s written a substantial paper on the 'why' and the 'how' of her proposal.
-
One topic we come back to again and again on this podcast is disinformation. In many episodes, we’ve discussed various phenomena related to this ambiguous term, and we’ve tried to use science to guide the way.
But the guests in this episode suggest that in the broader political discourse, the term is more than overused. Often, they say, lawmakers and other elites who employ it cross the line into hyping the effects of disinformation, which they argue only helps propagandists and diminishes trust in society. To learn more, Justin Hendrix spoke with Gavin Wilde, Thomas Rid, and Olga Belogolova, who with Lee Foster are the authors of an essay in Foreign Affairs titled "Don't Hype the Disinformation Threat: Downplaying the Risk Helps Foreign Propagandists, But So Does Exaggerating It."
-
In an introduction to a special issue of the journal First Monday on topics related to AI and power, Jenna Burrell and Jacob Metcalf argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science." The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more.
To learn more about some of the ideas in the special issue, Justin Hendrix spoke to Burrell, Metcalf, and two of the other authors of papers included in it: Shazeda Ahmed and Émile P. Torres.
-
Last week President Joe Biden signed into law a measure that would force the Chinese firm ByteDance to divest its ownership of TikTok, or risk the app being banned in the US. The measure also included restrictions on the sale of personal data to foreign entities. What are the implications of these moves for US and global tech policy going forward? What will the inevitable legal challenges look like?
To learn more, Justin Hendrix spoke with Anupam Chander, law professor at Georgetown and a visiting scholar at the Institute for Rebooting Social Media at Harvard University; Rose Jackson, the director of the Democracy and Tech Initiative at the Atlantic Council; and Justin Sherman, CEO of Global Cyber Strategies and adjunct professor at Duke University.