Episodes

  • Marius Dragomir, the Founding Director of the Media and Journalism Research Centre, shares with me the most surprising findings from his recent research on propaganda narratives, as well as revealing discoveries about the ownership and finances of the 100 AI tools most used by journalists.

    Remember that 67% of these 100 tools lack critical data on ownership and finances. Check the tools you are using.

    How can the media industry become less dependent on Big Tech? One of the possible solutions lies with audiences and the private sector. We are discussing the case of the Czech Republic – what makes it so special?

    Main things we've discussed in this episode:

    One of the Media and Journalism Research Centre's recent research articles was titled "Ownership and Financial Insights of AI Used in Journalism", and we are discussing its findings. They looked into the 100 AI tools most used by journalists and found that only 33% of AI tool companies demonstrate sufficient transparency, with 67% lacking critical data on ownership, finances and other basic information.
    - There is one AI-powered fact-checking tool mentioned in the research article called Clarity, owned by former Israeli military personnel. The owners promoted it as the best fact-checking tool for the Israeli-Palestinian war. How might this opacity affect journalistic independence and credibility in the long term?
    - The impact technology is having on fact-checking in general
    - BBC's research on AI assistants' news accuracy. Their tests of ChatGPT, Copilot, Gemini, and Perplexity exposed major flaws: 51% of AI responses had significant issues, 19% introduced errors when citing the BBC, and 13% misquoted or made up BBC content entirely.
    - The Centre's research on disinformation and propaganda narratives in different parts of the world. Marius shares trends and examples that surprised him in terms of how these narratives are being shaped and distributed.
    - Big Tech and media: a healthy or toxic relationship?
    - How Big Tech turned from a liberating tool to an oppressive one, from a solution to a problem for many journalists
    - Marius shares his observations on how Big Tech companies work closely with governments, prioritising the government's information on search engines and social media. We are talking about Europe here, not only authoritarian regimes.
    - What are the solutions? How can the media industry become less dependent on Big Tech? One of the possible solutions lies with audiences and the private sector. The case of the Czech Republic.
    - On regulations and ethics: what would be Marius's top three AI-related regulations that he would issue immediately?
    - When regulations work and when they don't
    - What media organisations need to change to survive and stay relevant
    - On public service journalism. What is the biggest challenge that public service journalism in Europe faces today? And what are the possible solutions?
    - Talking about the evolving media ecosystems in Europe, Marius came up with four distinct models: the Corporate model, the Public Interest model, the Captured model (high government control) and the Atomised model (journalism for sale, driven by private interests).
    - How relevant are all of these models in a world where generative AI is becoming a new storytelling medium? In that world, where every viewer is an audience of one, what will the perception of facts as such be? And how might that type of storytelling medium change the perception of non-fiction reporting?
    - Given the trend towards hyper-personalised storytelling through AI, how might this affect the traditional role of public service media in creating shared national conversations and cultural touchpoints? Are we risking further social fragmentation?
    - What will replace the increasingly commercialised and disengaging social media?
  • Do you believe you care about facts more than people in India? For a split second, did you notice any bias in your quick mental response?

    Whom do you trust more when it comes to news: a well-established media brand or your closest friend who runs a popular current affairs YouTube channel? Who do you think is more likely to spread misinformation?

    As Meta and other tech giants show their allegiance to Trump's administration and become increasingly partisan, will financial connections with these platforms become a reputational liability for the media industry?

    We discuss these and many other technology, storytelling, and misinformation-related issues in this episode of AI-FLUENT with Rakesh Dubbudu, founder and CEO of India's fact-checking website, Factly.

    Main Topics We Discussed in This Episode:

    - How much do people in India care about facts?
    - The significant silent population: they need fact-checkers, not people on the extremes
    - The relationship between fact-checking websites in India: friends, competitors or enemies?
    - DeepSeek: what signals does this AI tool send to the media industry?
    - Censorship in Indian media
    - The most frequent questions Factly's journalists ask themselves about generative AI
    - Why people with generalist skills who can connect the dots will shine in the world of AI
    - The biggest misconception about AI in the storytelling industry
    - The biggest challenge for Factly in general and for Rakesh as CEO in particular
    - Factly's relationship with Meta as a fact-checking partner after Mark Zuckerberg announced the end of fact-checking on Facebook in the US, with likely further expansion
    - As Meta and other tech giants show their allegiance to Trump's administration and become increasingly partisan, will financial connections with them become a reputational liability for the media industry?
    - How Factly is working to decrease its reliance on external tech platforms
    - How Factly ensures unbiased data input when building their own AI tools
    - How they maintain and measure public trust in their fact-checking operations amid declining trust in traditional media globally, particularly when handling politically sensitive topics in India
    - The specific patterns and trends of misinformation in India
    - How misinformation can be addressed holistically
    - Factly's revenue model and their use of AI tools to reimagine their business model
    - The book Rakesh is currently reading to better understand generative AI

  • Why do we choose to read or watch something - are we manipulated by algorithms? Do we have any cognitive independence? How and where will we receive news in the near future? In a world where the majority of content will be created by AI, how will we know whom to trust?

    And what does neuroscience tell us about all of that? In a nutshell, these are the questions we tried to answer in this episode with Mariano Blejman, an Argentinian media entrepreneur and the founder of the Media Party festival in Buenos Aires.

    Main Topics We Discussed in This Episode:

    - Synthetic democracies and the role of AI in creating them
    - What neuroscience tells us about why we choose certain content
    - When truth becomes as scarce as water, it will start creating value
    - The broken business model of the media: opportunities to fix it or rebuild from scratch
    - In a world where the majority of content will be created by AI, how will we know whom to trust?
    - Haven't heard about a BBC car? Well, it doesn't exist yet, but it might be one of the ways people will receive their news in the future
    - Narcissism is one of the media industry's problems
    - How to use neuroscience to understand news consumption and audience behaviour in general. Mariano refers to Annemarie Dooling's research
    - Mariano explains what his startup is doing to help newsrooms understand their audiences' behaviour. He is fascinated with neuroscience and keeps returning to it. I am becoming increasingly fascinated with it too :-)
    - Mariano's predictions on the future of news consumption: content will flow to you without being asked. Your behaviour will activate a prompt, rather than you asking for a prompt - and other predictions
    - How will the relationship between media and tech industries develop: marriage, divorce, or a never-ending affair?
    - Examples from Latin America of how journalists use AI tools to avoid censorship
    - How journalists can have more agency to influence the way generative AI technology is developing
    - Mad cow world
    - And about neuroscience again
  • Technology has value but doesn't have values, Madhav Chinnappa reminds us. It's up to us humans, who have values, to define how the technology will be used and whether it will bring more good than evil.

    The majority of us are now in a so-called efficiency phase of using AI tools, thinking mainly about how to optimise things. However, those who will jump to a creativity phase more quickly will have an advantage. Ask yourself which phase you are in now. And if you are still only in the first one, you really should be thinking about how to move on to the second, and quickly.

    Main Topics We Discussed in This Episode:

    - How we use AI without realising we are doing it
    - From an electric bike phase of AI to a jet airplane phase - and how the jet airplane phase will transform storytelling
    - Free tools versus paid tools
    - Content creation versus business model
    - Licensing and how platforms can compensate content creators: what is fair?
    - Is Human Native a sort of eBay where rights holders and AI developers meet?
    - Extractive versus sustaining technology
    - Interest in non-English languages and how you as a non-English speaker/data owner can use this as an advantage
    - Common patterns of how newsrooms use or misuse generative AI
    - AI won't replace you - it will augment you
    - The misconception of traffic and how some newsrooms will pay a price for it
    - What aspects of using generative AI we are not thinking about
    - Ethical considerations and risks
    - Efficiency play versus creativity and audience-focused play
    - How AI technology changes your audience's behaviour and what it means for the journalism industry
    - The audience's view on AI will not be defined by news; it will be defined by other parts of their life
    - Should we label AI-generated or human-created content, or both?
    - Trust and the interdependency of different industries in how they use generative AI
    - Do generative AI tools reduce the amount of intention and meaning in the world?
    - Do we as journalists have agency to influence how AI technology is developing, and if you feel you don't, what can you do to have more?
  • As a media organisation, they stand on three pillars: journalism, technology and the wisdom of crowds. From the very beginning, they have worked at the intersection of journalism, technology and community engagement.

    It was co-founded by a woman who was awarded the Nobel Peace Prize for safeguarding freedom of expression and for her efforts to address corruption in her country.

    They were the first media organisation in their country to publish guidelines on AI use in 2023.

    They were among ten digital media organisations selected by OpenAI to participate in "innovative experiments around deliberative technologies".

    By now, I am sure you have guessed that in a new episode of the AI-FLUENT podcast, I am talking to Gemma Mendoza. Along with Maria Ressa and other journalists, she was one of the co-founders of Rappler, the most prominent Filipino news website. Now she leads digital innovation and disinformation research at Rappler.

    Main Topics We Discussed In This Episode:

    - How to balance the speed and efficiency that AI offers with Rappler's commitment to slow journalism and deep investigative reporting
    - AI's impact on the relationship between journalists and their sources
    - The biggest misconception about generative AI in journalism and the most surprising aspects of its development
    - Rappler, one of the most famous Filipino news websites in the world, stands on three pillars: journalism, technology and the wisdom of crowds. How does their newsroom rely on this wisdom?
    - How they utilise AI at the intersection of journalism, technology and community engagement
    - At Rappler, they create their own AI tools in-house. What determines whether they build their own tool rather than use existing market solutions? How do they address ethical implications when using their own data, including ensuring it isn't biased?
    - The role and input of Gen Z journalists in Rappler's newsroom
    - Required changes in journalism schools' curricula to better prepare future journalists for the new reality
    - Gemma, who leads Rappler's efforts to address disinformation in digital media, shares recent examples of how AI tools have made their work more efficient
    - Patterns and peculiarities in how people use deepfake technologies
    - In 2023, Rappler became the first Filipino newsroom to publish guidelines on AI use. Gemma's recommendations for other newsrooms worldwide on approaching AI: what are the crucial aspects not to overlook?
    - The future of journalism in three words
    - Gemma's case for journalism: why should a 15-year-old become a journalist in this AI-driven world?
  • Are you a small newsroom or a one-person content creator, and are you, like this episode's guest, happy to fail fast and fail often? You might not have a designated tech team, yet you want to use AI-powered tools to speed up your work and solve certain problems.

    Are you tempted to use AI to produce even more content? Pause here and reconsider: might there be better ways to use generative AI to advance your work and develop a deeper understanding of those whom you serve?

    We are discussing all of this and more with Tshepo Tshabalala from LSE's JournalismAI global initiative, which helps small newsrooms devise ideas and solutions for using AI wisely and responsibly. I started this conversation by asking Tshepo what kinds of projects they work on with different newsrooms around the world.

    Main Topics We Discussed In This Episode:

    - Why is this network beneficial for small newsrooms applying for their fellowships?
    - Quick solutions versus long-term AI strategy
    - Examples of AI-powered solutions from different newsrooms
    - Common misperceptions of AI-powered tools in the newsrooms JournalismAI works with
    - Why does your audience come (or not come) to you? Do you really know why?
    - Ethical implications of using AI in a newsroom: guidelines or no guidelines?
    - Creativity versus meaning: does generative AI reduce us to recyclers of meaning rather than creators of it?
    - An example of a "wow project" that Tshepo has come across recently
    - Blitz questions
  • If journalism is valuable, as many of us think, why don't people pay for it? That's the question Alan returned to several times during our conversation. And he actually gave an answer to it, an existential one.

    What's the purpose of your newsroom? Why do you exist as a media organisation? What audience needs do you serve? If you are able to answer these questions honestly without any corporate fluff, you come closer to answering the money question: why should people pay for your content?

    'We have gone past peak content', Alan says. Nobody wakes up in the morning wanting more content - people want to get stuff done, they want a sense of community, they want to learn something new, they want to have fun, and so on. The problem, as Alan points out, is that many newsrooms still operate as if people wake up wanting content for the sake of content. Wake up!

    Main Topics We Discussed in This Episode:

    - How AI Enables Media Solopreneurs
    - Supply versus Demand Side Thinking: How Does It Work in Journalism?
    - How to Use AI to Get a Deeper Understanding of What Audiences Want
    - No One Wakes Up in the Morning Wanting More Content - Why Do We Create So Much of It Then?
    - What Is a Better Use of AI-powered Tools, Apart from Creating Content?
    - How to Stand Out in a Crowded Content Market Wherever You Are
    - We All Pay for Stuff and Services - Why Not for News?
    - "What People Buy Is Very Different from What People Get, or What People Want"
    - Creativity versus Recycling Old Narratives
    - AI and the Nature of Originality
    - How to Measure AI's Impact on Revenue Generation, Audience Engagement and Credibility
    - Where Does an Obsession with Optimising Everything Lead Us?
    - AI as a Co-thinker and Co-founder of Your Potential Start-up
    - A Lifehack from Alan on Using AI in the Context of Storytelling
    - On the Future of Journalism

    Come back and listen to us on January 9, 2025 for an all-new episode.

  • You can investigate serious corruption cases, but it doesn't mean that the way you talk about it to your audience should be super serious and boring. What if you come up with an AI-powered avatar that uses facts checked by human journalists, yet speaks to the audience in a sarcastic voice as an assistant rather than a know-it-all expert?

    Claudia Báez, a digital innovator and co-founder of the Colombian investigative website Cuestión Pública, thinks that investigative journalism shouldn't be delivered in a way that appeals only to men in suits over 40.

    She created an AI-powered tool - Odín - to help her team of journalists stay relevant to the current news agenda, reach different audiences, and, of course, save time and money.

    I talked to Claudia about this and much more, so maybe you, as a storyteller anywhere in the world, can be inspired and apply some of these insights in your everyday professional life.

    Main topics we discussed in this episode:

    - Why Odín, and how does this AI-powered tool help the Cuestión Pública team stay relevant to audiences in Colombia?
    - What's the most important question regarding AI we should ask ourselves first?
    - What are the challenges investigative journalists face in Colombia, and how does technology help them solve some of those problems?
    - Investigative journalism doesn't have to be delivered in a serious and often boring way. Generative AI gives us many opportunities to experiment with formats aimed at different audiences.
    - On trust and transparency, and why journalists need to collaborate
    - Claudia's favourite AI tools, with examples from Spain, Argentina and Venezuela
    - On the future of journalism
  • In this episode, I am talking to Chris Roper, Deputy CEO of Code for Africa, and Amanda Strydom, Senior Programme Manager for CivicSignal, a programme within Code for Africa that maps and offers insights into media ecosystems in Africa using research and machine learning tools. We talked about how the newsrooms they work with apply AI in their professional lives. What are the AI-related issues that African newsrooms are struggling with, and what kinds of solutions are they coming up with?

    Code for Africa is also well known for its work in tackling mis- and disinformation. Chris and Amanda talk about the different ways their organisation is helping African journalists fight disinformation in a foundational way, and the role AI plays in it.

    Main topics we’ve discussed in this episode:

    - The role of AI in helping journalists and citizens tackle mis/disinformation
    - Generational differences in perceiving misinformation
    - Ethical policies on using AI in a newsroom: how to approach them
    - Who owns the data you share with AI tools?
    - The environmental effect of AI
    - Life hacks from Chris and Amanda: how to use AI in a storytelling context
    - The future of AI and how it will shape the future of journalism
    - Does AI create more inequality in the Global South?
    - What help journalists can get from Code for Africa and how they can collaborate with the organisation
    - AI-related regulations and laws: the ideal and real scenarios
    - The importance of AI tools in investigative journalism
    - Exciting AI projects Chris and Amanda are working on now

  • AI-Fluent is my new podcast where I talk with storytellers from around the world about journalism and storytelling in all its shapes and forms, its marriage with AI and other technology, and innovative thinking.

    Most of my guests are from the Global South, so it's a rare opportunity to listen to people with different perspectives, different challenges, and the solutions they have to offer. New episode every Friday.