Episodes
-
In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI’s impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly—and sometimes loudly—reshaping our democratic systems.
- What happens when personalized content becomes political propaganda?
- Is YouTube the new social media without us realizing it?
- Can regulations keep up with AI’s accelerating influence?
- And are we already too far gone—or is there still time to rethink, regulate, and reclaim our democratic integrity?
This episode dives into:
- The unintended consequences of algorithmic curation
- The collapse of objective reality in the digital age
- AI-driven misinformation in elections
- The tension between regulation and free speech
- Global responses—from Finland’s education system to the EU AI Act
- What society can (and should) do to fight back
Whether you’re in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won’t want to miss.
Jeffery’s free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/
Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
-
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy—what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world.
Topics covered:
The evolution of AI education and BABL AI’s new subscription model for training & certifications.
Why AI auditing skills are becoming essential for professionals across industries.
How AI governance roles will shape the future of business leadership.
The impact of AI on workforce transition and how individuals can future-proof their careers.
The EU AI Act’s new AI literacy requirements—what they mean for organizations.
Want to level up your AI knowledge? Check out BABL AI’s courses & certifications!
Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership
Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
-
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown joins us live from RightsCon 2025 in Taipei to break down the latest conversations at the intersection of AI, human rights, and global policy. He’s joined by BABL AI COO Jeffery Recker and CSO Bryan Ilg, as they dive into the big takeaways from the conference and what it means for the future of AI governance.
What’s in this episode?
- RightsCon Recap – How AI has taken over the human rights agenda
- AI Auditing & Accountability – Why organizations need to prove AI compliance
- Investors Are Paying Attention – Why AI risk management is becoming a priority
- The Role of Education – Why AI literacy is the key to ethical and responsible AI
- The International Association of Algorithmic Auditors – A new professional field is emerging
If you're passionate about AI, governance, and accountability, this episode is packed with insights you don’t want to miss.
-
Join BABL AI CEO Dr. Shea Brown on Lunchtime BABLing as he sits down with UX Consultant Ezra Schwartz for an in-depth conversation about the evolving world of user experience—and how it intersects with responsible AI.
In this episode, you'll discover:
• Ezra’s Journey: From being a student in our AI & Algorithm Auditor Certification Program to becoming a seasoned UX consultant specializing in age tech.
• Beyond UI Design: Ezra breaks down the true essence of UX, explaining how it’s not just about pretty interfaces, but about creating intuitive, accessible, and human-centered experiences that build trust and drive user satisfaction.
• The Role of UX in AI: Learn how thoughtful UX design is essential in managing AI risks, facilitating cross-department collaboration, and ensuring that digital products truly serve their users.
• Age Tech Insights: Explore how innovative solutions, from fall detection systems to digital caregiving tools, are reshaping life for our aging population—and the importance of balancing technology with privacy and ethical considerations.
If you’re passionate about design, responsible AI, or just curious about the human side of technology, this episode is a must-listen.
Connect with Ezra Schwartz:
Website: https://www.artandtech.com
LinkedIn: https://www.linkedin.com/in/ezraschwartz
Responsible AgeTech Conference Ezra is organizing: https://responsible-agetech.org
-
People in Germany can join Quantpi's "RAI in Action" event series kicking off in March:
https://www.quantpi.com/resources/events
U.S.-based folks can join Quantpi's GTC session on March 20th, "A scalable approach toward trustworthy AI":
https://www.nvidia.com/gtc/session-catalog/?ncid=so-link-241456&linkId=100000328230011&tab.catalogallsessionstab=16566177511100015Kus&search=antoine#/session/1726160038299001jn0f
Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
Sign up for our courses today: https://babl.ai/courses/
Follow us for more: https://linktr.ee/babl.ai
Lunchtime BABLing: An Interview with Mahesh Chandra Mukkamala from Quantpi
In this episode of Lunchtime BABLing, host Dr. Shea Brown, CEO of BABL AI, sits down with Mahesh Chandra Mukkamala, a data scientist from Quantpi, to discuss the complexities of black box AI testing, AI risk assessment, and compliance in the age of evolving AI regulations.
Topics Covered:
- What is black box AI testing, and why is it crucial?
- How Quantpi ensures model robustness and fairness across different AI systems
- The role of AI risk assessment in EU AI Act compliance and enterprise AI governance
- Challenges businesses face in AI model evaluation and best practices for testing
- Career insights for aspiring AI governance professionals
With increasing regulatory pressure from laws like the EU AI Act, companies need to test their AI models rigorously. Whether you’re an AI professional, compliance officer, or just curious about AI governance, this conversation is packed with valuable insights on ensuring AI systems are trustworthy, fair, and reliable.
Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI governance insights from BABL AI!
Listen to the podcast on all major podcast streaming platforms.
Connect with Mahesh on LinkedIn: https://www.linkedin.com/in/maheshchandra/
Follow Quantpi for more AI insights: https://www.quantpi.com
-
Join host Dr. Shea Brown (CEO of BABL AI) along with guest speakers COO Jeffery Recker and CSO Bryan Ilg for an in-depth discussion on the rapidly evolving world of AI regulation. In this episode, our panel unpacks:
The EU AI Act in Action: Learn about the new obligations now in force under the EU AI Act—including the crucial requirements of AI literacy (Article 4) and the prohibition of certain unacceptable-risk AI practices (Article 5).
Compliance Timelines & What’s Next: Get the lowdown on the phased rollout, with upcoming standards and enforcement deadlines on the horizon, and discover practical steps companies should take to prepare.
North American Regulatory Landscape: Explore the contrasting regulatory approaches in North America, from the shifting federal stance in the US to state- and city-specific laws (like Colorado’s AI Act and New York City’s Local Law 144), and why this uncertainty matters for businesses.
Risk, Ethics & the Future of AI in Business: Delve into the importance of risk management, AI literacy training, and human-centered design. Our guests share insights on why responsible AI isn’t just about compliance—it’s also a competitive advantage in today’s fast-paced market.
Whether you’re a business leader, technologist, or policy enthusiast, this episode offers valuable perspectives on how organizations can navigate the complex, global landscape of AI governance while protecting their customers and staying ahead of regulatory demands.
-
Lunchtime BABLing: Interview with Abhi Sanka
Join BABL AI CEO Dr. Shea Brown as he chats with Abhi Sanka, a dynamic leader in responsible AI and a graduate of BABL AI's inaugural Algorithm Auditor Certificate Program. In this episode, Abhi reflects on his unique journey—from studying the ethics of the Human Genome Project at Duke University to shaping science and technology policy for the U.S. government, to now helping drive innovation at Microsoft.
Explore Abhi's insights on the parallels between the Human Genome Project and the current AI revolution, the challenges of governing agentic AI systems, and the importance of building trust through responsible design. They also discuss the evolving landscape of AI assurance and the critical need for collaboration between industry, policymakers, and civil society.
Highlights:
Abhi’s academic and professional path to responsible AI.
The challenges of auditing agentic AI and aligning governance frameworks.
The importance of community and collaboration in advancing responsible AI.
Abhi’s goals for 2025 and his passion for staying connected to the wider AI ethics community.
Don’t miss this thought-provoking conversation packed with wisdom for anyone passionate about AI governance, policy, and innovation!
Abhi's LinkedIn: https://www.linkedin.com/in/abhisanka/
-
In this engaging episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with special guest Soribel Feliz, a former US diplomat turned AI governance expert. Soribel shares her fascinating career journey from the State Department to big tech roles at Meta and Microsoft, and now as an AI governance and compliance specialist at DHS.
From her early experiences moderating content algorithms at Meta to advising on AI policy in the US Senate, Soribel discusses the evolution of AI, its ethical challenges, and the crucial importance of data privacy and workforce impacts. She also opens up about transitioning into the tech world, overcoming technical learning curves, and her dedication to helping others navigate career uncertainties in the AI-driven future.
Key Highlights:
Soribel's career leap from diplomacy to tech and AI policy.
The ethical dilemmas and societal impacts of AI she’s witnessed firsthand.
Her thoughts on AI literacy gaps and the need for growth mindset education.
Practical advice for those transitioning into AI or confronting job uncertainties.
This episode is packed with wisdom, optimism, and actionable insights for young professionals, career changers, and anyone passionate about responsible AI.
Follow Soribel Feliz for more on AI governance, career guidance, and navigating uncertainty in a rapidly evolving world. Links to her website and newsletter are in the description below.
LinkedIn: https://www.linkedin.com/in/soribel-f-b5242b14/
-
Lunchtime BABLing: 2024 - An AI Year in Review
Join Shea Brown (CEO, BABL AI), Jeffery Recker (COO, BABL AI), and Bryan Ilg (CSO, BABL AI) as they reflect on an extraordinary year in AI!
In this final episode of the year, the trio dives into:
- The rapid growth of Responsible AI and algorithmic auditing in 2024.
- How large language models are redefining audits and operational workflows.
- The global wave of AI regulations, including the EU AI Act, Colorado AI Act, and emerging laws worldwide.
- The rise of AI literacy and the "race for competency" in businesses and society.
- Exciting (and risky!) trends like AI agents and their potential for transformation in 2025.
Jeffery also shares an exciting update about his free online course, Introduction to Responsible AI, available until January 13th, 2025. Don’t miss this opportunity to earn a certification badge and join a live Q&A session!
Looking Ahead to 2025
What’s next for AI governance, standards like ISO 42001, and the evolving role of education in shaping the future of AI? The team shares predictions, insights, and hopes for the year ahead.
Key Takeaways:
AI is maturing rapidly, with businesses adopting governance frameworks and grappling with new regulations.
Education and competency-building are essential to navigating the changing AI landscape.
The global regulatory response is reshaping how AI is developed, deployed, and audited.
Link to Raymond Sun's Techie Ray Global AI Regulation Tracker: https://www.techieray.com/GlobalAIRegulationTracker
Don’t miss this thought-provoking recap of 2024 and the exciting roadmap for 2025!
-
In this episode, BABL AI CEO Dr. Shea Brown interviews Aleksandr Tiulkanov, an expert in AI compliance and digital policy. Aleksandr shares his fascinating journey from being a commercial contracts lawyer to becoming a leader in AI policy at Deloitte and the Council of Europe.
What’s in this episode?
The transition from legal tech to AI compliance.
Key differences between the Council of Europe’s Framework Convention on AI and the EU AI Act.
How the EU AI Act fits into Europe’s product safety legislation.
The challenges and confusion around conformity assessments and AI literacy requirements.
Insights into Aleksandr’s courses designed for governance, risk, and compliance professionals.
Aleksandr also dives into practical advice for preparing for the EU AI Act, even in the absence of finalized standards, and the role of frameworks like ISO 42001.
Learn more about Aleksandr’s courses: https://aia.tiulkanov.info
Follow Aleksandr on LinkedIn: https://www.linkedin.com/in/tyulkanov/
-
In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker and Bryan Ilg to tackle one of the most pressing questions of our time: How will AI impact the future of work?
From fears of job displacement to the rise of entirely new roles, the trio explores:
- How AI will reshape industries and automate parts of our jobs.
- The importance of upskilling to stay competitive in an AI-driven world.
- Emerging career paths in responsible AI, compliance, and risk management.
- The delicate balance between technological disruption and human creativity.
Whether you're a seasoned professional, a student planning your career, or just curious about the future, this episode has something for you.
Don’t miss this insightful conversation about navigating the rapidly changing job market and preparing for a future where AI is a part of nearly every role.
Listen on your favorite podcast platform or watch the full discussion here. Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI trends and insights!
-
Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations?
In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation.
Key topics include:
Federal deregulation and the push for state-level AI governance.
The potential repeal of Biden's executive order on AI.
Implications for organizations navigating a fragmented compliance framework.
The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies.
How deregulation might affect innovation, litigation, and risk management in AI development.
This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you're an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation.
-
Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it's crucial for modern enterprises, and how it works in practice.
Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems.
This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you're a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management.
Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI!
-
Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
Courses Mentioned:
1. AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce
2. EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems
3. EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
4. BABL AI Course Catalog: https://babl.ai/courses/
Follow us for more: https://linktr.ee/babl.ai
In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the "AI Literacy Requirements of the EU AI Act," focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what "AI literacy" means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements.
Throughout the episode, Dr. Brown covers:
AI literacy obligations for providers and deployers under the EU AI Act.
The importance of AI literacy in ensuring compliance.
An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.
-
In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we're joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into an interesting question in the AI world today: Will AI really replace our jobs?
Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is?
They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies. Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond.
If you enjoy this episode, don't forget to like and subscribe for more discussions on AI, ethics, and technology!
-
Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations.
In this episode, Shea and Jeffery focus on a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand.
Topics discussed:
Deloitte’s Medicaid eligibility system in Texas
The role of the FTC and the NIST AI Risk Management Framework
How AI governance can safeguard against unintentional harm
Why proactive risk management is key, even for non-AI systems
What companies can learn from this case to improve compliance and oversight
Tune in now and stay ahead of the curve!
If you found this episode helpful, please like and subscribe to stay updated on future episodes.
-
In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there.
In this episode, titled "Where to Get Started with the EU AI Act: Part Two," Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including:
Documentation & Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records.
Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken.
Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act on businesses operating outside the EU.
Enforcement & Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance.
Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy.
Whether you're a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future.
Key Topics Discussed:
What documentation and transparency measures are required to demonstrate compliance?
How can businesses effectively maintain and update these records?
How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation?
What are the biggest challenges you foresee in complying with the EU AI Act?
What resources or support mechanisms are being provided to businesses to help them comply with the new regulations?
How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector?
What are the penalties for non-compliance, and how will they be determined and applied?
What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy?
What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve?
How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary?
If you found this episode helpful, please like and subscribe to stay updated on future episodes.
-
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled "Where to Get Started with the EU AI Act: Part One," this episode is designed for organizations navigating the complexities of the new regulations.
With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations.
The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you're just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act.
Don't miss this informative session to ensure your organization is ready for the changes ahead!
Key Topics Discussed:
What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU?
What impact will this have outside the EU?
What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act?
Are there any particular high-risk AI systems that require special attention under the new regulations?
How do you assess and manage the risks associated with AI systems?
What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of?
How are we ensuring that our AI systems comply with GDPR and other relevant data protection regulations?
How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act?
Stay tuned for Part Two, where we continue this discussion with more in-depth analysis and practical tips!
If you found this episode helpful, please like and subscribe to stay updated on future episodes.
#AI #EUAIACT #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI
-
Welcome back to Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and Bryan Ilg delve into the crucial topic of "Building Trust in AI."
Episode Highlights:
Trust Survey Insights: Bryan shares findings from a recent PwC trust survey, highlighting the importance of trust between businesses and their stakeholders, including consumers, employees, and investors.
AI's Role in Trust: Discussion on how AI adoption impacts trust and the bottom line for organizations.
Internal vs. External Trust: Insights into the significance of building both internal (employee) and external (consumer) trust.
Responsible AI: Exploring the need for responsible AI strategies, data privacy, bias and fairness, and the importance of transparency and accountability.
Practical Steps: Tips for businesses on how to bridge the trust gap and effectively communicate their AI governance and responsible practices.
Join us as we explore how businesses can build a trustworthy AI ecosystem, ensuring ethical practices and fostering a strong relationship with all stakeholders.
If you enjoyed this episode, please like, subscribe, and share your thoughts in the comments below!