Episodes

  • In this episode of the Eye on AI podcast, Tyler Xuan Saltsman, CEO of Edgerunner, joins Craig Smith to explore how AI is reshaping military strategy, logistics, and defense technology—pushing the boundaries of what’s possible in modern warfare.

    Tyler shares the vision behind Edgerunner, a company at the cutting edge of generative AI for military applications. From logistics and mission planning to autonomous drones and battlefield intelligence, Edgerunner is building domain-specific AI that enhances decision-making, ensuring national security while keeping humans in control.

    We dive into how AI-powered military agents work, including LoRA (Low-Rank Adaptation), a fine-tuning technique that adapts AI models to think and act like military specialists—whether in logistics, aircraft maintenance, or real-time combat scenarios. Tyler explains how retrieval-augmented generation (RAG) and small language models allow warfighters to access mission-critical intelligence without relying on the internet, bringing real-time AI support directly to the battlefield.
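The offline RAG pattern described above can be sketched in miniature: rank passages from a local knowledge base against the query, then prepend the best matches to the model's prompt. This is a toy illustration, not Edgerunner's implementation; the word-overlap retriever stands in for real embedding search, the documents are invented, and a production system would feed the resulting prompt to a small on-device language model.

```python
# Minimal, fully offline RAG sketch: a word-overlap retriever plus
# prompt assembly. All documents below are hypothetical examples.

def tokenize(text):
    return text.lower().split()

def score(query, doc):
    # Word-overlap count, standing in for vector similarity.
    return len(set(tokenize(query)) & set(tokenize(doc)))

def retrieve(query, docs, k=2):
    # Rank the local knowledge base and return the top-k passages.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    # Prepend retrieved context so the model answers from local data,
    # never from the open internet.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Aircraft maintenance schedule: inspect hydraulics every 50 flight hours.",
    "Logistics note: fuel convoy departs at 0600 from depot alpha.",
    "Mess hall menu for Tuesday.",
]

prompt = build_prompt("When does the fuel convoy depart?", docs)
print(prompt)
```

Because everything runs locally, this kind of pipeline works in disconnected environments; swapping the overlap scorer for an embedding model changes only `score`.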

    Tyler also discusses the future of drone warfare—how AI-driven, vision-enabled drones can neutralize threats autonomously, reducing reliance on human pilots while increasing battlefield efficiency. With autonomous swarms, AI-powered kamikaze drones, and real-time situational awareness, the landscape of modern warfare is evolving fast.

    Beyond combat, we explore AI’s role in security, including advanced weapons detection systems that can safeguard military bases, schools, and public spaces. Tyler highlights the urgent need for transparency in AI, contrasting Edgerunner’s open and auditable AI models with the black-box approaches of major tech companies.

    Discover how AI is transforming military operations, from logistics to combat strategy, and what this means for the future of defense technology.

    Don’t forget to like, subscribe, and hit the notification bell for more deep dives into AI, defense, and cutting-edge technology!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction – AI for the Warfighter

    (01:34) How AI is Transforming Military Logistics

    (04:44) Running AI on the Edge – No Internet Required

    (06:49) AI-Powered Mission Planning & Risk Mitigation

    (14:32) The Future of AI in Drone Warfare

    (22:17) AI’s Role in Strategic Defense & Economic Warfare

    (26:34) The U.S.-China AI Race – Are We Falling Behind?

    (35:17) The Future of AI in Warfare



  • In this episode of the Eye on AI podcast, Matt Price, CEO of Crescendo, joins Craig Smith to discuss how generative AI is reshaping customer service and blending seamlessly with human expertise to create next-level customer experiences.

    Matt shares the story behind Crescendo, a company at the forefront of revolutionizing customer service by integrating advanced AI technology with human-driven solutions. With a focus on outcome-based service delivery and quality assurance, Crescendo is setting a new standard for customer engagement.

    We dive into Crescendo’s innovative approach, including its use of large language models (LLMs) combined with proprietary IP to deliver consistent, high-quality support across 56 languages. Matt explains how Crescendo’s AI tools are designed to handle routine tasks while enabling human agents to focus on complex, empathy-driven interactions—resulting in higher job satisfaction and better customer outcomes.

    Matt highlights how Crescendo is redefining the BPO industry, combining AI and human capabilities to reduce costs while improving the quality of customer interactions. From enhancing agent retention to enabling scalable, multilingual support, Crescendo’s impact is transformative.

    Discover how Matt and his team are designing a future where AI and humans work together to deliver exceptional customer experiences—reimagining what’s possible in the world of customer service.

    Don’t forget to like, subscribe, and hit the notification bell for more insights into AI, technology, and innovation!



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction to Matt Price and Crescendo

    (01:49) The rise of AI in customer service

    (05:34) Using AI and human expertise for better customer experiences

    (07:47) How Gen AI reduces costs and improves engagement

    (09:37) Challenges in customer service design and innovation

    (11:32) Moving from hidden chatbots to front-and-center customer interaction

    (14:08) Training human agents to work seamlessly with AI

    (17:02) Using AI to analyze and improve service interactions

    (19:15) Outcome-based pricing vs traditional headcount models

    (21:53) Improving contact center roles with AI integration

    (25:08) The importance of curating accurate knowledge bases for AI

    (28:05) Crescendo’s acquisition of PartnerHero and its impact

    (30:39) Scaling customer service with AI-human collaboration

    (32:06) Multilingual support: AI in 56 languages

    (33:49) The vast market potential of AI-driven customer service

    (36:28) How Crescendo is reshaping customer service with AI innovation

    (42:42) Building customer profiles for personalized support





  • In this special episode of the Eye on AI podcast, Sepp Hochreiter, the inventor of Long Short-Term Memory (LSTM) networks, joins Craig Smith to discuss the profound impact of LSTMs on artificial intelligence, from language models to real-time robotics.

    Sepp reflects on the early days of LSTM development, sharing insights into his collaboration with Jürgen Schmidhuber and the challenges they faced in gaining recognition for their groundbreaking work.

    He explains how LSTMs became the foundation for technologies used by giants like Amazon, Apple, and Google, and how they paved the way for modern advancements like transformers. Topics include:

    - The origin story of LSTMs and their unique architecture.
    - Why LSTMs were crucial for sequence data like speech and text.
    - The rise of transformers and how they compare to LSTMs.
    - Real-time robotics: using LSTMs to build energy-efficient, autonomous systems.
    - The next big challenges for AI and robotics in the era of generative AI.

    Sepp also shares his optimistic vision for the future of AI, emphasizing the importance of efficient, scalable models and their potential to revolutionize industries from healthcare to autonomous vehicles.

    Don’t miss this deep dive into the history and future of AI, featuring one of its most influential pioneers.

    (00:00) Introduction: Meet Sepp Hochreiter
    (01:10) The Origins of LSTMs
    (02:26) Understanding the Vanishing Gradient Problem
    (05:12) Memory Cells and LSTM Architecture
    (06:35) Early Applications of LSTMs in Technology
    (09:38) How Transformers Differ from LSTMs
    (13:38) Exploring XLSTM for Industrial Applications
    (15:17) AI for Robotics and Real-Time Systems
    (18:55) Expanding LSTM Memory with Hopfield Networks
    (21:18) The Road to XLSTM Development
    (23:17) Industrial Use Cases of XLSTM
    (27:49) AI in Simulation: A New Frontier
    (32:26) The Future of LSTMs and Scalability
    (35:48) Inference Efficiency and Potential Applications
    (39:53) Continuous Learning and Adaptability in AI
    (42:59) Training Robots with XLSTM Technology
    (44:47) NXAI: Advancing AI in Industry

  • In this episode of the Eye on AI podcast, Paras Jain, CEO and Co-founder of Genmo, joins Craig Smith to explore the cutting-edge world of AI-driven video generation, the open-source revolution, and the future of creative storytelling.

    Paras shares the story behind Genmo, a company at the forefront of advancing video generation technologies, and their groundbreaking model, Mochi One. With a focus on motion quality and prompt adherence, Genmo is redefining what's possible in generative AI for video, offering unmatched precision and creative possibilities.

    We delve into the innovative approach behind Mochi One, including its state-of-the-art architecture, which enables fast, high-quality video generation. Paras explains how Genmo’s commitment to open source empowers developers and researchers worldwide, fostering rapid advancements, customization, and the creation of new tools, like video-to-video editing.

    The conversation touches on key themes such as scalability, synthetic data pipelines, and the transformative potential of AI in creating immersive virtual worlds. Paras also explores how Genmo is bridging the gap between cutting-edge AI and practical applications, from TikTok-ready videos to future possibilities like interactive environments and real-time video game-like experiences.

    Discover how Paras and his team are shaping the future of video creation, blending art, science, and open collaboration to push the boundaries of generative AI.

    Don’t forget to like, subscribe, and hit the notification bell for more insightful conversations on AI, technology, and innovation!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction to Paras Jain and Genmo

    (01:45) Video generation with Mochi One

    (04:41) Open-source AI in video generation

    (06:08) Building Mochi One

    (12:03) Simulating complex physics in video models

    (14:35) Reducing latency: Fast video generation at scale

    (20:34) Tackling cost challenges for longer videos

    (23:14) Character consistency in AI-generated videos

    (27:17) Why video models represent intelligence's next frontier

    (30:18) How diffusion models create sharp, realistic videos

    (34:02) Visualizing the denoising process in video generation

    (39:36) Exploring user-generated video creations with Mochi One

    (42:05) Monetizing open-source AI for video generation

    (45:47) Video generation's potential in the metaverse

    (47:19) Collaborating with universities to advance AI

    (48:39) The future of generative AI



  • This episode is sponsored by Oracle.

    Oracle Cloud Infrastructure, or OCI, is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking. So you’re saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI.

    Cut your current cloud bill in HALF if you move to OCI now: https://oracle.com/eyeonai



    In this episode of the Eye on AI podcast, Jamie Lerner, CEO of Quantum, joins Craig Smith to discuss the future of data storage, unstructured data management, and AI’s transformative role in modern workflows.

    Jamie shares his journey leading Quantum, a company revolutionizing the storage and management of unstructured data for industries like healthcare, media, and AI research. With decades of expertise in creating innovative data solutions, Quantum is at the forefront of enabling efficient, secure, and scalable data workflows.

    We dive into Quantum’s cutting-edge technologies, from high-speed flash storage systems like the Myriad file system to cost-effective, long-term archival solutions such as tape systems. Jamie unpacks how Quantum supports AI-powered workflows, enabling seamless data movement, metadata tagging, and policy-driven automation for unstructured data like medical imaging, genomics, and video archives.

    Jamie also explores the critical role of data sovereignty in today’s global landscape, the growing importance of "forever archives," and how Quantum’s tools help organizations balance exponential data growth with flat budgets. He sheds light on innovations like synthetic DNA and compressed storage mediums, providing a glimpse into the future of data storage.

    Don’t forget to like, subscribe, and hit the notification bell for more engaging discussions on AI, technology, and innovation!



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction to Jamie Lerner and Quantum

    (02:21) Quantum’s Focus on Unstructured Data Storage

    (05:19) Structured vs. Unstructured Data: Key Differences

    (07:52) Managing Data Workflows with AI and Automation

    (10:55) Quantum’s Role in Long-Term Data Archives

    (13:32) Data Sovereignty and Security

    (16:18) How Data is Stored and Protected Across Mediums

    (19:54) Metadata in AI and Data Management

    (21:29) Quantum’s Role in Building Forever Archives

    (24:16) Tape Storage: Efficiency and Longevity

    (29:11) Innovations in Data Storage

    (34:39) Competing in the Evolving Data Storage Industry

    (37:56) Innovations in Flash Storage

    (40:55) Balancing Cost and Efficiency in Data Storage

    (44:28) The Future of Data Storage and AI Integration

    (50:07) Quantum’s Vision for the Future



  • This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

    NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to know more.



    In this episode of the Eye on AI podcast, we dive into the transformative world of AI compute infrastructure with Mitesh Agrawal, Head of Cloud/COO at Lambda.

    Mitesh takes us on a journey from Lambda Labs' early days as a style transfer app to its rise as a leader in providing scalable, deep learning infrastructure. Learn how Lambda Labs is reshaping AI compute by delivering cutting-edge GPU solutions and accessible cloud platforms tailored for developers, researchers, and enterprises alike.

    Throughout the episode, Mitesh unpacks Lambda Labs’ unique approach to optimizing AI infrastructure—from reducing costs with transparent pricing to tackling the global GPU shortage through innovative supply chain strategies. He explains how the company supports deep learning workloads, including training and inference, and why their AI cloud is a game-changer for scaling next-gen applications.

    We also explore the broader landscape of AI, touching on the future of AI compute, the role of reasoning and video models, and the potential for localized data centers to meet the growing demand for low-latency solutions. Mitesh shares his vision for a world where AI applications, powered by Lambda Labs, drive innovation across industries.

    Tune in to discover how Lambda Labs is democratizing access to deep learning compute and paving the way for the future of AI infrastructure.

    Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest in AI, deep learning, and transformative tech!



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction and Lambda Labs' Mission

    (01:37) Origins: From DreamScope to AI Compute Infrastructure

    (04:10) Pivoting to Deep Learning Infrastructure

    (06:23) Building Lambda Cloud: An AI-Focused Cloud Platform

    (09:16) Transparent Pricing vs. Hyperscalers

    (12:52) Managing GPU Supply and Demand

    (16:34) Evolution of AI Workloads: Training vs. Inference

    (20:02) Why Lambda Labs Sticks with NVIDIA GPUs

    (24:21) The Future of AI Compute: Localized Data Centers

    (28:30) Global Accessibility and Regulatory Challenges

    (32:13) China’s AI Development and GPU Restrictions

    (39:50) Scaling Lambda Labs: Data Centers and Growth

    (45:22) Advancing AI Models and Video Generation

    (50:24) Optimism for AI's Future

    (53:48) How to Access Lambda Cloud

  • This episode is sponsored by RapidSOS. Close the safety gap and transform your emergency response with RapidSOS.

    Visit https://rapidsos.com/eyeonai/ today to learn how AI-powered safety can protect your people and boost your bottom line.



    In this episode of the Eye on AI podcast, we explore the world of AI inference technology with Rodrigo Liang, co-founder and CEO of SambaNova Systems.

    Rodrigo shares his journey from high-performance chip design to building SambaNova, a company revolutionizing how enterprises leverage AI through scalable, power-efficient solutions. We dive into SambaNova’s groundbreaking achievements, including their record-breaking inference models, the Llama 405B and 70B, which deliver unparalleled speed and accuracy—all on a single rack consuming less than 10 kilowatts of power.

    Throughout the conversation, Rodrigo highlights the seismic shift from AI training to inference, explaining why production AI is now about speed, efficiency, and real-time applications. He details SambaNova’s approach to open-source models, modular deployment, and multi-tenancy, enabling enterprises to scale AI without costly infrastructure overhauls.

    We also discuss the competitive landscape of AI hardware, the challenges of NVIDIA’s dominance, and how SambaNova is paving the way for a new era of AI innovation. Rodrigo explains the critical importance of power efficiency and how SambaNova’s technology is unlocking opportunities for enterprises to deploy private, secure AI systems on-premises and in the cloud.

    Discover how SambaNova is redefining AI for enterprise adoption, enabling real-time AI, and setting new standards in efficiency and scalability.

    Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest breakthroughs in AI, technology, and enterprise innovation!



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

  • In this episode of the Eye on AI podcast, we dive into the critical issue of data quality for AI systems with Sedarius Perrotta, co-founder of Shelf.

    Sedarius takes us on a journey through his experience in knowledge management and how Shelf was built to solve one of AI’s most pressing challenges—unstructured data chaos. He shares how Shelf’s innovative solutions enhance retrieval-augmented generation (RAG) and ensure tools like Microsoft Copilot can perform at their best by tackling inaccuracies, duplications, and outdated information in real-time.

    Throughout the episode, we explore how unstructured data acts as the "fuel" for AI systems and why its quality determines success. Sedarius explains Shelf's approach to data observability, transparency, and proactive monitoring to help organizations fix "garbage in, garbage out" issues, ensuring scalable and trusted AI initiatives.

    We also discuss the accelerating adoption of generative AI, the future of data management, and why building a strategy for clean and trusted data is vital for 2025 and beyond. Learn how Shelf enables businesses to unlock the full potential of their unstructured data for AI-driven productivity and innovation.

    Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest advancements in AI, data management, and next-gen automation!


    Stay Updated:
    Craig Smith Twitter: https://twitter.com/craigss
    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI


    (00:00) Introduction and Shelf's Mission
    (03:01) Understanding SharePoint and Data Challenges
    (05:29) Tackling Data Entropy in AI Systems
    (08:13) Using AI to Solve Data Quality Issues
    (12:30) Fixing AI Hallucinations with Trusted Data
    (21:01) Gen AI Adoption Insights and Trends
    (28:44) Benefits of Curated Data for AI Training
    (37:38) Future of Unstructured Data Management

  • Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com/ and use promo code EYEONAI to get 10% off any LegalZoom business formation product excluding subscriptions and renewals.



    In this episode of the Eye on AI podcast, we explore the cutting-edge world of AI-powered autonomy with Nathan Michael, CTO of Shield AI.

    Nathan shares his journey from academia to leading one of the most innovative companies in defense technology, revealing how Shield AI is transforming autonomous systems to operate in the most challenging environments.

    Throughout the episode, Nathan dives into Shield AI's groundbreaking technologies, including the revolutionary Hivemind platform, which enables drones and uncrewed jets to think, adapt, and act independently—even in GPS-denied and communication-jammed conditions. He explains how these systems are deployed across defense and commercial applications, reshaping intelligence, surveillance, and reconnaissance missions with unprecedented precision and resilience.

    We also discuss the future of warfare and technology, from the commoditization of AI-driven autonomy to the ethical and strategic considerations of deploying these systems at scale. Nathan offers insights into how rapid iteration cycles and resilient intelligence will define the next era of defense innovation, ensuring mission success in increasingly complex scenarios.

    Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest advancements in AI, robotics, and the future of autonomous systems!



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction to Shield AI and Hivemind

    (03:02) Nathan Michael's journey to Shield AI

    (06:07) Shield AI's mission and technology overview

    (10:18) VBAT: A versatile tail-sitter aircraft

    (12:44) Hivemind's swarm capabilities and applications

    (14:32) Ethics and societal guardrails for autonomous systems

    (17:24) Combat applications of quadcopters

    (19:12) Navigating and mapping unknown environments

    (20:38) Use of VBAT in Ukraine operations

    (22:10) Intelligence gathering and mission versatility

    (24:42) Transforming warfare with AI-driven autonomy

    (28:47) Teaming and communication in autonomous swarms

    (32:29) Communication networks in jammed conditions

    (35:34) Coordinated exploration with autonomous systems

    (38:13) Exploring buildings with quadcopter teams

    (40:57) Crewed-uncrewed teaming in modern combat

    (43:28) Global competition in AI and autonomy

    (46:03) Use of autonomous drones in Ukraine

    (50:28) Shield AI's vision: Proliferation of resilient intelligence

    (55:26) Expanding AI autonomy to commercial applications

  • This episode is sponsored by RapidSOS. Close the safety gap and transform your emergency response with RapidSOS.

    Visit https://rapidsos.com/eyeonai/ today to learn how AI-powered safety can protect your people and boost your bottom line.



    In this episode of the Eye on AI podcast, we delve into the role of AI in product management with Christian Marek, VP of Product at Productboard.

    Christian shares his journey in shaping the future of product management, exploring how AI is changing the way we ideate, build, and deliver customer-centric products.

    Throughout the episode, Christian unveils Productboard's innovative AI-driven solutions, including the groundbreaking Productboard Pulse, which accelerates ideation and discovery by aggregating and analyzing customer feedback. He explains how AI enables product managers to prioritize features, write detailed specifications, and align product roadmaps with organizational goals—all while driving ROI and delivering exceptional customer experiences.

    We also discuss the evolution of product management, from its origins to its rise as a mainstream discipline in organizations of all sizes. Christian offers insights into how AI tools are reshaping the product management landscape, making it more efficient, collaborative, and impactful than ever before.

    Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest breakthroughs in AI, product innovation, and customer-centric technology!

    Check out Productboard Pulse: https://www.productboard.com/product/voice-of-customer/



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction

    (03:18) What is Productboard?

    (05:04) Christian Marek’s Journey

    (07:20) Defining Product Management

    (08:45) How Productboard Was Born

    (10:05) Transforming Product Management

    (14:32) Exploring Use Cases: Existing and New Product Development

    (18:13) Scaling Productboard

    (20:33) Managing Complex Projects Without Productboard

    (24:59) Industries Using Productboard

    (26:29) Why Startups Benefit from Productboard

    (29:25) Closing the Feedback Loop on Product Management

    (33:16) Non-Digital Applications of Productboard

    (36:27) AI Model Selection

    (39:48) The Future of Product Management with AI

    (43:30) C-Suite Engagement and ROI with Productboard

    (45:15) How to Get Started with Productboard

  • This episode is sponsored by Oracle.

    Oracle Cloud Infrastructure, or OCI, is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking. So you’re saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI.

    Cut your current cloud bill in HALF if you move to OCI now: https://oracle.com/eyeonai

    In this episode of the Eye on AI podcast, Tariq Shaukat, CEO of Sonar, joins Craig Smith to explore the future of code quality, security, and AI’s role in software development.

    Tariq shares his journey from leading roles at Google Cloud and Bumble to helming Sonar, a company disrupting code assurance for developers worldwide. With over 7 million users and support for 30+ programming languages, Sonar has become a critical tool in ensuring clean, maintainable, and secure code.

    We dive into Sonar’s innovative AI Code Assurance Workflow, which integrates seamlessly with generative AI tools like Copilot and Codium. Tariq discusses how Sonar addresses the challenges of AI-generated code, tackling issues like security vulnerabilities, maintainability problems, and the accountability crisis in today’s coding landscape.

    Tariq also unpacks the importance of hybrid deterministic and AI-driven approaches, the role of design and architecture in modern software development, and how Sonar is helping companies manage tech debt across billions of lines of code.

    With Sonar’s recent enterprise-grade SaaS launch and commitment to reducing developer toil, this episode offers valuable insights for developers, tech leaders, and anyone interested in the evolving intersection of AI and software engineering.

    Don’t forget to like, subscribe, and hit the notification bell for more discussions on AI, technology, and innovation!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    (00:00) Introduction to Tariq Shaukat and Sonar

    (01:23) Overview of SonarQube

    (03:03) Deterministic Systems and AI Integration

    (07:36) Challenges of AI-Generated Code

    (10:12) Early Issue Detection in Development

    (12:33) Accountability in AI Code Generation

    (16:20) Importance of Rigorous Code Reviews

    (19:34) Managing Tech Debt with Continuous Improvement

    (22:16) Why Sonar Focuses on Integration

    (25:08) Reviewing Billion-Line Code Bases with Sonar

    (29:37) Tailoring Sonar for Specific Codebases and Workflows

    (32:40) Avoiding Overwhelming Developers with Noise

    (37:49) Governance and Managing Complex Codebases

    (40:50) Addressing Tech Debt in Legacy Systems

    (45:07) Sonar’s Open-Source Model and Philosophy

    (48:11) What’s Next for Sonar

  • Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com and use promo code EYEONAI to get 10% off any LegalZoom business formation product excluding subscriptions and renewals.

    In this episode of the Eye on AI podcast, Matt Panousis, Co-Founder & COO at LipDub AI, shares the journey behind the platform, a cutting-edge AI-powered lip-syncing tool revolutionizing the dubbing industry.

    Matt joins Craig to explore the future of content localization, the evolution of LipDub, and how generative AI is transforming how audiences experience video content worldwide.

    As a leader in AI and visual effects, Matt discusses how LipDub integrates advanced AI to enable real-time lip-syncing for dubbed audio, making Hollywood-quality dubbing affordable and accessible. From processing hours of video footage to handling complex, dynamic scenes, LipDub is empowering content creators to reach global audiences like never before.

    We explore the challenges of creating realistic lip movements, the importance of mouth internals, textures, and fidelity, and how tools like LipDub stand apart from competitors like Rask AI. Matt also introduces Vanity AI, LipDub’s sister technology for automated digital makeup and de-aging, used by MARZ for high-end visual effects work.

    Learn how LipDub is redefining the possibilities of content localization with AI and what the future holds as this technology evolves.

    Don’t miss this deep dive into AI-powered innovation with Matt Panousis. Like, subscribe, and hit the notification bell for more episodes exploring cutting-edge advancements in AI!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    (00:00) Introduction & Matt Panousis Background
    (02:45) Origins of MARZ and LipDub
    (04:20) LipDub vs. Competitors
    (06:15) Integration with Dubbing Solutions
    (08:00) LipDub Technical Process
    (10:20) Competition
    (12:20) Time and Cost
    (14:00) Target Markets and Use Cases
    (19:00) Future Developments
    (22:00) Translation and Timing Challenges
    (28:00) Ethical Concerns and Misuse
    (30:30) Digital Watermarking
    (32:25) Real-World Examples and Future Products
    (34:00) Team Building and Research
    (37:00) Building the Advisory Board
    (38:10) Securing Research Talent
    (40:00) Funding and the Board
    (40:35) Future Products

  • This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

    NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.



    In this episode of the Eye on AI podcast, we explore the cutting-edge world of semiconductor innovation and its role in the future of artificial intelligence with Kai Beckmann, CEO of Merck KGaA.

    Kai takes us on a journey into the heart of semiconductor manufacturing, revealing how next-generation chips are driving the AI revolution. From the complex process of creating advanced chips to the increasing demands of AI on semiconductor technology, Kai shares how Merck is pioneering materials science to unlock unprecedented levels of computational power.

    Throughout the conversation, Kai explains how AI’s growth is reshaping the semiconductor industry, with innovations like edge AI, heterogeneous integration, and 3D chip architectures pushing the boundaries of performance. He highlights how Merck is using artificial intelligence to accelerate material discovery, reduce experimentation cycles, and create smarter, more efficient processes for the chips that power everything from smartphones to data centers.

    Kai also delves into the global landscape of semiconductor manufacturing, discussing the challenges of supply chains, the cyclical nature of the industry, and the rapid technological advancements needed to meet AI’s demands. He explains why the semiconductor sector is entering the "Age of Materials," where breakthroughs in materials science are enabling the next wave of AI-driven devices.

    Like, subscribe, and hit the notification bell to stay tuned for more episodes!



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction

    (02:48) Merck KGaA

    (05:21) Foundations of Semiconductor Manufacturing

    (07:57) How Chips Are Made

    (09:24) Exploring Materials Science

    (13:59) Growth and Trends in the Semiconductor Industry

    (15:44) Semiconductor Manufacturing

    (17:34) AI’s Growing Demands on Semiconductor Tech

    (20:34) The Future of Edge AI

    (22:10) Using AI to Disrupt Material Discovery

    (24:58) How AI Accelerates Innovation in Semiconductors

    (27:32) Evolution of Semiconductor Fabrication Processes

    (30:08) Advanced Techniques: Chiplets, 3D Stacking, and Beyond

    (32:29) Merck’s Role in Global Semiconductor Innovation

    (34:03) Major Markets for Semiconductor Manufacturing

    (37:18) Challenges in Reducing Latency and Energy Consumption

    (40:21) Exploring New Conductive Materials for Efficiency

  • This episode is sponsored by Shopify.

    Shopify is a commerce platform that allows anyone to set up an online store and sell their products. Whether you’re selling online, on social media, or in person, Shopify has you covered on every base. With Shopify you can sell physical and digital products. You can sell services, memberships, ticketed events, rentals and even classes and lessons.

    Sign up for a $1 per month trial period at http://shopify.com/eyeonai



    In this episode of the Eye on AI podcast, Andrew D. Feldman, Co-Founder and CEO of Cerebras Systems, unveils how Cerebras is disrupting AI inference and high-performance computing.

    Andrew joins Craig Smith to discuss the groundbreaking wafer-scale engine, Cerebras’ record-breaking inference speeds, and the future of AI in enterprise workflows. From designing the fastest inference platform to simplifying AI deployment with an API-driven cloud service, Cerebras is setting new standards in AI hardware innovation.

    We explore the shift from GPUs to custom architectures, the rise of large language models like Llama and GPT, and how AI is driving enterprise transformation. Andrew also dives into the debate over open-source vs. proprietary models, AI’s role in climate mitigation, and Cerebras’ partnerships with global supercomputing centers and industry leaders.

    Discover how Cerebras is shaping the future of AI inference and why speed and scalability are redefining what’s possible in computing.

    Don’t miss this deep dive into AI’s next frontier with Andrew Feldman.

    Like, subscribe, and hit the notification bell for more episodes!



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Intro to Andrew Feldman & Cerebras Systems

    (00:43) The rise of AI inference

    (03:16) Cerebras’ API-powered cloud

    (04:48) Competing with NVIDIA’s CUDA

    (06:52) The rise of Llama and LLMs

    (07:40) OpenAI's hardware strategy

    (10:06) Shifting focus from training to inference

    (13:28) Open-source vs proprietary AI

    (15:00) AI's role in enterprise workflows

    (17:42) Edge computing vs cloud AI

    (19:08) Edge AI for consumer apps

    (20:51) Machine-to-machine AI inference

    (24:20) Managing uncertainty with models

    (27:24) Impact of U.S.–China export rules

    (30:29) U.S. innovation policy challenges

    (33:31) Developing wafer-scale engines

    (34:45) Cerebras’ fast inference service

    (37:40) Global partnerships in AI

    (38:14) AI in climate & energy solutions

    (39:58) Training and inference cycles

    (41:33) AI training market competition

  • Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com/ and use promo code EYEONAI to get 10% off any LegalZoom business formation product excluding subscriptions and renewals.



    In this episode of the Eye on AI podcast, Varun Mohan, Co-Founder and CEO of Codeium, shares the journey behind building one of the most innovative AI-powered coding tools available today.

    Varun joins Craig Smith to explore the future of software development, the evolution of Codeium, and how generative AI is revolutionizing the way developers interact with codebases.

    As a leader in AI and software engineering, Varun discusses how Codeium integrates advanced AI to enable real-time code completion, refactoring, and even end-to-end application building. From processing billions of lines of code to making complex legacy systems like COBOL more accessible, Codeium is empowering developers to work smarter and faster.

    We delve into the challenges of managing massive codebases, the importance of maintaining security and self-hosted solutions, and how tools like Codeium stand apart from competitors like Copilot and CodeWhisperer. Varun also introduces Cascade, the AI assistant within Codeium’s custom IDE, Windsurf, which merges human ingenuity with AI's reasoning capabilities for a seamless coding experience.

    Other fascinating topics include the role of AI in onboarding developers to complex systems, its ability to handle rare programming languages, and how Codeium is reshaping the software lifecycle from design to deployment.

    Learn how Codeium is redefining the possibilities of software development with AI, and what the future holds as this technology continues to evolve.

    Don’t miss this deep dive into AI-powered innovation with Varun Mohan. Like, subscribe, and hit the notification bell for more episodes exploring cutting-edge advancements in AI!



    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction to Varun Mohan and Codeium

    (05:32) Introducing Codeium: AI-powered coding revolution

    (08:42) Competing with Copilot and CodeWhisperer

    (11:46) Using Codeium to streamline software development

    (14:01) Features of Cascade: Refactoring and automation

    (17:26) Building applications from scratch with Codeium

    (22:00) Enterprise adoption and recognition by JPMorgan Chase

    (23:14) The pivot to Codeium and lessons learned

    (24:32) The vision for Codeium’s future

  • This episode of Eye on AI is sponsored by Citrusx.

    Unlock reliable AI with Citrusx! Our platform simplifies validation and risk management, empowering you to make smarter decisions and stay compliant. Detect and mitigate AI vulnerabilities, biases, and errors with ease.

    Visit http://email.citrusx.ai/eyeonai to download our free fairness use case and see the solution in action.

    In this episode of the Eye on AI podcast, Terry Sejnowski, a pioneer in neural networks and computational neuroscience, joins Craig Smith to discuss the future of AI, the evolution of ChatGPT, and the challenges of understanding intelligence.

    Terry, a key figure in the deep learning revolution, shares insights into how neural networks laid the foundation for modern AI, including ChatGPT’s groundbreaking generative capabilities. From its ability to mimic human-like creativity to its limitations in true understanding, we explore what makes ChatGPT remarkable and what it still lacks compared to human cognition.

    We also dive into fascinating topics like the debate over AI sentience, the concept of "hallucinations" in AI models, and how language models like ChatGPT act as mirrors reflecting user input rather than possessing intrinsic intelligence. Terry explains how understanding language and meaning in AI remains one of the field’s greatest challenges.

    Additionally, Terry shares his perspective on nature-inspired AI and what it will take to develop systems that go beyond prediction to exhibit true autonomy and decision-making.

    Learn why AI models like ChatGPT are revolutionary yet incomplete, how generative AI might redefine creativity, and what the future holds for AI as we continue to push its boundaries.

    Don’t miss this deep dive into the fascinating world of AI with Terry Sejnowski. Like, subscribe, and hit the notification bell for more cutting-edge AI insights!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    (00:00) Introduction to Terry Sejnowski and His Work

    (03:02) The Origins of Modern AI and Neural Networks

    (05:29) The Deep Learning Revolution and ImageNet

    (07:11) Understanding ChatGPT and Generative AI

    (12:34) Exploring AI Creativity

    (16:03) Lessons from Gaming AI: AlphaGo and Backgammon

    (18:37) Early Insights into AI’s Affinity for Language

    (24:48) Syntax vs. Semantics: The Purpose of Language

    (30:00) How Written Language Transformed AI Training

    (35:10) Can AI Become Sentient?

    (41:37) AI Agents and the Next Frontier in Automation

    (45:43) Nature-Inspired AI: Lessons from Biology

    (50:02) Digital vs. Biological Computation: Key Differences

    (54:29) Will AI Replace Jobs?

    (57:07) The Future of AI

  • This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

    NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.

    In this episode of the Eye on AI podcast, Avthar Sewrathan, Lead Technical Product Marketing Manager at Timescale, joins Craig Smith to explore how Postgres is transforming AI development with cutting-edge tools and open-source innovation.

    With its robust, extensible framework, Postgres has become the go-to database for AI applications, from semantic search to retrieval-augmented generation (RAG). Avthar takes us through Timescale's journey from its IoT origins to disrupting the way developers handle vector search, embedding management, and high-performance AI workloads—all within Postgres.

    We dive into Timescale's tools like PGVector, PGVector Scale, and PGAI Vectorizer, uncovering how they help developers build AI-powered systems without the complexity of managing multiple databases. Avthar explains how Postgres seamlessly handles structured and unstructured data, making it the perfect foundation for next-gen AI applications.

    Learn how Postgres supports AI-driven use cases across industries like IoT, finance, and crypto, and why its open-source ecosystem is key to fostering collaboration and innovation.

    Tune in to discover how Postgres is redefining AI databases, why Timescale’s tools are a game-changer for developers, and what the future holds for AI innovation in the database space.

    Don’t forget to like, subscribe, and hit the notification bell for more AI insights!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction to Avthar and Timescale

    (02:35) The origins of Timescale and TimescaleDB

    (05:06) What makes Postgres unique and reliable

    (07:17) Open-source philosophy at Timescale

    (12:04) Timescale's early focus on IoT and time series data

    (16:17) Applications in finance, crypto, and IoT

    (19:03) Postgres in AI: From RAG to semantic search

    (22:00) Overcoming scalability challenges with PGVector Scale

    (24:33) PGAI Vectorizer: Managing embeddings seamlessly

    (28:09) The PGAI suite: Tools for AI developers

    (30:33) Vectorization explained: Foundations of AI search

    (32:24) LLM integration within Postgres

    (35:26) Natural language interfaces and database workflows

    (38:11) Structured and unstructured data in Postgres

    (41:17) Postgres for everything: Simplifying complexity

    (44:52) Timescale’s accessibility for startups and enterprises

    (47:46) The power of open source in AI

  • This episode is sponsored by Oracle.

    Oracle Cloud Infrastructure, or OCI, is a blazing-fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking. So you’re saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI.

    Cut your current cloud bill in HALF if you move to OCI now: https://oracle.com/eyeonai

    In this episode of the Eye on AI podcast, Jeff Boudier, Head of Product and Growth at Hugging Face, joins Craig Smith to uncover how the platform is empowering AI builders and driving the open-source AI revolution.

    With a mission to democratize AI, Jeff walks us through Hugging Face's journey from a chatbot for teens to the leading platform hosting over 1 million public AI models, datasets, and applications. We explore how Hugging Face is bridging the gap between enterprises and open-source innovation, enabling developers to build cutting-edge AI solutions with transparency and collaboration.

    Jeff dives deep into Hugging Face’s tools and features, from hosting private and public models to fostering a thriving ecosystem of AI builders. He shares insights on the transformative impact of technologies like Transformers, transfer learning, and no-code solutions that make AI accessible to more creators than ever before.

    We also discuss Hugging Face’s latest innovation, ‘Hugs,’ designed to help enterprises seamlessly integrate open-source AI within their infrastructure while retaining full control over their data and models.

    Tune in to discover how Hugging Face is shaping the future of AI development, why open-source models are catching up with proprietary ones, and what trends are driving innovation across AI disciplines.

    Don’t forget to like, subscribe, and hit the notification bell for more!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    (00:00) Introduction to Jeff Boudier

    (02:16) How Hugging Face Empowers AI Builders

    (05:26) Transition from Chatbot to Leading AI Platform

    (07:07) Hosting AI Models: Public and Private Options

    (10:13) What Does Hosting Models on Hugging Face Mean?

    (14:22) Hugging Face vs. GitHub: Key Differences

    (19:09) Navigating 1 Million Models on the Hugging Face Hub

    (22:33) Leaderboards and Filtering AI Models

    (25:26) Building Applications with Hugging Face Models

    (28:03) AI Innovation: From Code to Model-Driven Development

    (30:45) Frameworks for Agentic Systems and Hugging Chat

    (35:20) Open Source vs. Proprietary AI: The Future

    (40:41) Introducing ‘Hugs’: Open AI for Enterprises

    (44:59) The Role of No-Code in AI Development

    (47:26) Hugging Face’s Vision

  • This episode is sponsored by Legal Zoom.

    Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com/ and use promo code Smith10 to get 10% off any LegalZoom business formation product excluding subscriptions and renewals.



    In this episode of the Eye on AI podcast, we dive into the world of Artificial General Intelligence (AGI) with Ben Goertzel, CEO of SingularityNET and a leading pioneer in AGI development.

    Ben shares his vision for building machines that go beyond task-specific capabilities to achieve true, human-like intelligence. He explores how AGI could reshape society, from revolutionizing industries to redefining creativity, learning, and autonomous decision-making.

    Throughout the conversation, Ben discusses his unique approach to AGI, which combines decentralized AI systems and blockchain technology to create open, scalable, and ethically aligned AI networks. He explains how his work with SingularityNET aims to democratize AI, making AGI development transparent and accessible while mitigating risks associated with centralized control.

    Ben also delves into the philosophical and ethical questions surrounding AGI, offering insights into consciousness, the role of empathy, and the potential for building machines that not only think but also align with humanity’s best values. He shares his thoughts on how decentralized AGI can avoid the narrow, profit-driven goals of traditional AI and instead evolve in ways that benefit society as a whole.

    This episode offers a thought-provoking glimpse into the future of AGI, touching on the technical challenges, societal impact, and ethical considerations that come with creating truly intelligent machines.

    Ben’s perspective will leave you questioning not only what AGI can achieve, but also how we can guide it toward a positive future.

    Don’t forget to like, subscribe, and hit the notification bell to stay tuned for more!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    (00:00) Introduction to Ben Goertzel

    (01:21) Overview of "The Consciousness Explosion"

    (02:28) Ben’s Background in AI and AGI

    (04:39) Exploring Consciousness and AI

    (08:22) Panpsychism and Views on Consciousness

    (10:32) The Path to the Singularity

    (13:28) Critique of Modern AI Systems

    (18:30) Perspectives on Human-Level AI and Creativity

    (21:42) Ben’s AGI Paradigm and Approach

    (25:39) OpenCog Hyperon and Knowledge Graphs

    (31:12) Integrating Perception and Experience in AI

    (34:02) Robotics in AGI Development

    (35:06) Virtual Learning Environment for AGI

    (39:01) Creativity in AI vs. Human Intelligence

    (44:21) User Interaction with AGI Systems

    (48:22) Funding AGI Research Through Cryptocurrency

    (53:03) Final Thoughts on Compassionate AI

    (55:21) How to Get "The Consciousness Explosion" Book