Episodes
-
Summary
“If there is a risk in your design for a device used in a hospital setting, you have qualified people there. But at home, there is no more safety net. So (AI) software risk becomes even more important.”
In this Let’s Talk Risk! conversation, Erhan Ilhan shares how software is increasingly being used as a medical device in our industry. As a result, we have to pay more attention to software risk management. Software failure will occur, but we need to understand the sequence of events leading to hazardous situations and think of controls to mitigate their impact.
In this episode, Naveen Agarwal and Erhan Ilhan discuss the complexities of software risk management in the medical device industry. They explore the differences between software and hardware risks, the importance of usability testing, the challenges of estimating software failure probabilities, and the integration of cybersecurity risks into overall risk management. The discussion also highlights the significance of post-market surveillance and the impact of emerging technologies like AI and ML on medical devices.
Listen to the full 30-minute podcast or jump to a section of interest listed below.
Chapters
01:05 Introducing Erhan Ilhan
02:10 Understanding Software Risk vs. Hardware Risk
03:12 Understanding Software Failures and Estimating Probability
05:20 Potential Effect of Recent Update to IEC 62304
06:36 Importance of Post-Market Surveillance in Software Risk Management
07:42 Usability Testing and Human Factors in Software Risk Management
12:35 Estimating Probability of Software Failure in FMEA
18:30 Cybersecurity Risk Management for Software as a Medical Device
22:55 AI/ML, GenAI in Medical Devices and Home Health
29:42 Final Thoughts and Key Takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: Tips for medical device software risk analysis.
LTR: Cloud computing in MedTech.
ACHIEVE: Post-market surveillance workshop.
Key Takeaways
* Software risk management is essential for patient safety.
* Software does not pose direct harm like hardware can.
* Assuming a software failure probability of one is incorrect.
* Real-world data is crucial for improving software safety.
* Usability testing should involve diverse user demographics.
* Cybersecurity risks must be assessed alongside safety risks.
* Post-market surveillance is vital for ongoing risk management.
* AI and ML are transforming the medical device landscape.
* Risk control measures should be clearly defined and integrated.
* Always prioritize patient safety in software development.
Keywords
Software risk management, medical devices, usability testing, cybersecurity, post-market surveillance, AI, ML, IEC 62304, ISO 14971, quality assurance
About Erhan Ilhan
Erhan Ilhan is the Head of Quality and Regulatory at Circadia Health, where he has been building the company’s Quality Management System (QMS), driving ISO 13485 implementation, and leading the regulatory strategy for Circadia’s software and hardware-based medical devices. His work includes preparing 510(k) submissions, managing FDA interactions, and leading AI/ML-focused pre-submissions to help define the regulatory pathway for Circadia’s AI-driven clinical decision support software.
At Circadia, Erhan also leads design quality assurance activities, oversees risk management and cybersecurity compliance, and works closely with cross-functional teams to embed quality into every stage of product development — from design to manufacturing and post-market surveillance.
Prior to Circadia, Erhan held various engineering and quality leadership roles at GE Healthcare, Abbott, Danaher, Medtronic, and Glooko. With deep expertise in software development, design controls, verification and validation, design transfer, test method development, production and process controls, risk management, and regulatory submissions, he has contributed to a wide range of medical device and digital health products. His work spans implantable pacemakers and neuromodulation devices, Continuous Glucose Monitoring (CGM) systems, insulin pumps, diabetes management platforms, mobile and web health applications, and cloud-based algorithm engines.
Erhan holds a Bachelor’s degree in Industrial and Electrical Engineering and a Master’s degree in Computer Science, blending technical depth with a strong foundation in quality and regulatory compliance.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only, and not as legal advice. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
Summary
“Clinicians need to trust the (AI) algorithm, and knowing how it works helps them understand where it might fail, or where extra attention might be needed.”
In this episode of the Let's Talk Risk Podcast, Dr. Jay Vaishnav discusses the rapid growth of artificial intelligence (AI) in the medical technology (MedTech) field, particularly in Radiology.
She shares insights from her extensive experience at the FDA and in MedTech, in roles spanning scientific, medical, and regulatory affairs, emphasizing the importance of understanding the regulatory landscape, navigating benefit-risk assessments, and the challenges of AI model validation.
The discussion also covers innovations in triage and notification systems, the future of AI in clinical care, and best practices for regulatory professionals. Dr. Vaishnav highlights the need for transparency in AI development and the importance of involving clinicians early in the process. She concludes with career insights and the value of mentorship in navigating the evolving landscape of MedTech.
Listen to the full 30-minute podcast or jump to a section of interest listed below.
Chapters
00:00 Introducing Dr. Jay Vaishnav
01:50 The Rise of AI/ML devices in MedTech
04:40 FDA perspective on benefit-risk
06:05 Managing risk of clinician over-reliance on AI
08:25 Case study: De Novo granted for a triage & notification AI device
11:56 Challenges in developing triage & notification AI devices
14:00 Current state and barriers to adoption of AI technologies in healthcare
15:35 The need for explainability in AI/ML devices
17:16 Future directions for AI/ML in MedTech
18:00 Why clinician involvement in AI/ML development is important
19:12 Best Practices for QA/RA professionals
21:30 Career insights and power of mentoring
27:46 Closing comments
Suggested links:
* DEN 170073 - ContaCT, viz.AI, Inc.
* LTR: AI in MedTech.
* LTR: Regulating Generative AI.
Key Takeaways
* AI is growing rapidly in MedTech, especially in radiology applications.
* Regulatory considerations are crucial for AI applications.
* Benefit-risk assessments are complex and vary by device indication.
* Triage applications can significantly improve patient outcomes.
* AI models face challenges with false positives and negatives.
* Cultural mistrust of AI algorithms can hinder adoption.
* Involving clinicians early in AI development is essential.
* Transparency about AI limitations is necessary for trust.
* The future of AI in healthcare is promising but uncertain.
* Career growth opportunities exist for regulatory professionals in AI.
Keywords
AI, MedTech, FDA, regulatory affairs, machine learning, healthcare, medical devices, radiology, clinical applications, patient safety.
About Jay Vaishnav, Ph.D.
Dr. Jay Vaishnav is currently Director of Regulatory Affairs at Canon Medical Informatics, where she leads US regulatory strategy and FDA submissions for Canon's portfolio of Healthcare IT solutions. She holds a Ph.D. from Harvard in theoretical physics, and after some time in academia moved to the FDA, spending over seven years there in positions of increasing responsibility. She eventually joined the medical device industry in Medical Affairs before moving into a Regulatory role. She is a Fellow of the Regulatory Affairs Professional Society, an occasional instructor at UCSC Silicon Valley Extension, and co-editor of the book "From X Rays to AI: Navigating US Regulations in Radiological Health."
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
Let's Talk Risk! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
This is a free preview of a paid episode. To hear more, visit naveenagarwalphd.substack.com
Summary
“The Digital Health Center of Excellence was hit pretty heavy, and also AI researchers at OSEL. I am not too worried about PCCP but the impact could be on how quickly FDA can finalize the other draft guidance with larger scope applied to all medical devices.”
In this episode of the Let's Talk Risk Podcast, Naveen Agarwal and Yu Zhao discuss the P…
-
Summary
“To AI or not to AI is no longer a question.”
In this Let’s Talk Risk! conversation, Naveen Agarwal and Michelle Wu discuss the transformative role of AI in the regulatory space, particularly in healthcare and life sciences. They explore key trends observed at the JP Morgan Healthcare Conference, share real-world use cases of AI applications, and address concerns about trust, job security, and privacy in the context of AI adoption. Michelle emphasizes that AI is an enabler rather than a replacement, and highlights the importance of adapting workflows to integrate AI effectively. The discussion concludes with key takeaways on leveraging AI for enhanced productivity and strategic decision-making.
Listen to the full 30-minute podcast or jump to a section of interest listed below.
Chapters
00:00 Introducing Michelle Wu of NyquistAI
03:00 Key Trends from the JP Morgan Healthcare Conference
06:36 AI Use Cases in Regulatory Applications
09:59 Trusting AI in Legal Contexts
11:49 AI in Product Development, M&A and Growth
14:40 AI's Role in Regulatory Intelligence
18:25 Addressing Job Security Concerns with AI
22:11 AI in IDE and Early Clinical Phase
25:40 Managing Privacy Concerns and IP Protection
27:05 Managing Risk of Errors from Hallucination and Data Quality
28:40 Announcements, Final Thoughts and Key Takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: AI and the future of work for QA/RA professionals.
LTR: A strategic approach to regulatory success with GenAI devices.
LTR: How GenAI is transforming Quality in MedTech.
Key Takeaways
* AI is an enabler, not a replacement.
* AI will challenge and shake up our traditional way of work.
* The best way to adopt AI is to revamp existing workflows.
* Trusting AI is a work in progress.
* AI can help optimize resources, save time, and reduce costs.
* AI can cut down the time for research and decision-making.
* We need multilingual professionals who can communicate AI effectively.
* AI will not take away jobs, but those who leverage AI will excel.
* Data privacy and security are critical in AI applications.
* Experimenting with AI solutions can build confidence and skills.
Keywords
AI, regulatory applications, healthcare, Nyquist AI, JP Morgan Healthcare Conference, trust in AI, product development, job security, data privacy, risk management
About Michelle Wu
Michelle Wu is the Co-Founder and CEO of NyquistAI, where her team is building the world’s largest AI-empowered data platform for Life Sciences innovators. She has over 10 years of hands-on experience in connecting real-life business needs with the right technical solutions, as well as a deep understanding of the challenges and opportunities in the life science industry. Previously, she launched two other startups and served as a Strategic Planning and BDL Manager at Novartis. Michelle holds a Bachelor’s degree in Foreign Languages with a minor in Economics, and an MBA in Healthcare and Investment.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only, and not as legal advice. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
This is a free preview of a paid episode. To hear more, visit naveenagarwalphd.substack.com
Summary
“When the notified body goes back and asks you (the manufacturer) for more information, they aren’t necessarily saying that whatever you have done is wrong.”
Clinical evaluation is a major requirement for medical devices under the EU regulation, and notified bodies must determine whether the evidence provided by the manufacturer is scientifically…
-
Summary
“People who were passionate about this subject (safety risk management) and were enthusiastic about it, we got together and said ‘let’s just create a conference for MedTech’, and we did.”
In this Let’s Talk Risk! conversation, Naveen Agarwal interviews Bijan Elahi, a leading authority in safety risk management of medical devices. They discuss the upcoming International Safety Risk Management Conference in Amsterdam, focusing on its significance, key speakers, and topics such as AI, cybersecurity, and post-market risk management. The conversation highlights the importance of networking, collaboration, and continuous learning in the medical device industry.
Listen to the full 30-minute podcast or jump to a section of interest listed below.
Chapters
00:00 Introduction to the Conference and Guest
02:57 Overview of the International Safety Risk Management Conference
05:53 Keynote Speakers and Topics
09:08 AI and Cybersecurity in Medical Devices
11:50 Post-Market Risk Management Challenges
15:09 Global Regulatory Perspectives
17:57 Designing Safety into Medical Devices
20:49 Networking and Collaboration Opportunities
23:58 Closing Remarks and Key Takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
* International conference on medical device safety risk management.
* LTR: Key themes in risk management and an upcoming conference.
Key Takeaways
* The International Safety Risk Management Conference is a unique event focused on medical devices.
* The conference will feature keynotes from FDA representatives and industry experts.
* AI and cybersecurity are critical topics in medical device safety.
* Post-market risk management presents significant challenges for the industry.
* Networking opportunities at the conference can enhance professional growth.
* Designing safety into medical devices from the start is essential.
* The medical device industry is still evolving compared to other sectors like aerospace.
* Collaboration among industry professionals can lead to better risk management practices.
* The conference will offer both in-person and online participation options.
* Early bird registration is available for attendees.
Keywords
Safety risk management, medical devices, conference, AI, cybersecurity, post-market, regulatory, design, networking, collaboration
About Bijan Elahi
Bijan Elahi is the founder of MedTech Safety and a globally recognized expert on safety risk management of medical devices. Bijan is a technical fellow, and an affiliate professor at Drexel University, USA. Bijan is also a lecturer at Eindhoven University of Technology in the Netherlands, where he teaches risk management to doctoral students in engineering. He is a frequently invited speaker at international professional conferences, and is also a contributor to ISO 14971, the international standard on the application of risk management to medical devices. He has trained over 10,000 people worldwide and his book Safety Risk Management of Medical Devices, now in its second edition, is a bestseller.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only, and not as legal advice. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
Summary
“As a Quality leader, you need to be empathetic to other departments that are impacted by whatever change(s) you are making.”
In this Let’s Talk Risk! conversation, Naveen Agarwal and Amit Tyagi explore the intersection of quality management and career growth. Amit shares insights from his extensive experience in quality engineering, emphasizing the importance of innovative thinking, risk management, and effective CAPA processes. They discuss the challenges of navigating change in quality practices and the human element of leadership. Amit also provides valuable career coaching tips for entry-level professionals, highlighting the significance of networking and showcasing relevant skills. The conversation concludes with key takeaways on the importance of quality as a driver for business success and personal career development.
Listen to the full 30-minute podcast or jump to a section of interest listed below.
Chapters
00:00:05 Introducing Amit Tyagi
00:04:10 Why Quality is more than a paper exercise
00:06:45 Understanding CAPA: correction vs. corrective action
00:11:27 Leading change to improve and sustain Quality
00:15:02 Growing your career as a Quality leader
00:20:05 Bridging the gap from the WHAT to the HOW
00:23:30 Career tips for early stage Quality professionals
00:29:26 Closing comments and key takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: Why risk practitioners must build the empathy muscle.
LTR: Communicate to facilitate collaboration, not impose opinions.
Career Grow System: A CareerLaunch program with Amit Tyagi.
Key Takeaways
* Quality should not be seen as just a paperwork exercise.
* Effective CAPA processes are crucial for long-term success.
* Networking is essential for career growth.
* Quality leaders must facilitate change and guide teams.
* Understanding the human element is key in quality leadership.
* Tools like the 5 Whys can help in root cause analysis.
* Communicating value is vital during interviews.
* Entry-level professionals should focus on relevant experiences.
* Continuous improvement should be a goal for quality management.
* Quality impacts not just compliance but also business success.
Keywords
Quality management, career growth, risk management, CAPA, quality leadership, engineering, professional development, networking, quality tools, career coaching
About Amit Tyagi
Amit Tyagi is currently a Senior Director, Quality Engineering at Abzena. He is also a professional career coach committed to helping QA/RA professionals reach new heights. With 18 years of experience in empowering individuals across industries, he has fine-tuned the art of turning ambitions into accomplishments. His prior industry experience includes leadership roles in various pharmaceutical and medical device companies. He holds Bachelor’s and Master’s degrees in Biomedical Engineering, and a Certified Quality Engineer certification from ASQ.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only, and not as legal advice. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
This is a free preview of a paid episode. To hear more, visit naveenagarwalphd.substack.com
Summary
“For the foreseeable future, core FDA practices such as reviewing submissions and post-market oversight are going to continue. Smart Money says FDA in 6 months or a year from now, as to these core responsibilities, is operating similarly to FDA of mid-2024.”
Yes, there is a lot of uncertainty due to the current political environment in the USA, a…
-
Summary
“The root cause of most privacy missteps is a lack of education, training and understanding of what the privacy rule actually is.”
In this Let’s Talk Risk! conversation, Naveen Agarwal and Edye Edens delve into the critical intersection of privacy laws and clinical trials. They explore the significance of privacy regulations like HIPAA and GDPR, the implications of proposed changes to HIPAA, and the role of AI in enhancing compliance and efficiency in clinical research. Edye shares real-world examples of privacy violations and emphasizes the importance of ongoing education and risk management in navigating the complex landscape of healthcare privacy.
Listen to the full 30-minute podcast or jump to a section of interest listed below.
Chapters
00:00:05 Introduction
00:02:12 Why privacy of patient information is important
00:04:57 Brief history of HIPAA regulation in the United States
00:08:03 Surprising fact: Clinical trial data is not Protected Health Information (PHI)
00:10:10 Brief overview of GDPR in EU
00:12:20 When to seek legal input during risk analysis
00:15:48 Implications of HHS proposed HIPAA security rules for clinical trials
00:19:16 Best practices for privacy law compliance
00:23:30 Application of AI in clinical trials space
00:26:43 A real-world example of privacy non-compliance and liability
00:30:30 Closing comments and key takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: Clinical evaluation is a lifecycle process, not a one time activity.
LTR: Clinical evaluation under EU-MDR.
Key Takeaways
* Privacy laws are essential for protecting patient data.
* HIPAA allows patients to own their medical records.
* GDPR covers a broader scope of personal data than HIPAA.
* Understanding state laws is crucial for compliance.
* Risk identification is key to managing privacy concerns.
* AI can streamline regulatory processes in clinical trials.
* Education and training are vital to prevent privacy violations.
* Data breaches can lead to significant financial penalties.
* The sharing of clinical trial data must be managed carefully.
* Continued education helps organizations stay compliant.
Keywords
Privacy laws, clinical trials, HIPAA, GDPR, risk management, data protection, healthcare compliance, regulatory frameworks, AI in research, patient privacy
About Edye Edens
Edye Edens is a seasoned legal and compliance professional specializing in clinical research and life sciences. With a Juris Doctor from Indiana University Robert H. McKinney School of Law and a Master’s in International Research Ethics, she brings over 15 years of experience in research compliance, regulatory operations, and clinical trial oversight. Her expertise spans clinical trial compliance, privacy law (HIPAA, GDPR), and research operations, with a focus on oncology and scaling HRPP and site-specific operational needs. Edye’s background includes leadership roles at Indiana University and First Class Solutions, where she served as an adjunct professor, research administrator, and a research business development leader. She currently acts as Senior Attorney for the Kulkarni Law Firm where she advises AMCs, sponsors, CROs, and research sites.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only, and not as legal advice. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
This is a free preview of a paid episode. To hear more, visit naveenagarwalphd.substack.com
Summary
“You can definitely use (GenAI) as your core algorithm, but you need to enforce that it can only do one specific thing or at least a very clarified labeling.”
In this episode of the Let's Talk Risk Podcast, host Naveen Agarwal engages with Dr. Yujan Shrestha to explore the rapid advancements of AI and ML in the medical device industry. They dis…
-
This is a free preview of a paid episode. To hear more, visit naveenagarwalphd.substack.com
Summary
“Mismatched design controls can feel like driving through two feet of mud. You will eventually get to your destination, but it will take a lot more time and effort than it really should.”
In this episode of the Let's Talk Risk Podcast, Naveen Agarwal and Aaron Joseph discuss the ongoing challenges of design controls in medical devices, particula…
-
“If you frame something in a certain way, you can influence the perception of its cause and the solution.”
In this Let’s Talk Risk! conversation, Jack Garvey cautions against starting a CAPA with a problem statement. Instead, he recommends that the analysis begin with a factual review of both the technical and situational context leading up to the CAPA, and that a specific problem statement then be defined to guide the subsequent investigation of root cause(s) and identification of corrective/preventive actions.
Starting with a problem statement, as many of the current computer-based tools require, introduces bias that can influence the perception of causes and potential solutions. Conformity bias, subjectivity, and over-generalization are among the pitfalls he cautions against.
Listen to this Let’s Talk Risk! conversation with Jack Garvey, in which we cover a range of topics related to CAPA and risk analysis. Jump to a topic of interest using the timestamps below.
00:01:02 Introduction
00:02:35 A quick recap of top issues in FDA inspections
00:05:50 Why not to start your CAPA with a problem statement
00:10:00 How to conduct good investigations in CAPA
00:17:00 Analyzing risks in a relative context and diversity of risk flow
00:24:10 Why training is necessary for appropriate use of FMEAs for risk analysis
00:29:10 Closing comments and key takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: 5 blind spots to check in your CAPA process.
LTR: A risk-based approach to CAPA.
Compliance Architects: Writing for compliance.
About Jack Garvey
Jack Garvey is currently the managing partner at Compliance Architects, LLC, providing services in compliance and quality consulting, outsourcing, staff augmentation, and technology-related services to companies regulated by the US FDA. In a career spanning over 30 years, he has held a variety of roles ranging from engineering to regulatory attorney, along with leadership roles in Quality and Regulatory functions. Jack holds a Bachelor’s degree in Chemical Engineering and a Juris Doctor in Environmental Law. He has been admitted to practice law in NY and NJ since 1991.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
“Many of the AI/ML devices are going through De Novo and (in these) human factors considerations are specifically being called out as special controls.”
In this Let’s Talk Risk! conversation, Shannon Hoste highlights some of the new human factors concerns emerging for AI/ML devices in MedTech. She explains how we could consider the level of automation as we think about new hazards related to the user interface.
For example, in a clinical decision support system, having appropriate levels of trust with the physician is important. Shannon emphasizes the need for transparency and explainability of AI models, and highlights the potential for over-reliance as a reasonably foreseeable misuse by physicians. FDA is increasingly looking for human factors studies as part of De Novo submissions of novel AI/ML applications.
Listen to this Let’s Talk Risk! conversation with Shannon Hoste, which also includes an open discussion with a live audience. Jump to a section of interest using these timestamps.
00:01:35 Introduction
00:03:00 Understanding human factors in AI as levels of automation
00:07:30 Two useful guidance documents for human factors in AI
00:09:23 Current state of AI/ML devices in MedTech
00:11:07 FDA questions/concerns related to human factors in AI/ML
00:14:40 Handling reasonably foreseeable misuse for AI/ML devices
00:16:10 Open discussion and audience Q&A
00:31:20 Closing comments and key takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
FDA: Clinical Decision Support Software, Final Guidance issued Sep 2022.
FDA: Technical Considerations for Medical Devices with Physiologic Closed-Loop Control Technology, Final Guidance issued Sep 2023.
LTR: Treat human factors as a driver of customer satisfaction.
About Shannon Hoste
Shannon Hoste is currently the Chief Scientific Officer at Pathway for Patient Health, where she is actively creating and sharing knowledge to drive innovation in MedTech. In a career spanning over 25 years, Shannon has served in a variety of roles, including Human Factors Team Lead at the US FDA. Her extensive experience includes engineering and product development roles, and she continues to consult in human factors engineering through Kymanox. She holds a Bachelor’s degree in Mechanical Engineering and Graduate degrees in Management and Cognitive Systems Engineering, and is currently working on a Graduate Certificate in Epidemiology.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
This is a free preview of a paid episode. To hear more, visit naveenagarwalphd.substack.com
Summary
“Manufacturers that want to go to the European market, thinking about AI devices, it’s really important that you don’t look only at the MDR. You really have to look at the AI Act as well.”
In this episode of the Let's Talk Risk Podcast, Florian Tolkmitt discusses the rapidly evolving regulatory landscape for medical devices in Europe, particularl…
-
Summary
“We are all at some point, either patients or loved ones of patients. So, one way or another, we are all touched by the medical technology we create ourselves.”
In this episode of the Let's Talk Risk Podcast, Bijan Elahi helps us appreciate the most important aspect of our work as risk practitioners, engineers and quality/regulatory affairs professionals in the medical device industry: our work makes a direct impact on the health and wellness of our loved ones and of patients across the world.
We focus on 4 big themes in risk management that will continue to demand our attention in 2025 and beyond - AI/ML, cybersecurity, post-market surveillance and usability engineering. QA/RA professionals will need to stay updated as they help their organizations navigate through challenges in these areas.
We also talk about an upcoming Conference in April 2025 that will focus exclusively on safety risk management of medical devices. Register now - early bird discounted pricing available until 28 February 2025.
Listen to the full 30-minute podcast or jump to a section of interest listed below.
Chapters
00:00 Introduction and Keynote Insights
06:13 AI and Machine Learning in Medical Devices
09:08 Cybersecurity Challenges in Medical Devices
12:19 Post-Market Safety Surveillance
16:07 Usability Engineering and Human Factors
18:33 Upcoming Conference Details and Takeaways
Suggested links:
* International conference on medical device safety risk management.
* LTR: AI/ML in MedTech.
* LTR: Medical device cybersecurity now more critical than ever.
* LTR: Best practices for applying usability engineering to medical devices.
Key Takeaways
* We are all at some point either patients or loved ones.
* The impact of biomedical engineers on MedTech safety is huge.
* AI/ML devices evolve and their performance is not static.
* Cybersecurity is increasingly critical for medical devices.
* Post-market surveillance is essential for ongoing safety.
* Usability engineering can prevent misuse of medical devices.
* Collaboration across functions is vital for effective risk management.
* The FDA emphasizes explainability in AI/ML devices.
* Learning from post-market data can improve future designs.
* Conferences provide valuable opportunities for knowledge sharing.
Keywords
Risk management, AI, machine learning, cybersecurity, medical devices, usability engineering, post-market surveillance, safety conference, safety, healthcare
About Bijan Elahi
Bijan Elahi is the founder of MedTech Safety and a globally recognized expert on safety risk management of medical devices. Bijan is a technical fellow, and an affiliate professor at Drexel University, USA. Bijan is also a lecturer at Eindhoven University of Technology in the Netherlands, where he teaches risk management to doctoral students in engineering. He is a frequently invited speaker at international professional conferences, and is also a contributor to ISO 14971, the international standard on the application of risk management to medical devices. He has trained over 10,000 people worldwide and his book Safety Risk Management of Medical Devices, now in its second edition, is a bestseller.
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
Let's Talk Risk! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
“There is a clear case to put the money into the design of the product, and control that, than relying on the human factor.”
In this Let’s Talk Risk! conversation, Alex Saegert highlights the importance of focusing on risk reduction through safety by design, rather than relying on information for safety provided to users. He recommends that risk practitioners apply best practices from systems safety to build a clear understanding of the system-level hazards that lead to harm in a hazardous situation through a sequence of events. Understanding this sequence of events can help create a more robust design that minimizes the probability of occurrence of one or more of the elements involved.
He reminds us that there are tools available that can be applied without significantly increasing complexity of the design. Additionally, he recommends linking hazard analysis to design controls (DFMEA) and process controls (PFMEA) to reduce and control probabilities of occurrence of the events that contribute to hazardous situations, while maintaining traceability between hazards and control actions.
Listen to this Let’s Talk Risk! conversation with Alex Saegert, which also includes an open discussion with a live audience. Jump to a section of interest using these timestamps.
00:01:30 Introduction
00:02:30 Connecting risk analysis with design requirements for a balloon catheter
00:07:10 Estimating probability of occurrence for sequence of events
00:10:03 Connecting design with process FMEA for improved reliability
00:14:00 Applying functional safety to medical device design/development
00:19:10 Improving cross-functional collaboration
00:20:42 Open discussion and audience Q&A
00:31:20 Closing comments and key takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: Tips for integrating risk analysis with design controls
LTR: Design your medical device to be robust to use conditions
LTR: Overcoming challenges in linking risk management with design controls
About Alex Saegert
Alex Saegert is currently the President of Saegert Solutions Inc., an engineering consultancy focused on new technology commercialization, ensuring safety, reliability and usability. His professional experience includes developing ISO 14971-compliant risk management programs for manufacturers of advanced Class III medical devices. With over 25 years' experience in reliability engineering, product development and failure analysis across a variety of industries, Alex is a professional engineer, an ASQ-Certified Reliability Engineer (CRE), and an IEC 61508 Functional Safety Engineer.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
“Overconfidence in the performance of AI models in the development phase is a common problem.”
In this Let’s Talk Risk! conversation, David Giese highlights the growing need for a rigorous, disciplined development process with the goal of meeting regulatory requirements for marketing authorization. He shares how AI developers are often over-confident in the performance of their models, but disappointed when these models are subjected to rigorous scrutiny during the regulatory review.
Software as a Medical Device (SaMD), including AI/ML devices, is growing exponentially in MedTech. Rapidly evolving technology offers an opportunity to develop innovative products, but it also presents new risks. Security and cybersecurity concerns are on the rise, as are concerns about transparency and explainability of AI/ML models.
David points to new guidance documents from the FDA and emerging standards. As an example, software developers should consider using IEC 81001-5-1 - Health software and health IT system safety, effectiveness and security - along with IEC 62304 - Medical device software - Software life cycle processes.
Listen to this Let’s Talk Risk! conversation with David Giese, which also includes an open discussion with a live audience. Jump to a section of interest using these timestamps.
00:01:30 Introduction
00:02:30 Current state of AI/ML applications in MedTech
00:04:05 Different types of AI models - adaptive vs. generative AI
00:06:25 Challenges in meeting regulatory expectations for AI in MedTech
00:08:18 Use of synthetic data in AI model development
00:09:30 Best practices for AI development and regulatory submissions
00:12:02 Cybersecurity challenges for SaMD and AI/ML devices
00:14:03 New skills risk practitioners need to develop
00:15:09 Open discussion and audience Q&A
00:27:20 Closing comments and key takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: EU vs. FDA - Aligned but different in approach to safety of AI/ML devices
LTR: A new paradigm for building connected medical devices
FDA: Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions, Issued September 2023.
About J. David Giese
J. David Giese is currently President and Partner at Innolitics, a consulting firm specializing in the development of new medical software and regulatory submissions to the FDA. His team has helped develop and receive FDA clearance for over 60 medical devices, both SaMD and SiMD, over the last 12 years. His expertise includes application development, custom software development, engineering design, project management, user experience design, cybersecurity, software testing and technical writing. He holds Bachelor’s and Master’s degrees in Biomedical Engineering.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
Summary
“Medical devices are now increasingly connected in a hospital network. Or even if they are not, they are vulnerable to cyber attacks.”
In this episode of the Let's Talk Risk Podcast, Eric Henry highlights the growing concern about security and cybersecurity of medical devices. As technology evolves and medical devices increasingly operate in an i…
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
“Software more than any other type of device, has integrations with other systems and devices. You have to consider a complex network of dependencies and infrastructure when analyzing risks.”
In this Let’s Talk Risk! conversation, Megan Kane highlights the growing role of software, including artificial intelligence, in in-vitro diagnostic devices, such as next generation sequencing applications. We are now living at a time when technology is rapidly enabling precision medicine solutions for an individual patient.
In this high-tech environment, increasingly driven by software, new risks are also emerging. Cybersecurity, patient data privacy, and service availability are common concerns. But when it comes to IVDs, it becomes challenging to connect the dots to find a direct link to the patient. Inaccurate or delayed results are common risks related to IVDs, but their potential impact on a patient is difficult to determine with sufficient confidence. This often results in underestimating the severity of potential harm, as reflected in a recent FDA warning letter.
Another key development in this area is the new FDA regulation for Lab Developed Tests (LDTs), which is expected to add new requirements for pre-market reviews, risk management and the quality system.
Listen to this Let’s Talk Risk! conversation with Megan Kane, which also includes an open discussion with a live audience. Jump to a section of interest using these timestamps.
00:01:10 Introduction
00:01:50 How software is increasingly driving IVD applications
00:03:30 Emerging risks with software in IVDs
00:06:15 Evolving regulatory approach to IVDs affecting LDTs
00:09:13 Increasing focus on risk management for LDTs
00:12:50 Risk considerations when software is a component of an IVD
00:15:30 Open discussion and audience Q&A
00:28:01 Closing comments and key takeaways
If you enjoyed this podcast, consider subscribing to the Let’s Talk Risk! newsletter.
Suggested links:
LTR: 3 pillars for defining your IVD risk management strategy
LTR: Focus on intended use for risk management of IVDs
FDA: Laboratory developed tests
About Megan Kane
Megan Kane is currently Director of Regulatory Operations at Velsera, where she supports the mission of radically improving healthcare globally through multi-omics to fuel growth of precision medicine. She is primarily involved in developing advanced software, including AI applications for end-to-end Next Gen Sequencing analysis. Previously she held various roles in Quality and Regulatory functions at both large and small medical device and in-vitro diagnostic companies. She holds a Bachelor’s degree in Molecular and Cell Biology, and Internal Auditor certifications from BSI in ISO 13485:2016 and ISO 9001:2015.
Let’s Talk Risk! with Dr. Naveen Agarwal is a weekly live audio event on LinkedIn, where we talk about risk management related topics in a casual, informal way. Join us at 11:00 am EST every Friday on LinkedIn.
Disclaimer
Information and insights presented in this podcast are for educational purposes only. Views expressed by all speakers are their own and do not reflect those of their respective organizations.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe
-
Risk practitioners in MedTech now have a unique opportunity to help accelerate development and post-market improvement of AI/ML enabled medical devices while also reducing the regulatory burden.
PCCP, or Pre-Determined Change Control Plan, is a novel regulatory approach that allows device manufacturers to include planned changes in their initial submission and avoid additional submissions in the post-market phase when these changes are implemented.
This is even more important for AI/ML enabled medical devices, which are expected to learn from new data in real-world applications to improve both user experience and performance without compromising patient safety.
As an example, Apple recently utilized a tailored PCCP approach in their regulatory submissions to receive advance clearance for planned post-market changes. In the case of the Hearing Aid Feature, they focused on features to improve the user experience. For the Sleep Apnea Notification Feature (SANF), on the other hand, they focused on improving the performance of the core AI model based on real-world data. In both cases, they proposed a rigorous verification and validation plan and labeling updates.
Risk managers can play a key role in planning these modifications using a risk-based approach. They can help develop testing and implementation protocols. Finally, they can help articulate the overall benefits and risks to convince the FDA that these planned changes would improve safety and effectiveness.
Risk practitioners therefore have an opportunity to become more strategic and influential by playing a proactive role. They can collaborate with regulatory professionals, product developers and clinicians right from the start of product development.
Listen to the 22-minute audio summary above that covers the following topics:
* Key differences between FDA’s draft and final PCCP guidance
* Lessons learned from Apple’s tailored approach to PCCP
* Understanding benefits and risks of PCCPs
* Opportunities for risk practitioners
* Challenges in implementing PCCPs
Disclaimer
This article was prepared with the help of Google NotebookLM, an artificial intelligence research assistant, using the following sources:
* FDA Draft and Final Guidance: PCCP for AI/ML enabled devices.
* Apple De Novo: Hearing Aid Feature, Decision Summary.
* Apple 510k: Sleep Apnea Notification Feature (SANF).
* Caption Health De Novo: Automated Ejection Fraction Software.
* Notes created using Google NotebookLM in response to user prompts.
All output(s), including the audio summary, were reviewed by a human for accuracy and relevance. This article is intended for educational purposes only and should not be considered as regulatory advice.
If you liked this post, consider becoming a free or paid subscriber to Let’s Talk Risk!.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit naveenagarwalphd.substack.com/subscribe