Questions Received From One of My Recent Webinars on Software V&V:
My responses:
Q1. Do we need to define and perform periodic validation, or is it mainly an analysis to determine whether changes have occurred that require re-validation?
ANS: As I believe I mentioned, I don't believe in calendar-scheduled revalidation, nor is it required by regulators. Instead, there should be a periodic re-evaluation for any changes that would require a re-validation (e.g., a physical move, major repairs), with the decision documented even if it is a decision not to revalidate. You can also set up systems to maintain equipment in a validated state (recorded in a log, lot records, or similar). All of this is allowed by the FDA.
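The periodic re-evaluation described above can be sketched as a simple review of a change log against pre-defined revalidation triggers. This is a minimal illustrative sketch only; the trigger list and change-record fields are assumptions, not a prescribed procedure.

```python
# Hypothetical sketch of a periodic change re-evaluation; the triggers and
# record fields are assumptions chosen for illustration.
REVALIDATION_TRIGGERS = {"physical move", "major repair", "software upgrade"}

def review_change_log(change_records):
    """Return (needs_revalidation, rationale) for a periodic review.

    Each record is a dict like {"type": ..., "description": ...}.
    A documented rationale is produced either way, so a decision
    NOT to revalidate is also recorded, as the answer above requires.
    """
    hits = [r for r in change_records if r["type"] in REVALIDATION_TRIGGERS]
    if hits:
        rationale = "Revalidate: " + "; ".join(r["description"] for r in hits)
        return True, rationale
    return False, "No changes meeting revalidation criteria; decision documented."
```

The point of the sketch is that the output in both branches is a written rationale, matching the "documented if the decision is to not revalidate" requirement.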
Q2. If we are using COTS software with low risk, can we skip the validation process?
ANS: No. The FDA has stated that software validation is a risk-based activity, meaning documented patient/user risk per ISO 14971. Some validation is necessary for all COTS software, at least to verify that it meets your needs and regulatory requirements: that it does what it should do and doesn't do anything it shouldn't, with no unintended consequences. And Part 11 is mandatory if the COTS is used for CGMP-compliance records or signatures. Any V&V done by the vendor, if you have access to it, can be used to reduce the testing you need to perform yourself; supplement it in your own test documentation.
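One way to picture this risk-based approach is a mapping from patient/user risk level to a minimum set of validation activities. This is an illustrative sketch only; the tiers and activity names are my assumptions, not an FDA or ISO 14971 classification. The key property is that no tier is empty: some validation always applies.

```python
# Illustrative, assumed risk tiers and activities; note no tier allows
# skipping validation entirely, per the answer above.
COTS_VALIDATION_RIGOR = {
    "low":    ["document intended use", "verify requirements are met",
               "spot-check for unintended behavior"],
    "medium": ["all 'low' activities", "scripted functional tests",
               "review vendor V&V evidence if available"],
    "high":   ["all 'medium' activities", "requirements-traced test protocol",
               "Part 11 assessment for records/signatures"],
}

def validation_activities(risk_level: str) -> list:
    # Even the lowest tier returns a non-empty list of activities.
    return COTS_VALIDATION_RIGOR[risk_level.lower()]
```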
Q3. When using SaaS for testing activities (e.g., test tools), where we do not have control over vendor-driven version changes, how can we maintain a validated state? This is especially relevant when vendors provide advanced features but are not specifically focused on the medical device industry.
ANS: Such vendor change-control reporting is a problem in all industries, and especially with cloud-based software. You're going to have to: 1) try to select companies willing to commit to such change notifications; 2) put such requirements in contracts, POs, etc.; 3) set up systems to catch changes when they are implemented without notification (common with software changes); and 4) shift to CGMP/ISO-compliant vendors, or those who want to be in order to be more competitive. Failure to do so will jeopardize thousands of dollars in past validations and put you in conflict with the CGMPs, et al. This is an ongoing problem, and not just in our industry.
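Item 3 above, catching unannounced changes, can be sketched as periodically fingerprinting whatever version metadata the SaaS tool exposes and comparing it to the last recorded value. This is a minimal sketch under assumptions: the shape of the version metadata is hypothetical, and you would adapt it to what your tool actually reports.

```python
import hashlib
import json

def fingerprint(version_info: dict) -> str:
    """Stable hash of whatever version metadata the SaaS tool exposes.

    The dict contents (version string, build date, etc.) are assumptions;
    use whatever observable metadata your vendor's tool provides.
    """
    canonical = json.dumps(version_info, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def check_for_change(current: dict, last_recorded_fp: str):
    """Compare the current fingerprint to the last one on record."""
    fp = fingerprint(current)
    if fp != last_recorded_fp:
        return fp, "CHANGE DETECTED: open a change-control record and assess revalidation"
    return fp, "No change since last check"
```

Run on a schedule, this at least converts silent vendor updates into documented change-control events, even when notification (item 1) fails.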
Q4. How detailed should a test report be? We conduct multiple V&V rounds for both feature and regression testing, resulting in very detailed test steps. If we generate a final report, it can run into hundreds of pages with step-by-step pass/fail analysis. Is it acceptable to provide a summarized report instead of including detailed step-by-step results to keep the document concise?
ANS: The amount of detail and length should be patient/user risk-based. Don't make the mistake of a "one-size-fits-all" software V&V test report SOP, which I see all too often at companies. I don't see how a summarized report saves anything if it's just a summary of more extensive tests. Obviously, hundreds of pages should not be necessary for a PLC conveyor-system validation or similar, but they would be for a cancer drug pump. Reports can be large, multi-page documents for complex, high-risk (to people) systems/equipment, or a few pages in a lab book. They should all have enough information to allow replication: project/test number; title; purpose/scope; approvals; pre-determined acceptance criteria; materials used in the test (P/Ns, descriptions, lot numbers, quantities, etc.); equipment used (equipment/asset numbers, S/Ns, descriptions, etc.); test set-up description and diagrams; the requirements/needs to be V&V'd; test cases addressing each requirement; results (filled-in test cases/data); conclusions (results compared to the pre-determined acceptance criteria); post-approvals; and an appendix. Expand or contract this based on risk.
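The replication-enabling report contents listed above can be sketched as a simple record type. This is a hypothetical template, not a prescribed form; the field names mirror the list, the values are placeholders, and the structure would be expanded or contracted based on risk. The one helper shows the traceability point: every requirement should have a test case addressing it.

```python
from dataclasses import dataclass, field

@dataclass
class TestReport:
    # Field names follow the report contents listed above; all hypothetical.
    project_test_no: str
    title: str
    purpose_scope: str
    approvals: list = field(default_factory=list)
    acceptance_criteria: list = field(default_factory=list)  # pre-determined
    materials: list = field(default_factory=list)    # P/N, description, lot, qty
    equipment: list = field(default_factory=list)    # asset no., S/N, description
    setup_description: str = ""                      # incl. diagrams by reference
    requirements: list = field(default_factory=list) # items to be V&V'd
    test_cases: list = field(default_factory=list)   # each addresses a requirement
    results: list = field(default_factory=list)      # filled-in test cases / data
    conclusions: str = ""                            # results vs. acceptance criteria
    post_approvals: list = field(default_factory=list)
    appendix: list = field(default_factory=list)

    def untested_requirements(self):
        """Requirements with no test case addressing them (simple name match)."""
        covered = {tc.get("requirement") for tc in self.test_cases}
        return [r for r in self.requirements if r not in covered]
```

Whether this skeleton yields two pages or two hundred is then a function of how many requirements and test cases the risk level demands, not of the report format itself.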
Hope that answers the questions.
-- jel@jelincoln.com 04/28/2026