Tuesday, October 22, 2019



Verification and Validation Webinar - Recent Q&A, John E. Lincoln, 10/22/2019

Some production-oriented V&V questions:


1.     When it comes to authoring and execution of test protocols, what is my role / responsibility as an automation engineer who has designed and implemented a change to the validated system?  What, specifically, is validation?

Ans:  You, or whoever in your company is responsible for validations, must reverify / revalidate whatever has been impacted by the change.  Referencing the original validation, the new V&V would normally not be as extensive as the original, unless the whole equipment / system was changed / replaced.  The amount of effort / depth is based on risk to the ultimate end-user / patient.

I define validation as a series of verifications (tests, checks, inspections) to prove that the item being validated meets its intended purpose(s), as defined by requirements / standards / guidance documents, et al. There are formal CGMP and ISO definitions that say much the same. Those requirements, et al., are translated into test cases, which are then run to prove that each requirement exists and has been met, without any negative results.
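
As a rough illustration of that definition, here is a minimal Python sketch of requirements being traced to executed test cases; the class names, IDs, and fields are hypothetical, not a prescribed format:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Requirement:
        """One requirement drawn from specs / standards / guidance documents."""
        req_id: str
        text: str

    @dataclass
    class TestCase:
        """A test case that challenges one requirement; result recorded at execution."""
        case_id: str
        requirement: Requirement
        procedure: str
        passed: Optional[bool] = None  # None until the case is actually run

    def unmet(cases: list[TestCase]) -> list[TestCase]:
        """Any case not yet run, or run with a negative result, blocks validation."""
        return [c for c in cases if c.passed is not True]

    # Hypothetical example: one requirement traced to one executed, passing test case.
    req = Requirement("REQ-001", "System shall reject out-of-range batch weights.")
    case = TestCase("TC-001", req, "Enter a weight above the upper limit; confirm rejection.", passed=True)
    assert not unmet([case])  # no open or failed cases for this requirement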

2.     I noticed in your sample IQ, OQ, and PQs, the evidence collected to prove operational acceptance appears to be strictly signatures.  What is your opinion on screenshots?  If required by validation, whose responsibility (validation, automation, production, etc.) should it be to take screenshots (collect evidence)?

Ans:  My definitions:  The IQ is a checklist, where each installed requirement / item is checked by a qualified individual (per HR records), who signs off on the presence and functionality of that item on the checklist.  The OQ is composed of test cases to verify the presence and functionality of each requirement.  The PQs (of which there are several, depending upon input variables) re-challenge those OQ items / requirements that are subject to further variability, using sample sizes larger than n=1 (the sample size for OQ test cases); so each PQ has various test cases challenging requirements subject to variability, with each PQ test case using samples of n=30, or 125, etc., per a valid sampling plan based on a solid statistical rationale (quote / reference a textbook, a standard, e.g., ANSI/ASQ Z1.4, or use an industrial statistician / consultant).
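
To make the OQ (n=1) vs. PQ (n>1) distinction concrete, here is a minimal sketch of a PQ test case record; the n=30 sample size, the c=0 acceptance number, and all IDs are assumptions for illustration only; the real values come from your documented sampling plan and statistical rationale:

    from dataclasses import dataclass

    @dataclass
    class PQTestCase:
        """One PQ test case: n samples challenging a requirement subject to variability."""
        case_id: str
        requirement_id: str
        sample_size: int        # e.g., n=30 or n=125, per the documented sampling plan
        failures_observed: int
        accept_number: int = 0  # c=0 here is only an example; your plan governs

        def accepted(self) -> bool:
            # The case passes only if failures do not exceed the plan's acceptance number.
            return self.failures_observed <= self.accept_number

    pq = PQTestCase("PQ-03-TC-07", "REQ-014", sample_size=30, failures_observed=0)
    print(pq.accepted())  # True -> tester records the result, verifier countersigns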

Screenshots are an excellent tool, and I and others have made much use of them.  However, they must be described in the narrative, each may need a unique ID number, and each should be signed / initialed, dated, and added to the Test Report.  Responsibility for the screenshots can vary and should be defined by SOP (or the test protocol), but they are usually taken by those qualified to obtain them and/or who have a vested interest in obtaining them (usually the one handling the V&V).
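
One possible way to capture the points above (unique ID, narrative description, initials, date, tie-in to the Test Report) is a simple evidence record; this Python sketch uses hypothetical field names and values:

    from dataclasses import dataclass
    from datetime import date

    @dataclass(frozen=True)
    class ScreenshotEvidence:
        """Metadata tying one screenshot into the Test Report."""
        evidence_id: str    # unique ID referenced in the test-case narrative
        test_case_id: str
        file_name: str
        description: str    # what the screen shows and which step produced it
        taken_by: str       # initials of the qualified person who captured it
        date_taken: date

    # Hypothetical example entry.
    shot = ScreenshotEvidence(
        evidence_id="SCR-0042",
        test_case_id="OQ-TC-12",
        file_name="oq_tc12_alarm_screen.png",
        description="Alarm banner displayed after out-of-range input (step 6).",
        taken_by="JEL",
        date_taken=date(2019, 10, 22),
    )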
 
3.   In your OQ example, you did specify a separation between tester and verifier.  Does this imply that validation should not execute test protocols?  Or is this simply a matter of capability?  I.e., if validation is capable, is there no issue with a computer system validation engineer functioning as a tester?

Ans:  The tester can be the operator, an engineer / tech, or anyone else trained in / familiar with the operation being run.  The verifier is often a QC person, not necessarily overly familiar with the specific operation being run (the test case), but who verifies that the instructions in the test case were followed and that the results actually achieved are what was recorded.
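
A minimal sketch of that separation, assuming a simple execution record (all names and fields hypothetical):

    from dataclasses import dataclass

    @dataclass
    class ExecutionRecord:
        """Execution of one test case: the tester runs it; an independent verifier checks it."""
        test_case_id: str
        tester: str    # operator / engineer / tech trained on the operation
        verifier: str  # often QC; confirms instructions were followed and results recorded

        def independent(self) -> bool:
            return self.tester != self.verifier

    rec = ExecutionRecord("OQ-TC-05", tester="R. Patel", verifier="M. Chen (QC)")
    assert rec.independent()  # tester and verifier must be different people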

4.   In general, it seemed like the focus was on devices, and I'm looking for clarification on where control systems and devices might differ in terms of risk.  For example, on SW V&V Elements – 1, when speaking of LOC (Min, Mod, Maj), you mentioned that a Class II device must get elevated to Mod.  How does this relate to software in a data-rich control system software environment?  Our systems are primarily Class 4 and 5 software systems (PLC, Operator Interface, Batch Reporting), and for the majority of changes there is little risk to patient or product.  However, because we are modifying Class 4 and 5 systems, it is often hard for us to convince our validation and quality partners that risk is negligible, and they feel their one-size-fits-all approach is therefore justified.

Ans:  The principle of risk (ultimately to patient / end-user) still applies, though with some operations it's difficult to trace through; that's why reference to an appropriate ISO 14971 or ICH Q9 Risk Management File is useful.  To somewhat eliminate second-guessing by stakeholders, including government regulators and internal regulatory staff, anticipate such push-back and include an analysis of risk, tied to those files, in your V&V Test Report documentation.  I also try to include such references, tied to specific Risk File line items, with the appropriate Test Cases.
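
As one way to picture that forward / backward trace, here is a small Python sketch linking hypothetical Risk Management File line items to the test cases that challenge them (all IDs invented for illustration):

    # Hypothetical trace: Risk File line items (ISO 14971 / ICH Q9) -> test cases.
    risk_to_tests = {
        "RMF-021": ["OQ-TC-05", "PQ-01-TC-03"],  # hazard: wrong batch parameters
        "RMF-034": ["OQ-TC-11"],                 # hazard: unlogged operator change
    }

    def tests_without_risk_link(all_cases: set[str]) -> set[str]:
        """Backward trace: flag test cases not tied to any risk line item."""
        linked = {tc for tcs in risk_to_tests.values() for tc in tcs}
        return all_cases - linked

    print(tests_without_risk_link({"OQ-TC-05", "OQ-TC-11", "OQ-TC-12", "PQ-01-TC-03"}))
    # {'OQ-TC-12'} -> either link it to a risk item or document why no link is needed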

"One size fits all" is safest from a bureaucratic standpoint, but is extremely wasteful of resources, and unnecessary.  Your SOPs on V&V should allow some leeway / “trust” in those trained (engineers) to make supported / documented decisions (e.g., I wouldn’t do the degree of effort on a PLC V&V as on a complex software V&V), yet I have done work with companies that required the same level of documentation / approvals for both (painful).

5.   What is your opinion on validation's role when it comes to installing software / firmware patches?  Would it ever be appropriate for IT / Automation to determine the level of risk and be allowed to decide when a log entry is adequate vs. when full-blown change management is required?

Ans:  First, your SOPs must clearly state the methods to be chosen and how they are documented.  Second, from a QA standpoint, any patch should, in my opinion, be documented by a rev-level change, unless there's some easily accessed identifier in the code.  In other words: 1) there has to be a clear distinction for each change made to the software; 2) the changes must themselves be V&V'd and approved, including by QC/QA; and 3) everything must be documented.

If a log entry is defined by the company as valid, it should probably be supported in documentation somewhere by at least two signatures approving the change.

From a regulatory standpoint, you can't have one version / release in the field (or in production / manufacturing) where few or none of the installed copies of that version / release are identical to one another (sadly, another problem I've seen).  There must be a clear documented history of each.  How you as a company choose to work that out (and document it for forward / backward traceability) is up to you, subject to the above considerations.
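
Pulling those points together, a patch record along these lines would capture the rev-level change, the V&V of the change itself, and the two-signature approval; this is a sketch with hypothetical names and IDs, not a required format:

    from dataclasses import dataclass, field

    @dataclass
    class PatchRecord:
        """One documented software / firmware patch."""
        change_id: str
        old_version: str
        new_version: str    # every patch bumps the rev level (point 1 above)
        vv_report_id: str   # re-verification evidence for the change (point 2)
        approvals: list[str] = field(default_factory=list)  # at least two, incl. QC/QA

        def compliant(self) -> bool:
            return (self.new_version != self.old_version
                    and bool(self.vv_report_id)
                    and len(self.approvals) >= 2)

    # Hypothetical example.
    patch = PatchRecord("CHG-2019-118", "3.4.1", "3.4.2",
                        vv_report_id="VV-RPT-067",
                        approvals=["J. Smith (Automation)", "A. Jones (QA)"])
    assert patch.compliant()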

Remember:  I mentioned that this presentation is only one valid approach to V&V, but one that has been field-tested and reviewed by US FDA and EU ISO / Notified Body inspectors / auditors for over 37 years, with no objections.  A company can have another viable and compliant method.

Further considerations:

V&V of production systems generally can have some less depth than device software / firmware - primarily because there are usually redundant checks / verifications "wired" into the production process downstream of a validated item, which are documented in the batch record.  This can also be referenced in the Test Report / Protocol as further justification, similar to patient risk for degree of effort in a V&V.

-- John E. Lincoln, jel@jelincoln.com
