Tuesday, June 28, 2016

CONTRACT  MANUFACTURING  ORGANIZATION  (CMO)  V&V  ISSUES

Some recent questions I received pertaining to CMOs and equipment / process verification and validation (V&V), with my answers:

Ques 1.    In a CMO context, where very different processes are run, should the PQ of the equipment be performed specifically for each manufacturing process?

Ans:  You could validate the equipment for its general and/or expected uses, and then have your V&V SOP(s) define a method to validate or verify each particular client run, adding the unique requirements of that client's lot(s).  Or address such additional V&V requirements by means of a first article inspection (and/or other tests / QC) covering those requirements.
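
As a minimal illustration of this "general validation plus per-client delta" idea, here is a short Python sketch; the parameter names, ranges, and client values are illustrative assumptions on my part, not from any actual validation.

    # Equipment validated for a general operating envelope; each client run is
    # then verified against that envelope, with the client's unique requirements
    # picked up by first article inspection / additional QC.
    # All names and ranges below are hypothetical.

    validated_envelope = {
        "temp_c": (20.0, 80.0),       # validated temperature range, deg C
        "speed_rpm": (100.0, 500.0),  # validated line speed range
    }

    def run_within_envelope(run_params: dict, envelope: dict) -> bool:
        """True if every run parameter falls inside its validated range."""
        return all(lo <= run_params[name] <= hi
                   for name, (lo, hi) in envelope.items())

    client_run = {"temp_c": 65.0, "speed_rpm": 450.0}
    if run_within_envelope(client_run, validated_envelope):
        print("Within validated envelope -- verify client-specific items via first article / QC")
    else:
        print("Outside validated envelope -- additional equipment V&V required")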

Ques 2.    Is it possible to perform qualification of the equipment during the performance qualification of the process? In that case, could the critical parameters defined for the process be used for the PQ of the equipment, or do they need to be specific to each piece of equipment?

Ans:  The approach I favor (explained in my webinars, but by no means the only way) is to qualify / validate the equipment by means of the IQ, OQ, and PQs.  The critical parameters are addressed under the OQ, and can include DOE.  The PQs address the robustness, repeatability, and reproducibility of the equipment given all allowable worst-case inputs (shifts, RM lots, etc.).  Each piece of equipment needs to be so addressed.  I generally do a process V&V for such things as cleaning, etc.  However, I have done process V&V for the entire production process, in which case I include separate verifications under the overall process validation that address each piece of equipment, as explained briefly in the webinar.  Note the need to define terms per your company's "working" definitions, also emphasized throughout my applicable webinars.
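
To make the "all allowable worst-case inputs" idea concrete, here is a minimal Python sketch that enumerates PQ challenge runs over the combinations of those inputs; the specific shifts, lots, and operators are illustrative assumptions only.

    # Enumerate PQ challenge runs across allowable worst-case inputs so every
    # combination (shift x RM lot x operator) is exercised at least once.
    # The lists below are hypothetical placeholders.
    from itertools import product

    shifts = ["day", "swing", "night"]
    rm_lots = ["lot_A", "lot_B", "lot_C"]   # e.g., high / nominal / low potency
    operators = ["op_1", "op_2"]

    pq_runs = [{"shift": s, "rm_lot": lot, "operator": op}
               for s, lot, op in product(shifts, rm_lots, operators)]

    for i, run in enumerate(pq_runs, start=1):
        print(f"PQ run {i:02d}: {run}")   # 3 x 3 x 2 = 18 challenge runs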

Ques 3.    What is the criterion for defining a piece of equipment as critical in a manufacturing process?

Ans:  The key criterion for such a definition is the equipment's contribution to the "critical quality attributes" of the element of the final product it acts upon, especially as it relates to the end user, the patient / clinician.  This is an important point that I try to emphasize in my many webinars on V&V, and I recommend tying such decisions in the V&V test cases / scripts to a Product Risk Management File / Report per ISO 14971 or ICH Q9.  It's possible to develop a generalized risk document for a CMO, and then add some unique requirements to it in the batch record, tied to an additional analysis of the client's product.
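
One way to document such a criticality call is a simple risk-priority calculation in the spirit of ISO 14971 / ICH Q9.  The Python sketch below only illustrates the logic; the scoring scales, threshold, and equipment entries are assumptions, not a prescribed method.

    # Classify equipment as "critical" when its impact on a critical quality
    # attribute (CQA), scored as severity x occurrence, meets a threshold.
    # The 1-5 scales, the threshold, and the entries are hypothetical.

    def is_critical(severity: int, occurrence: int, threshold: int = 8) -> bool:
        """Risk priority = severity x occurrence of CQA impact."""
        return severity * occurrence >= threshold

    equipment = {
        "aseptic filler": (5, 2),  # direct product contact, sterility CQA
        "case labeler":   (2, 2),  # little bearing on safety / efficacy
    }

    for name, (sev, occ) in equipment.items():
        tag = "CRITICAL" if is_critical(sev, occ) else "non-critical"
        print(f"{name}: severity={sev}, occurrence={occ} -> {tag}")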


Obviously, Questions 1 and 3 require obtaining some requirements from the client as to the quality attributes and safety / efficacy of the product's field use, perhaps as part of the contract, a quality agreement, a questionnaire, or a similar document.  Rather than being a burden, I think such a requirement might add to a company's credibility in the eyes of its customers.

-- John E. Lincoln, jelincoln.com

Wednesday, June 15, 2016

DESIGN  REVIEWS  -- HOW  MANY?

I got an e-mail today asking that very question; it linked the reader to a consulting company's website.  Basically, the site noted that one review is mandated by the regs -- 21 CFR 820.30, the medical device CGMP on design control -- but recommended two: one after the Plan and one after V&V.  It also mentioned that additional reviews may be advisable.

However, the U.S. FDA's website, on a webpage dealing with design control guidance:

http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm070627.htm

shows five such reviews (see Figure 1).

So what is the actual requirement?  In short, "it depends".

21 CFR 820.30 simply requires that "formal documented reviews of the design results are planned and conducted at appropriate stages of the device's design development."  To me, this phrase is the key.

Moving beyond fulfilling design control requirements merely to avoid regulatory problems, to the positive use of such CGMP requirements because they improve a company's products, I recommend in my webinars and workshops that design reviews be used as product development "gates".  Such "gating" is described in several of the 'fast cycle' development books that came out in the 1990s.  I use design reviews (and recommend their use) after each significant "milestone" on a Product Development Plan (my preference is a Gantt chart): to review the completion of that milestone's tasks, and to authorize the resources / budget to move on to the next milestone when the project is linear, or at critical junctures when it is reiterative.  Such formally scheduled design reviews are themselves a final task under each key milestone (and/or can also serve as the beginning task for the next milestone, if you're so inclined).  Then design reviews make business sense, and are not just an exercise in compliance for the sake of compliance.
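
As a minimal sketch of this gating idea in Python (the milestone names and fields are illustrative, not a prescribed plan structure):

    # Each milestone ends with a formal design review; resources for the next
    # milestone are released only when that gate (review) has passed.
    from dataclasses import dataclass, field

    @dataclass
    class Milestone:
        name: str
        tasks: list = field(default_factory=list)
        review_passed: bool = False  # the gating design review

    plan = [
        Milestone("Design planning", ["draft plan", "approve plan"], review_passed=True),
        Milestone("Design inputs", ["capture requirements"]),
        Milestone("V&V", ["verification", "validation"]),
    ]

    def next_gate(plan):
        """Return the first milestone whose design review has not yet passed."""
        for m in plan:
            if not m.review_passed:
                print(f"Hold: design review pending for '{m.name}'")
                return m
        print("All gates passed -- proceed to design transfer")
        return None

    next_gate(plan)   # -> Hold: design review pending for 'Design inputs'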

The CGMPs further require that the "participants at each design review include representatives of all functions concerned with the design stage being reviewed", including at least one member of the review team "who does not have direct responsibility for the design stage being reviewed", "as well as any specialists needed".  Of course, each review -- results, identification of the design, participants, and date -- must be documented in the DHF (Design History File).

-- John E. Lincoln,  jelincoln.com

Thursday, June 2, 2016

AGILE  DEVELOPMENT  AND  AUTOMATED  SOFTWARE  TESTING

Here are my answers to two questions raised at one of my recent webinars on software / firmware V&V and documentation:

Ques 1:  Do you have experience working with Agile methodologies such as Scrum?  In your presentation, you mention that the FDA suggests following a waterfall development cycle.  Do you know the FDA's point of view on iterative / incremental methodologies?
Ans 1:  My experience with Agile is limited, although, as mentioned, its principles had been used in many companies before someone came up with the name Agile (as is true of many other "methodologies", e.g., Six Sigma).
I showed one slide to illustrate V&V (Verification and Validation); it showed a "waterfall" product development cycle.  It was used by the U.S. FDA in the mid-1990s and was focused on design control (21 CFR 820.30; see http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm070627.htm -- Figure 1 under "III. APPLICATION OF DESIGN CONTROLS").  It was used in conjunction with a process that is typically iterative / incremental -- device R&D -- to illustrate V&V for design control, and was not meant to show any FDA preference in product or software development, just how a series of device verifications leads to a device validation.  The FDA has no preference as to which development methodologies a company selects and uses, that I have seen to date -- it leaves such decisions to the manufacturer, who must justify and prove / document those choices.  However, the FDA wants documentation to prove that defined processes were followed and that there was compliance with the regulations.  Hence my caution regarding Agile, whose manifesto on the Internet states that key Agile goals are an implied minimization of defined processes and a reduction in documentation, to wit:
"Individuals and interactions over processes and tools

"Working software over comprehensive documentation..."  

-- http://www.agilemanifesto.org 

So just proceed with that caution in mind.

Ques 2:  You haven't discussed automated tests (unit tests, integration tests, functional tests, performance tests).  Wouldn't they be the perfect tool to demonstrate the reproducibility of a system?  What is the FDA's point of view on automated tests?
Ans 2:  There's nothing wrong with automated testing.  It is, as mentioned, an excellent tool, and much software / firmware development is done using it.  However, the automated test programs and hardware must themselves be rigorously validated in the same way the webinar discussed (including 21 CFR Part 11; and see the "SOFTWARE / FIRMWARE  V&V  "MODEL"" post below) before their data can be used in subsequent V&V activities.  In that sense, a discussion of automated testing is redundant to the subject of software testing already discussed (see previous posts on the subject).  Be aware that the FDA would look very carefully at such automated test equipment and programs, and their V&V, since they are then used for repeated, automated V&V testing of other software.
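
For readers who haven't seen one, here is a minimal automated test sketch using Python's built-in unittest module; the function under test and its values are hypothetical.  The point above still applies: before results from a harness like this can support V&V, the harness and its environment must themselves be validated (including 21 CFR Part 11 considerations).

    # A tiny automated unit test: repeatable, self-documenting execution of a
    # hypothetical device calculation.  The test harness itself requires V&V
    # before its results can be relied upon for regulated testing.
    import unittest

    def infusion_rate(volume_ml: float, minutes: float) -> float:
        """Hypothetical function under test: delivery rate in mL/min."""
        if minutes <= 0:
            raise ValueError("minutes must be positive")
        return volume_ml / minutes

    class TestInfusionRate(unittest.TestCase):
        def test_nominal_rate(self):
            self.assertAlmostEqual(infusion_rate(100.0, 50.0), 2.0)

        def test_rejects_zero_time(self):
            with self.assertRaises(ValueError):
                infusion_rate(100.0, 0.0)

    if __name__ == "__main__":
        unittest.main()   # runs all tests and reports pass / fail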

-- John E. Lincoln, jelincoln.com