Wednesday, November 23, 2016

COTS (Commercial Off The Shelf) Software Question / Answer

Ques: We are in the process of validating capital equipment; it is a Class II medical device.  It includes both software and hardware.  The software component is not separable and is not accessible / modifiable, but it is the major interface through which the user can configure / run the device and monitor some parameters.  The software also will raise alarms when there is an undesirable situation / risk to the patient.  The device was developed overseas and has not been FDA approved.  We were not involved in the validation practices during the design and development of the software, nor do we have access to the vendor's code or the majority of their documentation.  During the webinar you talked about the IQ/OQ/PQ approach in validation of the software. Under the circumstances, is it acceptable to use the IQ/OQ/PQ approach as the only validation practice in a 510(k) FDA submission, without any other type of software validation activity / documents?

Ans:  Yes.  The webinar addressed primarily this final stage, integrated / systems level software validation.  You would do a "black box" V&V (verifying software operation by means of the hardware's operation) of hardware and software using IQ, OQ, PQs as described*, but include any information you know about the development methods and in-process test methods used in the development of the custom software.  
Alarms are a particular issue with the FDA.
You would have to use the guidance document I cited, as it is required for a submission to the FDA, and compile the 11 documents for your submission (while all are filled out and part of your DHF, not all will be submitted, as determined by Level of Concern).  
Where you cannot obtain the necessary information from the vendor, that would be stated under the applicable document, e.g., Design Spec and Development.  Whatever you can supply should be included (from any tech manuals, Wikipedia, web site info, etc., provided it is verifiable).
This is a typical approach with COTS software where you don't have access to the code.

*  IQ -- Software requirements met by hardware; installed properly;
OQ -- Software initializes and shuts down properly. Required features exist and function. Any settings are optimized. 21 CFR 11 issues addressed, exist, operate, if applicable
PQs -- Repeatability and reproducibility of applicable requirements, e.g., screen outputs, alarms, etc, are challenged by worst case runs, multiple samples per run. 
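As a rough illustration of the PQ stage above, a worst-case run summary might be tallied as below. All run labels, sample values, and the acceptance limit are hypothetical, not from any actual protocol:

```python
# Hypothetical PQ summary: each worst-case run produces multiple samples,
# and every sample must meet the acceptance criterion (here, an alarm
# response time). Names and limits are illustrative only.

# run label -> alarm response times (seconds) for each sample in that run
pq_runs = {
    "worst_case_low":  [1.8, 1.9, 1.7, 2.0],
    "nominal":         [1.5, 1.6, 1.4, 1.5],
    "worst_case_high": [2.1, 2.0, 2.2, 1.9],
}

ACCEPTANCE_LIMIT_S = 2.5  # illustrative spec: alarm must sound within 2.5 s

def evaluate_pq(runs, limit):
    """Return per-run pass/fail; a run passes only if every sample passes."""
    return {run: all(t <= limit for t in samples) for run, samples in runs.items()}

results = evaluate_pq(pq_runs, ACCEPTANCE_LIMIT_S)
assert all(results.values()), f"PQ failures: {results}"
```

The actual challenged requirements (screen outputs, alarms, etc.) and sample sizes would come from the protocol and the risk analysis, not from this sketch.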

-- John E. Lincoln

Monday, November 21, 2016

Cut Down on Team Meetings

Require of your team members --
"Completed Staff Work" (U.S. Military; also see Leon Shimkin, GM then owner, Simon and Schuster, from "… Stop Worrying…", by Dale Carnegie):
     What is the problem?
     What is the cause of the problem?
     What are all the possible solutions to the problem?
     What solution do you suggest (with rationale)?

See also my website on the subject:


-- John E. Lincoln

Wednesday, October 12, 2016

Molding Press and Tooling V&V Continued ...

In response to additional questions on the above subject:

STATEMENT: In the OQ phase, process parameters that yield acceptable product are found; process windows are determined (via DOEs) and backed up by statistical data. In the PQ phase, you recommend using worst-case settings, at the edge of the now-determined process window.  While I can see that this is the most thorough way to ensure the process window is appropriately set, there are two questions:

Ques:  Is this necessary in the eyes of the FDA or other governing bodies that you have run into?   Or, is running the PQ at a nominal setting acceptable by the FDA and other governing bodies?

Ans:  The FDA has always questioned the upper and lower limits on SOPs, set-up cards, et al, wanting to see the proof for the values chosen.  I address this with a minimum of three PQs: one at nominal for all values, one at the higher (and/or lower), and one at the lower (and/or higher) for all the values, mixing and matching to get "worst case allowable" with the three minimum PQs.

Ques:  When validating a process that has many variables, are there rules of thumb for determining how many "worst case" settings or scenarios should be run?  I can see how a machine could be forever in the PQ phase if we were constantly exercising worst-case scenarios. I can't imagine a business "making money", let alone shipping product, if they are spending time in a constant PQ phase.

Ans:  Again, I try to hold my initial PQs to 3-5 max. (this would not include the additional PQs required for additional tooling, different lots of resin, etc.).  You combine as many as practical and can omit some if the patient risk is minimal to non-existent.  You as the manufacturer decide, and document your rationale.  I sometimes use a matrix to combine, much like a factorial for DOE.
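Such a combination matrix can be sketched in code. A minimal illustration, with entirely hypothetical parameter names and window values, showing why the full factorial of extremes is impractical and how the small nominal/low/high PQ set described above is drawn from it:

```python
import itertools

# Illustrative parameter windows (names and values are hypothetical):
# each validated parameter gets its (low, nominal, high) allowable setting.
windows = {
    "barrel_temp_C": (190, 205, 220),
    "hold_pressure_bar": (40, 55, 70),
    "cool_time_s": (12, 18, 24),
}

# Full factorial of the low/high extremes -- 2^k runs, usually too many.
extremes = list(itertools.product(*[(lo, hi) for lo, _, hi in windows.values()]))

# A practical reduction to the 3-5 PQs described above: all-nominal,
# all-low, and all-high (mix and match further where risk warrants it).
names = list(windows)
pq_plan = [
    {n: windows[n][1] for n in names},  # PQ 1: nominal everywhere
    {n: windows[n][0] for n in names},  # PQ 2: low extreme everywhere
    {n: windows[n][2] for n in names},  # PQ 3: high extreme everywhere
]
print(len(extremes), len(pq_plan))  # 8 3
```

Which combinations actually constitute "worst case" for a given product is a risk-based engineering decision; the matrix only enumerates the candidates.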

To add to my previous response (see previous blog) -- An Alternative to V&V of Press and Tools:

1. V&V the press as described (machine states plus other key settings / operations -- heating, cooling, pressure ...);

2.  Do 1. with the next tool scheduled, with min of 3 PQs, to V&V the press and qualify the tool;

3.  At completion of three PQs, et al, release that lot;

4. That will Validate the press (and qualify that tool);

5.  Remove the press machine states and other press-related elements from the PQ template, and leave those pertaining to the molded part (fill, stress (polarized light / destructive testing ...), dims, et al);

6.  Do a min of 1 PQ for each new tool and/or each new lot of problematic resin as an addendum to the original Press/Tool V&V and after approval release that lot of molded parts. 

7.  Repeat for each new part until all tools are qualified.

8.  Each part / tool PQ can be done as part of a normal run if no problems are anticipated, and when passed, signed off so the lot can be released.

9.  Do the same with additional lots of problematic resins if an issue.

Of course, all the above assumes the same as mentioned in the previous blog, that there are downstream 1st articles / QC performed in molding and assembly, as further verification activities, documented (SOPs referenced, results included ...). 
It further assumes that variations in input variables are basically well known and controlled.  To the degree that they aren't, additional PQs may be required.

This is not as hard as it appears, and not that obstructive to a business's operation.


-- John E. Lincoln        jel@jelincoln.com

Tuesday, October 11, 2016

Molding Press and Tooling V&V

A recent webinar on V&V Planning I conducted resulted in the following questions (and my answers):

We are planning on performing validation(s) for mold tooling that will produce parts that go into a medical device. We are not making the completed device here (as yet) but simply making some of the molded components.

Ques:  As such, do our molding machines need to first be validated? I am presuming so.

Ans:  Yes.  Even if you do 1st articles and additional QC on the molded parts. One of the adversary audit remediations I performed dealt with this, meaning the mandate came from an FDA District Office after review of the 483.

Ques: Does this mean a full validation of the machine(s) themselves;   IQ, OQ, PQ?

Ans:  Yes.  IQ would include leveling, water, hydraulics, and electrical.

Ques:  If yes, what constitutes the OQ and PQ portion of such a validation?

Ans:  OQ would verify general operation.  Specific parts are validated separately to address resin, establish set-up card information / parameters et al.

I did a 550 T C-M press, HW and SW, and used a fold-out schematic addressing "machine states" to develop many of my test cases.  The OQ verified the initialization and operation of the software -- "black box" -- by the hardware's performance for each of the states, as well as temperature, pressure, et al, that everything cycled as expected or as set.  The PQs ran through each of the above (machine states et al) for n = ?, with ? determined by the variability in the process (an existing sampling SOP and/or an industrial statistician or similar can provide sample sizes; or see Juran and Gryna's Quality Planning and Analysis text, any edition, for rationales for sample sizes down to n = 10).
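One commonly used rationale for such sample sizes (a separate rationale from the text cited above) is the zero-failure "success run" calculation, sketched here for illustration:

```python
import math

def success_run_sample_size(confidence, reliability):
    """Zero-failure ('success run') sample size: the smallest n such that,
    if all n samples pass, the stated reliability is demonstrated at the
    stated confidence level: n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# 95% confidence / 95% reliability -- a level often quoted in device validation
print(success_run_sample_size(0.95, 0.95))  # 59
# 95% confidence / 90% reliability
print(success_run_sample_size(0.95, 0.90))  # 29
```

Whatever rationale is used, it should be written into the protocol or sampling SOP so the chosen n can be defended in an audit.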

Although mine involved a dedicated press (one tool), in your case I would determine the most difficult one or two parts / tools and use those, with a well-thought-out and documented rationale.  Include an explanation of your downstream verifications / QC required for any molded part, e.g., 1st articles, other QC in Molding and/or Production / Assembly, in your Test Report justifications for any reduction in test cases / sample sizes which such would allow.

In my case, a review of the press manufacturer's C++ module list (which I obtained by a call to their tech support) showed the company did a FAT, so I requested a copy and incorporated most of it in my V&V, reducing some of the additional V&V.

Ques:  Can you use "any old mold" or does it require a specific mold to be used?

Ans: See above.

Ques:  Doesn't this constitute the "chicken or the egg" syndrome if using  a mold that is yet to be qualified?

Ans:  A common occurrence.  Do some qualification before, e.g., flow studies, part dimensional, part stress, venting, shot / fill, and similar, which may have been done on another press -- document it.  Then do the above, if this is your most troublesome molded part.

Ques:  You spoke of software / hardware validations.  We rely upon the SPC and parameter software that is part of the molding machine(s) themselves to provide us with consistent molded parts; do we need to have this built-in software validated?

Ans:  Yes.  And it might also involve 21 CFR Part 11, depending upon how the SPC data is used.  This could be a separate V&V.

Ques:  If so, how does one go about doing such; generally speaking?

Ans:  As addressed briefly in the webinar, but I would also add something similar to an Excel macro V&V, where you compare the data generated by your equipment against an independent source: calibrated gauges and a calculator using a textbook formula or similar independent source, all recorded in the Test Report.
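A minimal sketch of that comparison, with made-up gauge readings and machine-reported values (the tolerance and all numbers are illustrative only):

```python
import math

# Hypothetical check: the press's built-in SPC software reports a mean and
# standard deviation for a dimension; recompute both independently (textbook
# formulas applied to calibrated-gauge readings) and compare within a
# documented tolerance, all recorded in the Test Report.

gauge_readings_mm = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03]

machine_reported_mean_mm = 10.005   # as displayed by the equipment (example)
machine_reported_sd_mm = 0.0187     # as displayed by the equipment (example)

n = len(gauge_readings_mm)
indep_mean = sum(gauge_readings_mm) / n
indep_sd = math.sqrt(sum((x - indep_mean) ** 2 for x in gauge_readings_mm) / (n - 1))

TOL = 0.005  # agreement tolerance recorded in the Test Report
assert abs(indep_mean - machine_reported_mean_mm) <= TOL
assert abs(indep_sd - machine_reported_sd_mm) <= TOL
```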

As mentioned, if the above seems intimidating, start simpler and address high [patient] risk items, then later add others as CAPA feedback or other CGMP requirements require and your experience with the system dictates.

--  John E. Lincoln       jel@jelincoln.com

Tuesday, September 13, 2016

IEC 62304 Software Lifecycle Issues

IEC 62304 Considerations -- September 13, 2016

International Electrotechnical Commission (2006). “Medical Device Software – Software Lifecycle Processes”; IEC 62304 

[Note:  Applying the FDA SW guidance documents ensures compliance with IEC 62304 – 62304 adds more specifics to lifecycle considerations]:

o  Does not specify the content of the documentation to be developed.
o  Requires traceability through all the elements; no set format is specified.
o  Does not prescribe a specific lifecycle model (Waterfall, Iterative, Evolutionary); it is up to the company to define / document one.
o  The FDA accepts compliance with IEC 62304 as fulfilling the software development requirement of its guidance document (JEL - circular reasoning).
o  The software is classified into three classes in IEC 62304:
     Class A: No injury or damage to health is possible;
     Class B: Non-serious injury is possible;
     Class C: Death or serious injury is possible.
     [Remember the FDA's Minor, Moderate, and Major Levels of Concern]

INTERNATIONAL IEC STANDARD 62304 – Documentation Requirements (note the similarity to the U.S. FDA's guidance document on device software documentation for 510(k)s):

Software development plan -- Required for all classes; must conform to IEC 62304:2006.  The plan's content list increases as the class increases.

Software requirements specification -- Required for all classes; must conform to IEC 62304:2006.  The content list increases as the class increases.

Software architecture -- Class A: not required.  Classes B and C: software architecture to IEC 62304:2006, refined to the software unit level for Class C.

Software detailed design -- Classes A and B: not required.  Class C: document the detailed design for software units.

Software unit implementation -- All classes: all units are implemented, documented, and source controlled.

Software unit verification -- Class A: not required.  Class B: define the process, tests, and acceptance criteria; carry out verification.  Class C: define additional tests and acceptance criteria; carry out verification.

Software integration and integration testing -- Class A: not required.  Classes B and C: integration testing to IEC 62304:2006.

Software system testing -- Class A: not required.  Classes B and C: system testing to IEC 62304:2006.

Software release -- Document the version of the software product that is being released, with a list of remaining software anomalies, annotated with an explanation of the impact on safety or effectiveness, including operator usage and human factors.

-- John E. Lincoln, jelincoln.com

Cybersecurity for Medical Devices - Draft Guidance

Cybersecurity (where required) -- September 13, 2016:

“ Postmarket Management of Cybersecurity in Medical Devices – Draft Guidance …”, dated January 22, 2016:
  • Applies to devices susceptible to unauthorized access / vulnerabilities …
  • Include cybersecurity in the product Risk Analysis (ID of threats / vulnerabilities …) – risks to "essential clinical performance", both controlled and uncontrolled;
  • Includes postmarket monitoring, assessing, detecting, impact determination, disclosure, deployment, et al;

  • Incorporate NIST's (included in the Guidance appendices) Identify, Protect, Detect, Respond, and Recover;
  • Device manufacturer is responsible to address (tied to 820.100 by FDA);
  • Patches = design changes (820.30); not usually subject to FDA review; are “device  enhancements”, not “recalls”;
  • But subject to K97-1 analysis by manufacturer; and
  • Require ‘validation’ (sic).
-- John E. Lincoln      jelincoln.com

Tuesday, August 2, 2016

Some 510(k) Q & A

Here are some questions I recently received from a company doing its own 510(k), with my brief responses:

1.  Re: Traditional 510(k), Section 10: Is it about a summary of performance testing, biocompatibility testing, and bench testing?
     Ans:  Yes to all.  You either do any required tests or pay a lab to do them.  If you do them, they have to be done per applicable standards.  E.g., biocompatibility test requirements are spelled out in ISO 10993.

2.  Design control?  How can I get that?
     Ans:  You should be currently developing your product under Design Control.  This means   
     doing it under 21 CFR 820.30, and addressing the 9 elements in your documentation and       
     systems:
     1) Design and Development Planning (e.g., Gantt chart or SOP-defined or ...; w/ "Start Date");
     2) Design input (requirements, standards, guidance documents, et al);
     3) Design output (drawings, specs, assembly / test SOPs, code, and similar);
     4) Design review(s) (to ensure past activities are complete, and lay out next actions; with an      
          impartial member of the review team);
     5) Design verification (I define by "working definitions" as testing, checking, inspection...);
     6) Design validation (I define as the collection of verifications);
     7) Design transfer (complete approved production-ready documents now in production);
     8) Design changes (the basic reason for Design Control - ensure changes are reviewed and 
          verified, then approved, all documented, prior to implementation during development);
     9) Design History File (DHF) (- proof that 1-8 were performed and the history of each).  
     As 820.30 requires, the results are documented in a DHF (Design History File).
     If you haven't done so already, download the entire medical device CGMPs, 21 CFR 820, from the fda.gov website.  You have to follow all of it, or your contract manufacturer will follow much of it and you the rest.  Your 510(k) submission to the FDA is a tacit admission that you've done that, and you will later be audited by the FDA to ensure compliance.  That means also having SOPs and a QM (Quality Manual) that are also followed.
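As a simple illustration (not a regulatory tool), the nine elements above can be tracked as a DHF completeness check; the element keys and all document references here are hypothetical:

```python
# Hypothetical DHF completeness check for the nine 820.30 elements listed above.
DESIGN_CONTROL_ELEMENTS = [
    "planning", "input", "output", "review", "verification",
    "validation", "transfer", "changes", "dhf",
]

def missing_elements(dhf_index):
    """Return the 820.30 elements with no evidence recorded in the DHF index."""
    return [e for e in DESIGN_CONTROL_ELEMENTS if not dhf_index.get(e)]

# Example DHF index: element -> list of document references (illustrative)
dhf = {
    "planning": ["PLAN-001 Gantt chart"],
    "input": ["REQ-001"],
    "output": ["DWG-100", "SPEC-200"],
    "review": ["DR-1 minutes"],
    "verification": ["TR-010"],
    "validation": [],          # not yet performed
    "transfer": [],
    "changes": ["CO-0042"],
    "dhf": ["DHF-001 index"],
}
print(missing_elements(dhf))  # ['validation', 'transfer']
```

The real proof is the records themselves, of course; a checklist like this only flags what still has no evidence behind it.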

3.  About the CGMPs: I need to ask the company I'm going to choose to make my device to show me their certification.
     Ans:  There is no legitimate US FDA CGMP (US 21 CFR 820) "certification".  If they are subject to the US FDA CGMPs, they are registered with the FDA (as you will have to do) and subject to CGMP audit by the FDA.  You could ask to see copies of past FDA audits and the company's responses.  And you can perform your own vendor audit to ensure their compliance, requiring you to get familiar with the CGMPs (or hire a consultant to perform such an audit for you).  This is for US sales.  For sales outside the US, the company would have to have a quality management system (QMS) certified by a Notified Body (BSI, TUV, DNV, UL ...), hired by the company, and that company's manufacturing system and product testing / lot release requirements / methods would also have to have an additional audit / certificate from their N-B.

As indicated above, a 510(k) is only one of many steps required by the U.S. FDA in order for a company to market devices in the U.S.

-- John E. Lincoln,  www.jelincoln.com

Tuesday, June 28, 2016

CONTRACT  MANUFACTURING  ORGANIZATION  (CMO)  V&V  ISSUES

Some recent questions I received pertaining to CMO and equipment / process verification and validation (V&V), with my answers:

Ques 1.  In a CMO context, where very different processes are run, should the PQ of the equipment be specifically performed for each manufacturing process?

Ans:  You could validate the equipment for its general use(s) and/or expected uses.  Then have your V&V SOP(s) address a method to validate or verify the particular run for a client that adds the unique requirements of that client's lot(s).  Or add such additional V&V requirements by means of a 1st Article inspection (and/or other tests / QC) addressing the additional requirements.

Ques 2.    Is it possible to perform qualification of the equipment during the performance qualification of the process? In this case, could the critical parameters, defined for the process, be used for the PQ of the equipment? Or do they need to be specific for each piece of equipment?

Ans:  The approach I favor (and explained in my webinars, but by no means the only way) is to qualify / validate the equipment by means of the IQ, OQ, and PQs.  The critical parameters are addressed under the OQ, and can include DOE.  The PQs address the robustness, repeatability, and reproducibility of the equipment given all allowable worst-case inputs (shifts, RM lots, etc.).  Each piece of equipment needs to be so addressed.  I generally do a process V&V for such things as cleaning, etc.  However, I have done process V&V for the entire production process, in which case I have separate verifications under the overall process validation that address each piece of equipment, as explained briefly in the webinar.  Note the need to define terms per your company's "working" definitions, also emphasized throughout my applicable webinars.

Ques 3.    Which is the criterion to define a piece of equipment as critical in a manufacturing process?

Ans:  The key criterion for such a definition is the equipment's contribution to the "critical quality attributes" of the element of the final product it acts upon, especially as it relates to the end user, the patient / clinician.  This is an important point that I try to emphasize in my many webinars on V&V, and I recommend tying such decisions in the V&V test cases / scripts to a Product Risk Management File / Report per ISO 14971 or ICH Q9.  It's possible to develop a generalized risk document for a CMO, and then add some unique requirements to it in the batch record, tied to an additional analysis of the client's product.


Obviously, questions 1 and 3 require obtaining some requirements as to quality attributes and safety / efficacy of the product's field use from the client, perhaps as part of the contract, a quality agreement, questionnaire, or similar document.  Rather than being a burden, I think such a requirement might add to a company’s credibility in the eyes of its customers.

-- John E. Lincoln, jelincoln.com

Wednesday, June 15, 2016

DESIGN  REVIEWS  -- HOW  MANY?

I got an e-mail today asking a similar question.  It redirected the reader / was linked to a consulting company.  Basically it mentioned that one review is mandated by the regs - focusing on 21 CFR 820.30, the medical device CGMPs on Design Control - but the website recommended two: one after the Plan and one after V&V.  It also mentioned that additional ones may be advisable.

However, on the U.S. FDA's website, on a webpage dealing with design control guidance:

http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm070627.htm

note that Figure 1 shows five such reviews.

So what is the actual requirement?  In short, "it depends".

820.30 simply states "that formal documented reviews of the design results are planned and conducted at appropriate stages of the device's design development."  To me this phrase is the key.

Moving beyond fulfilling design control requirements to avoid regulatory problems, to the positive of using such CGMP requirements because they improve a company's products, I recommend in my webinars and workshops that design reviews be used as product development "gates".  Such "gating" is described in the several 'fast cycle' development books that came out in the 1990s.  As such, I use them (and recommend their use) after each significant "milestone" on a Product Development Plan (my preference is a Gantt chart), to review the completion of that milestone's tasks and authorize the resources / budget to move on to the next milestone, when linear, or at critical junctures in the project, when reiterative.  Such formally scheduled design reviews are themselves a final task under each key milestone (and/or can also serve as the beginning task for the next milestone, if you're so inclined).  Then design reviews make business sense, and are not just an exercise in compliance merely for the sake of compliance.

The CGMPs further require that the "participants at each design review include representatives of all functions concerned with the design stage being reviewed", and also include at least one member of the review team "who does not have direct responsibility for the design stage being reviewed", "as well as any specialists needed".  Of course each review - results, design ID, participants, and date, must be documented in the DHF.

-- John E. Lincoln,  jelincoln.com

Thursday, June 2, 2016

AGILE  DEVELOPMENT  AND  AUTOMATED  SOFTWARE  TESTING

Here's my answers to two questions raised at one of my recent webinars on software / firmware V&V and documentation:

Ques 1:  Do you have experience working with Agile methodologies such as SCRUM? In your presentation, you mention that FDA suggests following a waterfall development cycle. Do you know what is the point of view of the FDA about iterative/incremental methodologies?
Ans 1:  My experience in Agile is limited, although, as mentioned, its principles have been used in many companies since before someone came up with the name Agile (as is true of many other "methodologies", e.g., Six Sigma).
I showed the one slide to illustrate V&V (Verification and Validation), which showed a "waterfall" product development cycle.  It was used by the U.S. FDA in the mid-90s and was focused on design control (21 CFR 820.30; see http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm070627.htm -- Figure 1 under "III. APPLICATION OF DESIGN CONTROLS").  It was used in conjunction with a process that typically is iterative / incremental -- device R&D -- to illustrate V&V for design control, and was not meant to show any FDA preference in product or software development, just how a series of device verifications leads to a device validation.  The FDA has no preference as to what development methodologies a company selects and uses, that I have seen to date -- it leaves such decisions to the manufacturer, who must justify and prove / document their choices.  However, the FDA wants the documentation to prove defined processes were followed and that there was compliance with the regulations.  Hence my caution re: Agile, whose manifesto on the Internet states that a key Agile goal is an implied minimization of defined processes ... and a reduction in documentation, to wit:
...
"Individuals and interactions over processes and tools

"Working software over comprehensive documentation..."  

-- http://www.agilemanifesto.org 

So just proceed with that caution in mind.
Ques 2: You haven’t discussed automated tests (for unit tests, integration tests, functional tests, performance tests). Wouldn’t it be the perfect tool to demonstrate reproducibility of a system? What is the point of view of the FDA on automated tests?
Ans 2:  There's nothing wrong with automated testing; as mentioned, it is an excellent tool.  Much software / firmware program development is done using such tests.  However, the automated test programs and hardware must themselves be rigorously validated in the same way as the webinar discussed (including 21 CFR 11; and see the "SOFTWARE / FIRMWARE  V&V  "MODEL"" post below) before their data can be used in subsequent V&V activities.  So a discussion of automated testing is redundant to the subject of software testing as discussed (see previous blogs on the subject).  Be aware that the FDA would look very carefully at such automated test equipment and programs, and their V&V, since they are then used for subsequent automated testing for V&V of other software on a repetitive basis.

-- John E. Lincoln, jelincoln.com

Wednesday, May 25, 2016

A  "SIGNATURE"  SOP

By "Signature SOP", I mean an SOP (Standard Operating Procedure) that defines what it means when a person puts their signature or initials on a document.  E.g.:

o  An engineer signing for the engineering function, is only signing for the engineering portion of the document;

o  Marketing is only signing for approving marketing issues in the document;

o  Ditto Finance, Manufacturing, Operations ...;

o  Senior management is signing that the document meets company requirements and addresses the issues, based on reliance on the other signatures by function; and

o  QA is signing that the signatures are the latest after all iterations, that all elements mentioned in / required by the document have been addressed by the proper individual(s) / function(s), etc.


People hesitate to sign a document dealing with subject matter they are unfamiliar with; they may ask for unnecessary revisions or want additional clarification.  They are rightly concerned about signing and "agreeing" to something they don't understand, beyond what they are directly involved with on the document.

So such an SOP would clarify what each function's / individual's signature really stands for, and what it doesn't, and can assist in:
     1) getting the signatures, 
     2) defending a signature in an audit, and 
     3) defending a signature in court.
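To illustrate (the role names and attestation wording are hypothetical, not a prescribed SOP format), the content of such an SOP can be thought of as a simple mapping, with a check that every required function has signed:

```python
# Illustrative encoding of a "signature SOP": what each function's signature
# attests to, plus a check that every required function has signed off.
SIGNATURE_MEANING = {
    "Engineering": "the engineering content of the document only",
    "Marketing": "marketing issues in the document only",
    "Senior Management": "the document meets company requirements, relying on the functional signatures",
    "QA": "signatures are current after all iterations and all required elements were addressed",
}

def unsigned_functions(signatures):
    """Return the required functions that have not yet signed."""
    return [f for f in SIGNATURE_MEANING if f not in signatures]

print(unsigned_functions({"Engineering": "J. Smith", "QA": "A. Jones"}))
# ['Marketing', 'Senior Management']
```

A real SOP would state these meanings in prose, of course; the point is only that each attestation is function-specific rather than blanket agreement with the whole document.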

-- John  E. Lincoln, jelincoln.com

Thursday, May 12, 2016

RISK  ANALYSIS

There seem to be several definitions of risk analysis, left unstated, in many on-line regulatory discussion threads, which can lead to incorrect focus and improper direction.   

The most important from a regulatory standpoint, either US FDA or EU MDD, et al, is product risk to the end user, the patient / clinician. For devices this is ISO 14971, for pharma it could be ICH Q9, which could include the d-, p-, and u-FME[C]A as well as other complementary methods recommended in those standards. 

IN ADDITION, a project leader may want to do risk analysis tied to project success, meeting budget and time deadlines, customer delivery commitments, etc. But those would be business / financial-related risks not the ones regulatory agencies mandate. And as mentioned, the earlier the better.  


So such risk analysis if performed on equipment would still have to trace any failure modes, malfunctions, or even correct functions that could cause problems, through to their effects on the end user / patient / clinician (an omission often cited for the disadvantage of using FME[C]A for patient / user risk analysis, although such can be compensated for by definitions and/or structure of such).
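As an illustration of that traceability requirement (the record structure and all entries are invented), a simple completeness check over FME[C]A rows can flag any equipment failure mode whose effect on the end user has not yet been traced:

```python
# Hypothetical check that every equipment failure mode in an FME[C]A traces
# through to an effect on the end user (patient / clinician), addressing the
# omission described above. The row structure is illustrative only.
fmea_rows = [
    {"failure_mode": "heater overshoot", "local_effect": "part degradation",
     "end_user_effect": "weakened component could fracture in use"},
    {"failure_mode": "alarm mute stuck", "local_effect": "no audible alert",
     "end_user_effect": None},  # trace not yet completed
]

untraced = [r["failure_mode"] for r in fmea_rows if not r["end_user_effect"]]
print(untraced)  # ['alarm mute stuck']
```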

-- John E. Lincoln, jelincoln.com

Wednesday, April 27, 2016

RELATIONSHIPS:  DEVICE  DHF, DMR, DHR

Caveat:  There are many ways to meet the requirements of the Design History File (DHF, 820.30), the Device Master Record (DMR, 820.181), and the Device History Record (DHR, 820.184) required by the device CGMPs.  The following is one field-tested approach.

Many companies start a new DHF for major changes to their device.  Minor changes they handle under the CGMPs (Change Control, 820.40(b)).

As with most of the points discussed herein, there is more than one correct approach.  Generally I recommend to my clients, if they ask, that the DHF be used for major changes (with each new major change an addendum to the original DHF, or a new, cross-referenced DHF), which require some type of control of a series of development changes per 820.30.  For a minor / simple change, I recommend change control under 820.40(b), which at most companies is under a Change Request / Change Order (the CR, when approved, can become the CO).  To it are attached the verification / test information / references, e.g., test report number, Lab Book Project number, etc., to justify the change, or in very simple changes, the actual data on the CR / CO.  The CO needs to have a block referencing a check for the need to file a new 510(k) per Memorandum K97-1, and also be tied to some kind of list / log of changes for a cumulative change review, per that same K97-1.  All this is filed in Document Control, usually under QA (as would be a closed-out DHF, in many companies).  Of course this requires that document changes be defined in such a way as to allow device changes (which are usually driven by a document change anyway, e.g., drawing change, specification change, etc.).

When there is fielded software (firmware or application software) and a revised version is created under change control, how does this affect the DMR and/or DHR?

Any change to a product, hardware or software, requires a change to the DMR (under change control, 820.40) for that product.  Often the DMR includes a DHR template / blank (which may be a "traveler" template / blank, or in addition to a traveler), which would also be changed, or would at least drive a change to the DHR, reflecting the new Revision / Release numbers of the software so changed (unless the Rev / Release No. entry is a fill-in blank).

Consider the situation where a company has updated software that is then sent out to its customers.  In such a case, one possibility is that both the DMR and the DHR are updated (the DMR drives DHR content), assuming that the software is changed across the board.  If you're changing software for one customer at a time, then you're furnishing custom software, each program unique to each customer, and your entire CGMP system would have to be designed accordingly.  My discussion assumes that the software is a revision / release where all customers of that particular software get the updates, i.e., the new revs / releases.
  
To clarify the roles of the DHF, DMR and DHR:  The DHF shows the development of the device, proving that the eight other requirements of 820.30 Design Control have been met in its development from the "start" date.  The DMR is one result, or design output, of the DHF-documented activities.  The DMR (820.181) is the device description, or "recipe": basically a list of all controlled records ("living documents") that define the device, to allow its replication as defined in the FDA clearance (510(k)) or approval (PMA) documents.  The DMR might list device drawings, assembly work instructions / SOPs, test instructions / SOPs, BOM blanks (part numbers and part / component descriptions with blanks for quantity and lot numbers), blank travelers (which may list applicable validated / verified equipment settings with blanks to be filled in with the actual settings, operator or QC signatures and dates, and similar), and reference other device-specific SOPs, labels, and Instructions for Use.  Some DMRs may also list the generic SOPs that apply to that device (and other devices), but don't have to.  It would include the DHR template if that is additional to the above, e.g., a traveler.
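One way to picture the DMR as just described — a controlled index of the documents that define the device — is a minimal lookup structure. All document numbers, types, and revisions below are invented for illustration only:

```python
# Hypothetical DMR index: each entry points to a controlled "living document".
dmr = {
    "device": "Widget Model A",
    "documents": [
        {"type": "drawing",              "id": "DWG-0123", "rev": "C"},
        {"type": "assembly SOP",         "id": "SOP-0456", "rev": "B"},
        {"type": "test instruction",     "id": "TST-0789", "rev": "A"},
        {"type": "BOM blank",            "id": "BOM-0001", "rev": "D"},
        {"type": "traveler blank",       "id": "TRV-0001", "rev": "B"},  # may double as DHR template
        {"type": "label",                "id": "LBL-0002", "rev": "A"},
        {"type": "instructions for use", "id": "IFU-0003", "rev": "A"},
    ],
}

def listed_revision(dmr_index: dict, doc_id: str) -> str:
    """Return the revision the DMR currently calls out for a document."""
    for doc in dmr_index["documents"]:
        if doc["id"] == doc_id:
            return doc["rev"]
    raise KeyError(doc_id)
```

Note that the DMR entries are pointers to controlled documents, not copies of them; changing any listed document's revision is a DMR change and so falls under 820.40 change control.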


Additional on the DMR:  Although riskier, you could structure the DMR to list the device-controlling documents without their current revision nos. / dates (e.g., list Drawing No. 0123, "current revision"; or SOP XYZ, "current revision").  Then the DMR would not always have to be updated when the device or software has minor changes, or even some major changes.  However, the actual DHR template would have to be updated to reflect the use of the most current documents / settings / parts, et al., during manufacture of a lot.  All such changes would be under Change Control per 820.40.

The DHR (820.184) is the lot build history for one production lot: a record of how one lot / batch (lot number), or one serial number, was built, including quantities, RM lot nos., dates, and personnel involved where appropriate, proving that that lot followed the "recipe" outlined in the DMR.  It would have the filled-in / signed BOM, a filled-in / signed traveler (which could be a "DHR template" blank), samples of labels / IFUs included, sterile lot information (if sterilized), etc., showing / proving conformance to the DMR, itself an output of the development activities recorded in the DHF.  Prior to release of that lot to the field, it would be reviewed and signed by QA, who would assure its accuracy and the completeness of its data and required enclosed documents.
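The QA release review of a DHR just described — checking the lot record for completeness against what the DMR "recipe" requires — can be sketched as a simple completeness check. The field names and required-item list here are hypothetical; a real DHR's contents are dictated by the company's own DMR:

```python
# Hypothetical list of items the DMR requires in every DHR before release.
REQUIRED_DHR_ITEMS = [
    "lot_number", "quantity", "rm_lot_numbers", "build_dates",
    "signed_bom", "signed_traveler", "label_samples",
]

def qa_release_review(dhr: dict) -> list:
    """Return the missing / unfilled items; an empty list means releasable."""
    return [item for item in REQUIRED_DHR_ITEMS if not dhr.get(item)]

dhr = {
    "lot_number": "L-2016-042",
    "quantity": 500,
    "rm_lot_numbers": ["RM-881", "RM-902"],
    "build_dates": ["2016-04-25"],
    "signed_bom": True,
    "signed_traveler": True,
    "label_samples": True,
}
```

In practice the QA review also verifies accuracy, not just presence, of each entry; the sketch only captures the "nothing missing before release" gate.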

In essence:   -- the DHF > DMR > DHRs.  See also "Definitions", 820.3.

The ultimate approach taken by a company must be defined by an SOP(s) and then followed!

-- John E. Lincoln, jelincoln.com


Monday, April 11, 2016

PART 11 and ANNEX 11

Some arguments on blogs and forums state that the EU's Annex 11 goes beyond the US FDA's 21 CFR Part 11.  They cite four key areas of differences:

01. Supplier / service provider audits;

02.  IT infrastructure qualification;

03.  Product Risk Management; and

04.  System operations and storage integrity.

A key point to keep in mind re: Part 11 is that it is not to be considered a "stand-alone" requirement.  It is one part of the requirements imposed upon a company regulated by the CGMPs, e.g., 21 CFR 111 for dietary supplements, 21 CFR 211 for pharma, 21 CFR 820 for devices, 21 CFR 4 for combination products, and so on.  And it applies only where applicable (as mentioned below).

Viewing Part 11 as such (part of the overall CGMP requirements), the CGMPs then supply the so-called "missing requirements": supplier audits, the company's infrastructure and its qualification, incorporation of product risk management considerations into product development and subsequent activities, and integrity of operations and data storage throughout the process and product life cycle, from development to decommissioning.

Failure to consider Part 11 as only part of the "bigger compliance picture" will guarantee FDA Form 483 observations during a CGMP compliance audit of a company.  

It must be noted, however, that Part 11 compliance is reviewed as part of such audits only if electronic records and/or electronic signatures are used by the company, in lieu of paper records / documents and/or manual signatures, to fulfill some element of CGMP compliance documentation, record keeping, or approvals.
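The applicability rule in the paragraph above reduces to a simple two-part test. This is only a sketch of the logic as stated here, not a legal or regulatory determination, and the function name and parameters are invented:

```python
def part11_in_scope(uses_e_records: bool,
                    uses_e_signatures: bool,
                    fulfills_cgmp_requirement: bool) -> bool:
    """Part 11 applies only when e-records and/or e-signatures are used
    in lieu of paper records / manual signatures to meet a CGMP
    documentation, record-keeping, or approval requirement."""
    return (uses_e_records or uses_e_signatures) and fulfills_cgmp_requirement
```

So a system keeping electronic records for convenience only, with the official CGMP record on paper, would fall outside Part 11 under this reading.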

In reality, neither Part 11 nor Annex 11 should be considered a "stand-alone" set of requirements; each is one small part of the entire QMS (quality management system) / regulatory environment.  Viewing them this way makes understanding the requirements of Part 11 / Annex 11 easier, and compliance (and its resulting business benefits) complete.

-- John E. Lincoln, jelincoln.com


Sunday, April 3, 2016

DQs

A reader asked a question re: "Design Qualification", as in DQ, IQ, OQ, PQ.  "This is a term I am not familiar with. I have not seen it in 21 CFR and could not find any references on FDA.gov. I also do not remember seeing it in 13485. I would like more clarification and examples, also to understand where this requirement comes from."



To answer the question: Go to www.fda.gov and search IQ OQ PQ.  You will not find DQ or Design Qualification.  However, you will find guidance documents on software V&V and process V&V that discuss these terms, define them, and state the FDA's requirements for their use.  And IQ, OQ, PQ presuppose proper requirements against which the IQ, OQ, and PQ test cases would be developed, to prove the requirements exist and function as expected.

If you enter "DQ validation" or "Design Qualification" into Google, you will find definitions similar to the one I give below.

The "c" in CGMP means that 21 CFR XXX is the rock-bottom minimum the FDA requires, but that the Agency expects companies to use practices in their industry that are "c", or "current".  IQ, OQ, and PQ have been around for decades.  DQ is less frequently used, but it is still used fairly often in FDA-regulated industries.  DQ generally means verifying that user / functional requirements are valid and complete.  It is an additional step after development of the requirements, to ensure no valid requirement is missing, including applicable standards and/or guidance documents.  While this is often done "intuitively", such a DQ action should be documented, since an auditor or other reviewer would expect that the requirements are somehow qualified or verified for appropriateness / completeness.

If you choose to follow ASTM E2500, then you won't necessarily use these terms, but you are still expected to perform what they stand for.  All US FDA investigators / notified-body auditors (ISO 13485 ...) expect the same.

In order to properly validate equipment, software, or processes, you would have to perform them (Rqmts. / DQ, IQ, OQ, PQs, or identical equivalent activities).  The principles are not optional; i.e., you have to prove you installed X correctly / per spec (IQ against DQ / Rqmts. Specs), that you set up and optimized parameters (e.g., via DOE) for X correctly (OQ against DQ / Rqmts. Specs), and that X ran repeatedly, over extended periods of time, with varying conditions / inputs ("worst-case" conditions / inputs), robustly (PQs against DQ / Rqmts. Specs, 3 or more runs).
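The point that every IQ, OQ, and PQ test case traces back to the requirements (DQ) can be illustrated with a toy traceability check. The requirement IDs, test-case IDs, and coverage below are all invented for the example:

```python
# Hypothetical traceability matrix: each protocol test case cites the
# requirement(s) it proves; every requirement must be covered somewhere,
# and PQ must comprise 3 or more runs.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

test_cases = [
    {"stage": "IQ", "id": "IQ-01", "proves": {"REQ-1"}},           # installed per spec
    {"stage": "OQ", "id": "OQ-01", "proves": {"REQ-2"}},           # parameters optimized
    {"stage": "PQ", "id": "PQ-01", "proves": {"REQ-2", "REQ-3"}},  # worst-case run 1
    {"stage": "PQ", "id": "PQ-02", "proves": {"REQ-3"}},           # worst-case run 2
    {"stage": "PQ", "id": "PQ-03", "proves": {"REQ-1", "REQ-3"}},  # worst-case run 3
]

def uncovered(reqs: set, cases: list) -> set:
    """Requirements that no IQ/OQ/PQ test case traces to."""
    covered = set().union(*(tc["proves"] for tc in cases))
    return reqs - covered

def pq_run_count(cases: list) -> int:
    """Number of PQ runs in the protocol set."""
    return sum(1 for tc in cases if tc["stage"] == "PQ")
```

A non-empty result from the coverage check would flag exactly the gap a DQ review is meant to catch: a requirement that nothing in the validation package proves.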

So, while the acronyms can vary, the activities they represent cannot. 

--  John E. Lincoln, jelincoln.com