Data Integrity White Paper for Drug Quality Control Labs

By Loren Smith, Agilent Technologies Inc., Santa Clara, California, USA
Data integrity issues in drug quality control laboratories are driving the implementation of more and more regulatory measures. What changes have led to all this activity? Although a great deal of regulatory information is available, much of it is unhelpful or even adds to the confusion. This article dispels common misunderstandings, drawing on a review of existing resources and direct communication with US Food and Drug Administration (FDA) staff and their consultants. The following topics are discussed:
• How to understand the current enforcement environment in its historical context
• How to use critical thinking to address misunderstandings about data integrity
• How to evaluate current laboratory software and related procedures against new expectations
• How suppliers are redesigning laboratory software to help customers cope with the new reality
Definition and history of data integrity and FDA regulatory compliance
In a Scientific Computing article (September 2013), Bob McDowall defined data integrity as "generating, transforming, maintaining, and assuring the accuracy, completeness, and consistency of data over its entire lifecycle, in compliance with applicable regulations."
A common misconception is that the applicable regulations are entirely new. In fact, 21 CFR Part 11, "Electronic Records; Electronic Signatures," was first published in 1997. By 2003, after the pharmaceutical industry had been adapting to the regulation for some time, the FDA issued a guidance called "Scope and Application," which clarified some of the requirements of Part 11 and discussed the FDA's strategy of selective enforcement based on issues identified during inspections. In 2010, the FDA announced its focus on data integrity inspections. At that time, however, only a few people within the FDA understood data integrity in computerized systems, so the agency conducted extensive training on data integrity, data integrity inspection, and the regulations for staff including chemistry experts, manufacturing experts, and people with GxP expertise. It also hired fraud-investigation experts, including some from the US Federal Bureau of Investigation. As a result, data integrity has been a major inspection focus since 2013, and data integrity enforcement activity has increased significantly across regions. In addition, since 2014 the FDA has often named hardware and software products in its warning letters and related public documents, signaling to hardware and software manufacturers, intentionally or not, that the agency wants them to help customers address data integrity and compliance concerns.
Clarifying misunderstandings
Another misconception often heard is that if a pharmaceutical company uses a particular software system, the FDA will shut it down. This concern can be expressed as: "A potential deficiency anywhere in a production facility or laboratory system might be considered a data integrity risk. If that deficiency has not yet manifested itself, does the FDA have any basis for issuing a Form 483 observation or a warning letter?" The answer is clearly no. The ability to violate the rules does not mean the rules have been violated. Consider this analogy: the speed limit in many parts of the United States is 65 mph, but a car's ability to exceed that limit does not mean its owner will receive a speeding ticket. Similarly, if a pharmaceutical company with a potential data integrity deficiency has not tampered with or deleted data (or engaged in other fraudulent activity), the regulator has no reason to issue a citation or warning letter. Inspectors will talk with the company's personnel in detail, and the focus will shift to ensuring, through procedural controls, that known deficiencies are not exploited.
Procedural and technical controls
Consider the statement: "This software complies with Part 11." There are several problems with this claim. First, it is logically impossible: some of the requirements in Part 11 cannot be met by technical controls within a computerized system. For example, 21 CFR 11.10(j) requires written policies governing the use of electronic signatures, a requirement no chromatography data system can satisfy; it is purely a procedural requirement and cannot be met through technical controls. Software in itself is neither compliant nor non-compliant; only the technical controls built into the software can support the requirements of the applicable regulations, and procedural controls must be used alongside them. Discussions of procedural versus technical controls are common in FDA warning letters, especially when a system cannot support the technical controls that the regulations require.
Standard operating procedures (SOPs) can serve as procedural controls in place of technical controls, provided that:
• Personnel have been trained on the SOP
• The SOP is actually followed
• Adherence to the SOP is confirmed through quality oversight and/or compliance audits
In practice, however, SOPs often exist but are not followed, and adherence is not properly verified. In such cases the FDA will require system remediation to prevent the behavior from recurring.
Audit trails in computerized systems are an example of a technical control. A software-generated audit trail must be able to capture all the elements required by the regulations, and those controls must then be enabled.
The audit trail required by 21 CFR 11.10(e) operates automatically whenever the system is in use, and it can be queried during an audit or inspection without any additional manual work.
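As an illustration of this kind of technical control, the sketch below shows an append-only audit trail whose entries carry elements commonly expected of a 21 CFR 11.10(e) trail (time-stamped, operator-attributed, recording old and new values). All class and field names are hypothetical; this is a minimal sketch, not Agilent's implementation or a compliant product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class AuditEntry:
    # Elements commonly expected of an 11.10(e) audit trail: a secure,
    # computer-generated, time-stamped record of who changed what,
    # when, and from/to which values.
    timestamp: str
    operator: str
    action: str          # e.g. "create", "modify", "delete"
    record_id: str
    old_value: Any
    new_value: Any

class AuditTrail:
    """Append-only: entries can be queried but never edited or removed."""

    def __init__(self):
        self._entries = []

    def log(self, operator, action, record_id, old_value, new_value):
        entry = AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            operator=operator, action=action, record_id=record_id,
            old_value=old_value, new_value=new_value)
        self._entries.append(entry)
        return entry

    def query(self, record_id=None):
        # Queryable during an inspection without extra manual work.
        return [e for e in self._entries
                if record_id is None or e.record_id == record_id]

trail = AuditTrail()
trail.log("jsmith", "modify", "sample-42", old_value=1.02, new_value=1.03)
assert len(trail.query("sample-42")) == 1
```

Because the entries are automatically generated as a side effect of each operation, no additional manual work is needed to produce them for review.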
Software validation or compliance certificates: are they valuable?
Many software vendors provide validation certificates, or certificates labeled as 21 CFR Part 11 readiness claims. Such certificates have little value, because the FDA expects the user to validate the software for its intended use in the environment in which it is used. Although a vendor may assure customers that the system was designed around customer requirements and extensively tested before delivery, the vendor's development and testing efforts do not (and cannot) replace the customer's validation of the system against its own stated intended use.
Another related point is that the FDA has no jurisdiction over a vendor's informatics software business. Therefore (unless the software is cleared by the FDA as part of a medical device registration), no documentation provided by the vendor is FDA-approved, because the FDA has no enforcement power over that part of the company's business.
A vendor may map its product's features to the details of Part 11, but that information is based on the vendor's interpretation of the regulations, which may differ from that of many pharmaceutical companies or of the FDA itself. Vendor interpretation is no substitute for an audit to determine whether the software's functionality meets regulatory requirements.
Your responsibilities: auditing, evaluation, and validation
Data integrity compliance responsibilities include supplier auditing, computer system evaluation, and software validation.
When auditing suppliers, there are four dilemmas, or families of problems, that arise from the auditors themselves. (See Mourrain, J., Therapeutic Innovation and Regulatory Science, Vol. 40, No. 2, pp. 177-183.)
The first dilemma is inconsistency. The regulations themselves are brief: leaving aside the preamble in the 1997 Federal Register, the Part 11 regulation runs only about three pages. The guidance, however, is extensive, and interpretations of the guidance are more extensive still. In many audit situations, inconsistent interpretation causes problems: a customer may understand a rule one way during the audit while the supplier understands the same rule another way.
Bias is another issue auditors encounter: audit reports can be one-sided or incomplete. Some auditors believe they need a large number of findings, so they list every issue with every SOP separately, which can create the misimpression that the quality of an audit report depends only on its length. Other auditors may roll individual examples up into a few large observations supporting the audit's main findings; such reports may not represent everything learned during the audit.
Another problem auditors encounter is variability. In the end, auditors are human. Some will dig into certain issues, such as disaster recovery, while others may focus on Part 11.
Readability, or rather writing ability, is another issue. Auditors may be able to communicate the issues they found during the audit, but they often struggle to express the "so what": the impact of the computerized system's deficiencies on patient safety, product quality, and data integrity.
What is the solution to these audit dilemmas? Use a model rather than a checklist, with components that include:
• Procedures
• Staff training
• Software development activities
• Testing activities
• Quality management system
• Infrastructure
The last item is usually irrelevant unless the vendor holds GxP records on the customer's behalf, for example in a cloud application.
In the model approach, scoring transforms a rather subjective process into an objective measurement system that supports reliable audits of individual suppliers and allows comparison across multiple software or service providers. The key point is that supplier audits enable a risk-based validation strategy (as described in Section 6.3 of the FDA's General Principles of Software Validation). The better a vendor does its job, the less work is needed in software validation (at least in theory).
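The scoring idea can be sketched as a simple weighted model. The component names mirror the model components listed above; the weights, the 0-5 rating scale, and the vendor ratings are illustrative assumptions only, not part of any standard or FDA guidance.

```python
# Hypothetical weighted scoring model for supplier audits.
# Weights are illustrative assumptions, not prescribed values.
WEIGHTS = {
    "procedures": 0.20,
    "staff_training": 0.15,
    "software_development": 0.25,
    "testing": 0.25,
    "quality_management_system": 0.15,
    # "infrastructure" would be added when the vendor hosts GxP records.
}

def audit_score(ratings):
    """Ratings are 0-5 per component; returns a 0-5 weighted score."""
    if set(ratings) != set(WEIGHTS):
        raise ValueError("rate every component in the model")
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

vendor_a = {"procedures": 4, "staff_training": 5, "software_development": 3,
            "testing": 4, "quality_management_system": 4}
vendor_b = {"procedures": 3, "staff_training": 3, "software_development": 4,
            "testing": 3, "quality_management_system": 3}

# Comparable numbers support a risk-based validation strategy:
# the higher-scoring vendor warrants less in-house validation effort.
print(audit_score(vendor_a), audit_score(vendor_b))
```

Because every supplier is scored against the same components and weights, the results can be compared across vendors rather than argued case by case.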
Evaluating software (and supplier) compliance support
Evaluating whether software supports compliance requires attention to the regulations of every region where the regulated company does business or plans to do business. A company operating only in the United States, or only in Europe, may choose to focus solely on Part 11 or solely on Annex 11; the two have much in common. In any case, a software compliance assessment should be based on evidence, not on claims alone (such as a bare Part 11 certificate).
Like many other vendors, Agilent receives a large number of customer checklists and answers them factually for its products. It is important, however, to evaluate those responses against evidence rather than relying only on what the vendor claims the system does. Areas to review should include data integrity, access controls, audit trails, and equipment checks, among others, depending on the applicable regulations. Such review matters during software evaluation because observed deficiencies indicate where procedural controls or custom software functionality may be required. It is therefore highly desirable to give suppliers feedback on any observed deficiencies, so that remedial procedural controls or custom solutions do not become permanent fixtures.
Another common misconception about software compliance support is: "I purchased the vendor's IQ/OQ kit, so my system is validated." A vendor's qualification kit (usually limited to an IQ of the base software and a generic core OQ) is not sufficient to validate that the system is suitable for its intended use. Validation responsibility cannot be pushed onto the supplier, although the supplier is (or should be) a reliable source of information and a good helper in your validation work.
A related validation misconception is that "any system change requires full revalidation." In reality, whenever software is changed, the change must be evaluated to determine its impact on the overall software system and the risk it poses to patient safety, product quality, or data integrity. Companies often limit change validation to the change itself, or go to the other extreme and unnecessarily repeat the entire validation. The correct approach lies somewhere in between.
Conclusion: Agilent's response
Agilent's goal is to incorporate the FDA's focus areas into system design so that delivered software neither introduces nor retains compliance risk. The FDA is clearly pursuing more technical controls, favoring technical controls over procedural controls and preventive controls over detective controls. 21 CFR 11.10(a) requires that systems be able to discern invalid or altered records; this is the only part of the regulation concerned with detecting records that may have been tampered with, while the remaining controls are preventive. Hence the saying: detective controls are important, but preventive controls are better. The regulations also emphasize online records over printed paper reports; the FDA does not necessarily trust the documents handed to them.
Another example of system design is online audit trail review. Many systems can generate printed audit trail reports, but the new Agilent OpenLAB Chromatography Data System includes a built-in tool that lets users review audit trail entries electronically. The entries are sorted by type, can be reviewed online, and support online sign-off.
These examples demonstrate how Agilent has used critical thinking to redesign laboratory software to help address the new regulatory reality.
Frequently Asked Questions
Question: Does a new data system eliminate the need for procedural controls?
Answer: No. Additional technical controls reduce the need for procedural controls, but some, such as system administration or user management, are always required. Another example is the procedural control covering the policies and procedures for legitimate use of electronic signatures. Ideally, the system's technical controls will shrink the procedural controls down to those surrounding the system, eliminating the human variability that is most critical to data integrity.
Question: What level of revalidation is required after a software update?
Answer: The FDA guidance General Principles of Software Validation highlights two key points about revalidation. First, the relative impact of the change on the intended use of the particular company's system must be assessed. For example, changes to instrument-support functions may not apply to the user's specific instrument configuration, or may affect only unused features. From the vendor's release documentation, the user should be able to determine which features and functions were updated and which software defects were corrected, and then compare each change against the intended use to determine any revalidation impact.
Second, once software is changed, the user should evaluate the changes first-hand, including a degree of regression testing to confirm that features unrelated to the change were not affected in a way that could have unintended consequences. This is not uncommon: when updating software on a home computer or phone, for example, one sometimes finds features that worked fine before the update no longer work after it.
Question: How do I handle automatic updates to Windows or anti-virus programs?
Answer: A topic increasingly discussed in recent years, along with the ISPE GAMP 5 guide, is the concept of risk and risk-based validation. One factor to consider for operating system and anti-virus updates is the relative risk of security flaws or virus vulnerabilities in the system; sometimes the risk of updating is greater than the risk the update is meant to address (or vice versa). Some companies have strong teams handling network and server infrastructure qualification and isolate the operating system from the software validation itself, keeping systems current through security and anti-virus updates. As a general practice, companies should run a small set of standard regression tests after operating system or anti-virus updates, or simply periodically, to confirm that the changes have no adverse effect on the system.
Question: Is revalidation necessary when the software is moved to a new PC?
Answer: If the new PC has the same or greater capability than the original and runs the same versions of the operating system and software, revalidation is unnecessary because the system functions exactly the same. However, the installation qualification must be repeated to confirm that the software has been properly installed, or properly restored from backup, on the new PC.
One detail that makes this process more efficient is to avoid overly detailed specification documents. Avoid naming a specific PC model, processor speed, or exact memory or disk capacity; specifying "the named model or a higher-performance product" avoids constant documentation updates as technology advances.
Question: Regarding GMP laboratory data integrity: can data be modified whenever it is deemed necessary?
Answer: Instrument data acquired directly to the PC should be treated as inviolable; raw data must not be modified for any reason. The second category of data, generated during post-acquisition analysis and processing, such as re-integration, baseline changes, removal of irrelevant peaks, or other calculations, is part of normal data processing. Those components may be modified under controlled conditions, with the changes recorded in the audit trail.
Question: Software calculations and algorithms, such as chromatogram integration, can be very complex. Do software users need to verify the results?
Answer: Where applicable, calculation results must be verified: not only all Microsoft Excel calculations, but also any custom calculations built into configurable reports, such as CDS report templates with automated calculations. If a calculation determines whether a result is within specification, it should be validated with data both inside and outside the specification limits to confirm that it works properly. This verification also gauges the vendor's ability to develop sound software and perform accurate calculations.
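A minimal sketch of challenging such a calculation with in-spec and out-of-spec data follows. The assay, its 98.0-102.0% specification limits, and the function name are hypothetical illustrations, not taken from any regulation or product.

```python
# Hypothetical specification limits for an assay (% of label claim).
SPEC_LOW, SPEC_HIGH = 98.0, 102.0

def within_spec(result):
    """True if the assay result falls inside the specification."""
    return SPEC_LOW <= result <= SPEC_HIGH

# To validate the calculation, challenge it with data inside the
# specification, outside it, and exactly on the limits.
cases = {99.5: True, 97.9: False, 102.1: False,
         98.0: True, 102.0: True}
for value, expected in cases.items():
    assert within_spec(value) is expected, value
print("spec-limit check validated with in- and out-of-spec data")
```

The boundary values (98.0 and 102.0) are included deliberately: off-by-one comparison errors in such calculations are only exposed by data exactly on the limits.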
Question: After completing a supplier audit, within what time frame should the supplier be re-audited?
Answer: Several factors determine supplier audit frequency, starting with the relative risk the system poses to patient safety, drug quality, and data integrity. Another factor is the supplier's performance and responsiveness in previous audits. For a supplier that performs well, re-auditing every three years is reasonable, and the interval should be no longer than that. For mission-critical suppliers, or vendors that performed poorly in previous audits, a detailed re-audit every 12 months is reasonable.
Question: What audit approach should be used for software, such as a CDS, that is still in use and in a validated state but no longer supported by the supplier?
Answer: Suppliers sometimes go out of business or, more often, stop supporting a particular version of their software. Continuing to use such software raises the risk level, because if a problem arises the vendor may be unable or unwilling to correct a defect in an outdated version it no longer supports. This risk depends partly on the instrument and software involved; older systems that are properly managed and used may not need updating at all. A frequent consideration is whether an upgrade delivers enough value, new features, or defect fixes to justify the cost, time, and effort of replacing and revalidating the system.
Question: What is regression testing?
Answer: Regression testing is a way of confirming that existing features still work after a change. It should cover the most commonly used features of the system and/or the highest-risk features identified during risk analysis, to determine whether a given software update has changed functionality not directly mentioned in the vendor's documentation. In other words, it confirms that the system has not regressed and that the change has not caused some unexpected failure.
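As a minimal illustration, a standing regression suite can pin down the current behavior of a high-risk calculation so that any update which alters it is caught. The trapezoidal peak-area function below is a hypothetical stand-in for a CDS calculation; the function names and test cases are illustrative assumptions.

```python
def peak_area(times, signal):
    """Trapezoidal peak area: a stand-in for a high-risk CDS calculation."""
    return sum((signal[i] + signal[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def run_regression_suite():
    # Known-answer cases pin current behavior; re-run them after every
    # software update to catch unintended changes ("regressions").
    t = [0.0, 1.0, 2.0]
    triangle = [0.0, 2.0, 0.0]   # triangle: area = 0.5 * base * height = 2.0
    assert abs(peak_area(t, triangle) - 2.0) < 1e-9
    flat = [1.0, 1.0, 1.0]       # rectangle: area = width * height = 2.0
    assert abs(peak_area(t, flat) - 2.0) < 1e-9
    assert peak_area([0.0, 1.0], [0.0, 0.0]) == 0.0
    return "regression suite passed"

print(run_regression_suite())
```

Because the expected answers are fixed in advance, the suite detects changes in behavior even when the vendor's release notes do not mention the affected functionality.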
Question: How can strict adherence to SOPs be ensured and demonstrated?
Answer: Through audits. Auditing, with follow-up, is the only way to confirm that people operate in accordance with their training.
Question: Do pharmaceutical companies usually visit Agilent for on-site audits to review SOPs and the like, or are these audits usually conducted in writing?
Answer: About 90% of the audit requests Agilent receives for its informatics software products are written audits; the remaining 10% are on-site. Written audits, however, are not evidence-based: they rest on statements, phrased as Agilent chooses to phrase them, and nothing in a written audit can be independently verified. A written audit is often a box-ticking exercise in which an audit is performed on paper and then declared complete. Some customers use a written audit as a prelude to an on-site audit, to get a rough sense of Agilent's position before arriving on site.
Question: Some say that [buying] an IQ-OQ kit [as a complete validation solution] is a misunderstanding.
Answer: The IQ-OQ kit itself is real, not a misunderstanding. It is, however, necessary but not sufficient, because it does not satisfy all of the FDA's requirements for validating computerized systems. For example, the kit includes neither a validation plan nor a validation summary report, nor does it include requirements specifications, user requirements, functional specifications, or traceability from the IQ and OQ tests back to those user requirements or functional specifications.
Question: How do I handle systems whose audit trails are missing or incomplete? What is the solution for such systems?
Answer: Many older systems still running in regulated laboratories are otherwise in good working order. The solution is extensive procedural control, including written records. Depending on the importance and risk of the process the system supports, the procedures may require a second person to verify the information or to witness the work as it is performed. The purpose of an audit trail is to make the history of the work understandable and the electronic record reconstructible: not only its creation, but also any changes or deletions. If a system's audit trail is limited or imperfect, it must be supplemented in some other way, usually with a written record, for example a notebook kept next to the instrument. Technical controls simply make everyone's work easier.
Question: Does the new data system require no procedural controls?
Answer: No. Procedural controls are always required. For example, a procedure for authorizing user access to a data system, or for changing, modifying, or removing an individual's access, is a regulatory requirement that the data system itself may not be able to satisfy. Today's data systems can create accounts and change user permissions, but the procedures governing those operations typically require some form of record, including administrative review and approval of system access and access changes. Procedural controls will therefore never disappear entirely.
Question: Is disaster recovery testing necessary as part of the validation process, and what risks does it address?
Answer: Disaster recovery testing is a regulatory requirement, and European regulations are more explicit on the subject. Disaster recovery tests usually fail because of untested assumptions, which is why such testing is especially important for mission-critical systems.
Question: Is there a less cumbersome approach to audit trail review than the example Agilent built into its software?
Answer: Some systems with electronic audit trails cannot indicate that those trails have been reviewed, which makes it necessary to divide the audit trail review into small pieces and to focus on the entries most relevant to the data. For drug batch release, for example, QA must review all records related to the manufacture of the product and all systems supporting its manufacturing process. If the QA review concentrates on the audit trail entries tied to those records, even where the audit trail happens to be a written record, the review becomes part of the record review itself. By dividing the problem into smaller pieces tied to the actual records, the review process becomes less cumbersome.
Question: Can you discuss how to handle the challenges of peak integration in automatic rather than manual mode?
Answer: Automatic peak integration is usually addressed by a procedure that defines the parameters permitted for automatic integration and for any re-integration. In general, the latitude allowed in integration and re-integration activities is addressed in the SOP, or in the system or method validation for the particular analytical method.
Question: Every system must have an administrator who, under certain circumstances, can modify or delete records. Do you have any suggestions for this potential data integrity issue?
Answer: The best practice is to ensure that the administrator of a given system has no stake in the data it contains. In large enterprises this usually means that an employee in the IT department administers the system. Separation of duties is a topic IBM has discussed in the past, and it is equally important in the financial world. If a person has no incentive to manipulate the data in a system, the arrangement is unlikely to pose a problem. The FDA has indicated that any evidence of deliberate fraud will be investigated as criminal activity, which is within its remit.
Question: When a sample is reprocessed, is an audit trail annotation sufficient, or is an approval process still required?
Answer: This depends on the company's policies and procedures. If sufficient controls are in place in the system and personnel are adequately trained, an audit trail annotation may be sufficient when a sample is reprocessed (given appropriate controls and procedures). However, if the procedures do not address sample reprocessing, or if reprocessing is considered an abnormal or unusual event, a clearly documented approval process is necessary.
