Internal quality control of analytical data

 

Author: Analytical Methods Committee (Royal Society of Chemistry)
Journal: Analyst (RSC, available online 1995)
Volume/Issue: Volume 120, Issue 1
Pages: 29-34
ISSN: 0003-2654
Year: 1995
DOI: 10.1039/AN9952000029
Publisher: RSC
Data source: RSC

Abstract:

Internal Quality Control of Analytical Data

Analytical Methods Committee*
Royal Society of Chemistry, Burlington House, Piccadilly, London, UK W1V 0BN
* Correspondence should be addressed to the Secretary, Analytical Methods Committee, Analytical Division, Royal Society of Chemistry, Burlington House, Piccadilly, London W1V 0BN, UK.

It is recognized that effective quality control procedures are essential if analysis is to produce data that are fit for their purpose. This paper outlines the practical approaches to quality control. The control of random error using replication of analysis is described. Different types of reference materials are discussed as a means of controlling systematic error.

Keywords: Analytical quality control; accuracy of results; reference materials

The Analytical Methods Committee has received and has approved for publication the following report from its Statistical Sub-committee.

Report

The constitution of the Sub-committee responsible for the preparation of this paper was Dr. M. Thompson (Chairman), Dr. D. W. Brown (from October 1991), Dr. W. H. Evans (until June 1992), Mr. M. J. Gardner, Dr. E. J. Greenhow, Professor R. Howarth, Professor J. N. Miller (from July 1991), Dr. E. J. Newman, Professor B. D. Ripley, Mrs. K. J. Swan, Mr. A. Williams (from June 1992) and Dr. R. Wood, with Mr. J. J. Wilson as Secretary.

The Analytical Methods Committee acknowledges financial support from the Ministry of Agriculture, Fisheries and Food. The views expressed and the recommendations made in this paper are those of the Analytical Methods Committee and not necessarily those of the Ministry of Agriculture, Fisheries and Food.

Introduction

An analytical result cannot be interpreted unless it is accompanied by knowledge of its associated uncertainty. A simple example demonstrates this principle. Suppose that there is a requirement that a material must not contain more than 10 µg g-1 of a particular constituent. A manufacturer analyses a batch and obtains a result of 9 µg g-1. If the uncertainty on the result is 0.1 µg g-1 (i.e., the true result falls within the range 8.9-9.1 µg g-1 with a high probability) then it can be accepted that the limit is not exceeded. If, in contrast, the uncertainty is 2 µg g-1, there can be no such assurance. The 'meaning' or information content of the measurement thus depends on the uncertainty associated with it.

Analytical results must therefore be accompanied by an explicit quantitative statement of uncertainty, if any definite meaning is to be attached to them or an informed interpretation made. If this requirement cannot be fulfilled, there are strong grounds for questioning whether analysis should be undertaken at all. The second conclusion is that in analysis the measurement uncertainty must be continually reappraised, because it can vary both with time within a laboratory and between different laboratories. This process of continual reappraisal of data quality provides a means of demonstrating and controlling the accuracy of data. The concept of internal quality control of analytical data (IQCAD) should be applicable to all types of chemical analysis.1-4 The success of IQCAD depends on the way in which it is applied. This, in turn, depends on the nature of the analytical job. Modifications of the practice of quality control will need to be made to accommodate the number of samples analysed and the frequency with which analysis is undertaken.
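To make the introductory example concrete, the following sketch applies the decision rule implied above: a result can be accepted as complying with an upper limit only when the whole uncertainty interval lies below that limit. This is an illustration only, not part of the Committee's report; the function name and the simple symmetric interval are assumptions.

```python
# Illustrative sketch only: a compliance check of the kind described in the
# Introduction (limit 10 µg/g, result 9 µg/g, uncertainty 0.1 or 2 µg/g).
# The function name and the symmetric-interval treatment are assumptions.

def complies_with_limit(result, uncertainty, limit):
    """Return 'complies', 'exceeds', or 'inconclusive' for an upper limit.

    The true value is taken to lie, with high probability, within
    result +/- uncertainty, as in the worked example in the text.
    """
    if result + uncertainty <= limit:
        return "complies"            # whole interval below the limit
    if result - uncertainty > limit:
        return "exceeds"             # whole interval above the limit
    return "inconclusive"            # interval straddles the limit

print(complies_with_limit(9.0, 0.1, 10.0))  # 'complies': fit for purpose
print(complies_with_limit(9.0, 2.0, 10.0))  # 'inconclusive': no assurance
```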
The purpose of this paper is to provide guidance on the purposes and implementation of QC procedures. It is recognized that the quality of sampling procedures determines to a large extent the quality of the measurement produced. However, the assessment of sampling quality is in its infancy. This document is therefore restricted to analytical quality control.

Quality Control and Quality Assurance

A number of factors contribute to the production of analytical data of adequate quality. Most important is the recognition of the standard of accuracy that is required of the analytical data. This should be defined with reference to the intended uses of the data. It is seldom possible to foresee all of the potential future applications of analytical results. For this reason, in order to prevent inappropriate interpretation, it is important that a statement of the intended accuracy and a demonstration that it has been achieved should always accompany analytical results, or at least be readily available to those who wish to use the data. From the practical point of view, the following factors are important in meeting accuracy requirements.

Compliance with Sound Principles of Laboratory Practice and Organization

Good laboratory practice or 'quality assurance' (in the general sense) is the essential organizational infrastructure that underlies all reliable analytical measurements. It is concerned with achieving appropriate levels in matters such as staff training and management, adequacy of the laboratory environment, safety, the storage, integrity and identity of samples, record keeping, the maintenance and calibration of instruments and the use of properly documented methods. Failure in any of these areas might undermine vigorous efforts elsewhere to achieve the desired quality of data. In recent years these practices have been codified and formally recognized as essential. However, the prevalence of these favourable circumstances by no means ensures the attainment of appropriate data quality.

Availability of Analytical Methods that are Capable of Producing Data of the Required Quality

It is important that laboratories restrict their choice of methods to those which have been thoroughly tested and have been shown to be free from important fundamental flaws, for the types of analysis and materials of interest. However, even a wise choice of method does not exclude the possibility of serious error. A method does not itself possess any inherent performance characteristics for precision or trueness. There is, for a given method, only the potential to achieve a certain standard of accuracy when the method is applied under a given set of circumstances. The entire set of circumstances, including the chosen method, under which analytical results are produced can be defined as the laboratory's analytical system. The analytical system is responsible for the accuracy of analytical data. It is therefore important to determine and control the performance of the analytical system in order to meet accuracy requirements. This should be the initial aim of any quality control measures undertaken in a laboratory.

In summary, the introduction and use of an analytical method is best seen in three parts: (a) The development stage. This is usually undertaken by a single laboratory to meet an analytical need.
The outcome is to produce a written procedure that has been subjected to preliminary tests to determine the likely range of application and limit of detection. (b) The validation stage. This should be carried out by a group of laboratories. The ideal approach to validation is for extensive testing of precision and trueness to be carried out in several laboratories on test samples of known composition. This stage aims to produce what is often referred to as a standard method, i.e., a method for which there is a range of test data as an illustration of the accuracy that can be achieved. Method validation is outside the scope of this paper, although, as discussed below, it may have elements in common with quality control. (c) The implementation/application stage. This involves incorporation of the method into the analytical system of a given laboratory and the characterization of performance in routine use. It is the stage at which initial trials are followed by quality control tests carried out on a continuing basis. These tests should be regarded as distinct from tests to validate the method [which should have been carried out earlier as part of stage (b)].

The distinction made earlier between validation and implementation of methods is valuable in consideration of two key elements of control over bias: the use of reference materials and participation in interlaboratory tests. Both of these approaches to quality control are used, in a modified form, in method validation. The discussion below of reference materials and interlaboratory tests refers only to their application in routine quality control.

Application of Quality Control Procedures

Quality control is the term used to describe the practical steps undertaken to ensure that the analytical data are adequately free from error. The practice of IQCAD depends on the use of two strategies: the analysis of reference materials to check on trueness, and of some form of replication to check on precision. In this paper, the term 'reference material' is used to denote a test material of specified determinand content; it includes certified reference materials (CRMs), house reference materials (HRMs) and independent calibration (standard) reference materials (SRMs). 'Test material' is used as a general term to describe the type of substance analysed by the laboratory.

The basic approach to IQCAD involves the analysis of control materials (reference substances or test materials of defined composition) alongside the test materials of interest. The outcome of the control analyses forms the basis of a decision regarding the acceptability of the test data. Two key points are worth noting in this context. First, interpretation, wherever possible, should be based on objective statistical criteria. Second, the results of control analyses should be viewed primarily as indicators of the performance of the analytical system, rather than as a guide to the errors associated with individual test results. Hence changes in the apparent accuracy of control determinations are usually taken to signal changes in the system, but cannot be assumed to indicate an identical change for data obtained for test materials analysed at the same time.

General approach: statistical control

The interpretation of the results of quality control analyses depends largely on the concept of statistical control. Statistical control corresponds to stability of operation.
Specifically, it implies that quality control results can be interpreted as arising from a normal population with mean µ and variance σ². Only about 0.3% of results would fall outside the bounds of µ ± 3σ, and such results can justifiably be regarded as being 'out of control', i.e., the system has started to behave differently. Loss of control is taken to imply that the data produced by the system are of unknown accuracy and hence that they cannot be relied upon. The system thus requires investigation and remedial action before further analysis is undertaken. Results falling outside the bounds µ ± 2σ would be sufficiently unusual (about 5%) to act as a warning of a possible problem. The values of σ should be estimated by careful observation of the analytical system over an extended period. Initial estimates may need to be updated as use of the analytical system proceeds. Consideration must be given, when setting control limits, to whether individual results or means of several results are to be controlled. Compliance with statistical control should be monitored graphically with Shewhart control charts.5-8 A less visually informative, but numerically equivalent, approach is to compare values of z = (x - x̄)/σ against appropriate values of the normal deviate, where x is an individual observation and x̄ is the mean value used as the best estimate of µ (an illustrative sketch of this check is given below).

The nature of analytical errors

It is worthwhile to recognize that two main categories of analytical error may arise. These are random errors and systematic errors (giving rise to imprecision and bias, respectively). The importance of categorizing errors in this way lies in the fact that they have different sources, remedies and consequences for the interpretation of data. Random errors determine the precision of analysis. They may be envisaged as causing positive and negative deviations of results about the underlying population mean. Systematic errors are manifested as a displacement of the mean of many determinations from the true value.

Two forms of systematic error are worthy of consideration. Persistent bias may affect the analytical system (for a given type of test material). This will apply over a long period and affect all data. Such bias, if it is small in relation to random error, may only be identifiable after the analytical system has been in operation for a long time. It might be regarded as tolerable, provided it is kept within prescribed bounds. The second type of bias is an adventitious form introduced by a failure of the system (e.g., mistaken use of the wrong size of pipette). This form of bias is not to be tolerated, but, because it is often large, it may easily be detected by IQCAD at the time of occurrence.

To some extent the division between what is regarded as random and systematic error depends on the time-scale over which the system is viewed. Long-term changes of unknown source in a positive and negative direction could be regarded as long-term random effects. Alternatively, if a shorter term view is taken, the same errors could be seen as changes in bias. Another example is 'drift'. Calibration drift within a batch of analyses is a form of bias. However, its effect is to increase the spread of results of replicate analyses. Hence, it might be observed as a contributor to random error. As ever, the view of performance should be based on the likely consequences for data use.
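The sketch below is an illustration only, not part of the report: it applies the statistical-control criteria described above by converting each control result to z = (x - x̄)/σ and classifying it against the 2σ warning and 3σ action limits. The function name and the example figures are assumptions.

```python
# Illustrative sketch of the statistical-control check described above.
# x_bar and sigma are assumed to have been estimated beforehand from an
# extended period of observation of the analytical system.

def control_status(x, x_bar, sigma):
    """Classify a single control result against Shewhart-type limits."""
    z = (x - x_bar) / sigma
    if abs(z) > 3:
        return z, "action"     # ~0.3% of in-control results exceed 3 sigma
    if abs(z) > 2:
        return z, "warning"    # ~5% of in-control results exceed 2 sigma
    return z, "in control"

# Hypothetical control results for a determinand with x_bar = 50.0, sigma = 2.0
for result in [49.2, 51.7, 54.3, 56.8]:
    z, status = control_status(result, x_bar=50.0, sigma=2.0)
    print(f"x = {result:5.1f}  z = {z:+.2f}  {status}")
```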
The batch or 'run'

The quality control envisaged in this paper is largely based on the idea of the analytical batch. The batch can be regarded as a group of one or more test materials that are analysed by a particular method under conditions in which environmental factors that affect data quality are essentially constant, i.e., under 'repeatability conditions'. Results from a particular batch are associated with one or more control measurements. The batch is thus the operational unit for data quality control.

Routine Control of Precision

An analytical result (x) produced in a laboratory on a particular test material can be regarded as a random sample from a potentially infinite normal population with a mean µ and variance σ², if the analytical protocol is executed under conditions where an approximation to statistical control can be assumed. Although several different measures can be regarded as describing 'precision', they are all based on σ. It should be borne in mind that the precision of interest (and therefore that which must be monitored and controlled) can vary according to data use. In a long-term monitoring programme the overall precision of data (including within- and between-batch random errors, often called the 'total' standard deviation) is important. If comparisons are to be made between observations made in the same batch, only short-term precision (as measured by the within-batch standard deviation) is of interest.

Quality Control with Duplicates

The simplest control of precision is achieved by duplicated measurements made on real test materials. The measure of precision monitored in this case is within-batch variation. Unless the test materials analysed are uniform, both in gross composition and in determinand level, σ can be expected to vary from one test item to another. Several approaches may be applied in different circumstances:
(i) All of the test materials are analysed in duplicate, and the differences (x1 - x2) are tested against appropriate control limits based on a specified value of σ. This method is appropriate for small batches of test materials, where statistical control cannot be established.
(ii) A random selection of the test materials (of each type and determinand level) is analysed in duplicate. This would be appropriate for large batches of analyses and is particularly applicable to unstable determinands or those for which no reasonably representative reference material can be devised.
(iii) A few representative HRMs are analysed in duplicate. This applies to the situation where (a) there are no problems with the representativeness of the control material and the stability of reference materials and (b) similar types of samples are analysed on a regular routine basis.

Of these options, (i) and (ii) have the advantage that representativeness (in relation to random error) of the reference material does not have to be assumed. For some applications, it has been noted that the precision of determinations on reference materials is often too good, because of the extreme care with which such materials are prepared. In other areas (such as water analysis), however, this objection may not arise.
The advantage with option (iii) (provided it is applicable) is that data can be obtained for the control of both precision and bias at the same time, and the performance can be monitored using mean and range Shewhart charts (see below). Duplicates intended to control within-batch precision must not be placed adjacent to each other in the analytical sequence, otherwise they will reflect only the smallest possible measure of analytical variability. The best spacing for realistic precision control of within-batch duplicates is at random within each batch.

Interpretation of Duplicate Data

For the simplest approach, each group of test materials used in control measurements should have a small range of composition, so that a common within-batch standard deviation of results can be assumed. (A numerical sketch of these checks is given after Table 1.)

(a) The differences (d = x1 - x2) between duplicate pairs should be examined. The expected distribution of the values of d is zero-centred with a standard deviation of √2σ. Thus the 95% confidence interval of the differences would be bounded (approximately) by -2√2σ and +2√2σ. However, it is often more convenient to consider absolute differences |d|. In this case the expected mean value is 1.128σ and the upper 95% bound of |d| is 2.8σ, or about 3σ. This treatment is consistent with the ISO treatment of repeatability. Only about 1 in 20 absolute differences can be expected to fall above the 3σ limit. An unduly high proportion is taken to show that the system is out of control and is manifesting an unacceptable precision. Only about 1 in 1000 results should fall above 4.6σ, corresponding to the action limit on the conventional Shewhart chart.

(b) An alternative statistical approach is to form the standardized difference z_d = d/(√2σ), which should have a normal distribution with zero mean and unit standard deviation. Individual values could be interpreted on this basis. A group of n such results from a batch could be combined by forming Σz_d² and interpreting the result as a sample from a chi-squared distribution with n degrees of freedom. This alternative treatment is closer to recent trends in interpreting the results of proficiency tests.

If test materials have a wide range of determinand concentrations, no common standard of precision can be assumed for the test materials, but a functional relationship between σ and the determinand concentration X can still be determined. A linear relationship of the form

σ = a + bX

may be expected, where a and b are constants that can be estimated for within-batch precision in the analytical system. If the mean of duplicate results is used as an estimate of the true concentration X, the expected value for σ can be calculated. This enables the duplicate method to be extended to wide ranges of determinand concentration, utilizing either of the methods described previously.

Precision Control using Reference Materials

This may be applied where the reference material is a close analogue of the test materials. A reference material is analysed (n replicates) in each batch of tests and the data plotted on two Shewhart charts, one for the mean result and the other for the range of values. The charts act as a means of monitoring systematic and random errors, respectively. Control limits are set at ±2σ and ±3σ for the chart of mean values; in this case the value of σ corresponds to the batch-to-batch standard deviation of the mean (of n replicates) values. Control limits for the range chart are based on estimates of the mean range, as indicated in Table 1: the limits are calculated by multiplying the mean range by the appropriate factor in the table, taken from BS 600.8

Table 1  Control limits for the range chart (multipliers for the mean range)

No. of replicate      Warning limits        Action limits
analyses on RM (n)    Lower     Upper       Lower     Upper
2                     0.04      2.81        0.00      4.12
3                     0.18      2.17        0.04      2.99
4                     0.29      1.93        0.10      2.58
5                     0.37      1.81        0.16      2.36
6                     0.42      1.72        0.21      2.22
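As referenced under 'Interpretation of Duplicate Data' above, the following sketch applies the two duplicate-pair checks described there: absolute differences |d| compared with the 2.8σ warning and 4.6σ action limits, and the combination of standardized differences z_d = d/(√2σ) into a chi-squared statistic. It is an illustration only; the function name, the example duplicate results and the value of σ are assumptions.

```python
import math

# Illustrative sketch of the duplicate-pair checks described under
# 'Interpretation of Duplicate Data'. sigma is the assumed within-batch
# standard deviation; the duplicate results are hypothetical.

def check_duplicates(pairs, sigma):
    """Flag each duplicate pair and form the combined chi-squared statistic."""
    chi_sq = 0.0
    for x1, x2 in pairs:
        d = x1 - x2
        z_d = d / (math.sqrt(2) * sigma)   # standardized difference, approach (b)
        chi_sq += z_d ** 2
        if abs(d) > 4.6 * sigma:           # ~1 in 1000 when in control (action)
            status = "action"
        elif abs(d) > 2.8 * sigma:         # ~1 in 20 when in control (warning)
            status = "warning"
        else:
            status = "in control"
        print(f"d = {d:+.2f}  z_d = {z_d:+.2f}  {status}")
    # chi_sq can be compared with the chi-squared distribution on
    # len(pairs) degrees of freedom, as in approach (b).
    return chi_sq

duplicates = [(10.2, 10.5), (8.9, 9.75), (12.1, 10.8)]
print("sum of z_d^2 =", round(check_duplicates(duplicates, sigma=0.25), 2))
```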
Control of Bias

Control Materials

The bias (µ - X) of an analytical result is the difference between the mean of the population of analytical results (µ) and the true value (X). In routine analysis, µ is estimated as the mean of a relatively small number of results. In order to estimate bias, it is necessary to have a working estimate of X. This is achieved by the use of a reference material. There is a slight difference when an empirical method is used to measure a chemically ill-defined determinand such as 'fat'. In that instance, trueness may need to be defined in relation to the consensus of a large number of laboratories' results.

When used in IQCAD as control materials, reference materials act as surrogates for the test samples, and must therefore be representative of the test materials (i.e., they should be subject to the same potential sources of error) if a useful check on bias is to be made. To be fully representative, a control material must have the same matrix (in terms of gross composition and of any trace constituents that may have a bearing on accuracy) and it should be in a similar physical form, e.g., state of comminution, as the test materials. There are three other essential characteristics of a control material: it should be adequately stable over the period of interest; it must be possible to divide the control material into essentially identical portions for analysis, to allow its use over an extended period; and it must contain a concentration of the determinand that is appropriate to the range of interest. In practice, it is necessary to make some compromise on the extent to which a control material is representative of the test materials. Nevertheless, analysts should always seek to improve the representativeness of their control materials.

Certified Reference Materials

Certified reference materials (CRMs), when available, are ideal for use as control materials as they are directly traceable to international standards or units. However, several deficiencies limit the use of CRMs for routine QC, viz., (a) their cost; (b) the relatively small amounts that may be purchased; (c) the small ranges of matrix and determinand content that are covered, especially for natural materials; (d) the fact that the uncertainty in the certified determinand content may be large in relation to the allowable error in the application concerned; and (e) the limitation of the CRM concept to determinands and matrices that are stable.

House Reference Materials (HRMs)

For most analyses undertaken at present, appropriate CRMs are not available. It therefore falls to individual laboratories or groups of laboratories to prepare their own 'house' reference materials (HRMs) in a suitable form, and to assign appropriate determinand concentration values to them.

Assigning a true value by analysis

In principle, all that is required to assign a true value to a stable reference material is careful analysis.
However, considerable precautions may be necessary to avoid the very biases that IQCAD seeks to eliminate. This usually requires some form of independent check, such as may be provided by analysis in a separate laboratory or laboratories and, where possible, the use of methods based on different physical and chemical principles.

An alternative way of establishing the determinand concentration in an HRM is to carry out a comparison analysis (i.e., under repeatability conditions) with a suitable CRM (i.e., one which is closely similar in both matrix and determinand concentration). The measured mean value for the HRM is adjusted to allow for the difference found between the mean for the CRM and its certified value. In effect, the CRM is used to calibrate the system for the HRM. This establishes a direct traceability from the CRM to the HRM.

Assigning a true value by formulation

In favourable instances an HRM can be prepared simply by admixture of constituents of known purity in predetermined amounts. For the formulation to be successful, the matrix constituents must be adequately free from the determinand and the added determinand must be from a source independent of the analytical calibration. Problems are often encountered in producing the HRM in a satisfactory physical state or in ensuring that the chemical form of the determinand is realistic.

Spiked control materials

This is a way of creating a reference material in which a value is assigned by a combination of formulation and analysis. It is possible when a test material essentially free of the determinand is available. This material is spiked with a known amount of determinand, after exhaustive analytical checks to ensure that the background level is adequately low. The reference sample prepared in this way is thus of the same matrix as the test materials to be analysed and of known determinand level; the uncertainty in the assigned concentration is limited only by the possible error in the unspiked determination. However, it may not be possible to ensure that the chemical form of the determinand is the same as in real samples. The use of spiked materials is valuable when the determinand is not stable, so that HRMs cannot be established, and when analyses are carried out on an ad hoc basis.

Recovery checks

A limited check on some sources of bias is possible by a check on recovery. This may be useful when determinands or matrices cannot be stabilized and when ad hoc analysis is required. A portion of the test material is spiked with a known amount of the determinand and the 'recovery' (the proportion of the added amount detected) is measured. The primary advantages of recovery checks are that the matrix is representative and the approach is widely applicable: most test materials can be spiked by some means. Again, this approach suffers from the disadvantage noted previously regarding the chemical speciation of the determinand. However, it can normally be assumed that poor performance in a spiking recovery test is strongly indicative of a similar or worse bias for the test material. Spiking and recovery testing as a method of quality control should be distinguished from the method of standard additions (which is a calibration procedure), i.e., the same spiking addition cannot be used to fulfil the roles of both calibration and an independent check on accuracy.
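The recovery check just described reduces to a simple calculation: the difference between the spiked and unspiked results, divided by the amount added. The sketch below is an illustration only; the function name, the acceptance band and the example figures are assumptions rather than recommendations from the report.

```python
# Illustrative sketch of a spiking recovery check as described above.
# The 90-110% acceptance band used here is an assumed example, not a
# recommendation from the report; acceptable recovery depends on the analysis.

def recovery_percent(unspiked_result, spiked_result, amount_added):
    """Recovery = proportion of the added determinand that is detected."""
    return 100.0 * (spiked_result - unspiked_result) / amount_added

rec = recovery_percent(unspiked_result=2.1, spiked_result=11.6, amount_added=10.0)
print(f"recovery = {rec:.1f}%")            # 95.0% in this hypothetical case
if not (90.0 <= rec <= 110.0):
    print("poor recovery: investigate possible bias in the analytical system")
```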
Standard Reference Materials (SRMs)

In some situations it is possible to prepare a control material from pure constituents, for example a standard solution made from high-purity metals. It is essential that the source of the constituents is independent of that used to obtain the calibration materials, otherwise there is no check on bias at all. Checks on purity by spectroscopic or chromatographic means are recommended. This type of control material is probably the least useful, in that it is at best a check only on calibration bias. However, the limited scope of the control means that it is simple to assign a cause to any bias that is detected. SRMs should be prepared at determinand concentrations at or near the top of the calibrated analytical range. This ensures the maximum power to detect calibration bias.

Participation in Interlaboratory Tests

Proficiency testing is a periodic assessment of the performance of individual laboratories and groups of laboratories that is achieved by the distribution, by an independent testing body, of typical materials for unsupervised analysis by the participants.4 Proficiency testing schemes should be used where appropriate, i.e., where the sample type and determinand concentration relate to the samples analysed routinely. They can be regarded as a routine, but relatively infrequent, check on bias. Without the support of a well developed within-laboratory QC system, within which a control material is analysed in every batch of analyses, participation in a proficiency test is not an effective means of controlling errors.

The advantage of proficiency tests is that they can allow the detection of unforeseen sources of bias. They play a key role in demonstrating the need for remedial action in laboratories with long-term problems in achieving data of appropriate quality, and the efficacy or otherwise of any remedies applied. Moreover, successful schemes demonstrate that participants have the ability to produce data of a given quality on the occasions of the tests, and hence have the potential to do so on other occasions.

The limitations of proficiency tests fall into four main categories: (a) they are necessarily restricted in the scope of materials and determinands that can be prepared and circulated for testing, so the performance of a laboratory in a given test often has to be taken as an indication of its capabilities for a wide range of related analyses; (b) the samples analysed are usually identifiable as check samples and may be analysed with more than usual care, hence the standard of accuracy achieved is not necessarily typical of a laboratory's routine operation; (c) they are repeated over a long time-scale and therefore cannot indicate the short-term variations in quality that can occur within laboratories; and (d) they function as good indicators of overall data quality, but do not identify clearly the sources of errors and thereby point to effective remedies.

Application of Routine IQCAD

Approach to IQCAD versus Analytical Load: Various Cases

The practical approach to quality control is determined by the frequency with which batches of analyses are carried out and the size of each batch. Analysis that is performed only occasionally, or perhaps in one batch, does not lend itself to the statistical interpretation that underlies conventional QC systems. It is not possible under these circumstances to establish and maintain a state of statistical control over the measurement process.
Frequent large batches of analyses pose different problems: those of too great a number of QC data and of the possibility of needing to reject large amounts of data if an 'out of control' condition is indicated. Guidance on what to do under these different circumstances is given below.

Small batches (<20) analysed frequently

Recommendation: carry out at least one control analysis of a reference material (spiked as appropriate) per batch. Plot a control chart of individual values. Respond to 'out of control' on the chart by rejecting the batch of data and (where possible) repeating the analyses. Use a variety of duplicate controls if different sample types are analysed at the same time. The frequency of analysis means that sufficient QC data can be generated to establish control. However, it is usually practical to perform only one QC analysis per batch. The frequency of use of control materials recommended above is for general purposes. It may be advisable to use more, or permissible to use fewer, under specific circumstances. The deviation of the actual rate of 'out of control' determinations from that expected does not usually pose problems.

Large batches (>20) analysed frequently

Recommendation: carry out one control analysis of a reference material (spiked as appropriate) per 10 test samples. If the batch size varies (but is still large), arrange to standardize on a fixed number of control determinations per batch, say between three and six. Plot a mean and range control chart. Respond to 'out of control' on the mean chart by rejecting all data if all control analyses agree or, if there is only one discordant result, by investigating its cause and responding accordingly. Respond to 'out of control' on the range chart by checking for sources of random error. This is the ideal approach to IQCAD, provided that the choice of control material(s) is representative. The use of mean and range charts allows checks on precision and bias (see below). Control data can be relied upon to be normally distributed, so the control system will operate according to statistical expectations. If the concentration range is wide (i.e., spanning a factor of 10 or more), two levels should be represented by appropriate HRMs.

Batches analysed infrequently/ad hoc analyses

Recommendation: carry out duplicate determinations on at least one third of the samples, carry out spiking recovery tests on representatives of all sample types (consider standard additions calibration) and, where possible, analyse at least one independently confirmed reference material for each sample type.
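As a concrete illustration of the mean and range check recommended above for large batches, the following sketch classifies the control determinations of one batch against their respective Shewhart limits. It is an illustration only; the function, the example data, x̄, the batch-to-batch standard deviation of the mean and the mean range are assumed, and the range-chart multipliers are the Table 1 values for n = 4 replicates.

```python
# Illustrative mean-and-range check for the control analyses of one large batch,
# as recommended above. x_bar, sigma_m (batch-to-batch SD of the mean of n
# replicates) and mean_range are assumed to come from earlier observation of
# the system; the multipliers are the Table 1 values for n = 4 replicates.

def check_batch(controls, x_bar, sigma_m, mean_range,
                r_warn=(0.29, 1.93), r_action=(0.10, 2.58)):
    """Check one batch's control results on mean and range charts."""
    n = len(controls)
    mean = sum(controls) / n
    rng = max(controls) - min(controls)

    if abs(mean - x_bar) > 3 * sigma_m:
        mean_status = "action"
    elif abs(mean - x_bar) > 2 * sigma_m:
        mean_status = "warning"
    else:
        mean_status = "in control"

    if not (r_action[0] * mean_range <= rng <= r_action[1] * mean_range):
        range_status = "action"
    elif not (r_warn[0] * mean_range <= rng <= r_warn[1] * mean_range):
        range_status = "warning"
    else:
        range_status = "in control"

    return mean_status, range_status

# Four hypothetical control determinations from one batch
print(check_batch([49.6, 50.8, 50.3, 52.9],
                  x_bar=50.0, sigma_m=0.5, mean_range=1.5))
```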
Costs of IQCAD

The practice of quality control requires extra analytical effort and consequently an apparent increase in costs. The amount of extra work varies with circumstances but is likely to be at least 15%. This figure is not alarming when seen in proper perspective: it is estimated that 10% of all analyses undertaken are repeated subsequently because of obvious unreliability. More significantly, the true cost of undetected analytical errors must be substantial, although difficult to quantify. Further, in the current era of the 'educated customer', a laboratory will increasingly face loss of custom or legal liability through the production of incorrect data.

Conclusions

Quality control can in principle determine with a high probability that data released from a laboratory are of appropriate quality. If properly executed, quality control methods can monitor the various aspects of data quality at closely timed intervals, effectively as a continuous part of the analytical process. In intervals where performance falls outside acceptable limits, the data produced can be rejected and, after remedial action, the analysis repeated.

It must be stressed, however, that data quality control, even when properly executed, cannot exclude the possibility of important errors. First, it is subject to statistical uncertainty. Second, it cannot usually detect sporadic outliers due to very short-term disturbances in the analytical system or due to mistakes made with individual samples. Third, it cannot usually detect errors that arise from particular samples falling outside the scope of the method validation. Despite these limitations, quality control is the only recourse available under ordinary circumstances for ensuring that good-quality data are released from a laboratory. When properly executed it is very successful. However, it is abundantly clear, from a wide variety of interlaboratory tests, that many laboratories need to give more attention to the use of quality control techniques.

References

1. Analytical Methods Committee, Analyst, 1989, 114, 1497.
2. Gardner, M. J., Manual on Analytical Quality Control for the Water Industry, Water Research Centre, Medmenham, 1989.
3. Kateman, G., and Pijpers, F. W., Quality Assurance in Analytical Chemistry, Wiley, New York, 1981.
4. Mesley, R. J., Pocklington, W. D., and Walker, R. F., Analyst, 1991, 116, 975.
5. Control Charts, General Guidance and Introduction, ISO 7870, International Organization for Standardization, Geneva, 1993.
6. Shewhart Control Charts, ISO 8258, International Organization for Standardization, Geneva, 1991.
7. Control Charts for Arithmetic Mean with Warning Limits, ISO 7873, International Organization for Standardization, Geneva, 1993.
8. The Application of Statistical Methods to Industrial Standardisation and Quality Control, BS 600, British Standards Institution, London, 1935.

Paper 4/02721C
Received May 9, 1994
Accepted June 13, 1994

 
