|
1. |
Data Analysis Using the Internet: the World Wide Web Scanning Probe Microscopy Data Analysis System |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1001-1006
Philip M. Williams,
Preview
|
|
摘要:
Data Analysis Using the Internet: the World Wide Web Scanning Probe Microscopy Data Analysis System Philip M. Williams*, Martyn C. Davies, Clive J. Roberts and Saul J. B. Tendler Laboratory of Biophysics and Surface Analysis, Department of Pharmaceutical Sciences, University of Nottingham, Nottingham, UK NG7 2RD The first interactive world-wide web-based image analysis system is presented (http://pharm6.pharm.nottingham.ac.uk/ processing/main.html). The system, currently tailored to scanning probe microscopy image data, has been developed to permit the use of software algorithms developed within our laboratory by researchers throughout the world.The implementation and functionality of the scanning probe microscopy server is described. Feedback from users of the facility has demonstrated its value within the research community, and highlighted key operational issues which are to be addressed. A future role of Internet-based data processing software is also discussed.Keywords: Scanning probe microscopy; data analysis; image processing; world wide web Critical analysis of data is fundamental to progressive research. The advent of digital control systems for biophysical instrumentation has allowed the development of software for exhaustive and often complex data analysis. Within the field of scanning probe microscopy (SPM), for example, many software routines to perform image analysis have been presented in the literature. These include the measurement of surface roughness and fractal dimension parameters,1,2 the analysis and correction of tip-induced distortions,3–6 the simulation of image contrast7 –9 and the prediction of tip/substrate interactions.10–12 Although it is encouraging that such significant work is being undertaken, these software algorithms are generally outside the reach of the experimentalist, who is often confined to the software tools provided by the instrument manufacturer.Furthermore, most of the published algorithms have been written within academic research groups, they are therefore mainly specific to the resources available at the time and locked into the computational hardware of the laboratory.As data analysis is almost always performed subsequent to acquisition this restriction to specific software packages and hardware can tie-up expensive instrument time. In our laboratory we make particular effort to remove, where appropriate, the analysis from acquisition and thereby maximize the time for which the instrumentation is available.Our software is thus written in a network-oriented fashion with a centrally available data archive.13 As an off-line analysis system, the software is of value to many research groups. To this end we have developed the SPM server. This is a world wide web (WWW)-based system on which users throughout the world can process their data using some of the software routines which we have developed.As the system runs within our laboratory we have removed the need to support many users and their differing, often esoteric hardware. The SPM server has been developed to provide our SPM data analysis software and is thus tailored to SPM image data. However, the provision of data analysis is not restricted to this area of research and the SPM server is a first example of a WWW-based interactive data analysis system. Here, we outline the implementation of the system and indicate its usefulness through example.Technical details of the implementation will be given elsewhere. Implementation The SPM server has been implemented under the client-server model in which a user’s WWW browser operates as the client. Data processing and image rendering is handled by the server, which is currently running on a Hewlett Packard 735 workstation. There are three reasons for adopting the client-server model rather than a peer-to-peer or Java-based system.Firstly, the service should be available to as many people as possible and a pre-requisite for a Java-compatible browser may limit the potential user base. Furthermore, an increasing number of industrial companies are restricting access to the Internet and specifically not allowing peer-to-peer operations for security reasons. Secondly, many of the routines which are published on the server have significant processing overheads, and would be severely restricted in performance under Windows and Macintosh environments and under the interpreted Java language.The scanning probe tip derivation procedure, for example, may require of the order of 1012 operations before convergence is achieved. Such processing can only be achieved realistically on dedicated high-performance workstations. Finally, the SPM server is at the first stage of a projected development to a fully integrated and distributed data analysis resource. The protocols used within the server to control message handling and job control have been taken from the Genesis Graphics System13 and the foreseen development includes the ability to access the functionality of the SPM server from graphical user interfaces (GUIs) other than a WWW browser. This functionality would permit the possible utilization of routines, such as those within the SPM server, from instrument control and other proprietary software.The SPM server uses posted forms within a series of HTML pages served by the NCSA-HTTPD server (vl.5.2a).These forms pass information to the processing modules, written in C/C++, residing within the cgi-bin directory of the server. Fig. 1 shows the main processing page as viewed on a WWW browser. The SPM server splits the browser page using four frames (HTML 2). The top frame acts as a menu bar in which various classes of operation can be chosen. These classes include data input/output, feature measurement, image morphology and surface reconstruction.The left-hand frame acts as a control window in which the currently selected class of processing operation are controlled. For instance, when the morphology class is chosen from the menu, the left-hand frame changes to reflect the various processing operations which are available in this class, such as erosion, dilation and watershed. The larger right-hand frame displays the current image. The display mode can be changed by selecting the Configure class of operation from the menu.The smaller frame, in the bottom right of the browser window, is the Index frame. This provides a historical record of the operations which have been instigated and permits previous data to be recalled from the server. Analyst, October 1997, Vol. 122 (1001–1006) 1001Data up-loaded to the SPM server is held for two working days and then deleted. The SPM server recognizes the user and serves the appropriate data to the browser by analysis of the TCP/IP internet address on which the browser is located. This system, therefore, eliminates the need for usernames and passwords, but does inhibit a user from re-analyzing their data from more than one machine (or from a machine which obtains its internet address dynamically from a server) or the use by multiple users from one central compute resource (such as X terminals hanging off a central compute facility).The up-loaded image data is held in its original form and also translated to an internal data format within a temporary image file.These working files use 4 b floating point numbers for the pixel values and maintain the image dimensions and scaling information in a 20 b header. Subsequent processing of the data causes the creation of a new internal file. The original data is maintained so that the processed image can be returned to the user in the format in which the data was first up-loaded. Up-loading is achieved using the multipart/form-data tag (at the time of writing only supported by Netscape’s Navigator software v. 2 and above). Processing modules receive their instructions from the POSTed form data and read the currently active image data from the temporary file. Using the HTML TARGET tag the standard output of the processing modules are directed to the display window and on successful completion use the REFRESH tag to reload the image display routine to show the new image. The image display routine also causes the Index frame to be refreshed so that the result of the processing operation is displayed as an entry within the index.This Index frame is automatically refreshed every 5 min to catch the results of batch processing operations. Operation The following class of operation are currently available on the server: Data Input/Output These operations control the passage of image data between the client and the server. The Data I/O options permit the user to upload data to the server, download processed data from the server, and load example image data. The formats currently supported by the server include TopoMetrix, Digital Instruments, Polaron SP300, DME, VG STM2000 and Park Scientific scanning probe files, and BioRad confocal images in the PIC format.The instrument manufacturers happily provide the necessary file format information, and software routines in some instances. Data is usually returned in the same format in which it was uploaded, but the user can use the SPM server to translate files from one format to another by selecting the desired format from the options presented. The operation of the server is best described by following its application to the processing of an SPM image.Fig. 2 shows an image of raw data acquired during atomic force microscopy (AFM) imaging of a sample of poly(ethylene oxide). The data was acquired in contact mode using an SP300 AFM from VG Microtech. This data is used in the following analysis by the SPM server.Without processing the image reveals no fine surface topography as the large slope of the sample, from bottom left to top right, saturates the grey scale. The polymer surface appears uniform. However, it is clear that careful processing of such data is required before any interpretation of its content can be made. Fig. 3 shows the interface following up-loading of the polymer data to the server. Noise Currently only one noise reduction method is provided by the SPM server, that of median filtering.This is due to its superior performance with SPM data.14–16 Twelve types of matrix are provided including several multiple pass hybrid filters.16 The user can specify the number of recursions for the filter (which Fig. 1 A view of the SPM server in operation showing a scanning probe image rendered as a three dimensional surface. The screen is divided into four regions; the menu bar (top), control frame (left), view window (right) and index frame (bottom right). 1002 Analyst, October 1997, Vol. 122automatically stops if convergence is achieved), whether a ‘modal’ filter is required by removing outlying values from the distribution, and a noise threshold which only modifies pixel values if their values are further from the filtered value than the specified noise. The parameters default to two passes of a three pixel X/I hybrid median, which was found to be optimal for a variety of SPM test data.14 Transform The transform options currently include image levelling and image thresholding. Three types of levelling operation are provided; namely plane fitting, minimum ranking and rolling ball.The minimum ranking filter has previously been demonstrated to be very useful for the removal of uneven substrate effects from SPM image data.15 Due to the speed advantages,14 a recursive 3 3 3 matrix filter is used and the number of iterations is specified by the user from within the control window.Fig. 4 shows the result of levelling the polymer data using 12 passes of a 3 3 3 minimum ranking filter after application of the median noise reduction process described above. The bow in the image, due to cantilever twisting and piezo non-linearity, has been removed. This process now clearly reveals the dendritic nature of the sample. The polymer has formed distinct domains on the mica substrate during the drying process. The rolling ball levelling procedure is a surface morphology approach to determine the surface features which have a radius of curvature smaller than the size of the sphere specified in the control window.Image thresholding is used to segment features in an image based on their height (usually above the surface). Following appropriate levelling, thresholding can be used to discriminate adsorbates from an image. Both automatic and manual thresholding is provided by the SPM server. The manual method causes the image to be thresholded above the percentage level provided by the user (0% is the lowest pixel, 100% is the highest).The automatic procedure uses the method described by Pun17,18 and Kapur19 in which a threshold value is chosen which maximizes the sum of the entropies of the two halves (see ref. 14). Fig. 5 shows the application of automatic thresholding to the scanning force image of poly(ethylene oxide). The dendritic polymer features have been clearly discriminated from the substrate.Measurement The measurement options contain feature detection filters based on the Laplacian, Sobel and Kirsch filters (see ref. 20) and a surface parameter calculator. This later module calculates the standard surface roughness values for the current image and also the ‘volume’ of the image. This is the volume of the space between the image surface and the instrumental zero and has been used to analyze dynamic effects such as polymer hydrolysis21 and protein release from model drug delivery systems.22 Fig. 2 An image of the raw data obtained in an AFM study of poly(ethylene oxide) on mica. The large slope of the sample inhibits the discrimination of the fine surface structure when displayed in this grey-scale representation. Fig. 3 The SPM server interface after the raw AFM data from Fig. 2 has been uploaded. Analyst, October 1997, Vol. 122 1003Morphology Two dimensional image morphology operations included on the SPM server include dilation, erosion, skeletonization and watershed.As these are binary operations they all first perform an image threshold at the 50% level to convert a potential height image to a binary form. The skeletonization procedure is a multiple pass erosion which uses a pixel fate table to ensure that pixels within lines of one pixel width are not removed. The result of a skeletonization procedure is the median (mid-line) of a feature. Skeletonization has been shown to be useful in the analysis of dendritic features and in the determination of the centres of features.14,22 The watershed procedure again uses pixel fate tables in the method described by Russ20 to determine the watershed (point of contact) between merged features.Fig. 6 shows the result of performing skeletonization of the data shown in Fig. 5. The resulting lines are the meridian lines of the dendrites and permit these features to be characterized by the number of branches, ends and the length of the meridians.Such analysis of these dendritic features indicates two possible methods by which they are formed on drying; that of diffusion limited aggregation14,23,24 and amorphous crystal growth under Laplacian fluid flow.14,25 Reconstruction A well documented problem encountered in SPM imaging is where the shape of the tip distorts the image acquired. 3–6,14,15,26–30 It is desirable to remove where possible the distortions incurred as a consequence of probe geometry and to assign to an image where such distortions have occurred.The reconstruction options of the SPM server centre around this removal of the effect of tip shape from SPM topographic data. The options provided include the three dimensional erosion procedure26 (the so-called Envelope Reconstruction method of Keller27) in which the user can specify an ideal probe of parabolic, hyperbolic or pyramidal geometry and remove its effect from the data, or can use the non-assumptive tip derivation software being developed within the laboratory.28 This software uses a morphological approach to determine the probe of least-sharp non-regular geometry which (mathematically) could have formed the image.29 This non-assumptive method has significant advantages over other reconstruction systems as no independent measure of the probe shape is required.30 Fig. 7 is a three dimensional view of the probe extracted from the data shown in Fig. 4. The pyramidal nature of the silicon nitride AFM probe is clearly evident.Due to their computational overhead, the reconstruction operations are performed in a batch mode on the SPM server. The server has been configured to run one job at a time. On the server, the results of a batch job are indicated as an entry in the Index frame marked by a red dot. Configure The user is able to configure the operation of the SPM server and can specify how the image is displayed (either top view, a shaded view or a fixed-angle three dimensional representation) and the colour map used (grey, heat or rainbow scales).The user is also able to change the dimensions of the data, both the width Fig. 4 The result of applying the default noise-reduction operation and twelve passes of a 3 3 3 minimum ranking background determination filter on the poly(ethylene oxide) data. Fig. 5 Automatic thresholding of the data shown in Fig. 4, using Transform option of the SPM server, clearly discriminates the adsorbate from substrate.Fig. 6 The result of performing skeletonization on the data shown in Fig. 5. Such processing permits the dendritic nature of the polymer features to be classified. Fig. 7 A three dimensional view of the apparent AFM probe which has been determined from the poly(ethylene oxide) data. The pyramidal nature of the probe is clearly evident. 1004 Analyst, October 1997, Vol. 122and height in nanometers and the vertical height from the lowest to the highest data points.Usage and Feedback The SPM server has been operational for nine months and in that time has been accessed over 700 times from sites around the world. Fig. 8 shows a breakdown of the accesses by domain. Although many of the accesses have been single hits, there is a core of users from around the world who regularly use the system. The most-used processing module has been the tip derivation procedure. This has highlighted the value of this computational method but also its current susceptibility to instrumental noise.We have welcomed feedback from users and have used this advice and comments to construct a managed update and development programme for both the SPM server and the processing modules. A major criticism has been levied not at the server itself but at the speed at which the interactions take place. The amount and foreseeable rise in Internet traffic has dictated that a textural, electronic-mail interface to the server is required.In this system users will be able to instruct and control the server using a series of commands within mail messages and images transferred as attachments to these documents. The WWW interface will still be available, and the two systems will be interchangeable. However, in order to use this message control, usernames and passwords will have to be adopted thereby replacing the automatic user identification currently employed. The SPM server is not tailored to handle many users simultaneously.Although some of the processing is performed within a parallel virtual machine (PVM) model of three machines, the SPM server itself handles both the WWW server and the image processing. With only 32 Mb of physical memory currently available on the server response can become very slow when more than four users are data processing concurrently. An ideal system would divulge its processing out to other processors. We are currently investigating the possibility of linking the PVM model with an object broker (CORBA) which will permit the distribution of code throughout the network, both to dedicated server hardware and also back to the user’s machine.It is clear that the SPM server has application outside the field of scanning probe microscopy. Interaction with users of other imaging techniques, specifically scanning electron and confocal, has shown that such widely available processing facilities would be of great use and benefit.The SPM server, therefore, will evolve into a generic data analysis system. It currently handles one confocal data format (BioRad) and will be able to process three dimensional data, such as those obtained by confocal sectioning, in the future. We also foresee the application of the data processing routines to many sequential data, including spectroscopic and chromatographic, and not just those from the imaging sciences. We see the SPM server as the first stage in the development of an integrated and distributed data analysis software architecture.By using the interprocess communication methods of the Genesis Graphics System, an extensible application program interface (API) and the PVM model, the modules of the SPM server can be accessed easily from other software systems. The integration with the Genesis Graphics System has demonstrated how scanning probe image analysis is highly complementary to many techniques and forms a significant role within bio-informatics.By adopting an API and allowing network interoperability we aim to develop the components of the SPM server into widely adopted bio-informatics modules for image analysis. This provision will be made through the evolution of the current distributed computing environment (DCE) interprocess communication methods to a CORBAcompliant system. Conclusion The SPM server has grown significantly in functionality since its conception and first incarnation in early 1996.The number of regular users is continuing to grow and the valued feedback from the users has highlighted many ways in which the service can be improved. A programme of modifications has been undertaken to maximize the usability of the system and widen its availability. We believe the significant use of the SPM server is in the substantiation of the need for readily available, network compliant standard software tools. The synergy between biophysical image analysis and the field of bioinformatics is an opportunity to develop new techniques for data handling, manipulation and processing.This work has been funded by the EPSRC Scanning Probe Microscopy Initiative. S.J.B.T. is a Nuffield Foundation Science Research Fellow. References 1 Pancorbo, M., and Anguilar, E., Surf. Sci., 1991, 251, 418. 2 Aguilar, M., and Anguiano, E., J. Microsc., 1992, 167, 197. 3 Vesenka, J., Manne, S., Giberson, R., Marsh, T., and Henderson, E., Biophys. J., 1993, 65, 510. 4 Markiewicz, P., and Goh, M. C., Langmuir, 1994, 10, 5. 5 Bonnet, N., Dongmo, S., Vautrot, P., and Troyon, M., Microsc. Microanal. Microstruct., 1994, 5, 477. 6 Tegenfeldt, J. O., and Montelius, L., Appl. Phys. Lett., 1995, 66, 1068. 7 Sumpter, B. G., Getino, C., Noid, D. W., and Wunderlich, B., Makromol. Chem., Theory Simul., 1993, 2, 55. 8 Hallmark, V. M., and Chiang, S., Surf. Sci., 1995, 329, 255. 9 Sasaki, N., and Tsukada, M., Jpn. J. Appl. Phys., 1995, 34, 3319. 10 Unger, M. A., O’Connor, S. D., and Baldeschwielder, J. D., J. Vac. Sci. Technol., 1995, B14, 1302. 11 Grubmuller, H., Heymann, B., and Tavan, P., Science, 1996, 271, 997. 12 Moore, A., Williams, P. M., Davies, M. C., Jackson, D. E., Roberts, C. J., and Tendler, S. J. B., unpublished work. 13 Williams, P. M., Davies, M. C., Jackson, D. E., Roberts, C. J., Tendler, S. J. B., and Wilkins, M. J., Nanotechnology, 1991, 2, 172. 14 Williams, P. M., PhD Thesis, University of Nottingham, UK, 1995. 15 Williams, P. M., Davies, M. C., Jackson, D. E., Roberts, C. J., and Tendler, S. J. B., J. Vac. Sci. Technol., 1994, B12, 1517. 16 Kokaram, A. C., Persad, N., Lasenby, J., Fitzgerald, W. J., McKinnon A., and Welland, M., Appl. Optics, 1995, 34, 5121. 17 Pun, T., Signal Processing, 1980, 2, 223. 18 Pun, T., CVGIP, 1981, 16, 210. 19 Kapur, J. N., Sahoo, P. K., and Wong, A. K. C., CVGIP, 1985, 29, 273. 20 Russ, J. C., The Image Processing Handbook, CRC press, 2nd edn., 1994.Fig. 8 Analysis of the accesses to the SPM server categorized by Internet domain. Analyst, October 1997, Vol. 122 100521 Chen, X., Davies, M. C., Roberts, C. J., Shakesheff, K. M., Tendler, S. J. B., and Williams, P. M., Anal. Chem., 1996, 68, 1451. 22 Shakesheff, K. M., Davies, M. C., Heller, J., Roberts, C. J., Tendler, S. J. B., and Williams, P. M., Langmuir, 1995, 11, 2547. 23 Witten, T. A., and Sander, L. M., Phys. Rev. B., 1983, 27, 5686. 24 Maloy, K. J., Feder, J., and Jossang, T., Phys.Rev. Lett., 1985, 55, 2688. 25 Mullins, W. W., and Sekerka, R. F., J. Appl. Phys., 1963, 34, 323. 26 Pingali, G. S., and Jain, R., Proc. IEEE Workshop Appl. Comp. Vision, 1992, 282. 27 Keller, D., and Franke, F. S., Surf. Sci., 1993, 294, 409. 28 Williams, P. M., Shakesheff, K. M., Davies, M. C., Jackson, D. E., Roberts, C. J., and Tendler, S. J. B., J. Vac. Sci. Technol., 1996, B14, 1557. 29 Villarrubia, J. S., Surf. Sci., 1994, 321, 287. 30 Williams, P.M., Shakesheff, K. M., Davies, M. C., Jackson, D. E., Roberts, C. J., and Tendler, S. J. B., Langmuir, 1996, 12, 3468. Paper 7/03049E Received May 6, 1997 Accepted August 19, 1997 1006 Analyst, October 1997, Vol. 122 Data Analysis Using the Internet: the World Wide Web Scanning Probe Microscopy Data Analysis System Philip M. Williams*, Martyn C. Davies, Clive J. Roberts and Saul J. B. Tendler Laboratory of Biophysics and Surface Analysis, Department of Pharmaceutical Sciences, University of Nottingham, Nottingham, UK NG7 2RD The first interactive world-wide web-based image analysis system is presented (http://pharm6.pharm.nottingham.ac.uk/ processing/main.html).The system, currently tailored to scanning probe microscopy image data, has been developed to permit the use of software algorithms developed within our laboratory by researchers throughout the world. The implementation and functionality of the scanning probe microscopy server is described.Feedback from users of the facility has demonstrated its value within the research community, and highlighted key operational issues which are to be addressed. A future role of Internet-based data processing software is also discussed. Keywords: Scanning probe microscopy; data analysis; image processing; world wide web Critical analysis of data is fundamental to progressive research. The advent of digital control systems for biophysical instrumentation has allowed the development of software for exhaustive and often complex data analysis.Within the field of scanning probe microscopy (SPM), for example, many software routines to perform image analysis have been presented in the literature. These include the measurement of surface roughness and fractal dimension parameters,1,2 the analysis and correction of tip-induced distortions,3–6 the simulation of image contrast7 –9 and the prediction of tip/substrate interactions.10–12 Although it is encouraging that such significant work is being undertaken, these software algorithms are generally outside the reach of the experimentalist, who is often confined to the software tools provided by the instrument manufacturer.Furthermore, most of the published algorithms have been written within academic research groups, they are therefore mainly specific to the resources available at the time and locked into the computational hardware of the laboratory. As data analysis is almost always performed subsequent to acquisition this restriction to specific software packages and hardware can tie-up expensive instrument time. In our laboratory we make particular effort to remove, where appropriate, the analysis from acquisition and thereby maximize the time for which the instrumentation is available.Our software is thus written in a network-oriented fashion with a centrally available data archive.13 As an off-line analysis system, the software is of value to many research groups.To this end we have developed the SPM server. This is a world wide web (WWW)-based system on which users throughout the world can process their data using some of the software routines which we have developed. As the system runs within our laboratory we have removed the need to support many users and their differing, often esoteric hardware. The SPM server has been developed to provide our SPM data analysis software and is thus tailored to SPM image data.However, the provision of data analysis is not restricted to this area of research and the SPM server is a first example of a WWW-based interactive data analysis system. Here, we outline the implementation of the system and indicate its usefulness through example. Technical details of the implementation will be given elsewhere. Implementation The SPM server has been implemented under the client-server model in which a user’s WWW browser operates as the client.Data processing and image rendering is handled by the server, which is currently running on a Hewlett Packard 735 workstation. There are three reasons for adopting the client-server model rather than a peer-to-peer or Java-based system. Firstly, the service should be available to as many people as possible and a pre-requisite for a Java-compatible browser may limit the potential user base. Furthermore, an increasing number of industrial companies are restricting access to the Internet and specifically not allowing peer-to-peer operations for security reasons.Secondly, many of the routines which are published on the server have significant processing overheads, and would be severely restricted in performance under Windows and Macintosh environments and under the interpreted Java language. The scanning probe tip derivation procedure, for example, may require of the order of 1012 operations before convergence is achieved. Such processing can only be achieved realistically on dedicated high-performance workstations.Finally, the SPM server is at the first stage of a projected development to a fully integrated and distributed data analysis resource. The protocols used within the server to control message handling and job control have been taken from the Genesis Graphics System13 and the foreseen development includes the ability to access the functionality of the SPM server from graphical user interfaces (GUIs) other than a WWW browser.This functionality would permit the possible utilization of routines, such as those within the SPM server, from instrument control and other proprietary software. The SPM server uses posted forms within a series of HTML pages served by the NCSA-HTTPD server (vl.5.2a). These forms pass information to the processing modules, written in C/C++, residing within the cgi-bin directory of the server. Fig. 1 shows the main processing page as viewed on a WWW browser. The SPM server splits the browser page using four frames (HTML 2).The top frame acts as a menu bar in which various classes of operation can be chosen. These classes include data input/output, feature measurement, image morphology and surface reconstruction. The left-hand frame acts as a control window in which the currently selected class of processing operation are controlled. For instance, when the morphology class is chosen from the menu, the left-hand frame changes to reflect the various processing operations which are available in this class, such as erosion, dilation and watershed.The larger right-hand frame displays the current image. The display mode can be changed by selecting the Configure class of operation from the menu. The smaller frame, in the bottom right of the browser window, is the Index frame. This provides a historical record of the operations which have been instigated and permits previous data to be recalled from the server.Analyst, October 1997, Vol. 122 (1001–1006) 1001Data up-loaded to the SPM server is held for two working days and then deleted. The SPM server recognizes the user and serves the appropriate data to the browser by analysis of the TCP/IP internet address on which the browser is located. This system, therefore, eliminates the need for usernames and passwords, but does inhibit a user from re-analyzing their data from more than one machine (or from a machine which obtains its internet address dynamically from a server) or the use by multiple users from one central compute resource (such as X terminals hanging off a central compute facility).The up-loaded image data is held in its original form and also translated to an internal data format within a temporary image file. These working files use 4 b floating point numbers for the pixel values and maintain the image dimensions and scaling information in a 20 b header. Subsequent processing of the data causes the creation of a new internal file.The original data is maintained so that the processed image can be returned to the user in the format in which the data was first up-loaded. Up-loading is achieved using the multipart/form-data tag (at the time of writing only supported by Netscape’s Navigator software v. 2 and above). Processing modules receive their instructions from the POSTed form data and read the currently active image data from the temporary file. Using the HTML TARGET tag the standard output of the processing modules are directed to the display window and on successful completion use the REFRESH tag to reload the image display routine to show the new image.The image display routine also causes the Index frame to be refreshed so that the result of the processing operation is displayed as an entry within the index. This Index frame is automatically refreshed every 5 min to catch the results of batch processing operations. Operation The following class of operation are currently available on the server: Data Input/Output These operations control the passage of image data between the client and the server.The Data I/O options permit the user to upload data to the server, download processed data from the server, and load example image data. The formats currently supported by the server include TopoMetrix, Digital Instruments, Polaron SP300, DME, VG STM2000 and Park Scientific scanning probe files, and BioRad confocal images in the PIC format.The instrument manufacturers happily provide the necessary file format information, and software routines in some instances. Data is usually returned in the same format in which it was uploaded, but the user can use the SPM server to translate files from one format to another by selecting the desired format from the options presented. The operation of the server is best described by following its application to the processing of an SPM image.Fig. 2 shows an image of raw data acquired during atomic force microscopy (AFM) imaging of a sample of poly(ethylene oxide). The data was acquired in contact mode using an SP300 AFM from VG Microtech. This data is used in the following analysis by the SPM server. Without processing the image reveals no fine surface topography as the large slope of the sample, from bottom left to top right, saturates the grey scale. The polymer surface appears uniform. However, it is clear that careful processing of such data is required before any interpretation of its content can be made. Fig. 3 shows the interface following up-loading of the polymer data to the server. Noise Currently only one noise reduction method is provided by the SPM server, that of median filtering. This is due to its superior performance with SPM data.14–16 Twelve types of matrix are provided including several multiple pass hybrid filters.16 The user can specify the number of recursions for the filter (which Fig. 1 A view of the SPM server in operation showing a scanning probe image rendered as a three dimensional surface. The screen is divided into four regions; the menu bar (top), control frame (left), view window (right) and index frame (bottom right). 1002 Analyst, October 1997, Vol. 122automatically stops if convergence is achieved), whether a ‘modal’ filter is required by removing outlying values from the distribution, and a noise threshold which only modifies pixel values if their values are further from the filtered value than the specified noise.The parameters default to two passes of a three pixel X/I hybrid median, which was found to be optimal for a variety of SPM test data.14 Transform The transform options currently include image levelling and image thresholding. Three types of levelling operation are provided; namely plane fitting, minimum ranking and rolling ball. The minimum ranking filter has previously been demonstrated to be very useful for the removal of uneven substrate effects from SPM image data.15 Due to the speed advantages,14 a recursive 3 3 3 matrix filter is used and the number of iterations is specified by the user from within the control window.Fig. 4 shows the result of levelling the polymer data using 12 passes of a 3 3 3 minimum ranking filter after application of the median noise reduction process described above. The bow in the image, due to cantilever twisting and piezo non-linearity, has been removed.This process now clearly reveals the dendritic nature of the sample. The polymer has formed distinct domains on the mica substrate during the drying process. The rolling ball levelling procedure is a surface morphology approach to determine the surface features which have a radius of curvature smaller than the size of the sphere specified in the control window. Image thresholding is used to segment features in an image based on their height (usually above the surface). Following appropriate levelling, thresholding can be used to discriminate adsorbates from an image.Both automatic and manual thresholding is provided by the SPM server. The manual method causes the image to be thresholded above the percentage level provided by the user (0% is the lowest pixel, 100% is the highest). The automatic procedure uses the method described by Pun17,18 and Kapur19 in which a threshold value is chosen which maximizes the sum of the entropies of the two halves (see ref. 14). Fig. 5 shows the application of automatic thresholding to the scanning force image of poly(ethylene oxide). The dendritic polymer features have been clearly discriminated from the substrate. Measurement The measurement options contain feature detection filters based on the Laplacian, Sobel and Kirsch filters (see ref. 20) and a surface parameter calculator. This later module calculates the standard surface roughness values for the current image and also the ‘volume’ of the image.This is the volume of the space between the image surface and the instrumental zero and has been used to analyze dynamic effects such as polymer hydrolysis21 and protein release from model drug delivery systems.22 Fig. 2 An image of the raw data obtained in an AFM study of poly(ethylene oxide) on mica. The large slope of the sample inhibits the discrimination of the fine surface structure when displayed in this grey-scale representation.Fig. 3 The SPM server interface after the raw AFM data from Fig. 2 has been uploaded. Analyst, October 1997, Vol. 122 1003Morphology Two dimensional image morphology operations included on the SPM server include dilation, erosion, skeletonization and watershed. As these are binary operations they all first perform an image threshold at the 50% level to convert a potential height image to a binary form. The skeletonization procedure is a multiple pass erosion which uses a pixel fate table to ensure that pixels within lines of one pixel width are not removed.The result of a skeletonization procedure is the median (mid-line) of a feature. Skeletonization has been shown to be useful in the analysis of dendritic features and in the determination of the centres of features.14,22 The watershed procedure again uses pixel fate tables in the method described by Russ20 to determine the watershed (point of contact) between merged features.Fig. 6 shows the result of performing skeletonization of the data shown in Fig. 5. The resulting lines are the meridian lines of the dendrites and permit these features to be characterized by the number of branches, ends and the length of the meridians. Such analysis of these dendritic features indicates two possible methods by which they are formed on drying; that of diffusion limited aggregation14,23,24 and amorphous crystal growth under Laplacian fluid flow.14,25 Reconstruction A well documented problem encountered in SPM imaging is where the shape of the tip distorts the image acquired. 3–6,14,15,26–30 It is desirable to remove where possible the distortions incurred as a consequence of probe geometry and to assign to an image where such distortions have occurred. The reconstruction options of the SPM server centre around this removal of the effect of tip shape from SPM topographic data. The options provided include the three dimensional erosion procedure26 (the so-called Envelope Reconstruction method of Keller27) in which the user can specify an ideal probe of parabolic, hyperbolic or pyramidal geometry and remove its effect from the data, or can use the non-assumptive tip derivation software being developed within the laboratory.28 This software uses a morphological approach to determine the probe of least-sharp non-regular geometry which (mathematically) could have formed the image.29 This non-assumptive method has significant advantages over other reconstruction systems as no independent measure of the probe shape is required.30 Fig. 7 is a three dimensional view of the probe extracted from the data shown in Fig. 4. The pyramidal nature of the silicon nitride AFM probe is clearly evident. Due to their computational overhead, the reconstruction operations are performed in a batch mode on the SPM server. The server has been configured to run one job at a time. On the server, the results of a batch job are indicated as an entry in the Index frame marked by a red dot.Configure The user is able to configure the operation of the SPM server and can specify how the image is displayed (either top view, a shaded view or a fixed-angle three dimensional representation) and the colour map used (grey, heat or rainbow scales). The user is also able to change the dimensions of the data, both the width Fig. 4 The result of applying the default noise-reduction operation and twelve passes of a 3 3 3 minimum ranking background determination filter on the poly(ethylene oxide) data.Fig. 5 Automatic thresholding of the data shown in Fig. 4, using Transform option of the SPM server, clearly discriminates the adsorbate from substrate. Fig. 6 The result of performing skeletonization on the data shown in Fig. 5. Such processing permits the dendritic nature of the polymer features to be classified. Fig. 7 A three dimensional view of the apparent AFM probe which has been determined from the poly(ethylene oxide) data.The pyramidal nature of the probe is clearly evident. 1004 Analyst, October 1997, Vol. 122and height in nanometers and the vertical height from the lowest to the highest data points. Usage and Feedback The SPM server has been operational for nine months and in that time has been accessed over 700 times from sites around the world. Fig. 8 shows a breakdown of the accesses by domain. Although many of the accesses have been single hits, there is a core of users from around the world who regularly use the system. The most-used processing module has been the tip derivation procedure.This has highlighted the value of this computational method but also its current susceptibility to instrumental noise. We have welcomed feedback from users and have used this advice and comments to construct a managed update and development programme for both the SPM server and the processing modules. A major criticism has been levied not at the server itself but at the speed at which the interactions take place.The amount and foreseeable rise in Internet traffic has dictated that a textural, electronic-mail interface to the server is required. In this system users will be able to instruct and control the server using a series of commands within mail messages and images transferred as attachments to these documents. The WWW interface will still be available, and the two systems will be interchangeable.However, in order to use this message control, usernames and passwords will have to be adopted thereby replacing the automatic user identification currently employed. The SPM server is not tailored to handle many users simultaneously. Although some of the processing is performed within a parallel virtual machine (PVM) model of three machines, the SPM server itself handles both the WWW server and the image processing. With only 32 Mb of physical memory currently available on the server response can become very slow when more than four users are data processing concurrently.An ideal system would divulge its processing out to other processors. We are currently investigating the possibility of linking the PVM model with an object broker (CORBA) which will permit the distribution of code throughout the network, both to dedicated server hardware and also back to the user’s machine. It is clear that the SPM server has application outside the field of scanning probe microscopy. Interaction with users of other imaging techniques, specifically scanning electron and confocal, has shown that such widely available processing facilities would be of great use and benefit.The SPM server, therefore, will evolve into a generic data analysis system. It currently handles one confocal data format (BioRad) and will be able to process three dimensional data, such as those obtained by confocal sectioning, in the future.We also foresee the application of the data processing routines to many sequential data, including spectroscopic and chromatographic, and not just those from the imaging sciences. We see the SPM server as the first stage in the development of an integrated and distributed data analysis software architecture. By using the interprocess communication methods of the Genesis Graphics System, an extensible application program interface (API) and the PVM model, the modules of the SPM server can be accessed easily from other software systems.The integration with the Genesis Graphics System has demonstrated how scanning probe image analysis is highly complementary to many techniques and forms a significant role within bio-informatics. By adopting an API and allowing network interoperability we aim to develop the components of the SPM server into widely adopted bio-informatics modules for image analysis. This provision will be made through the evolution of the current distributed computing environment (DCE) interprocess communication methods to a CORBAcompliant system.Conclusion The SPM server has grown significantly in functionality since its conception and first incarnation in early 1996. The number of regular users is continuing to grow and the valued feedback from the users has highlighted many ways in which the service can be improved. A programme of modifications has been undertaken to maximize the usability of the system and widen its availability.We believe the significant use of the SPM server is in the substantiation of the need for readily available, network compliant standard software tools. The synergy between biophysical image analysis and the field of bioinformatics is an opportunity to develop new techniques for data handling, manipulation and processing. This work has been funded by the EPSRC Scanning Probe Microscopy Initiative. S.J.B.T. is a Nuffield Foundation Science Research Fellow.References 1 Pancorbo, M., and Anguilar, E., Surf. Sci., 1991, 251, 418. 2 Aguilar, M., and Anguiano, E., J. Microsc., 1992, 167, 197. 3 Vesenka, J., Manne, S., Giberson, R., Marsh, T., and Henderson, E., Biophys. J., 1993, 65, 510. 4 Markiewicz, P., and Goh, M. C., Langmuir, 1994, 10, 5. 5 Bonnet, N., Dongmo, S., Vautrot, P., and Troyon, M., Microsc. Microanal. Microstruct., 1994, 5, 477. 6 Tegenfeldt, J. O., and Montelius, L., Appl. Phys. Lett., 1995, 66, 1068. 7 Sumpter, B. G., Getino, C., Noid, D. W., and Wunderlich, B., Makromol. Chem., Theory Simul., 1993, 2, 55. 8 Hallmark, V. M., and Chiang, S., Surf. Sci., 1995, 329, 255. 9 Sasaki, N., and Tsukada, M., Jpn. J. Appl. Phys., 1995, 34, 3319. 10 Unger, M. A., O’Connor, S. D., and Baldeschwielder, J. D., J. Vac. Sci. Technol., 1995, B14, 1302. 11 Grubmuller, H., Heymann, B., and Tavan, P., Science, 1996, 271, 997. 12 Moore, A., Williams, P. M., Davies, M. C., Jackson, D. E., Roberts, C. J., and Tendler, S. J. B., unpublished work. 13 Williams, P. M., Davies, M. C., Jackson, D. E., Roberts, C. J., Tendler, S. J. B., and Wilkins, M. J., Nanotechnology, 1991, 2, 172. 14 Williams, P. M., PhD Thesis, University of Nottingham, UK, 1995. 15 Williams, P. M., Davies, M. C., Jackson, D. E., Roberts, C. J., and Tendler, S. J. B., J. Vac. Sci. Technol., 1994, B12, 1517. 16 Kokaram, A. C., Persad, N., Lasenby, J., Fitzgerald, W. J., McKinnon A., and Welland, M., Appl. Optics, 1995, 34, 5121. 17 Pun, T., Signal Processing, 1980, 2, 223. 18 Pun, T., CVGIP, 1981, 16, 210. 19 Kapur, J. N., Sahoo, P. K., and Wong, A. K. C., CVGIP, 1985, 29, 273. 20 Russ, J. C., The Image Processing Handbook, CRC press, 2nd edn., 1994. Fig. 8 Analysis of the accesses to the SPM server categorized by Internet domain. Analyst, October 1997, Vol. 122 100521 Chen, X., Davies, M. C., Roberts, C. J., Shakesheff, K. M., Tendler, S. J. B., and Williams, P. M., Anal. Chem., 1996, 68, 1451. 22 Shakesheff, K. M., Davies, M. C., Heller, J., Roberts, C. J., Tendler, S. J. B., and Williams, P. M., Langmuir, 1995, 11, 2547. 23 Witten, T. A., and Sander, L. M., Phys. Rev. B., 1983, 27, 5686. 24 Maloy, K. J., Feder, J., and Jossang, T., Phys. Rev. Lett., 1985, 55, 2688. 25 Mullins, W. W., and Sekerka, R. F., J. Appl. Phys., 1963, 34, 323. 26 Pingali, G. S., and Jain, R., Proc. IEEE Workshop Appl. Comp. Vision, 1992, 282. 27 Keller, D., and Franke, F. S., Surf. Sci., 1993, 294, 409. 28 Williams, P. M., Shakesheff, K. M., Davies, M. C., Jackson, D. E., Roberts, C. J., and Tendler, S. J. B., J. Vac. Sci. Technol., 1996, B14, 1557. 29 Villarrubia, J. S., Surf. Sci., 1994, 321, 287. 30 Williams, P. M., Shakesheff, K. M., Davies, M. C., Jackson, D. E., Roberts, C. J., and Tendler, S. J. B., Langmuir, 1996, 12, 3468. Paper 7/03049E Received May 6, 1997 Accepted August 19, 1997 1006 Analyst, October 1997, Vol. 122
ISSN:0003-2654
DOI:10.1039/a703049e
出版商:RSC
年代:1997
数据来源: RSC
|
2. |
Monitoring of Impurities Using High-performance Liquid Chromatography With Diode-array Detection: Eigenvalue Plots of Partially Overlapping Tailing Peaks |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1007-1013
Konstantinos D. Zissis,
Preview
|
|
摘要:
NH N O SKF-101468-A (I) NH N O SKF-96266-A (II) O Monitoring of Impurities Using High-performance Liquid Chromatography With Diode-array Detection: Eigenvalue Plots of Partially Overlapping Tailing Peaks Konstantinos D. Zissisa, Richard G. Brereton*a and Richard Escottb a School of Chemistry, University of Bristol, Cantock’s Close, Bristol, UK BS8 1TS b SmithKline Beecham Pharmaceuticals, Old Powder Mills, Near Leigh, Tonbridge, Kent, UK TN11 9AN The chromatogram of ropinirole in the presence of about 5% of a closely eluting impurity, obtained by HPLC with diode-array detection, was analysed by chemometric procedures. Log eigenvalue plots were used to determine the relative composition of regions of the chromatogram.It is shown that since the peaks exhibit tailing, unusual behaviour is found in the plots. This is verified by performing simulations, in which it is demonstrated that peak asymmetry has a pronounced influence on this chemometric approach.In many cases of liquid chromatographic analysis, asymmetric peak shapes are encountered and methods for peak purity assessment should be re-evaluated in the light of these asymmetries. Keywords: High-performance liquid chromatography; window factor analysis; peak purity; tailing; co-elution Window factor analysis (WFA) is a common chemometric method for determining the number and nature of components in a poorly resolved mixture of compounds as detected by coupled chromatographic techniques such as HPLC with diodearray detection (DAD).The first step is to perform and interrogate an eigenvalue plot, which is described here. There is a substantial literature spread over more than a decade reporting the applicability of this approach.1–8 However, most papers report either simulations or carefully selected case studies, normally where elution times are long. Chemometric methods are most useful where the answer to a problem is not entirely obvious in advance.In real world industrial situations, it is under these circumstances that chemometrics has a potential. HPLC is commonly used as a method for monitoring the purity of products such as pharmaceuticals. 9,10 The presence of a small amount of impurity in a drug can have significant implications in the validation of a manufacturing process. A prerequisite for any liquid chromatographic method which is used within the pharmaceutical industry and submitted to regulatory authorities, is to establish peak purity.The most commonly used procedure for establishing this is with UV diode-array detectors. However, the current commercial instrumentation has two major limitations: (i) the methods are not quantitative and (ii) they are relatively insensitive. Thus, the results obtained are often subjective in their interpretation and techniques such as LC–MS11 offer significant advantages. Although neither of these techniques (DAD or MS) is universal in their application, there is a pressing need to improve the peak purity capabilities of HPLC–diode-array detectors.This is particularly pertinent within the pharmaceutical industry, where HPLC is often the method of choice for assessing the purity of drug substances and intermediate materials. The peak shapes produced by liquid chromatography are rarely symmetric (Gaussian) in shape. Peak distortion occurs mainly from dispersion and from the nature of the interactions between analyte functional groups and stationary phase matrices. In this paper, the use of eigenvalue plots, which is the first step in WFA, is assessed with respect to its potential use for assessing peak purity, and the effect of asymmetric peak shapes was studied.Methods Experimental Compounds The two compounds used were SKF-101468-A (ropinirole), I, and an associated impurity, SKF-96266-A, II, isolated during the synthesis of the former.11 These compounds were synthesised in-house, at SmithKline Beecham (Tonbridge, Kent, UK) and the chemical structures are shown in Fig. 1. A mixture, consisting of 24 mg of II and 450 mg of I was produced using a Mettler MT5 microbalance to yield a 5.33% m/m mixture of II to I. Concentrations were 0.3 mm for I and 0.015 mm for II. Chromatography All chromatographic work was carried out using a Beckman System Gold chromatograph (Model 126 pump, Model 507 autosampler) with a C18 reversed-phase Kromasil column (Hichrom, Theale, 5 mm, 250 3 4.6 mm id) at ambient temperature.The mobile phase consisted of 75 + 25 (v/v) acetonitrile (HPLC-grade, BDH)–5 mm NH4OAc (Aldrich, 97+%, ACS-reagent, aqueous, pH adjusted to 7.0 using a combination of dilute HCl and NH3). The flow rate was set at 1 ml min21 and 20 ml of the solution were injected. UV detection was performed using a Model 168 diode-array detector, and UV spectra were collected in the wavelength range 230–290 nm. The digital resolution was 1 s in time and 2 nm in wavelength.The individual compounds and the solution mixture of I and II described under Compounds were analysed using these conditions. The data obtained were used to produce eigenvalue plots and also to provide simulated data to assess the effects of peak asymmetry on eigenvalue plot results. Molar absorptivities Electronic absorption spectra were recorded on an UltrospecIII UV/VIS spectrometer, Model 80-2097-62 (Pharmacia), between 230 and 290 nm at 2 nm intervals, so that the same spectral range and spectral resolution were used in both HPLC Fig. 1 Structures of compounds used in this study. Analyst, October 1997, Vol. 122 (1007–1013) 1007and electronic absorption spectrometry (EAS). To determine the molar absorption coefficients of I and II, five concentration levels were used to record the UV spectra with solutions prepared in the HPLC solvent system described above. The concentrations were chosen so that the highest value produced an absorbance at the most intense wavelength of close to 1.5.For compound I, a concentration of 5 3 1022 g l21 was found to give an absorbance of 1.639 at lmax = 249 nm, whereas a concentration of 2.1 3 1022 g l21 of compound II gave an absorbance of 1.488 at lmax = 243 nm. The experiments were designed so that the maximum absorbance at the highest concentration level and most intensely absorbing wavelength was approximately 1.5 to ensure linearity and at the lowest concentration level was approximately 0.3. Three further equally spaced concentration levels were used (0.6, 0.9 and 1.2 A, respectively).Replicates were recorded as appropriate. A graph of absorbance versus concentration confirmed linearity for both compounds. The average concentration, øC k (in g l21), over all M experiments for compound k is given by the equation: C M C k mk m M = = å 1 1 where: Cmk is the concentration of compound k in sample m. The average sum of absorbances, øAk, over the calibration set between 230 and 290 nm at 2 nm intervals, is given by: A M a k jmk j J m M = = = å å 1 1 1 where: J is the number of wavelengths and ajmk is the absorbance of compound k in sample m at wavelength j.By dividing these two values, we obtain the relative summed absorbance per unit mass per volume over the wavelength range 230–290 nm at 2 nm intervals for both compounds I and II, as seen in Table 1. From these values, it is calculated that II absorbs more intensely with respect to I by a factor r of 1.54.Because the spectra and absorption coefficients of each compound differ, a 5.33% mixture of II to I would now correspond to a relative area, summed over all wavelengths, of 5.33 3 1.54% = 8.22%, in the series of simulated chromatograms. For the calibration of the HPLC instrument, a maximum injection volume of 25 ml and a minimum of 5 ml were selected, with three equally spaced intermediate levels. After injecting these amounts of compounds and measuring the corresponding peak areas for each run, graphs of peak area versus injection volume were obtained for the two individual compounds.Replicates were used for all runs and the graphs were again linear. Peak Shapes Asymmetric peak shapes An asymmetric chromatographic peak shape can be simulated as a Lorentzian–Gaussian function, one half being Lorentzian and the other half Gaussian.12–15 An equation for a Lorentzian peak shape is: x t A t t L ( ) ( ) / = + - 1 2 2 r s where: A is an absorbance value at the point of maximum intensity, sL a factor relating to the width of the peak and tr the retention time at the maximum of the peak.An equation giving a Gaussian peak shape is: x t A t t ( ) exp ( ) = - - é ë ê ù û ú r G 2 2 s with parameters A and tr being exactly the same as above, and sG being a factor relating to the peak width. It can be shown that the peak width at half-height (D1 2) is 2sG ABB ln2 for a Gaussian and 2sL for a Lorentzian peak shape.A schematic representation of both peak shapes and the corresponding notations is given in Fig. 2. To enable simulations of reconstructed chromatograms, the leading edge of the peaks was based on a Gaussian peak shape (which is less tailing), whereas the tailing edge was based on a Lorentzian peak shape (in which tailing is much more profound). Several different peak shapes are possible according to asymmetricitiy. There are many approaches for modelling asymmetric peak shapes, but in this paper we use only one such method.There is no special reason for favouring any particular approach. Symmetric peak shapes A symmetric peak shape would have either purely Lorentzian or purely Gaussian character. In the simulations of symmetric peaks described below, the peaks are entirely of Gaussian form and only the values of sG (peak width) are changed. Fixed Window Factor Analysis WFA is a method for detecting the number of substances present in a chromatographic signal and the first step is to construct an eigenvalue plot.A datamatrix from HPLC–diode-array detectors typically consists of I rows and J columns, where I is the number of points in time and J the number of wavelengths of the spectrum obtained on the corresponding time. A window, w, is a small slice of the overall datamatrix, typically consisting of 3 or 5 data points, usually progressive in the direction of elution time.The method involves performing principal component analysis (PCA) on sub-matrices of the data consisting of w data points in time and then looking at the size of the eigenvalues. The window is then moved along in a sequential approach by one Table 1 Average summed absorbance (over 230–290 nm), øA k, versus average concentration, øC k, and r value for the two UV calibration schemes øA k øC k/g l21 øA k/øC k B/A I 15.46 3 3 1022 515.33 (A) 1.54 II 9.78 1.23 3 1022 795.12 (B) Fig. 2 Typical Gaussian/Lorentzian peak shape. 1008 Analyst, October 1997, Vol. 122data point, PCA is performed and the size of the eigenvalues calculated again. In general, the greater the size of the eigenvalue, the more significant the individual component. Data scaling When performing PCA, the data should be left uncentred.16 Provided that this is done, the results, as viewed from the eigenvalue plots, should be straightforward to interpret. The corresponding eigenvalues for the centred data are likely to give totally different results that are often hard to interpret, as discussed elsewhere.1,16 Uncentred PCA is often used in chromatography, as most factor analysis methods are concerned with variability above a baseline, rather than around the mean.Principal component analysis PCA is a common technique for dimensionality reduction, so that the system retains only the information that is regarded as most relevant.17–20 PCA decomposes a data matrix into scores and loadings and in terms of assessing peak purity the most important variables that are extracted are called component scores.The scores (I,KT) should relate to sample composition and the loadings (K,JP) should relate to spectra. A mathematical equation that describes this is given below: I,JX = I,KT K,JP + I,JE where K is the number of significant components present, I the number of points in time, J the number of wavelengths, I,JX the overall datamatrix, I,KT the scores matrix, K,JP the loadings matrix and I,JE the residual error matrix.Eigenvalue plots In chemometrics, eigenvalues are used in order to measure the size of a PC. In general, the first PC is the most descriptive, with the remainder of successive PCs describing less and less information and finally modelling just noise. The simplest definition of an eigenvalue,16 gk, is the sum of squares of the scores, given by: g t k ki i I = = å 2 1 where tki is the score of the kth component at elution time i.One of the uses of eigenvalues is to estimate the number of significant components present in a mixture. The most convenient method involves plotting the logarithm of the eigenvalues against elution time, or window centre. As data evolve with time, we can calculate the number of significant components present. For a partially resolved two-component mixture, the plot of the log of the first eigenvalue should correspond to two clear peaks, whereas the plot of the log of the second eigenvalue should result in a single peak that indicates the co-elution of both components.A schematic representation is given in Fig. 3, where regions A and C are composition 1 or selective regions (where only one component elutes), and B is the region of coelution. Results Real Chromatogram Pure compounds The spectra, obtained from the chromatographic analysis of the individual components, were averaged between the start and end times (I1 and I2) of the individual peaks, according to the equation: s s I I jk ijk i I I ~ � /( ) - = - + = å 2 1 1 1 2 The spectra were then normalised to a maximum of 1, using the equation: n÷s jk = ÷s jk/max(÷s jk) Mixture chromatogram A datamatrix for the 5.33% m/m mixture of II to I consisting of 31 wavelengths (between 230 and 290 nm at 2 nm intervals) and 66 points in time (between 3 min 5 s and 4 min 10 s) was obtained.The chromatogram shown in Fig. 4 corresponds to only 46 points in time, as the last 20 points represent pure noise. The chromatographic elution profile was summed over the entire wavelength range as described above, and scaled to a maximum of 1.In order to reproduce this real-data chromatogram by means of simulations, the left-hand side of the peak corresponding to compound I was modelled using a Gaussian function. To do this, the profile of the real data between time 1 and 8 was normalised so that the maximum of the profile is 1, i.e., p x i ij j = = å1 31 and npi = pi/p8 Fig. 3 First and second eigenvalue plots versus time for a hypothetical partially resolved two-component mixture. Fig. 4 Chromatographic elution profile for the 5.33% m/m mixture of II to I, normalised to a maximum of 1. Analyst, October 1997, Vol. 122 1009An estimated Gaussian peak shape was generated between i = 1 and 8 to give n�pi, where the value at i = 8 is 1. Because comparison will be biased according to the point at time i = 8, a modified prediction was obtained by regressing n�p onto np as follows: nöp = npa and: a = = = å å n i n i i n i i p p p � 1 8 2 1 8 The reason for this is to ensure that the maximum value at i = 8 does not unduly bias the peak shape estimate.The root mean square error (RMSE) was then calculated by the equation: RMSE = - S( � � ) / n i n i p p 2 8 The minimum value of the RMSE was found to be 0.008033 and corresponded to a sG of 1.95. The level of noise, d, in the last 20 points in time for the real data was given by the equation: d = - = = å å ( ) / x x IJ ij ij j i 2 1 31 47 66 so that the signal : noise ratio in the real data is: S/N = max(xij)/d = 763 Simulations In order to understand the influence on real data, simulations were performed.Of particular interest is whether real phenomena can be reproduced using symmetric peak shapes. Symmetric peak shapes Three simulations of symmetric peak shapes were performed, in which the value of sG was changed from 1.95 to 3.90 to 5.85, respectively.For compound I, the following parameters were used in the simulations: trI = 8, AI = 1, whereas for compound II, trII = 21. To find a value of A for compound II, in each of the three simulaterall relative area of peak II was set at 8.22% that of I, as described under Molar absorptivities. If the value of sG is kept constant, when simulating two peaks separately, a relative area of 8.22% for compound II corresponds to a peak height, A, of 0.0822 for all three symmetric simulations, provided that both peaks are recorded for a sufficiently long period in time, so that there is negligible intensity remaining under each peak.To generate a predicted datamatrix from the simulations, the matrix corresponding to the two simulated chromatographic peaks, I,K �C (estimated elution profile), was multiplied by the matrix corresponding to the spectra of the pure components, k,J n �S , according to the equation: I,J �A = I,K �C K,J n �S where: I is the number of data points in time (46), J the number of wavelengths (31), K the number of significant components present (2) and K,J n �S the true average spectra obtained as described under Pure compounds, scaled so that the maximum is 1.A noise matrix, I,JN, was then obtained based on a normal distribution with variable random seed and standard deviation 0.001311. To calculate the standard deviation, the S/N of I,J �A was set to that of the real data. The I,JN matrix was then added to the predicted datamatrix I,J �A .Eigenvalue plots, based on a three-point time window, were obtained for each of the three symmetric simulations (Fig. 5). From this, as the peak width is increased, the region of a significant second eigenvalue gradually increases. Using a five-point time window, the observed results were similar. Asymmetric peak shapes On trying to reproduce the real data by means of simulations, a Gaussian–Lorentzian function was used, the first part of both peaks being Gaussian and the second part Lorentzian.Various parameters had to be changed in order to achieve the best outcome. First, sG was kept constant for the Gaussian parts of the peaks, at 1.95, as calculated under Mixture chromatogram. The height at the point of maximum intensity for the first peak, AI, was set to 1. The height at the point of maximum intensity for peak 2, AII, was calculated as follows: Fig. 5 Eigenvalue plot results for symmetric simulation with: (a), sG = 1.95; (b), sG = 3.9; (c), sG = 5.85. 1010 Analyst, October 1997, Vol. 122The ratio of the true weights of II : I (in mg) is b = 24/ 450 = 0.0533. The ratio of the sum of intensities over wavelengths 230–290 nm of spectra II : I at a concentration of 1022 g l21, r has a value of 1.54 (Table 1). If the intensity of peak A (corresponding to compound I) at time i is ciA and the absorbance of peak A at unit concentration and wavelength j is sjA, then, the sum of absorbances of peak A under wavelengths 230–290 nm is: Sj j by A =å230 2 290 , Hence, the area of peak A over all times and all wavelengths, PA, is: P C S i i j j by A A A = = = å å 1 46 230 2 290 , The spectra were normalised so that: S S j j by j j by A B and = = å å = = 230 2 290 230 2 290 1 1 , , , From the last two equations: P P C S C S C C r i i j j by i i j j by i i i i A B A A B B A B = = = = = = = = = å å å å å å 1 46 230 2 290 1 46 230 2 290 1 46 1 46 1 , , /b The height of ciA was then set to 1, and PB could be calculated from the previous equation.If the parameters tr, sG and sL for peak B (corresponding to compound II) are known, then an area for peak B, PAB, can be calculated, in which A = 1, according to the equation: PB = A2 3 PAB From this equation, the value of A2 can be calculated. To optimise the values of sL for both peaks A and B, the two columns corresponding to the simulated chromatographic Table 2 The different parameters used in the four asymmetric simulations Peak A Peak B Simulation A1 tr1 sG1 sL1 A2 tr2 sG2 sL2 1 1 8 1.95 1.46 0.0486 21 1.95 1.46 2 1 8 1.95 3.46 0.0839 21 1.95 3.46 3 1 8 1.95 5.46 0.1167 21 1.95 5.46 4 1 8 1.95 7.46 0.1471 21 1.95 7.46 Fig. 6 Eigenvalue plot results for asymmetric simulation with: (a), sL = 1.46; (b), sL = 3.46; (c), sL = 5.46; (d), sL = 7.46. Analyst, October 1997, Vol. 122 1011elution profile I,K simC were summed and regressed over the real chromatographic elution profile I,1 realC. The RMSE was calculated according to the following equation: RMSE real sim = - = å ( ) / c c i i i g 2 1 46 46 where g is the regression factor. For the peak corresponding to compound I, the parameters used were AI = 1, trI = 8, sGI = 1.95, whereas for compound II, trII = 21, sGII = 1.95 and AII was estimated each time separately using the procedure described above.In the simulations, sL was first kept the same for both peaks, and the value that minimised the RMSE was found to be 3.46. On gradually changing the value of sL for peak A while keeping sL for peak B constant at 3.46, and vice versa, the value of the RMSE could not be lowered any further, so it was assumed that there was the same amount of peak tailing for both peaks. A series of three more simulations then followed, in which sL for peak A was changed by values of 2, in order to investigate the effect on eigenvalue plots.The different parameters are displayed in Table 2 and the four simulations are illustrated in Fig. 6. Real Data The result after peforming eigenvalue plots on the real data is shown in Fig. 7. The plot of the second eigenvalue is of particular interest. On comparing this with Fig. 6(b), it is apparent that the simulated and the real data show similar trends. As the value of sL increases, the behaviour of the second eigenvalue similarly shows a well defined minimum in the centre of its graph.It is only possible to reproduce this behaviour in the second eigenvalue by using asymmetric peak shape functions. Presumably, at an early point in the co-elution of the two peaks the ratio of the intensity of A : B decreases. Past the maximum of B, the large tail of A remains and so this ratio increases again. This is shown diagrammatically for the set of peaks represented in Fig. 6(b) (Fig. 8). Conclusion The results presented in this paper show clearly that conventional approaches for purity assessment of partially overlapping peaks are dramatically influenced by peak shape.Most commercial peak purity algorithms are limited in scope, and do not take into account the peak asymmetry effect. Using symmetric peak shapes, the ratio of the intensities of two partially overlapping peaks should either increase or decrease monotonically over a peak cluster. If, however, the first eluting and major peak tails significantly, this peak will dominate both at the beginning and end of the cluster.This behaviour is very evident from the use of sensitive chemometric techniques such as WFA and the use of eigenvalue plots. The use of eigenvalue plots, as a part of WFA, is a very sensitive technique for detecting the presence of small amounts of impurities; many conventional techniques fail. The majority of published material assumes symmetric peak shapes, yet in most practical situations, peak asymmetry is normally encountered.Hence, there is a need to review the behaviour of many common approaches to peak purity assessment under such circumstances. Appendix List of Notations lmax Most intense absorbing wavelength during calibration k Number of significant compounds present in mixture J Total number of wavelengths I Total number of points in time M Number of calibration samples (measurements) for compound k øC k Average concentration over M samples for compound k øA k Average summed absorbance over M samples for compound k, over the calibration set 230–290 nm Cmk Concentration of compound k in sample m ajmk Absorbance of compound k in sample m at wavelength j A Absorbance value at a point of maximum intensity tr Retention time at the maximum of peak sG Factor relating to the width of a peak simulated by a Gaussian function sL Factor relating to the width of a peak simulated by a Lorentzian function D1/2 Width at half the height of a peak I,KT Scores matrix after prerforming PCA K,JP Loadings matrix after performing PCA I,JE Residual error matrix after performing PCA gk Eigenvalue tkih component at elution time i I1 Point at the start of the individual peak in chromatogram of pure compounds I2 Point at the end of the individual peak in chromatogram of pure compounds �sijk True absorbance of pure compound k at time i and wavelength j Fig. 7 Eigenvalue plot results for real data chromatogram.Fig. 8 Elution profile for the individual peaks coresponding to Fig. 6(b), and plot of log (IA : IB) after t = 17 s. 1012 Analyst, October 1997, Vol. 122�sjk True mean absorbance of pure compound k at wavelength j, averaged over points I1 and I2 n÷s jk True mean absorbance of pure compound k at wavelength j, averaged over points I1 and I and normalised to a maximum of 1 xij Point in datamatrix of 5.33% m/m mixture of II to I pi Point of chromatographic profile for real data, averaged over all wavelengths np Chromatographic profile of real data, averaged over all wavelengths, and normalised to a maximum of 1 n�p Chromatographic profile of simulated data, normalised to a maximum of 1 a Regression coefficient d Level of noise in datamatrix of 5.33% m/m mixture of II to I I,K �C Matrix of the two simulated symmetric chromatographic peaks (elution profile) k,J n �S Matrix of the true average spectra of the pure compounds, normalised to a maximum of 1 I,J �A Predicted datamatrix for the symmetric simulations I,JN Estimated noise matrix b Ratio of true weights II : I r Ratio of sum of intensities over wavelengths 230–290 nm of spectra II : I at a concentration of 1 mg per 100 ml ciA Intensity of a point of peak A (compound I) at time i sjA Absorbance of a point of peak A at unit concentration and wavelength j PA Area of peak A over all times and wavelengths PB Area of peak B over all times and wavelengths PAB Area of peak B, for which A2 = 1 I,K simC Matrix for the two simulated asymmetric chromatographic peaks (elution profile) real I,1C Matrix for the real average chromatographic elution profile References 1 Brereton, R., Gurden, S.P., and Groves, J. A., Chemom. Intell. Lab. Syst., 1995, 27, 73. 2 Elbergali, A. K., and Brereton, R. G., Chemom. Intell. Lab. Syst., 1994, 23, 97. 3 Shostack, K. J., and Malinowski, E. R., Chemom. Intell. Lab. Syst., 1993, 20, 173. 4 Liang, Y.-Z., Kvalheim, O.M., Rahmani, A., and Brereton, R. G., J. Chemom., 1993, 7, 15. 5 Keller, H. R., and Massart, D. L., Chemom. Intell. Lab. Syst., 1992, 12, 209. 6 Kvalheim, O. M., and Liang, Y.-Z., Anal. Chem., 1992, 64, 936. 7 Keller, H. R., and Massart, D. L., Anal. Chim. Acta, 1991, 246, 379. 8 Maeder, M., and Zilian, A., Chemom. Intell. Lab. Syst., 1988, 3, 205. 9 Shervington, L. A., Anal. Lett., 1997, 30, 927. 10 Sierra, I., and Vidal Valverde, C., J. Liq. Chromatogr. Rel. Tech., 1997, 20, 957. 11 Bryant, D. K., Kingswood, M. D., and Belenguer, A., J. Chromatogr., 1996, 721, 42. 12 Le Vent, S., Anal. Chim. Acta, 1995, 312, 263. 13 Nash, P. J., and Hartwell, S., Chromatographia, 1988, 26, 285. 14 Jonsson, J. A., Chromatographic Theory and Basic Principles, Marcel Dekker, New York, 1987, p. 37. 15 Williams, P. W., Numerical Computation, Nelson, London, 1972, p. 154. 16 Brereton, R. G., Analyst, 1995, 120, 2325. 17 Brereton, R. G., Multivariate Pattern Recognition in Chemometrics, Illustrated by Case Studies, Elsevier, Amsterdam, 1992. 18 Brereton, R. G., Chemometrics: Applications of Mathematical and Statistics to Laboratory Systems, Ellis Horwood, Chichester, 1990. 19 Wold, S., Esbensen, K., and Geladi, P., Chemom. Intell. Lab. Syst., 1987, 2, 37. 20 Mardia, K. V., Kent, J. T., and Bibby, J., Multivariate Analysis, Academic Press, London, 1979. Paper 7/03371K Received May 15, 1997 Accepted June 26, 1997 Analyst, October 1997, Vol. 122 1013 NH N O SKF-101468-A (I) NH N O SKF-96266-A (II) O Monitoring of Impurities Using High-performance Liquid Chromatography With Diode-array Detection: Eigenvalue Plots of Partially Overlapping Tailing Peaks Konstantinos D. Zissisa, Richard G. Brereton*a and Richard Escottb a School of Chemistry, University of Bristol, Cantock’s Close, Bristol, UK BS8 1TS b SmithKline Beecham Pharmaceuticals, Old Powder Mills, Near Leigh, Tonbridge, Kent, UK TN11 9AN The chromatogram of ropinirole in the presence of about 5% of a closely eluting impurity, obtained by HPLC with diode-array detection, was analysed by chemometric procedures.Log eigenvalue plots were used to determine the relative composition of regions of the chromatogram. It is shown that since the peaks exhibit tailing, unusual behaviour is found in the plots. This is verified by performing simulations, in which it is demonstrated that peak asymmetry has a pronounced influence on this chemometric approach.In many cases of liquid chromatographic analysis, asymmetric peak shapes are encountered and methods for peak purity assessment should be re-evaluated in the light of these asymmetries. Keywords: High-performance liquid chromatography; window factor analysis; peak purity; tailing; co-elution Window factor analysis (WFA) is a common chemometric method for determining the number and nature of components in a poorly resolved mixture of compounds as detected by coupled chromatographic techniques such as HPLC with diodearray detection (DAD).The first step is to perform and interrogate an eigenvalue plot, which is described here. There is a substantial literature spread over more than a decade reporting the applicability of this approach.1–8 However, most papers report either simulations or carefully selected case studies, normally where elution times are long. Chemometric methods are most useful where the answer to a problem is not entirely obvious in advance.In real world industrial situations, it is under these circumstances that chemometrics has a potential. HPLC is commonly used as a method for monitoring the purity of products such as pharmaceuticals. 9,10 The presence of a small amount of impurity in a drug can have significant implications in the validation of a manufacturing process. A prerequisite for any liquid chromatographic method which is used within the pharmaceutical industry and submitted to regulatory authorities, is to establish peak purity.The most commonly used procedure for establishing this is with UV diode-array detectors. However, the current commercial instrumentation has two major limitations: (i) the methods are not quantitative and (ii) they are relatively insensitive. Thus, the results obtained are often subjective in their interpretation and techniques such as LC–MS11 offer significant advantages. Although neither of these techniques (DAD or MS) is universal in their application, there is a pressing need to improve the peak purity capabilities of HPLC–diode-array detectors.This is particularly pertinent within the pharmaceutical industry, where HPLC is often the method of choice for assessing the purity of drug substances and intermediate materials. The peak shapes produced by liquid chromatography are rarely symmetric (Gaussian) in shape. Peak distortion occurs mainly from dispersion and from the nature of the interactions between analyte functional groups and stationary phase matrices.In this paper, the use of eigenvalue plots, which is the first step in WFA, is assessed with respect to its potential use for assessing peak purity, and the effect of asymmetric peak shapes was studied. Methods Experimental Compounds The two compounds used were SKF-101468-A (ropinirole), I, and an associated impurity, SKF-96266-A, II, isolated during the synthesis of the former.11 These compounds were synthesised in-house, at SmithKline Beecham (Tonbridge, Kent, UK) and the chemical structures are shown in Fig. 1. A mixture, consisting of 24 mg of II and 450 mg of I was produced using a Mettler MT5 microbalance to yield a 5.33% m/m mixture of II to I. Concentrations were 0.3 mm for I and 0.015 mm for II. Chromatography All chromatographic work was carried out using a Beckman System Gold chromatograph (Model 126 pump, Model 507 autosampler) with a C18 reversed-phase Kromasil column (Hichrom, Theale, 5 mm, 250 3 4.6 mm id) at ambient temperature. The mobile phase consisted of 75 + 25 (v/v) acetonitrile (HPLC-grade, BDH)&ndaAc (Aldrich, 97+%, ACS-reagent, aqueous, pH adjusted to 7.0 using a combination of dilute HCl and NH3).The flow rate was set at 1 ml min21 and 20 ml of the solution were injected. UV detection was performed using a Model 168 diode-array detector, and UV spectra were collected in the wavelength range 230–290 nm. The digital resolution was 1 s in time and 2 nm in wavelength.The individual compounds and the solution mixture of I and II described under Compounds were analysed using these conditions. The data obtained were used to produce eigenvalue plots and also to provide simulated data to assess the effects of peak asymmetry on eigenvalue plot results. Molar absorptivities Electronic absorption spectra were recorded on an UltrospecIII UV/VIS spectrometer, Model 80-2097-62 (Pharmacia), between 230 and 290 nm at 2 nm intervals, so that the same spectral range and spectral resolution were used in both HPLC Fig. 1 Structures of compounds used in this study. Analyst, October 1997, Vol. 122 (1007–1013) 1007and electronic absorption spectrometry (EAS). To determine the molar absorption coefficients of I and II, five concentration levels were used to record the UV spectra with solutions prepared in the HPLC solvent system described above. The concentrations were chosen so that the highest value produced an absorbance at the most intense wavelength of close to 1.5.For compound I, a concentration of 5 3 1022 g l21 was found to give an absorbance of 1.639 at lmax = 249 nm, whereas a concentration of 2.1 3 1022 g l21 of compound II gave an absorbance of 1.488 at lmax = 243 nm. The experiments were designed so that the maximum absorbance at the highest concentration level and most intensely absorbing wavelength was approximately 1.5 to ensure linearity and at the lowest concentration level was approximately 0.3.Three further equally spaced concentration levels were used (0.6, 0.9 and 1.2 A, respectively). Replicates were recorded as appropriate. A graph of absorbance versus concentration confirmed linearity for both compounds. The average concentration, øC k (in g l21), over all M experiments for compound k is given by the equation: C M C k mk m M = = å 1 1 where: Cmk is the concentration of compound k in sample m.The average sum of absorbances, øAk, over the calibration set between 230 and 290 nm at 2 nm intervals, is given by: A M a k jmk j J m M = = = å å 1 1 1 where: J is the number of wavelengths and ajmk is the absorbance of compound k in sample m at wavelength j. By dividing these two values, we obtain the relative summed absorbance per unit mass per volume over the wavelength range 230–290 nm at 2 nm intervals for both compounds I and II, as seen in Table 1. From these values, it is calculated that II absorbs more intensely with respect to I by a factor r of 1.54.Because the spectra and absorption coefficients of each compound differ, a 5.33% mixture of II to I would now correspond to a relative area, summed over all wavelengths, of 5.33 3 1.54% = 8.22%, in the series of simulated chromatograms. For the calibration of the HPLC instrument, a maximum injection volume of 25 ml and a minimum of 5 ml were selected, with three equally spaced intermediate levels.After injecting these amounts of compounds and measuring the corresponding peak areas for each run, graphs of peak area versus injection volume were obtained for the two individual compounds. Replicates were used for all runs and the graphs were again linear. Peak Shapes Asymmetric peak shapes An asymmetric chromatographic peak shape can be simulated as a Lorentzian–Gaussian function, one half being Lorentzian and the other half Gaussian.12–15 An equation for a Lorentzian peak shape is: x t A t t L ( ) ( ) / = + - 1 2 2 r s where: A is an absorbance value at the point of maximum intensity, sL a factor relating to the width of the peak and tr the retention time at the maximum of the peak.An equation giving a Gaussian peak shape is: x t A t t ( ) exp ( ) = - - é ë ê ù û ú r G 2 2 s with parameters A and tr being exactly the same as above, and sG being a factor relating to the peak width. It can be shown that the peak width at half-height (D1 2) is 2sG ABB ln2 for a Gaussian and 2sL for a Lorentzian peak shape.A schematic representation of both peak shapes and the corresponding notations is given in Fig. 2. To enable simulations of reconstructed chromatograms, the leading edge of the peaks was based on a Gaussian peak shape (which is less tailing), whereas the tailing edge was based on a Lorentzian peak shape (in which tailing is much more profound). Several different peak shapes are possible according to asymmetricitiy. There are many approaches for modelling asymmetric peak shapes, but in this paper we use only one such method.There is no special reason for favouring any particular approach. Symmetric peak shapes A symmetric peak shape would have either purely Lorentzian or purely Gaussian character. In the simulations of symmetric peaks described below, the peaks are entirely of Gaussian form and only the values of sG (peak width) are changed. Fixed Window Factor Analysis WFA is a method for detecting the number of substances present in a chromatographic signal and the first step is to construct an eigenvalue plot.A datamatrix from HPLC–diode-array detectors typically consists of I rows and J columns, where I is the number of points in time and J the number of wavelengths of the spectrum obtained on the corresponding time. A window, w, is a small slice of the overall datamatrix, typically consisting of 3 or 5 data points, usually progressive in the direction of elution time.The method involves performing principal component analysis (PCA) on sub-matrices of the data consisting of w data points in time and then looking at the size of the eigenvalues. The window is then moved along in a sequential approach by one Table 1 Average summed absorbance (over 230–290 nm), øA k, versus average concentration, øC k, and r value for the two UV calibration schemes øA k øC k/g l21 øA k/øC k B/A I 15.46 3 3 1022 515.33 (A) 1.54 II 9.78 1.23 3 1022 795.12 (B) Fig. 2 Typical Gaussian/Lorentzian peak shape. 1008 Analyst, October 1997, Vol. 122data point, PCA is performed and the size of the eigenvalues calculated again. In general, the greater the size of the eigenvalue, the more significant the individual component. Data scaling When performing PCA, the data should be left uncentred.16 Provided that this is done, the results, as viewed from the eigenvalue plots, should be straightforward to interpret. The corresponding eigenvalues for the centred data are likely to give totally different results that are often hard to interpret, as discussed elsewhere.1,16 Uncentred PCA is often used in chromatography, as most factor analysis methods are concerned with variability above a baseline, rather than around the mean.Principal component analysis PCA is a common technique for dimensionality reduction, so that the system retains only the information that is regarded as most relevant.17–20 PCA decomposes a data matrix into scores and loadings and in terms of assessing peak purity the most important variables that are extracted are called component scores.The scores (I,KT) should relate to sample composition and the loadings (K,JP) should relate to spectra. A mathematical equation that describes this is given below: I,JX = I,KT K,JP + I,JE where K is the number of significant components present, I the number of points in time, J the number of wavelengths, I,JX the overall datamatrix, I,KT the scores matrix, K,JP the loadings matrix and I,JE the residual error matrix.Eigenvalue plots In chemometrics, eigenvalues are used in order to measure the size of a PC. In general, the first PC is the most descriptive, with the remainder of successive PCs describing less and less information and finally modelling just noise. The simplest definition of an eigenvalue,16 gk, is the sum of squares of the scores, given by: g t k ki i I = = å 2 1 where tki is the score of the kth component at elution time i.One of the uses of eigenvalues is to estimate the number of significant components present in a mixture. The most convenient method involves plotting the logarithm of the eigenvalues against elution time, or window centre. As data evolve with time, we can calculate the number of significant components present. For a partially resolved two-component mixture, the plot of the log of the first eigenvalue should correspond to two clear peaks, whereas the plot of the log of the second eigenvalue should result in a single peak that indicates the co-elution of both components.A schematic representation is given in Fig. 3, where regions A and C are composition 1 or selective regions (where only one component elutes), and B is the region of coelution. Results Real Chromatogram Pure compounds The spectra, obtained from the chromatographic analysis of the individual components, were averaged between the start and end times (I1 and I2) of the individual peaks, according to the equation: s s I I jk ijk i I I ~ � /( ) - = - + = å 2 1 1 1 2 The spectra were then normalised to a maximum of 1, using the equation: n÷s jk = ÷s jk/max(÷s jk) Mixture chromatogram A datamatrix for the 5.33% m/m mixture of II to I consisting of 31 wavelengths (between 230 and 290 nm at 2 nm intervals) and 66 points in time (between 3 min 5 s and 4 min 10 s) was obtained.The chromatogram shown in Fig. 4 corresponds to only 46 points in time, as the last 20 points represent pure noise. The chromatographic elution profile was summed over the entire wavelength range as described above, and scaled to a maximum of 1. In order to reproduce this real-data chromatogram by means of simulations, the left-hand side of the peak corresponding to compound I was modelled using a Gaussian function. To do this, the profile of the real data between time 1 and 8 was normalised so that the maximum of the profile is 1, i.e., p x i ij j = = å1 31 and npi = pi/p8 Fig. 3 First and second eigenvalue plots versus time for a hypothetical partially resolved two-component mixture. Fig. 4 Chromatographic elution profile for the 5.33% m/m mixture of II to I, normalised to a maximum of 1. Analyst, October 1997, Vol. 122 1009An estimated Gaussian peak shape was generated between i = 1 and 8 to give n�pi, where the value at i = 8 is 1. Because comparison will be biased according to the point at time i = 8, a modified prediction was obtained by regressing n�p onto np as follows: nöp = npa and: a = = = å å n i n i i n i i p p p � 1 8 2 1 8 The reason for this is to ensure that the maximum value at i = 8 does not unduly bias the peak shape estimate.The root mean square error (RMSE) was then calculated by the equation: RMSE = - S( � � ) / n i n i p p 2 8 The minimum value of the RMSE was found to be 0.008033 and corresponded to a sG of 1.95.The level of noise, d, in the last 20 points in time for the real data was given by the equation: d = - = = å å ( ) / x x IJ ij ij j i 2 1 31 47 66 so that the signal : noise ratio in the real data is: S/N = max(xij)/d = 763 Simulations In order to understand the influence on real data, simulations were performed. Of particular interest is whether real phenomena can be reproduced using symmetric peak shapes. Symmetric peak shapes Three simulations of symmetric peak shapes were performed, in which the value of sG was changed from 1.95 to 3.90 to 5.85, respectively.For compound I, the following parameters were used in the simulations: trI = 8, AI = 1, whereas for compound II, trII = 21. To find a value of A for compound II, in each of the three simulations, the overall relative area of peak II was set at 8.22% that of I, as described under Molar absorptivities. If the value of sG is kept constant, when simulating two peaks separately, a relative area of 8.22% for compound II corresponds to a peak height, A, of 0.0822 for all three symmetric simulations, provided that both peaks are recorded for a sufficiently long period in time, so that there is negligible intensity remaining under each peak.To generate a predicted datamatrix from the simulations, the matrix corresponding to the two simulated chromatographic peaks, I,K �C (estimated elution profile), was multiplied by the matrix corresponding to the spectra of the pure components, k,J n �S , according to the equation: I,J �A = I,K �C K,J n �S where: I is the number of data points in time (46), J the number of wavelengths (31), K the number of significant components present (2) and K,J n �S the true average spectra obtained as described under Pure compounds, scaled so that the maximum is 1.A noise matrix, I,JN, was then obtained based on a normal distribution with variable random seed and standard deviation 0.001311. To calculate the standard deviation, the S/N of I,J �A was set to that of the real data.The I,JN matrix was then added to the predicted datamatrix I,J �A . Eigenvalue plots, based on a three-point time window, were obtained for each of the three symmetric simulations (Fig. 5). From this, as the peak width is increased, the region of a significant second eigenvalue gradually increases. Using a five-point time window, the observed results were similar. Asymmetric peak shapes On trying to reproduce the real data by means of simulations, a Gaussian–Lorentzian function was used, the first part of both peaks being Gaussian and the second part Lorentzian. Various parameters had to be changed in order to achieve the best outcome. First, sG was kept constant for the Gaussian parts of the peaks, at 1.95, as calculated under Mixture chromatogram.The height at the point of maximum intensity for the first peak, AI, was set to 1. The height at the point of maximum intensity for peak 2, AII, was calculated as follows: Fig. 5 Eigenvalue plot results for symmetric simulation with: (a), sG = 1.95; (b), sG = 3.9; (c), sG = 5.85. 1010 Analyst, October 1997, Vol. 122The ratio of the true weights of II : I (in mg) is b = 24/ 450 = 0.0533. The ratio of the sum of intensities over wavelengths 230–290 nm of spectra II : I at a concentration of 1022 g l21, r has a value of 1.54 (Table 1). If the intensity of peak A (corresponding to compound I) at time i is ciA and the absorbance of peak A at unit concentration and wavelength j is sjA, then, the sum of absorbances of peak A under wavelengths 230–290 nm is: Sj j by A =å230 2 290 , Hence, the area of peak A over all times and all wavelengths, PA, is: P C S i i j j by A A A = = = å å 1 46 230 2 290 , The spectra were normalised so that: S S j j by j j by A B and = = å å = = 230 2 290 230 2 290 1 1 , , , From the last two equations: P P C S C S C C r i i j j by i i j j by i i i i A B A A B B A B = = = = = = = = = å å å å å å 1 46 230 2 290 1 46 230 2 290 1 46 1 46 1 , , /b The height of ciA was then set to 1, and PB could be calculated from the previous equation. If the parameters tr, sG and sL for peak B (corresponding to compound II) are known, then an area for peak B, PAB, can be calculated, in which A = 1, according to the equation: PB = A2 3 PAB From this equation, the value of A2 can be calculated.To optimise the values of sL for both peaks A and B, the two columns corresponding to the simulated chromatographic Table 2 The different parameters used in the four asymmetric simulations Peak A Peak B Simulation A1 tr1 sG1 sL1 A2 tr2 sG2 sL2 1 1 8 1.95 1.46 0.0486 21 1.95 1.46 2 1 8 1.95 3.46 0.0839 21 1.95 3.46 3 1 8 1.95 5.46 0.1167 21 1.95 5.46 4 1 8 1.95 7.46 0.1471 21 1.95 7.46 Fig. 6 Eigenvalue plot results for asymmetric simulation with: (a), sL = 1.46; (b), sL = 3.46; (c), sL = 5.46; (d), sL = 7.46. Analyst, October 1997, Vol. 122 1011elution profile I,K simC were summed and regressed over the real chromatographic elution profile I,1 realC. The RMSE was calculated according to the following equation: RMSE real sim = - = å ( ) / c c i i i g 2 1 46 46 where g is the regression factor. For the peak corresponding to compound I, the parameters used were whereas for compound II, trII = 21, sGII = 1.95 and AII was estimated each time separately using the procedure described above.In the simulations, sL was first kept the same for both peaks, and the value that minimised the RMSE was found to be 3.46. On gradually changing the value of sL for peak A while keeping sL for peak B constant at 3.46, and vice versa, the value of the RMSE could not be lowered any further, so it was assumed that there was the same amount of peak tailing for both peaks. A series of three more simulations then followed, in which sL for peak A was changed by values of 2, in order to investigate the effect on eigenvalue plots.The different parameters are displayed in Table 2 and the four simulations are illustrated in Fig. 6. Real Data The result after peforming eigenvalue plots on the real data is shown in Fig. 7. The plot of the second eigenvalue is of particular interest. On comparing this with Fig. 6(b), it is apparent that the simulated and the real data show similar trends. As the value of sL increases, the behaviour of the second eigenvalue similarly shows a well defined minimum in the centre of its graph.It is only possible to reproduce this behaviour in the second eigenvalue by using asymmetric peak shape functions. Presumably, at an early point in the co-elution of the two peaks the ratio of the intensity of A : B decreases. Past the maximum of B, the large tail of A remains and so this ratio increases again. This is shown diagrammatically for the set of peaks represented in Fig. 6(b) (Fig. 8).Conclusion The results presented in this paper show clearly that conventional approaches for purity assessment of partially overlapping peaks are dramatically influenced by peak shape. Most commercial peak purity algorithms are limited in scope, and do not take into account the peak asymmetry effect. Using symmetric peak shapes, the ratio of the intensities of two partially overlapping peaks should either increase or decrease monotonically over a peak cluster. If, however, the first eluting and major peak tails significantly, this peak will dominate both at the beginning and end of the cluster.This behaviour is very evident from the use of sensitive chemometric techniques such as WFA and the use of eigenvalue plots. The use of eigenvalue plots, as a part of WFA, is a very sensitive technique for detecting the presence of small amounts of impurities; many conventional techniques fail. The majority of published material assumes symmetric peak shapes, yet in most practical situations, peak asymmetry is normally encountered. Hence, there is a need to review the behaviour of many common approaches to peak purity assessment under such circumstances.Appendix List of Notations lmax Most intense absorbing wavelength during calibration k Number of significant compounds present in mixture J Total number of wavelengths I Total number of points in time M Number of calibration samples (measurements) for compound k øC k Average concentration over M samples for compound k øA k Average summed absorbance over M samples for compound k, over the calibration set 230–290 nm Cmk Concentration of compound k in sample m ajmk Absorbance of compound k in sample m at wavelength j A Absorbance value at a point of maximum intensity tr Retention time at the maximum of peak sG Factor relating to the width of a peak simulated by a Gaussian function sL Factor relating to the width of a peak simulated by a Lorentzian function D1/2 Width at half the height of a peak I,KT Scores matrix after prerforming PCA K,JP Loadings matrix after performing PCA I,JE Residual error matrix after performing PCA gk Eigenvalue tki Score of the kth component at elution time i I1 Point at the start of the individual peak in chromatogram of pure compounds I2 Point at the end of the individual peak in chromatogram of pure compounds �sijk True absorbance of pure compound k at time i and wavelength j Fig. 7 Eigenvalue plot results for real data chromatogram. Fig. 8 Elution profile for the individual peaks coresponding to Fig. 6(b), and plot of log (IA : IB) after t = 17 s. 1012 Analyst, October 1997, Vol. 122�sjk True mean absorbance of pure compound k at wavelength j, averaged over points I1 and I2 n÷s jk True mean absorbance of pure compound k at wavelength j, averaged over points I1 and I and normalised to a maximum of 1 xij Point in datamatrix of 5.33% m/m mixture of II to I pi Point of chromatographic profile for real data, averaged over all wavelengths np Chromatographic profile of real data, averaged over all wavelengths, and normalised to a maximum of 1 n�p Chromatographic profile of simulated data, normalised to a maximum of 1 a Regression coefficient d Level of noise in datamatrix of 5.33% m/m mixture of II to I I,K �C Matrix of the two simulated symmetric chromatographic peaks (elution profile) k,J n �S Matrix of the true average spectra of the pure compounds, normalised to a maximum of 1 I,J �A Predicted datamatrix for the symmetric simulations I,JN Estimated noise matrix b Ratio of true weights II : I r Ratio of sum of intensities over wavelengths 230–290 nm of spectra II : I at a concentration of 1 mg per 100 ml ciA Intensity of a point of peak A (compound I) at time i sjA Absorbance of a point of peak A at unit concentration and wavelength j PA Area of peak A over all times and wavelengths PB Area of peak B over all times and wavelengths PAB Area of peak B, for which A2 = 1 I,K simC Matrix for the two simulated asymmetric chromatographic peaks (elution profile) real I,1C Matrix for the real average chromatographic elution profile References 1 Brereton, R., Gurden, S.P., and Groves, J. A., Chemom. Intell. Lab. Syst., 1995, 27, 73. 2 Elbergali, A. K., and Brereton, R. G., Chemom. Intell. Lab. Syst., 1994, 23, 97. 3 Shostack, K. J., and Malinowski, E. R., Chemom. Intell. Lab. Syst., 1993, 20, 173. 4 Liang, Y.-Z., Kvalheim, O. M., Rahmani, A., and Brereton, R. G., J. Chemom., 1993, 7, 15. 5 Keller, H. R., and Massart, D. L., Chemom. Intell. Lab. Syst., 1992, 12, 209. 6 Kvalheim, O. M., and Liang, Y.-Z., Anal. Chem., 1992, 64, 936. 7 Keller, H. R., and Massart, D. L., Anal. Chim. Acta, 1991, 246, 379. 8 Maeder, M., and Zilian, A., Chemom. Intell. Lab. Syst., 1988, 3, 205. 9 Shervington, L. A., Anal. Lett., 1997, 30, 927. 10 Sierra, I., and Vidal Valverde, C., J. Liq. Chromatogr. Rel. Tech., 1997, 20, 957. 11 Bryant, D. K., Kingswood, M. D., and Belenguer, A., J. Chromatogr., 1996, 721, 42. 12 Le Vent, S., Anal. Chim. Acta, 1995, 312, 263. 13 Nash, P. J., and Hartwell, S., Chromatographia, 1988, 26, 285. 14 Jonsson, J. A., Chromatographic Theory and Basic Principles, Marcel Dekker, New York, 1987, p. 37. 15 Williams, P. W., Numerical Computation, Nelson, London, 1972, p. 154. 16 Brereton, R. G., Analyst, 1995, 120, 2325. 17 Brereton, R. G., Multivariate Pattern Recognition in Chemometrics, Illustrated by Case Studies, Elsevier, Amsterdam, 1992. 18 Brereton, R. G., Chemometrics: Applications of Mathematical and Statistics to Laboratory Systems, Ellis Horwood, Chichester, 1990. 19 Wold, S., Esbensen, K., and Geladi, P., Chemom. Intell. Lab. Syst., 1987, 2, 37. 20 Mardia, K. V., Kent, J. T., and Bibby, J., Multivariate Analysis, Academic Press, London, 1979. Paper 7/03371K Received May 15, 1997 Accepted June 26, 1997 Analyst, October 1997, V
ISSN:0003-2654
DOI:10.1039/a703371k
出版商:RSC
年代:1997
数据来源: RSC
|
3. |
Cross-validatory Selection of Test and Validation Sets in Multivariate Calibration and Neural Networks as Applied to Spectroscopy |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1015-1022
Frank R. Burden,
Preview
|
|
摘要:
Cross-validatory Selection of Test and Validation Sets in Multivariate Calibration and Neural Networks as Applied to Spectroscopy Frank R. Burdenab, Richard G. Breretonb and Peter T. Walshc a Chemistry Department, Monash University, Clayton, Victoria, Australia 3168 b School of Chemistry, University of Bristol, Cantock’s Close, Bristol, UK BS8 1TS c Health and Safety Laboratory, Health and Safety Executive, Broad Lane, Sheffield, UK S3 7HQ Cross-validated and non-cross-validated regression models using principal component regression (PCR), partial least squares (PLS) and artificial neural networks (ANN) have been used to relate the concentrations of polycyclic aromatic hydrocarbon pollutants to the electronic absorption spectra of coal tar pitch volatiles.The different trends in the cross-validated and non-cross-validated results are discussed as well as a method for the production of a true cross-validated neural network regression model. It is shown that the methods must be compared through the errors produced in the validation sets as well as those given for the final model.Various methods for calculation of errors are described and compared. The separation of training, validation and test sets into fully independent groups is emphasized. PLS outperforms PCR using all indicators. ANNs are inferior to multivariate techniques for individual compounds but are reasonably effective in predicting the sum of PAHs in the mixture set.Keywords: Polycyclic aromatic hydrocarbons; chemometric; neural networks; regression In a previous paper the concentrations of polycyclic aromatic hydrocarbons (PAHs) in coal tar pitch volatile sources obtained by the Health and Safety Executive (HSE)1 have been reported. The choice of PAHs in this paper is for illustration only : it does not imply any HSE acceptance or approval of the list or method. It was shown that multivariate methods for regression such as partial least squares (PLS) were superior to univariate single wavelength calibration for prediction of concentrations.In this paper we apply both principal components regression (PCR) and PLS2–8 and artificial neural networks (ANNs)9–13 to the estimation of concentrations of PAHs an experimental dataset by calibrating known concentrations estimated by GC–MS to electronic absorption spectra (EAS). This paper discusses cross-validation.14–18 There is surprisingly limited discussion of cross-validation in the chemometrics literature, except in the form of algorithmic descriptions of which there are numerous.There are two different reasons for cross-validation, often confused. The first is to find out how many components/iterations (in the case of neural networks) are necessary for an adequate model. The second is to compare the effectiveness of different models. The use of test sets is very common in classification studies, for example, where a model is formed on one group and the predictions tested on another group.A good way of testing the effectiveness of a method is to leave each possible group out at a time, until all groups are left out once, and average the quality of predictions. Some investigators use methods for leaving one sample out at a time, but in cases of large datasets, this can be prohibitive. This paper concentrates on methods for leaving a group of samples out at a time. There is much literature on neural networks in chemistry, and many investigators want to compare these to conventional chemometric approaches, e.g., for classification or calibration.However, a common approach must be found for method comparison and there are substantial problems here. Most common methods for neural networks remove a set of samples to test convergence of a model (called the test set in this paper). It is unfair then to use this set of samples for validation, as the network is trained to minimise the error on these samples.A third and independent group must then be selected for validating the model. If a ‘leave one sample out at a time’ approach is adopted there will be difficulties for the following reasons. First, the model obtained on a single test sample will be dependent on that sample not being an outlier. A minimum size of test set of around four is recommended. Second, the number of calculations will be large: for 100 samples, there will need to be 9 900 computations of the network if every possible combination of single samples is tested.Using too few test sets can result in quite unrepresentative models and analyses of errors. Hence, methods for cross validation that include several samples in each group must be employed. In this paper the previously published PAH dataset1 is used to demonstrate a number of methods for cross validation and error analysis. Method The PAH data set consisted of the EAS of 32 samples taken at 181 wavelengths at 1 nm intervals from 220 to 400 nm inclusive, the X matrix.The concentrations of 13 PAHs in each of the samples had previously been measured by GC–MS, and consists of the y data. In this paper, the estimated concentrations of (a) anthracene, (b) fluoranthene and (c) the sum of concentrations of 13 PAHs were analysed, each, in turn, forming a y vector, of length 32. Similar conclusions can be obtained from the other PAHs in the mixture, but the dataset described in this paper has been reduced for sake of brevity.A concentration summary is given in Table 1. Further experimental details are reported elsewhere,1 including more information on the overall dataset. All of the calculations were carried out using MATLAB routines written by the authors but making use of the well-tested toolbox published by B. Wise and the Mathworks Neural Network Toolbox.19 All the algorithms were validated against Table 1 Concentration summary Concentration/mg ml21 Total detectable Anthracene Fluoranthene PAHs Mean 19.8 160.2 903.8 s 10.8 64.4 375.6 Analyst, October 1997, Vol. 122 (1015–1022) 1015other sources, including in-house software in C and Visual Basic and the Propagator neural network package. Multivariate Methods Principal component regression Since the number of wavelengths far exceeds the number of samples, it is normal to perform data reduction prior to regression, both in order to reduce the size of the problem and to remove colinearities in the dataset, so PCR is the technique of preference to multiple linear regression, which is not discussed in this paper.Principal components analysis was performed on the centred but unstandardized data. After six components have been computed 99.9997% of the variance has been explained, so it was decided to keep only the first six components. Similar conclusions are reached for anthracene and fluoranthene. Note that the main aim of this paper is to compare various approaches for cross-validation and calculation of errors and not how crossvalidation can be employed to determine the optimum number of components.If T is the matrix consisting of the scores of the first six principal components then the regression coefficients are produced simply via the pseudoinverse as follows b = (TAT)21TAy so that �y t b y ij ija ja j a A = + = å1 where �yij is the estimated concentration of the jth sample, using the ith method for calculating the concentrations (see below for extended discussion of this), there are A significant components and the scores matrix is denoted by T.Partial least squares Partial least squares regression can also be employed, as an alternative to PCR. In the case discussed here, PLS1, regressing one y variable at a time, was felt to be more suitable than PLS2, involving regressing all y variables at a time, especially when direct comparisons between methods are required.The optimum number of PLS components can be established as follows. The criterion employed is the variance in the y (concentration) predictions rather than the X or spectral direction. In this way, PLS differs from PCR in which only the X variance can be used. For the purpose of determining the number of components, the ‘leave one out’ method for cross validation is employed, whereby the prediction error as each sample in turn is red is estimated. The prediction on the training set will be reduced as more components are computed, whereas the prediction on the test set shows a minimum and then increases as too many components are employed.It is emphasized that this method is a fairly crude approach, and depends on the structure of the data, but is sufficiently good for determining how many components are useful to describe the data. From the values obtained using the sum of the concentration of 13 detected PAHs, a good number of PLS components to choose is six.It is emphasized that the main objective of this paper is strategy for cross-validation and comparison of models, and details of the dataset analyzed in this paper have already been published, so selection of the number of significant components is not discussed in detail below for reason of brevity. Cross validation and calculation of errors In the cases described here a ‘leave n out’ technique for crossvalidation is employed. A number of samples (n) are removed from the dataset and a model estimated from the remaining data, which is used to validate the model.In turn, each possible group of samples, where any particular sample is included once, is removed, in turn, to provide average estimates of models and errors over the entire data set. It is important that the validation set has no influence on the regression equations. For PCR, and PLS, if there are N samples in total, and Nv samples are to be left out and used as a validation set, the training set of Nt = N 2Nv, must have any pre-processing, such as mean centering or standardization, applied after the validation samples have been removed; these preprocessing parameters such as means and standard deviations of the training set are then applied to the validation set.Note that this implies that means and standard deviations are calculated each time a validation set is removed from the full data, and that these do not necessarily correspond to the overall statistics; in this way the approach in this paper differs from that employed by some other authors. Note that when PCA or PLS is performed, this must also be repeated for each training set in turn and not on the entire dataset.The case for the ANNs is more complex and is discussed in more detail below. The first step is to randomize the order of the data. In many key experimental situations, data are often presented to an investigator in some form of experimental sequence.Without this first stage, a cross-validation may be performed in a biased manner. Once randomized, the new order of the data is maintained. We assume, in this paper, that the samples are reordered in a random manner, so sample j = 7 is the seventh randomized sample. Note that the random seed in all computations in this paper is identical, so that the order with which the samples are arranged is constant for comparison; obviously the calculation of errors depends in part on the method of ordering, the most important aspect being that the original experimental order is not followed.In order to include all possible samples in one validation set, M combinations of training and validation sets were produced where M = N/Nv and Nv is the number of samples in each of the validation sets. Ideally, N is a multiple of Nv, although other regimes, not reported in this paper, are possible. In the case discussed below, four samples are removed, in turn, each time cross-validation is performed, implying eight calculations as illustrated in Fig. 1. Note that removing two samples results in validation sets that are too small, and removing eight samples reduces the number of validation sets to four. For validation run number i (where i varies from 1 to 8), samples j = (i 2 1) 3 Nv + 1 to i 3 Nv are removed. The training set consists of the remaining Nt = 28 samples. For each of the M training/validation sets, a model is produced that predicts the concentration �yij for the ith set (i being a number between 1 and 8 in this case) and the jth sample (varying between 1 and 32).Three possible predictions can be made. (i) The predictions for the samples in the validation sets. Note that there will be one prediction per sample, as each sample is a member of only one validation set. (ii) The predictions for the samples in the training sets. For each sample there will be seven predictions. (iii) Predictions for the overall model, taking into account both validation and test samples, resulting in eight predictions per sample. It is common to transform these predictions into errors for the overall dataset, and five possible errors can be calculated.Fig. 1 Summary of cross-validation for multivariate methods; validation set is shaded. 1016 Analyst, October 1997, Vol. 1221. The standard error for prediction for the validation set given by SEPV = v v ( � ) ( ) y y N j ij j i N iN i M - = - + = å å 2 1 1 1 which in the case of this paper, is simply the root mean square error of prediction of the validation samples. 2. The standard error for the training set given by SEPTR = v v ( � ) ( � ) ( ) ( ) y y y y N M j ij j ij j iN N j i N i M - + - é ë êêê ù û úúú � - = + = - = å å å 2 2 1 1 1 1 1 This is the root mean square of prediction of the training set samples counted seven times, for each of the M 2 1 validation runs. 3. The overall model error given by SEPM = ( � ) y y N M j ij j N i M - � = = å å 2 1 1 which includes the both validation and test samples.Note that this is not the same as the error calculated by performing regression on the entire dataset (see below). An additional possibility is to average the values of the predicted data over all validation runs. The average prediction is given by � � / y y M j ij i M = = å1 A variance on this is to simply average the predictions for the training sets to give t j ij ij i jN M i j N y y y M � � � ( ) ( ) + - = + - å å v v =1 1 1 1 These averaged predictions will result in lower errors, which can be defined as follows. 4. The standard error for the averaged prediction of the training set SEPTRav = - = å ( � ) t j j j N y y N 2 1 5. The standard error for the overall averaged model given by SEPMav = - = å ( � ) y y N j j j N 2 1 Note that since each validation sample is removed only once, there is no corresponding average error for the validation set.It is not, of course, necessary to model the overall data using an average of cross-validated models, but PCR or PLS can beperformed on the entire dataset to give predictions of the form �yj where � [( ] y b t y b y j a ja a A a j a A = + = ¢ ¢ + = - = å å 1 1 x x P PP ) ( )-1 where øy is the mean concentration (or y value), øx is a vector of mean absorbances over all wavelengths, and P is a loadings matrix, as is normal for centred data. This error (called prediction error) is then given by SEPP = ( � ) y y N j j j N - = å 2 1 It is important to recognise that there are differences between this error and SEPMav as will be evident below.Artificial Neural Networks An alternative approach is to use neural networks for calibration. The example in this paper is relatively simple, so only a fairly basic approach is employed. The methods for calculations of errors can be applied to neural networks of any level of sophistication, but it is not the primary purpose of this paper to optimize the network.The first step is to define the inputs, outputs and number of hidden nodes (architecture) of the network. A back-propagation9 feed-forward network using one hidden layer was employed which made use of sigmoidal transfer functions of the form 1 1- where net = net e z w p p p P - = å1 and zp is the input to the given node, with wp the weights. The neural network program uses the back-propagation algorithm to find the weights.The output is simply the estimated concentration. In order to reduce the size of the problem each concentration was estimated separately, so that all calculations only had one output, to be comparable to the rformed separately for each compound. For the given problem, it was found that one hidden layer with a single node was optimum. The input layer of principal component scores, together with a bias node, was connected to the hidden node.The hidden node, together with another bias node was connected to the output node. The bias nodes had no input and delivered an output of 1. The question of the input to the network is an important one. Using all 181 wavelengths would result in a large number (184) of weights when bias and hidden nodes are included, which is clearly unjustified by the present data set consisting only of 32 samples. Hence, the data was reduced first using PCA. As above, only six (linear) principal components were kept as the input to the network.A seventh, bias node, to represent an intercept term equaling one was also added to the input, resulting in seven inputs. Hence, there are 7 (input/hidden) + 2 (hidden/output) = 9 weights. This is illustrated in Fig. 2. It is the common practice when training a neural network to randomize the initial set of weights and then allow the backpropagation algorithm to continuously refine their values in order to reduce the SEPTR of the network training set.This Analyst, October 1997, Vol. 122 1017process is stopped when the error of an independent test set starts to rise (signifying entry into a memorizing domain) and the weights at this test set minimum are retained. If the neural network is run again on the same set of data it is likely (if the input data is not fully independent) to arrive at similar errors with a different, though just as valid, set of weights. In order to start the neural network, a set of randomized weights is required.The initial randomized set of weights can produce a bad starting point for the back-propagation algorithm to seek a global minimum of the test set error so that it is essential for the cross-validation procedure that a good initial set be found. In the present work this was accomplished by producing many ( Å 100) initial sets and choosing the best of these. In order to ensure that any repeat calculation found the same starting point the same seed for the MATLAB random number generator was always used for the first random choice of weights.Cross validation The issue of cross validation is much more complex in the case of neural networks as compared with normal regression. In the application reported below, the data was divided into three sets. A training set consisting of 24 samples is used to obtain a model. It is important to understand that all preprocessing such as PCA is performed on this subset of data, and not on the overall 32 samples.The remaining samples are then divided into two sets of Nvt = 4 samples each. One, referred to below as the test set, is used to determine when the network has converged. (Note that the use of the term validation to describe the dataset used to determine when the training of a neural network should be terminated has been here replaced by test set so that the term validation set can be used for the PLS and PCR calculations. Thus the terms validation set and test set are used in an inverse manner to much of the literature involving neural networks).The error in the training set decreases as the weights are improved. The error in the test set converges to a minimum and then increases again. The network is judged to have converged when the test set error is lowest. However, unlike normal regression, it is not correct to compare the mean error of the test set for the purposes of validation.The reason for this is that the test set, although not directly responsible for a model, has an influence on when the network is judged to have been optimized, and, hence, this error will be low. A small error in the test set is not necessarily an indication that the network can successfully predict unknown samples. The third, validation, set, consists of four samples that are totally left out of the initial computations. The validation error is the error arising from these remaining four samples.The selection of the test and validation sets is illustrated in Fig. 3. The samples are first randomized as in the case of PCR and PLS. Subsequent to that the first four samples are removed to act as a validation set. Then, in sequence samples 5 to 8, 9 to 12 up to 29 to 32 are removed in turn as test sets. The procedure is repeated, removing samples 5 to 8 as a validation set, and then, successively, samples 1 to 4, 9 to 12, 13 to 16, etc. If the number in the validation and testing sets are equal to one another, Nvt, then the number of calculations, Q, where Nvt samples are extracted from the total randomized set for the validation set together with Nvt different samples for the test set, are Q = (M) (M 2 1) where M = N/Nvt = 8 and this is the number of runs that is necessary for the neural network to ensure that each sample is included in each validation and test set. In the case reported in this paper, 56 ( = 8 3 7) computations are required.Note that the mean of the training set is subtracted from the corresponding validation and test sets, and the loadings of the PCs computed from the training set used to calculate the inputs to these sets, which are then weighted by the appropriate numbers obtained from the training set to give predicted outputs. Non-cross validated neural networks ANNs can also be performed on non-cross validated data. In this case, the calculation is somewhat simpler.A set of four samples is removed in turn for the test set. These are used to determine when the network converges, and are removed in a similar fashion to the cross-validated PCR or PLS. Eight computations are performed in total, with each group of samples being removed in turn. Note, however, the test set has a different purpose to the validation set in PCR or PLS. The error in estimating these samples is minimised during the ANN calculation. These samples cannot strictly be used in crossvalidation as they have been used in assessing the performance of the model.Calculation of errors The calculation of errors is more sophisticated than in the case of straight PLS and PCR, but must be performed correctly for comparability. If done in the wrong way, neural networks might appear to work spuriously well, despite the evidence. It is essential to recognise that comparison of methods depends critically on how the ability to predict data is measured, and that there is a fundamental difference between how this prediction ability can be estimated using neural networks as described in this paper, and using standard regression.There are frequent claims in the literature about one method being more effective than another; however these claims are, in part, a function of how the quality of the predictions is calculated. In the cross-validated method proposed in this paper, there are Q ( = 56) validation/test runs. Every (M 2 1) ( = 7) runs, the validation set changes.For the first seven runs it consists of Fig. 2 Summary of neural network. Fig. 3 Summary of cross-validation and testing. Cycles for the neural network; validation set is shaded vertically, test set horizontally. 1018 Analyst, October 1997, Vol. 122samples 1 to 4, for runs 8 to 14 it consists of samples 5 to 8 and so on. A variable p = mod [i/(M 2 1)] + 1 can be calculated, where i is the run number, equal to 1 for runs 1 to 7, 2 for runs 8 to 15, etc., can be computed. The validation set consists of samples 4(p 2 1) + 1 to 4p.The test set consists of the remaining seven possible combinations of four samples. Several errors may be computed. 1. The standard error for the validation samples which is given by SEPV = ( � ) ( ) ( ) ( ) y y M j ij i m m j m j m m M - - = - + = - + = = å å å 2 7 1 1 7 4 1 1 4 1 1 which in the case of this paper, is the root mean square error of the validation samples.Note that each sample is repeated 7 ( =M 2 1) times, hence a more complex equation. 2. The overall error of prediction for all samples across all validation/test runs given by SEPM = ( � ) y y N Q j ij i Q j N - � = = å å 2 1 1 each sample being estimated 56 ( =Q) ti. The standard error for the training set, SEPTR is calculated for the 42 = (M 2 1) 3 (M 2 2) estimates of the training set, defined as SEPTR = ( - 1) ( ) tr ( � ) y y N M M j ij i j j N - � � - ' = å å 2 1 2 where jtr are the training runs for sample j.For example, for sample 9, these are runs 1, 3–8, 10–14, 22–23, 25–30, 32–37, 39–44, 46–51 and 53–56. 4. A fourth error is of interest. It is debatable whether the four test samples should be used in the overall error. This is because they have been used to determine the minimum model for crossvalidation. An alternative overall error excluding these four test samples each time can be calculated as follows SEPM = ts ¢ - � - Ï = å å ( � ) ( ) y y N M j ij i j j N 2 1 2 1 where jts is the group of samples not belonging to the test set.For each sample, seven runs will be excluded, making 49 runs in total. As in the case of straight multivariate methods it is often useful to average the estimates over several runs. In many cases this procedure is important, as it is the only way to obtain an overall model. In PCR, cross-validation is often a separate step to producing a full predictive model. First the number of components or effectiveness of the model is determined and then the calculation, using an optimum number of components, is repeated again on the entire dataset.This is not possible for ANNs because the test set critically determines when the model converges. Removing a different test set results in a different optimum model. The algebraic definition of an overall model is extremely fraught using the methods described in this paper, because principal components will differ according to which samples removed for the test set.The PCs on a subset of 28 samples differ for each subset. Interesting features such as swap over of PCs, changing signs of scores and often completely different values for later PCs are encountered. Hence, the average estimate over several validation/test runs is of some significance. Unlike multivariate methods, each training and test set sample is removed seven times, not once, so all of the four errors above have corresponding and differing average estimates. 1.The standard error for the average estimate for the validation samples which is given by SEPVav v = - = å ( � ) y y N j j j N 2 1 where v � ( ) ( )( ) ( ) y M j i M s M s = - = - - + -å1 1 1 1 1 and s = mod[(i 2 1)/Nt] + 1, e.g., for sample 9 it equals 3, the validation set being represented in runs 15 to 21, as this sample belongs to the third group of four. 2. The overall error of average prediction for all samples across all validation/test runs given by SEPMav = - = å( � ) y y N j j j N 2 1 where � � / y y Q j ij i Q = = å1 3.The error of prediction for the average training set results, SEPTRav, can likewise be calculated, using t tr � � /( )( ) y y M M j ij i j = - - ' å 1 2 4. The equivalent error, SEPMAav can be calculated, removing the test samples. For the non-cross-validated data only two errors are strictly of interest. 1. The standard error for the training set given by SEPTR = ( t t y y y y N M j ij j ij j iN N j j N i N - + - é ë êêê ù û úúú � - = + = - = å å å � ) ( � ) ( ) ( ) 2 2 1 1 1 1 1 This is the root mean square of prediction of the training set samples counted seven times, for each of the M 2 1 validation runs. 2. The overall model error given by SEPM = =1 ( � ) y y N Q j ij j N i M - � = å å 2 1 Analyst, October 1997, Vol. 122 1019Fig. 4 Graphs of predicted versus observed for anthracene. Table 2 Summary of the RMS errors (mg ml21) Anthracene Fluoranthene Total detectable PAHs Multivariate methods— No cross-validation PCR PLS PCR PLS PCR PLS SEPP 1.582 1.379 7.436 5.150 56.604 47.938 Cross-validation PCR PLS PCR PLS PCR PLS Non-av Av Non-av Av Non-av Av Non-av Av Non-av Av Non-av Av SEPM 1.645 1.586 1.436 1.378 7.553 7.321 5.506 5.156 59.33 56.41 50.92 48.02 SEPTR 1.569 1.523 1.333 1.300 7.261 7.084 4.996 4.828 54.96 53.08 46.41 44.92 SEPV 2.101 2.101 2.010 2.010 9.343 9.343 8.231 8.231 83.76 83.76 75.29 75.29 Artificial neutral networks— No cross-validation Non-av Av Non-av Av Non-av Av SEPM 2.342 1.864 12.319 9.853 103.78 74.22 SEPTR 2.283 1.811 11.590 9.177 106.14 75.27 Cross-validation Non-av Av Non-av Av Non-av Av SEPM 2.953 2.062 16.08 10.81 105.63 72.86 SEPMA 2.817 1.923 15.14 9.98 104.37 70.82 SEPTR 2.731 1.803 14.64 9.59 97.84 70.27 SEPV 3.286 2.894 17.86 13.91 105.45 79.91 1020 Analyst, October 1997, Vol. 122which includes the both training and validation samples.The two equivalent errors on the averaged sample estimates can also be calculated. Results Analysis of Errors The results of the various errors are given in Table 2. The graphs of predicted versus observed concentrations for PCR and ANN for anthracene are given in Fig. 4. Only certain graphs are selected for brevity. A substantial number of conclusions are possible. For the multivariate methods, in all cases SEPV > SEPM > SEPP > SEPTR. This is expected for normal datasets.The validation error should be highest as the validation data was not used to form the model, and the training error least. SEPM should be close to SEPP. In all cases it is slightly higher, reflecting the fact that four samples are not included in computing the overall model and so their inclusion increases this error by a small amount. Averaging the estimates over all cross-validation runs is useful, and has an important influence over the error estimates. In order to get an overall averaged model from cross-validation it is useful to perform this operation, and the number of residuals for the averaged model over all cross-validated runs and the non-cross validated data can be compared directly.Since each sample is a member of only one validation set, SEPVav = SEPV. However, in all other cases, averaging reduces the error as expected, and as is clearly seen in the corresponding graphs. The averaged SEPM is now very close to SEPP in all cases.The amount of reduction in the error estimate on averaging reflects the underlying quality of the model. If the true model is completely linear, and all deviations from linearity are normally distributed with a mean of 0, the error should be reduced by A7 = 2.646 for the training set, reflecting the fact that each sample is included in seven training sets, and A8 = 2.828 for the overall model error, which is clearly not the case. The reason for this is that the underlying model is not exactly linear, indicating a small lack-of-fit. The reduction in error as sample estimates are averaged over cross-validation runs represents a valuable diagnostic tool.It is interesting to note in these results that SEPTR proportionally reduced by less than SEPM in all cases, as predicted, the average reduction in SEPTR being 3.0% and SEPM 4.6%, again suggesting that although there is a small but significant lack-of-fit, the dataset is reasonable. The results using ANNs are quite interesting.Without crossvalidation, the modelling error is only slightly higher than the training error. It is debatable as to which statistic best represents the true error. In this case, averaging the results of eight runs has quite a significant influence on the size of errors, reducing them by 20 to 30%. Normally distributed errors should reduce by 100 3 [1 2 (1/A8)] or around 65% on averaging. This indicates that the model improves considerably when performing repeat calculations using ANNs, as expected, but the amount by which the error reduces suggests that a perfect model will not be achieved even after averaging a larger number of runs (which could be done by randomizing the order of the original data again). It is debatable as to what to use as the predictive model to use for a neural network, whether to average the models for several runs or keep to the model of a single test run.Owing to the need to use a test set for to check for convergence, ies are left out of the computation each time.Developing a model removing just one set of samples will be unrepresentative of the dataset. The values for noncross- validated ANN errors in Table 2 are all higher than those for normal regression, suggesting that the averaged model obtained here is not as good as the models obtained for PLS and PCR. For cross-validated ANNs, SEPV > SEPM > SEPMA > SEPTR as expected in all cases.Averaging the sample estimates maintains this order also, which is interesting given the different number of samples in the training and validation datasets. Note Fig. 4 Continued— Analyst, October 1997, Vol. 122 1021that averaging has a greater influence on the errors than whether a sample is a member of a particular group (training, validation, etc.). The errors for the averaged cross-validated models are about comparable in size to the corresponding average noncross- validated errors.However, the non-averaged crossvalidated errors for anthracene and fluoranthene are significantly higher than the corresponding cross-validated errors. A possible reason is that only 24 samples or 3/4 of the original data are used for determining the model. This leads to a small number of highly outlying predictions as can be seen graphically. Because a root mean square error criterion is calculated, these outlying predictions have a major influence on the size of the error.In practical terms, this suggests that performing ANNs using one group of 24 samples has a chance of producing a very poor model. When 28 samples are used, this probability decreases. Application to Estimation of PAHs The methods in this paper can be employed to compare methods for estimation of PAHs using various chemometric techniques. In all cases, PLS outperforms PCR. It is important to calculate a number of indicators to ensure that similar trends are obeyed no matter which error is calculated.Had PCR proved superior using one or more indicators, this would lead to a less unambiguous conclusion. On the whole PLS is expected to outperform PCR providing the experimental dataset is well designed, as PLS takes into account variance in both the concentration and spectral dimensions. If PLS performs worse than PCR this might suggest that there are some outliers or unusual measurements, which could influence the statistics if one of these is removed to the validation set.Also, the number of samples has to be much greater than the number of components, plus the variability between samples sufficient, to allow sensible calibration models. A series of samples that are effectively replicates may not exhibit this trend. One experimental danger, however, with this conclusion is that a great deal of reliance is placed on the independent concentration estimate, in the case of this paper performed by GC–MS.The conclusions of Table 2 only state that using the PLS algorithm, a better mathematical model can be developed that predicts the GC–MS concentration estimates. If there are large errors in the GC–MS measurement, PLS may not necessarily be a superior approach for concentration estimates as it is influenced by the quality of the independent measurement. It is out of the scope of this paper to discuss the nature of GC–MS measurements. ANNs are harder to compare directly with multivariate methods.A sensible model requires several test and validation set combinations. As can be seen by the graphs, a single ANN run will probably result in a number of very poor predictions. Hence it is strongly recommended that the results of all these runs are averaged. The averaged estimates both for non-crossvalidated and cross- validated models result in poorer predictions than PLS and PCR in most cases. The single exception is the averaged PCR validation error for total PAHs.A possible reason is that pure PAHs can be predicted quite well. In some cases a pure PAH has several characteristic wavelengths, and it is even possible to produce quite accurate linear calibration at such wavelengths. The quality of a univariate model is primarily related to spectral overlap. In the absence of noise, it is always possible to obtain accurate calibration models using a limited number of wavelengths. For example, if there are only two components in a mixture, the ratio of absorbances at two wavelengths can be employed to determined the relative amounts of each component in the mixture.The distribution of concentrations in the mixture set is not relevant. However, for the total PAHs linear models are less easy to construct and a more empirical approach such as ANNs may function better, so that ANNs are not so bad in this case. It is recommended that for calibration of concentrations of single PAHs, PLS or possibly PCR are employed.ANN being a non-linear method exhibits few advantages. However, ANNs may perform reasonably well when predicting parameters such as a sum of total concentrations of a set of compounds, where a linear model may be less appropriate. For more complex mixtures, for example of 50 to 100 compounds, PLS or PCR may break down, and it is worth exploring ANNs under such circumstances. Conclusion This paper has highlighted the importance of a properly thought out scheme for cross-validation, and calculation of the associated errors.The particular dataset is predicted well by PLS and PCR, but neural networks might appear to work anomalously well if the wrong statistics are calculated. A great deal more information can be obtained using the type of error analysis proposed in this paper, including whether there truly is an underlying linear model. There is not a great deal of literature on confidence in, and estimates of, lack-of-fit for multivariate calibration, in contrast to the very substantial corresponding literature on univariate calibration.Monash University, Australia, is thanked for funded sabbatical leave for F.R.B. to visit Bristol. References 1 Cirovic, D. A., Brereton, R. G., Walsh, P. T., Ellwood, J. A., and Scobbie, E., Analyst, 1996, 121, 575. 2 Martens, H., and Naes, T., Multivariate Calibration, Wiley, New York, 1989. 3 H�oskuldsson, A., J. Chemom., 1988, 2, 211. 4 Wold, S., Geladi, P., Esbensen, K., and Ohman, J., J.Chemom., 1987, 1, 41. 5 Kowalski, B. R., and Seasholtz, M. B., J. Chemom., 1991, 5, 129. 6 Demir, C., and Brereton, R. G., Analyst, 1997, 122, 631. 7 Geladi, P., and Kowalski, B. R., Anal. Chim. Acta, 1986, 185, 1. 8 Brown, P. J., J. R. Stat. Soc. Ser. B., 1982, 44, 287. 9 Rumelhart, D. E., and McClelland, J. L., Parallel Distributed Processing, MIT Press, Cambridge, MA, 1986, vol. I. 10 Blank,T. B., and Brown, S. D., Anal.Chim. Acta., 1993, 277, 273. 11 Walczak, B., and Wegscheider, W., Anal. Chim. Acta., 1993, 283, 508. 12 Blank, T. B. and Brown, S. D., Anal. Chem., 1993, 65, 3081. 13 Burden, F. R., J. Chem. Inf.Comput. Sci., 1994, 34, 1229. 14 Deane, J. M., Multivariate Pattern Recognition in Chemometrics, illustrated by case studies, ed. Brereton, R. G., Elsevier, Amsterdam, 1992, ch. 5. 15 Stone, M. J., J. R. Stat. Soc. Ser. B., 1974, 36, 111. 16 Wold, S., Technometrics, 1978, 20, 397. 17 Krzanowski, W.J., Biometrics, 1987, 44, 575. 18 Gemperline, P. J., J. Chemom., 1989, 3, 549. 19 The MathWorks Inc., MA, USA. Paper 7/03565I Received May 22, 1997 Accepted July 28, 1997 1022 Analyst, October 1997, Vol. 122 Cross-validatory Selection of Test and Validation Sets in Multivariate Calibration and Neural Networks as Applied to Spectroscopy Frank R. Burdenab, Richard G. Breretonb and Peter T. Walshc a Chemistry Department, Monash University, Clayton, Victoria, Australia 3168 b School of Chemistry, University of Bristol, Cantock’s Close, Bristol, UK BS8 1TS c Health and Safety Laboratory, Health and Safety Executive, Broad Lane, Sheffield, UK S3 7HQ Cross-validated and non-cross-validated regression models using principal component regression (PCR), partial least squares (PLS) and artificial neural networks (ANN) have been used to relate the concentrations of polycyclic aromatic hydrocarbon pollutants to the electronic absorption spectra of coal tar pitch volatiles.The different trends in the cross-validated and n-cross-validated results are discussed as well as a method for the production of a true cross-validated neural network regression model. It is shown that the methods must be compared through the errors produced in the validation sets as well as those given for the final model. Various methods for calculation of errors are described and compared. The separation of training, validation and test sets into fully independent groups is emphasized.PLS outperforms PCR using all indicators. ANNs are inferior to multivariate techniques for individual compounds but are reasonably effective in predicting the sum of PAHs in the mixture set. Keywords: Polycyclic aromatic hydrocarbons; chemometric; neural networks; regression In a previous paper the concentrations of polycyclic aromatic hydrocarbons (PAHs) in coal tar pitch volatile sources obtained by the Health and Safety Executive (HSE)1 have been reported.The choice of PAHs in this paper is for illustration only : it does not imply any HSE acceptance or approval of the list or method. It was shown that multivariate methods for regression such as partial least squares (PLS) were superior to univariate single wavelength calibration for prediction of concentrations. In this paper we apply both principal components regression (PCR) and PLS2–8 and artificial neural networks (ANNs)9–13 to the estimation of concentrations of PAHs an experimental dataset by calibrating known concentrations estimated by GC–MS to electronic absorption spectra (EAS).This paper discusses cross-validation.14–18 There is surprisingly limited discussion of cross-validation in the chemometrics literature, except in the form of algorithmic descriptions of which there are numerous. There are two different reasons for cross-validation, often confused. The first is to find out how many components/iterations (in the case of neural networks) are necessary for an adequate model.The second is to compare the effectiveness of different models. The use of test sets is very common in classification studies, for example, where a model is formed on one group and the predictions tested on another group. A good way of testing the effectiveness of a method is to leave each possible group out at a time, until all groups are left out once, and average the quality of predictions.Some investigators use methods for leaving one sample out at a time, but in cases of large datasets, this can be prohibitive. This paper concentrates on methods for leaving a group of samples out at a time. There is much literature on neural networks in chemistry, and many investigators want to compare these to conventional chemometric approaches, e.g., for classification or calibration. However, a common approach must be found for method comparison and there are substantial problems here.Most common methods for neural networks remove a set of samples to test convergence of a model (called the test set in this paper). It is unfair then to use this set of samples for validation, as the network is trained to minimise the error on these samples. A third and independent group must then be selected for validating the model. If a ‘leave one sample out at a time’ approach is adopted there will be difficulties for the following reasons. First, the model obtained on a single test sample will be dependent on that sample not being an outlier.A minimum size of test set of around four is recommended. Second, the number of calculations will be large: for 100 samples, there will need to be 9 900 computations of the network if every possible combination of single samples is tested. Using too few test sets can result in quite unrepresentative models and analyses of errors. Hence, methods for cross validation that include several samples in each group must be employed.In this paper the previously published PAH dataset1 is used to demonstrate a number of methods for cross validation and error analysis. Method The PAH data set consisted of the EAS of 32 samples taken at 181 wavelengths at 1 nm intervals from 220 to 400 nm inclusive, the X matrix. The concentrations of 13 PAHs in each of the samples had previously been measured by GC–MS, and consists of the y data. In this paper, the estimated concentrations of (a) anthracene, (b) fluoranthene and (c) the sum of concentrations of 13 PAHs were analysed, each, in turn, forming a y vector, of length 32.Similar conclusions can be obtained from the other PAHs in the mixture, but the dataset described in this paper has been reduced for sake of brevity. A concentration summary is given in Table 1. Further experimental details are reported elsewhere,1 including more information on the overall dataset. All of the calculations were carried out using MATLAB routines written by the authors but making use of the well-tested toolbox published by B.Wise and the Mathworks Neural Network Toolbox.19 All the algorithms were validated against Table 1 Concentration summary Concentration/mg ml21 Total detectable Anthracene Fluoranthene PAHs Mean 19.8 160.2 903.8 s 10.8 64.4 375.6 Analyst, October 1997, Vol. 122 (1015–1022) 1015other sources, including in-house software in C and Visual Basic and the Propagator neural network package.Multivariate Methods Principal component regression Since the number of wavelengths far exceeds the number of samples, it is normal to perform data reduction prior to regression, both in order to reduce the size of the problem and to remove colinearities in the dataset, so PCR is the technique of preference to multiple linear regression, which is not discussed in this paper. Principal components analysis was performed on the centred but unstandardized data.After six components have been computed 99.9997% of the variance has been explained, so it was decided to keep only the first six components. Similar conclusions are reached for anthracene and fluoranthene. Note that the main aim of this paper is to compare various approaches for cross-validation and calculation of errors and not how crossvalidation can be employed to determine the optimum number of components. If T is the matrix consisting of the scores of the first six principal components then the regression coefficients are produced simply via the pseudoinverse as follows b = (TAT)21TAy so that �y t b y ij ija ja j a A = + = å1 where �yij is the estimated concentration of the jth sample, using the ith method for calculating the concentrations (see below for extended discussion of this), there are A significant components and the scores matrix is denoted by T.Partial least squares Partial least squares regression can also be employed, as an alternative to PCR.In the case discussed here, PLS1, regressing one y variable at a time, was felt to be more suitable than PLS2, involving regressing all y variables at a time, especially when direct comparisons between methods are required. The optimum number of PLS components can be established as follows. The criterion employed is the variance in the y (concentration) predictions rather than the X or spectral direction. In this way, PLS differs from PCR in which only the X variance can be used.For the purpose of determining the number of components, the ‘leave one out’ method for cross validation is employed, whereby the prediction error as each sample in turn is removed is estimated. The prediction on the training set will be reduced as more components are computed, whereas the prediction on the test set shows a minimum and then increases as too many components are employed. It is emphasized that this method is a fairly crude approach, and depends on the structure of the data, but is sufficiently good for determining how many components are useful to describe the data.From the values obtained using the sum of the concentration of 13 detected PAHs, a good number of PLS components to choose is six. It is emphasized that the main objective of this paper is strategy for cross-validation and comparison of models, and details of the dataset analyzed in this paper have already been published, so selection of the number of significant components is not discussed in detail below reason of brevity.Cross validation and calculation of errors In the cases described here a ‘leave n out’ technique for crossvalidation is employed. A number of samples (n) are removed from the dataset and a model estimated from the remaining data, which is used to validate the model. In turn, each possible group of samples, where any particular sample is included once, is removed, in turn, to provide average estimates of models and errors over the entire data set. It is important that the validation set has no influence on the regression equations. For PCR, and PLS, if there are N samples in total, and Nv samples are to be left out and used as a validation set, the training set of Nt = N 2Nv, must have any pre-processing, such as mean centering or standardization, applied after the validation samples have been removed; these preprocessing parameters such as means and standard deviations of the training set are then applied to the validation set.Note that this implies that means and standard deviations are calculated each time a validation set is removed from the full data, and that these do not necessarily correspond to the overall statistics; in this way the approach in this paper differs from that employed by some other authors. Note that when PCA or PLS is performed, this must also be repeated for each training set in turn and not on the entire dataset.The case for the ANNs is more complex and is discussed in more detail below. The first step is to randomize the order of the data. In many key experimental situations, data are often presented to an investigator in some form of experimental sequence. Without this first stage, a cross-validation may be performed in a biased manner. Once randomized, the new order of the data is maintained. We assume, in this paper, that the samples are reordered in a random manner, so sample j = 7 is the seventh randomized sample.Note that the random seed in all computations in this paper is identical, so that the order with which the samples are arranged is constant for comparison; obviously the calculation of errors depends in part on the method of ordering, the most important aspect being that the original experimental order is not followed. In order to include all possible samples in one validation set, M combinations of training and validation sets were produced where M = N/Nv and Nv is the number of samples in each of the validation sets.Ideally, N is a multiple of Nv, although other regimes, not reported in this paper, are possible. In the case discussed below, four samples are removed, in turn, each time cross-validation is performed, implying eight calculations as illustrated in Fig. 1. Note that removing two samples results in validation sets that are too small, and removing eight samples reduces the number of validation sets to four.For validation run number i (where i varies from 1 to 8), samples j = (i 2 1) 3 Nv + 1 to i 3 Nv are removed. The training set consists of the remaining Nt = 28 samples. For each of the M training/validation sets, a model is produced that predicts the concentration �yij for the ith set (i being a number between 1 and 8 in this case) and the jth sample (varying between 1 and 32). Three possible predictions can be made. (i) The predictions for the samples in the validation sets.Note that there will be one prediction per sample, as each sample is a member of only one validation set. (ii) The predictions for the samples in the training sets. For each sample there will be seven predictions. (iii) Predictions for the overall model, taking into account both validation and test samples, resulting in eight predictions per sample. It is common to transform these predictions into errors for the overall dataset, and five possible errors can be calculated.Fig. 1 Summary of cross-validation for multivariate methods; validation set is shaded. 1016 Analyst, October 1997, Vol. 1221. The standard error for prediction for the validation set given by SEPV = v v ( � ) ( ) y y N j ij j i N iN i M - = - + = å å 2 1 1 1 which in the case of this paper, is simply the root mean square error of prediction of the validation samples. 2. The standard error for the training set given by SEPTR = v v ( � ) ( � ) ( ) ( ) y y y y N M j ij j ij j iN N j i N i M - + - é ë êêê ù û úúú � - = + = - = å å å 2 2 1 1 1 1 1 This is the root mean square of prediction of the training set samples counted seven times, for each of the M 2 1 validation runs. 3. The overall model error given by SEPM = ( � ) y y N M j ij j N i M - � = = å å 2 1 1 which includes the both validation and test samples. Note that this is not the same as the error calculated by performing regression on the entire dataset (see below). An additional possibility is to average the values of the predicted data over all validation runs.The average prediction is given by � � / y y M j ij i M = = å1 A variance on this is to simply average the predictions for the training sets to give t j ij ij i jN M i j N y y y M � � � ( ) ( ) + - = + - å å v v =1 1 1 1 These averaged predictions will result in lower errors, which can be defined as follows. 4. The standard error for the averaged prediction of the training set SEPTRav = - = å ( � ) t j j j N y y N 2 1 5.The standard error for the overall averaged model given by SEPMav = - = å ( � ) y y N j j j N 2 1 Note that since each validation sample is removed only once, there is no corresponding average error for the validation set. It is not, of course, necessary to model the overall data using an average of cross-validated models, but PCR or PLS can beperformed on the entire dataset to give predictions of the form �yj where � [( ] y b t y b y j a ja a A a j a A = + = ¢ ¢ + = - = å å 1 1 x x P PP ) ( )-1 where øy is the mean concentration (or y value), øx is a vector of mean absorbances over all wavelengths, and P is a loadings matrix, as is normal for centred data.This error (called prediction error) is then given by SEPP = ( � ) y y N j j j N - = å 2 1 It is important to recognise that there are differences between this error and SEPMav as will be evident below. Artificial Neural Networks An alternative approach is to use neural networks for calibration. The example in this paper is relatively simple, so only a fairly basic approach is employed.The methods for calculations of errors can be applied to neural networks of any level of sophistication, but it is not the primary purpose of this paper to optimize the network. The first step is to define the inputs, outputs and number of hidden nodes (architecture) of the network. A back-propagation9 feed-forward network using one hidden layer was employed which made use of sigmoidal transfer functions of the form 1 1- where net = net e z w p p p P - = å1 and zp is the input to the given node, with wp the weights. The neural network program uses the back-propagation algorithm to find the weights. The output is simply the estimated concentration.In order to reduce the size of the problem each concentration was estimated separately, so that all calculations only had one output, to be comparable to the PLS and PCR results, which were performed separately for each compound.For the given problem, it was found that one hidden layer with a single node was optimum. The input layer of principal component scores, together with a bias node, was connected to the hidden node. The hidden node, together with another bias node was connected to the output node. The bias nodes had no input and delivered an output of 1. The question of the input to the network is an important one.Using all 181 wavelengths would result in a large number (184) of weights when bias and hidden nodes are included, which is clearly unjustified by the present data set consisting only of 32 samples. Hence, the data was reduced first using PCA. As above, only six (linear) principal components were kept as the input to the network. A seventh, bias nodequaling one was also added to the input, resulting in seven inputs. Hence, there are 7 (input/hidden) + 2 (hidden/output) = 9 weights.This is illustrated in Fig. 2. It is the common practice when training a neural network to randomize the initial set of weights and then allow the backpropagation algorithm to continuously refine their values in order to reduce the SEPTR of the network training set. This Analyst, October 1997, Vol. 122 1017process is stopped when the error of an independent test set starts to rise (signifying entry into a memorizing domain) and the weights at this test set minimum are retained.If the neural network is run again on the same set of data it is likely (if the input data is not fully independent) to arrive at similar errors with a different, though just as valid, set of weights. In order to start the neural network, a set of randomized weights is required. The initial randomized set of weights can produce a bad starting point for the back-propagation algorithm to seek a global minimum of the test set error so that it is essential for the cross-validation procedure that a good initial set be found.In the present work this was accomplished by producing many ( Å 100) initial sets and choosing the best of these. In order to ensure that any repeat calculation found the same starting point the same seed for the MATLAB random number generator was always used for the first random choice of weights. Cross validation The issue of cross validation is much more complex in the case of neural networks as compared with normal regression. In the application reported below, the data was divided into three sets. A training set consisting of 24 samples is used to obtain a model.It is important to understand that all preprocessing such as PCA is performed on this subset of data, and not on the overall 32 samples. The remaining samples are then divided into two sets of Nvt = 4 samples each. One, referred to below as the test set, is used to determine when the network has converged.(Note that the use of the term validation to describe the dataset used to determine when the training of a neural network should be terminated has been here replaced by test set so that the term validation set can be used for the PLS and PCR calculations. Thus the terms validation set and test set are used in an inverse manner to much of the literature involving neural networks). The error in the training set decreases as the weights are improved. The error in the test set converges to a minimum and then increases again.The network is judged to have converged when the test set error is lowest. However, unlike normal regression, it is not correct to compare the mean error of the test set for the purposes of validation. The reason for this is that the test set, although not directly responsible for a model, has an influence on when the network is judged to have been optimized, and, hence, this error will be low. A small error in the test set is not necessarily an indication that the network can successfully predict unknown samples.The third, validation, set, consists of four samples that are totally left out of the initial computations. The validation error is the error arising from these remaining four samples. The selection of the test and validation sets is illustrated in Fig. 3. The samples are first randomized as in the case of PCR and PLS. Subsequent to that the first four samples are removed to act as a validation set.Then, in sequence samples 5 to 8, 9 to 12 up to 29 to 32 are removed in turn as test sets. The procedure is repeated, removing samples 5 to 8 as a validation set, and then, successively, samples 1 to 4, 9 to 12, 13 to 16, etc. If the number in the validation and testing sets are equal to one another, Nvt, then the number of calculations, Q, where Nvt samples are extracted from the total randomized set for the validation set together with Nvt different samples for the test set, are Q = (M) (M 2 1) where M = N/Nvt = 8 and this is the number of runs that is necessary for the neural network to ensure that each sample is included in each validation and test set.In the case reported in this paper, 56 ( = 8 3 7) computations are required. Note that the mean of the training set is subtracted from the corresponding validation and test sets, and the loadings of the PCs computed from the training set used to calculate the inputs to these sets, which are then weighted by the appropriate numbers obtained from the training set to give predicted outputs.Non-cross validated neural networks ANNs can also be performed on non-cross validated data. In this case, the calculation is somewhat simpler. A set of four samples is removed in turn for the test set. These are used to determine when the network converges, and are removed in a similar fashion to the cross-validated PCR or PLS. Eight computations are performed in total, with each group of samples being removed in turn.Note, however, the test set has a different purpose to the validation set in PCR or PLS. The error in estimating these samples is minimised during the ANN calculation. These samples cannot strictly be used in crossvalidation as they have been used in assessing the performance of the model. Calculation of errors The calculation of errors is more sophisticated than in the case of straight PLS and PCR, but must be performed correctly for comparability.If done in the wrong way, neural networks might appear to work spuriously well, despite the evidence. It is essential to recognise that comparison of methods depends critically on how the ability to predict data is measured, and that there is a fundamental difference between how this prediction ability can be estimated using neural networks as described in this paper, and using standard regression. There are frequent claims in the literature about one method being more effective than another; however these claims are, in part, a function of how the quality of the predictions is calculated.In the cross-validated method proposed in this paper, there are Q ( = 56) validation/test runs. Every (M 2 1) ( = 7) runs, the validation set changes. For the first seven runs it consists of Fig. 2 Summary of neural network. Fig. 3 Summary of cross-validation and testing. Cycles for the neural network; validation set is shaded vertically, test set horizontally. 1018 Analyst, October 1997, Vol. 122samples 1 to 4, for runs 8 to 14 it consists of samples 5 to 8 and so on. A variable p = mod [i/(M 2 1)] + 1 can be calculated, where i is the run number, equal to 1 for runs 1 to 7, 2 for runs 8 to 15, etc., can be computed. The validation set consists of samples 4(p 2 1) + 1 to 4p. The test set consists of the remaining seven possible combinations of four samples. Several errors may be computed. 1. The standard error for the validation samples which is given by SEPV = ( � ) ( ) ( ) ( ) y y M j ij i m m j m j m m M - - = - + = - + = = å å å 2 7 1 1 7 4 1 1 4 1 1 which in the case of this paper, is the root mean square error of the validation samples.Note that each sample is repeated 7 ( =M 2 1) times, hence a more complex equation. 2. The overall error of prediction for all samples across all validation/test runs given by SEPM = ( � ) y y N Q j ij i Q j N - � = = å å 2 1 1 each sample being estimated 56 ( =Q) times. 3. The standard error for the training set, SEPTR is calculated for the 42 = (M 2 1) 3 (M 2 2) estimates of the training set, defined as SEPTR = ( - 1) ( ) tr ( � ) y y N M M j ij i j j N - � � - ' = å å 2 1 2 where jtr are the training runs for sample j. For example, for sample 9, these are runs 1, 3–8, 10–14, 22–23, 25–30, 32–37, 39–44, 46–51 and 53–56. 4. A fourth error is of interest. It is debatable whether the four test samples should be used in the overall error.This is because they have been used to determine the minimum model for crossvalidation. An alternative overall error excluding these four test samples each time can be calculated as follows SEPM = ts ¢ - � - Ï = å å ( � ) ( ) y y N M j ij i ere jts is the group of samples not belonging to the test set. For each sample, seven runs will be excluded, making 49 runs in total. As in the case of straight multivariate methods it is often useful to average the estimates over several runs.In many cases this procedure is important, as it is the only way to obtain an overall model. In PCR, cross-validation is often a separate step to producing a full predictive model. First the number of components or effectiveness of the model is determined and then the calculation, using an optimum number of components, is repeated again on the entire dataset. This is not possible for ANNs because the test set critically determines when the model converges.Removing a different test set results in a different optimum model. The algebraic definition of an overall model is extremely fraught using the methods described in this paper, because principal components will differ according to which samples removed for the test set. The PCs on a subset of 28 samples differ for each subset. Interesting features such as swap over of PCs, changing signs of scores and often completely different values for later PCs are encountered.Hence, the average estimate over several validation/test runs is of some significance. Unlike multivariate methods, each training and test set sample is removed seven times, not once, so all of the four errors above have corresponding and differing average estimates. 1. The standard error for the average estimate for the validation samples which is given by SEPVav v = - = å ( � ) y y N j j j N 2 1 where v � ( ) ( )( ) ( ) y M j i M s M s = - = - - + -å1 1 1 1 1 and s = mod[(i 2 1)/Nt] + 1, e.g., for sample 9 it equals 3, the validation set being represented in runs 15 to 21, as this sample belongs to the third group of four. 2. The overall error of average prediction for all samples across all validation/test runs given by SEPMav = - = å( � ) y y N j j j N 2 1 where � � / y y Q j ij i Q = = å1 3. The error of prediction for the average training set results, SEPTRav, can likewise be calculated, using t tr � � /( )( ) y y M M j ij i j = - - ' å 1 2 4.The equivalent error, SEPMAav can be calculated, removing the test samples. For the non-cross-validated data only two errors are strictly of interest. 1. The standard error for the training set given by SEPTR = ( t t y y y y N M j ij j ij j iN N j j N i N - + - é ë êêê ù û úúú � - = + = - = å å å � ) ( � ) ( ) ( ) 2 2 1 1 1 1 1 This is the root mean square of prediction of the training set samples counted seven times, for each of the M 2 1 validation runs. 2. The overall model error given by SEPM = =1 ( � ) y y N Q j ij j N i M - � = å å 2 1 Analyst, October 1997, Vol. 122 1019Fig. 4 Graphs of predicted versus observed for anthracene. Table 2 Summary of the RMS errors (mg ml21) Anthracene Fluoranthene Total detectable PAHs Multivariate methods— No cross-validation PCR PLS PCR PLS PCR PLS SEPP 1.582 1.379 7.436 5.150 56.604 47.938 Cross-validation PCR PLS PCR PLS PCR PLS Non-av Av Non-av Av Non-av Av Non-av Av Non-av Av Non-av Av SEPM 1.645 1.586 1.436 1.378 7.553 7.321 5.506 5.156 59.33 56.41 50.92 48.02 SEPTR 1.569 1.523 1.333 1.300 7.261 7.084 4.996 4.828 54.96 53.08 46.41 44.92 SEPV 2.101 2.101 2.010 2.010 9.343 9.343 8.231 8.231 83.76 83.76 75.29 75.29 Artificial neutral networks— No cross-validation Non-av Av Non-av Av Non-av Av SEPM 2.342 1.864 12.319 9.853 103.78 74.22 SEPTR 2.283 1.811 11.590 9.177 106.14 75.27 Cross-validation Non-av Av Non-av Av Non-av Av SEPM 2.953 2.062 16.08 10.81 105.63 72.86 SEPMA 2.817 1.923 15.14 9.98 104.37 70.82 SEPTR 2.731 1.803 14.64 9.59 97.84 70.27 SEPV 3.286 2.894 17.86 13.91 105.45 79.91 1020 Analyst, October 1997, Vol. 122which includes the both training and validation samples. The two equivalent errors on the averaged sample estimates can also be calculated. Results Analysis of Errors The results of the various errors are given in Table 2. The graphs of predicted versus observed concentrations for PCR and ANN for anthracene are given in Fig. 4.Only certain graphs are selected for brevity. A substantial number of conclusions are possible. For the multivariate methods, in all cases SEPV > SEPM > SEPP > SEPTR. This is expected for normal datasets. The validation error should be highest as the validation data was not used to form the model, and the training error least. SEPM should be close to SEPP. In all cases it is slightly higher, reflecting the fact that four samples are not included in computing the overall model and so their inclusion increases this error by a small amount.Averaging the estimates over all cross-validation runs is useful, and has an important influence over the error estimates. In order to get an overall averaged model from cross-validation it is useful to perform this operation, and the number of residuals for the averaged model over all cross-validated runs and the non-cross validated data can be compared directly. Since each sample is a member of only one validation set, SEPVav = SEPV.However, in all other cases, averaging reduces the error as expected, and as is clearly seen in the corresponding graphs. The averaged SEPM is now very close to SEPP in all cases. The amount of reduction in the error estimate on averaging reflects the underlying quality of the model. If the true model is completely linear, and all deviations from linearity are normally distributed with a mean of 0, the error should be reduced by A7 = 2.646 for the training set, reflecting the fact that each sample is included in seven training sets, and A8 = 2.828 for the overall model error, which is clearly not the case.The reason for this is that the underlying model is not exactly linear, indicating a small lack-of-fit. The reduction in error as sample estimates are averaged over cross-validation runs represents a valuable diagnostic tool. It is interesting to note in these results that SEPTR proportionally reduced by less than SEPM in all cases, as predicted, the average reduction in SEPTR being 3.0% and SEPM 4.6%, again suggesting that although there is a small but significant lack-of-fit, the dataset is reasonable.The results using ANNs are quite interesting. Without crossvalidation, the modelling error is only slightly higher than the training error. It is debatable as to which statistic best represents the true error. In this case, averaging the results of eight runs has quite a significant influence on the size of errors, reducing them by 20 to 30%.Normally distributed errors should reduce by 100 3 [1 2 (1/A8)] or around 65% on averaging. This indicates that the model improves considerably when performing repeat calculations using ANNs, as expected, but the amount by which the error reduces suggests that a perfect model will not be achieved even after averaging a larger number of runs (which could be done by randomizing the order of the original data again).It is debatable as to what to use as the predictive model to use for a neural network, whether to average the models for several runs or keep to the model of a single test run. Owing to the need to use a test set for to check for convergence, it is a requirement that some samples are left out of the computation each time. Developing a model removing just one set of samples will be unrepresentative of the dataset. The values for noncross- validated ANN errors in Table 2 are all higher than those for normal regression, suggesting that the averaged model obtained here is not as good as the models obtained for PLS and PCR. For cross-validated ANNs, SEPV > SEPM > SEPMA > SEPTR as expected in all cases.Averaging the sample estimates maintains this order also, which is interesting given the different number of samples in the training and validation datasets. Note Fig. 4 Continued— Analyst, October 1997, Vol. 122 1021that averaging has a greater influence on the errors than whether a sample is a member of a particular group (training, validation, etc.).ged cross-validated models are about comparable in size to the corresponding average noncross- validated errors. However, the non-averaged crossvalidated errors for anthracene and fluoranthene are significantly higher than the corresponding cross-validated errors. A possible reason is that only 24 samples or 3/4 of the original data are used for determining the model.This leads to a small number of highly outlying predictions as can be seen graphically. Because a root mean square error criterion is calculated, these outlying predictions have a major influence on the size of the error. In practical terms, this suggests that performing ANNs using one group of 24 samples has a chance of producing a very poor model. When 28 samples are used, this probability decreases. Application to Estimation of PAHs The methods in this paper can be employed to compare methods for estimation of PAHs using various chemometric techniques. In all cases, PLS outperforms PCR.It is important to calculate a number of indicators to ensure that similar trends are obeyed no matter which error is calculated. Had PCR proved superior using one or more indicators, this would lead to a less unambiguous conclusion. On the whole PLS is expected to outperform PCR providing the experimental dataset is well designed, as PLS takes into account variance in both the concentration and spectral dimensions.If PLS performs worse than PCR this might suggest that there are some outliers or unusual measurements, which could influence the statistics if one of these is removed to the validation set. Also, the number of samples has to be much greater than the number of components, plus the variability between samples sufficient, to allow sensible calibration models.A series of samples that are effectively replicates may not exhibit this trend. One experimental danger, however, with this conclusion is that a great deal of reliance is placed on the independent concentration estimate, in the case of this paper performed by GC–MS. The conclusions of Table 2 only state that using the PLS algorithm, a better mathematical model can be developed that predicts the GC–MS concentration estimates. If there are large errors in the GC–MS measurement, PLS may not necessarily be a superior approach for concentration estimates as it is influenced by the quality of the independent measurement.It is out of the scope of this paper to discuss the nature of GC–MS measurements. ANNs are harder to compare directly with multivariate methods. A sensible model requires several test and validation set combinations. As can be seen by the graphs, a single ANN run will probably result in a number of very poor predictions. Hence it is strongly recommended that the results of all these runs are averaged.The averaged estimates both for non-crossvalidated and cross- validated models result in poorer predictions than PLS and PCR in most cases. The single exception is the averaged PCR validation error for total PAHs. A possible reason is that pure PAHs can be predicted quite well. In some cases a pure PAH has several characteristic wavelengths, and it is even possible to produce quite accurate linear calibration at such wavelengths. The quality of a univariate model is primarily related to spectral overlap. In the absence of noise, it is always possible to obtain accurate calibration models using a limited number of wavelengths. For example, if there are only two components in a mixture, the ratio of absorbances at two wavelengths can be employed to determined the relative amounts of each component in the mixture. The distribution of concentrations in the mixture set is not relevant. However, for the total PAHs linear models are less easy to construct and a more empirical approach such as ANNs may function better, so that ANNs are not so bad in this case. It is recommended that for calibration of concentrations of single PAHs, PLS or possibly PCR are employed. ANN being a non-linear method exhibits few advantages. However, ANNs may perform reasonably well when predicting parameters such as a sum of total concentrations of a set of compounds, where a linear model may be less appropriate. For more complex mixtures, for example of 50 to 100 compounds, PLS or PCR may break down, and it is worth exploring ANNs under such circumstances. Conclusion This paper has highlighted the importance of a properly thought out scheme for cross-validation, and calculation of the associated errors. The particular dataset is predicted well by PLS and PCR, but neural networks might appear to work anomalously well if the wrong statistics are calculated. A great deal more information can be obtained using the type of error analysis proposed in this paper, including whether there truly is an underlying linear model. There is not a great deal of literature on confidence in, and estimates of, lack-of-fit for multivariate calibration, in contrast to the very substantial corresponding literature on univariate calibration. Monash University, Australia, is thanked for funded sabbatical leave for F.R.B. to visit Bristol. References 1 Cirovic, D. A., Brereton, R. G., Walsh, P. T., Ellwood, J. A., and Scobbie, E., Analyst, 1996, 121, 575. 2 Martens, H., and Naes, T., Multivariate Calibration, Wiley, New York, 1989. 3 H�oskuldsson, A., J. Chemom., 1988, 2, 211. 4 Wold, S., Geladi, P., Esbensen, K., and Ohman, J., J. Chemom., 1987, 1, 41. 5 Kowalski, B. R., and Seasholtz, M. B., J. Chemom., 1991, 5, 129. 6 Demir, C., and Brereton, R. G., Analyst, 1997, 122, 631. 7 Geladi, P., and Kowalski, B. R., Anal. Chim. Acta, 1986, 185, 1. 8 Brown, P. J., J. R. Stat. Soc. Ser. B., 1982, 44, 287. 9 Rumelhart, D. E., and McClelland, J. L., Parallel Distributed Processing, MIT Press, Cambridge, MA, 1986, vol. I. 10 Blank,T. B., and Brown, S. D., Anal. Chim. Acta., 1993, 277, 273. 11 Walczak, B., and Wegscheider, W., Anal. Chim. Acta., 1993, 283, 508. 12 Blank, T. B. and Brown, S. D., Anal. Chem., 1993, 65, 3081. 13 Burden, F. R., J. Chem. Inf.Comput. Sci., 1994, 34, 1229. 14 Deane, J. M., Multivariate Pattern Recognition in Chemometrics, illustrated by case studies, ed. Brereton, R. G., Elsevier, Amsterdam, 1992, ch. 5. 15 Stone, M. J., J. R. Stat. Soc. Ser. B., 1974, 36, 111. 16 Wold, S., Technometrics, 1978, 20, 397. 17 Krzanowski, W. J., Biometrics, 1987, 44, 575. 18 Gemperline, P. J., J. Chemom., 1989, 3, 549. 19 The MathWorks Inc., MA, USA. Paper 7/03565I Received May 22, 1997 Accepted July 28, 1997 1022 Analyst, October 1997, Vol. 1
ISSN:0003-2654
DOI:10.1039/a703565i
出版商:RSC
年代:1997
数据来源: RSC
|
4. |
2(k–p)Fractional Factorial DesignviaFold Over: Application to Optimization of Novel Multicomponent Vesicular Systems |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1023-1028
Yannis L. Loukas,
Preview
|
|
摘要:
2( k2 p) Fractional Factorial Design viaFold Over: Application to Optimization of Novel Multicomponent Vesicular Systems Yannis L. Loukas† Centre for Drug Delivery Research, School of Pharmacy, University of London, 29–39 Brunswick Square, London, UK WC1N 1AX A computer-based technique based on a 2(k2p) fractional factorial design was applied for the optimization of recently described multicomponent protective liposomal formulations. These formulations contain riboflavin (vitamin B2) as a model, photosensitive drug, in addition to Oil Red O, deoxybenzone, oxybenzone and b-carotene as oil-soluble light absorbers and antioxidants incorporated into the lipid bilayer, and sulisobenzone as a water-soluble light absorber incorporated into the aqueous phase of liposomes. The presence or absence of these five different light absorbers in multilamellar liposomes containing the vitamin free or complexed with g-cyclodextrin comprised the six factors of the system, each one examined at two levels.The stabilization ratio of the vitamin and its percentage entrapment in liposomes were the two response variables of the system to be optimized. The entrapment values were calculated for all the materials, either spectrophotometrically, using second-order derivative spectrophotometry, or fluorimetrically. The response variables were predicted by multiple regression equations comprising combinations of the six formulation factors. Higher entrapment and higher protection for the drug should characterize the optimum formulation.Keywords: Experimental design; fractional factorial; fold over technique; optimization; vesicular systems Photosensitive drugs are known to degrade on exposure to light and lose their activity. Topical formulations of these drugs for medical or cosmetic reasons must be prepared in such a way as to achieve maximum stability. Known stabilizing systems in the literature include the use of certain antioxidants and light absorbers in the same preparation (solution or suspension) with the drug or the use of cyclodextrins as a complexing system which provides also moderate stability against the examined external factors (light and oxygen).We have recently proposed1 –3 a novel multicomponent stabilizing system based on liposomes, which provides high protection to sensitive drugs. This system is based generally on the liposome’s ability to accommodate both hydrophobic and hydrophilic substances into their lipid membranes and their aqueous phases, respectively. In brief, multilamellar liposomes consisting of phosphatidylcholine and cholesterol entrap the water-soluble sensitive drug, as such or in the form of a cyclodextrin complex, in the aqueous phase and one or more light absorbers either in the aqueous phase or in the lipid bilayers, depending on their characteristics (Scheme 1).In this study, riboflavin was chosen as a model photosensitive drug with rapid decomposition on exposure to light (t50% = 0.5 h).4 In order to increase the stability of the vitamin, it was entrapped as such or in the form of a g-cyclodextrin (cyclomaltooctaose) complex in dehydration–rehydration multicomponent liposomes containing one or more of the light absorbers Oil Red O, oxybenzone, deoxybenzone, sulisobenzone and the antioxidant b-carotene (Scheme 1).A liposomal formulation can be characterized as being efficient when it contains the vitamin at a high entrapment value with a higher stabilization ratio (the ratio k0/kL, where k0 and kL are the degradation rate constants of the vitamin in free form and in liposomal formulations, respectively). From the above-mentioned six factors (the presence or absence of the g-cyclodextrin cavity, Oil Red O, dioxybenzone, oxybenzone, sulisobenzone and b-carotene), each reporting a different behavior on the two responses of interest (stabilization ratio and percentage entrapment of the vitamin), it is not obvious how the optimum formulation can be achieved.In this present study, an experimental design5 can be used in order to derive valid and robust statistical significance tests for the factors examined with a minimum number of experiments. It is sufficient to consider the factors affecting the responses at two levels; for instance, the concentration of each light absorber may be set either to zero or to a constant molar ratio with the vitamin, and the vitamin may be in either free or complexed † Present address: Riga Ferreou 21, Ano Ilioupolis, 163 43 Athens, Greece.E-mail: ylloukas@compulink.gr Scheme 1 Schematic representation of the 16 runs. The scheme has been simplified by omitting the cholesterol molecules. Analyst, October 1997, Vol. 122 (1023–1027) 1023form (Table 1). The most intuitive approach to study these factors and how they affect the responses examined would be to vary the factors of interest in a 2k full factorial design (k factors at two levels), that is, to try all possible combinations.This would work well, except that the number of liposomal preparations necessary will increase exponentially. For example, the six factors examined in the present study require 26 = 64 preparations. Because each liposomal preparation is time consuming and requires costly materials, the use of a 2(k2p) fractional factorial design6 can reduce considerably the number of preparations (from 64 preparations to 8, in the present case of six factors at two levels each).In this study, the use of the original (fractional factorial) design only gave indications of significant factors rather than conclusive results and, as a consequence, the fold over design was used with all the results referring to this design. After adding the second fraction (the fold over part), the resolution of the design was increased from III to IV, isolating the factors’ main effects from any two-way interactions.Experimental Materials and Instrumentation Riboflavin (R) and g-cyclodextrin (gCD) were obtained from Aldrich Chemical (Gillingham, Dorset, UK), Oil Red O, oxybenzone, deoxybenzone, sulisobenzone, b-carotene and cholesterol from Sigma Chemical (Poole, Dorset, UK) and phosphatidylcholine (PC) from Lipid Products (Nuthill, Surrey, UK). All other reagents were of analytical-reagent grade. Doubly distilled water was used throughout.Photostability studies of R were carried out using a Blak-Ray long-wavelength (365 nm) UV lamp with 6 W rating and 460 mW cm22 dm21 intensity (Model UVGL-58, UVP, San Gabriel, CA, USA). Measurement of the degradation kinetics of R in various preparations was performed fluorimetrically (lex = 445 nm, lem 520 nm) and assays of the components entrapped into liposomes were carried out with a Compuspec UV/VIS spectrophotometer (Wallac, Turku, Finland) connected to a personal computer which can also process the spectra into their derivatives. Preparation of R : gCD Complex and Multilamellar Liposomes The inclusion complex of R with gCD was prepared according to the freeze-drying method.7 Multilamellar liposomes were prepared according to the dehydration–rehydration method with some modifications: briefly, small unilamellar vesicles (SUV) prepared from equimolar PC and cholesterol were mixed with R (free or complexed) dissolved in de-ionized water, diluted to 10 ml with water and freeze-dried overnight.The dry powder was subjected to controlled rehydration and then centrifuged at 27 300g for 20 min to separate the entrapped and non-entrapped R. The liposomal pellet containing multilamellar dehydration– rehydration vesicles (DRV) was washed three times by centrifugation in 0.1 m sodium phosphate buffer containing 0.9% NaCl (pH 7.4) (PBS) and resuspended in 4 ml of PBS before use. DRV liposomes incorporating the vitamin and the lipid-soluble components in their lipid bilayers were prepared as above with the absorbers and the lipids dissolved in chloroform, prior to the generation of the SUV precursor vesicles. When the water-soluble light absorber sulisobenzone was also entrapped into DRV liposomes, this was dissolved together with free or complexed R in the aqueous solution to be subsequently mixed with the SUV.Determination of Liposome-entrapped Materials Entrapment values for R and light absorbers were determined by measuring the concentrations of the materials in both the DRV liposomal pellets obtained and the separated pooled supernatants fluorimetrically for R and by derivative UV spectrophotometry for the rest of the components.8 The use of the second-order derivative (D2) of the spectra was found to provide both good resolution and high signal-to-noise ratios (S/N).Photostability Studies The photostabilization of R in the different DRV formulations exposed to UV radiation was calculated fluorimetrically.The assay procedure is briefly as follows: the liposomal suspension of R (3 ml) was transferred into an open quartz cuvette and was placed in front of the UV lamp. The liposomal suspension was stirred continuously in order to make it homogeneous during the study and in order for the whole suspension to be equally irradiated. At time intervals, 100 ml of the liposomal suspension was dialyzed with 200 ml of propan-2-ol and the resultant clear solution was diluted to 3 ml with water and was measured at lex = 445 nm and lem = 520 nm. 2(k2p) Fractional Factorial Design via Fold Over The six factors were examined at two levels (Table 1), calculating also how they affect the two different responses (the stabilization ratio and the percentage entrapment value). The first fraction of the experiment (eight runs) is described as a 2(623) design of resolution III.9 This means that overall k = 6 factors (the first number in parentheses) were studied; however, p = 3 of those factors (the second number in parentheses) were generated from the interactions of a full 2[(623) = 3] factorial design.As a result, the design does not give full resolution; that is, there are certain interaction effects that are confounded with (identical with) other effects. In this study, R is equal to III and therefore, no L = 1 level interactions (i.e., main effects) are confounded with any other interaction of order less than R 2 L = 3 2 1 = 2.Hence, the main effects in this design are aliased (or confounded) with the two-way interactions. For instance, in the present study, the factor 4 main effect is confounded with the interaction of factors 1 and 2. Similarly, factor 5 is confounded with the interaction of factors 1 and 3 and factor 6 with the interaction of factors 2 and 3. To clarify the main effect of factors 4, 5 and 6 from the two-factor interactions, another fraction of eight runs is added— the fold over—and the design can then be turned to a resolution of IV.The fold over fraction copies the entire design (the first eight runs) and appends it to the end, reversing all signs. In the resulting design of resolution IV, no main effects of the examined factors are confounded with any other interaction of order less than R = 4 2 1 = 3. In this design, then, the main Table 1 Low and high settings (levels) for the six factors examined Factor setting Factor name Type* Low High (1) Free–Complex Q Free† Complex† (2) Oil Red O C Out In (0.2 mmol) (3) Oxybenzone C Out In (0.2 mmol) (4) b-Carotene C Out In (0.2 mmol) (5) Sulisobenzone C Out In (0.2 mmol) (6) Deoxybenzone C Out In (0.2 mmol) * Q and C denote a qualitative factor (cannot be varied continuously) and a continuous factor (can be varied continuously), respectively.† In all the liposomal preparations, egg PC and cholesterol were kept at 1 mmol and R (free or complexed) at 0.1 mmol. 1024 Analyst, October 1997, Vol. 122effects are not confounded with two-way interactions, but only with three-way interactions. Also, no two-way interactions are confounded with any other interaction of order less than R = 4 2 2 = 2. Hence, the two-way interactions in this design are confounded with each other. In this study, the calculation of the variability of measurements (pure error), through all partial replications, was omitted in order to simplify the study.10 A statistical software package11 with experimental design capabilities was used to perform the calculations and to illustrate all the interactive graphics.The 16 formulations listed in Table 1 were evaluated in random order to nullify the effect of extraneous or nuisance variables. After the two responses (Table 2) had been collected, the system was ready for analysis. Results and Discussion Calculation of Entrapped Materials The interest for the entrapment values is concentrated not only on R but also on the light absorbers since their entrapment values affect the stability and probably the entrapment value of R.In this study, the pellets were first disrupted with propan-2-ol and the resulting solutions were calculated fluorimetrically for R and by derivative UV spectrophotometry for the light absorbers. Second, the pooled supernatants were measured for the unentrapped materials by disruption of possible small unilamellar vesicles (SUV) and solubilization of the unentrapped light absorbers with propan-2-ol.The three combined supernatants were also measured fluorimetrically and by derivative UV spectrophotometry. The entrapment values for each compound were calculated according to the equation entrapment (%) = P P S A A A + �100 where AP is the absorbance of the materials in the pellets and AS is the absorbance of non-entrapped materials in the pooled supernatants, after a dilution correction to achieve identical dilutions for both AP and AS.Specifically, in the liposomal formulation No. 1 in Table 2, where all the compounds are present, the entrapment values for all the compounds were calculated indirectly according to the equation entrapment (%) = A A A 0 0 100 - � where A0 is the absorbance of the initial concentration of materials and A denotes the absorbance of non-entrapped materials in the organic and aqueous phases (obtained on extraction of the combined supernatants with chloroform) after a dilution correction to achieve identical dilutions for both A0 and A.For the quantification, the second-order derivative was used following the ‘zero crossing method’ for the simultaneous determination of the four lipid-soluble sunscreens (the procedure is described in detail elsewhere).3,12 Determination of the Six Factors on the Two Responses Check of main effects and ANOVA results Table 2 and Scheme 1 present the 16 runs (liposomal formulations) and the calculated responses.After the calculations, the system is ready for analysis, beginning with the calculation of the main effects of the factors (the design is of resolution IV after the addition of the fold over fraction; hence the two-way interactions confound each other and they cannot be estimated from this design). In Table 3, the first numeric column for each response contains the factor’s main effect estimates, which can be interpreted as deviations of the mean of the negative settings from the mean of the positive settings for the respective factors.For example, if the vitamin is entrapped in complexed form, an improvement in stabilization ratio by 86.125 and a decrease in entrapment value by 18.75 can be expected (Table 3; negative values for the effects denote a decrease in the response value). Furthermore, the presence of the Oil Red O increases the stabilization ratio by 126.125 and does not change significantly the entrapment value of the vitamin (Table 3).The second numeric column for each response in Table 3 contains the factor’s main effect regression coefficients. These are the coefficients that could be used for the prediction of each response for new factor settings, via the linear equation ypred. = b0 + b1x1 + . . . + b6x6 where ypred. is the predicted response (stabilization ratio or percentage entrapment), x1– x6 are the settings (1–6), b1–b6 are the respective coefficients and b0 is intercept or mean.For this design, the main effect estimates do not show the standard errors, because this is a saturated design,13 where all degrees of freedom (i.e., information) are used to estimate the factors’ main effects and no independent assessment of the error variance is available. After the estimation of the factors’ main effects, the determination of the significant factors afing the dependent variables of interest (responses) is carried out by performing an ANOVA for each response separately (Tables 4 and 5).In these tables the sum of squares (SS) is the information that was used Table 2 Sixteen liposomal formulations and the estimated responses (1) Free– (2) Oil (3) Oxy- (4) (5) Suliso- (6) Deoxy- Stability Entrapment Case complex Red O benzone b-Carotene benzone benzone ratio (%) 1 Complex In In In In In 265.00 10.00 2 Complex In Out In Out Out 225.00 20.00 3 Complex Out In Out In Out 40.00 9.00 4 Complex Out Out Out Out In 30.00 21.00 5 Free In In Out Out In 85.00 48.00 6 Free In Out Out In Out 75.00 20.00 7 Free Out In In Out Out 25.00 47.00 8 Free Out Out In In In 35.00 19.00 9 Free Out Out Out Out Out 4.00 49.00 10 Free Out In Out In In 27.00 22.00 11 Free In Out In Out In 73.00 47.00 12 Free In In In In Out 83.00 19.00 13 Complex Out Out In In Out 41.00 10.00 14 Complex Out In In Out In 45.00 22.00 15 Complex In Out Out In In 235.00 9.00 16 Complex In In Out Out Out 215.00 20.00 Analyst, October 1997, Vol. 122 1025to estimate the factors’ main effects and the F-ratios (F) are the ratios of the respective mean square effect and the mean square error. Furthermore, because the factors in this study have two levels, each ANOVA main effect has one degree of freedom (df). Finally, the p values indicate when the main effect of each factor is statistically significant (p < 0.05) or marginally significant (p < 0.10). Therefore, the ANOVA data for the first response (Table 4) support the conclusion that, indeed, factors 1 and 2 significantly affect the stabilization ratio of the vitamin since they show the largest parameter estimates (Table 3); hence the settings of these two factors were most important for the resultant stabilization ratio.This means that the vitamin expresses the highest stability when in complexed form (R : gCD) is entrapped in the aqueous phase of liposomes containing at least Oil Red O in their bilayers. Similarly, from the ANOVA in Table 5 it appears that mainly factors 1 and 6 affect the percentage entrapment values of the vitamin, meaning that the liposomal formulation provides the highest entrapment values when the vitamin is in free form and the hydrophilic sulisobenzone is absent. From the above observations, the formulator can easily conclude that the presence of at least one hydrophobic light absorber in a liposomal formulation, containing the vitamin in free form, provides both better stability and a higher entrapment value.Also, the presence of the hydrophilic sulisobenzone adds little to the overall stability, decreasing at the same time the entrapment value considerably. Finally, if the Table 3 Estimated factors’ main effects and the coefficients for the predictive mathematical models (Response 1) (Response 2) Stabilization ratio Entrapped R (%) Factor Effect Coefficient Effect Coefficient Mean/intercept 93.9375 93.93750 24.5000 24.50000 (1) Free–complex 86.1250 43.06250 218.7500 29.37500 (2) Oil Red O 126.1250 63.06250 20.7500 20.37500 (3) Oxybenzone 8.3750 4.18750 0.2500 0.12500 (4) b-Carotene 10.1250 5.06250 20.5000 20.25000 (5) Sulisobenzone 12.3750 6.18750 219.5000 29.75000 (6) Deoxybenzone 10.8750 5.43750 0.5000 0.25000 Table 4 ANOVA for the stabilization ratio (r2 = 0.84) Factor SS df MS F p (1) Free–complex 29670.1 1 29670.06 13.41555 0.005213 (2) Oil Red O 63630.1 1 63630.06 28.77082 0.000454 (3) Oxybenzone 280.6 1 280.56 0.12686 0.729918 (4) b-Carotene 410.1 1 410.06 0.18541 0.676890 (5) Sulisobenzone 612.6 1 612.56 0.27697 0.611412 (6) Deoxybenzone 473.1 1 473.06 0.21390 0.654706 Error 19904.6 9 2211.62 Total SS 114980.9 15 Table 5 ANOVA for the percentage entrapment value (r2 = 0.92) Factor SS df MS F p (1) Free–complex 1406.250 1 1406.250 45.16057 0.000087 (2) Oil Red O 2.250 1 2.250 0.07226 0.794139 (3) Oxybenzone 0.250 1 0.250 0.00803 0.930566 (4) b-Carotene 1.000 1 1.000 0.03211 0.861747 (5) Sulisobenzone 1521.000 1 1521.000 48.84567 0.000064 (6) Deoxybenzone 1.000 1 1.000 0.03211 0.861747 Error 280.250 9 31.139 Total SS 3212.000 15 Fig. 1 Normal probability plots of residual values for the stabilization ratio and the percentage entrapment. Fig. 2 Pareto charts for the factors’ main effects on stabilization ratio and on percentage entrapment. 1026 Analyst, October 1997, Vol. 122main aim is the highest stability, then the vitamin can be used in complexed form ‘sacrificing’ the highest entrapment value.Diagnostic plots of residuals and Pareto charts of effects From the ANOVA tables, specific ‘models’ that include a particular number of effects for each of the two responses could be concluded (see above). Furthermore, the distribution of the residual values,10 which is the difference between the predicted values (as predicted by the current models) and the observed values, could also be examined. Fig. 1 presents the normal probability plot of residuals for each response separately, by assessing how closely the set of observed values follow a theoretical distribution.Since all values fall around a straight line, it can be concluded that they follow the normal distribution. Another useful plot for identifying the factors that are important is the Pareto chart of effects (Fig. 2). This graph will show the ANOVA effect estimates plotted against the horizontal axis. This plot will also include a vertical line to indicate the p = 0.05 threshold for statistical significance (an effect that exceeds the vertical line may be considered significant).After completing the first eight runs, the Pareto chart for the percentage entrapment showed that the main effects of sulisobenzone and free–complex were marginally significant (not shown). The addition of the fold over fraction highlighted the significance of these two factors, as shown in Fig. 2. Normal probability plot of effects Another useful, albeit more technical, summary graph is the normal probability plot of effects10 which is constructed as follows (Fig. 3). First, the effect estimates are rank ordered. From these ranks, z values (i.e., standard values of the normal distribution) can be computed based on the assumption that the estimates come from a normal distribution with a common mean. These z values are plotted on the left y-axis in the plot and the corresponding normal probabilities are shown on the right yaxis.If the actual estimates (plotted on the x-axis) are normally distributed, then all values should fall on a straight line. This plot is very useful for separating random noise from ‘real’ effects. The estimates for effects that are actually zero in the population will assume a normal distribution around a common mean of zero; effects that truly exist will be shown as outliers. In Fig. 3, the points for Oil Red O and free–complex (for the stabilization ratio) and the points for free–complex and sulisobenzone (for the percentage entrapment), the main effects appear different from the other effects.In conclusion, such multicomponent vesicular formulations may include more factors during the preparation (e.g., the lipid : cholesterol molar ratio, the presence of a second lipid and its molar ratio to the first lipid, different combinations of light absorbers, different cyclodextrins for the complexation of the drug, the binding constant value of the complexes formed, different preparation methods for the liposomes), making the interpretation of the system extremely complicated.In order for all the factors to be used at their optimum levels and for the best responses to be achieved, many experiments must be performed, including all the possible combinations between the different factors. The use of fractional factorial design, as described here, can decrease considerably the number of experiments necessary, give useful conclusions for the main effects and interactions between the factors examined and clarify complicated interactions through graphical representations.In this study, the 2(k2p) fractional design was introduced. This design identified quickly and efficiently the factors which are active and also provided some information on two-factor interactions. Sequential assembly of these designs via fold over was found to be very effective to gain additional information about the factors’ main effects that an initial experiment may identify as possibly important. References 1 Gregoriadis, G., and Loukas, Y.L., PCT Int. Pat. Appl., GB95/01258, 1995. 2 Loukas, Y. L., Jayasekera, P., and Gregoriadis, G., J. Phys. Chem., 1995, 99, 11035. 3 Loukas, Y. L., Jayasekera, P., and Gregoriadis, G., Int. J. Pharm., 1995, 117, 85. 4 Ho, A., Puri, K., and Sugden, J., Int. J. Pharm., 1994, 107, 199. 5 Montgomery, D., Design and Analysis of Experiments, Wiley, Chichester, 1991. 6 Box, G. E. P., Hunter, W. G., and Hunter, S. J., Statistics for Experimenters: an Introduction to Design, Data Analysis, and Model Building, Wiley, New York, 1978. 7 Loukas, Y. L., J. Phys. Chem. B, 1997, 101, 4863. 8 Loukas, Y. L., Analyst, 1996, 121, 279. 9 Box, G. E. P., and Draper, N. R., Empirical Model-Building and Response Surfaces, Wiley, New York, 1987. 10 Deming, S. N., and Morgan, S. L., Experimental Design: a Chemometric Approach, Elsevier, Amsterdam, 2nd edn., 1993. 11 Statistics for Windows Version 5, StatSoft, Biggleswade UK, 1995. 12 Loukas, Y. L., Vraka, V., and Gregoriadis, G., Pharm. Sci., 1996, 2, 523. 13 Ryan, T. P., Statistical Methods for Quality Improvement, Wiley, New York, 1989. Paper 7/02701J Received April 21, 1997 Accepted June 12, 1977 Fig. 3 Normal probability plot of the facotrs’ main effects on the stabilization ratio and on percentage entrapment. (The right y-axis denotes the percentage cumulative frequency, which is equal to the cumulative frequency divided by n + 1, where the cumulative frequency for a measurement denotes the measurements less than or equal to that measurement and n is the total number of measurements).Analyst, October 1997, Vol. 122 1027 2( k2 p) Fractional Factorial Design viaFold Over: Application to Optimization of Novel Multicomponent Vesicular Systems Yannis L. Loukas† Centre for Drug Delivery Research, School of Pharmacy, University of London, 29–39 Brunswick Square, London, UK WC1N 1AX A computer-based technique based on a 2(k2p) fractional factorial design was applied for the optimization of recently described multicomponent protective liposomal formulations.These formulations contain riboflavin (vitamin B2) as a model, photosensitive drug, in addition to Oil Red O, deoxybenzone, oxybenzone and b-carotene as oil-soluble light absorbers and antioxidants incorporated into the lipid bilayer, and sulisobenzone as a water-soluble light absorber incorporated into the aqueous phase of liposomes.The presence or absence of these five different light absorbers in multilamellar liposomes containing the vitamin free or complexed with g-cyclodextrin comprised the six factors of the system, each one examined at two levels. The stabilization ratio of the vitamin and its percentage entrapment in liposomes were the two response variables of the system to be optimized. The entrapment values were calculated for all the materials, either spectrophotometrically, using second-order derivative spectrophotometry, or fluorimetrically.The response variables were predicted by multiple regression equations comprising combinations of the six formulation factors. Higher entrapment and higher protection for the drug should characterize the optimum formulation. Keywords: Experimental design; fractional factorial; fold over technique; optimization; vesicular systems Photosensitive drugs are known to degrade on exposure to light and lose their activity.Topical formulations of these drugs for medical or cosmetic reasons must be prepared in such a way as to achieve maximum stability. Known stabilizing systems in the literature include the use of certain antioxidants and light absorbers in the same preparation (solution or suspension) with the drug or the use of cyclodextrins as a complexing system which provides also moderate stability against the examined external factors (light and oxygen).We have recently proposed1 –3 a novel multicomponent stabilizing system based on liposomes, which provides high protection to sensitive drugs. This system is based generally on the liposome’s ability to accommodate both hydrophobic and hydrophilic substances into their lipid membranes and their aqueous phases, respectively. In brief, multilamellar liposomes consisting of phosphatidylcholine and cholesterol entrap the water-soluble sensitive drug, as such or in the form of a cyclodextrin complex, in the aqueous phase and one or more light absorbers either in the aqueous phase or in the lipid bilayers, depending on their characteristics (Scheme 1).In this study, riboflavin was chosen as a model photosensitive drug with rapid decomposition on exposure to light (t50% = 0.5 h).4 In order to increase the stability of the vitamin, it was entrapped as such or in the form of a g-cyclodextrin (cyclomaltooctaose) complex in dehydration–rehydration multicomponent liposomes containing one or more of the light absorbers Oil Red O, oxybenzone, deoxybenzone, sulisobenzone and the antioxidant b-carotene (Scheme 1).A liposomal formulation can be characterized as being efficient when it contains the vitamin at a high entrapment value with a higher stabilization ratio (the ratio k0/kL, where k0 and kL are the degradation rate constants of the vitamin in free form and in liposomal formulations, respectively). From the above-mentioned six factors (the presence or absence of the g-cyclodextrin cavity, Oil Red O, dioxybenzone, oxybenzone, sulisobenzone and b-carotene), each reporting a different behavior on the two responses of interest (stabilization ratio and percentage entrapment of the vitamin), it is not obvious how the optimum formulation can be achieved.In this present study, an experimental design5 can be used in order to derive valid and robust statistical significance tests for the factors examined with a minimum number of experiments.It is sufficient to consider the factors affecting the responses at two levels; for instance, the concentration of each light absorber may be set either to zero or to a constant molar ratio with the vitamin, and the vitamin may be in either free or complexed † Present address: Riga Ferreou 21, Ano Ilioupolis, 163 43 Athens, Greece. E-mail: ylloukas@compulink.gr Scheme 1 Schematic representation of the 16 runs. The scheme has been simplified by omitting the cholesterol molecules.Analyst, October 1997, Vol. 122 (1023–1027) 1023form (Table 1). The most intuitive approach to study these factors and how they affect the responses examined would be to vary the factors of interest in a 2k full factorial design (k factors at two levels), that is, to try all possible combinations. This would work well, except that the number of liposomal preparations necessary will increase exponentially. For example, the six factors examined in the present study require 26 = 64 preparations.Because each liposomal preparation is time consuming and requires costly materials, the use of a 2(k2p) fractional factorial design6 can reduce considerably the number of preparations (from 64 preparations to 8, in the present case of six factors at two levels each). In this study, the use of the original (fractional factorial) design only gave indications of significant factors rather than conclusive results and, as a consequence, the fold over design was used with all the results referring to this design.After adding the second fraction (the fold over part), the resolution of the design was increased from III to IV, isolating the factors’ main effects from any two-way interactions. Experimental Materials and Instrumentation Riboflavin (R) and g-cyclodextrin (gCD) were obtained from Aldrich Chemical (Gillingham, Dorset, UK), Oil Red O, oxybenzone, deoxybenzone, sulisobenzone, b-carotene and cholesterol from Sigma Chemical (Poole, Dorset, UK) and phosphatidylcholine (PC) from Lipid Products (Nuthill, Surrey, UK).All other reagents were of analytical-reagent grade. Doubly distilled water was used throughout. Photostability studies of R were carried out using a Blak-Ray long-wavelength (365 nm) UV lamp with 6 W rating and 460 mW cm22 dm21 intensity (Model UVGL-58, UVP, San Gabriel, CA, USA). Measurement of the degradation kinetics of R in various preparations was performed fluorimetrically (lex = 445 nm, lem 520 nm) and assays of the components entrapped into liposomes were carried out with a Compuspec UV/VIS spectrophotometer (Wallac, Turku, Finland) connected to a personal computer which can also process the spectra into their derivatives. Preparation of R : gCD Complex and Multilamellar Liposomes The inclusion complex of R with gCD was prepared according to the freeze-drying method.7 Multilamellar liposomes were prepared according to the dehydration–rehydration method with some modifications: briefly, small unilamellar vesicles (SUV) prepared from equimolar PC and cholesterol were mixed with R (free or complexed) dissolved in de-ionized water, diluted to 10 ml with water and freeze-dried overnight. The dry powder was subjected to controlled rehydration and then centrifuged at 27 300g for 20 min to separate the entrapped and non-entrapped R.The liposomal pellet containing multilamellar dehydration– rehydration vesicles (DRV) was washed three times by centrifugation in 0.1 m sodium phosphate buffer containing 0.9% NaCl (pH 7.4) (PBS) and resuspended in 4 ml of PBS before use.DRV liposomes incorporating the vitamin and the lipid-soluble components in their lipid bilayers were prepared as above with the absorbers and the lipids dissolved in chloroform, prior to the generation of the SUV precursor vesicles. When the water-soluble light absorber sulisobenzone was also entrapped into DRV liposomes, this was dissolved together with free or complexed R in the aqueous solution to be subsequently mixed with the SUV.Determination of Liposome-entrapped Materials Entrapment values for R and light absorbers were determined by measuring the concentrations of the materials in both the DRV liposomal pellets obtained and the separated pooled supernatants fluorimetrically for R and by derivative UV spectrophotometry for the rest of the components.8 The use of the second-order derivative (D2) of the spectra was found to provide both good resolution and high signal-to-noise ratios (S/N).Photostability Studies The photostabilization of R in the different DRV formulations exposed to UV radiation was calculated fluorimetrically. The assay procedure is briefly as follows: the liposomal suspension of R (3 ml) was transferred into an open quartz cuvette and was placed in front of the UV lamp. The liposomal suspension was stirred continuously in order to make it homogeneous during the study and in order for the whole suspension to be equally irradiated.At time intervals, 100 ml of the liposomal suspension was dialyzed with 200 ml of propan-2-ol and the resultant clear solution was diluted to 3 ml with water and was measured at lex = 445 nm and lem = 520 nm. 2(k2p) Fractional Factorial Design via Fold Over The six factors were examined at two levels (Table 1), calculating also how they affect the two different responses (the stabilization ratio and the percentage entrapment value).The first fraction of the experiment (eight runs) is described as a 2(623) design of resolution III.9 This means that overall k = 6 factors (the first number in parentheses) were studied; however, p = 3 of those factors (the second number in parentheses) were generated from the interactions of a full 2[(623) = 3] factorial design. As a result, the design does not give full resolution; that is, there are certain interaction effects that are confounded with (identical with) other effects.In this study, R is equal to III and therefore, no L = 1 level interactions (i.e., main effects) are confounded with any other interaction of order less than R 2 L = 3 2 1 = 2. Hence, the main effects in this design are aliased (or confounded) with the two-way interactions. For instance, in the present study, the factor 4 main effect is confounded with the interaction of factors 1 and 2.Similarly, factor 5 is confounded with the interaction of factors 1 and 3 and factor 6 with the interaction of factors 2 and 3. To clarify the main effect of factors 4, 5 and 6 from the two-factor interactions, another fraction of eight runs is added— the fold over—and the design can then be turned to a resolution of IV. The fold over fraction copies the entire design (the first eight runs) and appends it to the end, reversing all signs. In the resulting design of resolution IV, no main effects of the examined factors are confounded with any other interaction of order less than R = 4 2 1 = 3.In this design, then, the main Table 1 Low and high settings (levels) for the six factors examined Factor setting Factor name Type* Low High (1) Free–Complex Q Free† Complex† (2) Oil Red O C Out In (0.2 mmol) (3) Oxybenzone C Out In (0.2 mmol) (4) b-Carotene C Out In (0.2 mmol) (5) Sulisobenzone C Out In (0.2 mmol) (6) Deoxybenzone C Out In (0.2 mmol) * Q and C denote a qualitative factor (cannot be varied continuously) and a continuous factor (can be varied continuously), respectively.† In all the liposomal preparations, egg PC and cholesterol were kept at 1 mmol and R (free or complexed) at 0.1 mmol. 1024 Analyst, October 1997, Vol. 122effects are not confounded with two-way interactions, but only with three-way interactions. Also, no two-way interactions are confounded with any other interaction of order less than R = 4 2 2 = 2.Hence, the two-way interactions in this design are confounded with each other. In this study, the calculation of the variability of measurements (pure error), through all partial replications, was omitted in order to simplify the study.10 A statistical software package11 with experimental design capabilities was used to perform the calculations and to illustrate all the interactive graphics. The 16 formulations listed in Table 1 were evaluated in random order to nullify the effect of extraneous or nuisance variables.After the two responses (Table 2) had been collected, the system was ready for analysis. Results and Discussion Calculation of Entrapped Materials The interest for the entrapment values is concentrated not only on R but also on the light absorbers since their entrapment values affect the stability and probably the entrapment value of R. In this study, the pellets were first disrupted with propan-2-ol and the resulting solutions were calculated fluorimetrically for R and by derivative UV spectrophotometry for the light absorbers.Second, the pooled supernatants were measured for the unentrapped materials by disruption of possible small unilamellar vesicles (SUV) and solubilization of the unentrapped light absorbers with propan-2-ol. The three combined supernatants were also measured fluorimetrically and by derivative UV spectrophotometry. The entrapment values for each compound were calculated according to the equation entrapment (%) = P P S A A A + �100 where AP is the absorbance of the materials in the pellets and AS is the absorbance of non-entrapped materials in the pooled supernatants, after a dilution correction to achieve identical dilutions for both AP and AS.Specifically, in the liposomal formulation No. 1 in Table 2, where all the compounds are present, the entrapment values for all the compounds were calculated indirectly according to the equation entrapment (%) = A A A 0 0 100 - � where A0 is the absorbance of the initial concentration of materials and A denotes the absorbance of non-entrapped materials in the organic and aqueous phases (obtained on extraction of the combined supernatants with chloroform) after a dilution correction to achieve identical dilutions for both A0 and A.For the quantification, the second-order derivative was used following the ‘zero crossing method’ for the simultaneous determination of the four lipid-soluble sunscreens (the procedure is described in detail elsewhere).3,12 Determination of the Six Factors on the Two Responses Check of main effects and ANOVA results Table 2 and Scheme 1 present the 16 runs (liposoformulations) and the calculated responses.After the calculations, the system is ready for analysis, beginning with the calculation of the main effects of the factors (the design is of resolution IV after the addition of the fold over fraction; hence the two-way interactions confound each other and they cannot be estimated from this design).In Table 3, the first numeric column for each response contains the factor’s main effect estimates, which can be interpreted as deviations of the mean of the negative settings from the mean of the positive settings for the respective factors. For example, if the vitamin is entrapped in complexed form, an improvement in stabilization ratio by 86.125 and a decrease in entrapment value by 18.75 can be expected (Table 3; negative values for the effects denote a decrease in the response value).Furthermore, the presence of the Oil Red O increases the stabilization ratio by 126.125 and does not change significantly the entrapment value of the vitamin (Table 3). The second numeric column for each response in Table 3 contains the factor’s main effect regression coefficients. These are the coefficients that could be used for the prediction of each response for new factor settings, via the linear equation ypred. = b0 + b1x1 + .. . + b6x6 where ypred. is the predicted response (stabilization ratio or percentage entrapment), x1– x6 are the settings (1–6), b1–b6 are the respective coefficients and b0 is intercept or mean. For this design, the main effect estimates do not show the standard errors, because this is a saturated design,13 where all degrees of freedom (i.e., information) are used to estimate the factors’ main effects and no independent assessment of the error variance is available.After the estimation of the factors’ main effects, the determination of the significant factors affecting the dependent variables of interest (responses) is carried out by performing an ANOVA for each response separately (Tables 4 and 5). In these tables the sum of squares (SS) is the information that was used Table 2 Sixteen liposomal formulations and the estimated responses (1) Free– (2) Oil (3) Oxy- (4) (5) Suliso- (6) Deoxy- Stability Entrapment Case complex Red O benzone b-Carotene benzone benzone ratio (%) 1 Complex In In In In In 265.00 10.00 2 Complex In Out In Out Out 225.00 20.00 3 Complex Out In Out In Out 40.00 9.00 4 Complex Out Out Out Out In 30.00 21.00 5 Free In In Out Out In 85.00 48.00 6 Free In Out Out In Out 75.00 20.00 7 Free Out In In Out Out 25.00 47.00 8 Free Out Out In In In 35.00 19.00 9 Free Out Out Out Out Out 4.00 49.00 10 Free Out In Out In In 27.00 22.00 11 Free In Out In Out In 73.00 47.00 12 Free In In In In Out 83.00 19.00 13 Complex Out Out In In Out 41.00 10.00 14 Complex Out In In Out In 45.00 22.00 15 Complex In Out Out In In 235.00 9.00 16 Complex In In Out Out Out 215.00 20.00 Analyst, October 1997, Vol. 122 1025to estimate the factors’ main effects and the F-ratios (F) are the ratios of the respective mean square effect and the mean square error. Furthermore, because the factors in this study have two levels, each ANOVA main effect has one degree of freedom (df).Finally, the p values indicate when the main effect of each factor is statistically significant (p < 0.05) or marginally significant (p < 0.10). Therefore, the ANOVA data for the first response (Table 4) support the conclusion that, indeed, factors 1 and 2 significantly affect the stabilization ratio of the vitamin since they show the largest parameter estimates (Table 3); hence the settings of these two factors were most important for the resultant stabilization ratio.This means that the vitamin expresses the highest stability when in complexed form (R : gCD) is entrapped in the aqueous phase of liposomes containing at least Oil Red O in their bilayers. Similarly, from the ANOVA in Table 5 it appears that mainly factors 1 and 6 affect the percentage entrapment values of the vitamin, meaning that the liposomal formulation provides the highest entrapment values when the vitamin is in free form and the hydrophilic sulisobenzone is absent.From the above observations, the formulator can easily conclude that the presence of at least one hydrophobic light absorber in a liposomal formulation, containing the vitamin in free form, provides both better stability and a higher entrapment value. Also, the presence of the hydrophilic sulisobenzone adds little to the overall stability, decreasing at the same time the entrapment value considerably. Finally, if the Table 3 Estimated factors’ main effects and the coefficients for the predictive mathematical models (Response 1) (Response 2) Stabilization ratio Entrapped R (%) Factor Effect Coefficient Effect Coefficient Mean/intercept 93.9375 93.93750 24.5000 24.50000 (1) Free–complex 86.1250 43.06250 218.7500 29.37500 (2) Oil Red O 126.1250 63.06250 20.7500 20.37500 (3) Oxybenzone 8.3750 4.18750 0.2500 0.12500 (4) b-Carotene 10.1250 5.06250 20.5000 20.25000 (5) Sulisobenzone 12.3750 6.18750 219.5000 29.75000 (6) Deoxybenzone 10.8750 5.43750 0.5000 0.25000 Table 4 ANOVA for the stabilization ratio (r2 = 0.84) Factor SS df MS F p (1) Free–complex 29670.1 1 29670.06 13.41555 0.005213 (2) Oil Red O 63630.1 1 63630.06 28.77082 0.000454 (3) Oxybenzone 280.6 1 280.56 0.12686 0.729918 (4) b-Carotene 410.1 1 410.06 0.18541 0.676890 (5) Sulisobenzone 612.6 1 612.56 0.27697 0.611412 (6) Deoxybenzone 473.1 1 473.06 0.21390 0.654706 Error 19904.6 9 2211.62 Total SS 114980.9 15 Table 5 ANOVA for the percentage entrapment value (r2 = 0.92) Factor SS df MS F p (1) Free–complex 1406.250 1 1406.250 45.16057 0.000087 (2) Oil Red O 2.250 1 2.250 0.07226 0.794139 (3) Oxybenzone 0.250 1 0.250 0.00803 0.930566 (4) b-Carotene 1.000 1 1.000 0.03211 0.861747 (5) Sulisobenzone 1521.000 1 1521.000 48.84567 0.000064 (6) Deoxybenzone 1.000 1 1.000 0.03211 0.861747 Error 280.250 9 31.139 Total SS 3212.000 15 Fig. 1 Normal probability plots of residual values for the stabilization ratio and the percentage entrapment.Fig. 2 Pareto charts for the factors’ main effects on stabilization ratio and on percentage entrapment. 1026 Analyst, October 1997, Vol. 122main aim is the highest stability, then the vitamin can be used in complexed form ‘sacrificing’ the highest entrapment value. Diagnostic plots of residuals and Pareto charts of effects From the ANOVA tables, specific ‘models’ that include a particular number of effects for each of the two responses could be concluded (see above).Furthermore, the distribution of the residual values,10 which is the difference between the predicted values (as predicted by the current models) and the observed values, could also be examined. Fig. 1 presents the normal probability plot of residuals for each response separately, by assessing how closely the set of observed values follow a theoretical distribution. Since all values fall around a straight line, it can be concluded that they follow the normal distribution.Another useful plot for identifying the factors that are important is the Pareto chart of effects (Fig. 2). This graph will show the ANOVA effect estimates plotted against the horizontal axis. This plot will also include a vertical line to indicate the p = 0.05 threshold for statistical significance (an effect that exceeds the vertical line may be considered significant). After completing the first eight runs, the Pareto chart for the percentage entrapment showed that the main effects of sulisobenzone and free–complex were marginally significant (not shown). The addition of the fold over fraction highlighted the significance of these two factors, as shown in Fig. 2. Normal probability plot of effects Another useful, albeit more technical, summary graph is the normal probability plot of effects10 which is constructed as follows (Fig. 3). First, the effect estimates are rank ordered. From these ranks, z values (i.e., standard values of the normal distribution) can be computed based on the assumption that the estimates come from a normal distribution with a common mean.These z values are plotted on the left y-axis in the plot and the corresponding normal probabilities are shown on the right yaxis. If the actual estimates (plotted on the x-axis) are normally distributed, then all values should fall on a straight line. This plot is very useful for separating random noise from ‘real’ effects. The estimates for effects that are actually zero in the population will assume a normal distribution around a common mean of zero; effects that truly exist will be shown as outliers.In Fig. 3, the points for Oil Red O and free–complex (for the stabilization ratio) and the points for free–complex and sulisobenzone (for the percentage entrapment), the main effects appear different from the other effects. In conclusion, such multicomponent vesicular formulations may include more factors during the preparation (e.g., the lipid : cholesterol molar ratio, the presence of a second lipid and its molar ratio to the first lipid, different combinations of light absorbers, different cyclodextrins for the complexation of the drug, the binding constant value of the complexes formed, different preparation methods for the liposomes), making the interpretation of the system extremely complicated.In order for all the factors to be used at their optimum levels and for the best responses to be achieved, many experiments must be performed, including all the possible combinations between the different factors. The use of fractional factorial design, as described here, can decrease considerably the number of experiments necessary, give useful conclusions for the main effects and interactions between the factors examined and clarify complicated interactions through graphical representations. In this study, the 2(k2p) fractional design was introduced. This design identified quickly and efficiently the factors which are active and also provided some information on two-factor interactions. Sequential assembly of these designs via fold over was found to be very effective to gain additional information about the factors’ main effects that an initial experiment may identify as possibly important. References 1 Gregoriadis, G., and Loukas, Y. L., PCT Int. Pat. Appl., GB95/01258, 1995. 2 Loukas, Y. L., Jayasekera, P., and Gregoriadis, G., J. Phys. Chem., 1995, 99, 11035. 3 Loukas, Y. L., Jayasekera, P., and Gregoriadis, G., Int. J. Pharm., 1995, 117, 85. 4 Ho, A., Puri, K., and Sugden, J., Int. J. Pharm., 1994, 107, 199. 5 Montgomery, D., Design and Analysis of Experiments, Wiley, Chichester, 1991. 6 Box, G. E. P., Hunter, W. G., and Hunter, S. J., Statistics for Experimenters: an Introduction to Design, Data Analysis, and Model Building, Wiley, New York, 1978. 7 Loukas, Y. L., J. Phys. Chem. B, 1997, 101, 4863. 8 Loukas, Y. L., Analyst, 1996, 121, 279. 9 Box, G. E. P., and Draper, N. R., Empirical Model-Building and Response Surfaces, Wiley, New York, 1987. 10 Deming, S. N., and Morgan, S. L., Experimental Design: a Chemometric Approach, Elsevier, Amsterdam, 2nd edn., 1993. 11 Statistics for Windows Version 5, StatSoft, Biggleswade UK, 1995. 12 Loukas, Y. L., Vraka, V., and Gregoriadis, G., Pharm. Sci., 1996, 2, 523. 13 Ryan, T. P., Statistical Methods for Quality Improvement, Wiley, New York, 1989. Paper 7/02701J Received April 21, 1997 Accepted June 12, 1977 Fig. 3 Normal probability plot of the facotrs’ main effects on the stabilization ratio and on percentage entrapment. (The right y-axis denotes the percentage cumulative frequency, which is equal to the cumulative frequency divided by n + 1, where the cumulative frequency for a measurement denotes the measurements less than or equal to that measurement and n is the total number of measurements). Analyst, October 1997, Vol. 122 1027
ISSN:0003-2654
DOI:10.1039/a702701j
出版商:RSC
年代:1997
数据来源: RSC
|
5. |
Sample Filtration as a Source of Error in the Determination of Trace Metals in Marine Waters |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1029-1032
Michael Gardner,
Preview
|
|
摘要:
Sample Filtration as a Source of Error in the Determination of Trace Metals in Marine Waters Michael Gardner* and Sean Comber WRc, Henley Road, Medmenham, Marlow, Buckinghamshire, UK SL7 2HD Adequate performance in interlaboratory proficiency tests using filtered, pre-treated or pre-digested test materials does not necessarily demonstrate that laboratories’ data are of adequate comparability. Sample handling can be an important source of error which is not examined by routine proficiency tests.This paper reports a study of sample filtration as a source error in the determination of trace metals in marine water samples. The results indicate that that current practice may need to be reviewed if important contamination errors are to be controlled. Keywords: Trace metals; quality control; contamination; filtration; sea-water The UK National Marine Analytical Quality Control (NMAQC) Scheme was established in 1991 with approximately 20 participating laboratories.1 Its primary aims are (i) to provide laboratories which submit data to the UK National Marine Monitoring Plan with a means of demonstrating that they are achieving the necessary (pre-defined) standards of analytical accuracy for trace metals and organic determinands in waters, sediments and biota and (ii) to assist laboratories to improve their accuracy, where necessary.Work undertaken by the Scheme falls into two categories: regular proficiency tests and special investigative exercises.This paper describes work of the second type in connection with the filtration of water samples for the determination of trace metals. The determination of trace constituents in environmental samples is subject to errors from a wide range of different sources. In common with many other interlaboratory quality programmes, the NMAQC scheme has concentrated on checking the accuracy of the measurement stage of the analytical process. However, it is clear that sampling and sample handling are stages where errors might arise and, therefore, where some form of check on accuracy is needed.Measures taken to address these aspects of analysis for the preparation and digestion of sediment samples have already been reported.2,3 For the determination of trace metals in water, routine checks have involved interlaboratory tests on filtered, homogenised, acidpreserved sea-water samples and standard solutions. Whilst this aspect of an external quality control programme is necessary, parts of the analytical process not addressed by such test materials are also likely to be important in determining the accuracy of trace analysis.This paper reports investigations of sample filtration as a potential source of error in the determination of trace metals in marine waters. The aim of the work was to determine the size and sources of any errors and to help to identify and promote good practice. Choice of Filtration Procedures Sample collection and sample handling (including filtration) are not always the responsibility of staff who undertake analyses in participating laboratories. This meant that there might not be a ‘routine’ procedure to be tested in all participating laboratories.Given this, the test was designed with the aim of providing an illustration that satisfactory filtration procedures could be applied. Participants were asked to choose a filtration procedure (to be examined in these tests) with two criteria in mind: (i) the procedure should be representative either of what is actually done for marine samples, or (ii) it should be sufficiently practicable to be applied to marine monitoring samples, if it proves satisfactory.Test Design Two principal types of analytical error might be introduced during filtration: contamination from the apparatus or handling procedure and adsorptive losses to the filter, filter support, etc., during filtration. The test was designed to assess both types of error.Five 1 l samples were provided for filtration: sample A was a de-ionised water sample; sample B was a filtered (0.2 mm) coastal sea-water; samples C and D were accurately measured (1 l ± 5 ml) portions of the same filtered sea-water; and sample E was a portion of sample B which had been spiked with the determinands of interest and which also contained added microcrystalline cellulose [BP grade (Merck, Poole, Dorset, UK)] at a concentration of 100 ± 5 mg l21.Participating laboratories were asked to filter samples A, B and E as received, using the method (or methods) chosen for the test. Samples C and D were to be spiked at the participating laboratory immediately before filtration by adding 500 ml of a corresponding concentrated spiking solution (also supplied) to the measured 1 l portion of water. The two spiking solutions were provided at a pH between 2.0 and 2.5 (to minimise adsorptive losses). A separate spiking solution was provided for chromium (as CrVI) because of the insolubility of lead chromate. The other metals were present in a mixed spiking solution.The spiked samples were mixed by shaking and filtered within 1 h. The portions of filtrate were put into the laboratory’s own bottles (the type normally used for filtered samples) and preserved by addition of 400 ml of 5.5 m hydrochloric acid (Aristar grade, Merck) (supplied) per 100 ml of filtrate. The labelled filtrates were returned to WRc.Participants were asked to carry out filtrations in duplicate. All samples were analysed at WRc. Determinations of Cd, Cu, Pb and Ni were made using a semi-micro chelation solvent extraction procedure.4 Chromium5 and zinc6 were determined by previously described methods. A summary of each laboratory’s filtration procedure is given in Table 1. Results Fig. 1 shows a comparison of analytical data for the filtrates supplied by each participating laboratory.The pair of results for each laboratory corresponds to the two filtrations carried out on each sample. Results are summarised for each metal, on the basis that sources of contamination or the tendency for adsorption are likely to be metal-related. Analyst, October 1997, Vol. 122 (1029–1032) 1029Discussion It is worth emphasising that the comparison shown in Fig. 1 relates to the effects of filtration (and subsequent sample storage), instead of the more familiar comparison of interlaboratory analytical performance.Differences between laboratories’ results can be regarded as arising from a combination of within-batch analytical variation and what might be termed ‘filtration errors’. Effort was made to make analytical variations as small and homogeneous as possible. Determinations for a given metal were carried out in a single batch of analysis to minimise the effects of between-batch analytical errors. The limits of detection (LOD) estimated from duplicate blank determinations (for a series of analytical batches) are as follows:7 Cd 0.05, Cr 0.01, Cu 0.03, Pb 0.04, Ni 0.05 and Zn 0.05 mg l21.Potential Filtration Problems Contamination Contamination can produce errors which could be categorised as both random and systematic. Consistent contamination will tend to produce a positive bias in metal concentrations; sporadic contamination will tend to produce differences between replicate filtrates (i.e., an increase in imprecision).Hence it is unwise to interpret the results for filtration error in distinct categories of precision and bias. Data for samples C and D were evaluated against a ‘spiked’ value calculated as the spiked concentration + a ‘background’ value determined at WRc. In all cases the background was small in relation to the added spike. For sample E, a mean of laboratories’ concentrations was used as a reference point, for reasons discussed below under Adsorption.Cadmium (Fig. 1). Results for the de-ionised water sample (A) indicated that at least two laboratories (4 and 19) are subject to serious contamination. This is confirmed for laboratory 4 by data for the unspiked sea-water (B). For samples B–E, results for laboratories 7, 8 and 16 showed some evidence of very small elevations in concentration or wider differences (than those for other laboratories) between replicate filtrations. The results showing excellent agreement within replicate filtrations and with one another including those for laboratories 3, 11, 12 and 17. Chromium.The results for the de-ionised water and the blank sea-water were too close to the limit of detection of the analytical method to allow clear conclusions to be drawn concerning the smaller observed variations. However, there was evidence of contamination of between 0.2 and 1 mg l21 for laboratory 19. At the higher concentration of samples C, D and E there is close agreement between concentrations measured in laboratories’ filtrates.Copper (Fig. 2). Several laboratories’ results indicated sources of copper contamination. A large effect was evident for laboratory 4 (contamination of > 1 mg/l21); other laboratories Table 1 Summary of filtration procedures Material in Laboratory contact with No. Apparatus* sample* Filter type 3 P/e funnels, P/c P/e: bottle, funnel Nuclepore, 47mm Swinloc P/c: membrane, silicone sealing ring 4 P/c Sartorius P/c: bottle, filter, Nuclepore membrane 7 Sartorius 500 ml P/c: glass Cellulose nitrate, measuring cylinder 50mm 8 P/c Swinloc P/c: bottle, filter, Nuclepore membrane 9 Becton Dickenson P/p Cellulose acetate, 60 ml syringe 47mm filter? 11 Becton Dickenson P/p and rubber: Cellulose acetate, 50 ml syringe syringe Acetate: 47mm Sartolab P filter filter holder 12 Gelman P/e: bottle Gelman, 633 cm Glass-fibre: filter ?syringe 15 Becton Dickenson — Cellulose nitrate, syringe, Sartolab 50mm P, 120 sample pot 17 P/s Nalgene P/s: bottle, filter, Cellulose nitrate, membrane 47mm 19 Millipore PTFE PTFE Whatman, 142mm 1.5 l apparatus cellulose nitrate Pore size/ Bottle mm Preparation† Storage preparation† 0.4 Overnight in 1% Prepared prior Detergent rinse, HNO3 50 ml 1% to use 1% HNO3 for 1 HNO3 50 ml DIW week, 0.1% HNO3 for 1 week, DIW before use 0.4 50 ml 5% HNO3 Prepared prior As filter except no 50 ml DIW to use sample rinse 50 ml sample (discarded) 0.45 Syringe and filter Prepared prior DIW, 10% HCl, rinsed with DIW to use 10% HNO3, 0.1% 10% HCl and HNO3 until used DIW 0.4 50 ml 1% HNO3 Prepared prior As filter 50 ml DIW to use 50 ml sample (discarded) 0.45 Syringe and filter None rinsed with 30 ml 1% HNO3, 30 ml sample 0.45 Rinsed once with Rinsed with sample sample 0.45 None Sealed plastic HNO3 wash, DIW bag 0.45 Syringe and filter — None rinsed with sample 0.45 10% HNO3, Prepared prior As filter DIW, sample to use (discarded) 0.45 10% HNO3 for Prepared prior Detergent, 24 h 2 h, DIW rinse to use 10% HNO3, DIW rinse * P/c = polycarbonate, P/e = polyethylene, P/p = polypropylene, P/s = polysulfone † DIW = de-ionised water. 1030 Analyst, October 1997, Vol. 122(8, 11 and 19) were subject to smaller biases which were mainly evident at the blank level. The data showing excellent agreement within replicate filtrations and with one another including those for laboratories 3, 7, 9, 12, 15 and 17. Lead. The majority of results for lead in blank filtrates were lower than the 0.04 mg l21 reporting threshold, indicating good control over contamination.Good comparability of filtrates was observed at higher concentrations (samples C, D and E). The range of results obtained was narrow and close to the expected spiked values. Nickel. Excellent comparability and accuracy, with respect to the spiked value, were achieved for the spiked samples. Only one serious bias was observed at the blank level, in laboratory 19 (this may arise from the same source of contamination as that already noted for chromium).Zinc (Fig. 3). The data for zinc showed the most obvious instances of contamination bias. Replicate results tended to be similar but were often displaced with respect to filtrate data from other laboratories. At both the blank and at higher levels, laboratories 4, 8 and 9 showed clear bias. In the first two laboratories this bias was small (1–2 mg l21); in the last, bias was around 10 mg l21.Blank data from three other laboratories (7, 11 and 15) might be subject to small bias of approximately 1 mg l21. This was not borne out by the higher concentration samples (except in the case of laboratory 15). Adsorption There is no clear evidence of a bias between the ‘spiked’ value and observed values for samples C and D, for any of the metals. This indicates that adsorptive losses during filtration are probably not important. Sample E, containing 100 mg l21 of microcrystalline cellulose, was included with two aims in mind: (i) To test the hypothesis of increased adsorption for increased contact time during filtration: if adsorptive losses had been significant for filtration of samples C and D (no suspended solids, so a relatively short filtration time), it might be expected that the longer contact time for a sample containing solid matter would result in larger adsorption. Sample E could have provided a guide to the additional adsorptive losses which might apply to real samples.The test data show that adsorption during filtration was not important (or at least was not detectable in relation to the observed random errors). In freshwater samples, such adsorption can lead to significant losses of trace metals.8 The lack of adsorption in sea-water might be ascribed to the high ionic strength of the matrix and consequent competition for adsorption sites from major ion constituents of the sample (e.g., Mg2+ and Ca2+).(ii) To provide a check on adsorption in the sample bottle: sample E was prepared in bulk (with solids) and spiked at the same concentration as sample D. The bulk sample was equilibrated for 1 week to allow partitioning between the solid and dissolved phases. This was mixed thoroughly and then dispensed into the sample bottles. Markedly lower concentrations in the dissolved metal concentration of sample E in relation to sample D (given that samples D and C were not subject to large negative bias) might be taken as evidence of adsorptive losses of determinand to the solids or to the inner surface of the sample bottle.Fig. 1–3 illustrate the important comparisons between the mean of laboratories’ filtrate concentrations for sample E and the spiked + residual value for sample D (the two should be the Fig. 1 Comparison of metal concentrations in participants’ filtrates: cadmium. +, LOD; –5–, spike; and –x–, mean of laboratories. Fig. 2 Comparison of metal concentrations in participants’ filtrates: copper.Symbols as in Fig. 1. Analyst, October 1997, Vol. 122 1031same, if there is no adsorption and other sources of bias are small). The values were found to be similar for Cd, Cr, Ni and Zn, if the instances of contamination (applicable to both samples, but not necessarily consistent) are ignored. However, for Cu the concentration in sample D is markedly higher than that in sample E [3.7 mg l21 (ignoring data from laboratory 4) versus 2.7 mg l21]. This is statistically significant (p = 0.05), indicating losses of copper.It is likely that adsorption on the cellulose was more important than loss to the bottle, given the affinity of the metal for oxygen-containing organic matter. The nominal surface areas of the solids and the bottle were similar, estimated to be approximately 500 cm2 each. The results for Pb also show a small decrease between samples D and E. Conclusions The test demonstrates that sample filtration can be a source of important error in the determination of trace metals in sea-water samples.It is worth noting that the laboratories taking part in this exercise are relatively experienced, both in trace analysis and in the application of quality control techniques. Given this, it might be assumed that sample handling practices used in these laboratories are typical of or better than the current norm. The finding that several approaches are not suitable indicates that sample handling may require more attention in many laboratories. The choice of filtration procedure is critical.It is accepted widely that acid-washed plastic (e.g., low-density polyethylene, polypropylene, polycarbonate) is suitable for use at metal concentrations found in coastal waters. Other materials, in particular rubber and high-density plastics (e.g., PVC and highdensity polyethylene), can be sources of contamination which are difficult to eliminate, even by acid washing.The use of plastic ware and filter materials ‘as received from the manufacturer’, i.e., without acid washing and rinsing with de-ionised water, is clearly inadvisable. Filtration procedures which, on the evidence of their description, appear to be appropriate to the task in hand have also been shown to be subject to serious contamination. This is a further illustration of the principle that the method alone does not determine performance; its mode of use is also crucial.Adsorption on the filtration apparatus appears not to be such an important source of error as contamination, for marine samples. It is likely that the high ionic strength of sea-water protects against the adsorptive losses encountered during filtration of some freshwater samples. The test has confirmed the conclusions of other work on adsorption during sample storage, i.e., that of the six metals of interest, copper and lead are likely to be most prone to adsorptive interactions with suspended matter. Recommendations A programme of periodic checks on filtration blanks is recommended as the only means of establishing control over contamination.Unless suitable test data are obtained to demonstrate fitness for purpose, it cannot be assumed that contamination during sample filtration is adequately controlled. Continuing checks of filtration blanks are also a necessary illustration that control is maintained during routine analysis.The analysis of spiked (pre-filtered) samples, although not as important as the use of blanks, can be used as an initial confirmation that adsorption to the filtration apparatus is not responsible for important losses of the determinand of interest. The authors acknowledge the cooperation and assistance of the following organisations which are participants in the NMAQC programme: the Environment Agency of England and Wales, the Scottish Environment Protection Agency, the Ministry of Agriculture, Fisheries and Food, the Scottish Office Environment, Agriculture and Fisheries Department, the Department of the Environment (Northern Ireland) and the Department of Agriculture (Northern Ireland).References 1 Dobson, J. E., Gardner, M. J., Griffiths, A. H., Jessep, M. A., and Ravenscroft, J. E., Accred. Quality Assur., in the press. 2 Cook, J. M., Gardner, M. J., Griffiths, A. H., Jessep, M. A., Ravenscroft, J. E., and Yates, R., Mar. Pollut.Bull., in the press. 3 Dixon, E. M., Gardner, M. J., and Hudson, R., Chemosphere, in the press. 4 Apte, S. C., and Gunn, A. M., Anal. Chim. Acta., 1987, 193, 147. 5 Gardner, M. J., and Ravenscroft, J. E., Fresenius’ J. Anal. Chem., 1996, 354, 602. 6 Bird, P., Comber, S. D. W., Gardner, M. J., and Ravenscroft, J. E., Sci. Total Environ., 1996, 181, 257. 7 Analytical Methods Committee, Analyst, 1987, 112, 199. 8 Gardner, M. J., and Hunt, D. T. E., Analyst, 1981, 106, 471. Paper 7/04527A Received June 27, 1997 Accepted July 16, 1997 Fig. 3 Comparison of metal concentrations in participants’ filtrates: zinc. Symbols as in Fig. 1. 1032 Analyst, October 1997, Vol. 122 Sample Filtration as a Source of Error in the Determination of Trace Metals in Marine Waters Michael Gardner* and Sean Comber WRc, Henley Road, Medmenham, Marlow, Buckinghamshire, UK SL7 2HD Adequate performance in interlaboratory proficiency tests using filtered, pre-treated or pre-digested test materials does not necessarily demonstrate that laboratories’ data are of adequate comparability.Sample handling can be an important source of error which is not examined by routine proficiency tests. This paper reports a study of sample filtration as a source error in the determination of trace metals in marine water samples. The results indicate that that current practice may need to be reviewed if important contamination errors are to be controlled. Keywords: Trace metals; quality control; contamination; filtration; sea-water The UK National Marine Analytical Quality Control (NMAQC) Scheme was established in 1991 with approximately 20 participating laboratories.1 Its primary aims are (i) to provide laboratories which submit data to the UK National Marine Monitoring Plan with a means of demonstrating that they are achieving the necessary (pre-defined) standards of analytical accuracy for trace metals and organic determinands in waters, sediments and biota and (ii) to assist laboratories to improve their accuracy, where necessary.Work undertaken by the Scheme falls into two categories: regular proficiency tests and special investigative exercises. This paper describes work of the second type in connection with the filtration of water samples for the determination of trace metals. The determination of trace constituents in environmental samples is subject to errors from a wide range of different sources.In common with many other interlaboratory quality programmes, the NMAQC scheme has concentrated on checking the accuracy of the measurement stage of the analytical process. However, it is clear that sampling and sample handling are stages where errors might arise and, therefore, where some form of check on accuracy is needed. Measures taken to address these aspects of analysis for the preparation and digestion of sediment samples have already been reported.2,3 For the determination of trace metals in water, routine checks have involved interlaboratory tests on filtered, homogenised, acidpreserved sea-water samples and standard solutions.Whilst this aspect of an external quality control programme is necessary, parts of the analytical process not addressed by such test materials are also likely to be important in determining the accuracy of trace analysis. This paper reports investigations of sample filtration as a potential source of error in the determination of trace metals in marine waters.The aim of the work was to determine the size and sources of any errors and to help to identify and promote good practice. Choice of Filtration Procedures Sample collection and sample handling (including filtration) are not always the responsibility of staff who undertake analyses in participating laboratories. This meant that there might not be a ‘routine’ procedure to be tested in all participating laboratories. Given this, the test was designed with the aim of providing an illustration that satisfactory filtration procedures could be applied.Participants were asked to choose a filtration procedure (to be examined in these tests) with two criteria in mind: (i) the procedure should be representative either of what is actually done for marine samples, or (ii) it should be sufficiently practicable to be applied to marine monitoring samples, if it proves satisfactory. Test Design Two principal types of analytical error might be introduced during filtration: contamination from the apparatus or handling procedure and adsorptive losses to the filter, filter support, etc., during filtration.The test was designed to assess both types of error. Five 1 l samples were provided for filtration: sample A was a de-ionised water sample; sample B was a filtered (0.2 mm) coastal sea-water; samples C and D were accurately measured (1 l ± 5 ml) portions of the same filtered sea-water; and sample E was a portion of sample B which had been spiked with the determinands of interest and which also contained added microcrystalline cellulose [BP grade (Merck, Poole, Dorset, UK)] at a concentration of 100 ± 5 mg l21.Participating laboratories were asked to filter samples A, B and E as received, using the method (or methods) chosen for the test. Samples C and D were to be spiked at the participating laboratory immediately before filtration by adding 500 ml of a corresponding concentrated spiking solution (also supplied) to the measured 1 l portion of water.The two spiking solutions were provided at a pH between 2.0 and 2.5 (to minimise adsorptive losses). A separate spiking solution was provided for chromium (as CrVI) because of the insolubility of lead chromate. The other metals were present in a mixed spiking solution. The spiked samples were mixed by shaking and filtered within 1 h. The portions of filtrate were put into the laboratory’s own bottles (the type normally used for filtered samples) and preserved by addition of 400 ml of 5.5 m hydrochloric acid (Aristar grade, Merck) (supplied) per 100 ml of filtrate.The labelled filtrates were returned to WRc. Participants were asked to carry out filtrations in duplicate. All samples were analysed at WRc. Determinations of Cd, Cu, Pb and Ni were made using a semi-micro chelation solvent extraction procedure.4 Chromium5 and zinc6 were determined by previously described methods.A summary of each laboratory’s filtration procedure is given in Table 1. Results Fig. 1 shows a comparison of analytical data for the filtrates supplied by each participating laboratory. The pair of results for each laboratory corresponds to the two filtrations carried out on each sample. Results are summarised for each metal, on the basis that sources of contamination or the tendency for adsorption are likely to be metal-related. Analyst, October 1997, Vol. 122 (1029–1032) 1029Discussion It is worth emphasising that the comparison shown in Fig. 1 relates to the effects of filtration (and subsequent sample storage), instead of the more familiar comparison of interlaboratory analytical performance. Differences between laboratories’ results can be regarded as arising from a combination of within-batch analytical variation and what might be termed ‘filtration errors’. Effort was made to make analytical variations as small and homogeneous as possible.Determinations for a given metal were carried out in a single batch of analysis to minimise the effects of between-batch analytical errors. The limits of detection (LOD) estimated from duplicate blank determinations (for a series of analytical batches) are as follows:7 Cd 0.05, Cr 0.01, Cu 0.03, Pb 0.04, Ni 0.05 and Zn 0.05 mg l21. Potential Filtration Problems Contamination Contamination can produce errors which could be categorised as both random and systematic.Consistent contamination will tend to produce a positive bias in metal concentrations; sporadic contamination will tend to produce differences between replicate filtrates (i.e., an increase in imprecision). Hence it is unwise to interpret the results for filtration error in distinct categories of precision and bias. Data for samples C and D were evaluated against a ‘spiked’ value calculated as the spiked concentration + a ‘background’ value determined at WRc. In all cases the background was small in relation to the added spike.For sample E, a mean of laboratories’ concentrations was used as a reference point, for reasons discussed below under Adsorption. Cadmium (Fig. 1). Results for the de-ionised water sample (A) indicated that at least two laboratories (4 and 19) are subject to serious contamination. This is confirmed for laboratory 4 by data for the unspiked sea-water (B). For samples B–E, results for laboratories 7, 8 and 16 showed some evidence of very small elevations in concentration or wider differences (than those for other laboratories) between replicate filtrations.The results showing excellent agreement within replicate filtrations and with one another including those for laboratories 3, 11, 12 and 17. Chromium. The results for the de-ionised water and the blank sea-water were too close to the limit of detection of the analytical method to allow clear conclusions to be drawn concerning the smaller observed variations.However, there was evidence of contamination of between 0.2 and 1 mg l21 for laboratory 19. At the higher concentration of samples C, D and E there is close agreement between concentrations measured in laboratories’ filtrates. Copper (Fig. 2). Several laboratories’ results indicated sources of copper contamination. A large effect was evident for laboratory 4 (contamination of > 1 mg/l21); other laboratories Table 1 Summary of filtration procedures Material in Laboratory contact with No.Apparatus* sample* Filter type 3 P/e funnels, P/c P/e: bottle, funnel Nuclepore, 47mm Swinloc P/c: membrane, silicone sealing ring 4 P/c Sartorius P/c: bottle, filter, Nuclepore membrane 7 Sartorius 500 ml P/c: glass Cellulose nitrate, measuring cylinder 50mm 8 P/c Swinloc P/c: bottle, filter, Nuclepore membrane 9 Becton Dickenson P/p Cellulose acetate, 60 ml syringe 47mm filter? 11 Becton Dickenson P/p and rubber: Cellulose acetate, 50 ml syringe syringe Acetate: 47mm Sartolab P filter filter holder 12 Gelman P/e: bottle Gelman, 633 cm Glass-fibre: filter ?syringe 15 Becton Dickenson — Cellulose nitrate, syringe, Sartolab 50mm P, 120 sample pot 17 P/s Nalgene P/s: bottle, filter, Cellulose nitrate, membrane 47mm 19 Millipore PTFE PTFE Whatman, 142mm 1.5 l apparatus cellulose nitrate Pore size/ Bottle mm Preparation† Storage preparation† 0.4 Overnight in 1% Prepared prior Detergent rinse, HNO3 50 ml 1% to use 1% HNO3 for 1 HNO3 50 ml DIW week, 0.1% HNO3 for 1 week, DIW before use 0.4 50 ml 5% HNO3 Prepared prior As filter except no 50 ml DIW to use sample rinse 50 ml sample (discarded) 0.45 Syringe and filter Prepared prior DIW, 10% HCl, rinsed with DIW to use 10% HNO3, 0.1% 10% HCl and HNO3 until used DIW 0.4 50 ml 1% HNO3 Prepared prior As filter 50 ml DIW to use 50 ml sample (discarded) 0.45 Syringe and filter None rinsed with 30 ml 1% HNO3, 30 ml sample 0.45 Rinsed once with Rinsed with sample sample 0.45 None Sealed plastic HNO3 wash, DIW bag 0.45 Syringe and filter — None rinsed with sample 0.45 10% HNO3, Prepared prior As filter DIW, sample to use (discarded) 0.45 10% HNO3 for Prepared prior Detergent, 24 h 2 h, DIW rinse to use 10% HNO3, DIW rinse * P/c = polycarbonate, P/e = polyethylene, P/p = polypropylene, P/s = polysulfone † DIW = de-ionised water. 1030 Analyst, October 1997, Vol. 122(8, 11 and 19) were subject to smaller biases which were mainly evident at the blank level.The data showing excellent agreement within replicate filtrations and with one another including those for laboratories 3, 7, 9, 12, 15 and 17. Lead. The majority of results for lead in blank filtrates were lower than the 0.04 mg l21 reporting threshold, indicating good control over contamination. Good comparability of filtrates was observed at higher concentrations (samples C, D and E). The range of results obtained was narrow and close to the expected spiked values.Nickel. Excellent comparability and accuracy, with respect to the spiked value, were achieved for the spiked samples. Only one serious bias was observed at the blank level, in laboratory 19 (this may arise from the same source of contamination as that already noted for chromium). Zinc (Fig. 3). The data for zinc showed the most obvious instances of contamination bias. Replicate results tended to be similar but were often displaced with respect to filtrate data from other laboratories.At both the blank and at higher levels, laboratories 4, 8 and 9 showed clear bias. In the first two laboratories this bias was small (1–2 mg l21); in the last, bias was around 10 mg l21. Blank data from three other laboratories (7, 11 and 15) might be subject to small bias of approximately 1 mg l21. This was not borne out by the higher concentration samples (except in the case of laboratory 15). Adsorption There is no clear evidence of a bias between the ‘spiked’ value and observed values for samples C and D, for any of the metals.This indicates that adsorptive losses during filtration are probably not important. Sample E, containing 100 mg l21 of microcrystalline cellulose, was included with two aims in mind: (i) To test the hypothesis of increased adsorption for increased contact time during filtration: if adsorptive losses had been significant for filtration of samples C and D (no suspended solids, so a relatively short filtration time), it might be expected that the longer contact time for a sample containing solid matter would result in larger adsorption.Sample E could have provided a guide to the additional adsorptive losses which might apply to real samples. The test data show that adsorption during filtration was not important (or at least was not detectable in relation to the observed random errors). In freshwater samples, such adsorption can lead to significant losses of trace metals.8 The lack of adsorption in sea-water might be ascribed to the high ionic strength of the matrix and consequent competition for adsorption sites from major ion constituents of the sample (e.g., Mg2+ and Ca2+).(ii) To provide a check on adsorption in the sample bottle: sample E was prepared in bulk (with solids) and spiked at the same concentration as sample D. The bulk sample was equilibrated for 1 week to allow partitioning between the solid and dissolved phases. This was mixed thoroughly and then dispensed into the sample bottles.Markedly lower concentrations in the dissolved metal concentration of sample E in relation to sample D (given that samples D and C were not subject to large negative bias) might be taken as evidence of adsorptive losses of determinand to the solids or to the inner surface of the sample bottle. Fig. 1–3 illustrate the important comparisons between the mean of laboratories’ filtrate concentrations for sample E and the spiked + residual value for sample D (the two should be the Fig. 1 Comparison of metal concentrations in participants’ filtrates: cadmium. +, LOD; –5–, spike; and –x–, mean of laboratories. Fig. 2 Comparison of metal concentrations in participants’ filtrates: copper. Symbols as in Fig. 1. Analyst, October 1997, Vol. 122 1031same, if there is no adsorption and other sources of bias are small). The values were found to be similar for Cd, Cr, Ni and Zn, if the instances of contamination (applicable to both samples, but not necessarily consistent) are ignored.However, for Cu the concentration in sample D is markedly higher than that in sample E [3.7 mg l21 (ignoring data from laboratory 4) versus 2.7 mg l21]. This is statistically significant (p = 0.05), indicating losses of copper. It is likely that adsorption on the cellulose was more important than loss to the bottle, given the affinity of the metal for oxygen-containing organic matter. The nominal surface areas of the solids and the bottle were similar, estimated to be approximately 500 cm2 each.The results for Pb also show a small decrease between samples D and E. Conclusions The test demonstrates that sample filtration can be a source of important error in the determination of trace metals in sea-water samples. It is worth noting that the laboratories taking part in this exercise are relatively experienced, both in trace analysis and in the application of quality control techniques. Given this, it might be assumed that sample handling practices used in these laboratories are typical of or better than the current norm.The finding that several approaches are not suitable indicates that sample handling may require more attention in many laboratories. The choice of filtration procedure is critical. It is accepted widely that acid-washed plastic (e.g., low-density polyethylene, polypropylene, polycarbonate) is suitable for use at metal concentrations found in coastal waters.Other materials, in particular rubber and high-density plastics (e.g., PVC and highdensity polyethylene), can be sources of contamination which are difficult to eliminate, even by acid washing. The use of plastic ware and filter materials ‘as received from the manufacturer’, i.e., without acid washing and rinsing with de-ionised water, is clearly inadvisable. Filtration procedures which, on the evidence of their description, appear to be appropriate to the task in hand have also been shown to be subject to serious contamination.This is a further illustration of the principle that the method alone does not determine performance; its mode of use is also crucial. Adsorption on the filtration apparatus appears not to be such an important source of error as contamination, for marine samples. It is likely that the high ionic strength of sea-water protects against the adsorptive losses encountered during filtration of some freshwater samples.The test has confirmed the conclusions of other work on adsorption during sample storage, i.e., that of the six metals of interest, copper and lead are likely to be most prone to adsorptive interactions with suspended matter. Recommendations A programme of periodic checks on filtration blanks is recommended as the only means of establishing control over contamination. Unless suitable test data are obtained to demonstrate fitness for purpose, it cannot be assumed that contamination during sample filtration is adequately controlled. Continuing checks of filtration blanks are also a necessary illustration that control is maintained during routine analysis. The analysis of spiked (pre-filtered) samples, although not as important as the use of blanks, can be used as an initial confirmation that adsorption to the filtration apparatus is not responsible for important losses of the determinand of interest. The authors acknowledge the cooperation and assistance of the following organisations which are participants in the NMAQC programme: the Environment Agency of England and Wales, the Scottish Environment Protection Agency, the Ministry of Agriculture, Fisheries and Food, the Scottish Office Environment, Agriculture and Fisheries Department, the Department of the Environment (Northern Ireland) and the Department of Agriculture (Northern Ireland). References 1 Dobson, J. E., Gardner, M. J., Griffiths, A. H., Jessep, M. A., and Ravenscroft, J. E., Accred. Quality Assur., in the press. 2 Cook, J. M., Gardner, M. J., Griffiths, A. H., Jessep, M. A., Ravenscroft, J. E., and Yates, R., Mar. Pollut. Bull., in the press. 3 Dixon, E. M., Gardner, M. J., and Hudson, R., Chemosphere, in the press. 4 Apte, S. C., and Gunn, A. M., Anal. Chim. Acta., 1987, 193, 147. 5 Gardner, M. J., and Ravenscroft, J. E., Fresenius’ J. Anal. Chem., 1996, 354, 602. 6 Bird, P., Comber, S. D. W., Gardner, M. J., and Ravenscroft, J. E., Sci. Total Environ., 1996, 181, 257. 7 Analytical Methods Committee, Analyst, 1987, 112, 199. 8 Gardner, M. J., and Hunt, D. T. E., Analyst, 1981, 106, 471. Paper 7/04527A Received June 27, 1997 Accepted July 16, 1997 Fig. 3 Comparison of metal concentrations in participants’ filtrates: zinc. Symbols as in Fig. 1. 1032 Analyst, October 1997, Vol. 122
ISSN:0003-2654
DOI:10.1039/a704527a
出版商:RSC
年代:1997
数据来源: RSC
|
6. |
Simultaneous Determination of Phosphate and Silicate in Waste Water by Sequential Injection Analysis |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1033-1038
F. Mas-Torres,
Preview
|
|
摘要:
Simultaneous Determination of Phosphate and Silicate in Waste Water by Sequential Injection Analysis F. Mas-Torres, A. Mun�oz, J. M. Estela and V. Cerd`a* Department of Chemistry, University of Balearic Islands, 07071-Palma de Mallorca, Spain A sequential injection analysis system for the simultaneous determination of phosphate and silicate in waste water is proposed. The method is based on the formation of yellow vanadomolybdophosphate and molybdosilicate, respectively, in addition to the use of large sample volumes.The mutual interference between both analytes was eliminated by selection of the appropriate acidity and by sample segmentation with oxalic acid. The calibration graph for phosphate and silicate is linear up to 12 mg l21 P and 30 mg l21 Si, respectively. The detection limits are 0.2 mg l21 P and 0.9 mg l21 Si. The method provides a throughput of 23 samples h21 with a relative standard deviation < 1.4% for phosphate and < 4% for silicate.The method was found to be suitable for the determination of these species in waste water samples. Keywords: Sequential injection analysis; simultaneous determination; phosphate; silicate; waste water The introduction of sequential injection analysis (SIA) by R°uöziöcka, and Marshall1 responded to the difficulties of implementing flow injection analysis (FIA) on an industrial scale. SIA has opened up new possibilities in flow techniques. Among the acknowledged advantages of SIA are the greater versatility of the manifold, thus avoiding the physical reconfiguration required in FIA systems when changing the chemical determination, and the considerable saving of reagents since a continuous consumption is not involved in SIA systems. In essence, SIA consists in the sequential aspiration of welldefined sample and reagent zones into a holding coil by means of a multi-position (selection) valve.The flow is then reversed and the entire contents of the holding coil are propelled towards the detector.Consequently, there is a considerable decrease in the sampling frequency in relation to the comparable FIA method. However, the favourable aspects of FIA provide a good source for the development of automated monitors. During the aspiration and propelling steps an interpenetration zone, necessary for the required reaction to be accomplished; is generated due to axial and radial dispersion which will depend on the volumes and concentrations of the reagents used together with the geometric conditions of the system.In order to provide sufficient robustness for industrial control process, the proponents of the technique recommend the use of sinusoidal flow piston pumps. Nevertheless, other more available options have been suggested such as the use of peristaltic pumps2 or titration burettes.3 A consequence of the increasing control demanded either in industrial processes or in the environmental field is the increase in the number of parameters to be determined for a particular sample, which has, therefore, led to greater interest in the development of multiparametric automated methods.FIA, in addition to coupling with other techniques such as HPLC or ICP-AES, offers several possibilities regarding the determination of two or more parameters. The most common design of multicomponent systems is based on the use of several detectors connected in parallel or in series.Other less used alternatives are those based on spectral resolution by means of multicomponent techniques and those based on the way in which the sample is introduced into the system. The latter possibility has given rise to the ‘sandwich’ technique, which consists in introducing a sample zone between two different reagent solutions.4,5 The sample volume should be sufficiently large to obtain two clearly differentiated peaks corresponding to the two sample/reagent mixing zones.Both possibilities can be implemented in an SIA system. G�omez et al.6 used multicomponent SIA for the simultaneous determination of calcium and magnesium in waters. On the other hand, Estela et al.7 carried out a study on the feasibility of the use of large sample volumes in SIA. Other workers, as has been reviewed by Robards et al.,8 have proposed several FIA methods for the simultaneous determination of phosphate and silicate with on-line column separation, 9–11 methods based on the different formation rates of the corresponding molybdate heteropolyacid12–14 or by using intermittent flows.15 In the present work, an SIA method is proposed by using large sample volumes for the simultaneous spectrophotometric determination of phosphate and silicate.The mutual interference was eliminated by adjusting the acid concentration and by segmenting the sample by the addition of oxalic acid. The established method was applied to the analysis of waste waters.Experimental Reagents All reagents were prepared from analytical-reagent grade chemicals (Merck, Darmstadt, Germany) and stored in polyethylene bottles, except for phosphate solutions which were stored in glass containers. Stock standard solutions of phosphate (50 mg l21 P) and silicate (1000 mg l21 Si) were prepared from KH2PO4 and Na2SiO3·5H2O, respectively. Working phosphate and silicate solutions were prepared daily by suitable dilution of the stock solutions. A 0.5 m ammonium molybdate solution was prepared from (NH4)6Mo7O24·4H2O.This solution was diluted 5-fold before use as a reagent solution (R1) for the determination of silicate. For phosphate determination, the vanadomolybdate reagent (R2) was prepared to contain 0.035 m ammonium molybdate and 3 31023 m ammonium vanadate in 0.65 m HCl. The carrier was 0.05 m HCl. A 5.6% oxalic acid solution (R3) was prepared by dissolving the solid in distilled water. Hydrochloric acid was added to the solution to obtain a final concentration of 0.18 m.A 0.002% Bromothymol Blue (BTB) solution in 0.01 m sodium tetraborate was used in preliminary studies of the system. Apparatus The sequential injection system depicted in Fig. 1(a) was constructed from the following components: a Crison (Alella, Spain) 738 titration autoburette with adjustable dispensing rate, Analyst, October 1997, Vol. 122 (1033–1038) 1033a laboratory-built electromechanically controlled Rheodyne (Cotati, CA, USA) 5011 six-port valve, a Gilson (Villiers le Bel, France) Sample Changer-22 autosampler and a HP-8452A diode-array spectrophotometer (Hewlett-Packard, Waldbronn, Germany) equipped with a 10 mm Hellma (Jamaica, NY, USA) flow-through cell (volume 18 ml).Data acquisition and device control were achieved using a PC-486 compatible computer. All the tubing connecting the different units was made of PTFE. The holding coil (HC) was 300 cm 3 1.5 mm id. All the remaining tubing was 0.86 mm id.The lengths of reaction coil 1 (RC1) and reaction coil 2 (RC2) were 3.5 m and 130 cm, respectively. Procedure Ports 1–6 were connected to R1, sample, R3, R2, detector and waste, respectively. The analytical procedures of the SIA system were controlled by DARRAY† version 2.0 software developed by the authors’ group. The protocol sequence is listed in Table 1 and the zone sequence is illustrated in Fig. 1(b). R1 (150 ml) was first aspirated into RC1 followed by the sample zone (800 ml) and the flow was then stopped to allow the reaction to take place.Next, R3 (75 ml), a further sample volume (1200 ml) and R2 (75 ml) were sequentially aspirated into RC1 and subsequently propelled towards the detector. The absorbance was measured at 616 nm during the BTB experiments and at 400 nm for the determination of both phosphate and silicate. Dual wavelengths were used to minimize Schlieren noise and the correction was made at 800 nm.Results and Discussion Preliminary Studies of the System As in other continuous-flow methods, SIA requires an overlap zone to achieve a significant reaction at the two reagent/sample interfaces. In addition, in order to determine two different analytes in the same sample volume, a sufficient separation between peaks is required, which can be attained by using a sufficiently large sample volume. Prior to optimization of the system, severxperiments were carried out by alternately using an indicator (BTB) as sample and reagent (100 ml at both sides of the sample), and registering the resulting overlap profiles in order to observe the influence of the flow rate, reaction tube diameter and sample volume on the abovementioned aspects.The flow rate during aspiration of the sample and flush sequences was investigated by using a RC1 of 0.86 mm id. The flow rate was varied between 3.6 and 9.1 ml min21. On maintaining the slowest propulsion rate, the corresponding overlap zones remained virtually constant with the sample aspiration rate.However, for a propulsion rate higher than 4.5 ml min21, the first registered peak was exceedingly narrow and, therefore, the inaccuracy involved in its detection was increased. A flow rate of 4 ml min21 was selected, for both aspiration and propelling purposes. The influence of the tube diameter of RC1 was studied between 0.56 and 1.5 mm id. Obviously, when the internal diameter of the tubing decreases, the length occupied by the same volume of liquid is then increased, and separation of the two sample/reagent interfaces is favoured. Seemingly, with smaller diameter tubing the sensitivity was slightly enhanced. Nevertheless, diameters smaller than 0.5 mm could not be used owing to excessive back-pressure produced in the flow by the liquid-driver used.The reaction coil diameter finally chosen was 0.86 mm id. Owing to the large sample volume, the † The software used in this work can be obtained on request from SCIWARE, Banco de Programas, Departament de Qu�ýmica, Universitat de les Illes Balears, E-07071 Palma de Mallorca, Spain.Fig. 1 Schematic diagram of the SIA system used for the simultaneous determination of phosphate and silicate. P, titration burette, 5 ml; V, six-port valve; HC, holding coil; RC1, reaction coil 1; RC2, reaction coil 2; R1–R3: reagents; S, sample; C, carrier; and D, detector. For details see text. Table 1 Protocol sequence of the SIA system for the simultaneous determination of phosphate and silicate Step Time/s Valve Burette* Sampler Description 1 Initialize Initialize Initial piston position 0 ml 2 7.65 6 D 2500 ml Place burette piston for subsequent steps 3 6 Next sample 4 6.6 2 A 1000 ml Aspirate sample for washing the sample line 5 6.6 6 D 2000 ml Dispense to waste 6 2.5 1 A 150 ml Aspirate R1 solution to RC1 7 12.0 2 A 800 ml Aspirate sample 8 10 2 Stop Stopped-flow for 10 s 9 1.25 3 A 75 ml Aspirate oxalic acid solution 10 18 2 A 1200 ml Aspirate sample 11 1.7 4 A 100 ml Aspirate R2 solution 12 62.1 5 D 3725 ml Dispense flow to the detector; acquire data 13 11.2 5 L 3400 ml Load the burette with carrier via its own two-way valve 14 5 5 D 1000 ml Dispense carrier to wash the line and adjust piston position for next cycle 15 Repeat from step 6, n replicates 16 Repeat from step 3, n samples * Aspirate; D, dispense; L, load burette movement. 1034 Analyst, October 1997, Vol. 122reagents undergo a larger path in RC1 than in RC2 (fixed at a smaller length); thus, the latter does not considerably affect the spatial resolution of the peaks. The diameter selected for RC2 was the same as that of RC1. The influence of the sample volume on spatial resolution was evaluated by injecting volumes from 500 to 2000 ml. As expected, peak separation was improved with an increase in the sample volume since the separation between the two sample/ reagent overlap zones was greater.However, owing to the higher dispersion of the reagent aspirated initially (R1), as a result of the larger distance travelled inside RC1, a widening of the second peak takes place. The increase in the peak amplitude corresponding to the reaction with R1, obtained by using a volume of 2000 ml, was approximately four times that obtained with a volume of 500 ml. The decrease in the height of the valley between both peaks is of the same order. Finally, a 2000 ml sample volume was chosen in order to achieve a good spatial resolution.The increase in the dispersion of R1 might be reflected in a decrease in sensitivity; thus, the possibility of introducing into the system an element which restrained this dispersion, such as a small volume of air or an organic compound immiscible in an aqueous medium, was considered. The use of an organic solvent (e.g., CCl4) gave rise to adherence to the walls of the PTFE tubing, which might have an influence on the reproducibility of the method; thus, no improvement was attained.Air bubbles can be used to avoid dispersion of a certain liquid zone. In order to restrain dispersion of R1 the air zone must be previously aspirated prior to any other reagent. Fig. 2 shows the influence of the signal corresponding to the second peak regarding the presence or absence of air. Probably owing to a build-up of pressure in the selection valve together with the inaccuracy of the burette used as a liquid-driver, the smallest volume of air that gave rise to reproducible results was 15 ml.The air zone could be eliminated by using porous tubular or planar membranes just before detection took place. Finally, it was decided not to use segmentation with air since the selection valve was not provided with sufficient ports; in addition implementation of a new valve would involve a more complex system. In any case, the final range obtained for the studied analytes was sufficiently sensitive for the analysis of waste water.Simultaneous Determination of Phosphate and Silicate The determination of phosphate and silicate is based on the formation of vanadomolybdophosphate in acidic medium. The aim of this work was to design a system which only allowed the reaction of one of the two species at each end of the sample zone, thus simplifying the treatment which would be involved in relation to multicomponent techniques. In order to achieve this objective, different acidic conditions were fixed at both ends of the sample, since silicate reacts more slowly than phosphate, the reaction being accelerated when the acidity is decreased.Because of this fact, together with the need for a higher sensitivity for phosphate, it was decided to determine phosphate in the front peak and silicate in the rear peak. Study of the effect of different parameters Reagent concentration. For the determination of silicate, it was initially decided to use a 0.15 m MoO4 22 solution in 0.5 m HCl; however, finally, the concentrations were decreased (0.1 m MoO4 22 in 0.4 m HCl) in order to reduce the precipitation of MoO3 and other condensed forms which take place in acidic medium.Owing to the persistence of precipitation, the following recommendations should be taken into account: daily calibrations, change of solutions every 2–3 d and washing with an NaOH solution after the work has been concluded. The reagent initially employed for the determination of phosphate was proposed in previous work:16 0.035 m MoO4 22, 2.5 31023 m VO32 and 0.5 m HCl.Under these conditions, and for a volume of 2000 ml a slight interference from silicate was observed; therefore, the acidity was increased to 0.8 m HCl. The lowest concentration possible with which the silicate interference was eliminated was selected (0.65 m) to avoid a larger decrease in the phosphate response. In order to compensate for the loss of response regarding phosphate, the concentration of molybdate was increased in the vanadomolybdate solution, however, only a parallel shift of the calibration graph towards higher values of the coordinates in the origin was obtained.If the acid concentration remains constant (0.65 m HCl), when the [H+]/[Mo] ratio decreases the silicate interference increases. The initial molybdate concentration was considered in further studies. An increase in the vanadate concentration slightly favoured an increase in phosphate sensitivity in addition to the linear range. For example, the correlation coefficient of the calibration graph up to 16 mg l21 P was 0.9951 for 2 3 1023 m VO32 and 0.9992 for 1022 m.However, owing to the absorption of vanadate in the yellow region, the presence of a peak just prior to the phosphate peak is observed, which is due to the excess of reagent that did not undergo reaion. The former peak increases with vanadate concentration and decreases with the concentration of acid in the reagent solution.A concentration of 3 31023 m VO32 was selected in this work. In order to eliminate the pre-peak of phosphate, the carrier solution (distilled water) was replaced by a slightly acidic solution. The lowest concentration that allowed the former peak to be eliminated was 0.05 m HCl. Aspirated reagent volume. The reagent volumes used so far were: 100 ml of R1, 2000 ml of sample and 50 ml of R2. As previously reported,7 volumes of R1 and R2 that allow the required resolution of the peaks would be Vm/10 and Vm/20, respectively, where Vm is the sample volume.These earlier workers7 recommended sample volumes larger than 1500 ml, and, as described under Preliminary Studies of the System, a sample volume of 2000 ml was selected, for which, according to the previous criterion, volumes of R1 and R2 of 200 and 100 ml, respectively should be used; such volumes are larger than those used so far.However, under our experimental conditions (volumes and reagent concentrations) silicate interfered with the determination of phosphate, its elimination being achieved by adjusting the volumes to 150 ml for R1 and 75 ml for R2. Elimination of phosphate interference in the determination of silicate. Phosphate interference in the determination of silicate (second peak) could not be eliminated by simply adjusting the Fig. 2 Effect of the insertion of an air zone prior to aspiration of reagents.(A) Without air; (B) with 15 ml of air. Analyst, October 1997, Vol. 122 1035volumes and concentrations of the reagents. Thus, under the previously established conditions, the contribution to the Si signal of a 10 mg l21 P solution corresponded to a concentration of 12 mg l21 Si. Phosphate interference is usually eliminated by decomposition of phosphomolybdic acid by means of oxalic acid.17 This can easily be implemented in FIA; however, in SIA it involves the insertion of an additional reagent, which may hinder the degree of mixing necessary for the reaction to take place.Aspiration of the oxalic acid solution (R3) could be considered prior to R1, the sequence thus being R3–R1–sample–R2. However, under these conditions formation of the silicomolybdic complex is hindered and, therefore, the reaction between R1 and the sample should take place prior to the addition of R3. The following conditions were considered: the same aspiration sequence as that used so far, propulsion of a certain volume (1.2–1.5 ml) towards the detector in such a way that the fraction for the determination of phosphate remained post-valve, flow halting, insertion of 150 ml of R3 and, finally, continuing with the determination of silicate.However, possibly because the mixing of the molybdate–sample with the oxalic acid was incomplete, decomposition of molybdophosphoric acid was not attained. A further sequential aspiration of all the reagents was considered, including R3, and further propelling towards the detector.Thus, R3 divides the sample zone into two, the aspiration sequence being as follows: R1–sample–R3–sample– R1. Several positions regarding the addition of oxalic acid were tested (Table 2) for a total sample volume of 2000 ml; phosphate interference was avoided when fragmentation of the sample corresponded to 800 and 1200 ml for Si and P, respectively. On varying the volume of R3 from 50 to 150 ml, the only effect observed was that of the persistence of phosphate interference when the volume was only 50 ml.In order to avoid this inconvenience, a volume of 75 ml was selected. The same problem arises when very dilute solutions of oxalic acid are used. The concentration of oxalic acid in 0.24 m HCl was varied from 1.4 to 5.6%. The phosphate signal (5 mg l21 P) remained constant and the interference of phosphate with the determination of silicate completely disappeared for concentrations higher than 3%. A concentration of 5.6% was selected for further experiments.Evaluation of the method Under the selected conditions, silicate interference was avoided in the first peak (phosphate) and vice versa, a good spatial resolution of both peaks was also obtained. Hence, in spite of the fragmentation of the sample zone, which might involve modification of the optimum experimental conditions, it was decided to assess whether the present SIA system met the required needs (detection limit, linear range, etc.).Linearity and accuracy. The performance of the SIA system for the simultaneous determination of phosphate and silicate by using large sample volumes is given in Table 3. The detection limit was calculated as three times the standard deviation of the blank for phosphate and three times the standard deviation of the noise level for silicate. A representative run illustrating the peaks obtained for standards and real samples is shown in Fig. 4. Accuracy. The accuracy of the proposed SIA method was evaluated by comparing the results for several synthetic samples with different phosphate and silicate ratios. The results shown in Table 4, were fairly good in all cases. The divergence becomes greater as the difference in concentrations between the two analytes increases and is also higher for very low phosphate concentrations. Interferences. The possible interference of several species for a mixture of phosphate and silicate at concentrations of 4 and 12 mg l21, respectively, was studied.The interference criterion established was 10% of the concentration value. The maximum concentration tested was 800 mg l21 for most of the species, except for arsenic, chromium(vi) and nitrite for which lower concentrations were considered (1, 5 and 20 mg l21, respectively), owing to the low levels of these species in urban waste water. The results obtained are summarized in Table 5.In addition to the concentration at which the considered species starts to interfere, it is indicated in parentheses whether the interference is either positive or negative. Waste water samples The proposed method was applied to the determination of phosphate and silicate in urban waste water. The waste water samples were filtered through a 0.45 mm filter prior to analysis; thus, only the soluble fraction was analysed. On attempting to analyse waste water samples, the precipitation of calcium oxalate made acidification of the oxalic acid solution necessary.Different HCl concentrations were tested between 0.06 and 0.24 Table 2 Effect of the position of oxalic acid (R3) regarding the interference of different phosphate solutions on the silicate peak* Height of the silicate peak/AU31023 Phosphate/ mg l21 P A† B‡ C§ D¶ 0 0 0 0 0 2 33 11 0 0 6 88 27 0 0 10 136 46 12 0 * R1: 0.1 m MoO4 22, R2: 0.035 m MoO4 22–3 3 1023 m VO32–0.65 m HCl.† A: 150 ml R1 + 2000 ml sample + 75 ml R2. ‡ B: 150 ml R1 + 1000 ml sample + 150 ml R3 + 1000 ml sample + 75 ml R2. § C: 150 ml R1 + 850 ml sample + 150 ml R3 + 1150 ml sample + 75 ml R2. ¶ D: 150 ml R1 + 750 ml sample + 150 ml R3 + 1250 ml sample + 75 ml R2. Table 3 Performance of the SIA system for the simultaneous determination of phosphate and silicate Parameter Phosphate Linear calibration range 0–12 mg l21 P Regression equation H(AU31023) = 48.2 + 41.53[mg l21 P] Correlation coefficient (r) 0.9982 Detection limit (3s) 0.2 mg l21 P RSD (n = 10) 1.38% (9 mg l21 P) Throughput Sample consumption (per sample) Reagent consumption (per sample) Silicate 0–36 mg l21 Si H(AU31023) = 3.7 + 7.43[mg l21 Si] 0.9980 0.9 mg l21 Si 3.87% (23.8 mg l21 Si) 23 samples h21 3.0 ml 0.150 ml molybdate solution 0.075 ml oxalic acid solution 0.075 ml molybdovanadate solution 1036 Analyst, October 1997, Vol. 122m. Within this range, the acidity does not affect the signal significantly since a variation in the blank results in a proportional variation in the sample. By means of several batch assays it was proved that, as the acidity increased, the formation of the precipitate was delayed.Hence, whereas precipitation was almost instantaneous with a concentration of 0.06 m HCl it was delayed for more than 2 min with 0.18 m HCl. The latter concentration was employed for working purposes after ascertaining its compatability with the SIA manifold used in this study.Table 6 compares the results for several waste water samples with those obtained by standard spectrophotometric methods18 based on the formation of vanadomolybdophosphoric and molybdosilicate. Fig. 3(a) and (b) shows that there is good correlation between the methods, with most points lying within the 95% confidence limits. As depicted in Fig. 4, in spite of the fact that the separation of the peaks is substantially worse in the samples than in the mixtures prepared from standard solutions, indicative of a matrix effect, integration of the peaks by using the peak height as an analytical signal was sufficient in order to Fig. 3 (a) Correlation between proposed SIA method and standard method for phosphate and (b) silicate in waste water. Table 6 Results of the determination of phosphate and silicate in waste waters Dilution mg l21 P mg l21 Si Sample factor No. Type* for SIA Batch SIA Error (%) Batch SIA Error (%) 1 E 1 7.3 7.7 +5.5 15.4 15.3 20.6 2 E 1 6.4 6.7 +4.7 17.2 17.1 20.6 3 I 1 7.4 7.5 +1.4 21.2 19.8 26.6 4 I 1 7.8 7.8 0 28.1 27.3 22.8 5 PS 1 7.7 7.8 +1.3 12.6 12.5 20.8 6 PS 1 5.7 5.7 0 19.4 19.5 +0.5 7 PS 1 8.2 8.3 +1.2 15.9 15.1 25.0 8 I 5 14.46 15.03 +3.9 90.53 84.05 27.2 9 I 5 12.76 12.85 +0.7 52.0 57.3 +10.2 10 E 2.5 8.15 7.66 +6.0 31.7 33.6 +6.0 11 I 2.5 9.6 9.55 20.5 31.3 35.4 +13.1 12 I 2.5 11.13 10.05 29.7 31.64 36.8 +16.3 13 I 2.5 16.95 16.67 21.7 29.85 30.8 +3.2 14 I 5 7.07 6.0 215.1 65.93 64.82 21.7 15 E 2.5 6.23 6.05 22.9 36.77 40.73 +10.8 16 E 2.5 9.69 10.03 +3.5 26.1 30.0 +14.9 17 I 2.5 5.67 5.48 23.4 31.34 31.9 +1.8 18 E 1 1.35 1.16 214 24.9 26.6 +6.8 19 I 2.5 7.31 7.41 1.4 35.9 36.1 +0.5 20 E 2.5 7.27 7.94 9.2 44.7 41.9 26.0 * I = Influent; E = Effluent; PS = primary settled.Table 4 Results of the analysis of several standard mixtures of silicate and phosphate Taken/ Taken/ Found/ Error Found/ Error mg l21 P mg l21 Si mg l21 P (%) mg l21 Si (%) 1 1.2 0.78 222.0 1.3 +8.3 1 5.95 0.95 25.0 6.2 +4.7 1 11.9 0.80 220.0 12.1 +1.7 1 23.8 1.2 +20.0 21.4 210.1 2 3.0 2.1 +5.0 3.1 +3.3 2 5.95 2.1 +5.0 5.91 20.7 2 11.9 2.3 +15.0 13.1 +10.1 3 11.9 3.3 +10.0 11.4 24.2 3 17.85 3.2 +6.7 16.6 27.0 6 5.95 6.6 +10.0 5.7 24.2 6 23.8 6.2 +3.3 20.2 215.1 6 35.7 6.7 +11.7 31.3 212.3 Table 5 Interference of several species with the determination of phosphate (4 mg l21 P) and silicate (12 mg l21 Si) Species* Phosphate Silicate Species* Phosphate Silicate Fe3+ 5 (+) 2 (+) S22 1 (2) 15 (+) Fe2+ 2 (2) 4 (+) NO22 > 20 > 20 AsV > 1 > 1 CO3 22 800 (+) 75 (+) CrVI > 5 > 5 SO4 22 800 (2) > 800 K+, Na+ 200 (2) 800 (+) Cl2 > 800 > 800 NH4 + 800 (2) > 800 NO32 > 800 800 (+) Mg2+ > 800 250 (+) * Concentrations in mg l21.Analyst, October 1997, Vol. 122 1037obtain results in good agreement with those of the classical method. Conclusions In the present work the use of large sample volumes has been applied to sequential injection systems for the simultaneous determination of phosphate and silicate, in such a way that each species is analysed at each end of the sample zone.Configuration of the system, which is totally automated, is very simple. Although the reagent consumption is small, the substantial amount of sample required limits the application of the method to those situations in which large amounts of samples are available, as with environmental monitoring.In spite of the difficulty arising from the mutual interference between the two analytes determined, the use of sample segmentation has led to satisfactory results in the analysis of urban waste water. Better results might be anticipated in addition to greater ease of implementation, if two analytes of totally different chemical behaviour were considered. A possible disadvantage regarding SIA methods is the slow analysis rate; however, in this work a throughput of 23 samples h21 was attained, which is sufficient for most applications. The authors thank the CICyT (Spanish Council for Research in Science and Technology) for financial support of this work as part of projects AMB94-0534 and AMB94-1033.References 1 R°uöziöcka, J., and Marshall, G. D., Anal. Chim. Acta, 1990, 237, 329. 2 Ivaska, A., and R°uöziöcka, J., Analyst, 1993, 118, 885. 3 Cladera, A., Tom�as, C., G�omez, E., Estela, J. M., and Cerd�a, V., Anal. Chim. Acta, 1995, 302, 297. 4 Alonso-Chamarro, J., Bartrol�ý, J., and Barber, R., Anal. Chim. Acta, 1992, 261, 219. 5 Araujo, A. N., Lima, J. L. F. C., Rangel, O. S. S., Alonso, J., Bartrol�ý, J., and Barber, R., Analyst, 1989, 114, 1465. 6 G�omez, E., Tom�as, C., Cladera, A., Estela, J. M., and Cerd`a, V., Analyst, 1995, 120, 1181. 7 Estela, J. M., Cladera, A., Mu�noz, A., and Cerd`a, V., Int. J. Environ. Anal. Chem., 1996, 64, 205. 8 Robards, K., McKelvie, I. D., Benson, R. L., Worsfold, P. J., Blundell, N.J., and Casey, H., Anal. Chim. Acta, 1994, 287 , 147. 9 Narusawa, Y., and Hashimoto, T., Chem. Lett., 1987, 1367. 10 Narusawa, Y., Anal. Chim. Acta, 1988, 204, 53. 11 Jones, P., Stanley, R., and Barnett, N., Anal. Chim. Acta, 1991, 249, 539. 12 Linares, P., Luque de Castro, M. D., and Valc�arcel, M., Talanta, 1986, 33, 889. 13 Mas, F., Estela, J. M., and Cerd`a, V., Int. J. Environ. Anal. Chem., 1991, 43, 71. 14 Kircher, C. C. and Crouch, S. R., Anal. Chem., 1983, 55, 248. 15 Jacintho, A. O., Kronka, E. A. M., Zagatto, E. A. G., Arruda, M. A. Z., and Ferreira, J. R., J. Flow Injection Anal.,1989, 6, 19. 16 Mu�noz, A., Mas Torres, F., Estela, J. M., and Cerd`a, V., Anal. Chim. Acta, in the press. 17 Chalmers, R. A., and Sinclair, A. G., Anal. Chim. Acta, 1966, 34, 412. 18 American Public Health Association, American Water Works Association, Water Pollution Control Federation, Standard Methods for the Examination of Water and Wastewater, American Public Health Association, 17th edn., 1989.Paper 7/01646H Received March 10, 1997 Accepted June 30, 1997 Fig. 4 Representative run for the simultaneous determination of phosphate and silicate by SIA. Concentration expressed as mg l21; S1 and S2, waste water samples. 1038 Analyst, October 1997, Vol. 122 Simultaneous Determination of Phosphate and Silicate in Waste Water by Sequential Injection Analysis F. Mas-Torres, A. Mun�oz, J. M. Estela and V. Cerd`a* Department of Chemistry, University of Balearic Islands, 07071-Palma de Mallorca, Spain A sequential injection analysis system for the simultaneous determination of phosphate and silicate in waste water is proposed.The method is based on the formation of yellow vanadomolybdophosphate and molybdosilicate, respectively, in addition to the use of large sample volumes. The mutual interference between both analytes was eliminated by selection of the appropriate acidity and by sample segmentation with oxalic acid.The calibration graph for phosphate and silicate is linear up to 12 mg l21 P and 30 mg l21 Si, respectively. The detection limits are 0.2 mg l21 P and 0.9 mg l21 Si. The method provides a throughput of 23 samples h21 with a relative standard deviation < 1.4% for phosphate and < 4% for silicate. The method was found to be suitable for the determination of these species in waste water samples. Keywords: Sequential injection analysis; simultaneous determination; phosphate; silicate; waste water The introduction of sequential injection analysis (SIA) by R°uöziöcka, and Marshall1 responded to the difficulties of implementing flow injection analysis (FIA) on an industrial scale.SIA has opened up new possibilities in flow techniques. Among the acknowledged advantages of SIA are the greater versatility of the manifold, thus avoiding the physical reconfiguration required in FIA systems when changing the chemical determination, and the considerable saving of reagents since a continuous consumption is not involved in SIA systems.In essence, SIA consists in the sequential aspiration of welldefined sample and reagent zones into a holding coil by means of a multi-position (selection) valve. The flow is then reversed and tthe holding coil are propelled towards the detector. Consequently, there is a considerable decrease in the sampling frequency in relation to the comparable FIA method.However, the favourable aspects of FIA provide a good source for the development of automated monitors. During the aspiration and propelling steps an interpenetration zone, necessary for the required reaction to be accomplished; is generated due to axial and radial dispersion which will depend on the volumes and concentrations of the reagents used together with the geometric conditions of the system. In order to provide sufficient robustness for industrial control process, the proponents of the technique recommend the use of sinusoidal flow piston pumps. Nevertheless, other more available options have been suggested such as the use of peristaltic pumps2 or titration burettes.3 A consequence of the increasing control demanded either in industrial processes or in the environmental field is the increase in the number of parameters to be determined for a particular sample, which has, therefore, led to greater interest in the development of multiparametric automated methods.FIA, in addition to coupling with other techniques such as HPLC or ICP-AES, offers several possibilities regarding the determination of two or more parameters. The most common design of multicomponent systems is based on the use of several detectors connected in parallel or in series. Other less used alternatives are those based on spectral resolution by means of multicomponent techniques and those based on the way in which the sample is introduced into the system.The latter possibility has given rise to the ‘sandwich’ technique, which consists in introducing a sample zone between two different reagent solutions.4,5 The sample volume should be sufficiently large to obtain two clearly differentiated peaks corresponding to the two sample/reagent mixing zones. Both possibilities can be implemented in an SIA system. G�omez et al.6 used multicomponent SIA for the simultaneous determination of calcium and magnesium in waters.On the other hand, Estela et al.7 carried out a study on the feasibility of the use of large sample volumes in SIA. Other workers, as has been reviewed by Robards et al.,8 have proposed several FIA methods for the simultaneous determination of phosphate and silicate with on-line column separation, 9–11 methods based on the different formation rates of the corresponding molybdate heteropolyacid12–14 or by using intermittent flows.15 In the present work, an SIA method is proposed by using large sample volumes for the simultaneous spectrophotometric determination of phosphate and silicate. The mutual interference was eliminated by adjusting the acid concentration and by segmenting the sample by the addition of oxalic acid.The established method was applied to the analysis of waste waters. Experimental Reagents All reagents were prepared from analytical-reagent grade chemicals (Merck, Darmstadt, Germany) and stored in polyethylene bottles, except for phosphate solutions which were stored in glass containers.Stock standard solutions of phosphate (50 mg l21 P) and silicate (1000 mg l21 Si) were prepared from KH2PO4 and Na2SiO3·5H2O, respectively. Working phosphate and silicate solutions were prepared daily by suitable dilution of the stock solutions. A 0.5 m ammonium molybdate solution was prepared from (NH4)6Mo7O24·4H2O. This solution was diluted 5-fold before use as a reagent solution (R1) for the determination of silicate. For phosphate determination, the vanadomolybdate reagent (R2) was prepared to contain 0.035 m ammonium molybdate and 3 31023 m ammonium vanadate in 0.65 m HCl.The carrier was 0.05 m HCl. A 5.6% oxalic acid solution (R3) was prepared by dissolving the solid in distilled water. Hydrochloric acid was added to the solution to obtain a final concentration of 0.18 m. A 0.002% Bromothymol Blue (BTB) solution in 0.01 m sodium tetraborate was used in preliminary studies of the system.Apparatus The sequential injection system depicted in Fig. 1(a) was constructed from the following components: a Crison (Alella, Spain) 738 titration autoburette with adjustable dispensing rate, Analyst, October 1997, Vol. 122 (1033–1038) 1033a laboratory-built electromechanically controlled Rheodyne (Cotati, CA, USA) 5011 six-port valve, a Gilson (Villiers le Bel, France) Sample Changer-22 autosampler and a HP-8452A diode-array spectrophotometer (Hewlett-Packard, Waldbronn, Germany) equipped with a 10 mm Hellma (Jamaica, NY, USA) flow-through cell (volume 18 ml).Data acquisition and device control were achieved using a PC-486 compatible computer. All the tubing connecting the different units was made of PTFE. The holding coil (HC) was 300 cm 3 1.5 mm id. All the remaining tubing was 0.86 mm id. The lengths of reaction coil 1 (RC1) and reaction coil 2 (RC2) were 3.5 m and 130 cm, respectively. Procedure Ports 1–6 were connected to R1, sample, R3, R2, detector and waste, respectively. The analytical procedures of the SIA system were controlled by DARRAY† version 2.0 software developed by the authors’ group.The protocol sequence is listed in Table 1 and the zone sequence is illustrated in Fig. 1(b). R1 (150 ml) was first aspirated into RC1 followed by the sample zone (800 ml) and the flow was then stopped to allow the reaction to take place. Next, R3 (75 ml), a further sample volume (1200 ml) and R2 (75 ml) were sequentially aspirated into RC1 and subsequently propelled towards the detector.The absorbance was measured at 616 nm during the BTB experiments and at 400 nm for the determination of both phosphate and silicate. Dual wavelengths were used to minimize Schlieren noise and the correction was made at 800 nm. Results and Discussion Preliminary Studies of the System As in other continuous-flow methods, SIA requires an overlap zone to achieve a significant reaction at the two reagent/sample interfaces.In addition, in order to determine two different analytes in the same sample volume, a sufficient separation between peaks is required, which can be attained by using a sufficiently large sample volume. Prior to optimization of the system, several experiments were carried out by alternately using an indicator (BTB) as sample and reagent (100 ml at both sides of the sample), and registering the resulting overlap profiles in order to observe the influence of the flow rate, reaction tube diameter and sample volume on the abovementioned aspects.The flow rate during aspiration of the sample and flush sequences was investigated by using a RC1 of 0.86 mm id. The flow rate was varied between 3.6 and 9.1 ml min21. On maintaining the slowest propulsion rate, the corresponding overlap zones remained virtually constant with the sample aspiration rate. However, for a propulsion rate higher than 4.5 ml min21, the first registered peak was exceedingly narrow and, therefore, the inaccuracy involved in its detection was increased.A flow rate of 4 ml min21 was selected, for both aspiration and propelling purposes. The influence of the tube diameter of RC1 was studied between 0.56 and 1.5 mm id. Obviously, when the internal diameter of the tubing decreases, the length occupied by the same volume of liquid is then increased, and separation of the two sample/reagent interfaces is favoured. Seemingly, with smaller diameter tubing the sensitivity was slightly enhanced. Nevertheless, diameters smaller than 0.5 mm could not be used owing to excessive back-pressure produced in the flow by the liquid-driver used.The reaction coil diameter finally chosen was 0.86 mm id. Owing to the large sample volume, the † The software used in this work can be obtained on request from SCIWARE, Banco de Programas, Departament de Qu�ýmica, Universitat de les Illes Balears, E-07071 Palma de Mallorca, Spain.Fig. 1 Schematic diagram of the SIA system used for the simultaneous determination of phosphate and silicate. P, titration burette, 5 ml; V, six-port valve; HC, holding coil; RC1, reaction coil 1; RC2, reaction coil 2; R1–R3: reagents; S, sample; C, carrier; and D, dtor. For details see text. Table 1 Protocol sequence of the SIA system for the simultaneous determination of phosphate and silicate Step Time/s Valve Burette* Sampler Description 1 Initialize Initialize Initial piston position 0 ml 2 7.65 6 D 2500 ml Place burette piston for subsequent steps 3 6 Next sample 4 6.6 2 A 1000 ml Aspirate sample for washing the sample line 5 6.6 6 D 2000 ml Dispense to waste 6 2.5 1 A 150 ml Aspirate R1 solution to RC1 7 12.0 2 A 800 ml Aspirate sample 8 10 2 Stop Stopped-flow for 10 s 9 1.25 3 A 75 ml Aspirate oxalic acid solution 10 18 2 A 1200 ml Aspirate sample 11 1.7 4 A 100 ml Aspirate R2 solution 12 62.1 5 D 3725 ml Dispense flow to the detector; acquire data 13 11.2 5 L 3400 ml Load the burette with carrier via its own two-way valve 14 5 5 D 1000 ml Dispense carrier to wash the line and adjust piston position for next cycle 15 Repeat from step 6, n replicates 16 Repeat from step 3, n samples * Aspirate; D, dispense; L, load burette movement. 1034 Analyst, October 1997, Vol. 122reagents undergo a larger path in RC1 than in RC2 (fixed at a smaller length); thus, the latter does not considerably affect the spatial resolution of the peaks.The diameter selected for RC2 was the same as that of RC1. The influence of the sample volume on spatial resolution was evaluated by injecting volumes from 500 to 2000 ml. As expected, peak separation was improved with an increase in the sample volume since the separation between the two sample/ reagent overlap zones was greater. However, owing to the higher dispersion of the reagent aspirated initially (R1), as a result of the larger distance travelled inside RC1, a widening of the second peak takes place.The increase in the peak amplitude corresponding to the reaction with R1, obtained by using a volume of 2000 ml, was approximately four times that obtained with a volume of 500 ml. The decrease in the height of the valley between both peaks is of the same order. Finally, a 2000 ml sample volume was chosen in order to achieve a good spatial resolution. The increase in the dispersion of R1 might be reflected in a decrease in sensitivity; thus, the possibility of introducing into the system an element which restrained this dispersion, such as a small volume of air or an organic compound immiscible in an aqueous medium, was considered. The use of an organic solvent (e.g., CCl4) gave rise to adherence to the walls of the PTFE tubing, which might have an influence on the reproducibility of the method; thus, no improvement was attained.Air bubbles can be used to avoid dispersion of a certain liquid zone.In order to restrain dispersion of R1 the air zone must be previously aspirated prior to any other reagent. Fig. 2 shows the influence of the signal corresponding to the second peak regarding the presence or absence of air. Probably owing to a build-up of pressure in the selection valve together with the inaccuracy of the burette used as a liquid-driver, the smallest volume of air that gave rise to reproducible results was 15 ml. The air zone could be eliminated by using porous tubular or planar membranes just before detection took place.Finally, it was decided not to use segmentation with air since the selection valve was not provided with sufficient ports; in addition implementation of a new valve would involve a more complex system. In any case, the final range obtained for the studied analytes was sufficiently sensitive for the analysis of waste water. Simultaneous Determination of Phosphate and Silicate The determination of phosphate and silicate is based on the formation of vanadomolybdophosphate in acidic medium.The aim of this work was to design a system which only allowed the reaction of one of the two species at each end of the sample zone, thus simplifying the treatment which would be involved in relation to multicomponent techniques. In order to achieve this objective, different acidic conditions were fixed at both ends of the sample, since silicate reacts more slowly than phosphate, the reaction being accelerated when the acidity is decreased.Because of this fact, together with the need for a higher sensitivity for phosphate, it was decided to determine phosphate in the front peak and silicate in the rear peak. Study of the effect of different parameters Reagent concentration. For the determination of silicate, it was initially decided to use a 0.15 m MoO4 22 solution in 0.5 m HCl; however, finally, the concentrations were decreased (0.1 m MoO4 22 in 0.4 m HCl) in order to reduce the precipitation of MoO3 and other condensed forms which take place in acidic medium.Owing to the persistence of precipitation, the following recommendations should be taken into account: daily calibrations, change of solutions every 2–3 d and washing with an NaOH solution after the work has been concluded. The reagent initially employed for the determination of phosphate was proposed in previous work:16 0.035 m MoO4 22, 2.5 31023 m VO32 and 0.5 m HCl. Under these conditions, and for a volume of 2000 ml a slight interference from silicate was observed; therefore, the acidity was increased to 0.8 m HCl.The lowest concentration possible with which the silicate interference was eliminated was selected (0.65 m) to avoid a larger decrease in the phosphate response. In order to compensate for the loss of response regarding phosphate, the concentration of molybdate was increased in the vanadomolybdate solution, however, only a parallel shift of the calibration graph towards higher values of the coordinates in the origin was obtained.If the acid concentration remains constant (0.65 m HCl), when the [H+]/[Mo] ratio decreases the silicate interference increases. The initial molybdate concentration was considered in further studies. An increase in the vanadate concentration slightly favoured an increase in phosphate sensitivity in addition to the linear range. For example, the correlation coefficient of the calibration graph up to 16 mg l21 P was 0.9951 for 2 3 1023 m VO32 and 0.9992 for 1022 m.However, owing to the absorption of vanadate in the yellow region, the presence of a peak just prior to the phosphate peak is observed, which is due to the excess of reagent that did not undergo reaction. The former peak increases with vanadate concentration and decreases with the concentration of acid in the reagent solution. A concentration of 3 31023 m VO32 was selected in this work. In order to eliminate the pre-peak of phosphate, the carrier solution (distilled water) was replaced by a slightly acidic solution.The lowest concentration that allowed the former peak to be eliminated was 0.05 m HCl. Aspirated reagent volume. The reagent volumes used so far were: 100 ml of R1, 2000 ml of sample and 50 ml of R2. As previously reported,7 volumes of R1 and R2 that allow the required resolution of the peaks would be Vm/10 and Vm/20, respectively, where Vm is the sample volume.These earlier workers7 recommended sample volumes larger than 1500 ml, and, as described under Preliminary Studies of the System, a sample volume of 2000 ml was selected, for which, according to the previous criterion, volumes of R1 and R2 of 200 and 100 ml, respectively should be used; such volumes are larger than those used so far. However, under our experimental conditions (volumes and reagent concentrations) silicate interfered with the determination of phosphate, its elimination being achieved by adjusting the volumes to 150 ml for R1 and 75 ml for R2.Elimination of phosphate interference in the determination of silicate. Phosphate interference in the determination of silicate (second peak) could not be eliminated by simply adjusting the Fig. 2 Effect of the insertion of an air zone prior to aspiration of reagents. (A) Without air; (B) with 15 ml of air. Analyst, October 1997, Vol. 122 1035volumes and concentrations of the reagents.Thus, under the previously established conditions, the contribution to the Si signal of a 10 mg l21 P solution corresponded to a concentration of 12 mg l21 Si. Phosphate interference is usually eliminated by decomposition of phosphomolybdic acid by means of oxalic acid.17 This can easily be implemented in FIA; however, in SIA it involves the insertion of an additional reagent, which may hinder the degree of mixing necessary for the reaction to take place. Aspiration of the oxalic acid solution (R3) could be considered prior to R1, the sequence thus being R3–R1–sample–R2.However, under these conditions formation of the silicomolybdic complex is hindered and, therefore, the reaction between R1 and the sample should take place prior to the addition of R3. The following conditions were considered: the same aspiration sequence as that used so far, propulsion of a certain volume (1.2–1.5 ml) towards the detector in such a way that the fraction for the determination of phosphate remained post-valve, flow halting, insertion of 150 ml of R3 and, finally, continuing with the determination of silicate.However, possibly because the mixing of the molybdate–sample with the oxalic acid was incomplete, decomposition of molybdophosphoric acid was not attained. A further sequential aspiration of all the reagents was considered, including R3, and further propelling towards the detector. Thus, R3 divides the sample zone into two, the aspiration sequence being as follows: R1–sample–R3–sample– R1.Several positions regarding the addition of oxalic acid were tested (Table 2) for a total sample volume of 2000 ml; phosphate interference was avoided when fragmentation of the sample corresponded to 800 and 1200 ml for Si and P, respectively. On varying the volume of R3 from 50 to 150 ml, the only effect observed was that of the persistence of phosphate interference when the volume was only 50 ml. In order to avoid this inconvenience, a volume of 75 ml was selected.The same problem arises when very dilute solutions of oxalic acid are used. The concentration of oxalic acid in 0.24 m HCl was varied from 1.4 to 5.6%. The phosphate signal (5 mg l21 P) remained constant and the interference of phosphate with the determination of silicate completely disappeared for concentrations higher than 3%. A concentration of 5.6% was selected for further experiments. Evaluation of the method Under the selected conditions, silicate interference was avoided in the first peak (phosphate) and vice versa, a good spatial resolution of both peaks was also obtained.Hence, in spite of the fragmentation of the sample zone, which might involve modification of the optimum experimental conditions, it was decided to assess whether the present SIA system met the required needs (detection limit, linear range, etc.). Linearity and accuracy. The performance of the SIA system for the simultaneous determination of phosphate and silicate by using large sample volumes is given in Table 3.The detection limit was calculated as three times the standard deviation of the blank for phosphate and three times the standard deviation of the noise level for silicate. A representative run illustrating the peaks obtained for standards and real samples is shown in Fig. 4. Accuracy. The accuracy of the proposed SIA method was evaluated by comparing the results for several synthetic samples with different phosphate and silicate ratios.The results shown in Table 4, were fairly good in all cases. The divergence becomes greater as the difference in concentrations between the two analytes increases and is also higher for very low phosphate concentrations. Interferences. The possible interference of several species for a mixture of phosphate and silicate at concentrations of 4 and 12 mg l21, respectively, was studied. The interference criterion established was 10% of the concentration value.The maximum concentration tested was 800 mg l21 for most of the species, except for arsenic, chromium(vi) and nitrite for which lower concentrations were considered (1, 5 and 20 mg l21, respectively), owing to the low levels of these species in urban waste water. The results obtained are summarized in Table 5. In addition to the concentration at which the considered species starts to interfere, it is indicated in parentheses whether the interference is either positive or negative.Waste water samples The proposed method was applied to the determination of phosphate and silicate in urban waste water. The waste water samples were filtered through a 0.45 mm filter prior to analysis; thus, only the soluble fraction was analysed. On attempting to analyse waste water samples, the precipitation of calcium oxalate made acidification of the oxalic acid solution necessary. Different HCl concentrations were tested between 0.06 and 0.24 Table 2 Effect of the position of oxalic acid (R3) regarding the interference of different phosphate solutions on the silicate peak* Height of the silicate peak/AU31023 Phosphate/ mg l21 P A† B‡ C§ D¶ 0 0 0 0 0 2 33 11 0 0 6 88 27 0 0 10 136 46 12 0 * R1: 0.1 m MoO4 22, R2: 0.035 m MoO4 22–3 3 1023 m VO32–0.65 m HCl.† A: 150 ml R1 + 2000 ml sample + 75 ml R2. ‡ B: 150 ml R1 + 1000 ml sample + 150 ml R3 + 1000 ml sample + 75 ml R2. § C: 150 ml R1 + 850 ml sample + 150 ml R3 + 1150 ml sample + 75 ml R2.¶ D: 150 ml R1 + 750 ml sample + 150 ml R3 + 1250 ml sample + 75 ml R2. Table 3 Performance of the SIA system for the simultaneous determination of phosphate and silicate Parameter Phosphate Linear calibration range 0–12 mg l21 P Regression equation H(AU31023) = 48.2 + 41.53[mg l21 P] Correlation coefficient (r) 0.9982 Detection limit (3s) 0.2 mg l21 P RSD (n = 10) 1.38% (9 mg l21 P) Throughput Sample consumption (per sample) Reagent consumption (per sample) Silicate 0–36 mg l21 Si H(AU31023) = 3.7 + 7.43[mg l21 Si] 0.9980 0.9 mg l21 Si 3.87% (23.8 mg l21 Si) 23 samples h21 3.0 ml 0.150 ml molybdate solution 0.075 ml oxalic acid solution 0.075 ml molybdovanadate solution 1036 Analyst, October 1997, Vol. 122m. Within this range, the acidity does not affect the signal significantly since a variation in the blank results in a proportional variation in the sample. By means of several batch assays it was proved that, as the acidity increased, the formation of the precipitate was delayed.Hence, whereas precipitation was almost instantaneous with a concentration of 0.06 m HCl it was delayed for more than 2 min with 0.18 m HCl. The latter concentration was employed for working purposes after ascertaining its compatability with the SIA manifold used in this study. Table 6 compares the results for several waste water samples with those obtained by standard spectrophotometric methods18 based on the formation of vanadomolybdophosphoric and molybdosilicate.Fig. 3(a) and (b) shows that there is good correlation between the methods, with most points lying within the 95% confidence limits. As depicted in Fig. 4, in spite of the fact that the separation of the peaks is substantially worse in the samples than in the mixtures prepared from standard solutions, indicative of a matrix effect, integration of the peaks by using the peak height as an analytical signal was sufficient in order to Fig. 3 (a) Correlation between proposed SIA method and standard method for phosphate and (b) silicate in waste water. Table 6 Results of the determination of phosphate and silicate in waste waters Dilution mg l21 P mg l21 Si Sample factor No. Type* for SIA Batch SIA Error (%) Batch SIA Error (%) 1 E 1 7.3 7.7 +5.5 15.4 15.3 20.6 2 E 1 6.4 6.7 +4.7 17.2 17.1 20.6 3 I 1 7.4 7.5 +1.4 21.2 19.8 26.6 4 I 1 7.8 7.8 0 28.1 27.3 22.8 5 PS 1 7.7 7.8 +1.3 12.6 12.5 20.8 6 PS 1 5.7 5.7 0 19.4 19.5 +0.5 7 PS 1 8.2 8.3 +1.2 15.9 15.1 25.0 8 I 5 14.46 15.03 +3.9 90.53 84.05 27.2 9 I 5 12.76 12.85 +0.7 52.0 57.3 +10.2 10 E 2.5 8.15 7.66 +6.0 31.7 33.6 +6.0 11 I 2.5 9.6 9.55 20.5 31.3 35.4 +13.1 12 I 2.5 11.13 10.05 29.7 31.64 36.8 +16.3 13 I 2.5 16.95 16.67 21.7 29.85 30.8 +3.2 14 I 5 7.07 6.0 215.1 65.93 64.82 21.7 15 E 2.5 6.23 6.05 22.9 36.77 40.73 +10.8 16 E 2.5 9.69 10.03 +3.5 26.1 30.0 +14.9 17 I 2.5 5.67 5.48 23.4 31.34 31.9 +1.8 18 E 1 1.35 1.16 214 24.9 26.6 +6.8 19 I 2.5 7.31 7.41 1.4 35.9 36.1 +0.5 20 E 2.5 7.27 7.94 9.2 44.7 41.9 26.0 * I = Influent; E = Effluent; PS = primary settled.Table 4 Results of the analysis of several standard mixtures of silicate and phosphate Taken/ Taken/ Found/ Error Found/ Error mg l21 P mg l21 Si mg l21 P (%) mg l21 Si (%) 1 1.2 0.78 222.0 1.3 +8.3 1 5.95 0.95 25.0 6.2 +4.7 1 11.9 0.80 220.0 12.1 +1.7 1 23.8 1.2 +20.0 21.4 210.1 2 3.0 2.1 +5.0 3.1 +3.3 2 5.95 2.1 +5.0 5.91 20.7 2 11.9 2.3 +15.0 13.1 +10.1 3 11.9 3.3 +10.0 11.4 24.2 3 17.85 3.2 +6.7 16.6 27.0 6 5.95 6.6 +10.0 5.7 24.2 6 23.8 6.2 +3.3 20.2 215.1 6 35.7 6.7 +11.7 31.3 212.3 Table 5 Interference of several species with the determination of phosphate (4 mg l21 P) and silicate (12 mg l21 Si) Species* Phosphate Silicate Species* Phosphate Silicate Fe3+ 5 (+) 2 (+) S22 1 (2) 15 (+) Fe2+ 2 (2) 4 (+) NO22 > 20 > 20 AsV > 1 > 1 CO3 22 800 (+) 75 (+) CrVI > 5 > 5 SO4 22 800 (2) > 800 K+, Na+ 200 (2) 800 (+) Cl2 > 800 > 800 NH4 + 800 (2) > 800 NO32 > 800 800 (+) Mg2+ > 800 250 (+) * Concentrations in mg l21.Analyst, October 1997, Vol. 122 1037obtain results in good agreement with those of the classical method. Conclusions In the present work the use of large sample volumes has been applied to sequential injection systems for the simultaneous determination of phosphate and silicate, in such a way that each species is analysed at each end of the sample zone. Configuration of the system, which is totally automated, is very simple. Although the reagent consumption is small, the substantial amount of sample required limits the application of the method to those situations in which large amounts of samples are available, as with environmental monitoring. In spite of the difficulty arising from the mutual interference between the two analytes determined, the use of sample segmentation has led to satisfactory results in the analysis of urban waste water. Better results might be anticipated in addition to greater ease of implementation, if two analytes of totally different chemical behaviour were considered. A possible disadvantage regarding SIA methods is the slow analysis rate; however, in this work a throughput of 23 samples h21 was attained, which is sufficient for most applications. The authors thank the CICyT (Spanish Council for Research in Science and Technology) for financial support of this work as part of projects AMB94-0534 and AMB94-1033. References 1 R°uöziöcka, J., and Marshall, G. D., Anal. Chim. Acta, 1990, 237, 329. 2 Ivaska, A., and R°uöziöcka, J., Analyst, 1993, 118, 885. 3 Cladera, A., Tom�as, C., G�omez, E., Estela, J. M., and Cerd�a, V., Anal. Chim. Acta, 1995, 302, 297. 4 Alonso-Chamarro, J., Bartrol�ý, J., and Barber, R., Anal. Chim. Acta, 1992, 261, 219. 5 Araujo, A. N., Lima, J. L. F. C., Rangel, O. S. S., Alonso, J., Bartrol�ý, J., and Barber, R., Analyst, 1989, 114, 1465. 6 G�omez, E., Tom�as, C., Cladera, A., Estela, J. M., and Cerd`a, V., Analyst, 1995, 120, 1181. 7 Estela, J. M., Cladera, A., Mu�noz, A., and Cerd`a, V., Int. J. Environ. Anal. Chem., 1996, 64, 205. 8 Robards, K., McKelvie, I. D., Benson, R. L., Worsfold, P. J., Blundell, N. J., and Casey, H., Anal. Chim. Acta, 1994, 287 , 147. 9 Narusawa, Y., and Hashimoto, T., Chem. Lett., 1987, 1367. 10 Narusawa, Y., Anal. Chim. Acta, 1988, 204, 53. 11 Jones, P., Stanley, R., and Barnett, N., Anal. Chim. Acta, 1991, 249, 539. 12 Linares, P., Luque de Castro, M. D., and Valc�arcel, M., Talanta, 1986, 33, 889. 13 Mas, F., Estela, J. M., and Cerd`a, V., Int. J. Environ. Anal. Chem., 1991, 43, 71. 14 Kircher, C. C. and Crouch, S. R., Anal. Chem., 1983, 55, 248. 15 Jacintho, A. O., Kronka, E. A. M., Zagatto, E. A. G., Arruda, M. A. Z., and Ferreira, J. R., J. Flow Injection Anal.,1989, 6, 19. 16 Mu�noz, A., Mas Torres, F., Estela, J. M., and Cerd`a, V., Anal. Chim. Acta, in the press. 17 Chalmers, R. A., and Sinclair, A. G., Anal. Chim. Acta, 1966, 34, 412. 18 American Public Health Association, American Water Works Association, Water Pollution Control Federation, Standard Methods for the Examination of Water and Wastewater, American Public Health Association, 17th edn., 1989. Paper 7/01646H Received March 10, 1997 Accepted June 30, 1997 Fig. 4 Representative run for the simultaneous determination of phosphate and silicate by SIA. Concentration expressed as mg l21; S1 and S2, waste water samples. 1038 Analyst, Oc
ISSN:0003-2654
DOI:10.1039/a701646h
出版商:RSC
年代:1997
数据来源: RSC
|
7. |
Automated Monosegmented Flow Analyser. Determination of Glucose, Creatinine and Urea |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1039-1044
Ivo M. Raimundo Jr.,
Preview
|
|
摘要:
Automated Monosegmented Flow Analyser. Determination of Glucose, Creatinine and Urea Ivo M. Raimundo, Jr.*, and Celio Pasquini Instituto de Qu�ýmica, UNICAMP, CP 6154, CEP 13083-970, Campinas, S�ao Paulo, Brazil An automated monosegmented flow analyser containing a sampling valve and a reagent addition module and employing a laboratory-made photodiode array spectrophotometer as detection system is described. The instrument was controlled by a 386SX IBM compatible microcomputer through an IC8255 parallel port that communicates with the interface which controls the sampling valve and reagent addition module.The spectrophotometer was controlled by the same microcomputer through an RS232 serial standard interface. The software for the instrument was written in QuickBasic 4.5. Opto-switches were employed to detect the air bubbles limiting the monosegment, allowing precise sample localisation for reagent addition and signal reading. The main characteristics of the analyser are low reagent consumption and high sensitivity which is independent of the sample volume.The instrument was designed to determine glucose, creatinine or urea in blood plasma and serum without hardware modification. The results were compared against those obtained by the Clinical Hospital of UNICAMP using commercial analysers. Correlation coefficients among the methods were 0.997, 0.982 and 0.996 for glucose, creatinine and urea, respectively. Keywords: Monosegmented flow analysis; automated flow analyser; glucose; creatinine; urea Automatic analysers are being widely used nowadays owing to the demand for high turnover determinations, mainly in the clinical and environmental fields.These analysers give precise and accurate results with low consumption of both reagents and sample, allowing high laboratory productivity. Analysers can be divided into three categories according to the sample processing method: robotic, batch (or discrete) and continuous analysers.1,2 Robot based and batch automatic analysers need high precision mechanical parts and, therefore, are very difficult to maintain by small routine laboratories.On the other hand, continuous flow systems, such as flow injection (FI) and continuous flow analysis (CFA),3 are simple and an automatic flow instrument can be easily implemented. Automatic flow analysers employing the FI technique have recently been described.4–10 The construction of this kind of analyser is relatively simple because samples are individually processed by these systems, that is, the software to control the instrument has to perform a number of sequential operations without any parallel processing because usually only one sample is processed in the manifold each time.Monosegmented flow analysis (MSFA)3 was proposed by Pasquini and de Oliveira11 in 1985. In the MSFA system, the sample (previously mixed with reagents) is introduced into the analyser between two air bubbles.These bubbles minimise sample dispersion, allowing long residence times. The sampling frequency can be maintained as high as in FI systems with several samples simultaneously present in the reaction coil; therefore, there is no direct relationship between sample injection and sample detection. Two main approaches have been taken to add and mix reagents for the samples. The first, which was proposed in the original paper11 and has since been frequently used,12–15 employs differential pumping to mix reagents with the sample, before filling the sample loop.The second uses continuous addition of reagents through a confluence point after injection.16–18 The first procedure does not allow methods based on sequential reactions to be adapted to MSFA systems such as the determination of urea by employing urease enzymatic hydrolysis followed by the Berthelot reaction for ammonium determination. The second, in addition to the high reagent consumption, destroys the integrity of the monosegment and is only feasible when the air bubbles are either removed before sample detection16 or do not cause spurious signals in the detector, as when AAS is employed.17,18 Air bubble removal has often been employed before the sample reaches the detector.11–15,19 This operation eliminates spurious signals but increases sample dispersion.However, Facchin and Pasquini20 have recently described monosegmented flow systems which perform liquid–liquid extractions, showing that it is possible to carry out the determination without removal of air bubbles.This paper describes the construction of a microcomputer controlled automatic monosegmented flow analyser which has three main components: a sampling valve, a reagent addition module and a detection system with a photodiode array spectrophotometer.21 Opto-switches were employed to detect the air bubbles limiting the monosegment, allowing sample localisation for reagent addition and for detection.The analyser was applied to the determination of glucose, creatinine and urea in blood plasma and serum by employing the well established GOD–PAP, Jaff�e reaction and urease–Berthelot methods, respectively. The manifold was designed to allow the determination of each of these analytes with only minor changes. These three analytes were chosen because they are often required in clinical tests; for example, they represent about 40% of the whole demand for analyses at the Clinical Hospital of UNICAMP.Experimental Fig. 1 shows a simplified diagram of the analyser. The instrument was controlled by a 386SX IBM compatible microcomputer (25 MHz, 2 Mbytes RAM) through an IC 8255 parallel port22 that communicates with an interface which uses an address decoder similar to one described elsewhere.23 The interface controls the peristaltic pump on–off state and the sampling valve. Sample localisation and sampling valve position were followed by employing opto-switches that generate TTL signals which can be accessed by the microcomputer as described previously.24 A laboratory-made diode array spectrophotometer21 was used as a detector and was controlled by the microcomputer through an RS232 serial interface. Automatic Sampling Valve The sampling valve was constructed by employing a proportional injector25 whose sliding central bar was connected to a Analyst, October 1997, Vol. 122 (1039–1044) 1039stepper motor (24 V, 1 A, 7.5° per step).The sampling and injection positions of the valve were determined by using two PCST 2103 opto-switches (optos S and I in Fig. 1). The microcomputer sends a TTL pulse that enables an electronic circuit to switch the valve from sampling to injection position. A third opto-switch (opto R in Fig. 1) was used to generate another TTL pulse that is necessary to return the valve to its initial position. This last pulse is produced when the first bubble of the monosegment passes through opto-switch R, which was placed at a distance from the sampling valve equivalent to the size of the monosegment.Sampling valve commutation was found to occur in about 400 ms. Automatic Reagent Addition Module This device was constructed by inserting one (or more) hypodermic syringe needles in a PTFE tube (1.6 mm id), which was fixed with polyester resin, as shown in Fig. 2(a). This needle was placed between two opto-switches and each needle was connected to a three-way solenoid valve (12 V, 80 mA), as shown in Fig. 2(b). The opto-switches can locate the air bubbles and, therefore, the monosegment containing the sample for reagent addition. The first opto-switch was placed before the needle and the second one air bubble away from the needle. When the first air bubble reaches the second sensor, the solenoid valve is turned on and reagent is delivered into the sample monosegment. The valve is turned off when the second air bubble reaches the first opto-switch.The electronic circuit necessary to perform this operation is shown in Fig. 3 The analyser was constructed with two modules that can add up to three and up to two reagents, respectively (modules 1 and 2 in Fig. 1). The addition of the reagent can be selected and enabled/disabled by software. Detection System The detection system was constructed with a flow cell,toswitch and the diode array spectrophotometer.21 An opto-switch (D) was placed after the flow cell as shown in Fig. 1, so that the central zone of the sample monosegment is inside the flow cell when its first bubble reaches the switch.At this moment, a logic signal is generated, triggering the microcomputer to perform the absorbance measurements. Software for the Analyser The software for instrument control, data acquisition and treatment was written in Microsoft QuickBasic 4.5. A simplified flow chart of the computer program is shown in Fig. 4. First, it allows start-up of the instrument, by filling the reaction coil with the carrier fluid and the tubing of the addition modules with reagents. The software requests from the operator the sample identification, the number of standards (3–7) and their respective concentrations (to construct calibration curves), the reagents that will be delivered (up to three in the first module and up to two in the second) and the wavelength at which the absorbance will be measured.The spectrophotometer is controlled as described elsewhere21 and the intensity signals for three, five or nine diodes (covering about a 1.2, 1.9 and 3.5 nm wavelength range centred around the selected wavelength) are transferred to the microcomputer to obtain averaged absorption signals. Before starting analysis, the microcomputer requests a reference spectral data set to perform absorbance calculations. The software asks for solutions (standards or samples) necessary to perform the determination. Data are processed in real time, results (as a report, showing the calibration curve and concentration of the samples) are shown on the microcomputer video and stored into a file named by the operator.A hard copy of the report can be obtained, if desired. Reagents and Solutions Analytical-reagent grade reagents and de-ionized water were used to prepare all solutions. Chromium(vi) working standard solutions from 0.200 to 1.400 mg l21 were prepared by dilution of a 1000 mg l21 CrVI stock standard solution.A 0.25% m/v diphenylcarbazide (DPC) solution was prepared in 25% v/v acetic acid and 2.0 mol l21 sulfuric acid solution was prepared by dilution of the concentrated acid. A 0.01 mol l21 PIPES buffer solution (pH 7.2) was prepared by 1 + 4 v/v dilution of the Merck (Darmstadt, Germany) solution (catalogue No. 14144). Merck reactive No. 14143 (GOD–PAP method) was diluted 1 + 40 with Merck solution No. 14144.b-d-Glucose standard solutions were prepared in the range 0.50–10.0 mg dl21 in 0.01 mol l21 PIPES buffer solution. Creatinine standard solutions were prepared from 0.10 to 1.20 mg dl21 in 0.1 mol l21 hydrochloric acid and 4.0 mol l21 sodium hydroxide and 5.5 3 1022 mol l21 picric acid solutions were prepared with de-ionized water. Urea standard solutions were prepared from 0.50 to 5.00 mg dl21. A 0.10 mol l21 phosphate buffer (pH 7.2) was prepared in 0.9% sodium chloride and 0.001% v/v Brij 30 solution.A 44 kU l21 urease solution was prepared in water. Fig. 1 Schematic diagram of the automated flow analyser. For details, see text. 1040 Analyst, October 1997, Vol. 122Other solutions were 6.0% phenol plus 1.0% sodium nitroprusside and 1.0% sodium hypochlorite in 4.0 mol l21 sodium hydroxide. Procedures Evaluation of the analyser A manifold similar to that shown in Fig. 1 was employed, but with reactor 1 removed and a PTFE tubing coil of 1.5 m length and 1.6 mm id used as reactor 2.De-ionized water was used as the carrier at a flow rate of 2.0 ml min21. The first and the second air bubbles, limiting the monosegment, had volumes of 90 and 50 ml, respectively. These flow parameters allowed a residence time of 90 s for the samples, after the second module of reagent addition. The sample monosegment volume was 300 ml, except where specified otherwise. Determination of glucose, creatinine and urea The manifold shown in Fig. 1 was employed. Two glass reactors of 1.6 mm id were used; the carrier flow rate and bubble air volumes were kept as in the evaluation of the analyser, allowing residence times for samples of 2.0 and 6.5 min in the first and second reactors, respectively. The sample loop had a volume of 220 ml. For glucose determination, blood plasma samples were manually diluted 1 + 45 v/v with 0.01 mol l21 PIPES buffer solution. This buffer solution was also employed as the carrier and the reagent was delivered through the second reagent addition module at a flow rate of 0.16 ml min21.Absorbance measurements were carried out at 510 nm. For creatinine determination, blood serum was deproteinized with 5% trichloroacetic acid solution (1 + 1 v/v). The supernatant was manually diluted 1 + 1 v/v with deionized water. Sodium hydroxide (0.16 ml min21) and picric acid (0.28 ml min21) were added to the sample through the first and Fig. 2 (a) Reagent addition module and (b) addition point P (stainlesssteel needle).Fig. 3 Electronic circuit of the reagent addition module: (a) circuit to turn on and turn off the solenoid valve (enabled by the microcomputer) and (b) circuit to extract logic signal from the opto-switch. Analyst, October 1997, Vol. 122 1041second reagent addition modules, respectively. De-ionized water was used as the carrier. Absorbance measurements were made at 500 nm. For urea determination, blood serum samples were deproteinized as in the determination of creatinine.The supernatant was diluted 1 + 45 v/v with phosphate buffer solution, which was also used as the carrier. Urease solution (0.16 ml min21) was added through the first module; phenol–sodium nitroprusside (0.16 ml min21) and sodium hypochlorite–sodium hydroxide (0.16 ml min21) solutions were both mixed with the sample through the second reagent addition module. Absorbance was measured at 620 nm. Results and Discussion Evaluation of the Analyser MSFA analysers usually work with several samples being processed sequentially in the reaction coil, in order to allow for long residence times without decreasing the sample throughput.Therefore, one sample could be passing through the detector (and absorbance measurement must be performed) while some other tasks, such as switching of the sampling valve or addition of reagent, have also to be carried out. Thus, the hardware and software of the MSFA analyser was developed in order not to miss an absorbance measurement. The switching of the sampling valve from injection position to sampling position and the addition of reagents to the sample are performed automatically under hardware control enabled by the computer.The microcomputer has the task of sending a logic signal to perform sample injection and this action can be delayed for a few seconds if an absorbance measurement is being obtained for a sample present in the flow cell.Fig. 4 shows the flow diagram of the routine that allows the control of these tasks. In addition to releasing the microcomputer, the opto-switch, used to trigger the return of the sampling valve to its injection position, makes this event independent of the flow rate, which is an advantage when a method is being developed. A disadvantage of an MSFA analyser, in general, is related to the admission and/or formation of small air bubbles in the reactor, because the air bubbles of the monosegment are used to drive reagent delivery through the addition modules and to control sample measurement. This problem was minimised by adjusting the opto-switch sensitivities with RC components of the circuit shown in Fig. 3(b). For a carrier flow rate of 2.0 ml min21, the opto-switch sensitivities were adjusted in order not to generate a logic level transition for air bubbles lower than 20 ml. The CrVI–DPC reaction was used to evaluate the analyser performance, by adding 2.0 mol l21 sulfuric acid and 0.25% DPC at flow rates of 0.07 and 0.15 ml min21, respectively, consecutively to 300 ml of sample through modules 1 and 2.The concentrations of the reagents and the flow rate ratios between reagents and carrier (sample) were determined according to the standard recommended method.26 Absorbance measurements were performed at 540 nm, with a bandwidth of 3.5 nm (averaging signal intensities of nine diodes).Standard solutions of CrVI from 0.2 to 1.4 mg l21 were injected in tripiclate at a sampling frequency of 60 h21. Absorbance values were obtained in the range 0.0611– 0.4069, with an average absolute standard deviation of 0.0017.The precision obtained in these absorbance measurements agrees with those obtained previously in the absence of reactions,21 indicating that the analyser shows a very good performance. The injection of a blank solution (water, A = 0.0016 ± 0.0016) after a 1.4 mg l21 CrVI standard solution (A = 0.4069 ± 0.0021) showed that there is no significant carry-over between samples.The calibration curve obtained with these data is A = (0.0078 ± 0.0028) + (0.2880 ± 0.0031)C (r = 0.9997), where A is solution absorbance and C is the CrVI concentration, in mg l21. Considering that the analyser has a flow cell with only a 5 mm long pathlength, these results also agree with those obtained previously, with respect to sensitivity and linearity.11 The injection of 300 ml of a CrVI sample solution, as described, resulted in a consumption of 7.5 ml of sulfuric acid and 12 ml of the DPC solution.When 100 ml of sample were injected, these consumptions were lowered to 2.5 and 4.0 ml, respectively. Table 1 shows some parameters obtained under different conditions of analysis; the sensitivity is almost independent of the sample volume whereas the precision (determined by the standard deviation of ten replicates of a 1.00 mg l21 CrVI solution) of the measurements decreases when the sample volume is decreased and at higher sampling frequencies.As can be seen, this automatic monosegmented flow analyser shows a good performance, allowing a sensitivity that is virtually independent of the sample volume and consuming less reagent than other instruments because the reagents are not delivered continuously but only into the monosegment. Fur- Fig. 4 Flow diagram of the software developed to control sample processing in the analyser (event 1 means solution in flow cell). 1042 Analyst, October 1997, Vol. 122thermore, the reagent addition module makes possible the use of methods employing sequential reactions in MSFA, without disturbing the monosegment pattern. Determination of Glucose, Creatine and Urea Standard, manually processed, methods were adapted to the analyser in order to allow the determination of glucose, creatinine and urea with minor changes to the manifold. Therefore, the experimental parameters employed (mainly sample dilution and residence time) were not optimised for each analyte, but were aimed to suit the overall performance of the analyser.For example, although the glucose reagent was added through the second module, the first reactor (not necessary for this determination) was not removed from the manifold. However, this procedure did not alter the frequency of sample introduction but just increased both the delay necessary for the first sample to reach the detector and the sample dispersion to a minor extent.It is important to emphasise that in the urea determination, urease is added through the first module and, after this reaction has been processed, reagents for ammonium determination are delivered through the second reagent addition module. This operation is the main feature of the proposed analyser, i.e., it became possible to perform sequential reactions without disturbing the monosegment and with reagents being added only to the sample.Glass reactors were used in the manifold because the sample monosegment was not stable in PTFE reactors, mainly in the determination of creatinine. This probably occurs because blood proteins have a stronger affinity for PTFE. On the other hand, glass is wet by aqueous solutions and, therefore, when this material is employed, an increase in cross-contamination and a decrease in precision are observed. A manifold made with PTFE reactors allows insignificant cross-contamination and an RSD of 0.7% for six injections of a 1.00 mg dl21 creatinine aqueous reference solution.A signal that is 2.5% of that obtained for any creatinine reference solution in the range 0.1–1.2 mg dl21 was observed for the first blank introduced after the reference solution in a glass reactor. This characterises a cross-contamination that should be considered if the introduction of samples and/or reference solutions is not replicated. However, crosscontamination effects were minimised by injecting samples in triplicate and averaging the three signals obtained, because it only affects the first signal.Furthermore, as in real calibration and sample determinations the change in concentration is not so drastic, the cross-contamination is minimised. This is particularly true for the samples. Table 2 shows the figures of merit for the methodologies adapted to the developed analyser. The results obtained with the analyser (MSFA) were plotted against those obtained by the Clinical Hospital (CH) of UNICAMP and the results for glucose, creatinine and urea were MSFA = 4.72 + 0.895CH (r = 0.997, n = 25), MSFA = 0.0785 + 1.155CH (r = 0.982, n = 29) and MSFA = 13.5 + 0.956CH (r = 0.996, n = 17), respectively.At the Clinical Hospital, the determinations were performed by automatic discrete analysers, i.e., Merck–Vitalab Selectra (glucose) and Roche Cobas–Mira (creatinine and urea). Glucose and creatinine were also determined by the GOD–PAP and Jaff�e methods, respectively.However, a kinetic procedure was employed in both determinations and the results were obtained from the difference between two absorbance measurements, made in a pre-defined time interval. Urea determination was based on the reaction of ammonium ion (produced by urease catalysed urea hydrolysis) with 2-oxoglutarate and NADH, in the presence of glutarate dehydrogenase, and the decrease in absorbance, due to the NADH consumed, was measured at 340 nm.Although a good correlation coefficient was always observed (r > 0.98) for the three analytes, the results do not agree completely and there are both constant and proportional systematic differences. The origin of these differences can be attributed to the different methodologies and/or instruments employed, as pointed out by Koch and Peters.27 For example, in the creatinine determination some interferences (e.g., from proteins) can be eliminated by employing a kinetic method, as in the procedure used in the Clinical Hospital.Differences such as those found in this work have often been reported for clinical methodologies28–35 and seem to be tolerated from the clinical point of view. According to this point of view, these differences are not a serious drawback to the use of the proposed methodologies because the range of reference values for blood analyte concentrations is a function of the methodology and/or instrument employed for the determination. 27 Conclusions The automated analyser allows determinations to be performed with low reagent consumption. Furthermore, with the development of the reagent addition modules it is possible to adapt methods based on sequential reactions to MSFA without disturbing the monosegment, because the reagents are delivered only into the sample zone. Direct adaptation of the manual procedures can be made by setting appropriate flow rates for carrier fluid and reagents, which maintain the proportion of the manual procedures.The sensitivity of the analyser shows little dependence on the carrier flow rate if the reagent flow rate is kept proportional and the reaction reaches completion. Also, the sensitivity is almost independent of the sample volume, owing to low monosegment dispersion and proportional addition of reagents. Finally, the same manifold can be used to determine different analytes, although some straightforward dilution Table 1 Dependence of the sensitivity and precision of the monosegmented flow analyser on sample volume and sample frequency Calibration curve Vsample/ Frequency/ Linear A ± s* ml h21 coefficient Slope r (n = 10) 300 60 0.0027 0.2459 0.9996 0.2519 ± 0.0016 100 60 0.0020 0.2376 0.9999 0.2337 ± 0.0031 100 120 0.0029 0.2306 0.9996 0.2341 ± 0.0051 * Absorbance of 1.00 mg l21 CrVI reference solution ± standard deviation.Table 2 Figures of merit for glucose, creatinine and urea determination with the monosegmented flow analyser Averaged Upper limit of precision linear range/ Analyte (RSD) (%) mg dl21* Glucose 1.8 400 Creatinine 3.6 2.00 Urea 3.7 200 * Before sample dilution, as described under Experimental. Analyst, October 1997, Vol. 122 1043operations need to be performed before sample introduction into the system. The authors are grateful to Dr. C. H. Collins for manuscript revision, to Dr. L. Parentoni for blood samples and to M. S. Toma for construction of the flow cell and the mechanical parts of the sampling valve.References 1 Valc�arcel, M., and Luque de Castro, M. D., Automatic Methods of Analysis (Techiniques and Instrumentation in Analytical Chemistry, Vol. 9), Elsevier, Amsterdam, 1988. 2 Valc�arcel, M., and Luque de Castro, M. D., Analysis por Inyeccion en Flujo, Imprenta San Pablo, Cordoba, 1984. 3 van der Linden, W. E., Pure Appl. Chem., 1994, 66, 2493. 4 Pasquini, C., and de Faria, L. C., J. Autom. Chem., 1991, 13, 143. 5 Reis, B. F., Gin�e, M. F., Krug, F. J., and Bergamin Fo., H., J. Anal. At. Spectrom., 1992, 7, 865. 6 Clark, G. D., Christian, G. D., Ruzicka, J., Anderson, G. F., and van Zee, J. A., Anal. Instrum., 1989, 18, 1. 7 Malcome-Lawes, D. J., and Pasquini, C., J. Autom. Chem., 1988, 10, 192. 8 Prodromiris, M. I., Tsibiris, A. B., and Karayannis, M. I., J. Autom. Chem., 1995, 17, 187. 9 Cosano, J. S., Luque de Castro, M. D., and Valc�arcel, M., J. Autom. Chem., 1993, 15, 147. 10 Malcome-Lawes, D. J., Wong, K. H., and Smith, B. V., J. Autom. Chem., 1992, 14, 73. 11 Pasquini, C., and de Oliveira, W. A., Anal. Chem., 1985, 57, 2575. 12 de Andrade, J. C., Ferreira, M., Baccan, N., and Bataglia, O. C., Analyst, 1988, 113, 289. 13 de Andrade, J. C., Bruns, R. E., and Eiras, S. P., Analyst, 1993, 118, 213. 14 de Andrade, J. C., Eiras, S. P., and Bruns, R. E., Anal. Chim. Acta, 1991, 255, 149. 15 Eiras, S. P., de Andrade, J. C., and Bruns, R. E., J. Braz. Chem.Soc., 1993, 4, 128. 16 Tian, L. C., and Wu, S. M., Anal. Chim. Acta, 1992, 261, 301. 17 Reis, B. F., Arruda, M. A. Z., Zagatto, E. A. G., and Ferreira, J. R., Anal. Chim. Acta, 1988, 206, 253. 18 Reis, B. F., Zagatto, E. A. G., Martelli, P. B., and Brienza, S. M. B., Analyst, 1993, 118, 719. 19 Pasquini, C., Anal. Chem., 1986, 58, 2346. 20 Facchin, I., and Pasquini, C., Anal. Chim. Acta, 1995, 308, 231. 21 Raimundo, I. M., Jr., and Pasquini, C., J. Autom. Chem., 1993, 15, 227. 22 Malcome-Lawes, D., Lab. Microcomput., 1987, 6, 16. 23 Souza, P. S., and Pasquini, C., Lab. Microcomput., 1990, 9, 77. 24 Raimundo, I. M., Jr., and Pasquini, C., Lab. Microcomput., 1994, 13, 55. 25 Bergamin Fo., H., Medeiros, J. X., Reis, B. F., and Zagatto, E. A. G., Anal. Chim. Acta, 1978, 101, 9. 26 APH, AWWA and WPCF, Standard Methods for the Examination of Water and Wastewater, American Public Health Association, Washington, DC, 18th edn., 1992. 27 Koch, D. O., and Peters, T., Jr., in Tietz Textbook of Clinical Chemistry, ed.Burts, C. A., and Ashwood, E. R., Saunders, Philadelphia, 2nd edn., 1994, pp. 508–525. 28 Narebor, E. M., J. Autom. Chem., 1990, 12, 189. 29 Tabata, M., Murachi, T., Endo, J., and Totani, M., J. Chromatogr., 1992, 597, 435. 30 Yerian, T. D., Christian, G. D., and R°uöziöcka, J., Analyst, 1986, 111, 865. 31 Petersson, B. A., Andersen, H. B., and Hansen, E. H., Anal. Lett., 1987, 20, 1977. 32 Narinesingh, D., Pope, A., and Ngo, T.T., Talanta, 1992, 39, 1233. 33 Lee, W., Roberts, S. M., and Labbe, R. F., Clin. Chem., 1997, 43, 154. 34 Thakkar, H., Newman, D. J., Holownia, P., Davey, C. L., Wang, C., Lloyd, J., Craig, A. R., and Price, C. P., Clin. Chem., 1997, 43, 109. 35 Stone, M. J., Chowdrey, P. E., Miall, P., and Price, C. P., Clin. Chem., 1996, 42, 1474. Paper 7/02750H Received April 22, 1997 Accepted July 11, 1997 1044 Analyst, October 1997, Vol. 122 Automated Monosegmented Flow Analyser.Determination of Glucose, Creatinine and Urea Ivo M. Raimundo, Jr.*, and Celio Pasquini Instituto de Qu�ýmica, UNICAMP, CP 6154, CEP 13083-970, Campinas, S�ao Paulo, Brazil An automated monosegmented flow analyser containing a sampling valve and a reagent addition module and employing a laboratory-made photodiode array spectrophotometer as detection system is described. The instrument was controlled by a 386SX IBM compatible microcomputer through an IC8255 parallel port that communicates with the interface which controls the sampling valve and reagent addition module.The spectrophotometer was controlled by the same microcomputer through an RS232 serial standard interface. The software for the instrument was written in QuickBasic 4.5. Opto-switches were employed to detect the air bubbles limiting the monosegment, allowing precise sample localisation for reagent addition and signal reading. The main characteristics of the analyser are low reagent consumption and high sensitivity which is independent of the sample volume. The instrument was designed to determine glucose, creatinine or urea in blood plasma and serum without hardware modification.The results were compared against those obtained by the Clinical Hospital of UNICAMP using commercial analysers. Correlation coefficients among the methods were 0.997, 0.982 and 0.996 for glucose, creatinine and urea, respectively. Keywords: Monosegmented flow analysis; automated flow analyser; glucose; creatinine; urea Automatic analysers are being widely used nowadays owing to the demand for high turnover determinations, mainly in the clinical and environmental fields.These analysers give precise and accurate results with low consumption of both reagents and sample, allowing high laboratory productivity. Analysers can be divided into three categories according to the sample processing method: robotic, batch (or discrete) and continuous analysers.1,2 Robot based and batch automatic analysers need high precision mechanical parts and, therefore, are very difficult to maintain by small routine laboratories.On the other hand, continuous flow systems, such as flow injection (FI) and continuous flow analysis (CFA),3 are simple and an automatic flow instrument can be easily implemented. Automatic flow analysers employing the FI technique have recently been described.4–10 The construction of this kind of analyser is relatively simple because samples are individually processed by these systems, that is, the software to control the instrument has to perform a number of sequential operations without any parallel processing because usually only one sample is processed in the manifold each time.Monosegmented flow analysis (MSFA)3 was proposed by Pasquini and de Oliveira11 in 1985. In the MSFA system, the sample (previously mixed with reagents) is introduced into the analyser between two air bubbles.These bubbles minimise sample dispersion, allowing long residence times. The sampling frequency can be maintained as high as in FI systems with several samples simultaneously present in the reaction coil; therefore, there is no direct relationship between sample injection and sample detection. Two main approaches have been taken to add and mix reagents for the samples. The first, which was proposed in the original paper11 and has since been frequently used,12–15 employs differential pumping to mix reagents with the sample, before filling the sample loop.The second uses continuous addition of reagents through a confluence point after injection.16–18 The first procedure does not allow methods based on sequential reactions to be adapted to MSFA systems such as the determination of urea by employing urease enzymatic hydrolysis followed by the Berthelot reaction for ammonium determination. The second, in addition to the high reagent consumption, destroys the integrity of the monosegment and is only feasible when the air bubbles are either removed before sample detection16 or do not cause spurious signals in the detector, as when AAS is employed.17,18 Air bubble removal has often been employed before the sample reaches the detector.11–15,19 This operates spurious signals but increases sample dispersion.However, Facchin and Pasquini20 have recently described monosegmented flow systems which perform liquid–liquid extractions, showing that it is possible to carry out the determination without removal of air bubbles.This paper describes the construction of a microcomputer controlled automatic monosegmented flow analyser which has three main components: a sampling valve, a reagent addition module and a detection system with a photodiode array spectrophotometer.21 Opto-switches were employed to detect the air bubbles limiting the monosegment, allowing sample localisation for reagent addition and for detection.The analyser was applied to the determination of glucose, creatinine and urea in blood plasma and serum by employing the well established GOD–PAP, Jaff�e reaction and urease–Berthelot methods, respectively. The manifold was designed to allow the determination of each of these analytes with only minor changes. These three analytes were chosen because they are often required in clinical tests; for example, they represent about 40% of the whole demand for analyses at the Clinical Hospital of UNICAMP.Experimental Fig. 1 shows a simplified diagram of the analyser. The instrument was controlled by a 386SX IBM compatible microcomputer (25 MHz, 2 Mbytes RAM) through an IC 8255 parallel port22 that communicates with an interface which uses an address decoder similar to one described elsewhere.23 The interface controls the peristaltic pump on–off state and the sampling valve. Sample localisation and sampling valve position were followed by employing opto-switches that generate TTL signals which can be accessed by the microcomputer as described previously.24 A laboratory-made diode array spectrophotometer21 was used as a detector and was controlled by the microcomputer through an RS232 serial interface.Automatic Sampling Valve The sampling valve was constructed by employing a proportional injector25 whose sliding central bar was connected to a Analyst, October 1997, Vol. 122 (1039–1044) 1039stepper motor (24 V, 1 A, 7.5° per step).The sampling and injection positions of the valve were determined by using two PCST 2103 opto-switches (optos S and I in Fig. 1). The microcomputer sends a TTL pulse that enables an electronic circuit to switch the valve from sampling to injection position. A third opto-switch (opto R in Fig. 1) was used to generate another TTL pulse that is necessary to return the valve to its initial position. This last pulse is produced when the first bubble of the monosegment passes through opto-switch R, which was placed at a distance from the sampling valve equivalent to the size of the monosegment.Sampling valve commutation was found to occur in about 400 ms. Automatic Reagent Addition Module This device was constructed by inserting one (or more) hypodermic syringe needles in a PTFE tube (1.6 mm id), which was fixed with polyester resin, as shown in Fig. 2(a). This needle was placed between two opto-switches and each needle was connected to a three-way solenoid valve (12 V, 80 mA), as shown in Fig. 2(b). The opto-switches can locate the air bubbles and, therefore, the monosegment containing the sample for reagent addition. The first opto-switch was placed before the needle and the second one air bubble away from the needle. When the first air bubble reaches the second sensor, the solenoid valve is turned on and reagent is delivered into the sample monosegment. The valve is turned off when the second air bubble reaches the first opto-switch. The electronic circuit necessary to perform this operation is shown in Fig. 3 The analyser was constructed with two modules that can add up to three and up to two reagents, respectively (modules 1 and 2 in Fig. 1). The addition of the reagent can be selected and enabled/disabled by software. Detection System The detection system was constructed with a flow cell, an optoswitch and the diode array spectrophotometer.21 An opto-switch (D) was placed after the flow cell as shown in Fig. 1, so that the central zone of the sample monosegment is inside the flow cell when its first bubble reaches the switch. At this moment, a logic signal is generated, triggering the microcomputer to perform the absorbance measurements. Software for the Analyser The software for instrument control, data acquisition and treatment was written in Microsoft QuickBasic 4.5. A simplified flow chart of the computer program is shown in Fig. 4.First, it allows start-up of the instrument, by filling the reaction coil with the carrier fluid and the tubing of the addition modules with reagents. The software requests from the operator the sample identification, the number of standards (3–7) and their respective concentrations (to construct calibration curves), the reagents that will be delivered (up to three in the first module and up to two in the second) and the wavelength at which the absorbance will be measured. The spectrophotometer is controlled as described elsewhere21 and the intensity signals for three, five or nine diodes (covering about a 1.2, 1.9 and 3.5 nm wavelength range centred around the selected wavelength) are transferred to the microcomputer to obtain averaged absorption signals.Before starting analysis, the microcomputer requests a reference spectral data set to perform absorbance calculations. The software asks for solutions (standards or samples) necessary to perform the determination.Data are processed in real time, results (as a report, showing the calibration curve and concentration of the samples) are shown on the microcomputer video and stored into a file named by the operator. A hard copy of the report can be obtained, if desired. Reagents and Solutions Analytical-reagent grade reagents and de-ionized water were used to prepare all solutions. Chromium(vi) working standard solutions from 0.200 to 1.400 mg l21 were prepared by dilution of a 1000 mg l21 CrVI stock standard solution.A 0.25% m/v diphenylcarbazide (DPC) solution was prepared in 25% v/v acetic acid and 2.0 mol l21 sulfuric acid solution was prepared by dilution of the concentrated acid. A 0.01 mol l21 PIPES buffer solution (pH 7.2) was prepared by 1 + 4 v/v dilution of the Merck (Darmstadt, Germany) solution (catalogue No. 14144). Merck reactive No. 14143 (GOD–PAP method) was diluted 1 + 40 with Merck solution No. 14144. b-d-Glucose standard solutions were prepared in the range 0.50–10.0 mg dl21 in 0.01 mol l21 PIPES buffer solution.Creatinine standard solutions were prepared from 0.10 to 1.20 mg dl21 in 0.1 mol l21 hydrochloric acid and 4.0 mol l21 sodium hydroxide and 5.5 3 1022 mol l21 picric acid solutions were prepared with de-ionized water. Urea standard solutions were prepared from 0.50 to 5.00 mg dl21. A 0.10 mol l21 phosphate buffer (pH 7.2) was prepared in 0.9% sodium chloride and 0.001% v/v Brij 30 solution.A 44 kU l21 urease solution was prepared in water. Fig. 1 Schematic diagram of the automated flow analyser. For details, see text. 1040 Analyst, October 1997, Vol. 122Other solutions were 6.0% phenol plus 1.0% sodium nitroprusside and 1.0% sodium hypochlorite in 4.0 mol l21 sodium hydroxide. Procedures Evaluation of the analyser A manifold similar to that shown in Fig. 1 was employed, but with reactor 1 removed and a PTFE tubing coil of 1.5 m length and 1.6 mm id used as reactor 2.De-ionized water was used as the carrier at a flow rate of 2.0 ml min21. The first and the second air bubbles, limiting the monosegment, had volumes of 90 and 50 ml, respectively. These flow parameters allowed a residence time of 90 s for the samples, after the second module of reagent addition. The sample monosegment volume was 300 ml, except where specified otherwise. Determination of glucose, creatinine and urea The manifold shown in Fig. 1 was employed. Two glass reactors of 1.6 mm id were used; the carrier flow rate and bubble air volumes were kept as in the evaluation of the analyser, allowing residence times for samples of 2.0 and 6.5 min in the first and second reactors, respectively. The sample loop had a volume of 220 ml.For glucose deteination, blood plasma samples were manually diluted 1 + 45 v/v with 0.01 mol l21 PIPES buffer solution. This buffer solution was also employed as the carrier and the reagent was delivered through the second reagent addition module at a flow rate of 0.16 ml min21.Absorbance measurements were carried out at 510 nm. For creatinine determination, blood serum was deproteinized with 5% trichloroacetic acid solution (1 + 1 v/v). The supernatant was manually diluted 1 + 1 v/v with deionized water. Sodium hydroxide (0.16 ml min21) and picric acid (0.28 ml min21) were added to the sample through the first and Fig. 2 (a) Reagent addition module and (b) addition point P (stainlesssteel needle).Fig. 3 Electronic circuit of the reagent addition module: (a) circuit to turn on and turn off the solenoid valve (enabled by the microcomputer) and (b) circuit to extract logic signal from the opto-switch. Analyst, October 1997, Vol. 122 1041second reagent addition modules, respectively. De-ionized water was used as the carrier. Absorbance measurements were made at 500 nm. For urea determination, blood serum samples were deproteinized as in the determination of creatinine.The supernatant was diluted 1 + 45 v/v with phosphate buffer solution, which was also used as the carrier. Urease solution (0.16 ml min21) was added through the first module; phenol–sodium nitroprusside (0.16 ml min21) and sodium hypochlorite–sodium hydroxide (0.16 ml min21) solutions were both mixed with the sample through the second reagent addition module. Absorbance was measured at 620 nm. Results and Discussion Evaluation of the Analyser MSFA analysers usually work with several samples being processed sequentially in the reaction coil, in order to allow for long residence times without decreasing the sample throughput. Therefore, one sample could be passing through the detector (and absorbance measurement must be performed) while some other tasks, such as switching of the sampling valve or addition of reagent, have also to be carried out.Thus, the hardware and software of the MSFA analyser was developed in order not to miss an absorbance measurement.The switching of the sampling valve from injection position to sampling position and the addition of reagents to the sample are performed automatically under hardware control enabled by the computer. The microcomputer has the task of sending a logic signal to perform sample injection and this action can be delayed for a few seconds if an absorbance measurement is being obtained for a sample present in the flow cell. Fig. 4 shows the flow diagram of the routine that allows the control of these tasks.In addition to releasing the microcomputer, the opto-switch, used to trigger the return of the sampling valve to its injection position, makes this event independent of the flow rate, which is an advantage when a method is being developed. A disadvantage of an MSFA analyser, in general, is related to the admission and/or formation of small air bubbles in the reactor, because the air bubbles of the monosegment are used to drive reagent delivery through the addition modules and to control sample measurement.This problem was minimised by adjusting the opto-switch sensitivities with RC components of the circuit shown in Fig. 3(b). For a carrier flow rate of 2.0 ml min21, the opto-switch sensitivities were adjusted in order not to generate a logic level transition for air bubbles lower than 20 ml. The CrVI–DPC reaction was used to evaluate the analyser performance, by adding 2.0 mol l21 sulfuric acid and 0.25% DPC at flow rates of 0.07 and 0.15 ml min21, respectively, consecutively to 300 ml of sample through modules 1 and 2.The concentrations of the reagents and the flow rate ratios between reagents and carrier (sample) were determined according to the standard recommended method.26 Absorbance measurements were performed at 540 nm, with a bandwidth of 3.5 nm (averaging signal intensities of nine diodes).Standard solutions of CrVI from 0.2 to 1.4 mg l21 were injected in tripiclate at a sampling frequency of 60 h21.Absorbance values were obtained in the range 0.0611– 0.4069, with an average absolute standard deviation of 0.0017. The precision obtained in these absorbance measurements agrees with those obtained previously in the absence of reactions,21 indicating that the analyser shows a very good performance. The injection of a blank solution (water, A = 0.0016 ± 0.0016) after a 1.4 mg l21 CrVI standard solution (A = 0.4069 ± 0.0021) showed that there is no significant carry-over between samples.The calibration curve obtained with these data is A = (0.0078 ± 0.0028) + (0.2880 ± 0.0031)C (r = 0.9997), where A is solution absorbance and C is the CrVI concentration, in mg l21. Considering that the analyser has a flow cell with only a 5 mm long pathlength, these results also agree with those obtained previously, with respect to sensitivity and linearity.11 The injection of 300 ml of a CrVI sample solution, as described, resulted in a consumption of 7.5 ml of sulfuric acid and 12 ml of the DPC solution.When 100 ml of sample were injected, these consumptions were lowered to 2.5 and 4.0 ml, respectively. Table 1 shows some parameters obtained under different conditions of analysis; the sensitivity is almost independent of the sample volume whereas the precision (determined by the standard deviation of ten replicates of a 1.00 mg l21 CrVI solution) of the measurements decreases when the sample volume is decreased and at higher sampling frequencies.As can be seen, this automatic monosegmented flow analyser shows a good performance, allowing a sensitivity that is virtually independent of the sample volume and consuming less reagent than other instruments because the reagents are not delivered continuously but only into the monosegment. Fur- Fig. 4 Flow diagram of the software developed to control sample processing in the analyser (event 1 means solution in flow cell). 1042 Analyst, October 1997, Vol. 122thermore, the reagent addition module makes possible the use of methods employing sequential reactions in MSFA, without disturbing the monosegment pattern. Determination of Glucose, Creatine and Urea Standard, manually processed, methods were adapted to the analyser in order to allow the determination of glucose, creatinine and urea with minor changes to the manifold. Therefore, the experimental parameters employed (mainly sample dilution and residence time) were not optimised for each analyte, but were aimed to suit the overall performance of the analyser.For example, although the glucose reagent was added through the second module, the first reactor (not necessary for this determination) was not removed from the manifold. However, this procedure did not alter the frequency of sample introduction but just increased both the delay necessary for the first sample to reach the detector and the sample dispersion to a minor extent. It is important to emphasise that in the urea determination, urease is added through the first module and, after this reaction has been processed, reagents for ammonium determination are delivered through the second reagent addition module.This operation is the main feature of the proposed analyser, i.e., it became possible to perform sequential reactions without disturbing the monosegment and with reagents being added only to the sample.Glass reactors were used in the manifold because the sample monosegment was not stable in PTFE reactors, mainly in the determination of creatinine. This probably occurs because blood proteins have a stronger affinity for PTFE. On the other hand, glass is wet by aqueous solutions and, therefore, when this material is employed, an increase in cross-contamination and a decrease in precision are observed. A manifold made with PTFE reactors allows insignificant cross-contamination and an RSD of 0.7% for six injections of a 1.00 mg dl21 creatinine aqueous reference solution. A signal that is 2.5% of that obtained for any creatinine reference solution in the range 0.1–1.2 mg dl21 was observed for the first blank introduced after the reference solution in a glass reactor.This characterises a cross-contamination that should be considered if the introduction of samples and/or reference solutions is not replicated. However, crosscontamination effects were minimised by injecting samples in triplicate and averaging the three signals obtained, because it only affects the first signal.Furthermore, as in real calibration and sample determinations the change in concentration is not so drastic, the cross-contamination is minimised. This is particularly true for the samples. Table 2 shows the figures of merit for the methodologies adapted to the developed analyser. The results obtained with the analyser (MSFA) were plotted against those obtained by the Clinical Hospital (CH) of UNICAMP and the results for glucose, creatinine and urea were MSFA = 4.72 + 0.895CH (r = 0.997, n = 25), MSFA = 0.0785 + 1.155CH (r = 0.982, n = 29) and MSFA = 13.5 + 0.956CH (r = 0.996, n = 17), respectively.At the Clinical Hospital, the determinations were performed by automatic discrete analysers, i.e., Merck–Vitalab Selectra (glucose) and Roche Cobas–Mira (creatinine and urea). Glucose and creatinine were also determined by the GOD–PAP and Jaff�e methods, respectively.However, a kinetic procedure was employed in both determinations and the results were obtained from the difference between two absorbance measurements, made in a pre-defined time interval. Urea determination was based on the reaction of ammonium ion (produced by urease catalysed urea hydrolysis) with 2-oxoglutarate and NADH, in the presence of glutarate dehydrogenase, and the decrease in absorbance, due to the NADH consumed, was measured at 340 nm.Although a good correlation coefficient was always observed (r > 0.98) for the three analytes, the results do not agree completely and there are both constant and proportional systematic differences. The origin of these differences can be attributed to the different methodologies and/or instruments employed, as pointed out by Koch and Peters.27 For example, in the creatinine determination some interferences (e.g., from proteins) can be eliminated by employing a kinetic method, as in the procedure used in the Clinical Hospital.Differences such as those found in this work have often been reported for clinical methodologies28–35 and seem to be tolerated from the clinical point of view. According to this point of view, these differences are not a serious drawback to the use of the proposed methodologies because the range of reference values for blood analyte concentrations is a function of the methodology and/or instrument employed for the determination. 27 Conclusions The automated analyser allows determinations to be performed with low reagent consumption.Furthermore, with the development of the reagent addition modules it is possible to adapt methods based on sequential reactions to MSFA without disturbing the monosegment, because the reagents are delivered only into the sample zone. Direct adaptation of the manual procedures can be made by setting appropriate flow rates for carrier fluid and reagents, which maintain the proportion of the manual procedures.The sensitivity of the analyser shows little dependence on the carrier flow rate if the reagent flow rate is kept proportional and the reaction reaches completion. Also, the sensitivity is almost independent of the sample volume, owing to low monosegment dispersion and proportional addition of reagents. Finally, the same manifold can be used to determine different analytes, although some straightforward dilution Table 1 Dependence of the sensitivity and precision of the monosegmented flow analyser on sample volume and sample frequency Calibration curve Vsample/ Frequency/ Linear A ± s* ml h21 coefficient Slope r (n = 10) 300 60 0.0027 0.2459 0.9996 0.2519 ± 0.0016 100 60 0.0020 0.2376 0.9999 0.2337 ± 0.0031 100 120 0.0029 0.2306 0.9996 0.2341 ± 0.0051 * Absorbance of a 1.00 mg l21 CrVI reference solution ± standard deviation.Table 2 Figures of merit for glucose, creatinine and urea determination with the monosegmented flow analyser Averaged Upper limit of precision linear range/ Analyte (RSD) (%) mg dl21* Glucose 1.8 400 Creatinine 3.6 2.00 Urea 3.7 200 * Before sample dilution, as described under Experimental.Analyst, October 1997, Vol. 122 1043operations need to be performed before sample introduction into the system. The authors are grateful to Dr. C. H. Collins for manuscript revision, to Dr. L. Parentoni for blood samples and to M. S. Toma for construction of the flow cell and the mechanical parts of the sampling valve.References 1 Valc�arcel, M., and Luque de Castro, M. D., Automatic Methods of Analysis (Techiniques and Instrumentation in Analytical Chemistry, Vol. 9), Elsevier, Amsterdam, 1988. 2 Valc�arcel, M., and Luque de Castro, M. D., Analysis por Inyeccion en Flujo, Imprenta San Pablo, Cordoba, 1984. 3 van der Linden, W. E., Pure Appl. Chem., 1994, 66, 2493. 4 Pasquini, C., and de Faria, L. C., J. Autom. Chem., 1991, 13, 143. 5 Reis, B. F., Gin�e, M. F., Krug, F. J., and Bergamin Fo., H., J. Anal. At. Spectrom., 1992, 7, 865. 6 Clark, G. D., Christian, G. D., Ruzicka, J., Anderson, G. F., and van Zee, J. A., Anal. Instrum., 1989, 18, 1. 7 Malcome-Lawes, D. J., and Pasquini, C., J. Autom. Chem., 1988, 10, 192. 8 Prodromiris, M. I., Tsibiris, A. B., and Karayannis, M. I., J. Autom. Chem., 1995, 17, 187. 9 Cosano, J. S., Luque de Castro, M. D., and Valc�arcel, M., J. Autom. Chem., 1993, 15, 147. 10 Malcome-Lawes, D. J., Wong, K. H., and Smith, B. V., J. Autom. Chem., 1992, 14, 73. 11 Pasquini, C., and de Oliveira, W. A., Anal. Chem., 1985, 57, 2575. 12 de Andrade, J. C., Ferreira, M., Baccan, N., and Bataglia, O. C., Analyst, 1988, 113, 289. 13 de Andrade, J. C., Bruns, R. E., and Eiras, S. P., Analyst, 1993, 118, 213. 14 de Andrade, J. C., Eiras, S. P., and Bruns, R. E., Anal. Chim. Acta, 1991, 255, 149. 15 Eiras, S. P., de Andrade, J. C., and Bruns, R. E., J. Braz. Chem. Soc., 1993, 4, 128. 16 Tian, L. C., and Wu, S. M., Anal. Chim. Acta, 1992, 261, 301. 17 Reis, B. F., Arruda, M. A. Z., Zagatto, E. A. G., and Ferreira, J. R., Anal. Chim. Acta, 1988, 206, 253. 18 Reis, B. F., Zagatto, E. A. G., Martelli, P. B., and Brienza, S. M. B., Analyst, 1993, 118, 719. 19 Pasquini, C., Anal. Chem., 1986, 58, 2346. 20 Facchin, I., and Pasquini, C., Anal. Chim. Acta, 1995, 308, 231. 21 Raimundo, I. M., Jr., and Pasquini, C., J. Autom. Chem., 1993, 15, 227. 22 Malcome-Lawes, D., Lab. Microcomput., 1987, 6, 16. 23 Souza, P. S., and Pasquini, C., Lab. Microcomput., 1990, 9, 77. 24 Raimundo, I. M., Jr., and Pasquini, C., Lab. Microcomput., 1994, 13, 55. 25 Bergamin Fo., H., Medeiros, J. X., Reis, B. F., and Zagatto, E. A. G., Anal. Chim. Acta, 1978, 101, 9. 26 APH, AWWA and WPCF, Standard Methods for the Examination of Water and Wastewater, American Public Health Association, Washington, DC, 18th edn., 1992. 27 Koch, D. O., and Peters, T., Jr., in Tietz Textbook of Clinical Chemistry, ed. Burts, C. A., and Ashwood, E. R., Saunders, Philadelphia, 2nd edn., 1994, pp. 508–525. 28 Narebor, E. M., J. Autom. Chem., 1990, 12, 189. 29 Tabata, M., Murachi, T., Endo, J., and Totani, M., J. Chromatogr., 1992, 597, 435. 30 Yerian, T. D., Christian, G. D., and R°uöziöcka, J., Analyst, 1986, 111, 865. 31 Petersson, B. A., Andersen, H. B., and Hansen, E. H., Anal. Lett., 1987, 20, 1977. 32 Narinesingh, D., Pope, A., and Ngo, T. T., Talanta, 1992, 39, 1233. 33 Lee, W., Roberts, S. M., and Labbe, R. F., Clin. Chem., 1997, 43, 154. 34 Thakkar, H., Newman, D. J., Holownia, P., Davey, C. L., Wang, C., Lloyd, J., Craig, A. R., and Price, C. P., Clin. Chem., 1997, 43, 109. 35 Stone, M. J., Chowdrey, P. E., Miall, P., and Price, C. P., Clin. Chem., 1996, 42, 1474. Paper 7/02750H Received April 22, 1997 Accepted July 11, 1997 1044 Analyst, October 1997
ISSN:0003-2654
DOI:10.1039/a702750h
出版商:RSC
年代:1997
数据来源: RSC
|
8. |
Flow Injection Photometric Determination of Zinc and Copper With Zincon Based on the Variation of the Stability of the Complexes With pH |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1045-1048
Pablo Richter,
Preview
|
|
摘要:
Flow Injection Photometric Determination of Zinc and Copper With Zincon Based on the Variation of the Stability of the Complexes With pH Pablo Richter*a, M. In�es Torala, A. Eugenia Tapiab and Emely Fuenzalidaa a Department of Chemistry, Faculty of Sciences, Univesity of Chile, P.O. Box 653, Santiago, Chile b Department of Technology, Technologic Metropolitan University, P.O. Box 9845, Santiago, Chile A flow injection photometric method for the sequential determination of zinc and copper in mixtures was developed based on the variation of the stability of the chromogenic complexes between the analytes and the reagent zincon with pH.At pH 5.0 only the Cu–zincon complex exists, whereas at pH 9.0 the copper and zinc chelates co-exist. A three-channel manifold was implemented containing two alternating buffer streams (pH 5 and 9) which permit the colored reaction products to be formed sequentially at both pH values, and consequently the mixtures can be resolved.A continuous preconcentration unit (Chelex-100) was used in order to increase the sensitivity of the method, thus allowing the analysis of water samples in which the analytes are present at the ng ml21 level. On the other hand, preconcentration was not required when the analytes were determined in brass. Under the optimum conditions and using a preconcentration time of 2 min, the detection limits (3s) were found to be 0.35 and 0.80 ng ml21 for zinc and copper, respectively.The repeatability of the method, expressed as the RSD, was in all instances less than 3.1%. Considering the sequential determination of both species, a sampling rate of 70 h21 was obtained if preconcentration of the samples was not required. Keywords: Flow injection; sequential determination; copper; zinc; zincon; water; brass In analytical chemistry, multi-elemental determinations are in increasing demand. At present, the use of an ICP with either MS and AES detection is probably the best selection when the interest is in multi-elemental determinations at trace levels. However, when the costs involved in instrumental acquisition and maintenance are considered, normally mostly laboratories opt for alternative techniques. Flow injection analysis (FIA) has been applied in many fields of natural sciences. The basic aim of FIA was initially devoted to the rapid and precise determination of a single species in a large number of samples.1 However, the versatility of this technique permits the easy design of devices for the determination of several species in a sample, which commonly implies some decrease in the sampling rate.In order to resolve mixtures of analytes by FIA, diverse alternatives have been proposed which are based on different approaches,2–11 including differential kinetics, the use of several reagents and reaction media, coupling of FIA and chromatography, computational methods and coupled techniques. Copper and zinc are often found together in a great number of samples of different nature.Consequently, the simultaneous determination of both species at different concentration levels is in great demand. From an analytical point of view, when a distinction can be established between the chemical reactivity of two or more species with a common reagent, this can be very useful in developing methods for the simultaneous determination of analytes in mixtures. In this context, the different rate of the reactions between copper and zinc with a common reagent, zincon, has served as the basis for the resolution of their binary mixtures by using an FIA differential kinetic method.4 In addition to the different kinetic reactivities, the Cu– and Zn– zincon complexes also show variations in stability with the pH of the medium.In this work, using the Cu–Zn–zincon system, copper and zinc could be sequentially determined in a continuous flow process based on the variation in the stability of the complexes with pH.Analytical reactions involving zincon as a chromogenic reagent have been used previously for spectrophotometric determinations of copper and zinc by conventional manual procedures.12,13 Liu et al.11 reported an FIA procedure for determining both species sequentially using zincon. This approach, which involves the selective masking of copper using the merging zone technique, permits the determination of both analytes in serum at the mg ml21 level.The FIA method reported here is based on the fact that at pH < 5.5 only the CuII–Zincon complex exists, whereas at pH 9.0 the ZnII and CuII chelates co-exist. A three-channel manifold with two alternating buffer streams (pH 5.0 and 9.0) was used to implement the method. Determinations below 0.3 mg ml21 required the use of a preconcentration unit containing Chelex- 100 chelating resin. Sodium citrate was included as a masking agent in both buffer streams in order to avoid interferences from iron, aluminum and manganese.The method was applied to the determination of both elements in tap water and brass. Experimental Instruments and Apparatus Absorbances were measured at 612 nm with a Shimadzu (Kyoto, Japan) UV-160 spectrophotometer equipped with a Hellma (Jamaica, NY, USA) Model 178.010-OS flow cell. An Orion (Cambridge, MA, USA) Model 701 digital ion analyzer with glass and saturated calomel electrodes were used for pH measurements. Two four-channel Ismatec fixed-speed peristaltic pumps fitted with Tygon tubes, Teflon flow injection tubes of 0.56 mm id, two Rheodyne (Cotati, CA, USA) Model 5041 injection valves, two Teflon PTFE three-way connectors, a Teflon PTFE three-way selecting valve and a microcolumn made of Tygon tubing (1.5 cm long, 2.5 mm i.d.) were also used.Reagents All chemicals were of analytical-reagent grade. De-ionized water (NANOpure ultrapure water system; Barnstead, Dubuque, IA, USA) was used throughout.Working standard solutions of copper and zinc were prepared by dilution of aqueous 1000 mg l21 stock standard solutions. A 1.40 3 1024 m solution of 2-carboxy-2A-hydroxy-5A-sulfoformacylbenzol (zincon) was prepared in 0.02 m sodium hydroxide. Sodium Analyst, October 1997, Vol. 122 (1045–1048) 1045acetate–acetic acid buffer solution (pH 5) was prepared in 30% ethanol and a pH of 5 was reached by adding acetic acid to 0.2 m sodium acetate solution.Citrate (0.1 m) was added to this solution as a masking agent. A Clark and Lubs buffer (pH 9) was prepared adding 21.3 ml of 0.5 m sodium hydroxide to 50 ml of 0.5 m boric acid in 0.2 m potassium chloride and diluting to 200 ml. Citrate (0.2 m) was added to this buffer solution. An iminodiacetic acid chelating resin (Chelex-100) was used for preconcentration of the analytes from water samples and also 0.1 m nitric acid was used as eluting solution. Manifold and Procedure A schematic diagram of the proposed FIA system is depicted in Fig. 1. The manifold contained two injection valves in series. A Chelex-100 microcolumn was located in the loop of one of the valves (IV1), in which the analytes were preconcentrated by passing the sample solution through the loop for a pre-set interval (Tp) at a flow rate of 3.0 ml min21. The loop (50 ml) of the other injection valve (IV2) was filled with 0.1 m nitric acid. After the preconcentration time, which depended on the concentration of the analytes in the samples, valves IV1 and IV2 were sequentially switched in that order with an interval of 2 s.The nitric acid solution passed through the microcolumn and the concentrated metal ions were quantitatively eluted. Depending on the position of the selecting valve (SV), the sample zone was merged and mixed in L1 (25 cm 3 0.56 mm id) with a buffer system of pH 5 or 9 at a flow rate of 2.0 ml min21, and the analytical reaction of the analytes with zincon (R) occurred subsequently in L2 (150 cm 3 0.56 mm id) at a flow rate of 6.0 ml min21.The signal obtained at pH 5 was used to calculate the copper content in the sample. The copper contribution was subsequently subtracted from the signal at pH 9 in order to determine the zinc concentration. The preconcentration system inside the dotted section of Fig. 1 can be excluded from the manifold when the analyte concentration in the samps is > 0.3 mg ml21. In this case, direct injection of the samples gave rise to well defined signals.Results and Discussion McCall et al.12 reported that the stabilities of Cu– and Zn– zincon complexes are different and pH dependent. It is well known that the principal factor affecting the formation of chelates in practical situations is the acidity of the solutions. This can be explained by considering the conditional equilibrium constants for both Cu– and Zn–zincon complexes, which are strongly pH dependent.14 For instance, the pK values for the Zn–zincon complex are 7.9 and 0.6 at pH 9 and 5, respectively, which indicates that complexation of Zn at pH 5.0 is negligible.On the other hand, the true stability of the Cu–zincon complex must be considerably higher than that of the Zn complex, because at pH 5.0 the copper complex is still stable. Based on this fact, McCall et al.12 suggested that the copper concentration can be determined by measuring the absorbance of a solution containing both copper and zinc in the presence of zincon at pH 5.2, where the absorbance resulted entirely from the Cu complex. The total concentration of the two analytes was found by measuring the signal of a similar solution at pH 9.0, where the absorbance was due to both complexes. However, McCall et al.could not obtain satisfactory results because at pH 5.2 precipitation of the reagent occurred with most samples, which made it impossible to measure accurately the signal due to the copper content.In order to avoid problems associated with precipitation of the reagent, McCall et al. preferred to form the complexes at pH 8.5–9.5 to determine the total concentration of the two elements and subsequently to destroy selectively the complex of one of the analytes without affecting the color intensity of the other. Similarly, selective masking of copper at pH 9 was implemented for the determination of both species by FIA.11 We found that zincon shows very low solubility in aqueous acidic media and, consequently, it precipitates at pH 5.0.However, it was observed that the reagent becomes soluble and perfectly stable in the temperature range 20 ± 5 °C when a zincon solution stream is adjusted continuously to pH 5.0 by merging, at a similar flow rate, with a buffer system of pH 5.0 prepared in a mixed water–ethanol (70 + 30 v/v) medium. In view of this, the difference in stability between the analyte– zincon complexes at different pH values was the basis of the method reported here.The continuous flow manifold depicted in Fig. 1 permits the alternate flow of two streams with different pH values (5.0 and 9.0). A stream solution buffered at pH 5 was prepared in the presence of 30% of ethanol, as indicated above, and the pH 9.0 buffer system was prepared in water, because the reagent does not precipitate under these conditions. Fig. 2 shows typical analytical signals obtained for copper and zinc at pH 5.0 and 9.0.At pH 5.0 only the Cu–zincon complex is formed, and consequently successive injections of standard solutions of zinc do not show any variation in absorbance. In contrast, at pH 9.0, both elements gave rise to similar FIA signals. When citrate is included in the buffer streams, which favors the masking of interference from species such as iron, aluminum and manganese, the signal at pH 9.0 is almost completely due to the Zn–zincon complex, because citrate at pH 9.0 also masks copper almost completely.Fig. 1 Flow injection manifold for implementation of the method. P = peristaltic pump, E = eluting agent, S = sample, C = carrier (H2O) stream, B-5 and B-9 = buffer systems of pH 5 and 9, respectively, R = reagent (zincon), q = flow rate, SV = selection valve, IV = injection valve, CH-100 = Chelex-100 microcolumn, L = mixing coil, D = detector and W = waste. The dotted section indicates the preconcentration system. Fig. 2 Analytical signals obtained with the manifold in Fig. 1. 1, Copper at pH 5.0; 2, zinc at pH 5.0; 3, copper at pH 9.0, and 4, zinc at pH 9.0. Segmented signals were obtained under the same experimental conditions but in the presence of citrate as masking agent. 1046 Analyst, October 1997, Vol. 122However, in all instances the small contribution of copper to the signal at pH 9.0 must be subtracted, after determining this analyte at pH 5.0, in order to calculate the zinc concentration accurately.The chemical and flow injection variables were optimized by the univariate method, and the best analytical conditions for the determination of the two species were determined. Table 1 gives the optimum values found for the variables studied. The presence of ethanol in the carrier stream at pH 5 was strictly necessary. However, its presence does not produce noise for eventual inadequate mixing. If the ethanol content was < 30% v/v, precipitation occurred after a few minutes of continuous flow operation of the manifold.The zincon reagent was prepared in a similar manner to that reported for the classical method,12 although 20 times more dilute, which is sufficient for good sensitivity. Further increments in the reagent concentration increased the sensitivity, but the signals were considerably less reproducible and the possibility of precipitation increased. Because the method is based on the different responses of the analytes of pH 9.0 and 5.0, and as the analytical signal in both instances is strongly pH dependent,4,12 the buffer systems used were relatively concentrated, in order to avoid changes in the pH of the sample when it meets the buffer stream and the reaction takes place.On the other hand, it was necessary to inject the samples in a carrier stream of water, which is subsequently mixed with the buffer streams, thus avoiding the noise due to changes in refraction indices, which always occurred when the samples were injected directly in the buffer streams.The optimum flow rates and lengths of the reactors were selected so as to obtain the maximum sensitivity for both analytes, taking into account that the formation of the Zn– zincon complex requires a development time longer than that observed for the Cu–zincon complex.4 Preconcentration of the analytes was necessary when their concentrations in the samples were < 0.3 ppm. An on-line preconcentration unit similar to that described earlier8 was included in the manifold (Fig. 1).A Chelex-100 microcolumn was used to preconcentrate and separate the analytes from very diluted aqueous samples. To achieve better performance, the samples were adjusted to pH Å 6.5 before loading on to a Chelex-100 column using a flow rate of 3.0 ml min21.8,15 The elements were quantitatively eluted from the Chelex-100 resin with 50 ml of 0.1 m nitric acid. Under the selected conditions given in Table 1, when using a microcolumn containing 40 mg of Chelex-100, the maximum loading of the column was 310 ng of Cu and 250 ng of Zn.Table 2 gives the analytical features of the method. Calibration graphs were obtained separately for each element at pH 9.0 and 5.0 with and without preconcentration. When preconcentration was carried out, the RSD values in Table 2 reflect the repeatability of the combined preconcentration– elution system and the FIA method. According to the slopes of the calibration graphs (Table 2), the preconcentration factors were about 100 for a preconcentration time of 2 min.Calibration graphs for each element in the presence of the other showed the same slopes as those corresponding to the individual elements, which implies that the sensitivity is not affected by the other metal. The sampling rates, considering the sequential determination of both analytes (two injections for each determination), were 70 and 14 h21 by using the manifold without or with preconcentration, respectively.Although calibration for one element is not altered in the presence of the other, synthetic water samples were prepared in order to test the applicability of the method. The synthetic samples contained 50 ng ml21 Zn, 50 ng ml21 Cu, 50 ng ml21 Fe, 50 mg ml21 Ca, 50 mg ml21 Mg and 3% NaCl. The recoveries were 104.2 ± 3.2% and 97.6 ± 3.0% for Zn and Cu, respectively. Determination of both analytes was then carried out in a tap water sample (collected in January 1996 at Santiago, Chile).The concentrations found were Cu 6.5 ± 0.6 and Zn 10.1 ± 0.5 ng ml21, which were consistent with those determined by AAS. The method was also applied to the analysis of brass. In this case, the preconcentration system was not used. Seven portions of about 7 mg of sample were accurately weighed, dissolved in 25 ml of nitric acid (1 + 3) and then diluted to 1000 ml with water. The copper and zinc contents in the sample were determined by the proposed method and the results are given in Table 3, together with those obtained by using other methods.Considering that AAS is usually recognized as a standard Table 1 Optimization of variables Studied Selected Variable range value FIA— Injected volume (IV1)*/ml 50–250 100 Delay coil (L1)/cm 20–250 25 (L2)/cm 20–250 150 Flow rate (q1)/ml min21 0.6–3.0 2.0 (q2)/ml min21 0.6–3.0 2.0 (q3)/ml min21 0.6–3.0 2.0 Chemical— pH 3–11 5.0 and 9.0 Buffer components/m (pH 5): Acetic acid + sodium acetate 0.012–0.36 0.31 Citrate/m 0.01–0.3 0.1 EtOH, % v/v 5–40 30 Buffer components/m (pH 9): Boric acid + borate 0.02–0.60 0.125 Potassium chloride/m — 0.05 Citrate/m 0.01–0.4 0.2 Zincon/m 1.4 3 1025–1.4 3 1023 1.4 3 1024 Eluting agent, HNO3/m 0.1–1.0 0.1 (50 ml) Chelex-100/mg 20–80 40 * Manifold without preconcentration unit.Table 2 Features of the method Correlation Determination RSD (%) LOD‡/ Analyte* pH Equation† coefficient range/mg ml21 (n = 11) ng ml21 Cu 5 A = 5.82 3 1022[Cu] + 2.3 3 1023 0.9997 0.30–8.0 0.72 90 Cu(P) 5 A = 6.60[Cu] + 2.5 3 1022 0.9998 0.0026–0.025 1.50 0.8 Cu 9 A = 6.34 3 1023[Cu] + 2.8 3 1024 0.9997 — 1.16 — Cu(P) 9 A = 0.750[Cu] + 3.6 3 1023 0.9998 — 3.10 — Zn 9 A = 6.40 3 1022[Zn] 2 3.8 3 1024 0.9989 0.14–8.0 1.80 40 Zn(P) 9 A = 7.65[Zn] + 5.0 3 1022 0.9998 0.0012–0.025 1.90 0.35 * (P): preconcentration unit included in the manifold.† A in absorbance units, analyte concentration in mg ml21.‡ LOD: Limit of detection for a preconcentration time of 2 min. Analyst, October 1997, Vol. 122 1047technique, the results obtained indicate a good level of accuracy. Conclusions The difference in stability between the analyte–zincon complexes at different pH values was the basis of the method reported here. Because the preconcentration increases the sensitivity about 100-fold, the method can be used to determination copper and zinc in water samples. On the other hand, determination of both metal in alloys does not require the preconcentration step.Comparison of the results with those obtained by other methods indicates that the proposed method is suitable for the analysis of these types of samples. In contrast to the classical determination with zincon, this continuous flow method permits the determination of copper at pH 5.0 without precipitation of the reagent. The present continuous flow method is considerably faster than the classical approaches12,13 and the consumption of zincon is about 20 times lower.The financial support of the Direcci�on de Investigaci�on de la Universidad Tecnol�ogica Metropolitana (Project 053) and FONDECYT Project 1970466 is gratefully acknowledged. References 1 Valcarcel, M., and Luque de Castro, M. D., Flow Injection Analysis. Principles and Applications, Ellis Horwood, Chichester, 1987. 2 Arruda, M. A. Z., Zagatto, E. A. G., and Maniasso, N., Anal. Chim. Acta, 1993, 283, 476. 3 Christian, G. D., and Ruzicka, J., Anal. Chim. Acta, 1992, 261, 11. 4 Richter, P., Toral, M. I., Tapia, A. E., Ubilla, C., and Bunster, M., Bol. Soc. Chil. Quim., 1996, 41, 167. 5 Fern�andez, A., Luque de Castro, M. D., and Valc�arcel, M., Anal. Chem., 1984, 56, 1146. 6 Richter, P., Toral, M. I., Parra, V., Fuentes, S., and Araya, E., Bol. Soc. Chil. Quim., 1995, 40, 337. 7 Richter, P., Toral, M. I., and Hern�andez, P., Anal. Lett., 1996, 29, 1013. 8 Richter, P., Fern�andez-Romero, J.M., Luque de Castro, M. D., and Valc�arcel, M., Chromatographia, 1992, 34, 445. 9 Luque de Castro, M. D., and Tena, T., Talanta, 1995, 42, 151. 10 Hern�andez, O., Jim�enez, F., Jim�enez, A. I., and Arias, J. J., Analyst, 1996, 121, 169. 11 Liu, R. M., Liu, D. J., and Sun, A. L., Talanta, 1993, 40, 511. 12 McCall, J. T., Davis, G. K., and Stearns, T. W., Anal. Chem., 1958, 30, 1345. 13 Platte, J. A., and Marcy, V. M., Anal. Chem., 1959, 31, 1226. 14 Ringbom, A., Formaci�on de Complejos en Qu�ýmica Anal�ýtica, Editorial Alhambra, Madrid, 1979. 15 Pai, S. C., Anal. Chim. Acta, 1988, 211, 271. Paper 7/03379F Received May 16, 1997 Accepted June 17, 1997 Table 3 Determination of copper and zinc in brass Amount found* (%) (Df)† Proposed Kinetic AAS Analyte method method method Zinc 41.3 (±1.23) 40.1 (±1.03) 39.3 (±1.85) Copper 58.1 (±1.65) 57.9 (±1.65) 58.2 (±0.82) * Mean of five determinations. † Df values in parentheses; f = confidence interval of 99%. 1048 Analyst, October 1997, Vol. 122 Flow Injection Photometric Determination of Zinc and Copper With Zincon Based on the Variation of the Stability of the Complexes With pH Pablo Richter*a, M. In�es Torala, A. Eugenia Tapiab and Emely Fuenzalidaa a Department of Chemistry, Faculty of Sciences, Univesity of Chile, P.O. Box 653, Santiago, Chile b Department of Technology, Technologic Metropolitan University, P.O. Box 9845, Santiago, Chile A flow injection photometric method for the sequential determination of zinc and copper in mixtures was developed based on the variation of the stability of the chromogenic complexes between the analytes and the reagent zincon with pH.At pH 5.0 only the Cu–zincon complex exists, whereas at pH 9.0 the copper and zinc chelates co-exist. A three-channel manifold was implemented containing two alternating buffer streams (pH 5 and 9) which permit the colored reaction products to be formed sequentially at both pH values, and consequently the mixtures can be resolved.A continuous preconcentration unit (Chelex-100) was used in order to increase the sensitivity of the method, thus allowing the analysis of water samples in which the analytes are present at the ng ml21 level. On the other hand, preconcentration was not required when the analytes were determined in brass. Under the optimum conditions and using a preconcentration time of 2 min, the detection limits (3s) were found to be 0.35 and 0.80 ng ml21 for zinc and copper, respectively.The repeatability of the method, expressed as the RSD, was in all instances less than 3.1%. Considering the sequential determination of both species, a sampling rate of 70 h21 was obtained if preconcentration of the samples was not required. Keywords: Flow injection; sequential determination; copper; zinc; zincon; water; brass In analytical chemistry, multi-elemental determinations are in increasing demand. At present, the use of an ICP with either MS and AES detection is probably the best selection when the interest is in multi-elemental determinations at trace levels.However, when the costs involved in instrumental acquisition and maintenance are considered, normally mostly laboratories opt for alternative techniques. Flow injection analysis (FIA) has been applied in many fields of natural sciences. The basic aim of FIA was initially devoted to the rapid and precise determination of a single species in a large number of samples.1 However, the versatility of this technique permits the easy design of devices for the determination of several species in a sample, which commonly implies some decrease in the sampling rate.In order to resolve mixtures of analytes by FIA, diverse alternatives have been proposed which are based on different approaches,2–11 including differential kinetics, the use of several reagents and reaction media, coupling of FIA and chromatography, computational methods and coupled techniques.Copper and zinc are often found tomples of different nature. Consequently, the simultaneous determination of both species at different concentration levels is in great demand. From an analytical point of view, when a distinction can be established between the chemical reactivity of two or more species with a common reagent, this can be very useful in developing methods for the simultaneous determination of analytes in mixtures.In this context, the different rate of the reactions between copper and zinc with a common reagent, zincon, has served as the basis for the resolution of their binary mixtures by using an FIA differential kinetic method.4 In addition to the different kinetic reactivities, the Cu– and Zn– zincon complexes also show variations in stability with the pH of the medium. In this work, using the Cu–Zn–zincon system, copper and zinc could be sequentially determined in a continuous flow process based on the variation in the stability of the complexes with pH.Analytical reactions involving zincon as a chromogenic reagent have been used previously for spectrophotometric determinations of copper and zinc by conventional manual procedures.12,13 Liu et al.11 reported an FIA procedure for determining both species sequentially using zincon. This approach, which involves the selective masking of copper using the merging zone technique, permits the determination of both analytes in serum at the mg ml21 level.The FIA method reported here is based on the fact that at pH < 5.5 only the CuII–Zincon complex exists, whereas at pH 9.0 the ZnII and CuII chelates co-exist. A three-channel manifold with two alternating buffer streams (pH 5.0 and 9.0) was used to implement the method. Determinations below 0.3 mg ml21 required the use of a preconcentration unit containing Chelex- 100 chelating resin. Sodium citrate was included as a masking agent in both buffer streams in order to avoid interferences from iron, aluminum and manganese.The method was applied to the determination of both elements in tap water and brass. Experimental Instruments and Apparatus Absorbances were measured at 612 nm with a Shimadzu (Kyoto, Japan) UV-160 spectrophotometer equipped with a Hellma (Jamaica, NY, USA) Model 178.010-OS flow cell. An Orion (Cambridge, MA, USA) Model 701 digital ion analyzer with glass and saturated calomel electrodes were used for pH measurements.Two four-channel Ismatec fixed-speed peristaltic pumps fitted with Tygon tubes, Teflon flow injection tubes of 0.56 mm id, two Rheodyne (Cotati, CA, USA) Model 5041 injection valves, two Teflon PTFE three-way connectors, a Teflon PTFE three-way selecting valve and a microcolumn made of Tygon tubing (1.5 cm long, 2.5 mm i.d.) were also used. Reagents All chemicals were of analytical-reagent grade. De-ionized water (NANOpure ultrapure water system; Barnstead, Dubuque, IA, USA) was used throughout.Working standard solutions of copper and zinc were prepared by dilution of aqueous 1000 mg l21 stock standard solutions. A 1.40 3 1024 m solution of 2-carboxy-2A-hydroxy-5A-sulfoformacylbenzol (zincon) was prepared in 0.02 m sodium hydroxide. Sodium Analyst, October 1997, Vol. 122 (1045–1048) 1045acetate–acetic acid buffer solution (pH 5) was prepared in 30% ethanol and a pH of 5 was reached by adding acetic acid to 0.2 m sodium acetate solution.Citrate (0.1 m) was added to this solution as a masking agent. A Clark and Lubs buffer (pH 9) was prepared adding 21.3 ml of 0.5 m sodium hydroxide to 50 ml of 0.5 m boric acid in 0.2 m potassium chloride and diluting to 200 ml. Citrate (0.2 m) was added to this buffer solution. An iminodiacetic acid chelating resin (Chelex-100) was used for preconcentration of the analytes from water samples and also 0.1 m nitric acid was used as eluting solution.Manifold and Procedure A schematic diagram of the proposed FIA system is depicted in Fig. 1. The manifold contained two injection valves in series. A Chelex-100 microcolumn was located in the loop of one of the valves (IV1), in which the analytes were preconcentrated by passing the sample solution through the loop for a pre-set interval (Tp) at a flow rate of 3.0 ml min21. The loop (50 ml) of the other injection valve (IV2) was filled with 0.1 m nitric acid.After the preconcentration time, which depended on the concentration of the analytes in the samples, valves IV1 and IV2 were sequentially switched in that order with an interval of 2 s. The nitric acid solution passed through the microcolumn and the concentrated metal ions were quantitatively eluted. Depending on the position of the selecting valve (SV), the sample zone was merged and mixed in L1 (25 cm 3 0.56 mm id) with a buffer system of pH 5 or 9 at a flow rate of 2.0 ml min21, and the analytical reaction of the analytes with zincon (R) occurred subsequently in L2 (150 cm 3 0.56 mm id) at a flow rate of 6.0 ml min21.The signal obtained at pH 5 was used to calculate the copper content in the sample. The copper contribution was subsequently subtracted from the signal at pH 9 in order to determine the zinc concentration. The preconcentration system inside the dotted section of Fig. 1 can be excluded from the manifold when the analyte concentration in the samples is > 0.3 mg ml21.In this case, direct injection of the samples gave rise to well defined signals. Results and Discussion McCall et al.12 reported that the stabilities of Cu– and Zn– zincon complexes are different and pH dependent. It is well known that the principal factor affecting the formation of chelates in practical situations is the acidity of the solutions. This can be explained by considering the conditional equilibrium constants for both Cu– and Zn–zincon complexes, which are strongly pH dependent.14 For instance, the pK values for the Zn–zincon complex are 7.9 and 0.6 at pH 9 and 5, respectively, which indicates that complexation of Zn at pH 5.0 is negligible.On the other hand, the true stability of the Cu–zincon complex must be considerably higher than that of the Zn complex, because at pH 5.0 the copper complex is still stable. Based on this fact, McCall et al.12 suggested that the copper concentration can be determined by measuring the absorbance of a solution containing both copper and zinc in the presence of zincon at pH 5.2, where the absorbance resulted entirely from the Cu complex.The total concentration of the two analytes was found by measuring the signal of a similar solution at pH 9.0, where the absorbance was due to both complexes. However, McCall et al. could not obtain satisfactory results because at pH 5.2 precipitation of the reagent occurred with most samples, which made it impossible to measure accurately the signal due to the copper content.In order to avoid problems associated with precipitation of the reagent, McCall et al. preferred to form the complexes at pH 8.5–9.5 to determine the total concentration of the two elements and subsequently to destroy selectively the complex of one of the analytes without affecting the color intensity of the other. Similarly, selective masking of copper at pH 9 was implemented for the determination of both species by FIA.11 We found that zincon shows very low solubility in aqueous acidic media and, consequently, it precipitates at pH 5.0.However, it was observed that the reagent becomes soluble and perfectly stable in the temperature range 20 ± 5 °C when a zincon solution stream is adjusted continuously to pH 5.0 by merging, at a similar flow rate, with a buffer system of pH 5.0 prepared in a mixed water–ethanol (70 + 30 v/v) medium. In view of this, the difference in stability between the analyte– zincon complexes at different pH values was the basis of the method reported here.The continuous flow manifold depicted in Fig. 1 permits the alternate flow of two streams with different pH values (5.0 and 9.0). A stream solution buffered at pH 5 was prepared in the presence of 30% of ethanol, as indicated above, and the pH 9.0 buffer system was prepared in water, because the reagent does not precipitate under these conditions.Fig. 2 shows typical analytical signals obtained for copper and zinc at pH 5.0 and 9.0. At pH 5.0 only the Cu–zincon complex is formed, and consequently successive injections of standard solutions of zinc do not show any variation in absorbance. In contrast, at pH 9.0, both elements gave rise to similar FIA signals. When citrate is included in the buffer streams, which favors the masking of interference from species such as iron, aluminum and manganese, the signal at pH 9.0 is almost completely due to the Zn–zincon complex, because citrate at pH 9.0 also masks copper almost completely. Fig. 1 Flow injection manifold for implementation of the method. P = peristaltic pump, E = eluting agent, S = sample, C = carrier (H2O) stream, B-5 and B-9 = buffer systems of pH 5 and 9, respectively, R = reagent (zincon), q = flow rate, SV = selection valve, IV = injection valve, CH-100 = Chelex-100 microcolumn, L = mixing coil, D = detector and W = waste. The dotted section indicates the preconcentration system.Fig. 2 Analytical signals obtained with the manifold in Fig. 1. 1, Copper at pH 5.0; 2, zinc at pH 5.0; 3, copper at pH 9.0, and 4, zinc at pH 9.0. Segmented signals were obtained under the same experimental conditions but in the presence of citrate as masking agent. 1046 Analyst, October 1997, Vol. 122However, in all instances the small contribution of copper to the signal at pH 9.0 must be subtracted, after determining this analyte at pH 5.0, in order to calculate the zinc concentration accurately. The chemical and flow injection variables were optimized by the univariate method, and the best analytical conditions for the determination of the two species were determined.Table 1 gives the optimum values found for the variables studied. The presence of ethanol in the carrier stream at pH 5 was strictly necessary. However, its presence does not produce noise for eventual inadequate mixing. If the ethanol content was < 30% v/v, precipitation occurred after a few minutes of continuous flow operation of the manifold.The zincon reagent was prepared in a similar manner to that reported for the classical method,12 although 20 times more dilute, which is sufficient for good sensitivity. Further increments in the reagent concentration increased the sensitivity, but the signals were considerably less reproducible and the possibility of precipitation increased. Because the method is based on the different responses of the analytes of pH 9.0 and 5.0, and as the analytical signal in both instances is strongly pH dependent,4,12 the buffer systems used were relatively concentrated, in order to avoid changes in the pH of the sample when it meets the buffer stream and the reaction takes place.On the other hand, it was necessary to inject the samples in a carrier stream of water, which is subsequently mixed with the buffer streams, thus avoiding the noise due to changes in refraction indices, which always occurred when the samples were injected directly in the buffer streams.The optimum flow rates and lengths of the reactors were selected so as to obtain the maximum sensitivity for both analytes, taking into account that the formation of the Zn– zincon complex requires a development time longer than that observed for the Cu–zincon complex.4 Preconcentration of the analytes was necessary when their concentrations in the samples were < 0.3 ppm. An on-line preconcentration unit similar to that described earlier8 was included in the manifold (Fig. 1). A Chelex-100 microcolumn was used to preconcentrate and separate the analytes from very diluted aqueous samples. To achieve better performance, the samples were adjusted to pH Å 6.5 before loading on to a Chelex-100 column using a flow rate of 3.0 ml min21.8,15 The elements were quantitatively eluted from the Chelex-100 resin with 50 ml of 0.1 m nitric acid. Under the selected conditions given in Table 1, when using a microcolumn containing 40 mg of Chelex-100, the maximum loading of the column was 310 ng of Cu and 250 ng of Zn.Table 2 gives the analytical features of the method. Calibration graphs were obtained separately for each element at pH 9.0 and 5.0 with and without preconcentration. When preconcentration was carried out, the RSD values in Table 2 reflect the repeatability of the combined preconcentration– elution system and the FIA method.According to the slopes of the calibration graphs (Table 2), the preconcentration factors were about 100 for a preconcentration time of 2 min. Calibration graphs for each element in the presence of the other showed the same slopes as those corresponding to the individual elements, which implies that the sensitivity is not affected by the other metal. The sampling rates, considering the sequential determination of both analytes (two injections for each determination), were 70 and 14 h21 by using the manifold without or with preconcentration, respectively.Although calibration for one element is not altered in the presence of the other, synthetic water samples were prepared in order to test the applicability of the method. The synthetic samples contained 50 ng ml21 Zn, 50 ng ml21 Cu, 50 ng ml21 Fe, 50 mg ml21 Ca, 50 mg ml21 Mg and 3% NaCl. The recoveries were 104.2 ± 3.2% and 97.6 ± 3.0% for Zn and Cu, respectively. Determination of both analytes was then carried out in a tap water sample (collected in January 1996 at Santiago, Chile).The concentrations found were Cu 6.5 ± 0.6 and Zn 10.1 ± 0.5 ng ml21, which were consistent with those determined by AAS. The method was also applied to the analysis of brass. In this case, the preconcentration system was not used. Seven portions of about 7 mg of sample were accurately weighed, dissolved in 25 ml of nitric acid (1 + 3) and then diluted to 1000 ml with water.The copper and zinc contents in the sample were determined by the proposed method and the results are given in Table 3, together with those obtained by using other methods. Considering that AAS is usually recognized as a standard Table 1 Optimization of variables Studied Selected Variable range value FIA— Injected volume (IV1)*/ml 50–250 100 Delay coil (L1)/cm 20–250 25 (L2)/cm 20–250 150 Flow rate (q1)/ml min21 0.6–3.0 2.0 (q2)/ml min21 0.6–3.0 2.0 (q3)/ml min21 0.6–3.0 2.0 Chemical— pH 3–11 5.0 and 9.0 Buffer components/m (pH 5): Acetic acid + sodium acetate 0.012–0.36 0.31 Citrate/m 0.01–0.3 0.1 EtOH, % v/v 5–40 30 Buffer components/m (pH 9): Boric acid + borate 0.02–0.60 0.125 Potassium chloride/m — 0.05 Citrate/m 0.01–0.4 0.2 Zincon/m 1.4 3 1025–1.4 3 1023 1.4 3 1024 Eluting agent, HNO3/m 0.1–1.0 0.1 (50 ml) Chelex-100/mg 20–80 40 * Manifold without preconcentration unit.Table 2 Features of the method Correlation Determination RSD (%) LOD‡/ Analyte* pH Equation† coefficient range/mg ml21 (n = 11) ng ml21 Cu 5 A = 5.82 3 1022[Cu] + 2.3 3 1023 0.9997 0.30–8.0 0.72 90 Cu(P) 5 A = 6.60[Cu] + 2.5 3 1022 0.9998 0.0026–0.025 1.50 0.8 Cu 9 A = 6.34 3 1023[Cu] + 2.8 3 1024 0.9997 — 1.16 — Cu(P) 9 A = 0.750[Cu] + 3.6 3 1023 0.9998 — 3.10 — Zn 9 A = 6.40 3 1022[Zn] 2 3.8 3 1024 0.9989 0.14–8.0 1.80 40 Zn(P) 9 A = 7.65[Zn] + 5.0 3 1022 0.9998 0.0012–0.025 1.90 0.35 * (P): preconcentration unit included in the manifold.† A in absorbance units, analyte concentration in mg ml21. ‡ LOD: Limit of detection for a preconcentration time of 2 min. Analyst, October 1997, Vol. 122 1047technique, the results obtained indicate a good level of accuracy. Conclusions The difference in stability between the analyte–zincon complexes at different pH values was the basis of the method reported here. Because the preconcentration increases the sensitivity about 100-fold, the method can be used to determination copper and zinc in water samples.On the other hand, determination of both metal in alloys does not require the preconcentration step. Comparison of the results with those obtained by other methods indicates that the proposed method is suitable for the analysis of these types of samples. In contrast to the classical determination with zincon, this continuous flow method permits the determination of copper at pH 5.0 without precipitation of the reagent. The present continuous flow method is considerably faster than the classical approaches12,13 and the consumption of zincon is about 20 times lower. The financial support of the Direcci�on de Investigaci�on de la Universidad Tecnol�ogica Metropolitana (Project 053) and FONDECYT Project 1970466 is gratefully acknowledged. References 1 Valcarcel, M., and Luque de Castro, M. D., Flow Injection Analysis. Principles and Applications, Ellis Horwood, Chichester, 1987. 2 Arruda, M. A. Z., Zagatto, E. A. G., and Maniasso, N., Anal. Chim. Acta, 1993, 283, 476. 3 Christian, G. D., and Ruzicka, J., Anal. Chim. Acta, 1992, 261, 11. 4 Richter, P., Toral, M. I., Tapia, A. E., Ubilla, C., and Bunster, M., Bol. Soc. Chil. Quim., 1996, 41, 167. 5 Fern�andez, A., Luque de Castro, M. D., and Valc�arcel, M., Anal. Chem., 1984, 56, 1146. 6 Richter, P., Toral, M. I., Parra, V., Fuentes, S., and Araya, E., Bol. Soc. Chil. Quim., 1995, 40, 337. 7 Richter, P., Toral, M. I., and Hern�andez, P., Anal. Lett., 1996, 29, 1013. 8 Richter, P., Fern�andez-Romero, J. M., Luque de Castro, M. D., and Valc�arcel, M., Chromatographia, 1992, 34, 445. 9 Luque de Castro, M. D., and Tena, T., Talanta, 1995, 42, 151. 10 Hern�andez, O., Jim�enez, F., Jim�enez, A. I., and Arias, J. J., Analyst, 1996, 121, 169. 11 Liu, R. M., Liu, D. J., and Sun, A. L., Talanta, 1993, 40, 511. 12 McCall, J. T., Davis, G. K., and Stearns, T. W., Anal. Chem., 1958, 30, 1345. 13 Platte, J. A., and Marcy, V. M., Anal. Chem., 1959, 31, 1226. 14 Ringbom, A., Formaci�on de Complejos en Qu�ýmica Anal�ýtica, Editorial Alhambra, Madrid, 1979. 15 Pai, S. C., Anal. Chim. Acta, 1988, 211, 271. Paper 7/03379F Received May 16, 1997 Accepted June 17, 1997 Table 3 Determination of copper and zinc in brass Amount found* (%) (Df)† Proposed Kinetic AAS Analyte method method method Zinc 41.3 (±1.23) 40.1 (±1.03) 39.3 (±1.85) Copper 58.1 (±1.65) 57.9 (±1.65) 58.2 (±0.82) * Mean of five determinations. † Df values in parentheses; f = confidence interval of 99%. 1048 Ana
ISSN:0003-2654
DOI:10.1039/a703379f
出版商:RSC
年代:1997
数据来源: RSC
|
9. |
Determination of Aluminium-26 in Biological Materials by Accelerator Mass Spectrometry |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1049-1055
S. J. King,
Preview
|
|
摘要:
Determination of Aluminium-26 in Biological Materials by Accelerator Mass Spectrometry S. J. Kinga, C. Oldhama, J. F. Popplewella, R. S. Carlinga, J. P. Day*a, L. K. Fifieldb, R. G. Cresswellb, Kexin Liub and M. L. di Tadab a Department of Chemistry, University of Manchester, Manchester, UK M13 9PL b Department of Nuclear Physics, Australian National University, Canberra, ACT 0200, Australia E-mail: philip.day@man.ac.uk Studies of the biological chemistry of aluminium can gain significantly from the use of the long-lived isotope 26Al as a tracer, although the cost of the isotope often precludes its determination by radiochemical counting techniques.Accelerator mass spectrometry (AMS) provides an ultra-sensitive method of determination, free from isobaric interference from atomic (26Mg) or molecular species. The source materials for AMS can be aluminium oxide or phosphate, both of which can be readily prepared at a sufficient level of purity from biological substrates.Natural aluminium (27Al, 100%) is added to the preparations as a chemical yield monitor and to provide the reference for the isotope ratio measurement. 26Al/27Al ratios can be determined over the range 10214–1027, implying a limit of detection for 26Al of around 10218 g. The precision of measurement and long-term reproducibility are < 5% and < 7% (RSD), respectively. Chemical methodologies for routine measurements on blood and urine samples have been developed.Keywords: Accelerator mass spectrometry; aluminium-26; biological analysis Over the past 20 years, aluminium has emerged as an important toxic element, both in human medicine and in the wider environment as a consequence of acidic precipitation.1 However, until recently, the study of the biological chemistry of aluminium was hampered by the lack of a suitable tracer isotope. Natural aluminium is monoisotopic (27Al; Table 1) and, of the accessible radioisotopes, only 26Al is sufficiently longlived for its practical use in tracer experiments in living systems.However, the combination of high cost and low specific activity render this isotope too expensive for general use as a radiotracer, and the potential isobaric interference from the abundant magnesium isotope, 26Mg, makes the use of conventional mass spectrometry (even at high resolution) all but impracticable. The isobar problems can, in principle, be overcome by the use of accelerator mass spectrometry,2 and since the late 1980s developments in this technique have facilitated a number of biological and biomedical studies, many involving human subjects.3 The chemical methodology needed to couple biological experiments to the sophisticated physics of AMS measurement has certain unique characteristics, and it is the object of this paper to explain, justify and quantify the procedures which have been developed.AMS is a mass spectrometric technique for the determination of trace amounts of stable or long-lived radioactive isotopes, in which a tandem electrostatic particle accelerator is coupled to a number of magnetic, and sometimes also electrostatic, dispersing elements.2 The concentration of the rare isotope of interest is measured by identifying and counting individual monatomic ions with nuclear detection techniques after acceleration to energies in the MeV range.By comparing the counting rate with the ion current of one of the element’s major isotopes, the concentration of the rare isotope can be determined.The principal attributes of the technique are an extremely high sensitivity, almost total selectivity and an exceptionally wide range of isotope concentration measurement. Detection limits down to a few thousand atoms are achievable, often against measurement backgrounds near to zero and, in contradistinction to conventional mass spectometry, isobaric and molecular interferences can be completely eliminated.Isotope ratio measurements in the range 10214–1027 are readily achievable. Historically, the major applications of AMS have been to the natural environment, associated with the measurement of long lived cosmogenic radionuclides.4,5 The most widely applied isotope is 14C, with applications ranging from archaeological dating to studies of global climate change, and in these applications the use of AMS has greatly extended the sensitivity of 14C determination over more traditional radiochemical methods.6 Extensive use has also been made of the isotopes 10Be, 26Al and 36Cl for studies of landscape evolution and hydrology.5 More recently, the technique has been applied to biological research, using artificially produced long lived radionuclides in isotopic tracer studies.In this area, the ultrasensitivity of AMS has proved a particularly valuable asset, allowing the use of such tracers in human studies at acceptably low radiation doses.Successful applications using 14C,7 41Ca,8 and 26Al3,9–13 have been reported, and we have recently extended the potential biomedical use of AMS to the actinide nuclides.14,15 The first application of 26Al AMS to tracer studies in humans was carried out at the University of Manchester, using the accelerator at the late UK Nuclear Stucture Facility, Daresbury. 9 Following the closure of this accelerator in 1993, this research programme has been continued in a collaboration between the Manchester group and the Department of Nuclear Physics, Australian National University (ANU).16 The methodology of the AMS technique now described is that currently employed for 26Al measurements at the ANU.Accelerator Mass Spectrometry Principles In the most common AMS configuration, negative ions of the tracer isotope and its abundant stable counterpart(s) are Table 1 The isotopes of aluminium Isotope mass number Radioactive half-life 25 7.2 s 26 716 000 y 27 Stable 28 2.3 min 29 6.6 min Analyst, October 1997, Vol. 122 (1049–1055) 1049generated in a Cs sputter source, passed through a magnetic sector and accelerated through a potential of several MV to a positive terminal. At the terminal, passage through a thin foil or low pressure gas generates positive ions of high charge, which then accelerate back to ground potential, where they pass through further electrostatic and magnetic selection. Individual ions of the tracer isotope are counted by standard high-energy nuclear detection techniques, and the abundant isotope is quantified by measurement of its beam current at some point in the system.The AMS system, as applied to 26Al at the Australian National University, is shown schematically in Fig. 1.17 In outline, its mode of operation is as follows. 1. An ion source produces negative ions from a suitable aluminium-containing compound, generally aluminium oxide (Al2O3). The negative ion beam contains, amongst others, the ion species of interest, Al2. 2. A first magnetic analysis (the injector magnet) selects ions of the required mass-to-charge ratio, in this case m/z 26 or 27 (the magnetic field is cycled between the two settings and the isotopes are selected alternately). 3. These ions are accelerated to the positive high voltage terminal of the accelerator, where they pass through a very thin (approximately 5 mg cm22) carbon foil. Atomic species, such as Al2, are stripped of several electrons, and the resulting positive ions experience a further acceleration back to ground potential. At the ANU, an accelerating potential of 11.4 MV is used for 26Al analysis, and under these conditions about 30% of the 26Al ions are in the 7+ charge state after stripping, with the majority of the remainder distributed over the range +5 to +9.Most importantly, any molecular ions which passed the first magnetic analysis are fully dissociated and partially stripped of electrons, giving a range of positive ions, mostly of low mass, which are also further accelerated. 4.After acceleration, a further high resolution magnetic analysis (the analysing magnet) selects the ionic species of interest at a well defined energy. In our example, this would be either 26Al7+ or 27Al7+ at about 90 MeV. Ions originating from molecular fragments do not in general have the correct magnetic rigidity to pass this analysis, although charge exchange processes during transit may result in a small proportion of anomalous ions passing the analysing magnet.Included in this are small amounts of 25Mg (from 25MgH2 ) and 26Mg (from 26MgH2), which accompany 26Al and 27Al, respectively. 5. Finally, the accelerated ions are detected in a gas ionisation chamber, which is able to identify each arriving ion unambiguously. It does this by measuring the total energy of the ion, and by making multiple measurements of the energy loss (which is in part dependent on nuclear charge) as the ion slows in the detector gas. Each species thus occupies a unique position in a multi-dimensional space, in marked contrast to low-energy mass spectrometry, where a detector capable of detecting individual ions is not able to discriminate between, say, 26Al and 26Mg, or more generally, 26Al and molecular ions of m/z 26. 6. Since the quantity of interest is the isotope ratio, 26Al/27Al, it is also necessary to measure the intensity of the stable isotope.In our system, this is done periodically by switching the first (low-energy) mass analysis to mass 27 and changing the terminal voltage to 11.0 MV, in order to give 27Al7+ ions the same magnetic rigidity as the 26Al7+ ions (the corresponding energy is 88 MeV). In this way, the 27Al7+ ions can be transmitted to a Faraday cup inserted (during the 27Al cycle) immediately in front of the ionisation detector. The 27Al7+ intensity is thus measured as an ion current. 7. There are also a number of electrostatic and magnetic quadrupole lenses and steerers, used to optimise transmission of the Al beam through the machine from the ion source to the detector, and a recently added velocity filter (Wien filter) after the analysing magnet, which is set to select against extraneous ions accompanying 26Al or 27Al.The overall transmission efficiency of the machine is 14%, which is largely determined by the abundance (about 30%) of the 7+ charge state. Ion Source Ion production So far, a first generation caesium sputter source (Hiconex Model 83218) has been used for biological AMS.Samples of Al2O3 (typically 0.2–5 mg) are mixed with approximately the same mass of silver powder (which serves as an electrical and thermal conductor) and packed into 2 mm stainless-steel grub screws inserted into cylindrical copper blocks (approximately 1 cm 3 1 cm diameter). These blocks are loaded into a 12-position sample wheel and mounted in the ion source. Negative ions are produced as a result of bombarding the Al2O3–Ag surface under high vacuum with a beam of 22 keV Cs+ ions.The beam both dislodges atomic/molecular ions from the surface and deposits Cs atoms. The deposited Cs (which has a very low ionisation energy) facilitates electron transfer from the cathode to emerging ions, promoting the formation (in order of abundance) of O2, Ag2, AlO2, Al2 and other negative ions, which are extracted by the applied electric field.Typically, beam currents of up to 50 and 500 nA for the ions Al2 and AlO2, respectively, are produced from the Al2O3–Ag mixture. The negative ion beam may also include MgO2, MgH2, BO2, C22, CN2 and other ions from impurities in the source material. However, although Mg, which is a common biological element, may well be present in macroscopic amounts (up to 1%) in the Al2O3, the ion Mg2 is not stable and is, therefore, not present to any extent in the beam, an exclusion which is of major importance in removing the potential isobaric interference from 26Mg (11% natural abundance).Thus, although the molecular AlO2 beam is generally far more intense than the Al2 beam, the former species is not selected in our application because the analogous Mg ions (MgO2) are stable, and formed as readily as AlO2, so that the discrimination against the 26Mg isobar would Fig. 1 The accelerator mass spectrometer system at the Australian National University (note the position of the recently introduced Wien filter in the detector beam line). 1050 Analyst, October 1997, Vol. 122be lost (an alternative approach which has been used previously9 is to select the molecular ion beam and fully strip the metals to Al13+ and Mg12+, which can be separated in a magnetic spectrometer). More modern high intensity sources can produce Al2 beams up to 1 mA,19 and a source of this type, with a 32-position sample wheel, is now in use at ANU. Sample material Although metallic aluminium would in principle be the ideal ion source material for aluminium AMS, for biological work the oxide or phosphate is more convenient as these compounds are readily prepared from organic matrices, whereas production of the metal would be more problematic.However, both these compounds are thermal and electrical insulators, and function much more effectively when mixed with silver powder, which prevents the build-up of surface charge and removes heat by conduction. We have shown earlier that beam currents obtained from alumina samples mixed with silver powder passed through a well defined maximum between 50 and 80% Ag.9 Aluminium oxide or phosphate samples are, therefore, mixed with an approximately equal mass of silver powder before pressing, although for very small samples ( < 1 mg) much higher ratios of silver powder ranging from 1 : 2 up to 1 : 10 are used to bulk out the sample.In order to reduce the possibility of cross-contamination between samples, and to obtain measurements with reasonable counting statistics, the amount of 27Al added to each sample is adjusted on the basis of an estimate of 26Al content, to generate an isotope ratio (26Al/27Al) in the range 1028–10212.To produce stable and sustainable Al2 beam currents, the amount of 27Al is optimally at least 1 mg, although measurements have been obtained with amounts down to 0.1 mg bulked out with powdered silver. Measurement of Isotopes Detection of 26Al The gas ionisation chamber used for this work has been described previously17 (see Fig. 2). Propane (about 150 Torr) is retained in the detector by a 1.5 mm thick Mylar window. Incoming high energy ions pass through the window (energy loss Å 3%) and traverse the length of the detector between a planar cathode and a parallel, segmented anode. The passage of the ions through the gas produces ionisation of some of the gas molecules and the electrons produced move under the influence of the applied electric field.Signals are taken from the cathode and from the various segments of the anode plane. The cathode signal is proportional to the intensity of gas ionisation, hence the total deposited energy, while each of the anode signals is proportional to the energy deposited in the region of space adjacent to the segment in question. At the energies available from tandem accelerators, the rate of energy loss depends on both the nuclear charge and instantaneous kinetic energy of the ion, and hence a particular combination of total energy and differential energy loss is characteristic of a particular isotopic species. This is illustrated in Fig. 3, which shows a twodimensional representation of some typical 26Al data. Note that, despite the high level of discrimination implicit in the AMS system, 26Al ions are far from being the only ions to reach the detector, and the ion identification capability of the detector is crucial.Whilst the initial data analysis is carried out against the two variables depicted in Fig. 3, further discrimination can be obtained by the application of the energy loss rate parameters, which essentially add additional dimensions to the spectral analysis. Other ions reaching the detector Other ionic species result from the fragmentation of mass 26 molecular ions (e.g., 10B16O2, 12C14N2, 25Mg1H2 and 24Mg2H2), and may arrive at the detector as a result of a fortuitous combination of circumstances which has a very low but finite probability. Specifically, following dissociation and stripping in the high-voltage terminal, a very small fraction undergoes a charge-changing collision with a residual gas molecule during the second stage of acceleration.If this occurs at just the right place, the ion can acquire the correct energy to pass round the final magnetic analysis and hence to the detector. For example, for an 16O ion to reach the detector, it must be injected as the BO2 molecular ion, dissociated and stripped to 3+ in the terminal and then charge exchanged to 4+ after it has experienced 37% of the second stage of acceleration.Although under normal circumstances these ions are easily distinguished from 26Al, if their arrival rate is sufficiently rapid there is a significant probability that a second ion will enter the detector within the time taken (2 ms) to collect the electrons deposited by the passage of the first.Under these conditions, the pulses from the detector electrodes will overlap (‘pile up’), and the total energy and energy loss measurements will exhibit a spread of values ranging from the values for a single ion up to the values for the sum of the two ions, depending on their relative times of arrival. This pile-up produces a non-zero background over a wide area of the 2-D spectrum (e.g., Fig. 3), including the 26Al region, and for this reason it is desirable to keep the rates low.Thus, in the past particular attention has been paid in the production of the source material to eliminate sources of B, C and N, and to reduce Mg concentrations to as low as is practicable. However, most of the problems relating to extraneous ions can also be eliminated by including a velocity Fig. 2 The gas ionisation detector. Fig. 3 Two-dimensional representation of a typical 26Al mass spectrum obtained in the absence of the Wien filter.With the filter in operation, only 26Al (shown in the defining ellipse) is observed in the 2-D spectrum. Analyst, October 1997, Vol. 122 1051filter before the detector, and the effect of this modification is described below. Wien filter A recent addition to the system is a final analysis stage consisting of crossed electric and magnetic fields, which functions as a velocity filter (commonly termed a Wien filter) and is included between the analysing magnet and the detector.This is set to allow 26Al ions to pass undeflected, but other ions which have different velocities are deflected out of the beam line and thus do not reach the detector. All of the data presented in this paper were obtained prior to the installation of the Wien filter. Subsequent experience with this device has shown that it is extremely effective in preventing any ions except 26Al from reaching the detector, i.e., the spectrum equivalent to Fig. 3 now contains only the 26Al group and nothing else.It follows that in the future it will be less necessary to reduce levels of B, C, N and Mg in the sample preparation. Detection of 27Al Measurements of the two isotopes, 26Al and 27Al, are made sequentially. The 27Al component of the sample is determined by measuring the 27Al7+ beam current impinging on a Faraday cup placed immediately in front of the detector window, during the phase of the operating cycle when the field of the injector magnet and terminal potential have been adjusted to transmit 27Al7+ through the machine.This adjustment is relatively slow (about 15 s), so that the precision with which the isotope ratio can be determined is highly dependent on the stability of the Al beam. In practice, the 27Al7+ beam is integrated for a period of up to 20 s before, during and after every 26Al counting period (usually 300–600 s). Because there is no method of monitoring beam current during the 26Al counting phase, the effective 27Al beam current during each counting period is taken as the mean of the two spanning measurements.The error associated with measurement of 27Al intensity is probably one of the more important factors limiting the precision of isotope ratio measurement. Determination of the 26Al/27Al atom ratio Most laboratories measure the 26Al/27Al ratios of samples relative to a standard of known ratio. This has the obvious advantage that variations in machine performance that might affect the transmission of the 26Al and 27Al beams differentially are nullified.However, the method has the disadvantage that the accuracy of the ratio determinations for the samples in any particular run is limited by the accuracy of the calibration measurement for that particular run. However, because in our system the two beams traverse identical paths through the machine, it is possible to adopt an alternative approach, and to make a quasi-absolute measurement of isotope ratio.Thus, both the 26Al and 27Al intensities can be measured by reference to a primary physical quantity, namely time or current, respectively: the 26Al atoms are individually counted by the detector and the 27Al beam current is measured directly. Provided that the different stripping efficiencies of the two isotopes (0.339 for 27Al7+ at 11.00 MeV and 0.312 for 26Al7+ at 11.42 MeV) are taken into account, the isotope ratio can then be calculated. We adopted this approach, and merely use concurrent measurements on a reference material (26Al doped Al2O3, of known 26Al/27Al ratio) to monitor both the short and long term stability of the system. The accuracy and precision of measurement are considered below (see Instrumental Performance).Instrumental Performance Baseline It should be emphasised that, in contrast to most other analytical techniques, including conventional mass spectrometry, AMS can be background free. In the absence of detector pile-up, if no 26Al ions reach the detector, no counts will be recorded in the 26Al region of the 2-D spectrum.The detection limit is determined, therefore, not by an unresolvable background, but by the output and efficiency of the ion source, the efficiency of transmission through the AMS system and the period of the observation (which affects the statistics of counting). For measurements carried out on pure aluminium oxide (i.e., no 26Al; a ‘machine blank’), over a 600 s counting period, it is rare that more than one count will be recorded, corresponding to a nominal 26Al/27Al ratio of about 10214.Under these circumstances, the lower limit to the useful range of measurement is about 10213, as in practice it is only above this ratio that the precision of measurement ceases to be determined by the statistics of counting. However, this limit is more than adequate for most biomedical applications. Accuracy and precision of measurement As described previously, sample 26Al/27Al ratios are determined absolutely, without the need for instrument calibration.However, to test the accuracy of this procedure, and to determine short and long term reproducibility of measurement, the isotope ratio of a standard material (26Al-containing Al2O3) is generally measured within each 12-sample set. The standard (supplied by S. Vogt, Purdue University, USA) was prepared by serial dilution of an 26Al stock solution, originally characterised by gamma spectrometry, and the calculated 26Al/27Al ratio is 2.78 3 10210 (subject to an estimated uncertainty of 4%, mainly arising from the uncertainty with which the radioactive half life of 26Al has been determined).The mean value determined by AMS over 4 years (Table 2) is 2.75 3 10210 (RSD 6.5%, n = 66), in good agreement with the nominal value. However, the absolute accuracy of measurement is never an issue in biomedical applications involving the use of isotopic tracers, as the 26Al content of the original tracer is always determined alongside the working samples, and experimental results are thus internally calibrated.The precision of the measurement technique was determined from repeat measurements on the calibration material during the course of a single run, i.e., a measurement period where the overall tuning of the accelerator is not altered. Under these circumstances, the RSD of a number of measurements of 26Al/ 27Al ratio varies between 3 and 6% (n = 6).Cross-contamination Cross-contamination between samples can occur in the ion source. This possibility was investigated by placing aluminium Table 2 Repeat measurements on the 26Al/27Al standard over a 4 year period. The values obtained show a normal distribution, with a mean value of 2.75 310210 (RSD = 6.5%, n = 66), not significantly different from the nominal value (2.78 310210; S. Vogt, unpublished). There has also been no significant drift over this period, as is demonstrated by the means for each sequential 12 month period Year Number Mean RSD (%) 1992–93 16 2.69 4.99 1993–94 19 2.69 5.50 1994–95 11 2.67 7.65 1995–96 20 2.91 4.06 All years 66 2.75 6.47 1052 Analyst, October 1997, Vol. 122oxide samples of widely differing 26Al/27Al ratios ( < 10214 and 10210) in neighbouring positions in the sample wheel. The blank sample was measured for a 20 min period both before and after sputtering the higher level sample for the same period.The blank was not significantly affected by the sputtering of the higher ratio sample, recording 0 and 1 count for the two measurements, respectively, where 1 count would correspond to a ratio 26Al/27Al = 3 3 10214. This implies that crosscontamination was in this instance @1 part in 104. Interferences 26Mg (11% of natural Mg) is the only stable nuclide isobaric with 26Al, and presents a potential problem of interference, as magnesium is usually a major constituent of biological materials and the 26Mg/26Al ratio in typical unprocessed samples may range from 108 to 1012.Selective isolation of Al during sample preparation may reduce this ratio by 3–4 orders of magnitude (see later), but the Mg content of the final sample material is invariably far higher than the 26Al content. Two factors help to eliminate the isobaric interference. First, the Mg2 ion is unstable, and 26Mg2 ions do not survive long enough to reach the high voltage terminal of the accelerator.Hence 26Mg2 is effectively eliminated by the ion source. In principle, a low energy tail on the 26MgH2 beam could permit a very small fraction of 26Mg2 containing molecular ions to pass the injector magnet and undergo the first stage of acceleration. However, the 26Mg component of this molecular ion arrives at the terminal with only 96.3% (i.e., 26/27) of the energy of the 26Al2 ions. Hence, following stripping, the 26Mg ions must undergo charge-changing collisions during the second stage of acceleration in order to reach the detector.In practice, 26Mg ions have not been observed in the detector, and this sequence of events must have an extremely low probability. Molecular ions of the lighter Mg isotopes, 25MgH2, 24MgH22 and 24Mg2H2, are all accepted by the injector magnet at m/z 26. Again, 24Mg and 25Mg ions can only reach the detector as a result of charge-changing collisions, but these ions have been observed. In practice, 25Mg ions are substantially more abundant at the detector than 24Mg ions, although counting rates rarely exceed 10 s21, and the detector provides excellent discrimination from 26Al (Fig. 3). In order to confirm experimentally that Mg in the sample does not produce interference with the 26Al signal, six Al2O3 samples were prepared, three containing 26Al and four containing Mg (see Table 3). The measured 26Al/Al ratios are apparently unaffected by the presence or otherwise of Mg, up to an Mg/Al ratio of 1 : 40.The other potential interference (i.e., effect producing spurious counts which might be asigned to 26Al) is ion pile-up in the detector; the basis for the phenomenon was described earlier. Pile-up is generally caused by the arrival of high intensities of C, N and O positive ions at the detector, resulting in turn from the transmission of mass 26 molecular ions (e.g., BO2, CN2) to the terminal. Residual C and N in samples results from incomplete oxidation of organic matter, and high B levels may result from leaching of borosilicate glass vessels during acid digestion stages. The effects of pile-up can be reduced by careful attention to the chemistry of the final stages of sample preparation, including the use of acid-etched glassware (to remove leachable boron) and prolonged high temperature ignition, if necessary in an oxygen-enriched atmosphere, of the final aluminium oxide preparation (to remove traces of carbon).However, the recent introduction of a Wien filter in the system has greatly reduced the stringency of the chemical requirements in this respect. Analysis of Biological Materials Sample Preparation General principles Sample preparation in this context consists of the conversion of a biological material into an amount of aluminium oxide or phosphate suitable for presentation as an ion source. It is assumed that the biological part of the experiment has been appropriately designed, to yield samples containing at least 10216 g of 26Al and not more than about 10 mg of 27Al.The preparation procedure consists essentially of four stages: (i) addition and homogenisation of 27Al carrier; (ii) removal of, or separation from, the organic matrix; (iii) isolation of aluminium from the inorganic matrix; and (iv) conversion of the aluminium- containing fraction to dry aluminium oxide or phosphate. Aluminium of natural isotopic abundance (i.e., 27Al) is normally added, as an aliquot of an acidic solution, to a known mass/volume of the raw sample material, prior to any chemical treatment.The Al acts as an isotope carrier, and the amount added is determined on three criteria: first, the amount must be significantly greater (ideally, at least 100 times) than the natural Al already present in the samples; second, the desirable range for the final 26Al/27Al ratio in measured material is from 10212 to 1028; and third, the minimum amount of Al2O3 which can be employed, and which will produce a stable Al2 beam for at least 30 min, is about 100 mg.Because the amount of 26Al tracer used in the biological experiments is likely to have been decided in the design of the experiment, the appropriate amount of 27Al carrier will normally fall in the range 1–10 mg (if, unusually, it is required to determine the 26Al/Al isotope ratio in the original biological sample, the 27Al concentration in the biological sample must, of course, be determined before the addition of carrier).Homogenisation of added 27Al with the sample 26Al is achieved by conversion of the entire sample into inorganic form in solution, concurrently with the removal of the organic matrix by nitric acid oxidation at elevated temperature. It is clearly important to ensure that no fractionation of Al isotopes can occur prior to this stage, when their chemical forms may be different. Removal of the organic matrix and isolation of Al from other inorganic components is required both to reduce the dilution factor (e.g., if a large amount of sodium or calcium is present) and, specifically, to reduce the Mg content to a tolerable level (target level < 1%).The organic components are normally oxidised by digestion with concentrated nitric acid at elevated temperature and pressure, using microwave heating. Methods employed to extract Al from the strongly acidic residue, depending on circumstance, are selective precipitation of Al as the 8-hydroxyquinoline derivative20 or solvent extraction of Al as the acetylacetonate.21 In each case, once the Al component has been isolated as a solid phase, high temperature ashing in a muffle furnace (with oxygen flow if necessary) is generally sufficient to produce the ion source material.The purities of the Al compounds produced were estimated by elemental analysis (ICP-OES for Al, Mg, Ca, Na and K), and the stepwise and overall yields were determined using 26Al Table 3 Test of the effect of added Mg on the AMS determination of the 26Al content of Al2O3 containing an aliquot (Y) of 26Al 1012(26Al/27Al) 26Al 106(26Mg/Al) (measured) 0 0 < 0.1 0 2.5 < 0.1 0 2.5 < 0.1 Y 0 238 Y 2.5 249 Y 2.5 238 Analyst, October 1997, Vol. 122 1053tracer and radiochemical methods (liquid scintillation counting and gamma spectrometry). In most circumstances, the overall yields are 70–90% and metal atom impurities below 1% (i.e., given as atom ratios relative to Al), which are generally satisfactory.Provided that sufficient material is produced to make an ion source, the chemical processing yield has no effect on the accuracy of the AMS determination, as once the 27Al carrier has been added the 26Al/27Al isotope ratio will not alter significantly in the subsequent chemical processing. In all preparations described below, analytical-reagent grade reagents (AnalaR grade, BDH/Merck, Poole, Dorset, UK) were employed, and acid leached, previously unused glass- or plasticware was employed for 26Al sample preparations.Blood and soft tissues Blood and plasma sample volumes were typically in the range 1–5 ml and the soft tissue mass was normally less than 10 mg. The relatively small size of the biological samples allowed the removal of organic matter by oxidation under pressure with concentrated nitric acid [10 ml at 180 °C at about 10 bar, using a CEM (Matthews, NC, USA) Model 2000 microwave digestion system], following the addition of the appropriate amount of 27Al carrier (usually 1–5 mg Al).The resulting strongly acidic solutions were evaporated to dryness in acid leached glass beakers on a hot-plate. The residue was dissolved in hot nitric acid (1 m; 10 ml), transferred into a 50 ml centrifuge tube and treated with sodium acetate (1 m, 10 ml, as a pH buffer) and a substantial excess of 8-hydroxyquinoline20 (5% in 2 m acetic acid, using 5 ml per 1 mg of 27Al).The pH was then adjusted to 5.8 with concentrated ammonia solution and the precipitated aluminium 8-hydroxyquinolinate separated by centrifugation, washed with distilled water and dried at 120 °C. The material was transferred into a porcelain crucible and converted into aluminium oxide by heating in air in a muffle furnace (raised slowly to 800 °C and held for 8 h). The application of this technique allowed the separation of Al from Na, K, Mg and Ca present in the original blood/tissue.Aluminium starts to form a neutral complex with 8-hydroxyquinoline at about pH 4.5, and this is precipitated quantitatively above pH 5.5. Although Mg and Ca form complexes with 8-hydroxyquinoline, it is claimed20 that these compounds do not precipitate appreciably at this pH, and we confirmed this in our chemical systems (see Fig. 4). Analysis of the 8-hydroxyquinoline precipitate showed the Al yield to be about 90% (based on the original 27Al addition), with contamination by Mg or Ca below 1% (atom ratio versus Al).Urine Urine samples (generally 24 h collections) were acidified with nitric acid (10 ml of acid per litre of urine) and refrigerated. The first stage of the extraction method was designed to remove aluminium from the organic material, by the coprecipitation of aluminium, calcium and magnesium phosphates, using the calcium and magnesium naturally present in the urine (normally about 2.5–7.5 and 3.3–4.9 mmol d21 respectively). The appropriate amount of 27Al carrier (5–50 mg) and a large excess of sodium phosphate (1 m NaH2PO4 at 20 ml per litre of urine) were added and the solution was heated at 80 °C for 30 min to bring any solids present into solution.After cooling to room temperature, the solution was brought to pH 8–9 with 6 m ammonia solution. After about 12 h, the mixed phosphate precipitate was isolated by decantation and centrifugation of the solution and was then redissolved in nitric acid (8 m; 20 ml).Citric acid (6 g) was added and the pH brought to 5.5 by the addition of ammonia solution (the citric acid just prevents the precipitation of calcium phosphate at this pH). Pentane- 2,4-dione (acetylacetone) (5 ml) was added and the mixture shaken vigorously for 5 min, followed by 2-methylbutan-1-one (isobutylmethyl ketone; IBMK) (15 ml). Following phase separation (assisted by mild centrifugation if necessary), the organic layer was removed, further IBMK (15 ml) added and the phases separated.The yield of Al at this point is about 70% (see Table 4), the loss of Al occurring probably because some is trapped by the calcium phosphate solid which precipitates. To recover this, the aqueous solution was re-acidified (nitric acid) to dissolve the precipitate, and the entire extraction procedure repeated, giving an overall yield of about 90–95%. Aluminium was recovered by back-extraction from the combined organic phase with nitric acid (1.0 m; two portions of 10 ml). The aqueous phase was washed with two portions of IBMK (5 ml) to remove any residual acetylacetone, and subsequently evaporated to dryness on a hot-plate in a small beaker.Two portions of nitric acid (16 m; 5 ml) were added sequentially to the residue, each followed by boiling and evaporation to dryness, and the final residue was heated to high temperature (about 300 °C) on the hot-plate. The dry residue (which was grey– white) was converted into Al2O3 (white) by ignition at 800 °C as described previously (the overall yield from 27Al added to original urine was 70–90%).Performance Characteristics Detection limits The lower limit of ratio measurement, as discussed earlier, is around 10214. Measurements at this level could be made without undue difficulty on samples containing 0.1 mg of 27Al, implying a detection limit of about 10–18 g of 26Al (about 20 000 atoms). Linear range The physics of measurement determine that the response is essentially linear.The upper limit of practicable measurement is determined by the saturation rate of the gas ionisation detector, about 104 counts s21. The lower limit, as discussed earlier, is Fig. 4 pH dependence of 8-hydroxyquinoline precipitation for Al3+, Ca2+ and Mg2+ Table 4 26Al recovery by acetylacetone extraction from calcium phosphate solution (organic phase, IBMK; aqueous pH, 5.5, containing excess citric acid) Operation 26Al recovered (%) First extraction (2) 70 Second extraction (2) 25 Total organic phase 95 Back-extraction (2) 95 Residual, aqueous phase < 3 Residual, organic phase < 3 1054 Analyst, October 1997, Vol. 122about 1023 counts s21, determined by the detector background and the practicable limits to counting time. The measurement range thus covers seven orders of magnitude, corresponding to a 26Al/27Al isotope ratio, from 10214 to 1027. Precision The short and long term reproducibility of measurement based on repeated determinations of the 26Al/27Al ratio of a calibration material have already been reported (Table 2).The RSD over 4 years is approximately 6.5%, and this figure may be taken to represent the precision of measurement of the machine itself over a long period. In order to gauge the reproducibility of measurement on a real sample material, five independent repeated measurements on a blood plasma sample containing a moderate level of 26Al were made, using the method described earlier.The results (Table 5; RSD = 8.3%) show a slightly larger variability than those for the calibration material, and this figure probably represents the reliability of measurement on a real sample under relatively good conditions. This work was supported (in part) by Withington Hospital Renal Unit Endowment Fund, the Royal Society (London) and the Engineering and Physical Sciences Research Council, UK. Accelerator facilities at the Australian National University were provided under the EPSRC/ANU Joint Agreement.References 1 Flaten, T. P., Alfrey, A. C., Birchall, J. D., Savory, J., and Yokel, R. A., J. Toxicol. Environ. Health, 1996, 48, 527. 2 Litherland, A. E., Philos. Trans. R. Soc. London, Ser A, 1987, 323, 5. 3 Day, J. P., Barker, J., King, S. J., Miller, R. V., Templar, J., Lilley, J. S., Drumm, P. V., Newton, G. W. A., Fifield, L. K., Stone, J. O. H., Allan, G. L., Edwardson, J. A., Moore, P.B., Ferrier, I. N., Priest, N. D., Newton, D., Talbot, R. J., Brock, J. H., Sanchez, L., Dobson, C. B., Itzhaki, R. F., Radunovic, A., and Bradbury, M. W. B., Nucl. Instrum. Methods Phys. Res., Sect. B, 1994, 92, 463. 4 Fifield, L. K., Ophel, T. R., Bird, J. R., Calf, G. E., Allison, G. B., and Chivas, A. R., Nucl. Instrum. Methods Phys. Res., Sect. B, 1987, 29, 114. 5 Reedy, R. C., Tuniz, C., and Fink, D., Nucl. Instrum. Methods Phys. Res., Section B, 1994, 92, 335. 6 Hedges, R.E. M., Nucl. Instrum. Methods Phys. Res., Sect. B, 1990, 52, 428. 7 Vogel, J. S., and Turtletaub, K. W., Nucl. Instrum. Methods Phys. Res., Sect. B, 1994, 92, 445. 8 Elmore, D., Bhattacharyya, M. H., Sacco-Gibson, N., and Peterson, D. P., Nucl. Instrum. Methods Phys. Res., Sect. B, 1990, 52, 531. 9 Barker, J., Day, J. P., Aitken, T. W., Charlesworth, T. R., Cunningham, R. C., Drumm, P. V., Lilley, J. S., Newton, G. W. A., and Smithson, M. J., Nucl. Instrum. Methods Phys.Res., Sect. B, 1990, 52, 540. 10 Day, J. P., Barker, J., Evans, L. J. A., Perks, J., and Seabright, P. J., Lancet, 1991, 337, 1345. 11 King, S. J., Day, J. P., Moore, P. B., Edwardson, J. A., Taylor, G. A., Fifield, L. K., and Cresswell, R. G., Nucl. Instrum. Methods Phys. Res., Ser. B, 1997, 123, 254. 12 Priest, N. D., Newton, D., Day, J. P., Talbot, R. J., and Warner, A. J., Hum. Exp. Toxicol., 1995, 14, 287. 13 Priest, N. D., Talbot, R. J., Austin, J. G., Day, J. P., King, S.J., Fifield, L. K., and Cresswell, R. G., Biometals, 1996, 9, 221. 14 Fifield, L. K., Cresswell, R. G., di Tada, M. L., Ophel, T. R., Day, J. P., Clacher, A. P., King, S. J., and Priest, N. D., Nucl. Instrum. Methods Phys. Res., Sect. B, 1996, 117, 295. 15 Fifield, L. K., Clacher, A. P., Morris, K., King, S. J., Cresswell, R. G., Day, J. P., and Livens, F. R., Nucl. Instrum. Methods Phys. Res., Ser. B, 1997, 123, 400. 16 Fifield, L. K., Allan, G. L., Stone, J. O. H., and Ophel, T.R., Nucl. Instrum. Methods Phys. Res., Sect. B, 1994, 92, 85. 17 Fifield, L. K., Ophel, T. R., Allan, G. L., Bird, J. R., and Davie, R. F., Nucl. Instrum. Methods Phys. Res., Sect. B, 1990, 52, 233. 18 Brand, K., Nucl. Instrum. Methods Phys. Res., 1977, 141, 519. 19 Middleton, R., Nucl. Instrum. Methods Phys. Res., 1984, 220, 105. 20 Vogel, A., A Textbook of Quantitative Inorganic Analysis, Longmans, London, 1978, 3rd edn., p. 516. 21 Eshelman, H. C., Dean, J.A., Menis, O., and Rains, T. C., Anal. Chem., 1959, 31, 183. Paper 7/02002C Received March 24, 1997 Accepted June 16, 1997 Table 5 Reproducibility test: fivefold replicate determination of the 26Al/ 27Al ratio in a sample of blood plasma Sample No. 1010(26Al/27Al) SD* 1 2.40 0.10 2 2.28 0.10 3 1.98 0.13 4 2.23 0.15 5 2.01 0.11 Mean: 2.18 s† 0.18 RSD (%) 8.26 * Standard deviation of the measurement based on counting statistics for 26Al. † Standard deviation of the measurements based on the observed variability.Analyst, October 1997, Vol. 122 1055 Determination of Aluminium-26 in Biological Materials by Accelerator Mass Spectrometry S. J. Kinga, C. Oldhama, J. F. Popplewella, R. S. Carlinga, J. P. Day*a, L. K. Fifieldb, R. G. Cresswellb, Kexin Liub and M. L. di Tadab a Department of Chemistry, University of Manchester, Manchester, UK M13 9PL b Department of Nuclear Physics, Australian National University, Canberra, ACT 0200, Australia E-mail: philip.day@man.ac.uk Studies of the biological chemistry of aluminium can gain significantly from the use of the long-lived isotope 26Al as a tracer, although the cost of the isotope often precludes its determination by radiochemical counting techniques.Accelerator mass spectrometry (AMS) provides an ultra-sensitive method of determination, free from isobaric interference from atomic (26Mg) or molecular species. The source materials for AMS can be aluminium oxide or phosphate, both of which can be readily prepared at a sufficient level of purity from biological substrates. Natural aluminium (27Al, 100%) is added to the preparations as a chemical yield monitor and to provide the reference for the isotope ratio measurement. 26Al/27Al ratios can be determined over the range 10214–1027, implying a limit of detection for 26Al of around 10218 g.The precision of measurement and long-term reproducibility are < 5% and < 7% (RSD), respectively.Chemical methodologies for routine measurements on blood and urine samples have been developed. Keywords: Accelerator mass spectrometry; aluminium-26; biological analysis Over the past 20 years, aluminium has emerged as an important toxic element, both in human medicine and in the wider environment as a consequence of acidic precipitation.1 However, until recently, the study of the biological chemistry of aluminium was hampered by the lack of a suitable tracer isotope. Natural aluminium is monoisotopic (27Al; Table 1) and, of the accessible radioisotopes, only 26Al is sufficiently longlived for its practical use in tracer experiments in living systems.However, the combination of high cost and low specific activity render this isotope too expensive for general use as a radiotracer, and the potential isobaric interference from the abundant magnesium isotope, 26Mg, makes the use of conventional mass spectrometry (even at high resolution) all but impracticable.The isobar problems can, in principle, be overcome by the use of accelerator mass spectrometry,2 and since the late 1980s developments in this technique have facilitated a number of biological and biomedical studies, many involving human subjects.3 The chemical methodology needed to couple biological experiments to the sophisticated physics of AMS measurement has certain unique characteristics, and it is the object of this paper to explain, justify and quantify the procedures which have been developed.AMS is a mass spectrometric technique for the determination of trace amounts of stable or long-lived radioactive isotopes, in which a tandem electrostatic particle accelerator is coupled to a number of magnetic, and sometimes also electrostatic, dispersing elements.2 The concentration of the rare isotope of interest is measured by identifying and counting individual monatomic ions with nuclear detection techniques after acceleration to energies in the MeV range.By comparing the counting rate with the ion current of one of the element’s major isotopes, the concentration of the rare isotope can be determined. The principal attributes of the technique are an extremely high sensitivity, almost total selectivity and an exceptionally wide range of isotope concentration measurement. Detection limits down to a few thousand atoms are achievable, often against measurement backgrounds near to zero and, in contradistinction to conventional mass spectometry, isobaric and molecular interferences can be completely eliminated. Isotope ratio measurements in the range 10214–1027 are readily achievable. Historically, the major applications of AMS have been to the natural environment, associated with the measurement of long lived cosmogenic radionuclides.4,5 The most widely applied isotope is 14C, with applications ranging from archaeological dating to studies of global climate change, and in these applications the use of AMS has greatly extended the sensitivity of 14C determination over more traditional radiochemical methods.6 Extensive use has also been made of the isotopes 10Be, 26Al and 36Cl for studies of landscape evolution and hydrology.5 More recently, the technique has been applied to biological research, using artificially produced long lived radionuclides in isotopic tracer studies. In this area, the ultrasensitivity of AMS has proved a particularly valuable asset, allowing the use of such tracers in human studies at acceptably low radiation doses.Successful applications using 14C,7 41Ca,8 and 26Al3,9–13 have been reported, and we have recently extended the potential biomedical use of AMS to the actinide nuclides.14,15 The first application of 26Al AMS to tracer studies in humans was carried out at the University of Manchester, using the accelerator at the late UK Nuclear Stucture Facility, Daresbury. 9 Following the closure of this accelerator in 1993, this research programme has been continued in a collaboration between the Manchester group and the Department of Nuclear Physics, Australian National University (ANU).16 The methodology of the AMS technique now described is that currently employed for 26Al measurements at the ANU.Accelerator Mass Spectrometry Principles In the most common AMS configuration, negative ions of the tracer isotope and its abundant stable counterpart(s) are Table 1 The isotopes of aluminium Isotope mass number Radioactive half-life 25 7.2 s 26 716 000 y 27 Stable 28 2.3 min 29 6.6 min Analyst, October 1997, Vol. 122 (1049–1055) 1049generated in a Cs sputter source, passed through a magnetic sector and accelerated through a potential of several MV to a positive terminal. At the terminal, passage through a thin foil or low pressure gas generates positive ions of high charge, which then accelerate back to ground potential, where they pass through further electrostatic and magnetic selection.Individual ions of the tracer isotope are counted by standard high-energy nuclear detection techniques, and the abundant isotope is quantified by measurement of its beam current at some point in the system. The AMS system, as applied to 26Al at the Australian National University, is shown schematically in Fig. 1.17 In outline, its mode of operation is as follows. 1. An ion source produces negative ions from a suitable aluminium-containing compound, generally aluminium oxide (Al2O3).The negative ion beam contains, amongst others, the ion species of interest, Al2. 2. A first magnetic analysis (the injector magnet) selects ions of the required mass-to-charge ratio, in this case m/z 26 or 27 (the magnetic field is cycled between the two settings and the isotopes are selected alternately). 3. These ions are accelerated to the positive high voltage terminal of the accelerator, where they pass through a very thin (approximately 5 mg cm22) carbon foil.Atomic species, such as Al2, are stripped of several electrons, and the resulting positive ions experience a further acceleration back to ground potential. At the ANU, an accelerating potential of 11.4 MV is used for 26Al analysis, and under these conditions about 30% of the 26Al ions are in the 7+ charge state after stripping, with the majority of the remainder distributed over the range +5 to +9. Most importantly, any molecular ions which passed the first magnetic analysis are fully dissociated and partially stripped of electrons, giving a range of positive ions, mostly of low mass, which are also further accelerated. 4. After acceleration, a further high resolution magnetic analysis (the analysing magnet) selects the ionic species of interest at a well defined energy. In our example, this would be either 26Al7+ or 27Al7+ at about 90 MeV. Ions originating from molecular fragments do not in general have the correct magnetic rigidity to pass this analysis, although charge exchange processes during transit may result in a small proportion of anomalous ions passing the analysing magnet.Included in this are small amounts of 25Mg (from 25MgH2 ) and 26Mg (from 26MgH2), which accompany 26Al and 27Al, respectively. 5. Finally, the accelerated ions are detected in a gas ionisation chamber, which is able to identify each arriving ion unambiguously. It does this by measuring the total energy of the ion, and by making multiple measurements of the energy loss (which is in part dependent on nuclear charge) as the ion slows in the detector gas.Each species thus occupies a unique position in a multi-dimensional space, in marked contrast to low-energy mass spectrometry, where a detector capable of detecting individual ions is not able to discriminate between, say, 26Al and 26Mg, or more generally, 26Al and molecular ions of m/z 26. 6. Since the quantity of interest is the isotope ratio, 26Al/27Al, it is also necessary to measure the intensity of the stable isotope.In our system, this is done periodically by switching the first (low-energy) mass analysis to mass 27 and changing the terminal voltage to 11.0 MV, in order to give 27Al7+ ions the same magnetic rigidity as the 26Al7+ ions (the corresponding energy is 88 MeV). In this way, the 27Al7+ ions can be transmitted to a Faraday cup inserted (during the 27Al cycle) immediately in front of the ionisation detector.The 27Al7+ intensity is thus measured as an ion current. 7. There are also a number of electrostatic and magnetic quadrupole lenses and steerers, used to optimise transmission of the Al beam through the machine from the ion source to the detector, and a recently added velocity filter (Wien filter) after the analysing magnet, which is set to select against extraneous ions accompanying 26Al or 27Al. The overall transmission efficiency of the machine is 14%, which is largely determined by the abundance (about 30%) of the 7+ charge state.Ion Source Ion production So far, a first generation caesium sputter source (Hiconex Model 83218) has been used for biological AMS. Samples of Al2O3 (typically 0.2–5 mg) are mixed with approximately the same mass of silver powder (which serves as an electrical and thermal conductor) and packed into 2 mm stainless-steel grub screws inserted into cylindrical copper blocks (approximately 1 cm 3 1 cm diameter).These blocks are loaded into a 12-position sample wheel and mounted in the ion source. Negative ions are produced as a result of bombarding the Al2O3–Ag surface under high vacuum with a beam of 22 keV Cs+ ions. The beam both dislodges atomic/molecular ions from the surface and deposits Cs atoms. The deposited Cs (which has a very low ionisation energy) facilitates electron transfer from the cathode to emerging ions, promoting the formation (in order of abundance) of O2, Ag2, AlO2, Al2 and other negative ions, which are extracted by the applied electric field.Typically, beam currents of up to 50 and 500 nA for the ions Al2 and AlO2, respectively, are produced from the Al2O3–Ag mixture. The negative ion beam may also include MgO2, MgH2, BO2, C22, CN2 and other ions from impurities in the source material. However, although Mg, which is a common biological element, may well be present in macroscopic amounts (up to 1%) in the Al2O3, the ion Mg2 is not stable and is, therefore, not present to any extent in the beam, an exclusion which is of major importance in removing the potential isobaric interference from 26Mg (11% natural abundance).Thus, although the molecular AlO2 beam is generally far more intense than the Al2 beam, the former species is not selected in our application because the analogous Mg ions (MgO2) are stable, and formed as readily as AlO2, so that the discrimination against the 26Mg isobar would Fig. 1 The accelerator mass spectrometer system at the Australian National University (note the position of the recently introduced Wien filter in the detector beam line). 1050 Analyst, October 1997, Vol. 122be lost (an alternative approach which has been used previously9 is to select the molecular ion beam and fully strip the metals to Al13+ and Mg12+, which can be separated in a magnetic spectrometer). More modern high intensity sources can produce Al2 beams up to 1 mA,19 and a source of this type, with a 32-position sample wheel, is now in use at ANU. Sample material Although metallic aluminium would in principle be the ideal ion source material for aluminium AMS, for biological work the oxide or phosphate is more convenient as these compounds are readily prepared from organic matrices, whereas production of the metal would be more problematic.However, both these compounds are thermal and electrical insulators, and function much more effectively when mixed with silver powder, which prevents the build-up of surface charge and removes heat by conduction.We have shown earlier that beam currents obtained from alumina samples mixed with silver powder passed through a well defined maximum between 50 and 80% Ag.9 Aluminium oxide or phosphate samples are, therefore, mixed with an approximately equal mass of silver powder before pressing, although for very small samples ( < 1 mg) much higher ratios of silver powder ranging from 1 : 2 up to 1 : 10 are used to bulk out the sample.In order to reduce the possibility of cross-contamination between samples, and to obtain measurements with reasonable counting statistics, the amount of 27Al added to each sample is adjusted on the basis of an estimate of 26Al content, to generate an isotope ratio (26Al/27Al) in the range 1028–10212. To produce stable and sustainable Al2 beam currents, the amount of 27Al is optimally at least 1 mg, although measurements have been obtained with amounts down to 0.1 mg bulked out with powdered silver.Measurement of Isotopes Detection of 26Al The gas ionisation chamber used for this work has been described previously17 (see Fig. 2). Propane (about 150 Torr) is retained in the detector by a 1.5 mm thick Mylar window. Incoming high energy ions pass through the window (energy loss Å 3%) and traverse the length of the detector between a planar cathode and a parallel, segmented anode. The passage of the ions through the gas produces ionisation of some of the gas molecules and the electrons produced move under the influence of the applied electric field.Signals are taken from the cathode and from the various segments of the anode plane. The cathode signal is proportional to the intensity of gas ionisation, hence the total deposited energy, while each of the anode signals is proportional to the energy deposited in the region of space adjacent to the segment in question.At the energies available from tandem accelerators, the rate of energy loss depends on both the nuclear charge and instantaneous kinetic energy of the ion, and hence a particular combination of total energy and differential energy loss is characteristic of a particular isotopic species. This is illustrated in Fig. 3, which shows a twodimensional representation of some typical 26Al data. Note that, despite the high level of discrimination implicit in the AMS system, 26Al ions are far from being the only ions to reach the detector, and the ion identification capability of the detector is crucial.Whilst the initial data analysis is carried out against the two variables depicted in Fig. 3, further discrimination can be obtained by the application of the energy loss rate parameters, which essentially add additional dimensions to the spectral analysis. Other ions reaching the detector Other ionic species result from the fragmentation of mass 26 molecular ions (e.g., 10B16O2, 12C14N2, 25Mg1H2 and 24Mg2H2), and may arrive at the detector as a result of a fortuitous combination of circumstances which has a very low but finite probability.Specifically, following dissociation and stripping in the high-voltage terminal, a very small fraction undergoes a charge-changing collision with a residual gas molecule during the second stage of acceleration. If this occurs at just the right place, the ion can acquire the correct energy to pass round the final magnetic analysis and hence to the detector.For example, for an 16O ion to reach the detector, it must be injected as the BO2 molecular ion, dissociated and stripped to 3+ in the terminal and then charge exchanged to 4+ after it has experienced 37% of the second stage of acceleration. Although under normal circumstances these ions are easily distinguished from 26Al, if their arrival rate is sufficiently rapid there is a significant probability that a second ion will enter the detector within the time taken (2 ms) to collect the electrons deposited by the passage of the first.Under these conditions, the pulses from the detector electrodes will overlap (‘pile up’), and the total energy and energy loss measurements will exhibit a spread of values ranging from the values for a single ion up to the values for the sum of the two ions, depending on their relative times of arrival. This pile-up produces a non-zero background over a wide area of the 2-D spectrum (e.g., Fig. 3), including the 26Al region, and for this reason it is desirable to keep the rates low. Thus, in the past particular attention has been paid in the production of the source material to eliminate sources of B, C and N, and to reduce Mg concentrations to as low as is practicable. However, most of the problems relating to extraneous ions can also be eliminated by including a velocity Fig. 2 The gas ionisation detector. Fig. 3 Two-dimensional representation of a typical 26Al mass spectrum obtained in the absence of the Wien filter. With the filter in operation, only 26Al (shown in the defining ellipse) is observed in the 2-D spectrum.Analyst, October 1997, Vol. 122 1051filter before the detector, and the effect of this modification is described below. Wien filter A recent addition to the system is a final analysis stage consisting of crossed electric and magnetic fields, which functions as a velocity filter (commonly termed a Wien filter) and is included between the analysing magnet and the detector.This is set to allow 26Al ions to pass undeflected, but other ions which have different velocities are deflected out of the beam line and thus do not reach the detector. All of the data presented in this paper were obtained prior to the installation of the Wien filter. Subsequent experience with this device has shown that it is extremely effective in preventing any ions except 26Al from reaching the detector, i.e., the spectrum equivalent to Fig. 3 now contains only the 26Al group and nothing else. It follows that in the future it will be less necessary to reduce levels of B, C, N and Mg in the sample preparation. Detection of 27Al Measurements of the two isotopes, 26Al and 27Al, are made sequentially. The 27Al component of the sample is determined by measuring the 27Al7+ beam current impinging on a Faraday cup placed immediately in front of the detector window, during the phase of the operating cycle when the field of the injector magnet and terminal potential have been adjusted to transmit 27Al7+ through the machine.This adjustment is relatively slow (about 15 s), so that the precision with which the isotope ratio can be determined is highly dependent on the stability of the Al beam. In practice, the 27Al7+ beam is integrated for a period of up to 20 s before, during and after every 26Al counting period (usually 300–600 s).Because there is no method of monitoring beam current during the 26Al counting phase, the effective 27Al beam current during each counting period is taken as the mean of the two spanning measurements. The error associated with measurement of 27Al intensity is probably one of the more important factors limiting the precision of isotope ratio measurement. Determination of the 26Al/27Al atom ratio Most laboratories measure the 26Al/27Al ratios of samples relative to a standard of known ratio.This has the obvious advantage that variations in machine performance that might affect the transmission of the 26Al and 27Al beams differentially are nullified. However, the method has the disadvantage that the accuracy of the ratio determinations for the samples in any particular run is limited by the accuracy of the calibration measurement for that particular run. However, because in our system the two beams traverse identical paths through the machine, it is possible to adopt an alternative approach, and to make a quasi-absolute measurement of isotope ratio.Thus, both the 26Al and 27Al intensities can be measured by reference to a primary physical quantity, namely time or current, respectively: the 26Al atoms are individually counted by the detector and the 27Al beam current is measured directly. Provided that the different stripping efficiencies of the two isotopes (0.339 for 27Al7+ at 11.00 MeV and 0.312 for 26Al7+ at 11.42 MeV) are taken into account, the isotope ratio can then be calculated.We adopted this approach, and merely use concurrent measurements on a reference material (26Al doped Al2O3, of known 26Al/27Al ratio) to monitor both the short and long term stability of the system. The accuracy and precision of measurement are considered below (see Instrumental Performance). Instrumental Performance Baseline It should be emphasised that, in contrast to most other analytical techniques, including conventional mass spectrometry, AMS can be background free.In the absence of detector pile-up, if no 26Al ions reach the detector, no counts will be recorded in the 26Al region of the 2-D spectrum. The detection limit is determined, therefore, not by an unresolvable background, but by the output and efficiency of the ion source, the efficiency of transmission through the AMS system and the period of the observation (which affects the statistics of counting). For measurements carried out on pure aluminium oxide (i.e., no 26Al; a ‘machine blank’), over a 600 s counting period, it is rare that more than one count will be recorded, corresponding to a nominal 26Al/27Al ratio of about 10214.Under these circumstances, the lower limit to the useful range of measurement is about 10213, as in practice it is only above this ratio that the precision of measurement ceases to be determined by the statistics of counting. However, this limit is more than adequate for most biomedical applications. Accuracy and precision of measurement As described previously, sample 26Al/27Al ratios are determined absolutely, without the need for instrument calibration. However, to test the accuracy of this procedure, and to determine short and long term reproducibility of measurement, the isotope ratio of a standard material (26Al-containing Al2O3) is generally measured within each 12-sample set.The standard (supplied by S. Vogt, Purdue University, USA) was prepared by serial dilution of an 26Al stock solution, originally characterised by gamma spectrometry, and the calculated 26Al/27Al ratio is 2.78 3 10210 (subject to an estimated uncertainty of 4%, mainly arising from the uncertainty with which the radioactive half life of 26Al has been determined).The mean value determined by AMS over 4 years (Table 2) is 2.75 3 10210 (RSD 6.5%, n = 66), in good agreement with the nominal value. However, the absolute accuracy of measurement is never an issue in biomedical applications involving the use of isotopic tracers, as the 26Al content of the original tracer is always determined alongside the working samples, and experimental results are thus internally calibrated. The precision of the measurement technique was determined from repeat measurements on the calibration material during the course of a single run, i.e., a measurement period where the overall tuning of the accelerator is not altered. Under these circumstances, the RSD of a number of measurements of 26Al/ 27Al ratio varies between 3 and 6% (n = 6).Cross-contamination Cross-contamination between samples can occur in the ion source. This possibility was investigated by placing aluminium Table 2 Repeat measurements on the 26Al/27Al standard over a 4 year period. The values obtained show a normal distribution, with a mean value of 2.75 310210 (RSD = 6.5%, n = 66), not significantly different from the nominal value (2.78 310210; S.Vogt, unpublished). There has also been no significant drift over this period, as is demonstrated by the means for each sequential 12 month period Year Number Mean RSD (%) 1992–93 16 2.69 4.99 1993–94 19 2.69 5.50 1994–95 11 2.67 7.65 1995–96 20 2.91 4.06 All years 66 2.75 6.47 1052 Analyst, October 1997, Vol. 122oxide samples of widely differing 26Al/27Al ratios ( < 10214 and 10210) in neighbouring positions in the sample wheel. The blank sample was measured for a 20 min period both before and after sputtering the higher level sample for the same period.The blank was not significantly affected by the sputtering of the higher ratio sample, recording 0 and 1 count for the two measurements, respectively, where 1 count would correspond to a ratio 26Al/27Al = 3 3 10214. This implies that crosscontamination was in this instance @1 part in 104. Interferences 26Mg (11% of natural Mg) is the only stable nuclide isobaric with 26Al, and presents a potential problem of interference, as magnesium is usually a major constituent of biological materials and the 26Mg/26Al ratio in typical unprocessed samples may range from 108 to 1012.Selective isolation of Al during sample preparation may reduce this ratio by 3–4 orders of magnitude (see later), but the Mg content of the final sample material is invariably far higher than the 26Al content. Two factors help to eliminate the isobaric interference.First, the Mg2 ion is unstable, and 26Mg2 ions do not survive long enough to reach the high voltage terminal of the accelerator. Hence 26Mg2 is effectively eliminated by the ion source. In principle, a low energy tail on the 26MgH2 beam could permit a very small fraction of 26Mg2 containing molecular ions to pass the injector magnet and undergo the first stage of acceleration. However, the 26Mg component of this molecular ion arrives at the terminal with only 96.3% (i.e., 26/27) of the energy of the 26Al2 ions.Hence, following stripping, the 26Mg ions must undergo charge-changing collisions during the second stage of acceleration in order to reach the detector. In practice, 26Mg ions have not been observed in the detector, and this sequence of events must have an extremely low probability. Molecular ions of the lighter Mg isotopes, 25MgH2, 24MgH22 and 24Mg2H2, are all accepted by the injector magnet at m/z 26. Again, 24Mg and 25Mg ions can only reach the detector as a result of charge-changing collisions, but these ions have been observed.In practice, 25Mg ions are substantially more abundant at the detector than 24Mg ions, although counting rates rarely exceed 10 s21, and the detector provides excellent discrimination from 26Al (Fig. 3). In order to confirm experimentally that Mg in the sample does not produce interference with the 26Al signal, six Al2O3 samples were prepared, three containing 26Al and four containing Mg (see Table 3).The measured 26Al/Al ratios are apparently unaffected by the presence or otherwise of Mg, up to an Mg/Al ratio of 1 : 40. The other potential interference (i.e., effect producing spurious counts which might be asigned to 26Al) is ion pile-up in the detector; the basis for the phenomenon was described earlier. Pile-up is generally caused by the arrival of high intensities of C, N and O positive ions at the detector, resulting in turn from the transmission of mass 26 molecular ions (e.g., BO2, CN2) to the terminal. Residual C and N in samples results from incomplete oxidation of organic matter, and high B levels may result from leaching of borosilicate glass vessels during acid digestion stages.The effects of pile-up can be reduced by careful attention to the chemistry of the final stages of sample preparation, including the use of acid-etched glassware (to remove leachable boron) and prolonged high temperature ignition, if necessary in an oxygen-enriched atmosphere, of the final aluminium oxide preparation (to remove traces of carbon).However, the recent introduction of a Wien filter in the system has greatly reduced the stringency of the chemical requirements in this respect. Analysis of Biological Materials Sample Preparation General principles Sample preparation in this context consists of the conversion of a biological material into an amount of aluminium oxide or phosphate suitable for presentation as an ion source.It is assumed that the biological part of the experiment has been appropriately designed, to yield samples containing at least 10216 g of 26Al and not more than about 10 mg of 27Al. The preparation procedure consists essentially of four stages: (i) addition and homogenisation of 27Al carrier; (ii) removal of, or separation from, the organic matrix; (iii) isolation of aluminium from the inorganic matrix; and (iv) conversion of the aluminium- containing fraction to dry aluminium oxide or phosphate.Aluminium of natural isotopic abundance (i.e., 27Al) is normally added, as an aliquot of an acidic solution, to a known mass/volume of the raw sample material, prior to any chemical treatment. The Al acts as an isotope carrier, and the amount added is determined on three criteria: first, the amount must be significantly greater (ideally, at least 100 times) than the natural Al already present in the samples; second, the desirable range for the final 26Al/27Al ratio in measured material is from 10212 to 1028; and third, the minimum amount of Al2O3 which can be employed, and which will produce a stable Al2 beam for at least 30 min, is about 100 mg.Because the amount of 26Al tracer used in the biological experiments is likely to have been decided in the design of the experiment, the appropriate amount of 27Al carrier will normally fall in the range 1–10 mg (if, unusually, it is required to determine the 26Al/Al isotope ratio in the original biological sample, the 27Al concentration in the biological sample must, of course, be determined before the addition of carrier).Homogenisation of added 27Al with the sample 26Al is achieved by conversion of the entire sample into inorganic form in solution, concurrently with the removal of the organic matrix by nitric acid oxidation at elevated temperature. It is clearly important to ensure that no fractionation of Al isotopes can occur prior to this stage, when their chemical forms may be different.Removal of the organic matrix and isolation of Al from other inorganic components is required both to reduce the dilution factor (e.g., if a large amount of sodium or calcium is present) and, specifically, to reduce the Mg content to a tolerable level (target level < 1%). The organic components are normally oxidised by digestion with concentrated nitric acid at elevated temperature and pressure, using microwave heating.Methods employed to extract Al from the strongly acidic residue, depending on circumstance, are selective precipitation of Al as the 8-hydroxyquinoline derivative20 or solvent extraction of Al as the acetylacetonate.21 In each case, once the Al component has been isolated as a solid phase, high temperature ashing in a muffle furnace (with oxygen flow if necessary) is generally sufficient to produce the ion source material. The purities of the Al compounds produced were estimated by elemental analysis (ICP-OES for Al, Mg, Ca, Na and K), and the stepwise and overall yields were determined using 26Al Table 3 Test of the effect of added Mg on the AMS determination of the 26Al content of Al2O3 containing an aliquot (Y) of 26Al 1012(26Al/27Al) 26Al 106(26Mg/Al) (measured) 0 0 < 0.1 0 2.5 < 0.1 0 2.5 < 0.1 Y 0 238 Y 2.5 249 Y 2.5 238 Analyst, October 1997, Vol. 122 1053tracer and radiochemical methods (liquid scintillation counting and gamma spectrometry).In most circumstances, the overall yields are 70–90% and metal atom impurities below 1% (i.e., given as atom ratios relative to Al), which are generally satisfactory. Provided that sufficient material is produced to make an ion source, the chemical processing yield has no effect on the accuracy of the AMS determination, as once the 27Al carrier has been added the 26Al/27Al isotope ratio will not alter significantly in the subsequent chemical processing. In all preparations described below, analytical-reagent grade reagents (AnalaR grade, BDH/Merck, Poole, Dorset, UK) were employed, and acid leached, previously unused glass- or plasticware was employed for 26Al sample preparations. Blood and soft tissues Blood and plasma sample volumes were typically in the range 1–5 ml and the soft tissue mass was normally less than 10 mg.The relatively small size of the biological samples allowed the removal of organic matter by oxidation under pressure with concentrated nitric acid [10 ml at 180 °C at about 10 bar, using a CEM (Matthews, NC, USA) Model 2000 microwave digestion system], following the addition of the appropriate amount of 27Al carrier (usually 1–5 mg Al).The resulting strongly acidic solutions were evaporated to dryness in acid leached glass beakers on a hot-plate. The residue was dissolved in hot nitric acid (1 m; 10 ml), transferred into a 50 ml centrifuge tube and treated with sodium acetate (1 m, 10 ml, as a pH buffer) and a substantial excess of 8-hydroxyquinoline20 (5% in 2 m acetic acid, using 5 ml per 1 mg of 27Al).The pH was then adjusted to 5.8 with concentrated ammonia solution and the precipitated aluminium 8-hydroxyquinolinate separated by centrifugation, washed with distilled water and dried at 120 °C. The material was transferred into a porcelain crucible and converted into aluminium oxide by heating in air in a muffle furnace (raised slowly to 800 °C and held for 8 h).The application of this technique allowed the separation of Al from Na, K, Mg and Ca present in the original blood/tissue. Aluminium starts to form a neutral complex with 8-hydroxyquinoline at about pH 4.5, and this is precipitated quantitatively above pH 5.5. Although Mg and Ca form complexes with 8-hydroxyquinoline, it is claimed20 that these compounds do not precipitate appreciably at this pH, and we confirmed this in our chemical systems (see Fig. 4). Analysis of the 8-hydroxyquinoline precipitate showed the Al yield to be about 90% (based on the original 27Al addition), with contamination by Mg or Ca below 1% (atom ratio versus Al). Urine Urine samples (generally 24 h collections) were acidified with nitric acid (10 ml of acid per litre of urine) and refrigerated. The first stage of the extraction method was designed to remove aluminium from the organic material, by the coprecipitation of aluminium, calcium and magnesium phosphates, using the calcium and magnesium naturally present in the urine (normally about 2.5–7.5 and 3.3–4.9 mmol d21 respectively).The appropriate amount of 27Al carrier (5–50 mg) and a large excess of sodium phosphate (1 m NaH2PO4 at 20 ml per litre of urine) were added and the solution was heated at 80 °C for 30 min to bring any solids present into solution. After cooling to room temperature, the solution was brought to pH 8–9 with 6 m ammonia solution. After about 12 h, the mixed phosphate precipitate was isolated by decantation and centrifugation of the solution and was then redissolved in nitric acid (8 m; 20 ml).Citric acid (6 g) was added and the pH brought to 5.5 by the addition of ammonia solution (the citric acid just prevents the precipitation of calcium phosphate at this pH). Pentane- 2,4-dione (acetylacetone) (5 ml) was added and the mixture shaken vigorously for 5 min, followed by 2-methylbutan-1-one (isobutylmethyl ketone; IBMK) (15 ml).Following phase separation (assisted by mild centrifugation if necessary), the organic layer was removed, further IBMK (15 ml) added and the phases separated. The yield of Al at this point is about 70% (see Table 4), the loss of Al occurring probably because some is trapped by the calcium phosphate solid which precipitates. To recover this, the aqueous solution was re-acidified (nitric acid) to dissolve the precipitate, and the entire extraction procedure repeated, giving an overall yield of about 90–95%.Aluminium was recovered by back-extraction from the combined organic phase with nitric acid (1.0 m; two portions of 10 ml). The aqueous phase was washed with two portions of IBMK (5 ml) to remove any residual acetylacetone, and subsequently evaporated to dryness on a hot-plate in a small beaker. Two portions of nitric acid (16 m; 5 ml) were added sequentially to the residue, each followed by boiling and evaporation to dryness, and the final residue was heated to high temperature (about 300 °C) on the hot-plate.The dry residue (which was grey– white) was converted into Al2O3 (white) by ignition at 800 °C as described previously (the overall yield from 27Al added to original urine was 70–90%). Performance Characteristics Detection limits The lower limit of ratio measurement, as discussed earlier, is around 10214. Measurements at this level could be made without undue difficulty on samples containing 0.1 mg of 27Al, implying a detection limit of about 10–18 g of 26Al (about 20 000 atoms).Linear range The physics of measurement determine that the response is essentially linear. The upper limit of practicable measurement is determined by the saturation rate of the gas ionisation detector, about 104 counts s21. The lower limit, as discussed earlier, is Fig. 4 pH dependence of 8-hydroxyquinoline precipitation for Al3+, Ca2+ and Mg2+ Table 4 26Al recovery by acetylacetone extraction from calcium phosphate solution (organic phase, IBMK; aqueous pH, 5.5, containing excess citric acid) Operation 26Al recovered (%) First extraction (2) 70 Second extraction (2) 25 Total organic phase 95 Back-extraction (2) 95 Residual, aqueous phase < 3 Residual, organic phase < 3 1054 Analyst, October 1997, Vol. 122about 1023 counts s21, determined by the detector background and the practicable limits to counting time. The measurement range thus covers seven orders of magnitude, corresponding to a 26Al/27Al isotope ratio, from 10214 to 1027. Precision The short and long term reproducibility of measurement based on repeated determinations of the 26Al/27Al ratio of a calibration material have already been reported (Table 2).The RSD over 4 years is approximately 6.5%, and this figure may be taken to represent the precision of measurement of the machine itself over a long period. In order to gauge the reproducibility of measurement on a real sample material, five independent repeated measurements on a blood plasma sample containing a moderate level of 26Al were made, using the method described earlier.The results (Table 5; RSD = 8.3%) show a slightly larger variability than those for the calibration material, and this figure probably represents the reliability of measurement on a real sample under relatively good conditions. This work was supported (in part) by Withington Hospital Renal Unit Endowment Fund, the Royal Society (London) and the Engineering and Physical Sciences Research Council, UK. Accelerator facilities at the Australian National University were provided under the EPSRC/ANU Joint Agreement. References 1 Flaten, T. P., Alfrey, A. C., Birchall, J. D., Savory, J., and Yokel, R. A., J. Toxicol. Environ. Health, 1996, 48, 527. 2 Litherland, A. E., Philos. Trans. R. Soc. London, Ser A, 1987, 323, 5. 3 Day, J. P., Barker, J., King, S. J., Miller, R. V., Templar, J., Lilley, J. S., Drumm, P. V., Newton, G. W. A., Fifield, L. K., Stone, J. O. H., Allan, G. L., Edwardson, J. A., Moore, P. B., Ferrier, I. N., Priest, N. D., Newton, D., Talbot, R. J., Brock, J. H., Sanchez, L., Dobson, C. B., Itzhaki, R. F., Radunovic, A., and Bradbury, M. W. B., Nucl. Instrum. Methods Phys. Res., Sect. B, 1994, 92, 463. 4 Fifield, L. K., Ophel, T. R., Bird, J. R., Calf, G. E., Allison, G. B., and Chivas, A. R., Nucl. Instrum. Methods Phys. Res., Sect. B, 1987, 29, 114. 5 Reedy, R. C., Tuniz, C., and Fink, D., Nucl. Instrum. Methods Phys. Res., Section B, 1994, 92, 335. 6 Hedges, R. E. M., Nucl. Instrum. Methods Phys. Res., Sect. B, 1990, 52, 428. 7 Vogel, J. S., and Turtletaub, K. W., Nucl. Instrum. Methods Phys. Res., Sect. B, 1994, 92, 445. 8 Elmore, D., Bhattacharyya, M. H., Sacco-Gibson, N., and Peterson, D. P., Nucl. Instrum. Methods Phys. Res., Sect. B, 1990, 52, 531. 9 Barker, J., Day, J. P., Aitken, T. W., Charlesworth, T. R., Cunningham, R. C., Drumm, P. V., Lilley, J. S., Newton, G. W. A., and Smithson, M. J., Nucl. Instrum. Methods Phys. Res., Sect. B, 1990, 52, 540. 10 Day, J. P., Barker, J., Evans, L. J. A., Perks, J., and Seabright, P. J., Lancet, 1991, 337, 1345. 11 King, S. J., Day, J. P., Moore, P. B., Edwardson, J. A., Taylor, G. A., Fifield, L. K., and Cresswell, R. G., Nucl. Instrum. Methods Phys. Res., Ser. B, 1997, 123, 254. 12 Priest, N. D., Newton, D., Day, J. P., Talbot, R. J., and Warner, A. J., Hum. Exp. Toxicol., 1995, 14, 287. 13 Priest, N. D., Talbot, R. J., Austin, J. G., Day, J. P., King, S. J., Fifield, L. K., and Cresswell, R. G., Biometals, 1996, 9, 221. 14 Fifield, L. K., Cresswell, R. G., di Tada, M. L., Ophel, T. R., Day, J. P., Clacher, A. P., King, S. J., and Priest, N. D., Nucl. Instrum. Methods Phys. Res., Sect. B, 1996, 117, 295. 15 Fifield, L. K., Clacher, A. P., Morris, K., King, S. J., Cresswell, R. G., Day, J. P., and Livens, F. R., Nucl. Instrum. Methods Phys. Res., Ser. B, 1997, 123, 400. 16 Fifield, L. K., Allan, G. L., Stone, J. O. H., and Ophel, T. R., Nucl. Instrum. Methods Phys. Res., Sect. B, 1994, 92, 85. 17 Fifield, L. K., Ophel, T. R., Allan, G. L., Bird, J. R., and Davie, R. F., Nucl. Instrum. Methods Phys. Res., Sect. B, 1990, 52, 233. 18 Brand, K., Nucl. Instrum. Methods Phys. Res., 1977, 141, 519. 19 Middleton, R., Nucl. Instrum. Methods Phys. Res., 1984, 220, 105. 20 Vogel, A., A Textbook of Quantitative Inorganic Analysis, Longmans, London, 1978, 3rd edn., p. 516. 21 Eshelman, H. C., Dean, J. A., Menis, O., and Rains, T. C., Anal. Chem., 1959, 31, 183. Paper 7/02002C Received March 24, 1997 Accepted June 16, 1997 Table 5 Reproducibility test: fivefold replicate determination of the 26Al/ 27Al ratio in a sample of blood plasma Sample No. 1010(26Al/27Al) SD* 1 2.40 0.10 2 2.28 0.10 3 1.98 0.13 4 2.23 0.15 5 2.01 0.11 Mean: 2.18 s† 0.18 RSD (%) 8.26 * Standard deviation of the measurement based on counting statistics for 26Al. † Standard deviation of the measurements based on the observed variability. Analyst, October 1997, Vol. 122 1055
ISSN:0003-2654
DOI:10.1039/a702002c
出版商:RSC
年代:1997
数据来源: RSC
|
10. |
Speciation of Selenium and Arsenic Compounds by Capillary Electrophoresis With Hydrodynamically Modified Electroosmotic Flow and On-line Reduction of Selenium(VI) to Selenium(IV) With Hydride Generation Inductively Coupled Plasma Mass Spectrometric Detection |
|
Analyst,
Volume 122,
Issue 10,
1997,
Page 1057-1062
Matthew L. Magnuson,
Preview
|
|
摘要:
Speciation of Selenium and Arsenic Compounds by Capillary Electrophoresis With Hydrodynamically Modified Electroosmotic Flow and On-line Reduction of Selenium(VI) to Selenium(IV) With Hydride Generation Inductively Coupled Plasma Mass Spectrometric Detection Matthew L. Magnuson*a, John T. Creedb and Carol A. Brockhoffb a United States Environmental Protection Agency, National Risk Management Research Laboratory, Water Supply and Water Resources Division, Treatment Technologies Evaluation Branch, 26 W.Martin Luther King Drive, Cincinnati, OH 45268, USA b United States Environmental Protection Agency, National Exposure Research Laboratory, Human Exposure Research Division, Chemical Exposure Research Branch, 26 W. Martin Luther King Drive, Cincinnati, OH 45268, USA Capillary electrophoresis (CE) with hydride generation inductively coupled plasma mass spectrometry was used to determine four arsenicals and two selenium species. Selenate (SeVI) was reduced on-line to selenite (SeIV) by mixing the CE effluent with concentrated HCl.A microporous PTFE tube was used as a gas–liquid separator to eliminate the 40Ar37Cl and 40Ar35Cl interference from 77Se and 75As, respectively. The direction of the electroosmotic flow during CE was reversed with hydrodynamic pressure, which allowed increased freedom of buffer choice. For conventional pressure injection, method detection limits for SeIV and SeVI based on seven replicate injections were 10 and 24 pg, respectively.Recoveries of SeIV and SeVI in drinking water were measured. Keywords: Speciation; selenium; arsenic; capillary electrophoresis; hydride generation; inductively coupled plasma mass spectrometry Selenium is an essential nutrient but is considered toxic at higher concentrations. Like arsenic, the toxicity of selenium is related to the oxidation state. Selenite, SeIV, is more toxic than selenate, SeVI. Therefore, speciating selenium provides a more accurate toxicity-based risk assessment than an analysis based on total selenium.One approach to selenium speciation is to use ion chromatography (IC)1–5 or capillary electrophoresis (CE)6–9 to separate the selenium species prior to detection. Detectors for IC have included conductivity,1 ICP-AES,2 ICP-MS,3 atomic absorption (AAS)4 and atomic fluorescence spectrometry (AFS).5 For CE, detector schemes have used UV,7,8 conductivity7 and ICP-MS.9 An indirect approach to selenium speciation is based on the inability of SeVI to form a hydride readily.In this approach, SeIV in a sample is determined through hydride generation. Then, the SeVI in the sample is reduced to SeIV, usually by high HCl10–12 or HBr13 concentrations, sometimes assisted by microwave heating,14,15 and the SeVI concentration determined by difference. Speciation based analysis with ICP-MS detection provides excellent selectivity and isotopic information for isotope dilution analysis16 and isotope tracer studies.12 The sensitivity of ICP-MS for selenium is limited because the isotopes amenable to ICP-MS are present in low natural abundance ( < 10%).3,9,17 Hydride generation can compensate for this loss of sensitivity because of its nearly quantitative transport of the gaseous hydride into the ICP without interference from the sample matrix.The compatibility of hydride generation (HG) with chromatographic or transient based signals requires rapid on-line conversion of SeVI (non-hydride forming) into SeIV (hydride forming) to reduce diffusional broadening.The kinetics of the reduction of SeVI to SeIV by HCl have recently been investigated by HG–AFS.18 Pitts et. al.5 found that large volumes of HCl and microwave heating were required for online reduction of selenium in IC effluents. The use of CE, which is a low flow separation technique, may decrease, via the kinetics of the reduction reaction, the volume of HCl consumed and eliminate the need for microwave heating.The potential of the HCl reductant to produce isobaric interferences such as 40Ar35Cl and 40Ar37Cl has been reported17 and can be minimized by the use of membrane based gas separators.19–33 The on-line reduction of SeVI in CE effluents using HG with a membrane based gas–liquid separator was investigated in this work. A second area of investigation was the development of a CE based separation which is flexible enough to allow for dual detection of selenium and arsenic on the same electropherogram.The separation of arsenic via CE–HG–ICP-MS has been reported,19 in which the use of hydrodynamically modified electroosmotic flow (HMEOF) via pressurizing the sample vial allowed for a means of controlling the bulk flow independent of the electro-osmotic flow induced by the applied CE potential. Pressurizing the sample vial causes a laminar flow which is opposite to the direction of the electroosmotic flow. The effect of this laminar flow on CE with HG–ICP-MS is described in more detail elsewhere.19 This HMEOF technique is convenient for the separation of anions (such as SeIV and SeVI) because control over the pressure allows control of elution times and increases freedom of buffer choice, and is relatively electrophoretically reproducible.19 The CE–on-line reduction (OLR)– HG–ICP-MS interface design is described and evaluated for the speciation of selenium.Recoveries of fortified SeVI and SeIV samples were determined in natural water matrices.Experimental Instrumentation The ICP-MS system was a Hewlett-Packard (Avondale, PA, USA) Model 4500 (HP 4500) benchtop instrument. Optimized system parameters for the HP 4500 with HG were similar to solution nebulization parameters. The standard HP 4500 utilizes a nickel sampler cone (1.0 mm orifice) and a nickel skimmer cone (0.4 mm orifice). The rf power was set at 1200 W, the Analyst, October 1997, Vol. 122 (1057–1061) 1057plasma gas flow rate at 15 l min21 and the auxiliary gas flow rate at 1.0 l min21.A sampling depth of 5.2 mm was used with the torch position slightly (20.1 mm horizontally, 0.8 mm vertically, set in the instrument software) off the axis of the sampling orifice, which reduced noise while maintaining the signal. Electrophoretic data were collected in the time resolved analysis (TRA) mode, and peak areas were measured with the chromatographic integration software provided with the instrument.Although CE–HG–ICP-MS produces unique peak shapes, the software is suitable for integrating electrophoretic peak shapes. For consistency, the peak area was integrated using the average level of the background to define the beginning and end of each peak. The CE unit was a Dionex (Sunnyvale, CA, USA) CES-1. The polyimide-coated, fused silica capillary (85 cm 375 mm id) was obtained from Polymicro Technologies (Phoenix, AZ, USA). To apply pressure for the HMEOF, the pressure regulator built into the CE unit was replaced with an external pressure source, a sub-miniature pressure regulator (McMaster Carr, Chicago, IL, Part No. 41795K3) equipped with a suitable pressure gauge (McMaster Carr, Part No. 3842K5). This allowed control of the pressure applied to the sample vial in increments of 0.1 psi, compared with the 0.5 psi increments with the pressure regulator standard with the CE unit. Fig. 1 is a schematic diagram of the CE–HG–ICP-MS system.By design, concentrated HCl comes into contact with inert plastic fittings, tubes, etc., but not the peristaltic pump tubing. Another design (not shown or discussed here), previously used for the speciation of arsenic compounds,19 was evaluated in which the concentrated HCl came into contact with the tubing, and the tubing needed frequent replacement. The platinum ground wire for the CE circuit and the concentrated HCl enter through opposite ports of a PTFE cross (Omnifit, Toms River, NJ, USA).The HCl is drawn from its ventilated sample bottle by the ‘isolating peristaltic pump’ (Minipuls; Gilson, Middletown, WI, USA). The isolating peristaltic pump has two functions. First, it isolates the CE interface from the hydride generation reaction, which produces large volumes of hydrogen gas. The volume of gas induces a back-pressure on the column exit. This back-pressure tends to cause a large, undesirable flow in the capillary. Second, the isolating peristaltic pump draws the HCl from a ventilated reservoir.The suction required to draw the HCl also produces a suction on the capillary. To limit this suction on the capillary, the tube connecting the cross to the peristaltic pump has an id of 3.0 mm, compared with the 0.075 mm id of the capillary. This id difference allows the isolating peristaltic pump to preferentially draw fluid from the HCl reservoir. The difference in ids reduces the suction on the capillary sufficiently such that a currentinterrupting air bubble is not pulled into the capillary in the time it takes to move the capillary between two vials within the autosampler of the CE unit (about 10 s).Following the 3.0 mm id tube, the concentrated HCl is mixed with the CE effluent within a 23 cm 3 0.30 mm id PTFE tube. The capillary has an od of 0.375 mm and therefore is positioned about 0.1 mm from the 0.30 mm tube to prevent blockage. (Note: in Fig. 1, for graphical contrast in the figure, the end of the capillary appears far from the 0.3 mm tube.) The HCl mixed with capillary effluent is then diluted in a three-way PTFE manifold by make-up flow (0.81 ml min21 of distilled water or dilute HCl, discussed below) supplied by the HP 4500’s built in peristaltic pumps.The isolating peristaltic pump (with Tygon tubing, id 1.52 mm) delivers this solution into another PTFE manifold, shown in the upper part of Fig. 1, where it mixes with NaBH4 supplied by the HP 4500’s peristaltic pump (0.42 ml min21).A 60 cm 3 0.8 mm id PTFE tube (not shown) connects this manifold to the membrane gas–liquid separator (MGLS), which is described in detail elsewhere.19,20 The analyte gases and the excess H2 migrate across the microporous membrane of the MGLS into a stream of argon carrier gas, which is introduced directly into the central channel of the ICP torch via a short length of Tygon tubing. The membrane used in the gas–liquid separator was a high density (0.9 g ml21) expanded PTFE material of 0.7 cm id, available from International Polymer Engineering (Tempe, AZ, USA).Reagents All reagents and solutions were handled and prepared in a Class 100 clean air hood to avoid contamination. The distilled water used was de-ionized to 18 MW with a Milli-Q system (Millipore, Milford, MA, USA). Solutions were prepared in fresh Nalgene polyethylene bottles. The CE sample vials were purchased from Dionex. The HCl (Fisher, Fair Lawn, NJ, USA; ACS+ certified) was used after determining that the arsenic concentration within this Fig. 1 Schematic diagram of CE–HG–ICP-MS unit. MGLS represents the membrane gas–liquid separator. 1058 Analyst, October 1997, Vol. 122acid was lower than a ‘high purity’ acid. Dilutions were made (m/m) with distilled water. The NaBH4 (97 + % pure, Alfa AESAR, Johnson Matthey, Ward Hill, MA, USA) was made up on an m/m basis. The NaBH4 was stabilized by adding 7.5 ml of 50% m/m NaOH (Fisher) per liter of solution, and fresh solutions were prepared daily.The arsenic and selenium solutions were prepared on an arsenic and selenium mass basis, respectively. Stock solutions containing 1000 ppm of selenium were prepared from sodium selenate and sodium selenite (Aldrich, Milwaukee, WI, USA). Arsenite was derived from solid arsenic(iii) trioxide (SPEX Industries, Edison, NJ, USA) made up to 1000 ppm in 1% nitric acid, and arsenate was prepared from a 1000 ppm standard of orthoarsenic acid in 2% nitric acid (SPEX Industries).Monomethylarsonic acid and dimethylarsinic acid (both 98% pure) were obtained from Chem Service Chemicals (West Chester, PA, USA). The buffer used for the capillary electrophoresis was prepared from boric acid (99%, Fisher) and from potassium hydrogenphthalate (primary standard, Fisher). The pH meter used was an Orion (Boston, MA, USA) Model 620 pH meter equipped with a temperature compensated Orion Model 6165 probe.Results Optimization of SeVI Reduction The rate of reduction of SeVI to SeIV by HCl when the CE effluent is added to a flowing HCl stream is related both to the HCl concentration and to the contact time between the HCl and the SeVI. The contact time of the HCl and CE effluent in the 0.30 mm tube is controlled by the flow rate of the isolating peristaltic pump (Fig. 1). Fig. 2 illustrates the reduction of SeVI to SeIV by HCl as a function of contact time, which can be calculated by dividing the volume of the 0.3 mm tubing by the flow rate provided by the isolating peristaltic pump.To study the reduction, 100 ppb solutions of SeVI and SeIV were placed in separate CE sample vials and injected sequentially by pressure into the HCl stream at around 3.1 ml min21.19 The ‘SeVI response’ in Fig. 2 is the HG–ICP-MS signal at m/z 77 from the HCl-reduced SeVI, and the ‘SeIV response’ is from the injection of SeIV. The ratio of SeVI to SeIV response is plotted versus contact time.Fig. 2 illustrates the effect of HCl concentration and contact time on the reduction of SeVI to SeIV. The dashed vertical line represents the effect of different concentrations of HCl used for the reduction. The HCl concentrations in the caption of Fig. 2 are reported based on the mass percentage of concentrated HCl. The effect of contact time was then investigated using 100% HCl, chosen because of its higher conversion of SeVI and also because there is no preparation step.The solid line in Fig. 2 indicates that longer contact times favor the reduction of SeVI to SeIV. As the contact time increases, the reduction of SeVI to SeIV is driven more towards completion. Complete conversion of SeVI to SeIV is not achieved with the existing experimental set-up even if 100% HCl is used with a 24 s contact time. Longer contact/conversion times were not investigated because long contact times were observed to degrade the CE peak shapes.The choice of contact time represented a compromise between conversion efficiency and the need to preserve the CE peak shape. If the contact time was too long, then CE peak shape distortion and asymmetry resulted. The contact time was chosen empirically on the basis of peak shape. This limited the conversion of SeVI to SeIV to about 30% for 100% m/m HCl. Optimization of Hydride Generation Conditions Because only one selenium species, SeIV, is hydride active, optimizing the concentrations of the hydride generating reagents, NaBH4 and HCl, is straightforward.To study the optimization, a 100 ppb SeIV solution was placed in the CE sample vials and continuously introduced at about 3.1 ml min21 into the make-up stream by pressurizing the sample vial. The HCl for the SeVI reduction is drawn by the isolating peristaltic pump. Additional HCl may be added as ‘make-up flow’ (Fig. 1). Fig. 3 is a plot of analyte response versus concentration of HCl added as make-up flow.There are only small relative changes in the response from 0 to 10%, and the response decrease after 10% is probably produced by excess of H2 gas which may dilute the flow of analyte across the membrane of the gas–liquid separator and also decrease the ionization efficiency of the plasma.19–21 Because of relatively small changes in the response, it seems that there is no benefit from increasing the HCl concentration via the make-up flow, so distilled, de-ionized water was used as the make-up flow.Fig. 4 is a plot of NaBH4 versus SeIV response with distilled, de-ionized water used as the make-up flow. The response for selenium increases up to 1% m/m NaBH4, after which the intensity decreases, presumably Fig. 2 Effect of contact time with HCl on SeVI reduction. The contact time is calculated by dividing the volume of the peristaltic pump tubing by the flow rate provided by the isolating peristaltic pump.Fig. 3 Effect of changing the HCl concentration in the make-up flow. Fig. 4 SeIV response as a function of NaBH4 concentration. Analyst, October 1997, Vol. 122 1059because of the excess of H2 gas.19–21 Therefore, 1% NaBH4 was chosen for subsequent experiments. Capillary Electrophoresis Experiment Injections into the CE capillary were performed using pressure injection. Lower detection limits could be achieved using electrokinetic injection, as was demonstrated for arsenic.19 For the electrophoretic separation HMEOF was employed, which is more completely described elsewhere.19 To summarize, the HMEOF experiment involves pressurizing the vial containing the CE running buffer during a separation.The high negative voltage (222 000 V) induces an electroosmotic flow away from the detector, but the application of the HMEOF at a low pressure ( < 3 psi) causes the bulk flow to move towards the detector. The retention time of the analytes is then dependent on the HMEOF induced bulk flow rate and the electrophoretic mobilities of the analytes.For low HMEOF separation pressures, the peak shape is not excessively broadened by laminar flow induced by hydrodynamic pressure, as discussed in more detail in ref. 19. The use of HMEOF to direct the bulk flow allows flexibility in choosing the buffer, thereby potentially decreasing the analysis time. The buffer chosen was 20 mm potassium hydrogenphthlate (KHP)–20 mm boric acid adjusted to pH 9.03.This buffer was used for the speciation of arsenic compounds by CE–HG–ICPMS, 19 and was thus chosen for the possibility of dual separation. Pitts et al.5 reported that the addition of KHP to a flow stream of high concentrations of HCl resulted in plugging of an IC system due to the solubility of the KHP. No plugging was observed in the CE–OLR–HG–ICP-MS system, perhaps owing to the low flow rate of the KHP buffer. Method Detection Limits for SeIV and SeVI The method detection limits,34as defined in the US Federal Code of Regulations, is a measure of the precision of several replicate injections of an analyte.For an injection volume of 250 nl, the detection limits for an electrophoretic separation of SeVI and SeIV were 10 and 24 pg, respectively. These detection limits, based on 3.14sn21 of seven replicate injections, were probably influenced by the long term stability in the CE– OLR–HG–ICPMS system. A source of instability was a slowly rising background resulting from the membrane tubing used in the gas–liquid separator of the hydride generator.After continued use, small selenium peaks (judging from the observation of other selenium isotopes) appeared. These disappeared after the outside of the membrane in the gas–liquid separator had been flushed with 5% HNO3. Similar behavior was not observed for arsenic hydrides, probably reflecting the differing chemistry of selenium. Future studies are needed to investigate the interaction between the selenium and the gas–liquid separator.Recoveries in Drinking Water Matrices of SeIV and SeVI To investigate the CE–OLR–HG–ICP-MS system, three drinking water samples were fortified with SeIV and SeVI at levels equal to about 10 times the detection limit. The average recoveries (xn ± sn21, n = 5) in the three waters for SeV were 86 ± 9, 90 ± 9 and 100 ± 4% and those for SeVI were 88 ± 11, 97 ± 4, and 86 ± 7%. These waters, obtained from diverse sources, are expected to have different conductivities.Because the CE peak shape is influenced by the difference in conductivity between the sample and the running buffer,6 the CE peak shapes differed slightly from sample to sample. This difference may cause the integration program to function less reliably, resulting in lower recoveries and higher RSDs for some samples. Speciation of Selenium and Arsenic CE–HG–ICP-MS has been investigated for the speciation of arsenic compounds.19 Fig. 5 is an electropherogram for the separation of four arsenic species and the two selenium species in distilled water using the CE–OLR–HG–ICP-MS system. The buffer was KHP–20 mm borate (pH 9.03), which was used in the speciation of arsenic compounds by CE–HG–ICP-MS.19 Selenium( iv) and AsV co-elute in Fig. 5, but the selectivity of the ICP-MS detector allows the resolution of the two species. Fig. 5 illustrates the differences in the sensitivities of arsenic and selenium.The concentration of selenium (250 ng ml21 of SeIV, 500 ng ml21 of SeVI) is higher than the concentration of the arsenic (10 ng ml21 of each species). The selenium electropherogram has been scaled by a factor of 1/5 for illustrative purposes. Thus, the response difference shown in Fig. 5 is about a factor of 1/20. Arsenic (m/z 75, 100% abundance) is 50% ionized in the plasma,18 whereas selenium (m/z 77, 7.63% abundance) is about 35% ionized. The lower ionization (50% for As, 35% for Se) and the lower abundance (100% As, 7.63% Se, m/z 77) put the expected sensitivity of Se at about 1/19 compared with As, which is about the same as the observed response difference of about 1/20 (Fig. 5). Conclusions We have demonstrated that CE can be interfaced on-line with HG–ICP-MS to allow for the reduction of SeVI. The sensitivity for SeVI is less than that for SeIV because of the compromise in conditions governing SeVI reduction and CE peak shapes. Because isotopic information is available from ICP-MS, future studies can examine the transformation of Se species while taking advantage of the large range of available CE techniques for speciated selenium compounds.Dual detection of CE speciated arsenic and selenium compounds has been demonstrated. Further investigation is necessary to verify the use of the CE–OLR–HG–ICP-MS system for the simultaneous determination of selenium, arsenic and other hydride forming species. This work was performed while M.L.M.held a National Research Council–US EPA Associateship with the National Exposure Research Laboratory in Cincinnati, OH. References 1 Reddy, K. J., Zhang, Z., Blaylock, M. H., and Vance, G. F., Environ. Sci. Technol., 1995, 29, 1754. 2 Schlegel, D., Mattusch, J., and Dittrich, K., J. Chromatogr. A., 1994, 683, 261. 3 Roehl, R., paper presented at the 1996 Winter Conference on Plasma Spectrochemistry, January 8–13, 1996. 4 Laborda, F., Chakraborti, D., Mir, J.M., and Castillo, J. R., J. Anal. At. Spectrom., 1993, 8, 643. 5 Pitts, L., Fisher, A., Worsfold, P., and Hill, S. J., J. Anal. At. Spectrom., 1995, 10, 519. Fig. 5 Simultaneous detection of Se and As. The selenium signal has been scaled graphically by a factor of 5. 1060 Analyst, October 1997, Vol. 1226 Kuhn, R., and Hoffstetter-Kuhn, S., Capillary Electrophoresis: Principles and Practice, Springer, Berlin, 1993. 7 Schlegel, D., Mattusch, J., and Wennrich, R., Fresenius’ J.Anal. Chem., 1996, 354, 535. 8 Li, K., and Li, S. F. Y., Analyst, 1995, 120, 361. 9 Liu, Y., Lopez-Avita, V., Zhu, J. J., Wiederin, D. R., and Beckert, W. F., Anal. Chem., 1995, 67, 2020. 10 Rayman, M. P., Abou-Shakra, F. R., and Ward, N. I., J. Anal. At. Spectrom., 1996, 11, 61. 11 Diaz-Alarcon, J. P., Navarro-Alarcon, M., Lopez-Garcia de la Serrana, H., Asensio-Drima, C., and Lopez-Martinez, M. C., J. Agric. Food Chem., 1996, 44, 2423. 12 Buckley, W. T., Buda, J.J., and Godfrey, D. V., Anal. Chem., 1992, 64, 724. 13 D’Ulivo, L., Sfetsios, I., and Zamboni, R., Spectrochim. Acta, Part B, 1993, 48, 387. 14 Bryce, D. W., Izquierdo, A., and Luque de Castro, M. D., J. Anal. At. Spectrom., 1995, 10, 1059; Analyst, 1995, 120, 2171. 15 Pitts, L., Worsfold, P. J., and Hill, S. J., Analyst, 1994, 119, 2785. 16 Gallus, S. M., and Heumann, K. G., J. Anal. At. Spectrom., 1996, 11, 887. 17 Thompson, M., and Walsh, J. N., A Handbook of Inductively Coupled Plasma Spectrometry, Blackie Glasgow, 1983. 18 Hill, S. J., Pitts, L., and Worsfold, P., J. Anal. At. Spectrom., 1995, 10, 409. 19 Magnuson, M. L., Creed, J. T., and Brockhoff, C. A., J. Anal. At. Spectrom., 1997, 12, 689. 20 Magnuson, M. L., Creed, J. T., and Brockhoff, C. A., J. Anal. At. Spectrom., 1996, 11, 893. 21 Creed, J. T., Chamberlain, I., Magnuson, M. L., Brockhoff, C. A., and Sivaganesan, M., J. Anal. At. Spectrom., 1996, 11, 504. 22 Story, W. C., Caruso, J. A., Heitkemper, D.T., and Perkins, L., J. Chromatogr. Sci., 1992, 30, 427, related personal communications. 23 Branch, S., Corns, W. T., Ebdon, L., Hill, S., and O’Neill, P., J. Anal. At. Spectrom., 1991, 6, 155. 24 Wang, X., Viczian, A. M., Lasztity, A., and Barnes, R. M., J. Anal. At. Spectrom., 1988, 3, 155. 25 Buckley, W. T., Budac, J. J., and Godfrey, D. V., Anal. Chem., 1992, 64, 724. 26 Brockmann, A., Nonn, C., and Golloch, A., J. Anal. At. Spectrom., 1993, 8, 397. 27 Tao, H., Miyazaki, A., and Bansho, K., Anal.Sci., 1990, 6, 195. 28 Cave, M. R., and Green, K. A., J. Anal. At. Spectrom., 1989, 4, 223. 29 Barnes, R. M., and Wang, X., J. Anal. At. Spectrom., 1988, 3, 1083. 30 Wang, X., and Barnes, R. M., J. Anal. At. Spectrom., 1988, 3, 1091. 31 Nakata, F., Sunahara, H., Hujimoto, H., Yamamoto, M., and Kumamaru, T., J. Anal. At. Spectrom., 1988, 3, 579. 32 Motomizu, S., Toei, J., Kuwaki, T., and Oshima, M. Anal. Chem., 1987, 59, 2930. 33 Pacey, G.E., Straka, M. R., and Gord, J. R., Anal. Chem., 1986, 58, 502. 34 Glaser, J. A., Foerst, D. L., McKee, G. D., Quave, S. A., and Budde, W. L., Environ. Sci. Technol., 1981, 15, 1426. Paper 7/03039H Received May 6, 1997 Accepted July 11, 1997 Analyst, October 1997, Vol. 122 1061 Speciation of Selenium and Arsenic Compounds by Capillary Electrophoresis With Hydrodynamically Modified Electroosmotic Flow and On-line Reduction of Selenium(VI) to Selenium(IV) With Hydride Generation Inductively Coupled Plasma Mass Spectrometric Detection Matthew L.Magnuson*a, John T. Creedb and Carol A. Brockhoffb a United States Environmental Protection Agency, National Risk Management Research Laboratory, Water Supply and Water Resources Division, Treatment Technologies Evaluation Branch, 26 W. Martin Luther King Drive, Cincinnati, OH 45268, USA b United States Environmental Protection Agency, National Exposure Research Laboratory, Human Exposure Research Division, Chemical Exposure Research Branch, 26 W. Martin Luther King Drive, Cincinnati, OH 45268, USA Capillary electrophoresis (CE) with hydride generation inductively coupled plasma mass spectrometry was used to determine four arsenicals and two selenium species.Selenate (SeVI) was reduced on-line to selenite (SeIV) by mixing the CE effluent with concentrated HCl. A microporous PTFE tube was used as a gas–liquid separator to eliminate the 40Ar37Cl and 40Ar35Cl interference from 77Se and 75As, respectively.The direction of the electroosmotic flow during CE was reversed with hydrodynamic pressure, which allowed increased freedom of buffer choice. For conventional pressure injection, method detection limits for SeIV and SeVI based on seven replicate injections were 10 and 24 pg, respectively. Recoveries of SeIV and SeVI in drinking water were measured. Keywords: Speciation; selenium; arsenic; capillary electrophoresis; hydride generation; inductively coupled plasma mass spectrometry Selenium is an essential nutrient but is considered toxic at higher concentrations.Like arsenic, the toxicity of selenium is related to the oxidation state. Selenite, SeIV, is more toxic than selenate, SeVI. Therefore, speciating selenium provides a more accurate toxicity-based risk assessment than an analysis based on total selenium. One approach to selenium speciation is to use ion chromatography (IC)1–5 or capillary electrophoresis (CE)6–9 to separate the selenium species prior to detection.Detectors for IC have included conductivity,1 ICP-AES,2 ICP-MS,3 atomic absorption (AAS)4 and atomic fluorescence spectrometry (AFS).5 For CE, detector schemes have used UV,7,8 conductivity7 and ICP-MS.9 An indirect approach to selenium speciation is based on the inability of SeVI to form a hydride readily. In this approach, SeIV in a sample is determined through hydride generation. Then, the SeVI in the sample is reduced to SeIV, usually by high HCl10–12 or HBr13 concentrations, sometimes assisted by microwave heating,14,15 and the SeVI concentration determined by difference. Speciation based analysis with ICP-MS detection provides excellent selectivity and isotopic information for isotope dilution analysis16 and isotope tracer studies.12 The sensitivity of ICP-MS for selenium is limited because the isotopes amenable to ICP-MS are present in low natural abundance ( < 10%).3,9,17 Hydride generation can compensate for this loss of sensitivity because of its nearly quantitative transport of the gaseous hydride into the ICP without interference from the sample matrix.The compatibility of hydride generation (HG) with chromatographic or transient based signals requires rapid on-line conversion of SeVI (non-hydride forming) into SeIV (hydride forming) to reduce diffusional broadening. The kinetics of the reduction of SeVI to SeIV by HCl have recently been investigated by HG–AFS.18 Pitts et.al.5 found that large volumes of HCl and microwave heating were required for online reduction of selenium in IC effluents. The use of CE, which is a low flow separation technique, may decrease, via the kinetics of the reduction reaction, the volume of HCl consumed and eliminate the need for microwave heating. The potential of the HCl reductant to produce isobaric interferences such as 40Ar35Cl and 40Ar37Cl has been reported17 and can be minimized by the use of membrane based gas separators.19–33 The on-line reduction of SeVI in CE effluents using HG with a membrane based gas–liquid separator was investigated in this work.A second area of investigation was the development of a CE based separation which is flexible enough to allow for dual detection of selenium and arsenic on the same electropherogram. The separation of arsenic via CE–HG–ICP-MS has been reported,19 in which the use of hydrodynamically modified electroosmotic flow (HMEOF) via pressurizing the sample vial allowed for a means of controlling the bulk flow independent of the electro-osmotic flow induced by the applied CE potential.Pressurizing the sample vial causes a laminar flow which is opposite to the direction of the electroosmotic flow. The effect of this laminar flow on CE with HG–ICP-MS is described in more detail elsewhere.19 This HMEOF technique is convenient for the separation of anions (such as SeIV and SeVI) because control over the pressure allows control of elution times and increases freedom of buffer choice, and is relatively electrophoretically reproducible.19 The CE–on-line reduction (OLR)– HG–ICP-MS interface design is described and evaluated for the speciation of selenium.Recoveries of fortified SeVI and SeIV samples were determined in natural water matrices. Experimental Instrumentation The ICP-MS system was a Hewlett-Packard (Avondale, PA, USA) Model 4500 (HP 4500) benchtop instrument. Optimized system parameters for the HP 4500 with HG were similar to solution nebulization parameters.The standard HP 4500 utilizes a nickel sampler cone (1.0 mm orifice) and a nickel skimmer cone (0.4 mm orifice). The rf power was set at 1200 W, the Analyst, October 1997, Vol. 122 (1057–1061) 1057plasma gas flow rate at 15 l min21 and the auxiliary gas flow rate at 1.0 l min21. A sampling depth of 5.2 mm was used with the torch position slightly (20.1 mm horizontally, 0.8 mm vertically, set in the instrument software) off the axis of the sampling orifice, which reduced noise while maintaining the signal. Electrophoretic data were collected in the time resolved analysis (TRA) mode, and peak areas were measured with the chromatographic integration software provided with the instrument.Although CE–HG–ICP-MS produces unique peak shapes, the software is suitable for integrating electrophoretic peak shapes. For consistency, the peak area was integrated using the average level of the background to define the beginning and end of each peak.The CE unit was a Dionex (Sunnyvale, CA, USA) CES-1. The polyimide-coated, fused silica capillary (85 cm 375 mm id) was obtained from Polymicro Technologies (Phoenix, AZ, USA). To apply pressure for the HMEOF, the pressure regulator built into the CE unit was replaced with an external pressure source, a sub-miniature pressure regulator (McMaster Carr, Chicago, IL, Part No. 41795K3) equipped with a suitable pressure gauge (McMaster Carr, Part No. 3842K5). This allowed control of the pressure applied to the sample vial in increments of 0.1 psi, compared with the 0.5 psi increments with the pressure regulator standard with the CE unit. Fig. 1 is a schematic diagram of the CE–HG–ICP-MS system. By design, concentrated HCl comes into contact with inert plastic fittings, tubes, etc., but not the peristaltic pump tubing. Another design (not shown or discussed here), previously used for the speciation of arsenic compounds,19 was evaluated in which the concentrated HCl came into contact with the tubing, and the tubing needed frequent replacement.The platinum ground wire for the CE circuit and the concentrated HCl enter through opposite ports of a PTFE cross (Omnifit, Toms River, NJ, USA). The HCl is drawn from its ventilated sample bottle by the ‘isolating peristaltic pump’ (Minipuls; Gilson, Middletown, WI, USA). The isolating peristaltic pump has two functions.First, it isolates the CE interface from the hydride generation reaction, which produces large volumes of hydrogen gas. The volume of gas induces a back-pressure on the column exit. This back-pressure tends to cause a large, undesirable flow in the capillary. Second, the isolating peristaltic pump draws the HCl from a ventilated reservoir. The suction required to draw the HCl also produces a suction on the capillary. To limit this suction on the capillary, the tube connecting the cross to the peristaltic pump has an id of 3.0 mm, compared with the 0.075 mm id of the capillary.This id difference allows the isolating peristaltic pump to preferentially draw fluid from the HCl reservoir. The difference in ids reduces the suction on the capillary sufficiently such that a currentinterrupting air bubble is not pulled into the capillary in the time it takes to move the capillary between two vials within the autosampler of the CE unit (about 10 s).Following the 3.0 mm id tube, the concentrated HCl is mixed with the CE effluent within a 23 cm 3 0.30 mm id PTFE tube. The capillary has an od of 0.375 mm and therefore is positioned about 0.1 mm from the 0.30 mm tube to prevent blockage. (Note: in Fig. 1, for graphical contrast in the figure, the end of the capillary appears far from the 0.3 mm tube.) The HCl mixed with capillary effluent is then diluted in a three-way PTFE manifold by make-up flow (0.81 ml min21 of distilled water or dilute HCl, discussed below) supplied by the HP 4500’s built in peristaltic pumps.The isolating peristaltic pump (with Tygon tubing, id 1.52 mm) delivers this solution into another PTFE manifold, shown in the upper part of Fig. 1, where it mixes with NaBH4 supplied by the HP 4500’s peristaltic pump (0.42 ml min21). A 60 cm 3 0.8 mm id PTFE tube (not shown) connects this manifold to the membrane gas–liquid separator (MGLS), which is described in detail elsewhere.19,20 The analyte gases and the excess H2 migrate across the microporous membrane of the MGLS into a stream of argon carrier gas, which is introduced directly into the central channel of the ICP torch via a short length of Tygon tubing.The membrane used in the gas–liquid separator was a high density (0.9 g ml21) expanded PTFE material of 0.7 cm id, available from International Polymer Engineering (Tempe, AZ, USA). Reagents All reagents and solutions were handled and prepared in a Class 100 clean air hood to avoid contamination.The distilled water used was de-ionized to 18 MW with a Milli-Q system (Millipore, Milford, MA, USA). Solutions were prepared in fresh Nalgene polyethylene bottles. The CE sample vials were purchased from Dionex. The HCl (Fisher, Fair Lawn, NJ, USA; ACS+ certified) was used after determining that the arsenic concentration within this Fig. 1 Schematic diagram of CE–HG–ICP-MS unit. MGLS represents the membrane gas–liquid separator. 1058 Analyst, October 1997, Vol. 122acid was lower than a ‘high purity’ acid. Dilutions were made (m/m) with distilled water. The NaBH4 (97 + % pure, Alfa AESAR, Johnson Matthey, Ward Hill, MA, USA) was made up on an m/m basis. The NaBH4 was stabilized by adding 7.5 ml of 50% m/m NaOH (Fisher) per liter of solution, and fresh solutions were prepared daily. The arsenic and selenium solutions were prepared on an arsenic and selenium mass basis, respectively.Stock solutions containing 1000 ppm of selenium were prepared from sodium selenate and sodium selenite (Aldrich, Milwaukee, WI, USA). Arsenite was derived from solid arsenic(iii) trioxide (SPEX Industries, Edison, NJ, USA) made up to 1000 ppm in 1% nitric acid, and arsenate was prepared from a 1000 ppm standard of orthoarsenic acid in 2% nitric acid (SPEX Industries). Monomethylarsonic acid and dimethylarsinic acid (both 98% pure) were obtained from Chem Service Chemicals (West Chester, PA, USA).The buffer used for the capillary electrophoresis was prepared from boric acid (99%, Fisher) and from potassium hydrogenphthalate (primary standard, Fisher). The pH meter used was an Orion (Boston, MA, USA) Model 620 pH meter equipped with a temperature compensated Orion Model 6165 probe. Results Optimization of SeVI Reduction The rate of reduction of SeVI to SeIV by HCl when the CE effluent is added to a flowing HCl stream is related both to the HCl concentration and to the contact time between the HCl and the SeVI.The contact time of the HCl and CE effluent in the 0.30 mm tube is controlled by the flow rate of the isolating peristaltic pump (Fig. 1). Fig. 2 illustrates the reduction of SeVI to SeIV by HCl as a function of contact time, which can be calculated by dividing the volume of the 0.3 mm tubing by the flow rate provided by the isolating peristaltic pump. To study the reduction, 100 ppb solutions of SeVI and SeIV were placed in separate CE sample vials and injected sequentially by pressure into the HCl stream at around 3.1 ml min21.19 The ‘SeVI response’ in Fig. 2 is the HG–ICP-MS signal at m/z 77 from the HCl-reduced SeVI, and the ‘SeIV response’ is from the injection of SeIV. The ratio of SeVI to SeIV response is plotted versus contact time. Fig. 2 illustrates the effect of HCl concentration and contact time on the reduction of SeVI to SeIV. The dashed vertical line represents the effect of different concentrations of HCl used for the reduction.The HCl concentrations in the caption of Fig. 2 are reported based on the mass percentage of concentrated HCl. The effect of contact time was then investigated using 100% HCl, chosen because of its higher conversion of SeVI and also because there is no preparation step. The solid line in Fig. 2 indicates that longer contact times favor the reduction of SeVI to SeIV. As the contact time increases, the reduction of SeVI to SeIV is driven more towards completion.Complete conversion of SeVI to SeIV is not achieved with the existing experimental set-up even if 100% HCl is used with a 24 s contact time. Longer contact/conversion times were not investigated because long contact times were observed to degrade the CE peak shapes. The choice of contact time represented a compromise between conversion efficiency and the need to preserve the CE peak shape. If the contact time was too long, then CE peak shape distortion and asymmetry resulted.The contact time was chosen empirically on the basis of peak shape. This limited the conversion of SeVI to SeIV to about 30% for 100% m/m HCl. Optimization of Hydride Generation Conditions Because only one selenium species, SeIV, is hydride active, optimizing the concentrations of the hydride generating reagents, NaBH4 and HCl, is straightforward. To study the optimization, a 100 ppb SeIV solution was placed in the CE sample vials and continuously introduced at about 3.1 ml min21 into the make-up stream by pressurizing the sample vial.The HCl for the SeVI reduction is drawn by the isolating peristaltic pump. Additional HCl may be added as ‘make-up flow’ (Fig. 1). Fig. 3 is a plot of analyte response versus concentration of HCl added as make-up flow. There are only small relative changes in the response from 0 to 10%, and the response decrease after 10% is probably produced by excess of H2 gas which may dilute the flow of analyte across the membrane of the gas–liquid separator and also decrease the ionization efficiency of the plasma.19–21 Because of relatively small changes in the response, it seems that there is no benefit from increasing the HCl concentration via the make-up flow, so distilled, de-ionized water was used as the make-up flow. Fig. 4 is a plot of NaBH4 versus SeIV response with distilled, de-ionized water used as the make-up flow. The response for selenium increases up to 1% m/m NaBH4, after which the intensity decreases, presumably Fig. 2 Effect of contact time with HCl on SeVI reduction. The contact time is calculated by dividing the volume of the peristaltic pump tubing by the flow rate provided by the isolating peristaltic pump. Fig. 3 Effect of changing the HCl concentration in the make-up flow. Fig. 4 SeIV response as a function of NaBH4 concentration. Analyst, October 1997, Vol. 122 1059because of the excess of H2 gas.19–21 Therefore, 1% NaBH4 was chosen for subsequent experiments.Capillary Electrophoresis Experiment Injections into the CE capillary were performed using pressure injection. Lower detection limits could be achieved using electrokinetic injection, as was demonstrated for arsenic.19 For the electrophoretic separation HMEOF was employed, which is more completely described elsewhere.19 To summarize, the HMEOF experiment involves pressurizing the vial containing the CE running buffer during a separation.The high negative voltage (222 000 V) induces an electroosmotic flow away from the detector, but the application of the HMEOF at a low pressure ( < 3 psi) causes the bulk flow to move towards the detector. The retention time of the analytes is then dependent on the HMEOF induced bulk flow rate and the electrophoretic mobilities of the analytes. For low HMEOF separation pressures, the peak shape is not excessively broadened by laminar flow induced by hydrodynamic pressure, as discussed in more detail in ref. 19.The use of HMEOF to direct the bulk flow allows flexibility in choosing the buffer, thereby potentially decreasing the analysis time. The buffer chosen was 20 mm potassium hydrogenphthlate (KHP)–20 mm boric acid adjusted to pH 9.03. This buffer was used for the speciation of arsenic compounds by CE–HG–ICPMS, 19 and was thus chosen for the possibility of dual separation. Pitts et al.5 reported that the addition of KHP to a flow stream of high concentrations of HCl resulted in plugging of an IC system due to the solubility of the KHP. No plugging was observed in the CE–OLR–HG–ICP-MS system, perhaps owing to the low flow rate of the KHP buffer.Method Detection Limits for SeIV and SeVI The method detection limits,34as defined in the US Federal Code of Regulations, is a measure of the precision of several replicate injections of an analyte. For an injection volume of 250 nl, the detection limits for an electrophoretic separation of SeVI and SeIV were 10 and 24 pg, respectively.These detection limits, based on 3.14sn21 of seven replicate injections, were probably influenced by the long term stability in the CE– OLR–HG–ICPMS system. A source of instability was a slowly rising background resulting from the membrane tubing used in the gas–liquid separator of the hydride generator. After continued use, small selenium peaks (judging from the observation of other selenium isotopes) appeared. These disappeared after the outside of the membrane in the gas–liquid separator had been flushed with 5% HNO3.Similar behavior was not observed for arsenic hydrides, probably reflecting the differing chemistry of selenium. Future studies are needed to investigate the interaction between the selenium and the gas–liquid separator. Recoveries in Drinking Water Matrices of SeIV and SeVI To investigate the CE–OLR–HG–ICP-MS system, three drinking water samples were fortified with SeIV and SeVI at levels equal to about 10 times the detection limit.The average recoveries (xn ± sn21, n = 5) in the three waters for SeV were 86 ± 9, 90 ± 9 and 100 ± 4% and those for SeVI were 88 ± 11, 97 ± 4, and 86 ± 7%. These waters, obtained from diverse sources, are expected to have different conductivities. Because the CE peak shape is influenced by the difference in conductivity between the sample and the running buffer,6 the CE peak shapes differed slightly from sample to sample.This difference may cause the integration program to function less reliably, resulting in lower recoveries and higher RSDs for some samples. Speciation of Selenium and Arsenic CE–HG–ICP-MS has been investigated for the speciation of arsenic compounds.19 Fig. 5 is an electropherogram for the separation of four arsenic species and the two selenium species in distilled water using the CE–OLR–HG–ICP-MS system. The buffer was KHP–20 mm borate (pH 9.03), which was used in the speciation of arsenic compounds by CE–HG–ICP-MS.19 Selenium( iv) and AsV co-elute in Fig. 5, but the selectivity of the ICP-MS detector allows the resolution of the two species. Fig. 5 illustrates the differences in the sensitivities of arsenic and selenium. The concentration of selenium (250 ng ml21 of SeIV, 500 ng ml21 of SeVI) is higher than the concentration of the arsenic (10 ng ml21 of each species). The selenium electropherogram has been scaled by a factor of 1/5 for illustrative purposes.Thus, the response difference shown in Fig. 5 is about a factor of 1/20. Arsenic (m/z 75, 100% abundance) is 50% ionized in the plasma,18 whereas selenium (m/z 77, 7.63% abundance) is about 35% ionized. The lower ionization (50% for As, 35% for Se) and the lower abundance (100% As, 7.63% Se, m/z 77) put the expected sensitivity of Se at about 1/19 compared with As, which is about the same as the observed response difference of about 1/20 (Fig. 5). Conclusions We have demonstrated that CE can be interfaced on-line with HG–ICP-MS to allow for the reduction of SeVI. The sensitivity for SeVI is less than that for SeIV because of the compromise in conditions governing SeVI reduction and CE peak shapes. Because isotopic information is available from ICP-MS, future studies can examine the transformation of Se species while taking advantage of the large range of available CE techniques for speciated selenium compounds. Dual detection of CE speciated arsenic and selenium compounds has been demonstrated. Further investigation is necessary to verify the use of the CE–OLR–HG–ICP-MS system for the simultaneous determination of selenium, arsenic and other hydride forming species.This work was performed while M.L.M. held a National Research Council–US EPA Associateship with the National Exposure Research Laboratory in Cincinnati, OH. References 1 Reddy, K. J., Zhang, Z., Blaylock, M. H., and Vance, G.F., Environ. Sci. Technol., 1995, 29, 1754. 2 Schlegel, D., Mattusch, J., and Dittrich, K., J. Chromatogr. A., 1994, 683, 261. 3 Roehl, R., paper presented at the 1996 Winter Conference on Plasma Spectrochemistry, January 8–13, 1996. 4 Laborda, F., Chakraborti, D., Mir, J. M., and Castillo, J. R., J. Anal. At. Spectrom., 1993, 8, 643. 5 Pitts, L., Fisher, A., Worsfold, P., and Hill, S. J., J. Anal. At. Spectrom., 1995, 10, 519. Fig. 5 Simultaneous detection of Se and As.The selenium signal has been scaled graphically by a factor of 5. 1060 Analyst, October 1997, Vol. 1226 Kuhn, R., and Hoffstetter-Kuhn, S., Capillary Electrophoresis: Principles and Practice, Springer, Berlin, 1993. 7 Schlegel, D., Mattusch, J., and Wennrich, R., Fresenius’ J. Anal. Chem., 1996, 354, 535. 8 Li, K., and Li, S. F. Y., Analyst, 1995, 120, 361. 9 Liu, Y., Lopez-Avita, V., Zhu, J. J., Wiederin, D. R., and Beckert, W. F., Anal. Chem., 1995, 67, 2020. 10 Rayman, M. P., Abou-Shakra, F. R., and Ward, N. I., J. Anal. At. Spectrom., 1996, 11, 61. 11 Diaz-Alarcon, J. P., Navarro-Alarcon, M., Lopez-Garcia de la Serrana, H., Asensio-Drima, C., and Lopez-Martinez, M. C., J. Agric. Food Chem., 1996, 44, 2423. 12 Buckley, W. T., Buda, J. J., and Godfrey, D. V., Anal. Chem., 1992, 64, 724. 13 D’Ulivo, L., Sfetsios, I., and Zamboni, R., Spectrochim. Acta, Part B, 1993, 48, 387. 14 Bryce, D. W., Izquierdo, A., and Luque de Castro, M. D., J. Anal. At. Spectrom., 1995, 10, 1059; Analyst, 1995, 120, 2171. 15 Pitts, L., Worsfold, P. J., and Hill, S. J., Analyst, 1994, 119, 2785. 16 Gallus, S. M., and Heumann, K. G., J. Anal. At. Spectrom., 1996, 11, 887. 17 Thompson, M., and Walsh, J. N., A Handbook of Inductively Coupled Plasma Spectrometry, Blackie Glasgow, 1983. 18 Hill, S. J., Pitts, L., and Worsfold, P., J. Anal. At. Spectrom., 1995, 10, 409. 19 Magnuson, M. L., Creed, J. T., and Brockhoff, C. A., J. Anal. At. Spectrom., 1997, 12, 689. 20 Magnuson, M. L., Creed, J. T., and Brockhoff, C. A., J. Anal. At. Spectrom., 1996, 11, 893. 21 Creed, J. T., Chamberlain, I., Magnuson, M. L., Brockhoff, C. A., and Sivaganesan, M., J. Anal. At. Spectrom., 1996, 11, 504. 22 Story, W. C., Caruso, J. A., Heitkemper, D. T., and Perkins, L., J. Chromatogr. Sci., 1992, 30, 427, related personal communications. 23 Branch, S., Corns, W. T., Ebdon, L., Hill, S., and O’Neill, P., J. Anal. At. Spectrom., 1991, 6, 155. 24 Wang, X., Viczian, A. M., Lasztity, A., and Barnes, R. M., J. Anal. At. Spectrom., 1988, 3, 155. 25 Buckley, W. T., Budac, J. J., and Godfrey, D. V., Anal. Chem., 1992, 64, 724. 26 Brockmann, A., Nonn, C., and Golloch, A., J. Anal. At. Spectrom., 1993, 8, 397. 27 Tao, H., Miyazaki, A., and Bansho, K., Anal. Sci., 1990, 6, 195. 28 Cave, M. R., and Green, K. A., J. Anal. At. Spectrom., 1989, 4, 223. 29 Barnes, R. M., and Wang, X., J. Anal. At. Spectrom., 1988, 3, 1083. 30 Wang, X., and Barnes, R. M., J. Anal. At. Spectrom., 1988, 3, 1091. 31 Nakata, F., Sunahara, H., Hujimoto, H., Yamamoto, M., and Kumamaru, T., J. Anal. At. Spectrom., 1988, 3, 579. 32 Motomizu, S., Toei, J., Kuwaki, T., and Oshima, M. Anal. Chem., 1987, 59, 2930. 33 Pacey, G. E., Straka, M. R., and Gord, J. R., Anal. Chem., 1986, 58, 502. 34 Glaser, J. A., Foerst, D. L., McKee, G. D., Quave, S. A., and Budde, W. L., Environ. Sci. Technol., 1981, 15, 1426. Paper 7/03039H Received May 6, 1997 Accepted July 11, 1997 Analyst, October 1997, Vol. 122 1061
ISSN:0003-2654
DOI:10.1039/a703039h
出版商:RSC
年代:1997
数据来源: RSC
|
|